Commit d62125a

docs
1 parent 143df0c commit d62125a

File tree

1 file changed: +37 −0

  • docs/source/en/api/pipelines/flux.md


docs/source/en/api/pipelines/flux.md

Lines changed: 37 additions & 0 deletions
@@ -268,6 +268,43 @@ images = pipe(
images[0].save("flux-redux.png")
```

## Combining Flux Turbo LoRAs with Flux Control, Fill, and Redux

We can combine Flux Turbo LoRAs with Flux Control and other pipelines like Fill and Redux to enable few-step inference. The example below shows how to do this with the Flux Control LoRA for depth and a turbo LoRA from [`ByteDance/Hyper-SD`](https://hf.co/ByteDance/Hyper-SD).
```py
import torch
from diffusers import FluxControlPipeline
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download
from image_gen_aux import DepthPreprocessor

control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

# Load the depth Control LoRA and the Hyper-SD turbo LoRA, then activate both.
control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora", adapter_name="depth")
control_pipe.load_lora_weights(
    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"), adapter_name="hyper-sd"
)
control_pipe.set_adapters(["depth", "hyper-sd"], adapter_weights=[0.85, 0.125])
control_pipe.enable_model_cpu_offload()

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

# Compute the depth map that conditions the generation.
processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(control_image)[0].convert("RGB")

image = control_pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=8,  # few-step inference enabled by the 8-step turbo LoRA
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```
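In principle, the same turbo LoRA can be combined with the Fill and Redux pipelines as well. Below is a minimal, untested sketch for [`FluxFillPipeline`]: the input image and mask URLs are assumptions borrowed from the FLUX.1-Fill-dev examples, the adapter weight mirrors the Control example above, and how cleanly the dev turbo LoRA transfers to the Fill checkpoint should be verified on your own inputs.

```py
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download

fill_pipe = FluxFillPipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16)

# Load and activate only the turbo LoRA; Fill provides the inpainting conditioning itself.
fill_pipe.load_lora_weights(
    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"), adapter_name="hyper-sd"
)
fill_pipe.set_adapters(["hyper-sd"], adapter_weights=[0.125])
fill_pipe.enable_model_cpu_offload()

# Placeholder inpainting inputs -- substitute your own image and mask.
image = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup.png")
mask = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup_mask.png")

result = fill_pipe(
    prompt="a white paper cup",
    image=image,
    mask_image=mask,
    height=1632,
    width=1232,
    num_inference_steps=8,  # few steps thanks to the turbo LoRA
    guidance_scale=30.0,
    generator=torch.Generator().manual_seed(0),
).images[0]
result.save("flux-fill-turbo.png")
```

Redux follows the same pattern: load the turbo LoRA into the base pipeline that consumes the Redux prior outputs and lower `num_inference_steps` accordingly.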
## Running FP16 inference

Flux can generate high-quality images with FP16 (e.g., to accelerate inference on Turing/Volta GPUs), but it produces different outputs compared to FP32/BF16. The reason is that some activations in the text encoders have to be clipped when running in FP16, which affects the overall image. Forcing the text encoders to run with FP32 inference removes this output difference. See [here](https://github.com/huggingface/diffusers/pull/9097#issuecomment-2272292516) for details.

As a minimal sketch (not the upstream example), the pipeline can run in FP16 while both text encoders are kept in FP32; the checkpoint, prompt, and sampler settings below are placeholder assumptions.
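```py
import torch
from diffusers import FluxPipeline

# Load the pipeline in FP16, then keep both text encoders in FP32
# so their activations are not clipped.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.float16)
pipe.text_encoder.to(torch.float32)    # CLIP text encoder
pipe.text_encoder_2.to(torch.float32)  # T5 text encoder
pipe.enable_model_cpu_offload()

image = pipe(
    "A cat holding a sign that says hello world",
    guidance_scale=0.0,  # FLUX.1-schnell is guidance-distilled
    num_inference_steps=4,
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-fp16.png")
```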
