<figcaption class="mt-2 text-sm text-center text-gray-500">IP-Adapter examples with prompt "wearing sunglasses"</figcaption>
</div>

## Optimize
Flux is a very large model and requires ~50GB of RAM. Enable some of the optimizations below to lower the memory requirements.
### Group offloading
[Group offloading](../../optimization/memory#group-offloading) saves memory by offloading groups of internal layers rather than the whole model or all of its weights at once. Apply [`~hooks.apply_group_offloading`] to a model and optionally specify the `offload_type`. Setting it to `leaf_level` offloads the lowest leaf-level parameters to the CPU instead of offloading at the module level.
```py
import torch
from diffusers import FluxPipeline
from diffusers.hooks import apply_group_offloading

model_id = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16
pipe = FluxPipeline.from_pretrained(
    model_id,
    torch_dtype=dtype,
)

apply_group_offloading(
    pipe.transformer,
    offload_type="leaf_level",
    offload_device=torch.device("cpu"),
    onload_device=torch.device("cuda"),
)
apply_group_offloading(
    pipe.text_encoder,
    offload_device=torch.device("cpu"),
    onload_device=torch.device("cuda"),
    offload_type="leaf_level"
)
apply_group_offloading(
    pipe.text_encoder_2,
    offload_device=torch.device("cpu"),
    onload_device=torch.device("cuda"),
    offload_type="leaf_level"
)
apply_group_offloading(
    pipe.vae,
    offload_device=torch.device("cpu"),
    onload_device=torch.device("cuda"),
    offload_type="leaf_level"
)

prompt = "A cat wearing sunglasses and working as a lifeguard at pool."

generator = torch.Generator().manual_seed(181201)
image = pipe(
    prompt,
    width=576,
    height=1024,
    num_inference_steps=30,
    generator=generator
).images[0]
image
```
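Leaf-level offloading keeps only the parameters of the currently executing layers on the GPU, which maximizes memory savings but slows inference because of the extra CPU-GPU transfers. The [group offloading](../../optimization/memory#group-offloading) guide describes coarser block-level grouping and other options for trading memory savings against speed.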
### Running FP16 inference
Flux can generate high-quality images with FP16 (for example, to accelerate inference on Turing/Volta GPUs) but produces different outputs compared to FP32/BF16. The issue is that some activations in the text encoders have to be clipped when running in FP16, which affects the overall image. Forcing the text encoders to run in FP32 removes this output difference. See [here](https://github.com/huggingface/diffusers/pull/9097#issuecomment-2272292516) for details.
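The snippet below is a rough sketch of this pattern rather than a definitive recipe: the checkpoint, prompt, and generation settings are reused from the example above for illustration, and the explicit [`~FluxPipeline.encode_prompt`] call is one way to compute the embeddings in FP32 before handing them to the FP16 transformer.

```py
import torch
from diffusers import FluxPipeline

# Load the pipeline in FP16, but keep both text encoders in FP32
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
pipe.text_encoder.to(torch.float32)
pipe.text_encoder_2.to(torch.float32)

# Encode the prompt in FP32 so the text encoder activations are not clipped,
# then cast the resulting embeddings to FP16 for the transformer
prompt = "A cat wearing sunglasses and working as a lifeguard at pool."
prompt_embeds, pooled_prompt_embeds, _ = pipe.encode_prompt(
    prompt=prompt, prompt_2=prompt
)

out = pipe(
    prompt_embeds=prompt_embeds.to(torch.float16),
    pooled_prompt_embeds=pooled_prompt_embeds.to(torch.float16),
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(181201),
).images[0]
out.save("image.png")
```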
### Quantization
Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on image quality depending on the model.
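The sketch below shows one possible setup, loading the transformer in 4-bit precision with the bitsandbytes backend. It assumes the `bitsandbytes` library is installed; the checkpoint and 4-bit settings are illustrative and can be adjusted.

```py
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# 4-bit NF4 quantization for the transformer, the largest component of the pipeline
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe(
    "A cat wearing sunglasses and working as a lifeguard at pool.",
    num_inference_steps=30,
).images[0]
image.save("flux_quantized.png")
```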