Commit 4b370d2 ("docs.")
1 parent 7185138 commit 4b370d2

File tree: 2 files changed, +21 −1 lines changed


docs/source/en/api/pipelines/flux.md (4 additions, 0 deletions)

````diff
@@ -305,6 +305,10 @@ image = control_pipe(
 image.save("output.png")
 ```
 
+## Note about `unload_lora_weights()` when using Flux LoRAs
+
+When unloading the Control LoRA weights, call `pipe.unload_lora_weights(reset_to_overwritten_params=True)` to reset the `pipe.transformer` completely back to its original form. The resultant pipeline can then be used with methods like [`DiffusionPipeline.from_pipe`]. More details about this argument are available in [this PR](https://github.com/huggingface/diffusers/pull/10397).
+
 ## Running FP16 inference
 
 Flux can generate high-quality images with FP16 (i.e. to accelerate inference on Turing/Volta GPUs) but produces different outputs compared to FP32/BF16. The issue is that some activations in the text encoders have to be clipped when running in FP16, which affects the overall image. Forcing text encoders to run with FP32 inference thus removes this output difference. See [here](https://github.com/huggingface/diffusers/pull/9097#issuecomment-2272292516) for details.
````
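The behavior the new doc section describes, snapshotting parameters that a LoRA load overwrites so they can be restored later, can be sketched with a toy class. This is an illustrative assumption about the mechanism, not diffusers code; `ToyModule`, `load_lora`, and `unload_lora` are hypothetical names.

```python
# Toy sketch (NOT diffusers code) of the idea behind
# `unload_lora_weights(reset_to_overwritten_params=True)`: remember any
# parameters a LoRA load overwrites, so unloading can restore the originals.


class ToyModule:
    def __init__(self):
        self.params = {"weight": [1.0, 2.0]}
        self._overwritten_params = None

    def load_lora(self, new_params):
        # Snapshot the current values before overwriting them.
        self._overwritten_params = dict(self.params)
        self.params.update(new_params)

    def unload_lora(self, reset_to_overwritten_params=False):
        # Without the flag, the overwritten values stay in place;
        # with it, the module is reset to its pre-LoRA state.
        if reset_to_overwritten_params and self._overwritten_params is not None:
            self.params = self._overwritten_params
            self._overwritten_params = None


m = ToyModule()
m.load_lora({"weight": [9.0, 9.0]})
m.unload_lora(reset_to_overwritten_params=True)
print(m.params["weight"])  # [1.0, 2.0]
```

With `reset_to_overwritten_params=False` the snapshot is kept and the overwritten values remain, which mirrors why the doc recommends passing `True` before reusing the pipeline with [`DiffusionPipeline.from_pipe`].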

src/diffusers/loaders/lora_pipeline.py (17 additions, 1 deletion)

````diff
@@ -2277,8 +2277,24 @@ def unfuse_lora(self, components: List[str] = ["transformer", "text_encoder"], *
 
         super().unfuse_lora(components=components)
 
-    # We override this here account for `_transformer_norm_layers`.
+    # We override this here account for `_transformer_norm_layers` and `_overwritten_params`.
     def unload_lora_weights(self, reset_to_overwritten_params=False):
+        """
+        Unloads the LoRA parameters.
+
+        Args:
+            reset_to_overwritten_params (`bool`, defaults to `False`): Whether to reset the LoRA-loaded modules
+                to their original params. Refer to the [Flux
+                documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) to learn more.
+
+        Examples:
+
+        ```python
+        >>> # Assuming `pipeline` is already loaded with the LoRA parameters.
+        >>> pipeline.unload_lora_weights()
+        >>> ...
+        ```
+        """
         super().unload_lora_weights()
 
         transformer = getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer
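The last context line of the hunk uses a lookup pattern worth noting: prefer a direct `transformer` attribute when the pipeline has one, otherwise resolve it through the name stored in `transformer_name`. A minimal standalone illustration of that pattern, using a hypothetical `Pipe` class rather than a real diffusers pipeline:

```python
# Minimal sketch of the attribute-resolution pattern from the hunk above.
# `Pipe` is a hypothetical stand-in for a diffusers pipeline class.


class Pipe:
    # Fallback attribute name, consulted only when `transformer` is absent.
    transformer_name = "transformer"

    def __init__(self, transformer):
        self.transformer = transformer


p = Pipe(transformer="flux-transformer")
transformer = getattr(p, p.transformer_name) if not hasattr(p, "transformer") else p.transformer
print(transformer)  # flux-transformer
```

Because `Pipe` instances always have a `transformer` attribute here, the `hasattr` branch short-circuits to the direct access; the `getattr` path only matters for pipelines that store the module under a different attribute name.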
