
Commit 06011e6

lora metadata
1 parent 1c3137e commit 06011e6

1 file changed

docs/source/en/using-diffusers/other-formats.md

Lines changed: 18 additions & 0 deletions
@@ -203,6 +203,24 @@ pipeline = DiffusionPipeline.from_single_file(
)
```

If you're using a checkpoint trained with a Diffusers training script, metadata such as the LoRA configuration is automatically saved with it. When the file is loaded, the metadata is parsed to correctly configure the LoRA and to avoid missing or incorrect LoRA configs. Inspect the metadata of a safetensors file by clicking on the <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/safetensors/logo.png" alt="safetensors logo" height="15em" style="vertical-align: middle;"> logo next to the file on the Hub.
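You can also read the metadata programmatically from the safetensors header. The snippet below is a minimal sketch, assuming the LoRA file is available locally under Diffusers' default weight name `pytorch_lora_weights.safetensors`.

```py
from safetensors import safe_open

# safetensors stores free-form string metadata in the file header;
# this is where the LoRA config is kept
with safe_open("pytorch_lora_weights.safetensors", framework="pt") as f:
    print(f.metadata())
```
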
For LoRAs that aren't trained with Diffusers, save the metadata yourself with the `transformer_lora_adapter_metadata` and `text_encoder_lora_adapter_metadata` arguments in [`~loaders.FluxLoraLoaderMixin.save_lora_weights`]. This is only supported for safetensors files.
```py
import torch
from diffusers import FluxPipeline
from peft.utils import get_peft_model_state_dict

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("linoyts/yarn_art_Flux_LoRA", adapter_name="yarn_art")

# gather the loaded LoRA weights so they can be re-saved alongside the metadata
transformer_lora_layers = get_peft_model_state_dict(
    pipeline.transformer, adapter_name="yarn_art"
)

pipeline.save_lora_weights(
    save_directory="path/to/save",  # output directory for the safetensors file
    transformer_lora_layers=transformer_lora_layers,
    transformer_lora_adapter_metadata={"r": 16, "lora_alpha": 16},
    text_encoder_lora_adapter_metadata={"r": 8, "lora_alpha": 8},
)
```
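
Here, `r` is the LoRA rank and `lora_alpha` is the scaling factor from the adapter's PEFT config. Pass the values the LoRA was actually trained with so it is reconfigured correctly when the file is loaded.
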
### ckpt
Older model weights are commonly saved with Python's [pickle](https://docs.python.org/3/library/pickle.html) utility in a ckpt file.
