
Commit 716f446

Extend docstrings
1 parent 204f521 commit 716f446

File tree: 1 file changed (+39, −5 lines changed)


src/diffusers/loaders/lora_pipeline.py

Lines changed: 39 additions & 5 deletions
@@ -132,7 +132,6 @@ def load_lora_weights(
     https://huggingface.co/docs/peft/main/en/package_reference/hotswap
 kwargs (`dict`, *optional*):
     See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`].
-
 """
 if not USE_PEFT_BACKEND:
     raise ValueError("PEFT backend is required for this method.")

@@ -1200,9 +1199,11 @@ def load_lora_weights(
 adapter weights and replace them with the weights of the new adapter. This can be faster and more
 memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
 torch.compile, loading the new adapter does not require recompilation of the model. When using
-hotswapping, the passed `adapter_name` should be the name of an already loaded adapter. If the new
-adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need to call an
-additional method before loading the adapter:
+hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.
+
+If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
+to call an additional method before loading the adapter:
 ```py
 pipeline = ... # load diffusers pipeline
 max_rank = ... # the highest rank among all LoRAs that you want to load

@@ -1211,6 +1212,7 @@ def load_lora_weights(
 pipeline.load_lora_weights(file_name)
 # optionally compile the model now
 ```
+
 Note that hotswapping adapters of the text encoder is not yet supported. There are some further
 limitations to this technique, which are documented here:
 https://huggingface.co/docs/peft/main/en/package_reference/hotswap

@@ -1295,7 +1297,23 @@ def load_lora_into_transformer(
 memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
 torch.compile, loading the new adapter does not require recompilation of the model. When using
 hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.
-"""
+
+If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
+to call an additional method before loading the adapter:
+
+```py
+pipeline = ... # load diffusers pipeline
+max_rank = ... # the highest rank among all LoRAs that you want to load
+# call *before* compiling and loading the LoRA adapter
+pipeline.enable_lora_hotswap(target_rank=max_rank)
+pipeline.load_lora_weights(file_name)
+# optionally compile the model now
+```
+
+Note that hotswapping adapters of the text encoder is not yet supported. There are some further
+limitations to this technique, which are documented here:
+https://huggingface.co/docs/peft/main/en/package_reference/hotswap
+"""
 if low_cpu_mem_usage and is_peft_version("<", "0.13.0"):
     raise ValueError(
         "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`."

@@ -1841,6 +1859,22 @@ def load_lora_into_transformer(
 memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
 torch.compile, loading the new adapter does not require recompilation of the model. When using
 hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.
+
+If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
+to call an additional method before loading the adapter:
+
+```py
+pipeline = ... # load diffusers pipeline
+max_rank = ... # the highest rank among all LoRAs that you want to load
+# call *before* compiling and loading the LoRA adapter
+pipeline.enable_lora_hotswap(target_rank=max_rank)
+pipeline.load_lora_weights(file_name)
+# optionally compile the model now
+```
+
+Note that hotswapping adapters of the text encoder is not yet supported. There are some further
+limitations to this technique, which are documented here:
+https://huggingface.co/docs/peft/main/en/package_reference/hotswap
 """
 if low_cpu_mem_usage and not is_peft_version(">=", "0.13.1"):
     raise ValueError(
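To put the documented flow end to end, here is a minimal sketch of the hotswapping workflow these docstrings describe, assuming a transformer-based pipeline; the model id, LoRA file names, adapter name, and `max_rank` value below are illustrative placeholders, not values taken from this commit.

```py
# Minimal sketch of the hotswap flow documented above; model id, LoRA files,
# adapter name, and max_rank are placeholder assumptions.
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "some/diffusers-model",  # placeholder model id
    torch_dtype=torch.float16,
).to("cuda")

# If the LoRAs to be swapped differ in rank and/or alpha, raise the target
# rank *before* loading and compiling, as the docstring explains.
max_rank = 16  # assumed highest rank among the LoRAs that will be loaded
pipeline.enable_lora_hotswap(target_rank=max_rank)

# Load the first adapter, then (optionally) compile once.
pipeline.load_lora_weights("first_lora.safetensors", adapter_name="default")
pipeline.transformer = torch.compile(pipeline.transformer)

# Hotswap the second adapter into the already loaded adapter name; the LoRA
# weights are replaced in place, so no recompilation is triggered.
pipeline.load_lora_weights(
    "second_lora.safetensors", adapter_name="default", hotswap=True
)
```

Which component to compile (`transformer` vs. `unet`) and whether compilation is worthwhile depends on the pipeline; the PEFT hotswap page linked in the diff documents the remaining limitations, including the lack of text encoder support.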
