Commit ef16bce

update

1 parent 1348463 commit ef16bce

File tree

1 file changed (+2 −0 lines)

src/diffusers/loaders/lora_pipeline.py

Lines changed: 2 additions & 0 deletions
@@ -2163,6 +2163,7 @@ def load_lora_into_transformer(
         network_alphas,
         transformer,
         adapter_name=None,
+        metadata=None,
         _pipeline=None,
         low_cpu_mem_usage=False,
         hotswap: bool = False,
@@ -2184,6 +2185,7 @@ def load_lora_into_transformer(
             adapter_name (`str`, *optional*):
                 Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
                 `default_{i}` where i is the total number of adapters being loaded.
+            metadata: TODO
             low_cpu_mem_usage (`bool`, *optional*):
                 Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
                 weights.
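
For context, below is a minimal, hypothetical sketch of how a caller might supply the new `metadata` argument. Because the docstring added in this commit still reads TODO, the expected shape of `metadata` is an assumption here: the key/value header metadata stored in a safetensors LoRA checkpoint. The checkpoint path, the adapter name, the `transformer` variable, and the `SomeLoraLoaderMixin` placeholder class are all illustrative and not taken from the commit.

# A sketch only: assumes `metadata` accepts the safetensors header dict; the
# commit's docstring for `metadata` is still "TODO", so this is unconfirmed.
from safetensors import safe_open
from safetensors.torch import load_file

lora_path = "pytorch_lora_weights.safetensors"  # hypothetical local checkpoint

state_dict = load_file(lora_path)  # the LoRA weights themselves
with safe_open(lora_path, framework="pt") as f:
    lora_metadata = f.metadata()  # header metadata dict; may be None

# `SomeLoraLoaderMixin` stands in for whichever mixin in lora_pipeline.py
# defines load_lora_into_transformer at the lines shown above; `transformer`
# would be the target transformer model (e.g. pipe.transformer).
SomeLoraLoaderMixin.load_lora_into_transformer(
    state_dict,
    network_alphas=None,
    transformer=transformer,
    adapter_name="my_lora",
    metadata=lora_metadata,
)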

0 commit comments