1 parent 1348463 commit ef16bce
src/diffusers/loaders/lora_pipeline.py
@@ -2163,6 +2163,7 @@ def load_lora_into_transformer(
         network_alphas,
         transformer,
         adapter_name=None,
+        metadata=None,
         _pipeline=None,
         low_cpu_mem_usage=False,
         hotswap: bool = False,
@@ -2184,6 +2185,7 @@ def load_lora_into_transformer(
             adapter_name (`str`, *optional*):
                 Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
                 `default_{i}` where i is the total number of adapters being loaded.
+            metadata: TODO
             low_cpu_mem_usage (`bool`, *optional*):
                 Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
                 weights.
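For context, a minimal, hypothetical sketch of the pattern this commit introduces: an optional `metadata` keyword threaded through a LoRA loading helper alongside the existing `adapter_name` and `low_cpu_mem_usage` arguments. This is not the diffusers implementation; the `state_dict` parameter name and the function body are assumptions for illustration only.

```python
# Hypothetical sketch -- not the diffusers implementation. It only illustrates
# accepting an optional `metadata` mapping in a LoRA loading helper.
from typing import Optional


def load_lora_into_transformer(
    state_dict: dict,                  # name assumed; not shown in the diff hunk
    network_alphas: Optional[dict],
    transformer: object,
    adapter_name: Optional[str] = None,
    metadata: Optional[dict] = None,   # new keyword added by this commit
    low_cpu_mem_usage: bool = False,
    hotswap: bool = False,
) -> None:
    # Mirror the docstring: fall back to `default_{i}`; a single adapter here.
    adapter_name = adapter_name or "default_0"
    # A real loader would consume `metadata` (e.g. rank, alpha) when building
    # the adapter config; this sketch only reports it.
    print(f"adapter={adapter_name!r} keys={len(state_dict)} metadata={metadata}")


if __name__ == "__main__":
    load_lora_into_transformer({}, None, object(), metadata={"r": 16, "lora_alpha": 16})
```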