1 parent ef16bce commit 9cba78e
src/diffusers/loaders/lora_pipeline.py
@@ -2756,6 +2756,7 @@ def load_lora_into_transformer(
         network_alphas,
         transformer,
         adapter_name=None,
+        metadata=None,
         _pipeline=None,
         low_cpu_mem_usage=False,
         hotswap: bool = False,
@@ -2777,6 +2778,7 @@ def load_lora_into_transformer(
             adapter_name (`str`, *optional*):
                 Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
                 `default_{i}` where i is the total number of adapters being loaded.
+            metadata: TODO
             low_cpu_mem_usage (`bool`, *optional*):
                 Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
                 weights.
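
For context, this commit adds a `metadata=None` keyword to `load_lora_into_transformer`; its docstring entry is still a TODO here. Below is a minimal, hypothetical sketch of how the updated classmethod might be called. It assumes this hunk sits in the Flux LoRA loader mixin (the `network_alphas` and `transformer` parameters suggest that), and the pipeline class, model id, checkpoint path, and adapter name are placeholders not taken from the diff; the commit itself only shows the new keyword being added.

```python
# Hypothetical usage sketch -- not taken from the commit.
import torch
from diffusers import FluxPipeline  # assumed pipeline; the diff does not name the mixin's pipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Parse a LoRA checkpoint into a diffusers-format state dict.
# The directory and weight_name are placeholders.
state_dict = pipe.lora_state_dict(
    "path/to/lora_dir", weight_name="pytorch_lora_weights.safetensors"
)

# load_lora_into_transformer is a classmethod on the loader mixin, so it can be
# reached through the pipeline instance.
pipe.load_lora_into_transformer(
    state_dict,
    network_alphas=None,          # no network alphas in this sketch
    transformer=pipe.transformer,
    adapter_name="my_lora",
    metadata=None,                # keyword added by this commit; its semantics are still TODO upstream
)
```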