
Commit 9cba78e

style fix

1 parent: ef16bce

File tree

1 file changed (+2, -0 lines)

src/diffusers/loaders/lora_pipeline.py

Lines changed: 2 additions & 0 deletions
@@ -2756,6 +2756,7 @@ def load_lora_into_transformer(
         network_alphas,
         transformer,
         adapter_name=None,
+        metadata=None,
         _pipeline=None,
         low_cpu_mem_usage=False,
         hotswap: bool = False,
@@ -2777,6 +2778,7 @@ def load_lora_into_transformer(
             adapter_name (`str`, *optional*):
                 Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
                 `default_{i}` where i is the total number of adapters being loaded.
+            metadata: TODO
             low_cpu_mem_usage (`bool`, *optional*):
                 Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
                 weights.
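
For orientation, here is a hedged sketch of how the widened classmethod might be invoked after this change. Only the parameter names visible in the hunk above come from this commit; the host mixin, the leading `state_dict` positional, and the placeholder inputs are assumptions for illustration, and the `metadata` docstring is still a TODO in the diff itself.

```python
# Hedged usage sketch, not the documented API. Only the parameters shown in
# the diff above come from this commit; the mixin name, the leading
# state_dict argument, and the placeholder values are assumptions.
from diffusers.loaders.lora_pipeline import FluxLoraLoaderMixin  # assumed host mixin

state_dict = {}       # LoRA weights keyed by module name (placeholder)
network_alphas = {}   # per-module alpha scaling factors (placeholder)
transformer = ...     # the pipeline's transformer module (placeholder)

FluxLoraLoaderMixin.load_lora_into_transformer(
    state_dict,
    network_alphas,
    transformer,
    adapter_name="my_adapter",  # optional; defaults to "default_{i}"
    metadata=None,              # new in this commit; semantics still marked TODO
    low_cpu_mem_usage=False,
    hotswap=False,
)
```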

Comments (0)