Commit a4eaa42

fix-copies
1 parent 8ffe3be commit a4eaa42

File tree

1 file changed: +1 −1 lines changed

src/diffusers/loaders/lora_pipeline.py

Lines changed: 1 addition & 1 deletion
@@ -805,7 +805,7 @@ def load_lora_into_unet(
             adapter_name (`str`, *optional*):
                 Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
                 `default_{i}` where i is the total number of adapters being loaded.
-            Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.:
+            Speed up model loading only loading the pretrained LoRA weights and not initializing the random weights.
         """
         if not USE_PEFT_BACKEND:
            raise ValueError("PEFT backend is required for this method.")
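The guard at the end of the hunk is a common pattern: the loader refuses to run unless the PEFT backend is available. The following is a minimal, self-contained sketch of that pattern, assuming hypothetical stand-ins for `USE_PEFT_BACKEND` and `load_lora_into_unet`; it mirrors the names in the diff but is not the diffusers implementation.

```python
# Assumption: stand-in for the real diffusers flag, which reflects whether
# the `peft` package is importable. Hard-coded False here for illustration.
USE_PEFT_BACKEND = False


def load_lora_into_unet(state_dict, unet, adapter_name=None):
    """Sketch of a LoRA loader entry point (hypothetical, not the library code).

    adapter_name (`str`, *optional*):
        Adapter name to be used for referencing the loaded adapter model.
        If not specified, it will use `default_{i}` where i is the total
        number of adapters being loaded.
    """
    # Fail fast with a clear message instead of crashing deep inside the
    # loading logic when the optional dependency is missing.
    if not USE_PEFT_BACKEND:
        raise ValueError("PEFT backend is required for this method.")
    # ... actual LoRA weight injection would happen here ...
```

Calling the sketch with the flag unset raises `ValueError` immediately, which is the behavior the diffed lines preserve.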
