1 parent 8ffe3be commit a4eaa42
src/diffusers/loaders/lora_pipeline.py
@@ -805,7 +805,7 @@ def load_lora_into_unet(
             adapter_name (`str`, *optional*):
                 Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
                 `default_{i}` where i is the total number of adapters being loaded.
-                Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.:
+                Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.
         """
         if not USE_PEFT_BACKEND:
             raise ValueError("PEFT backend is required for this method.")
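The docstring line being corrected describes a low-memory loading path: the pretrained LoRA weights are loaded directly, and the usual random initialization of the adapter layers is skipped. A minimal, library-free sketch of that idea (the function and parameter names here are hypothetical illustrations, not the diffusers API):

```python
import random

def init_adapter(size, pretrained=None, low_cpu_mem_usage=False):
    """Hypothetical sketch of the behavior the docstring describes.

    With low_cpu_mem_usage=True and pretrained weights available, the
    random initialization is skipped entirely; otherwise the weights are
    randomly initialized first and then overwritten by the pretrained
    values, which wastes time and memory.
    """
    if low_cpu_mem_usage and pretrained is not None:
        # Fast path: reuse the loaded weights, no random init at all.
        return list(pretrained)
    # Slow path: random init that is discarded when pretrained weights exist.
    weights = [random.gauss(0.0, 0.02) for _ in range(size)]
    if pretrained is not None:
        weights = list(pretrained)
    return weights

loaded = [0.1, 0.2, 0.3]
fast = init_adapter(3, pretrained=loaded, low_cpu_mem_usage=True)
slow = init_adapter(3, pretrained=loaded, low_cpu_mem_usage=False)
assert fast == slow == loaded  # same result, fast path skips the init
```

Both paths end with identical weights; the flag only controls whether the throwaway random initialization happens at all.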