
Commit 730e3b6

Fix typo in LoRA
Fix formatting in using_peft_for_inference.md
1 parent 22b229b · commit 730e3b6

File tree

1 file changed: +2 −2 lines

docs/source/en/tutorials/using_peft_for_inference.md

Lines changed: 2 additions & 2 deletions
@@ -94,7 +94,7 @@ pipeline = AutoPipelineForText2Image.from_pretrained(
 pipeline.unet.load_lora_adapter(
     "jbilcke-hf/sdxl-cinematic-1",
     weight_name="pytorch_lora_weights.safetensors",
-    adapter_name="cinematic"
+    adapter_name="cinematic",
     prefix="unet"
 )
 # use cnmt in the prompt to trigger the LoRA
@@ -688,4 +688,4 @@ Browse the [LoRA Studio](https://lorastudio.co/models) for different LoRAs to us
 
 You can find additional LoRAs in the [FLUX LoRA the Explorer](https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer) and [LoRA the Explorer](https://huggingface.co/spaces/multimodalart/LoraTheExplorer) Spaces.
 
-Check out the [Fast LoRA inference for Flux with Diffusers and PEFT](https://huggingface.co/blog/lora-fast) blog post to learn how to optimize LoRA inference with methods like FlashAttention-3 and fp8 quantization.
+Check out the [Fast LoRA inference for Flux with Diffusers and PEFT](https://huggingface.co/blog/lora-fast) blog post to learn how to optimize LoRA inference with methods like FlashAttention-3 and fp8 quantization.
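
For context, a minimal sketch of how the corrected snippet reads after this commit. The LoRA repo, weight file, adapter name, prefix, and trigger word come from the patched doc; the base checkpoint, torch_dtype, device, and prompt are illustrative assumptions, not taken from this diff.

import torch
from diffusers import AutoPipelineForText2Image

# Base SDXL pipeline (checkpoint, dtype, and device are assumptions for this sketch)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA directly into the UNet under the adapter name "cinematic";
# the trailing comma after adapter_name is what this commit adds
pipeline.unet.load_lora_adapter(
    "jbilcke-hf/sdxl-cinematic-1",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="cinematic",
    prefix="unet",
)

# use cnmt in the prompt to trigger the LoRA (the prompt itself is illustrative)
image = pipeline("cnmt, a cinematic film still of a lighthouse at dusk").images[0]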
