| **Documentation** | **What can I learn?** |
|---|---|
| [Tutorial](https://huggingface.co/docs/diffusers/tutorials/tutorial_overview) | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model. |
| [Loading](https://huggingface.co/docs/diffusers/using-diffusers/loading) | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. |
| [Pipelines for inference](https://huggingface.co/docs/diffusers/using-diffusers/overview_techniques) | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. |
| [Optimization](https://huggingface.co/docs/diffusers/optimization/fp16) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
| [Training](https://huggingface.co/docs/diffusers/training/overview) | Guides for how to train a diffusion model for different tasks with different training techniques. |
`docs/source/en/using-diffusers/loading_adapters.md`
The [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] method loads LoRA weights into both the UNet and text encoder. It is the preferred way to load LoRAs because it can handle cases where:

- the LoRA weights don't have separate identifiers for the UNet and text encoder
- the LoRA weights have separate identifiers for the UNet and text encoder
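For example, a minimal sketch of pipeline-level loading (the `weight_name` file is an assumption for illustration; check the repository for the actual `*.safetensors` filename):

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights dispatches the UNet and text encoder keys automatically,
# whether or not the weights carry separate identifiers for each component.
# The weight file name below is assumed; check the repo for the real one.
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1",
    weight_name="pytorch_lora_weights.safetensors",
)
```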
To directly load (and save) a LoRA adapter at the *model level*, use [`~PeftAdapterMixin.load_lora_adapter`], which builds and prepares the necessary model configuration for the adapter. Like [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`], [`~PeftAdapterMixin.load_lora_adapter`] can load LoRAs for both the UNet and text encoder. For example, if you're loading a LoRA for the UNet, [`~PeftAdapterMixin.load_lora_adapter`] ignores the keys for the text encoder.
Use the `weight_name` parameter to specify the weight file and the `prefix` parameter to filter for the appropriate state dicts (`"unet"` in this case) to load.
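A minimal sketch of model-level loading, assuming the same repository as above (the `weight_name` file is illustrative):

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# prefix="unet" keeps only the UNet entries of the state dict, so any text
# encoder keys in the file are ignored. The weight file name is assumed;
# check the repository for the actual *.safetensors file.
pipeline.unet.load_lora_adapter(
    "jbilcke-hf/sdxl-cinematic-1",
    weight_name="pytorch_lora_weights.safetensors",
    prefix="unet",
)
```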
Save an adapter with [`~PeftAdapterMixin.save_lora_adapter`].
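Continuing from the snippet above, a sketch of saving the adapter back to disk (the directory path is an arbitrary example):

```py
# Write the adapter weights to a local directory so they can be reloaded
# later with `load_lora_adapter`. The path is illustrative.
pipeline.unet.save_lora_adapter("path/to/lora-adapter")
```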
To unload the LoRA weights, use the [`~loaders.StableDiffusionLoraLoaderMixin.unload_lora_weights`] method to discard them and restore the model to its original weights:
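```py
# Continuing from the pipeline above: discard the LoRA weights and restore
# the model's original weights.
pipeline.unload_lora_weights()
```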