Commit ad9c45a

Merge branch 'main' into flux_emb

2 parents 542ffbf + 98d0cd5

3 files changed: +9 −3 lines

docs/source/en/using-diffusers/loading_adapters.md (+6, −2):

````diff
@@ -134,14 +134,16 @@
 - the LoRA weights don't have separate identifiers for the UNet and text encoder
 - the LoRA weights have separate identifiers for the UNet and text encoder
 
-But if you only need to load LoRA weights into the UNet, then you can use the [`~loaders.UNet2DConditionLoadersMixin.load_attn_procs`] method. Let's load the [jbilcke-hf/sdxl-cinematic-1](https://huggingface.co/jbilcke-hf/sdxl-cinematic-1) LoRA:
+To directly load (and save) a LoRA adapter at the *model-level*, use [`~PeftAdapterMixin.load_lora_adapter`], which builds and prepares the necessary model configuration for the adapter. Like [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`], [`PeftAdapterMixin.load_lora_adapter`] can load LoRAs for both the UNet and text encoder. For example, if you're loading a LoRA for the UNet, [`PeftAdapterMixin.load_lora_adapter`] ignores the keys for the text encoder.
+
+Use the `weight_name` parameter to specify the weight file and the `prefix` parameter to filter for the appropriate state dicts (`"unet"` in this case) to load.
 
 ```py
 from diffusers import AutoPipelineForText2Image
 import torch
 
 pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
-pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors")
+pipeline.unet.load_lora_adapter("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", prefix="unet")
 
 # use cnmt in the prompt to trigger the LoRA
 prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration"
@@ -153,6 +155,8 @@ image
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_attn_proc.png" />
 </div>
 
+Save an adapter with [`~PeftAdapterMixin.save_lora_adapter`].
+
 To unload the LoRA weights, use the [`~loaders.StableDiffusionLoraLoaderMixin.unload_lora_weights`] method to discard the LoRA weights and restore the model to its original weights:
 
 ```py
````
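The `prefix="unet"` behavior the doc change describes (text-encoder keys are ignored when loading a UNet adapter) can be sketched in plain Python. `filter_lora_state_dict` is a hypothetical helper for illustration, not diffusers' actual internals:

```python
# Hypothetical sketch of prefix-based state-dict filtering, as described in
# the doc change above. Not the real diffusers implementation.
def filter_lora_state_dict(state_dict, prefix):
    """Keep only keys under `prefix` and strip the prefix itself."""
    prefix = prefix + "."
    return {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)}

# A combined LoRA state dict with both UNet and text-encoder entries.
lora_sd = {
    "unet.down_blocks.0.lora_A.weight": "A",
    "unet.down_blocks.0.lora_B.weight": "B",
    "text_encoder.layers.0.lora_A.weight": "C",
}

# With prefix="unet", only the UNet keys survive; text-encoder keys are dropped.
unet_sd = filter_lora_state_dict(lora_sd, "unet")
print(sorted(unet_sd))
```

With `prefix="text_encoder"` the same helper would instead keep only the text-encoder entries.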

src/diffusers/models/model_loading_utils.py (+2, −0):

```diff
@@ -176,6 +176,8 @@ def load_model_dict_into_meta(
     hf_quantizer=None,
     keep_in_fp32_modules=None,
 ) -> List[str]:
+    if device is not None and not isinstance(device, (str, torch.device)):
+        raise ValueError(f"Expected device to have type `str` or `torch.device`, but got {type(device)=}.")
     if hf_quantizer is None:
         device = device or torch.device("cpu")
         dtype = dtype or torch.float32
```
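The guard added above fails fast when a caller passes something other than a string or device object (e.g. a bare `int`). A runnable sketch of the same pattern, where `FakeDevice` is an illustrative stand-in for `torch.device` so the snippet runs without torch, and `validate_device` is a hypothetical name:

```python
class FakeDevice:
    """Illustrative stand-in for torch.device (assumption: not the real class)."""
    def __init__(self, spec):
        self.spec = spec

def validate_device(device):
    # Mirrors the new early check: accept only None, a str, or a device object.
    if device is not None and not isinstance(device, (str, FakeDevice)):
        raise ValueError(f"Expected device to have type `str` or `torch.device`, but got {type(device)=}.")
    return device

validate_device("cuda:0")       # ok: a string spec
validate_device(FakeDevice(0))  # ok: a device object
try:
    validate_device(0)          # a bare int is now rejected up front
except ValueError as e:
    print(e)
```

Rejecting the bad type at the top of the function gives a clear error instead of a confusing failure deeper in the loading path.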

src/diffusers/models/modeling_utils.py (+1, −1):

```diff
@@ -836,7 +836,7 @@ def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.P
                 param_device = "cpu"
             # TODO (sayakpaul, SunMarc): remove this after model loading refactor
             elif is_quant_method_bnb:
-                param_device = torch.cuda.current_device()
+                param_device = torch.device(torch.cuda.current_device())
             state_dict = load_state_dict(model_file, variant=variant)
             model._convert_deprecated_attention_blocks(state_dict)
```
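For context on this one-line fix: `torch.cuda.current_device()` returns a bare `int`, while the loading path (including the new type check in `model_loading_utils.py`) expects a `str` or `torch.device`; `torch.device` accepts an int and treats it as a CUDA index. The sketch below uses an illustrative `Device` class in place of `torch.device` so it runs without torch:

```python
class Device:
    """Illustrative stand-in for torch.device (assumption: not the real class)."""
    def __init__(self, index):
        # torch.device accepts a bare int and interprets it as a CUDA index.
        self.type, self.index = "cuda", index
    def __repr__(self):
        return f"device(type='{self.type}', index={self.index})"

current_index = 0                     # what torch.cuda.current_device() returns: an int
param_device = Device(current_index)  # the fix: wrap the int in a device object
print(param_device)
```

Wrapping the index keeps `param_device` a consistent type for everything downstream that assumes a device object rather than an integer.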
