Status: Open
Labels: bug (Something isn't working)
Describe the bug
All of a sudden I'm getting errors when using a LoRA with the Z-Image pipeline. It's the same pipeline I've been using, with the same LoRAs...
Reproduction
from diffusers import FlowMatchEulerDiscreteScheduler, ZImagePipeline
import torch

model_name = "dimitribarbot/Z-Image-Turbo-BF16"
lora_repo = "cptsl/MysticXXX2"
lora_file = "Mystic-XXX-ZIT-v2.safetensors"
lora_strength = 1.0

prompt = "a beautiful black and white landscape"
width = 1024
height = 1024
steps = 9
seed = 420
output_file = "output.png"

device = "cpu"
if torch.backends.mps.is_available():
    device = "mps"
elif torch.cuda.is_available():
    device = "cuda"

pipeline = ZImagePipeline.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=False,
)
pipeline.load_lora_weights(
    lora_repo,
    weight_name=lora_file,
    adapter_name="lora1",
)
pipeline.set_adapters(["lora1"], adapter_weights=[lora_strength])
pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipeline.scheduler.config
)
pipeline.enable_model_cpu_offload()

generator = torch.Generator(device=pipeline.device).manual_seed(seed)
output = pipeline(
    prompt=prompt,
    num_inference_steps=steps,
    width=width,
    height=height,
    guidance_scale=0,  # Turbo model: keep at 0
    generator=generator,
    num_images_per_prompt=1,
)
output.images[0].save(output_file, format="PNG")
print("Saved:", output_file)
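The KeyError in the logs below comes from the converter popping per-module `.alpha` entries from the LoRA state dict. A quick, stdlib-only way to check which keys the `.safetensors` file actually contains is to read its JSON header directly (a sketch based on the published safetensors file format; it assumes you have a local copy of the LoRA file):

```python
import json
import struct

def safetensors_keys(path):
    """List tensor names stored in a .safetensors file (stdlib only).

    Per the safetensors format, the file begins with an 8-byte
    little-endian header length, followed by a JSON header mapping
    tensor names to their dtype/shape/offsets.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return sorted(k for k in header if k != "__metadata__")

def alpha_keys(path):
    # The failing converter pops keys ending in ".alpha"
    return [k for k in safetensors_keys(path) if k.endswith(".alpha")]
```

Running `alpha_keys("Mystic-XXX-ZIT-v2.safetensors")` and finding an empty list, or a list without `layers.0.adaLN_modulation.0.alpha`, would confirm the mismatch between what the file stores and what the converter expects.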
Logs
pipeline.load_lora_weights(
  File ".venv/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 5506, in load_lora_weights
    state_dict, metadata = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 5475, in lora_state_dict
    state_dict = _convert_non_diffusers_z_image_lora_to_diffusers(state_dict)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.12/site-packages/diffusers/loaders/lora_conversion_utils.py", line 2572, in _convert_non_diffusers_z_image_lora_to_diffusers
    scale_down, scale_up = get_alpha_scales(down_weight, alpha_key)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.12/site-packages/diffusers/loaders/lora_conversion_utils.py", line 2541, in get_alpha_scales
    alpha = state_dict.pop(alpha_key).item()
            ^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'layers.0.adaLN_modulation.0.alpha'

System Info
Output of `diffusers-cli env` below. I tried it on 0.37 and also on the main dev branch.
- 🤗 Diffusers version: 0.38.0.dev0
- Platform: Linux-6.1.0-43-amd64-x86_64-with-glibc2.36
- Running on Google Colab?: No
- Python version: 3.12.3
- PyTorch version (GPU?): 2.9.1+cu128 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 1.6.0
- Transformers version: 5.3.0
- Accelerate version: 1.12.0
- PEFT version: 0.18.1.dev0
- Bitsandbytes version: 0.48.2
- Safetensors version: 0.7.0
- xFormers version: not installed
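For context on the traceback above: `get_alpha_scales` pops a per-module alpha value and derives a scale from it, which is where the KeyError fires when the key is absent. A minimal, hypothetical sketch of that kind of computation with a tolerant fallback (this is my illustration, not diffusers' actual code; the `alpha == rank` default, which yields a net scale of 1.0, is a common LoRA convention):

```python
import math

def get_alpha_scales_tolerant(state_dict, alpha_key, rank):
    """Hypothetical tolerant variant of an alpha-scale computation.

    Illustration only, not diffusers' implementation: when the alpha
    entry is missing, assume alpha == rank so the net scale is 1.0
    instead of raising KeyError.
    """
    alpha = state_dict.pop(alpha_key, None)
    if alpha is None:
        alpha = rank  # no stored alpha: treat the scale as 1.0
    scale = alpha / rank
    # split the scale evenly across the down and up projection weights
    root = math.sqrt(scale)
    return root, root
```

A fix along these lines (or skipping the alpha rescaling entirely when no `.alpha` keys are present) would let LoRA files without per-module alphas load again.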
Who can help?
No response