
Conversation

@sayakpaul (Member) commented Feb 26, 2025

What does this PR do?

Fixes:
#10866

Cc: @nitinmukesh

Results (under the same seed)

LoRA | No LoRA
[side-by-side result images]
Code

```python
import torch
from diffusers import Lumina2Text2ImgPipeline

pipe = Lumina2Text2ImgPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
)

# Art Style of Hitoshi Ashinano https://civitai.com/models/1269546/art-style-of-hitoshi-ashinano-lumina-image-20
pipe.load_lora_weights(
    "newgenai79/lumina2_lora", weight_name="Art_Style_of_Hitoshi_Ashinano.safetensors"
)
pipe.enable_model_cpu_offload()

prompt = "Hitoshi Ashinano style A young girl with vibrant green hair and large purple eyes peeks out from behind a white wooden door. She is wearing a white shirt and has a curious expression on her face. The background shows a blue sky with a few clouds, and there's a white fence visible. Green leaves hang down from the top left corner, and a small white circle can be seen in the sky. The scene captures a moment of innocent curiosity and wonder."
image = pipe(
    prompt,
    negative_prompt="blurry, ugly, bad, deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, cropped, out of frame, worst quality, low quality, jpeg artifacts, fused fingers, morbid, mutilated, extra fingers, mutated hands, bad anatomy, bad proportion, extra limbs",
    guidance_scale=6,
    num_inference_steps=35,
    generator=torch.manual_seed(0),
).images[0]
image.save("lumina2_lora.png")
```

@sayakpaul sayakpaul added lora roadmap Add to current release roadmap labels Feb 26, 2025
@sayakpaul sayakpaul requested a review from hlky February 26, 2025 13:19
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@sayakpaul (Member, Author) commented:

For https://github.com/huggingface/diffusers/actions/runs/13544909753/job/37854074489?pr=10909, I have opened #10911

For the second and third failures, I pinged @DN6 internally. I don't think they are triggered by this PR's changes.

The fourth one seems Hub-related.

Any objections to going ahead with merging?

@nitinmukesh

Awesome, thank you @sayakpaul .

Looking forward to getting it merged.

@sayakpaul (Member, Author) commented:

@DN6 the failing tests seem to be unrelated to this PR?

@sayakpaul sayakpaul requested a review from DN6 February 27, 2025 11:53
```python
# conversion.
non_diffusers = any(k.startswith("diffusion_model.") for k in state_dict)
if non_diffusers:
    state_dict = _convert_non_diffusers_lumina2_lora_to_diffusers(state_dict)
```
A collaborator commented:

Is this prefix specific to Lumina? Should we always just remove it?

@sayakpaul (Member, Author) commented Mar 3, 2025:

It is not specific to Lumina2 but to external trainer libraries. In all past iterations where we have supported non-diffusers LoRA checkpoints, we have removed it because it's not in the diffusers-compatible format.

We are not removing the prefix here. We are using it to detect whether the state dict is non-diffusers. If so, we convert the state dict.

This is how the rest of the non-diffusers checkpoints across different models have been supported in diffusers.
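For illustration, the detect-and-convert flow discussed above can be sketched as follows. This is a minimal sketch, assuming the converter remaps `diffusion_model.`-prefixed keys into a `transformer.` namespace; the helper name and the exact key mapping here are illustrative, not the actual logic of `_convert_non_diffusers_lumina2_lora_to_diffusers`.

```python
def convert_non_diffusers_lumina2_lora(state_dict):
    """Remap 'diffusion_model.'-prefixed keys into an assumed
    diffusers-style 'transformer.' namespace (illustrative only)."""
    prefix = "diffusion_model."
    converted = {}
    for key, value in state_dict.items():
        if key.startswith(prefix):
            key = "transformer." + key[len(prefix):]
        converted[key] = value
    return converted


# Detection mirrors the check in the PR: any key carrying the prefix
# marks the checkpoint as non-diffusers and triggers conversion.
state_dict = {"diffusion_model.layers.0.attn.lora_A.weight": 0}
if any(k.startswith("diffusion_model.") for k in state_dict):
    state_dict = convert_non_diffusers_lumina2_lora(state_dict)
```

Checkpoints already in the diffusers format contain no `diffusion_model.` keys, so they pass through untouched.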

@sayakpaul sayakpaul merged commit 97fda1b into main Mar 4, 2025
29 of 30 checks passed
@github-project-automation github-project-automation bot moved this from In Progress to Done in Diffusers Roadmap 0.36 Mar 4, 2025
@sayakpaul sayakpaul deleted the non-diffusers-lumina2-lora branch March 4, 2025 09:11
