
Conversation

@Trgtuan10 (Contributor) commented Aug 23, 2025

What does this PR do?

This PR introduces support for the Qwen-Image-Edit model in Inpainting tasks, expanding the model’s creative capabilities and integration within the Diffusers library.

Example code

import torch
from diffusers import QwenImageEditInpaintPipeline
from diffusers.utils import load_image
import os

os.environ["HF_ENABLE_PARALLEL_LOADING"] = "YES"  # optional: speeds up loading of sharded checkpoints

pipe = QwenImageEditInpaintPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "change the hat to red"
negative_prompt = " "
source = load_image("https://github.com/Trgtuan10/Image_storage/blob/main/cute_cat.png?raw=true")
mask = load_image("https://github.com/Trgtuan10/Image_storage/blob/main/mask_cat.png?raw=true")

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=source,
    mask_image=mask,
    strength=1.0,  # 1.0 regenerates the masked region from scratch
    num_inference_steps=35,
    true_cfg_scale=4.0,
    generator=torch.Generator(device="cuda").manual_seed(422)
).images[0]
image.save("qwen_inpainting.png")

Performance comparison of QwenImage-Edit Inpaint

[Image grid: Init image | Mask | QwenImage Inpaint | QwenImage-Edit | QwenImage-Edit Inpaint]

Who can review?

cc @a-r-r-o-w @sayakpaul

@sayakpaul sayakpaul requested review from asomoza and yiyixuxu August 23, 2025 15:57
@sayakpaul (Member)

Cc: @naykun as well.

@naykun (Contributor) commented Aug 24, 2025

Hi @Trgtuan10 , that would be an amazing feature! It seems to preserve details better than the current QwenImageInpaintPipeline—great improvement!

I’ll dive in and run some more extensive tests on this PR shortly. Thanks so much for your contribution, and for helping push this forward! 🙌

@TuanNT-ZenAI (Contributor)

Thanks @naykun! Please feel free to test it out and let me know if you run into any issues.

@TuanNT-ZenAI (Contributor)

Hi @naykun @sayakpaul, have you tested it yet? I'd appreciate any feedback or reviews!

@naykun (Contributor) commented Aug 27, 2025

@Trgtuan10 I've tested it, and things are working well overall. This approach shows promise for handling complex editing scenarios while keeping the rest of the content unchanged. There are a few unstable cases where instructions don't always take effect, but I believe we can mitigate those with better prompts and strategic masking. I suggest we move forward with this PR.
cc @sayakpaul @yiyixuxu

@sayakpaul (Member) left a comment

Thanks for this! I have two minor comments.

I think we should also add tests and docs. Thanks to @naykun for running the tests as well.

@yiyixuxu (Collaborator) left a comment

thank you for your PR!!
left some very small comments about the # Copied from annotations

@Trgtuan10 (Contributor, Author)

Thanks for the reviews @sayakpaul @yiyixuxu. I’ve updated the code to stay up to date with the latest changes from the main branch. Please check it again.

@yiyixuxu (Collaborator)

@bot /style

@github-actions (Contributor)

Style fix is beginning... View the workflow run here.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@yiyixuxu (Collaborator)

oh, CI still fails and the style bot cannot fix it
would you be able to fix it? just run make style and fix the format issues as listed

thanks!

Trgtuan10 added 2 commits August 30, 2025 09:43
@TuanNT-ZenAI (Contributor)

@yiyixuxu I just fixed it.

@yiyixuxu merged commit 67ffa70 into huggingface:main on Aug 31, 2025
10 checks passed
@xduzhangjiayu (Contributor)

> @yiyixuxu I just fixed it.

Hi,
I have read the code, and I'm confused: why is masked_image_latents not used when it is sent to the DiT?

@Trgtuan10 (Contributor, Author)

@xduzhangjiayu No problem; I don't think anyone is using those parameters. Feel free to open a PR to make the code clearer.

@xduzhangjiayu (Contributor) commented Sep 1, 2025

@Trgtuan10 So we don't need to concat masked_image_latents with the latents for the inpainting model?

@Trgtuan10 (Contributor, Author)

Yes, we don't need to. The mask is only used to blend the latents after each denoising step (see the code).
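
For illustration, here is a minimal sketch of that per-step blending with hypothetical names (image_latents stands in for the clean latents of the input image, mask for the latent-space mask; scale_noise is how diffusers' flow-matching schedulers re-noise a sample to a given timestep), not the pipeline's exact code:

import torch

def blend_inpaint_latents(latents, image_latents, mask, scheduler, t_next, noise):
    # Re-noise the clean image latents to the upcoming timestep...
    init_latents_proper = scheduler.scale_noise(image_latents, torch.tensor([t_next]), noise)
    # ...then keep the original content where mask == 0 and the freshly
    # denoised content where mask == 1, so only the masked region changes.
    return (1 - mask) * init_latents_proper + mask * latents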

@xduzhangjiayu (Contributor)

> Yes, we don't need to. The mask is only used to blend the latents after each denoising step (see the code).

Thanks! I understand; it's like BLD (Blended Latent Diffusion) inpainting.

@huan085128

> Yes, we don't need to. The mask is only used to blend the latents after each denoising step (see the code).

I found a problem with edge generation when adding an object; it persists across multiple random seeds, and I have no idea what causes it.

import torch
from diffusers import QwenImageEditInpaintPipeline
from diffusers.utils import load_image

pipe = QwenImageEditInpaintPipeline.from_pretrained("Qwen-Image-Edit", torch_dtype=torch.bfloat16)
pipe.to("cuda:5")

prompt = "添加半个木瓜"  # "add half a papaya"
negative_prompt = ""
source = load_image("glasses-1285273_1280 (1).jpg")
mask = load_image("ComfyUI_temp_zqkkx_00002_.png")

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=source,
    mask_image=mask,
    strength=1.0,
    num_inference_steps=35,
    true_cfg_scale=4.0,
    generator=torch.Generator(device="cuda:5").manual_seed(41412)  # match the pipeline's device
).images[0]
image.save("qwen_inpainting_6.png")

@Trgtuan10 (Contributor, Author)

What's wrong with these results, @huan085128?

@huan085128

> What's wrong with these results, @huan085128?

Sorry, I didn't explain it clearly. The issue is at the edges of the generated object; here's a zoomed-in view:

[zoomed-in image of the edge artifacts]

@Trgtuan10 (Contributor, Author)

@huan085128 I’d suggest trying to blur the mask for better results. However, Qwen-Edit is more suited for editing existing content rather than generating entirely new elements.
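
For example, a quick way to blur the mask with PIL before calling the pipeline (a sketch; the radius is an arbitrary starting point, not a tuned value):

from PIL import ImageFilter

# Soften the hard mask boundary so the inpainted region blends into its surroundings.
blurred_mask = mask.filter(ImageFilter.GaussianBlur(radius=8))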

@huan085128

> @huan085128 I’d suggest trying to blur the mask for better results. However, Qwen-Edit is more suited for editing existing content rather than generating entirely new elements.

Thanks for your reply. I’ve already tried using a blurred mask, but got the same result. Then I tried text editing, and I still noticed some issues around the edges:
prompt: replace the '国' to '打'

[Images: Source | Mask | Qwen edit]

@Trgtuan10 (Contributor, Author)

Maybe the inpainting area is too small; the model needs a bigger space to achieve a good result. You could consider an ADetailer-style approach, using the padding_mask_crop option.
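
For reference, a sketch of that idea, assuming this pipeline supports padding_mask_crop the way other diffusers inpaint pipelines do (the value 32 is an arbitrary example):

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=source,
    mask_image=mask,
    strength=1.0,
    num_inference_steps=35,
    true_cfg_scale=4.0,
    padding_mask_crop=32,  # crop to the mask plus 32px of context, inpaint, then paste back
    generator=torch.Generator(device="cuda").manual_seed(422),
).images[0]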
