Add Qwen-Image-Edit Inpainting pipeline #12225
Conversation
Cc: @naykun as well.
Hi @Trgtuan10, that would be an amazing feature! It seems to preserve details better than the current approach. I’ll dive in and run some more extensive tests on this PR shortly. Thanks so much for your contribution, and for helping push this forward! 🙌
Thanks @naykun! Please feel free to test it out and let me know if you run into any issues.
Hi @naykun @sayakpaul, have you tested it yet? I'd appreciate any feedback or reviews!
@Trgtuan10 I've tested it, and things are working well overall. This approach shows promise for handling complex editing scenarios while keeping the rest of the content unchanged. There are a few unstable cases where instructions don't always take effect, but I believe we can mitigate those with better prompts and strategic masking. I suggest we move forward with this PR. |
Thanks for this! I have two minor comments.
I think we should also add tests and docs. Thanks to @naykun for running the tests as well.
src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_inpaint.py
Thank you for your PR!!
I left some very small comments about `# Copied from`.
Thanks for the reviews @sayakpaul @yiyixuxu. I’ve updated the code to stay up to date with the latest changes from the main branch. Please check it again.
@bot /style
Style fix is beginning. View the workflow run here.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Oh, the CI still fails and the style bot cannot fix it. Thanks!
@yiyixuxu I just fixed it.
Hi, |
@xduzhangjiayu No problem, I don’t think anyone is using these parameters. Feel free to open a PR to make the code clearer.
@Trgtuan10 We don't need to concat the
Yes, we don't need to. The mask is only used when creating the latents after each step; see the code.
Thanks! I understand, it's like BLD (Blended Latent Diffusion) inpainting.
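The blending step described in this exchange can be sketched roughly as follows. This is an illustrative toy with NumPy arrays and hypothetical variable names, not the pipeline's actual code: after each denoising step, the original (noised) latents are re-imposed outside the mask, so only the masked region is actually generated.

```python
import numpy as np

def blend_latents(denoised, noised_original, mask):
    """BLD-style blending: keep generated content only inside the masked
    region (mask == 1); re-impose the noised original latents elsewhere."""
    return mask * denoised + (1.0 - mask) * noised_original

# Toy illustration with a single-channel 4x4 "latent" grid.
denoised = np.full((4, 4), 2.0)      # pretend model output
noised_original = np.zeros((4, 4))   # pretend noised original latents
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                 # inpaint only the center 2x2 block

out = blend_latents(denoised, noised_original, mask)
# Outside the mask the original latents survive; inside, the model's output.
```

The real pipeline does this in latent space at every scheduler step, which is why the unmasked content stays unchanged.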
I found that there is a problem with edge generation when adding an object; it also happens across multiple random seeds, and I have no idea what caused this.

import torch
from diffusers import QwenImageEditInpaintPipeline
from diffusers.utils import load_image

pipe = QwenImageEditInpaintPipeline.from_pretrained("Qwen-Image-Edit", torch_dtype=torch.bfloat16)
pipe.to("cuda:5")

prompt = "添加半个木瓜"  # "Add half a papaya"
negative_prompt = ""
source = load_image("glasses-1285273_1280 (1).jpg")
mask = load_image("ComfyUI_temp_zqkkx_00002_.png")

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=source,
    mask_image=mask,
    strength=1.0,
    num_inference_steps=35,
    true_cfg_scale=4.0,
    generator=torch.Generator(device="cuda:5").manual_seed(41412),
).images[0]
image.save("qwen_inpainting_6.png")
What's wrong with these results, @huan085128?
Sorry, I didn’t explain it clearly. The issue is at the edges of the generated object—here’s a zoomed-in view: |
@huan085128 I’d suggest trying to blur the mask for better results. However, Qwen-Edit is more suited for editing existing content rather than generating entirely new elements. |
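For reference, blurring the mask can be done with Pillow before passing it as `mask_image`. A minimal sketch (the hard-edged square mask here is synthetic, just for illustration):

```python
from PIL import Image, ImageDraw, ImageFilter

# Build a hard-edged example mask: white square on a black background.
mask = Image.new("L", (512, 512), 0)
ImageDraw.Draw(mask).rectangle([128, 128, 384, 384], fill=255)

# Soften the mask edges so the inpainted region blends into its
# surroundings instead of producing a visible seam.
blurred = mask.filter(ImageFilter.GaussianBlur(radius=16))
```

The `blurred` image would then be passed as `mask_image=` to the pipeline in place of the hard mask.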
Thanks for your reply. I’ve already tried using a blurred mask, but got the same result. Then I tried text editing, and I still noticed some issues around the edges:
Maybe the inpainting area is too small; the model needs a bigger space to achieve a good result. You can think about the ADetailer method, using
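The crop-based trick mentioned here (as in ADetailer) can be sketched as: take the mask's bounding box, expand it by a context factor, run inpainting only on that crop at full resolution, then paste the result back. A hedged sketch of the crop-box computation; the function name and `scale` parameter are illustrative, not part of diffusers:

```python
from PIL import Image, ImageDraw

def expanded_crop_box(mask, scale=2.0):
    """Bounding box of the white mask region, grown by `scale` around its
    center and clamped to the image, so the model sees more context."""
    left, top, right, bottom = mask.getbbox()
    w, h = right - left, bottom - top
    cx, cy = left + w / 2, top + h / 2
    half_w, half_h = w * scale / 2, h * scale / 2
    return (max(0, int(cx - half_w)), max(0, int(cy - half_h)),
            min(mask.width, int(cx + half_w)), min(mask.height, int(cy + half_h)))

# Illustration: a small mask inside a large image.
mask = Image.new("L", (1024, 1024), 0)
ImageDraw.Draw(mask).rectangle([400, 400, 500, 500], fill=255)
box = expanded_crop_box(mask, scale=2.0)
# One would then do: crop = image.crop(box), inpaint the crop with the
# matching mask crop, and image.paste(result, box) to put it back.
```

This gives the model a larger working region around the masked area, which often helps with small objects and edge artifacts.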








What does this PR do?
This PR introduces support for the Qwen-Image-Edit model in Inpainting tasks, expanding the model’s creative capabilities and integration within the Diffusers library.
Example code
Comparison of the performance of Qwen-Image-Edit Inpaint
Who can review?
cc @a-r-r-o-w @sayakpaul