tutorials/image/qwen/qwen-image #327
Replies: 15 comments 25 replies
-
Will there be an i2i workflow soon? Thanks
-
First: thanks for the fast integration! :) About halfway through the steps, the preview image in the sampler turns completely black, and the final output is black as well. I'm using an RTX 5090 and the fp8 scaled version. Other models like Flux Krea still work after the ComfyUI update. Any advice?
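Black previews partway through sampling usually mean NaN or Inf values appeared in the latent, often from low-precision (fp8/fp16) overflow in attention layers. If you can capture the latent values (e.g. in a custom node), a quick sanity check looks like this; `latent_is_valid` is a hypothetical helper over a flat list of values, not a ComfyUI API (in practice you would call `torch.isfinite` on the tensor):

```python
import math

def latent_is_valid(latent_values) -> bool:
    # A fully black preview usually means NaN or Inf crept into the
    # latent, e.g. from fp8/fp16 overflow somewhere in the model.
    return all(math.isfinite(v) for v in latent_values)

good = [0.12, -0.53, 0.91, 0.04]
bad = [0.12, float("nan"), 0.91, 0.04]
```

If the check fails mid-run, the fix is usually to run the affected component at higher precision rather than to change the workflow.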
-
I have used the exact workflow and models you listed and keep getting an error. The run finishes almost instantly:
Prompt executed in 0.06 seconds
Prompt executed in 0.15 seconds
-
Replaced the fp8_e4m3fn Qwen model with the fp16 version and the output is definitely much better (but it takes 4-6 times the processing time; running on a 4090). Does anyone know if there's an fp16 text encoder? I'm currently using qwen_2.5_vl_7b_fp8_scaled for the text encoder.
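The quality gap is expected: fp8 e4m3 keeps only 3 explicit mantissa bits versus fp16's 10, so every weight is rounded far more coarsely. A stdlib sketch of that rounding error (the `round_mantissa` helper is illustrative only; it ignores exponent range and is not how PyTorch actually casts):

```python
import math

def round_mantissa(x: float, bits: int) -> float:
    # Round x to `bits` explicit mantissa bits, mimicking the
    # precision loss of low-bit float formats (exponent range ignored).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)        # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** (bits + 1)   # +1 for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

vals = [i / 997 for i in range(1, 997)]
err_fp8 = max(abs(v - round_mantissa(v, 3)) for v in vals)    # e4m3: 3 bits
err_fp16 = max(abs(v - round_mantissa(v, 10)) for v in vals)  # fp16: 10 bits
```

The worst-case rounding error with 3 mantissa bits is roughly two orders of magnitude larger than with 10, which is why the fp16 weights reproduce fine detail the fp8 ones lose.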
-
Will this work with a Mac M4?
-
For those with low VRAM (8 GB): this tutorial works perfectly (minute 25). Use city96's K_Q2 GGUF models and the example workflow. Relevant file locations:
-
Using ComfyUI v0.3.47. I downloaded and used the sample workflow JSON file, but in the Load CLIP node the listed qwen_image is missing on my computer and shows an error. How do I add this? Thanks in advance.
-
All I use is Qwen-Image_fp8 (20.4 GB) and the official workflow. My computer is a 4090 with 24 GB VRAM, 120 GB RAM, and an i9-14900K. Running the default workflow takes me more than 20 minutes every time. Why?
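One likely cause: the 20B-parameter fp8 checkpoint alone is about 18.6 GiB, which nearly fills a 24 GB card before activations and the text encoder are counted, so ComfyUI may be offloading weights to system RAM on every run. Back-of-envelope (the 1 byte per parameter for fp8 is the only assumption):

```python
# Rough weight-memory estimate for a 20B-parameter model stored as fp8.
params = 20e9
bytes_per_param = 1  # fp8 = 1 byte per parameter
weights_gib = params * bytes_per_param / 2**30
print(f"{weights_gib:.1f} GiB")  # ~18.6 GiB of a 24 GB card
```

If the console shows repeated model loading/unloading between runs, that offloading is almost certainly where the 20 minutes go.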
-
Will this work with an M2 Mac with 32 GB?
-
Does Qwen need or benefit from a refiner, such as the SDXL refiner in SDXL?
-
Thank you! Using the Q4_K_S quant (not distilled) with SageAttention works well on an 8 GB 4060 laptop. Generation time is long, but the results are good: 1328x1328 at 19 s per step.
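For scale, 19 s per step adds up quickly over a full run; a quick back-of-envelope (the 20-step count is an assumption, your sampler settings may differ):

```python
# Hypothetical step count; adjust to your sampler settings.
seconds_per_step = 19
steps = 20
total_minutes = seconds_per_step * steps / 60
print(f"{total_minutes:.1f} min")  # ~6.3 minutes per 1328x1328 image
```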
-
Hi, I'm testing Qwen-Image in ComfyUI for img2img and noticed a consistent issue. Using the official ComfyUI text2image workflow template works great from a pure noise latent: sharp, detailed, no artifacts. This behavior does not occur with Flux under the same conditions.
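For context on the text2img vs img2img distinction above: img2img in latent diffusion typically starts from the source image's latent with partial noise rather than from pure noise, controlled by a denoise strength. A minimal conceptual sketch (the linear mix and the `img2img_start_latent` name are illustrative assumptions, not ComfyUI's exact sigma-scheduled math):

```python
import random

def img2img_start_latent(image_latent, denoise, rng=None):
    # denoise=1.0 -> pure noise (text2img behavior);
    # denoise=0.0 -> the source latent unchanged.
    # Illustrative linear mix, not ComfyUI's exact scheduler scaling.
    rng = rng or random.Random(0)
    noise = [rng.gauss(0, 1) for _ in image_latent]
    return [(1 - denoise) * a + denoise * n
            for a, n in zip(image_latent, noise)]

src = [0.5, -0.2, 0.1, 0.9]
```

Artifacts that appear only at intermediate denoise values usually point at how the model handles that partially-noised starting latent, which would explain a difference from Flux under identical settings.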
-
The inpaint model doesn't seem to be working for me. I downloaded the workflow and made it identical to the example workflow screenshot, used the input image from the page, masked the hair, and prompted "red hair", but the output is identical to the input with the white hair. Canny works fine, and I checked the console; there are no errors in there.
-
It seems that all Qwen-Image ControlNets and Control-LoRAs produce faint cross-hatch patterns on the image. Are there plans to fix this issue?
-
It pauses each time after clicking 'Run', no matter whether I use the installed ComfyUI or the portable edition (updated to the newest version by running the 'update_comfyui_and_python_dependencies' bat file). Freshly installed Windows 10 22H2 (Aug) with 32 GB RAM. The portable ComfyUI shows:
To see the GUI go to: http://127.0.0.1:8188
E:\AI\ComfyUI_windows_portable>pause
-
tutorials/image/qwen/qwen-image
Qwen-Image is a 20B parameter MMDiT (Multimodal Diffusion Transformer) model open-sourced under the Apache 2.0 license.
https://docs.comfy.org/tutorials/image/qwen/qwen-image