Speed comparison #12176
Replies: 4 comments 1 reply
-
I think you forgot to set --medvram; that's why it's so slow - you don't have enough VRAM to run it with the default settings. For 8GB of VRAM, --medvram is mandatory. You probably also want to use --xformers or enable one of the SDP optimization settings in the settings menu. With the correct configuration, speed is roughly equal between all 3 tools.
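For anyone copying these flags: with the stock launchers they go in COMMANDLINE_ARGS, e.g. set COMMANDLINE_ARGS=--medvram --xformers in webui-user.bat on Windows, or ./webui.sh --medvram --xformers on Linux. Both --medvram and --xformers are standard A1111 command-line options; the SDP choices are typically found under Settings > Optimizations (cross attention optimization).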
-
--xformers is included. Also worth noting: that time was only for the base model in A1111, while both of the others ran base and refiner passes. With --medvram added, the time comes down to just under 2 minutes for the base pass. The other two ran without changes.
-
The other two just set different defaults.
-
Can we force ComfyUI to use the pagefile? (It's on a very fast drive and I have hundreds of gigabytes free.) I'm trying to do an ULTRA upscale... it will probably need 100GB of RAM.
-
Having finally gotten Automatic1111 to run SDXL on my system (after disabling scripts, extensions, etc.), I have run the same prompt and settings across A1111, ComfyUI and InvokeAI (GUI). The prompt was a simple "A steampunk airship landing on a snow covered airfield". The style for the base and refiner was "Photograph". The negative prompt was "Blurry" and the negative style was "illustration, painting" (note: for A1111 these were combined into the positive and negative prompts; see the sketch after the notes below). Euler_A, 26 total steps, 8 refiner steps, 1024 x 1024 image size. My system is an 8GB 3060 Ti with 32GB of system RAM. All timings are an average of the second and third images generated, to remove model load times.
Automatic1111 took 6 minutes 21 seconds
ComfyUI took 36.8 seconds
InvokeAI (GUI) took 16 seconds
Notes - the ComfyUI node setup generates internally at 4096 x 4096 for a 1024 x 1024 output size. I tried to get InvokeAI's nodes to use the same settings, and the image took over 10 minutes to render. At 1024 x 1024 InvokeAI (Nodes) took 16 seconds, but the output was not comparable in quality to the GUI output, or to ComfyUI's output.
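As a rough illustration of how these settings map onto A1111, here is a minimal sketch of the equivalent base-pass request against its txt2img API (this assumes the webui was launched with --api on the default port; the merged "Photograph" style text, the output filename and the omission of the 8-step refiner switch-over are my assumptions, not taken from the post):

```python
import base64
import requests

# Base-pass settings from the comparison above; the style text is assumed to be
# appended to the prompts, as described for A1111 in the post.
payload = {
    "prompt": "A steampunk airship landing on a snow covered airfield, Photograph",
    "negative_prompt": "Blurry, illustration, painting",
    "sampler_name": "Euler a",   # "Euler_A" in the post
    "steps": 26,                 # total steps; the refiner pass is not modelled here
    "width": 1024,
    "height": 1024,
}

# Standard A1111 API endpoint (requires launching with --api).
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()

# The response carries base64-encoded PNGs in the "images" list.
with open("airship.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```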
For the moment I will not be using A1111 for SDXL experimentation. I have to disable too many features to get it working, and the speed is way too slow. I know this will change over time, hopefully quite quickly, but for the moment, certainly on older hardware, ComfyUI is the better option for SDXL work.
The image below comes from ComfyUI and contains the nodes used, for anyone interested.