-
In SD you do this with img2img. Send the image you want to vary to the img2img pane. Copy and adjust your text prompt as you see fit. The "denoising strength" slider governs how far the variants will stray from your original image; 0.75 is a good starting point. The higher you go, the more your text prompt will govern the variation and the less your source image will contribute. Numbers below 0.5 will stay very close to the source image. You can also toy with CFG Scale. I stick to numbers between 5 and 15, but different samplers will react differently. If the variants don't look "strong" enough, up the number of steps; 20 steps is a good place to start for the Euler samplers. Unlike MJ's variation system, you have more control over how far the variants stray from the source, but it may take a little practice to get the hang of it. ;)
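If you'd rather script this than use the img2img pane, here's a minimal sketch against the webui's built-in API. It assumes you launched stable-diffusion-webui with the `--api` flag; the URL, filenames, and prompt are placeholders, and older builds may want `sampler_index` instead of `sampler_name`.

```python
# Minimal sketch: request variations of an existing image via the webui img2img API.
# Assumes the webui is running locally with --api enabled; adjust URL/paths/prompt.
import base64
import requests

URL = "http://127.0.0.1:7860"  # default local webui address (assumption)

# The API expects init images as base64-encoded strings.
with open("source.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "same prompt you used for the original image",
    "denoising_strength": 0.75,  # how far variants may stray from the source
    "cfg_scale": 7,              # prompt adherence; 5-15 is a reasonable range
    "steps": 20,                 # bump this if variants look weak
    "sampler_name": "Euler a",
    "batch_size": 4,             # several variants per call
}

resp = requests.post(f"{URL}/sdapi/v1/img2img", json=payload)
resp.raise_for_status()

# Each returned image is a base64 string; decode and save them to disk.
for i, img_b64 in enumerate(resp.json()["images"]):
    with open(f"variant_{i}.png", "wb") as out:
        out.write(base64.b64decode(img_b64))
```

The knobs in the payload map one-to-one to the sliders described above, so you can tune `denoising_strength`, `cfg_scale`, and `steps` the same way you would in the UI.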
-
Is there any way to get variations of an image after generating it in txt2img, using the same prompt as last time?
NovelAI and MidJourney have a variations button, but I can't find anything like it in [stable-diffusion-webui].