Ok, the trick for inpainting is:

**For standard models (not inpaint models):** use "Set Latent Noise Mask" to set the mask. The denoise cannot be 1 or the result will be totally different; something like 0.6-0.8 gives enough noise to make the changes. You can go lower, but the lower you go, the harder it is to get any changes. It is also good to blur the mask to avoid seams. There is no node to blur a mask, but you can convert the mask to an image, blur it, and convert it back to a mask.

**For inpaint models:** use "VAE Decode (for Inpainting)" to set the mask, and the denoise must be 1. Inpaint models only accept denoise 1; anything else will result in a trash image.

The conditioning set mask is not for inpaint workflows; if you want to generate images with objects in a specific location based on the conditioning, you can see the examples here.

One more thing: the official Stable Diffusion model is not good for inpainting, so I suggest you get a custom model. For realistic pictures I like Realistic Vision; it inpaints well even in the version not made for inpainting. But that is just my opinion. You can use any model you want; just find a more recent one, because the official models are not top-quality models at all.
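The mask-to-image blur round trip described above can be sketched outside ComfyUI with Pillow and NumPy. This is just an illustration of the feathering idea, not a ComfyUI node; the `blur_mask` helper and the radius value are hypothetical choices.

```python
import numpy as np
from PIL import Image, ImageFilter

def blur_mask(mask: np.ndarray, radius: float = 8.0) -> np.ndarray:
    """Feather a binary inpaint mask with a Gaussian blur to soften seams.

    mask: 2-D float array in [0, 1], where 1 marks the area to repaint.
    Returns a float array in [0, 1] with smooth edges.
    """
    # Mask -> 8-bit grayscale image -> blur -> back to a float mask,
    # mirroring the mask-to-image / image-to-mask conversion in ComfyUI.
    img = Image.fromarray((mask * 255).astype(np.uint8), mode="L")
    blurred = img.filter(ImageFilter.GaussianBlur(radius=radius))
    return np.asarray(blurred, dtype=np.float32) / 255.0

# Example: a hard-edged square mask becomes a soft-edged one.
hard = np.zeros((64, 64), dtype=np.float32)
hard[16:48, 16:48] = 1.0
soft = blur_mask(hard, radius=4.0)
```

The soft falloff at the mask border is what hides the seam: pixels near the edge get a partial blend of the original and the repainted content instead of a hard cut.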
-
I've got the following workflow which I hoped would result in inpainting of one image into an area of another.
But the result is a mess:
What am I missing here? Using the SD 1.5 pruned checkpoint from https://huggingface.co/runwayml/stable-diffusion-v1-5
(I was after img2img-like behaviour here.)
Joining the conditionings does the opposite: the inpainted content goes missing instead.
Trying to use an inpainting model fails similarly: