Replies: 2 comments 2 replies
-
Check denoising strength and masked content. Denoising strength controls how much the masked area is changed: 0 means not at all, 1 means completely.
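As a rough illustration of what the strength setting does (using the diffusers img2img pipeline as a stand-in for the A1111 slider; model id, paths and prompt are placeholders, not from this thread):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("init.png").convert("RGB")

# strength near 0 barely changes the input image,
# strength near 1 ignores the input almost entirely.
out = pipe(prompt="a placeholder prompt", image=init, strength=0.75).images[0]
out.save("out.png")
```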
-
That doesn't answer my question. I know what inpainting is; my question is about a very specific problem: can I generate latent noise (i.e. the pattern you see as image noise in the first denoising steps) from an input image, or alternatively upload latent noise in addition to the input image?

HSV image noise and other variants that merely look like the pattern in the denoising steps do not mean the latent space will contain the right kind of noise. I suspect that, because of how the diffusion works, the latent for an HSV-noise image with a high denoising setting will end up more or less gray, since the image noise is smoothed away before the denoising steps. With several passes, different masks and different strengths one could probably achieve something that looks similar but with slightly worse quality, but it would be much more work: one has to create all the masks and run the diffusion repeatedly.

The overall idea is that generating the initialization of the latent space myself (e.g. with my own script), or an image that maps to the right latent, would make more sense. Maybe this is more a question for the diffusers project, but I use A1111 rather than raw diffusers and need a way to inject it there. With raw diffusers you can generate the latent noise yourself as a plain NumPy array, but the A1111 API only accepts images, and I wonder whether there is even an option to get direct access to the latent space data.
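For context, this is roughly what "generating the latent noise yourself" looks like with raw diffusers (not A1111): the txt2img pipeline accepts a `latents` tensor of shape (batch, 4, H/8, W/8), which you can construct however you like. Model id and prompt below are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

h, w = 512, 512
# Build the initial latent yourself instead of letting the pipeline sample it.
latents = torch.randn(
    1, pipe.unet.config.in_channels, h // 8, w // 8,
    dtype=torch.float16, device="cuda",
)

image = pipe(prompt="a placeholder prompt", height=h, width=w,
             latents=latents).images[0]
image.save("custom_latents.png")
```

The A1111 web UI and its API expose no equivalent of the `latents` argument, which is exactly the gap being asked about here.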
-
I'd like to run img2img on an image where an empty area is filled with latent noise (or something similar) and some areas contain existing images.
Using inpaint I could mask the existing images and fill the rest with latent noise, but then the existing images do not change at all. In the end I would like the background to be filled with a new image and the existing images integrated with smaller changes.
I imagine that filling the background (I have the mask) with latent noise, or an equivalent I can generate ahead of time into the uploaded image, and then running img2img on that image (noise background plus the pasted existing images) would produce a smooth result, with smaller changes to the existing elements so they fit the overall image. But I guess latent noise has to be generated in latent space, or would image-space noise map to latent noise when the image is imported?
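A rough sketch of what img2img does to the uploaded image, using the diffusers components as an illustration (not the A1111 code path; model id and file path are placeholders): the image is first encoded by the VAE, and noise is only added afterwards, in latent space, by the scheduler. So image-space noise painted into the background does not pass through as latent noise; it just becomes a roughly gray latent that then gets scheduler noise added on top.

```python
import torch
from torchvision import transforms
from diffusers import AutoencoderKL, DDIMScheduler
from diffusers.utils import load_image

vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae")
scheduler = DDIMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler")

img = load_image("init_with_noise_background.png")  # placeholder path
x = transforms.ToTensor()(img).unsqueeze(0) * 2 - 1  # scale to [-1, 1]

# Step 1: the uploaded image (including any painted noise) is VAE-encoded.
with torch.no_grad():
    latent = vae.encode(x).latent_dist.sample() * vae.config.scaling_factor

# Step 2: noise is injected here, in latent space, at a timestep chosen by
# the denoising strength; the image-space noise is already smoothed away.
scheduler.set_timesteps(50)
t = scheduler.timesteps[int(50 * (1 - 0.75))]  # strength of roughly 0.75
noisy_latent = scheduler.add_noise(latent, torch.randn_like(latent), t)
```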