Seed from the mathematical point of view #6941
Replies: 2 comments 1 reply
-
You're overthinking it. It's just `torch.randn` with a fixed seed, which outputs Gaussian noise drawn from the standard normal distribution N(0, 1): stable-diffusion-webui/modules/devices.py, lines 84 to 88 in b165e34. The seed is just a number that allows the generator to repeat its output deterministically. The function that uses `devices.randn` mentioned above generates the final noise that goes into the model: stable-diffusion-webui/modules/processing.py, line 345 in 6073456
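As a minimal sketch of what "the seed makes the generator deterministic" means: the seed initializes a pseudo-random number generator, and that generator then emits the whole noise tensor. The sketch below uses Python's stdlib `random` as a stand-in for `torch.randn` (torch uses a different PRNG algorithm, but the principle is the same):

```python
import random

def seeded_gaussian_noise(seed, n):
    """Return n samples of N(0, 1) noise from a generator seeded with `seed`.

    Stand-in for torch.manual_seed(seed) followed by torch.randn(n):
    the seed fixes the PRNG state, and every sample is drawn from
    that one stream.
    """
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# The same seed reproduces the noise exactly, element for element.
a = seeded_gaussian_noise(1234, 8)
b = seeded_gaussian_noise(1234, 8)
assert a == b
```

This is why the same seed, with all other parameters equal, yields the same image on any machine: the entire noise tensor is a deterministic function of the seed.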
-
Thank you, @mezotaken! Indeed, it works differently than I expected. But my main question is still unanswered: is it possible to find out which pixel is influenced by which part of the seed? Can this be calculated backwards?
-
Hello everybody!
I would like to understand more about how the seed is generated. I thought it couldn't be that hard to google the answers to my questions, but it is. Or I'm entering the wrong search phrases. I would be happy if someone could help me get a better understanding. Please allow me to write down my two questions here:
For a 512x512 diffusion model the seed is a number between 0 and 4,294,967,295. If I divide 4,294,967,296 by (512x512), I get 16,384, which is 2^14. But why 14 in the exponent? I would have expected 16, to describe "high color". 14-bit color depth is new to me. Or is that the standard for color depth in the Linux world?
How exactly is the seed distributed over the 512x512 pixels? Can I derive from the seed, for example, the color of a certain pixel (121;333)?

In a test I set the seed to 4,294,967,295, the highest available seed (SD 1.5 model), and simply generated "a dog" image. Then I reduced the seed by 1 to 4,294,967,294. In my logic this should have a very small effect on the generated image, because only pixel (512;512) would change its color by one unit. Or, depending on the form of the structure, another pixel would change. But only that one pixel! In most cases this would not influence the final result after 20 steps of generation. Yes, sure, it could, but since the dog in both of my tests was centered in the middle of the image and only a pixel in the corner would have changed, this would not, in my conviction (which of course may be wrong), turn a pinscher into a pug, as it did in my test.

Or is the seed processed in a further step by the CPU, so that one cannot deduce the pixel and the color from the seed directly? But even then, it should be possible: since the same seed, under the same other parameters, generates the same image on every computer, any further re-calculation of the seed cannot be (pseudo-)randomized, so this re-calculation, if it really exists, must be based on a known algorithm (aside from a small variation of the final image with the same parameters when using xformers).
Marc
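On the second question above: the seed is not distributed over pixels at all. It seeds a pseudo-random number generator, and that generator emits the entire latent noise tensor one sample at a time. Changing the seed by 1 gives a completely different internal PRNG state, so every value in the noise changes, not just one pixel. A small stdlib-only sketch (Python's `random` module as a stand-in for torch's generator, which uses a different algorithm but behaves the same way in this respect):

```python
import random

def latent_noise(seed, n=16):
    """Draw n Gaussian noise samples from a PRNG seeded with `seed`."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

noise_a = latent_noise(4294967295)
noise_b = latent_noise(4294967294)  # seed reduced by 1

# Neighbouring seeds do not produce neighbouring noise: every sample
# differs, so the diffusion process starts from an entirely different
# field of noise, and an entirely different image can come out.
differing = sum(x != y for x, y in zip(noise_a, noise_b))
print(differing)  # expected: 16 (all samples differ)
```

This also explains the pug/pinscher result: seeds 4,294,967,294 and 4,294,967,295 are no more similar to each other, as starting noise, than any two seeds picked at random.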