Curious about the differences between latent sampling methods in training #7760
morphinapg started this conversation in General
I was able to find where this was documented, which says:
I'm still fairly new at this, so I'm curious about this. To me, it sounds like random would be the method that most closely matches how generation works, so why would deterministic work better for training? And what exactly does "the mean of latent space" actually mean?
Perhaps it would be useful to explain in more depth what latent sampling actually is. I have a fairly basic understanding of how the model and VAE work, but if this specific setting could be explained in more detail, that would help me understand exactly what's going on here in relation to training.
I see a lot of people recommending deterministic, and I'm curious why that would produce better results when random seems like it would make more sense. Also, why is "once" the default, and what does Dreambooth use?
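For context, here's my rough mental model of what the three options might be doing. The VAE encoder outputs a distribution (a mean and a log-variance) per image rather than a single latent, and the strategies differ in how a training latent is drawn from it. This is just an illustrative sketch; the function and parameter names are mine, not the trainer's actual API:

```python
import numpy as np

def sample_latent(mean, logvar, mode, rng, cache=None, key=None):
    """Sketch of three latent-sampling strategies (names are hypothetical).

    mean, logvar: arrays describing the per-image latent distribution
                  that a VAE encoder would output.
    mode:         "deterministic", "random", or "once".
    cache/key:    used only by "once" to reuse the first sampled latent.
    """
    if mode == "deterministic":
        # Always train on the mean of the latent distribution.
        return mean
    std = np.exp(0.5 * logvar)
    if mode == "random":
        # Draw a fresh latent every time (reparameterization: mean + std * noise).
        return mean + std * rng.standard_normal(mean.shape)
    if mode == "once":
        # Sample a random latent the first time this image is seen,
        # then reuse that same latent for the rest of training.
        if key not in cache:
            cache[key] = mean + std * rng.standard_normal(mean.shape)
        return cache[key]
    raise ValueError(f"unknown mode: {mode}")
```

If this is roughly right, "deterministic" trains on the single most likely latent per image, "random" adds sampling noise every epoch, and "once" fixes one random draw per image, which would also explain why "once" pairs naturally with latent caching.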