The theory of negative prompts #9842
feiyangsuo started this conversation in General
To my limited knowledge, positive prompts are encoded by the CLIP text encoder and fed into the denoiser (UNet) via cross-attention. During training, guided by the ground-truth image, the model learns the association between the positive prompt and the image content.
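Roughly, I picture the positive-prompt path like this (just a sketch using the Hugging Face CLIP classes; the `unet` call at the end is a placeholder, not the actual repo code):

```python
from transformers import CLIPTextModel, CLIPTokenizer

# Sketch of the positive-prompt conditioning path as I understand it.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

tokens = tokenizer(
    "a photo of a cat",
    padding="max_length",
    max_length=tokenizer.model_max_length,
    return_tensors="pt",
)
text_emb = text_encoder(tokens.input_ids).last_hidden_state  # (1, 77, 768)

# Inside the UNet, this embedding acts as the key/value of the cross-attention
# layers while the latent image features are the queries, e.g.:
# noise_pred = unet(latents, timestep, encoder_hidden_states=text_emb)
```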
My question is: how does Stable Diffusion apply negative-prompt guidance during inference, given that there is no negative-prompt training procedure? And how are the negative embeddings fed into the model as a condition?
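If I had to guess, the negative prompt might simply be encoded the same way as the positive one and then dropped into the classifier-free guidance formula in place of the empty prompt, so no extra training would be needed. Something like the sketch below (function and argument names are made up, not taken from the repo):

```python
def guided_noise(unet, latents, t, pos_emb, neg_emb, guidance_scale=7.5):
    """My unverified guess: the negative-prompt embedding takes the slot that
    the empty ("unconditional") prompt normally occupies in classifier-free
    guidance."""
    noise_neg = unet(latents, t, encoder_hidden_states=neg_emb)
    noise_pos = unet(latents, t, encoder_hidden_states=pos_emb)
    # Extrapolate away from the negative prediction, toward the positive one.
    return noise_neg + guidance_scale * (noise_pos - noise_neg)
```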
I wish I could just read the code to check what happens with negative prompts, but the code is too hard for me to go through.