Replies: 1 comment 2 replies
-
You should never caption the images when using the text_encoder; stick to the rule and use only one nonexistent word as the instance images' filename. If you disable the text_encoder for the whole training, you will never get good results.
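For example, something like this (just a sketch; the folder path and the token "xyzqq" are placeholders I made up) renames a whole folder of instance images to that single word, assuming the notebook reads each instance image's filename as its caption:

```python
from pathlib import Path

instance_dir = Path("instance_images")  # hypothetical folder of training images
token = "xyzqq"                         # one nonexistent word, reused for every file

for i, img in enumerate(sorted(instance_dir.glob("*.jpg")), start=1):
    # "xyzqq (1).jpg", "xyzqq (2).jpg", ... so the only text the text_encoder
    # ever sees during training is the single placeholder token
    img.rename(instance_dir / f"{token} ({i}).jpg")
```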
-
I always skipped Text_Encoder_Training since I never got good results with it: the CKPT either produced very similar images or images not related to the training at all.
Since the option to skip Text_Encoder_Training is gone, I wonder how to train without it now. Do I put all the steps into UNet_Training_Steps and leave Text_Encoder_Training at zero, or was the old Text_Encoder_Training option only there to limit the text encoder, meaning I have to leave UNet_Training_Steps at zero?
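For concreteness, these are the two setups I mean (variable names copied from the notebook cell as I remember them, step counts made up; the current version may differ):

```python
# Setup A: put all steps on the UNet, leave the text encoder untouched
UNet_Training_Steps = 3000      # hypothetical step count
Text_Encoder_Training = 0       # zero = skip the text encoder?

# Setup B: Text_Encoder_Training only limits the text encoder,
# so the UNet steps would have to stay at zero
UNet_Training_Steps = 0
Text_Encoder_Training = 350     # hypothetical step count
```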
The results are not good; they don't follow the prompt.
Also, I found little info on the purpose of Text_Encoder_Training. What "text" is it referring to? Why should I use it, and what is the difference?
I found a post on Reddit saying that, to make Text_Encoder_Training work, I need to name the images like a prompt, e.g. "drawing of a house in the woods.jpg", but I doubt that.