I am currently trying to train an SD model on a concept it has definitely not seen before, where quality matters slightly more than variety. The caption for each image is the same single word, since I only want to generate images that look like my training data. The goal is to train a foundation-like model for that new concept (which I guess will work better with more data).
I have seen in this discussion that it's recommended to cherry-pick images as instance images and put the rest into concept images (#965 (comment)).
How would it affect my model to split the data like that (with a similar number of samples in each set) versus using only instance images?
How does the simplicity of the text input relate to the number of text encoder training steps? I mean, overfitting the text encoder would not really hurt, since I will use the exact same text input in the future as well.
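For reference, this is roughly what I mean by the split. It is only a sketch: the folder names, the placeholder token "mything", the sidecar .txt caption convention, and the 20-image cutoff are my own assumptions, and in practice I would cherry-pick the instance images by hand rather than sampling them randomly.

```python
# Sketch of the instance/concept split described above, assuming plain folders on disk.
# "mything" is a placeholder token; paths and the 20-image cutoff are made up.
import random
import shutil
from pathlib import Path

ALL_IMAGES = Path("data/all_images")         # every image of the new concept
INSTANCE_DIR = Path("data/instance_images")  # cherry-picked, highest-quality subset
CONCEPT_DIR = Path("data/concept_images")    # the rest, used as concept/class images
CAPTION = "mything"                          # the single shared caption / token

def split_dataset(num_instance: int = 20, seed: int = 0) -> None:
    """Copy a subset into the instance folder and the rest into the concept folder."""
    images = sorted(ALL_IMAGES.glob("*.png")) + sorted(ALL_IMAGES.glob("*.jpg"))
    random.Random(seed).shuffle(images)  # stand-in for picking the best images by hand
    INSTANCE_DIR.mkdir(parents=True, exist_ok=True)
    CONCEPT_DIR.mkdir(parents=True, exist_ok=True)
    for i, img in enumerate(images):
        target = INSTANCE_DIR if i < num_instance else CONCEPT_DIR
        shutil.copy(img, target / img.name)
        # Every image gets the same one-word caption, written as a sidecar .txt file.
        (target / img.name).with_suffix(".txt").write_text(CAPTION)

if __name__ == "__main__":
    split_dataset()
```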