Style training with new method #403
-
The text_encoder % is basically the weight of the trained subject. For a style, or a subject trained for styling, keep the % at 10-20% and it should be fine. How do you find the results with the new method? (It requires more complicated prompting.)
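For what it's worth, here is a minimal sketch of how a text-encoder percentage can map to training steps, assuming it simply means "train the text encoder for that fraction of the total UNet steps". The function name and the linear mapping are illustrative assumptions, not the notebook's actual code.

```python
# Minimal sketch: turn a Train_text_encoder_for-style percentage into a
# step count. Assumes a simple linear mapping; names are illustrative only.

def text_encoder_steps(total_unet_steps: int, text_encoder_percent: float) -> int:
    """Number of steps during which the text encoder is trained alongside the UNet."""
    return int(total_unet_steps * text_encoder_percent / 100)

# Example: 7000 UNet steps with the text encoder trained for 15% of them.
print(text_encoder_steps(7000, 15))  # -> 1050
```

Under this reading, a 10-20% setting trains the text encoder only during an early fraction of the run.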
-
@TheLastBen
-
Hi, sorry if I'm asking noob questions, but I feel completely lost in random outcomes. I have the following picture set: 30 full-body pics + 30 face close-ups of one woman, naked, in a specific BDSM stance (which is not "covered" by the standard SD model vocabulary), on a white background. I have named them "personaltoken_woman_specificbdsmstance_white_background (1)..(60)" and the session is named likewise. I used 100-150 steps x 60 pics, with text encoder training at 30% and 20% (I made two models). Which training parameters, prompt and CFG should I use to get:
TY
-
I'm writing this during the training process, and it's going pretty badly compared with the old method.
So:
1. How do I let the program know that I'm teaching it a style?
2. My images contain buildings, nature and some people. Do I need to set Contain_Faces to Both or No? Does "No" disable prior preservation?
3. How many steps do I need for 14 images, and what Train_text_encoder_for percentage? (See the rough arithmetic sketched after this list.)
Please give me a short tutorial or some tips on style training with the new method.
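For reference, here is the back-of-the-envelope arithmetic behind a step count and a text-encoder percentage; the steps-per-image figure and variable names are illustrative assumptions (chosen to match the numbers reported in the update below), not settings from the notebook itself.

```python
# Rough arithmetic for picking a step count and a text-encoder percentage.
# The steps-per-image value and the names are assumptions for illustration.

num_images = 14
steps_per_image = 500                 # matches the 7000-step run described below
total_steps = num_images * steps_per_image

text_encoder_percent = 10             # low percentage, as suggested for styles
text_encoder_steps = total_steps * text_encoder_percent // 100

print(total_steps)         # 7000
print(text_encoder_steps)  # 700
```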
UPD.
I think I finished training.
I used: Contain_Faces: Both, 14 images, 10%, 7000 steps.
The token is weak. To use it, you have to put it at the very beginning of the prompt and give it a weight of ~1.3. With each new word added to the prompt, this value has to be adjusted (a quick example of that weighting is sketched below).
The results are good, I think, but I still want some tips! :)
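Here is a quick sketch of the token weighting described above, assuming the AUTOMATIC1111 WebUI "(token:weight)" attention syntax; "mystyletoken" and the helper function are placeholders for illustration, not anything from the notebook.

```python
# Sketch of up-weighting a trained style token at the start of a prompt,
# assuming the "(token:weight)" attention syntax used by the A1111 WebUI.
# "mystyletoken" is a placeholder for whatever instance token was trained.

def weighted_prompt(token: str, weight: float, description: str) -> str:
    """Build a prompt with the style token up-weighted at the very beginning."""
    return f"({token}:{weight:.1f}), {description}"

# Short prompt: a stronger weight tends to be needed.
print(weighted_prompt("mystyletoken", 1.3, "a castle on a cliff at sunset"))

# Longer prompt: the weight may need re-tuning, as noted above.
print(weighted_prompt("mystyletoken", 1.2,
                      "a busy market street, people, buildings, morning light"))
```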