Latest optimal values for teaching a face? #1667
Unanswered
FurkanGozukara
asked this question in
Q&A
Replies: 1 comment 2 replies
-
For naming, use only the token "ohwx"; don't use any other word, and make sure the token is unknown to the model. Do not use concept images for a face; concept training will just parasitize the face training. Try 150 UNet steps per image, then resume the training for more if the result isn't good enough. Keep the UNet learning rate at 2e-6 and the number of images below 20. For the text encoder, keep the total steps at 350 and the learning rate at 1e-6; if you resume training, set the text encoder steps to 0, as it is already trained. If you set any steps value to 0, that specific training phase is skipped (e.g. Text_Encoder_Concept_Training_Steps).
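The advice above can be sketched as a small helper that fills in the settings for a given dataset size. The parameter names mirror the notebook's config fields, but the function itself and its formulas are only an illustration of this thread's recommendations, not an official API:

```python
# Illustrative sketch of the recommended settings from this thread.
# `recommended_settings` and its formulas are assumptions for clarity,
# not part of any real training script.

def recommended_settings(num_images, resuming=False):
    """Suggest DreamBooth settings for a face dataset of `num_images` photos."""
    return {
        "Instance_Token": "ohwx",                 # rare token unknown to the model
        "UNet_Training_Steps": 150 * num_images,  # ~150 UNet steps per image
        "UNet_Learning_Rate": 2e-6,
        # Total (not per-image) text-encoder steps; 0 skips the phase entirely,
        # e.g. when resuming a run whose text encoder is already trained.
        "Text_Encoder_Training_Steps": 0 if resuming else 350,
        "Text_Encoder_Learning_Rate": 1e-6,
        # Setting any steps value to 0 disables that phase; faces should not
        # use concept images, so concept training stays at 0.
        "Text_Encoder_Concept_Training_Steps": 0,
    }

print(recommended_settings(15))
```

For a fresh run with 15 images this yields 2250 UNet steps and 350 text-encoder steps; resuming keeps the UNet steps but zeroes out the text-encoder phase.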
-
First question: I am teaching a face.
Naming the images "ohwx man" is right?
Then using concept images generated by "photo of man"?
How many concept images do you suggest?
When using Offset_Noise, how many UNet_Training_Steps per training image?
Is UNet_Learning_Rate = "2e-6" good?
How many Text_Encoder_Training_Steps per training image?
Is Text_Encoder_Learning_Rate = "1e-6" good?
What is Text_Encoder_Concept_Training_Steps? Does it mean the model does extra training on the concept images? And if it is 0, are concept images never used during training?