-
15k images is a bit much. Do this: from what I understand, you want to train a style and not specific faces, so tick the Style_Training box. Test the model and report back with the result.
-
So with that many images, sorry Ben, I'd suggest looking at something like EveryDream; do note these bigger fine-tuners require 24 GB of VRAM to run.

Now, to force Ben's to work (with limitations), check out my Futurama post and read the images. The text encoder needs to be REAL low, like two scans of every picture low. You will need to add class images to an extent; just throw them in your instance images folder (for 15k, use 300). The number of subjects and styles you are attempting will change this. At the end of the day it's doable, but you're looking for a needle in a haystack; once images are similar, you will need to slowly progress at 15,000 steps at a time. If things bleed, start over and do less text encoder.

Now's a good time to mention that Colab will likely kick you around 50k steps, depending on how busy they are. Not saying either is better, but you can only push so much out of the VRAM that Ben is shooting for.

If this must be done on Ben's: in the train-UNet cell, change the batch size to 2 and the learning rate to 1e-6. Set your checkpoints to start at 30k and do one every 15k afterwards. Set your total steps to infinite and the text encoder to 300 steps; best to undertrain now and get your 30k in. Not going to lie, it's probably going to take weeks of tweaking.

Welp, nvm, I have no idea now lol
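For reference, here are the settings above sketched as notebook-style variables. Only `UNet_Training_Steps` and `Text_Encoder_Training_Steps` are field names confirmed elsewhere in this thread; the other names are approximations of the Colab form, not the notebook's exact fields:

```python
# Sketch of the suggested "train on Ben's anyway" settings.
# Only UNet_Training_Steps and Text_Encoder_Training_Steps are known
# field names from this thread; the rest are approximations.
UNet_Training_Steps = 0             # "infinite": run until stopped manually (assumed convention)
Text_Encoder_Training_Steps = 300   # undertrain the text encoder for now
UNet_Learning_Rate = 1e-6           # lowered learning rate
Batch_Size = 2                      # raised batch size
Start_Saving_From_Step = 30_000     # first checkpoint at 30k...
Save_Checkpoint_Every = 15_000      # ...then one every 15k afterwards
```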
-
Honestly, I have not played with the concept folder; it's new. As far as the text encoder goes, I had forgotten it was 15k images; that's way too many steps. I'd forgo training the text encoder itself and maybe throw half your images into the concept folder and train that encoder, as it seems it would use broader terms.

Since we are not training the text encoder, make sure you name your files. In this case I'd put them all in separate folders, grouping like ones, then label them tag1_tag2_tag3.png.

Bulk Rename Utility on Windows or Bulk File Renamer on Google Drive are your friends here.
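If neither tool is an option, here is a minimal Python sketch of the same grouping-and-renaming idea; the folder layout and tag names are hypothetical, and the " (N)" suffix is just one way to keep filenames unique within a group:

```python
# Sketch: images are grouped into folders named after their tags, then each
# image is renamed to "tag1_tag2_tag3 (N).png". Folder names are hypothetical.
from pathlib import Path

root = Path("grouped_images")        # e.g. grouped_images/tag1_tag2_tag3/
for folder in root.iterdir():
    if not folder.is_dir():
        continue
    tags = folder.name               # folder is named "tag1_tag2_tag3"
    for i, img in enumerate(sorted(folder.glob("*.png")), start=1):
        # numeric suffix keeps filenames unique within the tag group
        img.rename(folder / f"{tags} ({i}).png")
```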
…On Sun, Dec 11, 2022, zylervega wrote:

> Thank you, I will also try your suggestion. And I am looking into borrowing/renting some time on an A100 or something to try EveryDream.
>
> > Text encoder needs to be REAL low, like 2 scans of every picture low.
>
> So, is the value 2, or 2 * [total number of images]? I imagine it's the second one but wanted to check.
>
> > You will need to add class images to an extent, just throw them in your instance images folder, for 15k use 300
>
> Are these images separate from my dataset, or are they characteristic of my dataset / the "hand-picked" ones that Ben mentioned?
>
> > you will need to slowly progress at 15000 steps at a time
>
> 15,000 steps is a lot more than I tried on the first round; good to know.
>
> > If this must be done on Ben's, in the train UNet change batch size to 2 and learning rate to 1e-6. Set your checkpoints to start at 30k and do one every 15k afterwards,
>
> Got it.
>
> > Set your total steps to infinite and text encoder to 300 steps, best to undertrain now and get your 30k in.
>
> This seems to be different from the amount you suggested above for the text encoder; am I misunderstanding?
>
> And one more question: for the method you are proposing, do all 15k images go into Instance Images / ignore Concept Images? Thanks!
-
https://github.com/victorchall/EveryDream

Use the autoCaption ipynb. You may need to change the q factor and length; sadly it's not a tagger but a full captioner. However, if you can separate them into different subcategories, Bulk Rename Utility works very well too.

If you find another good autocaption tool for bulk files, please share.
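For scale, a bare-bones bulk-captioning sketch using a BLIP model from Hugging Face transformers. This is an illustrative stand-in, not EveryDream's autoCaption notebook; the checkpoint, folder name, and generation settings are assumptions:

```python
# Sketch: bulk-caption a folder of images with BLIP, writing one sidecar
# .txt caption per image. Not EveryDream's autoCaption; model checkpoint
# and generation settings below are assumptions.
from pathlib import Path

from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image_dir = Path("instance_images")  # hypothetical folder of training images
for path in sorted(image_dir.glob("*.png")):
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    # max_length loosely plays the role of the notebook's "length" knob
    out = model.generate(**inputs, max_length=30)
    caption = processor.decode(out[0], skip_special_tokens=True)
    path.with_suffix(".txt").write_text(caption)  # sidecar caption file
```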
…On Wed, Dec 14, 2022, zylervega wrote:

> > Since we are not training the text encoder, make sure you name your files. In this case I'd put them all in separate folders, grouping like ones, then label them tag1_tag2_tag3.png.
>
> I can't manually name/tag 15,000 images; that's why I was hoping to use an automated process utilizing AI to tag them.
-
I have ~15,000 images.
What would you suggest I use for UNet_Training_Steps and Text_Encoder_Training_Steps?