I wanted to start a thread where we share our training configs. This can maybe help newcomers start more quickly, and also enable us to get feedback on our configs. I'll start by sharing my most recent training.
This is a rather large (rank=128) Flux LoRA that I'm training on a 48 GB card, using a dataset of approximately 20k images (split roughly evenly between the train and reg datasets).
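To make the setup above concrete for newcomers, here is a minimal sketch of what such a dataset config might look like in kohya-style sd-scripts TOML. This is an illustration only, not my actual config: the paths, repeats, batch size, and resolution are all assumptions, and the rank=128 part would be set separately on the trainer side (e.g. `network_dim = 128`).

```toml
# Hypothetical sketch of the setup described above.
# All paths and values are assumptions for illustration.
[general]
caption_extension = ".txt"

[[datasets]]
resolution = 1024
batch_size = 4

  [[datasets.subsets]]
  image_dir = "/data/train"   # ~10k training images (assumed path)
  num_repeats = 1

  [[datasets.subsets]]
  image_dir = "/data/reg"     # ~10k regularization images (assumed path)
  is_reg = true
  num_repeats = 1
```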
This works okay-ish: the foregrounds are good, but I get quality decay in the backgrounds. Some observations are:
I'm eager to get your critique and suggestions!