Description
First, thank you for making this work. I successfully trained a LoRA with 28 images; it took less than an hour for 80 epochs on a 5090. The LoRA is only okay, probably because of bad settings.
However, there was no config for this, so I copied a Wan config, which had block swap set to 16 and other Wan settings. It probably needs to be tweaked, but I'm not sure how. I will attach the config below.
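For context, my understanding is that block swap parks some of the transformer blocks in CPU RAM and pages them onto the GPU one at a time, trading speed for VRAM, so a value copied from a low-VRAM Wan config could slow things down on a 32 GB card. A conceptual sketch, not this repo's actual code (the function and parameter names are my assumptions):

```python
import torch
import torch.nn as nn

def forward_with_block_swap(blocks: nn.ModuleList, x: torch.Tensor,
                            blocks_to_swap: int = 16) -> torch.Tensor:
    """Hypothetical illustration of block swapping, not this repo's code.

    Blocks beyond `blocks_to_swap` are assumed to be resident on the GPU;
    the first `blocks_to_swap` start in CPU RAM and are paged in per step.
    """
    for i, block in enumerate(blocks):
        swapped = i < blocks_to_swap
        if swapped:
            block.to("cuda")   # page in: host-to-device copy every forward pass
        x = block(x)
        if swapped:
            block.to("cpu")    # page out: frees VRAM but costs transfer time
    return x
```

If that picture is right, setting the swap count to 0 (or removing the option) keeps everything on the GPU, which would explain why removing it increased memory usage.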
Then I went to train a second LoRA with images and videos. I forgot it was a Wan dataset (the clips are 5 seconds, i.e. 81 frames), so it dropped all the buckets, but after I raised my frame range up to 121 it started training. However, it took a full 30 minutes per epoch for 16 images and 55 videos, and it was using very little VRAM, so the card was not tapped out. I removed block swap, which increased usage some, but one epoch still took a crazy amount of time. Then I enabled shift 3 and it started training a bit faster, but the GPU still isn't close to maxed out (20 GB of 32 GB). I'm training with the 720p model, if that matters.
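The bucket dropping would make sense if the trainer only keeps a clip when some configured frame bucket covers its full length. This is purely a guess at the logic, with hypothetical names, not code from this repo:

```python
def assign_frame_bucket(num_frames: int, target_frames: list[int]) -> int | None:
    """Hypothetical: return the smallest bucket that covers the whole clip,
    or None to drop the clip. This would be consistent with 81-frame clips
    being dropped until the frame range reached 121."""
    covering = [t for t in target_frames if t >= num_frames]
    return min(covering) if covering else None

print(assign_frame_bucket(81, [25, 49]))        # None -> all clips dropped
print(assign_frame_bucket(81, [25, 49, 121]))   # 121  -> clips train again
```

If the trainer instead truncates clips to the largest bucket that fits, the drop would have a different cause, so treat this only as an illustration of why the 121 setting mattered.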
I'm training videos at 512 resolution. It would probably go faster if I changed to 256, but the GPU isn't close to topped out.
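Rough math on why 256 would still be faster even with VRAM headroom: halving the resolution quarters the token count per frame, and since attention cost grows roughly quadratically with sequence length, the per-step savings can be much larger than 4x. A back-of-envelope sketch; the 8x spatial VAE downsample, 4x temporal compression, and 2x2 patching are my assumptions about the architecture, so take the absolute numbers as illustrative only:

```python
# Back-of-envelope token count for one video clip.
def token_count(res: int, frames: int, vae_spatial: int = 8,
                vae_temporal: int = 4, patch: int = 2) -> int:
    side = res // vae_spatial // patch
    latent_frames = frames // vae_temporal + 1
    return side * side * latent_frames

t512 = token_count(512, 81)
t256 = token_count(256, 81)
print(t512, t256, t512 / t256)   # 4x more tokens per clip at 512
print((t512 / t256) ** 2)        # ~16x more attention FLOPs per clip
```

That would also explain the low reported memory: per-step compute, not VRAM occupancy, can be the bottleneck, so the card can be slow here while only showing 20 GB used.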
EDIT: For reference, I have trained Wan 2.2 LoRAs with both 800 images at 1500 resolution and 200 videos at 640 resolution, and that was still faster than this.
I had to convert the attached configs to text files.
I would love a suggestion on better settings; I'm sure I have them wrong. Thank you.
Current utilization:
