How to speed up cache_latents with SDXL finetuning? #908
a-l-e-x-d-s-9 started this conversation in General
I have 33k images for SDXL finetuning, running on an A40 with 45 GB of VRAM. cache_latents took 2 hours at an average of 3.7 it/s. I'm using vae_batch_size=0; when I tried 8 or 4, the it/s was even lower. Is there a way to speed up the process?
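For context, caching latents amounts to running every training image through the VAE encoder once, in batches of vae_batch_size, and storing the results so the training loop never has to touch the VAE again. A minimal sketch of such a loop, assuming a diffusers-style AutoencoderKL (illustrative names, not sd-scripts' actual code):

```python
import torch

@torch.no_grad()
def cache_latents(vae, dataloader, device="cuda"):
    """Encode each image batch once and keep the latents on the CPU.

    Illustrative sketch: `dataloader` yields (B, 3, H, W) tensors scaled
    to [-1, 1], and `vae` is a diffusers-style AutoencoderKL on `device`.
    """
    cached = []
    for images in dataloader:
        latents = vae.encode(images.to(device)).latent_dist.sample()
        cached.append(latents.to("cpu"))  # park latents off-GPU
    return cached
```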
Replies: 1 comment

I'm no great expert, but looking through the section of code that caches latents in train_util.py, there's a bit that says this:
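The lines being referred to are in all likelihood a per-batch CUDA cache flush of roughly this shape (a hedged reconstruction, not a verbatim quote of train_util.py):

```python
import torch

# Assumed shape of the two lines in question: emptying the CUDA
# allocator cache after every batch keeps peak VRAM low on small GPUs,
# but each call synchronizes the device, slowing the caching loop.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```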
Maybe comment out those two lines and see if you get a speed boost? Those lines are probably there to help people who don't have much VRAM, which may not be useful for your 45 GB card.