Modified scripts to train LoRA with < 10 GB VRAM #196
https://github.com/woct0rdho/ACE-Step

Basically I did 3 things:

It's a pretty big change, so for now I'm not sure how to make a PR.
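The three changes themselves are not listed in the post, so the sketch below does not attempt to reconstruct them. It only illustrates, as a minimal example under common assumptions, how a LoRA adapter is typically injected with the `peft` library and combined with bf16 autocast to keep VRAM low; the target module names and hyperparameters are placeholders, not values from the linked fork.

```python
# Illustrative only: a generic low-VRAM LoRA setup with the peft library.
# Target modules and hyperparameters are placeholders, not ACE-Step values.
import torch
from peft import LoraConfig, get_peft_model


def add_lora(model: torch.nn.Module) -> torch.nn.Module:
    config = LoraConfig(
        r=16,                # low-rank dimension of the adapter
        lora_alpha=32,       # scaling factor applied to the adapter output
        lora_dropout=0.05,
        target_modules=["to_q", "to_k", "to_v"],  # placeholder attention projections
    )
    model = get_peft_model(model, config)   # freezes the base weights
    model.print_trainable_parameters()      # only the small LoRA matrices train
    return model


def train_step(model, batch, optimizer):
    """One step in bf16 autocast; frozen base weights plus small adapters keep VRAM low."""
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = model(**batch).loss  # placeholder: assumes the model returns a loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Combined with the gradient checkpointing discussed in the replies below, this kind of setup is what typically brings LoRA fine-tuning within a consumer-GPU memory budget.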
-
@ChuxiJ In your code you've written something about gradient checkpointing, but it's not enabled in the end. Is there a reason you didn't enable it?
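For context, gradient checkpointing (activation checkpointing) trades compute for memory: intermediate activations are recomputed during the backward pass instead of being stored for the whole forward pass. Below is a minimal sketch of how it is commonly enabled in a PyTorch training script; the wrapper class and flag are illustrative, not taken from the ACE-Step code.

```python
# Minimal sketch (not the ACE-Step implementation): wrap a stack of
# transformer blocks so their activations are recomputed in the backward pass.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class CheckpointedBlocks(nn.Module):
    def __init__(self, blocks: nn.ModuleList, use_checkpointing: bool = True):
        super().__init__()
        self.blocks = blocks
        self.use_checkpointing = use_checkpointing  # the flag the question is about

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            if self.use_checkpointing and self.training:
                # Recompute this block's activations during backward instead of
                # storing them; use_reentrant=False is the recommended variant.
                x = checkpoint(block, x, use_reentrant=False)
            else:
                x = block(x)
        return x
```

The cost is roughly one extra forward pass per training step, in exchange for a large reduction in activation memory.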
-
Thank you very much for your contribution to the community. My environment setup differs significantly from these consumer-GPU LoRA training experiments, so your changes may not be directly reusable on my end. I've found the gradient checkpointing setup particularly useful; thank you for proposing it. If convenient, please submit a PR; feel free to ask an LLM for guidance on how to do so.
-
OK, I've made a PR for gradient checkpointing; see #197.