Finetuning Models/Multi-round Training Best Practices #8342
Unanswered
ardywibowo asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 0
Hello,
I am just starting to look at PyTorch Lightning for my next project, and I am thinking about cases where I may need to fine-tune my model, or use it differently, between rounds of training.
For example, I may want to fine-tune on a different dataset after training on an initial one, enable quantization-aware training after initially training in full precision, or add new modules to my model after an initial training round. A sketch of the first case is below.
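To make the first case concrete, here is a minimal two-round sketch of the kind of workflow I mean. The model, data, and checkpoint names are placeholders I made up for illustration; the Lightning calls (`load_from_checkpoint`, `Trainer.fit`, `save_checkpoint`) are what I understand to be the standard API in recent versions (older releases used `train_dataloader` instead of `train_dataloaders`):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitRegressor(pl.LightningModule):
    """Toy model standing in for whatever I actually train."""

    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)


def make_loader(n: int) -> DataLoader:
    # Synthetic data; each round would really use its own dataset.
    x = torch.randn(n, 16)
    return DataLoader(TensorDataset(x, x.sum(dim=1, keepdim=True)), batch_size=32)


# Round 1: train from scratch and save a checkpoint.
model = LitRegressor()
trainer = pl.Trainer(max_epochs=1)
trainer.fit(model, train_dataloaders=make_loader(1024))
trainer.save_checkpoint("round1.ckpt")

# Round 2: reload the weights and fine-tune on different data,
# overriding the saved learning rate with a smaller one.
finetuned = LitRegressor.load_from_checkpoint("round1.ckpt", lr=1e-4)
pl.Trainer(max_epochs=1).fit(finetuned, train_dataloaders=make_loader(256))
```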
I typically handle this by creating a new script for each case, but I was wondering whether PyTorch Lightning offers best practices that would make these workflows easier. The main issue I run into is organization: it is sometimes hard to keep track of which model checkpoints feed into which script (and which version of the script). The add-modules case, for instance, currently looks roughly like the sketch below.
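Continuing the sketch above, this is roughly how I handle adding a module between rounds today. `LitRegressorV2` and `extra_head` are hypothetical names, and `strict=False` is, as I understand it, the documented `load_from_checkpoint` flag for loading a checkpoint whose state dict does not fully match the model:

```python
class LitRegressorV2(LitRegressor):
    """Round-2 variant with a module that did not exist in round 1."""

    def __init__(self, lr: float = 1e-3):
        super().__init__(lr=lr)
        self.extra_head = nn.Linear(1, 1)  # absent from the round-1 checkpoint

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.extra_head(self.net(x)), y)
        self.log("train_loss", loss)
        return loss


# strict=False lets the round-1 weights load even though extra_head has no
# entry in the checkpoint; extra_head keeps its fresh initialization.
model_v2 = LitRegressorV2.load_from_checkpoint("round1.ckpt", strict=False)
pl.Trainer(max_epochs=1).fit(model_v2, train_dataloaders=make_loader(256))
```

This works, but it means a second script and a second model class per experiment, which is exactly the bookkeeping problem I described.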
Thanks so much!