Closed
Labels: callback · feature (Is an improvement or enhancement) · help wanted (Open to be worked on) · priority: 1 (Medium priority task)
Description
🐛 Bug
When using the QuantizationAwareTraining() callback, one cannot call Trainer.fit twice.
To Reproduce
Calling Trainer.fit twice on any model raises an exception (which is pretty opaque, I might add).
Expected behavior
Resumes training
Additional context
I imagine there is a tradeoff here between two goals:
1. Have fit return a model that's ready for inference.
2. Don't catapult yourself irreversibly out of the option to continue training.
With Quantization Aware Training's conversion step at the end (moving from fake quantization for QAT to quantized layers), we have to choose one. Currently, the QAT hook converts at the end of fit, so it satisfies goal 1 but not goal 2.
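A minimal sketch of why the conversion step is one-way, using plain torch quantization rather than the Lightning callback (the tiny model, shapes, and the `fbgemm` backend here are illustrative assumptions, not taken from the issue):

```python
import torch
import torch.nn as nn
import torch.quantization as tq

# Hypothetical minimal network, standing in for any LightningModule's model.
model = nn.Sequential(tq.QuantStub(), nn.Linear(4, 2), tq.DeQuantStub())
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")  # assumed x86 backend
tq.prepare_qat(model, inplace=True)

# During fit, fake-quantized training steps work normally.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(8, 4)).sum()
loss.backward()
opt.step()

# The conversion step at the end of fit swaps fake-quant layers for real
# quantized ones, whose int8 weights are packed buffers, not nn.Parameters.
tq.convert(model.eval(), inplace=True)
trainable = list(model.parameters())
# trainable is now empty, so a second fit has nothing left to optimize.
```

After `convert`, there are no float parameters left to feed an optimizer, which is why a second Trainer.fit cannot simply resume.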
To my mind, the conversion step is "preparing for export / inference" rather than part of training, so I would suggest dropping it from the fitting part.
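One way the suggestion could look in practice, sketched with plain torch quantization under the same illustrative assumptions as above: convert a copy for export, and leave the fake-quant model intact so training can resume.

```python
import copy
import torch
import torch.nn as nn
import torch.quantization as tq

# Hypothetical minimal network prepared for QAT.
model = nn.Sequential(tq.QuantStub(), nn.Linear(4, 2), tq.DeQuantStub())
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")  # assumed x86 backend
tq.prepare_qat(model, inplace=True)

# ... a first round of QAT (Trainer.fit) would happen here ...

# Convert a deep copy for export / inference; the original keeps its
# fake-quant modules and float parameters.
inference_model = tq.convert(copy.deepcopy(model).eval(), inplace=False)

# The original model is still trainable, so a second fit could resume.
still_trainable = any(p.requires_grad for p in model.parameters())
```

This keeps goal 1 (an inference-ready model is available) without giving up goal 2 (the fake-quantized model remains trainable).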