Stop training if high enough accuracy isn’t reached #7305
-
Hi, I know that there is EarlyStopping for when validation metrics deteriorate. But I was wondering whether it's possible to stop training if, after say epoch 10, the accuracy hasn't reached say 20%. If such a callback doesn't exist, any thoughts on how I could get started implementing one? For context, I'm running a distributed hyper-parameter optimizer and I know that a "good" hyper-parameter set will get me to 50% accuracy by epoch 5.
Replies: 1 comment 1 reply
-
You could write a callback similar to early stopping that checks your metric against the target value at whatever epoch you choose. If the metric isn't good enough, you can signal the trainer to stop, like this: https://github.com/PyTorchLightning/pytorch-lightning/blob/490cc57809ebeba19003b4101393a8a058217c31/pytorch_lightning/callbacks/early_stopping.py#L194-L196
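Here's a minimal sketch of such a callback. It assumes your LightningModule logs a validation metric under a key like `val_acc`; the `monitor`, `target`, and `by_epoch` names are illustrative, not from any library. It stops the run the same way the linked EarlyStopping lines do, by setting `trainer.should_stop`:

```python
import pytorch_lightning as pl


class MinimumAccuracyStopping(pl.Callback):
    """Stop training if `monitor` hasn't reached `target` by epoch `by_epoch`.

    All parameter names here are hypothetical; adjust `monitor` to whatever
    key your LightningModule logs (e.g. via `self.log("val_acc", ...)`).
    """

    def __init__(self, monitor: str = "val_acc", target: float = 0.5, by_epoch: int = 5):
        self.monitor = monitor
        self.target = target
        self.by_epoch = by_epoch

    def on_validation_end(self, trainer, pl_module):
        # Don't judge the run before the cutoff epoch.
        if trainer.current_epoch < self.by_epoch:
            return
        current = trainer.callback_metrics.get(self.monitor)
        # If the metric is logged and still below target, ask the Trainer
        # to stop -- the same mechanism EarlyStopping uses.
        if current is not None and current < self.target:
            trainer.should_stop = True
```

Then pass it to the Trainer alongside your other callbacks, e.g. `pl.Trainer(callbacks=[MinimumAccuracyStopping(monitor="val_acc", target=0.5, by_epoch=5)])`.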