monitoring metric and model saving #13633
In PyTorch Lightning, we often log the monitored metric at the per-batch level inside validation_step. Does this mean that the model parameters we save are optimal for the current batch rather than for the whole validation set? And does it mean that, in order to get the best model on the validation set, I have to monitor the metric in validation_epoch_end?
Replies: 1 comment 1 reply
f1, acc = self.share_val_step(batch)
metrics = {'val_f1': f1, 'val_acc': acc}
self.log_dict(metrics, prog_bar=True, logger=True, on_epoch=True)
Here you are using on_epoch=True, which means the metric will be aggregated across the whole validation set at the end of each validation epoch, and that aggregated value will be used to monitor the checkpoints if you set ModelCheckpoint(..., monitor='val_acc').
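For completeness, here is a minimal sketch of that setup. The module, layer sizes, and callback settings below are illustrative assumptions, not code from the thread; it only shows how a per-batch accuracy logged with on_epoch=True inside validation_step ends up driving ModelCheckpoint on the epoch-aggregated value.

```python
import torch
from torch import nn
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint


class LitClassifier(pl.LightningModule):
    """Toy classifier used only to illustrate metric logging and checkpointing."""

    def __init__(self):
        super().__init__()
        self.model = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.model(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        acc = (self.model(x).argmax(dim=-1) == y).float().mean()
        # on_epoch=True: Lightning accumulates the per-batch values and logs
        # their average once per validation epoch under the key 'val_acc'.
        self.log("val_acc", acc, prog_bar=True, on_epoch=True)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# The callback monitors the epoch-level 'val_acc', so the saved weights
# correspond to the best score over the whole validation set, not to a
# single validation batch.
checkpoint_cb = ModelCheckpoint(monitor="val_acc", mode="max", save_top_k=1)
trainer = pl.Trainer(max_epochs=5, callbacks=[checkpoint_cb])
# trainer.fit(LitClassifier(), train_dataloaders=..., val_dataloaders=...)
```

With this setup the checkpoint is selected by the mean val_acc over the whole validation set, so there is no need to move the computation into validation_epoch_end as long as the metric is logged with on_epoch=True.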