fix(callbacks): Defer step/time-triggered ModelCheckpoint saves until validation metrics are available #21106
Conversation
Codecov Report

❌ Patch coverage is …. Additional details and impacted files:

@@           Coverage Diff            @@
##           master   #21106    +/-   ##
========================================
  Coverage      87%      87%
========================================
  Files         269      269
  Lines       23520    23545     +25
========================================
+ Hits        20508    20542     +34
+ Misses       3012     3003      -9
@Borda, I cannot see the error in the two failed jobs. Can you help me, or point me to what the error is?
These tests are optional for now, as the same tests are also failing on master.
fix(callbacks): defer step/time-triggered ModelCheckpoint saves until validation metrics are available

Root cause:
- With `every_n_train_steps` (or `train_time_interval`), checkpoints could save at train batch end before validation ran, so the monitored val metric was missing/stale and `best_model_score` was incorrect. (Refs Lightning-AI#20919)

Fix:
- In [src/lightning/pytorch/callbacks/model_checkpoint.py:ModelCheckpoint.on_train_batch_end]:
  - Defer saves when the monitored key is missing from [trainer.callback_metrics].
  - If on the last train batch and not saving at train-epoch-end, defer only when validation will run next:
    - `trainer.enable_validation` is True
    - `trainer.num_val_batches` > 0
    - the `trainer.check_val_every_n_epoch` schedule matches the upcoming epoch
- Perform deferred saves in [on_validation_end], ensuring fresh validation metrics are used.
- Allow zero `timedelta` for `train_time_interval` and broadcast the time-trigger decision across ranks.
- Do not defer when monitoring a train metric or when no validation is scheduled.

Tests:
- Repro (previously failing, now passing):
  - [tests/tests_pytorch/callbacks/test_model_checkpoint_step_interval_val_metric.py]
- Additional validations:
  - [tests/tests_pytorch/callbacks/test_model_checkpoint_additional_cases.py]
  - [tests/tests_pytorch/callbacks/test_model_checkpoint_edge_cases.py]

Outcome:
- `best_model_score` matches the validation metric after the epoch.
- Step/time-interval checkpointing behaves correctly without premature or skipped saves.
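For orientation, here is a stripped-down sketch of the defer-and-flush pattern the commit describes. It is illustrative only, not the actual `ModelCheckpoint` implementation: the `DeferredStepCheckpoint` class and the `_save_deferred` flag are made up for this example.

```python
from lightning.pytorch.callbacks import Callback


class DeferredStepCheckpoint(Callback):
    """Illustration only: postpone a step-triggered save until the monitored
    validation metric exists in ``trainer.callback_metrics``."""

    def __init__(self, monitor: str, every_n_train_steps: int):
        self.monitor = monitor
        self.every_n_train_steps = every_n_train_steps
        self._save_deferred = False  # hypothetical flag, not part of ModelCheckpoint's API

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        step_triggered = (
            self.every_n_train_steps > 0
            and trainer.global_step % self.every_n_train_steps == 0
        )
        if not step_triggered:
            return
        if self.monitor not in trainer.callback_metrics:
            # The monitored val metric has not been produced yet -> defer the save.
            self._save_deferred = True
        else:
            self._save(trainer)

    def on_validation_end(self, trainer, pl_module):
        if self._save_deferred and self.monitor in trainer.callback_metrics:
            # Validation just finished, so the metric is fresh: flush the deferred save.
            self._save_deferred = False
            self._save(trainer)

    def _save(self, trainer):
        trainer.save_checkpoint(f"step-{trainer.global_step}.ckpt")
```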
094b278 to 6c1554a
fix(callbacks): Defer step/time-triggered ModelCheckpoint saves until validation metrics are available (#21106)

* fix(callbacks): defer step/time-triggered ModelCheckpoint saves until validation metrics are available

  Root cause:
  - With `every_n_train_steps` (or `train_time_interval`), checkpoints could save at train batch end before validation ran, so the monitored val metric was missing/stale and `best_model_score` was incorrect. (Refs #20919)

  Fix:
  - In [src/lightning/pytorch/callbacks/model_checkpoint.py:ModelCheckpoint.on_train_batch_end]:
    - Defer saves when the monitored key is missing from [trainer.callback_metrics].
    - If on the last train batch and not saving at train-epoch-end, defer only when validation will run next:
      - `trainer.enable_validation` is True
      - `trainer.num_val_batches` > 0
      - the `trainer.check_val_every_n_epoch` schedule matches the upcoming epoch
  - Perform deferred saves in [on_validation_end], ensuring fresh validation metrics are used.
  - Allow zero `timedelta` for `train_time_interval` and broadcast the time-trigger decision across ranks.
  - Do not defer when monitoring a train metric or when no validation is scheduled.

  Tests:
  - Repro (previously failing, now passing):
    - [tests/tests_pytorch/callbacks/test_model_checkpoint_step_interval_val_metric.py]
  - Additional validations:
    - [tests/tests_pytorch/callbacks/test_model_checkpoint_additional_cases.py]
    - [tests/tests_pytorch/callbacks/test_model_checkpoint_edge_cases.py]

  Outcome:
  - `best_model_score` matches the validation metric after the epoch.
  - Step/time-interval checkpointing behaves correctly without premature or skipped saves.

* test: disable logger in model checkpoint tests to avoid side effects
* chlog

---------

Co-authored-by: Jirka B <[email protected]>
(cherry picked from commit b1cc925)
Defer step/time-triggered ModelCheckpoint saves until validation metrics are available
Fixes #20919
Root cause

With `every_n_train_steps` (or `train_time_interval`), checkpoints could save at train-batch end before validation ran. The monitored validation metric was missing/stale, so `best_model_score` could be incorrect.
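For reference, a minimal configuration that hits this path might look like the following. This is a sketch only; the model and datamodule are placeholders, and it assumes the LightningModule logs `"val_loss"` in its validation step.

```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ModelCheckpoint

# Step-triggered checkpointing that monitors a *validation* metric.
# Before the fix, the save at step 50 could fire at train-batch end,
# before any validation loop had produced "val_loss".
checkpoint = ModelCheckpoint(
    monitor="val_loss",      # assumes the model logs "val_loss" during validation
    mode="min",
    every_n_train_steps=50,
    save_top_k=1,
)

trainer = Trainer(
    max_epochs=2,
    callbacks=[checkpoint],
)
# trainer.fit(model, datamodule=dm)  # `model` and `dm` are placeholders
```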
Fix

- In `ModelCheckpoint.on_train_batch_end`, defer the save when the monitored key is missing from `trainer.callback_metrics`, and perform the deferred save in `on_validation_end` once fresh validation metrics are available.
- On the last train batch, defer only when validation will run next (see the sketch after this list):
  - `trainer.enable_validation` is True
  - `trainer.num_val_batches` > 0
  - the `trainer.check_val_every_n_epoch` schedule matches the upcoming epoch
- Allow a zero `timedelta` for `train_time_interval` and broadcast the time-trigger decision across ranks via `trainer.strategy.broadcast`.
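A rough sketch of that "will validation run next" gate, under the assumption that the Trainer properties named above behave as described; the helper name `validation_runs_next` is made up for illustration and is not the real internal check.

```python
def validation_runs_next(trainer) -> bool:
    """Sketch: decide whether a validation loop will run right after this train batch."""
    if not trainer.enable_validation:
        return False

    # num_val_batches may be a list with one entry per validation dataloader.
    num_val_batches = trainer.num_val_batches
    if isinstance(num_val_batches, (list, tuple)):
        has_val_batches = any(n > 0 for n in num_val_batches)
    else:
        has_val_batches = num_val_batches > 0
    if not has_val_batches:
        return False

    # check_val_every_n_epoch=None means validation is scheduled by steps, not epochs.
    every_n = trainer.check_val_every_n_epoch
    return every_n is None or (trainer.current_epoch + 1) % every_n == 0
```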
Tests

- Repro (previously failing, now passing): tests/tests_pytorch/callbacks/test_model_checkpoint_step_interval_val_metric.py
- Additional coverage: tests/tests_pytorch/callbacks/test_model_checkpoint_additional_cases.py and tests/tests_pytorch/callbacks/test_model_checkpoint_edge_cases.py
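A condensed sketch of what the repro test exercises, assuming a `BoringModel` subclass that logs `"val_loss"`; the class and test names here are illustrative, not the actual test code.

```python
import torch
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ModelCheckpoint
from lightning.pytorch.demos.boring_classes import BoringModel


class ValLossModel(BoringModel):
    """BoringModel variant that logs a validation metric named "val_loss"."""

    def validation_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = torch.nn.functional.mse_loss(output, torch.zeros_like(output))
        self.log("val_loss", loss)
        return loss


def test_step_interval_checkpoint_uses_fresh_val_metric(tmp_path):
    ckpt = ModelCheckpoint(dirpath=tmp_path, monitor="val_loss", every_n_train_steps=2)
    trainer = Trainer(
        default_root_dir=tmp_path,
        max_epochs=1,
        limit_train_batches=4,
        limit_val_batches=2,
        logger=False,  # mirrors the "disable logger" note in the commit
        enable_progress_bar=False,
        callbacks=[ckpt],
    )
    trainer.fit(ValLossModel())

    # The deferred save must use the real validation metric, so best_model_score
    # should be populated and finite rather than missing or stale.
    assert ckpt.best_model_score is not None
    assert torch.isfinite(ckpt.best_model_score)
```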
Outcome

- `best_model_score` matches the latest validation metric.

📚 Documentation preview 📚: https://pytorch-lightning--21106.org.readthedocs.build/en/21106/