Validation loss: Eval frequency, Eval Loss tracked, Train-Eval gap monitored #565
Open
Sualeh77 wants to merge 3 commits into refactor/consolidation from validation_loss-eval_frequency
- Eval loss tracked only at a predefined frequency
- Train-eval gap monitored (overfitting detection)
Description
Related Tasks
Summary
This PR implements mid-training evaluation and overfitting detection in the distributed pre-training pipeline.
Prior state: pretrainer.py already ran a validation pass at the end of every epoch and logged val_loss / val_perplexity. However, there was no intra-epoch evaluation triggered at a configurable step cadence, no smoothed comparison between training and validation loss, and no automated mechanism to detect when the model was overfitting.

Changes made:
llm/src/llm/config.py
- Added overfit_patience: int = 5 to TrainingConfig — number of consecutive evaluations without improvement before an overfitting alert is raised.
- Added overfit_threshold: float = 0.0 (configure during training) to TrainingConfig — minimum required decrease in val_loss to count as an improvement and reset the counter. Both fields are sketched below.
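For orientation, a minimal sketch of how these fields could sit in TrainingConfig. The dataclass framing and the eval_interval default shown here are assumptions; only the two new fields and their defaults come from this PR:

```python
from dataclasses import dataclass

@dataclass
class TrainingConfig:
    # ...existing fields elided...
    eval_interval: int = 500        # existing knob: steps between mid-training evals (default assumed)
    overfit_patience: int = 5       # consecutive non-improving evals before an alert (this PR)
    overfit_threshold: float = 0.0  # min val_loss decrease that counts as improvement (this PR)
```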
llm/src/llm/pretrainer.py
- New tracking state (_train_loss_accum, _train_loss_count, _best_val_loss, _overfit_strikes) initialised in __init__.
- Training loss is accumulated over each eval_interval window and averaged at evaluation time, eliminating batch-level noise from the gap signal.
- train_eval_gap = val_loss − smoothed_train_loss is computed and logged at every eval trigger point.
- The strike counter resets to 0 when val_loss improves beyond overfit_threshold; it increments by 1 otherwise.
- Once overfit_strikes >= overfit_patience, an overfitting_detected: True event is sent to the observability backend, which can be consumed by the Watchdog to pause training.
- Applied dist.all_reduce(SUM) / world_size to both the avg_loss and avg_perplexity tensors, ensuring the reported validation metrics are true global averages across all GPUs — not just the rank-0 shard.
- All new bookkeeping (+=, comparisons, averaging) happens on already-available CPU scalars. No new GPU synchronisations are introduced in the per-step hot path. A condensed sketch of this logic follows the list.
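The bullets above, rendered as a self-contained sketch. OverfitMonitor is an illustrative stand-in (the PR keeps this state directly on PreTrainer), but the attribute names, the smoothing window, the reset rule, and the event payload follow the list:

```python
import math

class OverfitMonitor:
    """Illustrative stand-in for the bookkeeping PreTrainer keeps (names from the PR)."""

    def __init__(self, overfit_patience: int = 5, overfit_threshold: float = 0.0):
        self.overfit_patience = overfit_patience
        self.overfit_threshold = overfit_threshold
        self._train_loss_accum = 0.0   # summed per-step train losses (plain CPU floats)
        self._train_loss_count = 0
        self._best_val_loss = math.inf
        self._overfit_strikes = 0

    def on_train_step(self, loss: float) -> None:
        # Accumulate over the eval_interval window; no GPU sync in the hot path.
        self._train_loss_accum += loss
        self._train_loss_count += 1

    def on_eval(self, val_loss: float) -> dict:
        # Smooth the train loss over the window, then reset the accumulators.
        smoothed = self._train_loss_accum / max(self._train_loss_count, 1)
        self._train_loss_accum, self._train_loss_count = 0.0, 0

        gap = val_loss - smoothed  # train_eval_gap, logged at every eval trigger

        # Reset strikes on a sufficient improvement; otherwise add a strike.
        if val_loss < self._best_val_loss - self.overfit_threshold:
            self._best_val_loss = val_loss
            self._overfit_strikes = 0
        else:
            self._overfit_strikes += 1

        return {
            "train_eval_gap": gap,
            "overfitting_detected": self._overfit_strikes >= self.overfit_patience,
        }
```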
llm/tests/test_overfit_detection_integration.py (new file)
End-to-end integration test using gpt2 on wikitext-2-raw-v1 via the project's existing get_dataloaders() factory. Trains for 20 steps with eval_interval=5 entirely on CPU (no DeepSpeed required), exercising the exact same logic path as PreTrainer.run(). A rough sketch of the test's shape follows.
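A rough shape of such a test. The constructor and run() signatures here are hypothetical; only gpt2, wikitext-2-raw-v1, eval_interval=5, the 20-step budget, and the get_dataloaders() factory come from the PR:

```python
import math

def test_overfit_detection_runs_on_cpu():
    # Signatures are illustrative; the real test wires these up via the
    # project's get_dataloaders() factory with gpt2 on wikitext-2-raw-v1.
    config = TrainingConfig(eval_interval=5, overfit_patience=2)
    train_dl, val_dl = get_dataloaders(config)        # hypothetical signature
    trainer = PreTrainer(config, train_dl, val_dl)    # hypothetical signature

    trainer.run(max_steps=20)  # eval triggers at steps 5, 10, 15, 20

    # At least one eval ran, so the best-val tracker moved off its sentinel.
    assert trainer._best_val_loss < math.inf
```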
Reviewers should focus on:
- The all_reduce block — verify the averaging is correct for your world-size assumptions (the pattern is sketched below).
- The overfit_patience / overfit_threshold defaults — adjust to match the team's operational preferences.
- Whether the overfitting_detected event should be wired to trigger PAUSE via watchdog.py.
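For reference, a sketch of the averaging pattern that first bullet refers to, assuming torch.distributed is already initialised:

```python
import torch
import torch.distributed as dist

def global_mean(metric: torch.Tensor) -> torch.Tensor:
    """SUM all_reduce followed by division by world size: every rank ends up
    with the same global average (applied to avg_loss and avg_perplexity)."""
    reduced = metric.clone()
    dist.all_reduce(reduced, op=dist.ReduceOp.SUM)  # in-place sum across ranks
    return reduced / dist.get_world_size()
```

Note that averaging per-rank means this way yields the true global mean only when every rank processes the same number of validation samples; that equal-shard assumption is worth confirming as part of this review item.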
Checklist
- [x] Tests pass (11 passed in ~2m 21s on Mac CPU).
- [x] Branch named validation_loss-eval_frequency.
- [x] New identifiers (train_eval_gap, overfit_patience) follow the existing snake_case style in TrainingConfig and PreTrainer.