Conversation
Signed-off-by: oliver könig <okoenig@nvidia.com>
📝 Walkthrough

A single-file modification that adds upfront NaN/Inf gradient-norm validation during convergence checks with improved error reporting, removes an early-return path for NaN/Inf detection, and adjusts missing golden-value error message handling to append rather than replace.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: 3 passed, 1 failed (warning)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@scripts/performance/utils/evaluate.py`:
- Around line 579-584: The grad-norm NaN/Inf check appends to error_msg but
doesn't affect the overall validation result; introduce a boolean flag (e.g.,
grad_norm_result or has_grad_norm_failure) set to False when has_nan_grad_norm
or has_inf_grad_norm is true (and True otherwise), and then include this flag in
the final aggregation that computes has_validation_failures alongside
convergence_result, performance_result, and memory_result so that a grad-norm
failure flips the overall status; update any text in error_msg accordingly and
ensure the final return uses the updated has_validation_failures.
🧹 Nitpick comments (1)
scripts/performance/utils/evaluate.py (1)
579-584: Consider handling the case where no grad norms were parsed.

If `current_grad_norm` is an empty dict (e.g., the log format changed or grad norms weren't logged), the `any()` calls silently return `False` and no warning is emitted. A missing grad norm might itself be worth flagging.
```python
# check for grad norm
has_nan_grad_norm = any(math.isnan(current_grad_norm[step]) for step in current_grad_norm)
has_inf_grad_norm = any(math.isinf(current_grad_norm[step]) for step in current_grad_norm)
if has_nan_grad_norm or has_inf_grad_norm:
    error_msg += "Grad norm check failed. Found NaN or Inf in grad norm.\n"
    error_msg += f"Grad norm values: {current_grad_norm}\n"
```
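One way to address the nitpick is to treat an empty dict as its own failure mode before running the NaN/Inf checks. A minimal sketch, assuming the grad norms arrive as a `{step: value}` dict; the helper name `check_grad_norms` and its return shape are hypothetical, not part of the reviewed file:

```python
import math


def check_grad_norms(current_grad_norm: dict) -> tuple[bool, str]:
    """Return (passed, error_msg); flags empty input as well as NaN/Inf values."""
    if not current_grad_norm:
        # Empty dict means parsing found nothing -- surface that instead of silently passing.
        return False, "Grad norm check failed. No grad norms were parsed from the log.\n"
    has_nan = any(math.isnan(v) for v in current_grad_norm.values())
    has_inf = any(math.isinf(v) for v in current_grad_norm.values())
    if has_nan or has_inf:
        return False, (
            "Grad norm check failed. Found NaN or Inf in grad norm.\n"
            f"Grad norm values: {current_grad_norm}\n"
        )
    return True, ""
```

With this shape, the empty-dict case produces an explicit message rather than the silent `False` the reviewer warns about.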
NaN/Inf grad norm detection doesn't propagate to the failure status.
The new check appends to `error_msg` but never influences `has_validation_failures` (line 694), which only considers `convergence_result`, `performance_result`, and `memory_result`. If grad norms are NaN/Inf but the other checks happen to pass, the function returns `(True, error_msg)` — signaling success despite the NaN grad norm error.
Proposed fix

```diff
+ has_nan_or_inf_grad_norm = False
  # check for grad norm
  has_nan_grad_norm = any(math.isnan(current_grad_norm[step]) for step in current_grad_norm)
  has_inf_grad_norm = any(math.isinf(current_grad_norm[step]) for step in current_grad_norm)
  if has_nan_grad_norm or has_inf_grad_norm:
+     has_nan_or_inf_grad_norm = True
      error_msg += "Grad norm check failed. Found NaN or Inf in grad norm.\n"
      error_msg += f"Grad norm values: {current_grad_norm}\n"
```

Then at line 694:

```diff
  has_validation_failures = not convergence_result["passed"] or not performance_result["passed"]
+ has_validation_failures = has_validation_failures or has_nan_or_inf_grad_norm
  if not memory_metrics_missing:
      has_validation_failures = has_validation_failures or not memory_result["passed"]
```

Also applies to: 694-697
What does this PR do?
Fixes an edge case where NaN grads aren't reported when golden values aren't present yet.
Changelog
GitHub Actions CI
See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.
Before your PR is "Ready for review"
Pre checks:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Additional Information
Summary by CodeRabbit