fix: Check NaN grads first #2272

Merged
ko3n1g merged 2 commits into main from ko3n1g/fix/nan-grad-norm-check on Feb 9, 2026

Conversation

@ko3n1g (Contributor) commented Feb 7, 2026

What does this PR do?

Fixes an edge case where NaN grads aren't reported when golden values aren't present yet.
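
For context, a minimal sketch of the intended ordering, assuming a simplified convergence-check function (current_grad_norm and error_msg match the snippet quoted in the review below; everything else here is illustrative, not the actual code in scripts/performance/utils/evaluate.py):

import math

def sketch_convergence_check(current_grad_norm: dict, golden_values: dict | None) -> str:
    """Illustrative only: validate grad norms before the golden-value comparison."""
    error_msg = ""
    # Grad norms are checked first, so NaN/Inf is reported even when
    # golden values are not available yet.
    if any(math.isnan(v) or math.isinf(v) for v in current_grad_norm.values()):
        error_msg += "Grad norm check failed. Found NaN or Inf in grad norm.\n"
        error_msg += f"Grad norm values: {current_grad_norm}\n"
    if not golden_values:
        error_msg += "Missing golden values for this config.\n"
        return error_msg
    # ... compare current metrics against golden values here ...
    return error_msg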

Changelog

  • Add specific line by line info of high level changes in this PR.

GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

If you haven't finished some of the above items, you can still open a "Draft" PR.

Additional Information

  • Related to # (issue)

Summary by CodeRabbit

  • Bug Fixes
    • Improved gradient norm validation in performance checks with enhanced error reporting.
    • Fixed error message handling to preserve diagnostic information during validation failures.

Signed-off-by: oliver könig <okoenig@nvidia.com>
coderabbitai bot commented Feb 7, 2026

📝 Walkthrough

A single-file change that adds upfront NaN/Inf gradient-norm validation at the start of the convergence check with improved error reporting, removes the early-return path for NaN/Inf detection, and adjusts the missing-golden-value error message handling to append rather than replace.

Changes

Cohort: Gradient Norm Validation
File(s): scripts/performance/utils/evaluate.py
Summary: Adds centralized NaN/Inf validation at the start of the convergence check with comprehensive error reporting; removes the in-loop early return for NaN/Inf; changes missing-golden-value error handling to append to, rather than replace, error messages.
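
To make the "append rather than replace" part concrete, a minimal sketch (error_msg matches the snippet quoted in the review below; the missing-golden-value message text and the example grad norm values are assumptions for illustration):

current_grad_norm = {10: float("nan")}  # hypothetical parsed values
error_msg = "Grad norm check failed. Found NaN or Inf in grad norm.\n"
error_msg += f"Grad norm values: {current_grad_norm}\n"

# Replacing here would discard the grad-norm diagnostics collected above:
#     error_msg = "Missing golden values for this config.\n"
# Appending preserves them, which is what this change switches to:
error_msg += "Missing golden values for this config.\n"
print(error_msg)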

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested labels

r0.3.0

Suggested reviewers

  • erhoo82
🚥 Pre-merge checks | ✅ 3 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Test Results For Major Changes: ⚠️ Warning. The PR modifies critical gradient-norm validation logic affecting convergence checking without demonstrated test results, regression testing, or validation data. Resolution: update the PR description with comprehensive test results, regression-testing data, and specific test cases validating NaN/Inf detection and failure-status propagation.

✅ Passed checks (3 passed)
  • Description Check: Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: Passed. The title "fix: Check NaN grads first" directly addresses the main change: adding NaN/Inf gradient-norm validation at the start of convergence checks and reordering the validation logic.
  • Docstring Coverage: Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@scripts/performance/utils/evaluate.py`:
- Around line 579-584: The grad-norm NaN/Inf check appends to error_msg but
doesn't affect the overall validation result; introduce a boolean flag (e.g.,
grad_norm_result or has_grad_norm_failure) set to False when has_nan_grad_norm
or has_inf_grad_norm is true (and True otherwise), and then include this flag in
the final aggregation that computes has_validation_failures alongside
convergence_result, performance_result, and memory_result so that a grad-norm
failure flips the overall status; update any text in error_msg accordingly and
ensure the final return uses the updated has_validation_failures.
🧹 Nitpick comments (1)
scripts/performance/utils/evaluate.py (1)

579-584: Consider handling the case where no grad norms were parsed.

If current_grad_norm is an empty dict (e.g., the log format changed or grad norms weren't logged), the any() calls silently return False and no warning is emitted. A missing grad norm might itself be worth flagging.
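
One possible way to act on this nitpick, as a sketch (the helper name and return shape are assumptions for illustration, not the file's actual structure):

import math

def grad_norm_issues(current_grad_norm: dict) -> str:
    """Illustrative only: also flag the case where no grad norms were parsed."""
    if not current_grad_norm:
        # An empty dict would otherwise pass both any() checks silently.
        return "Grad norm check skipped: no grad norm values were parsed from the log.\n"
    if any(math.isnan(v) or math.isinf(v) for v in current_grad_norm.values()):
        return (
            "Grad norm check failed. Found NaN or Inf in grad norm.\n"
            f"Grad norm values: {current_grad_norm}\n"
        )
    return ""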

Comment on lines +579 to +584
    # check for grad norm
    has_nan_grad_norm = any(math.isnan(current_grad_norm[step]) for step in current_grad_norm)
    has_inf_grad_norm = any(math.isinf(current_grad_norm[step]) for step in current_grad_norm)
    if has_nan_grad_norm or has_inf_grad_norm:
        error_msg += "Grad norm check failed. Found NaN or Inf in grad norm.\n"
        error_msg += f"Grad norm values: {current_grad_norm}\n"

⚠️ Potential issue | 🔴 Critical

NaN/Inf grad norm detection doesn't propagate to the failure status.

The new check appends to error_msg but never influences has_validation_failures (line 694), which only considers convergence_result, performance_result, and memory_result. If grad norms are NaN/Inf but the other checks happen to pass, the function returns (True, error_msg) — signaling success despite the NaN grad norm error.

Proposed fix
+    has_nan_or_inf_grad_norm = False
     # check for grad norm
     has_nan_grad_norm = any(math.isnan(current_grad_norm[step]) for step in current_grad_norm)
     has_inf_grad_norm = any(math.isinf(current_grad_norm[step]) for step in current_grad_norm)
     if has_nan_grad_norm or has_inf_grad_norm:
+        has_nan_or_inf_grad_norm = True
         error_msg += "Grad norm check failed. Found NaN or Inf in grad norm.\n"
         error_msg += f"Grad norm values: {current_grad_norm}\n"

Then at line 694:

     has_validation_failures = not convergence_result["passed"] or not performance_result["passed"]
+    has_validation_failures = has_validation_failures or has_nan_or_inf_grad_norm

     if not memory_metrics_missing:
         has_validation_failures = has_validation_failures or not memory_result["passed"]

Also applies to: 694-697


Signed-off-by: oliver könig <okoenig@nvidia.com>
ko3n1g added the r0.3.0 label (Cherry-pick label for the r0.3.0 release branch) on Feb 7, 2026
ko3n1g merged commit 809a9ee into main on Feb 9, 2026
54 checks passed
ko3n1g deleted the ko3n1g/fix/nan-grad-norm-check branch on February 9, 2026 at 14:39

Labels

r0.3.0 Cherry-pick label for r0.3.0 release branch
