
Conversation

@chtruong814 chtruong814 commented Jan 6, 2026

beep boop [🤖]: Hi @yfw 👋,

we've cherry-picked #1717 into r0.5.0 for you! 🚀

Please review and approve this cherry pick at your convenience!

Summary by CodeRabbit

  • Bug Fixes

    • Improved stability during distributed training with activation checkpointing enabled, by optimizing backend selection and resource handling.
    • Enhanced GPU backend selection logic to prevent computation inconsistencies during model checkpointing operations.
  • Documentation

    • Added explanatory comments clarifying backend selection configuration in distributed training scenarios.


@chtruong814 chtruong814 requested a review from a team as a code owner January 6, 2026 10:29
@chtruong814 chtruong814 requested a review from yfw January 6, 2026 10:29

github-actions bot commented Jan 6, 2026

⚠️ File Consistency Check

Check based on commit: 3249e2e (PR #1727 from cherry-pick-1717-r0.5.0)

⚠️ DTensor Policy Worker Synchronization Warning

The file nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py was modified in this PR, but nemo_rl/models/policy/workers/dtensor_policy_worker.py was not updated.

Why this matters:
These files contain related DTensor policy worker implementations that should be kept synchronized to ensure consistency across different versions.

Action required:

  • Please review if the changes in nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py should also be applied to nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • Update nemo_rl/models/policy/workers/dtensor_policy_worker.py if necessary to maintain consistency
  • If the files are intentionally different, please add a comment in the PR explaining why

Files to check:

  • Modified: nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • Not modified: nemo_rl/models/policy/workers/dtensor_policy_worker.py

This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

coderabbitai bot commented Jan 6, 2026

📝 Walkthrough

Walkthrough

This PR modifies DTensor policy worker initialization to adjust SDPA backend selection during activation checkpointing. It imports SDPBackend within the empty-weights block, extends backend selection logic to include SDPBackend.MATH when activation checkpointing is enabled (except under context parallelism), and globally disables the cuDNN SDPA backend to prevent recomputation issues. The same logic is applied in two initialization paths with added clarifying comments.
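
As a rough illustration of the behavior the walkthrough describes, the backend selection could look like the sketch below. The names `cp_size` and `activation_checkpointing`, and the function itself, are assumptions for illustration, not the actual code in dtensor_policy_worker_v2.py.

```python
# Hedged sketch of the backend-selection logic described above; the names used
# here are assumed, not taken from the worker implementation.
from torch.nn.attention import SDPBackend


def select_sdpa_backends(
    cp_size: int, activation_checkpointing: bool
) -> list[SDPBackend]:
    # FLASH and EFFICIENT are the baseline choices (and the only ones
    # compatible with context parallelism).
    backends = [SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION]
    if activation_checkpointing and cp_size <= 1:
        # Adding MATH provides a deterministic fallback so the forward pass and
        # the recomputation during checkpointing resolve to the same backend.
        backends.append(SDPBackend.MATH)
    return backends
```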

Changes

Cohort / File(s): DTensor Policy Worker SDPA Backend Configuration — nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
Change Summary: Extended SDPA backend selection logic to conditionally include SDPBackend.MATH when activation checkpointing is enabled (unless context parallelism is active); globally disabled the cuDNN SDPA backend when activation checkpointing is active to prevent recomputation issues; duplicated the same logic across two initialization paths; added explanatory comments clarifying the sequence-packing and backend-selection rationale.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • PR #1717: Addresses the same code-level issue of disabling cuDNN SDPA backend when activation checkpointing is enabled in the same file.
  • PR #1682: Directly modifies the same initialization logic to extend SDPBackend selection and disable cuDNN SDPA during activation checkpointing.
  • PR #1676: Changes SDPA backend selection when activation checkpointing is enabled to address SDPA recomputation concerns.

Suggested labels

CI:L1, r0.5.0

Suggested reviewers

  • yfw
  • terrykong

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Test Results For Major Changes — ⚠️ Warning: the PR modifies SDPA backend selection with activation checkpointing but lacks required performance measurements, test results, or regression documentation. Resolution: add test results demonstrating no regressions, include performance measurements comparing the original and fixed behavior, or document that testing was deferred.

✅ Passed checks (3 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title accurately describes the main change (disabling the cuDNN SDPA backend when using activation checkpointing), which aligns with the code modifications in dtensor_policy_worker_v2.py.
  • Docstring Coverage — ✅ Passed: docstring coverage is 100.00%, which meets the required threshold of 80.00%.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between edfc23d and 3249e2e.

📒 Files selected for processing (1)
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml (see the sketch after this list)
Follow the Google Python Style Guide for Python code
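
A minimal, hypothetical illustration of a few of these conventions; the class, key, and function names below are invented for the example and do not come from nemo_rl:

```python
from typing import NotRequired, TypedDict

G_DEFAULT_LOG_PREFIX = "policy"  # global: upper snake_case with a 'G' prefix


class PolicyConfig(TypedDict):
    """Example config schema following the guidelines above.

    Keys:
        precision: Required; expected to be present in the YAML (no hidden default in code).
        k_99th_percentile_tokens: Optional cap on sequence length; its default lives in
            the exemplar YAMLs under examples/configs/*.yaml rather than in code.
    """

    precision: str
    k_99th_percentile_tokens: NotRequired[int]


def load_precision(policy_cfg: PolicyConfig) -> str:
    # Access required keys directly and expect presence (no fallback default).
    return policy_cfg["precision"]
```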

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (3)
nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (3)

186-194: LGTM!

The added comment clarifies the rationale for the attention implementation selection logic and includes a helpful reference to the upstream NeMo Automodel code.


280-301: LGTM! Backend selection logic correctly handles activation checkpointing.

The conditional ordering is appropriate:

  • For cp_size > 1: Uses only FLASH + EFFICIENT (matching Automodel's CP constraints)
  • For activation_checkpointing with cp_size <= 1: Adds MATH as a fallback

The global cudnn disable at line 320 provides additional protection regardless of which branch is taken.
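
For context, a self-contained sketch of how such a backend list is typically applied with PyTorch's `sdpa_kernel` context manager; the tensor shapes and variable names are illustrative, not the worker's actual call sites:

```python
# Illustrative only: restricting SDPA dispatch to an allowed backend list.
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(1, 8, 128, 64)  # (batch, heads, seq_len, head_dim)
allowed = [SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH]

# Only the listed backends may be dispatched inside this context; with MATH
# included there is always a valid fallback (e.g. on CPU).
with sdpa_kernel(allowed):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
```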


315-320: Global cudnn SDPA disable correctly prevents recomputation issues.

The fix appropriately disables the cuDNN SDPA backend using the torch.backends.cuda.enable_cudnn_sdp() API to ensure consistent backend selection between forward and recomputation passes during activation checkpointing. This is a process-wide setting, but since Ray workers are isolated processes, it does not affect other workloads.
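
The process-wide toggle mentioned above is an existing PyTorch API; in isolation it looks like the following sketch, not the exact lines from the worker:

```python
import torch

# Disable the cuDNN SDPA backend process-wide so the forward pass and the
# activation-checkpointing recomputation cannot land on different kernels.
torch.backends.cuda.enable_cudnn_sdp(False)
```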



@yuki-97 yuki-97 added the CI:L1 label (Run doctests, unit tests, and functional tests) Jan 6, 2026
@yuki-97 yuki-97 enabled auto-merge (squash) January 6, 2026 11:15
@yuki-97 yuki-97 merged commit 70142eb into r0.5.0 Jan 7, 2026
79 of 88 checks passed
@yuki-97 yuki-97 deleted the cherry-pick-1717-r0.5.0 branch January 7, 2026 06:17

Labels

cherry-pick · CI:L1 (Run doctests, unit tests, and functional tests) · Run CICD
