Conversation

@jiemingz
Contributor

@jiemingz jiemingz commented Dec 5, 2025

What does this PR do ?

  • improves training capture time
  • supports cudagraphing module functions (for partial moe cudagraphs; see the sketch after this list)
  • supports partially cudagraphing moe router and postprocess
  • generalizes cudagraph input output buffer reuse
  • supports cudagraph input and output buffer recycling and deallocation to reduce memory overhead for training
  • misc improvements to reduce cudagraph replay overhead
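
A rough illustration of the partial-scope idea above (a sketch using stock PyTorch utilities, not the wrapper API added in this PR): only a MoE router is captured into a CUDA graph, while the expert compute stays eager.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a MoE router; real hidden sizes, dtypes and
# token counts would come from the TransformerConfig.
class TinyRouter(nn.Module):
    def __init__(self, hidden, num_experts):
        super().__init__()
        self.gate = nn.Linear(hidden, num_experts)

    def forward(self, x):
        return torch.softmax(self.gate(x), dim=-1)

router = TinyRouter(hidden=1024, num_experts=8).cuda()
sample = torch.randn(4096, 1024, device="cuda", requires_grad=True)

# Capture only the router (forward and backward) into a CUDA graph.
# The expert MLPs and token dispatch stay eager, which is the
# "partial cudagraph" idea; shapes must stay static across replays.
graphed_router = torch.cuda.make_graphed_callables(router, (sample,))
routing_probs = graphed_router(sample)  # replays the captured graph
```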

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share and discuss a design doc with the team.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either [email protected] or [email protected].

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@jiemingz jiemingz requested review from a team as code owners December 5, 2025 17:49
@copy-pr-bot

copy-pr-bot bot commented Dec 5, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@jiemingz jiemingz changed the title from "partial cudagraph scopes and improvements for training" to "draft: partial cudagraph scopes and improvements for training" on Dec 5, 2025
@jiemingz jiemingz self-assigned this Dec 5, 2025
@jiemingz jiemingz force-pushed the jiemingz/mcore_cudagraph_improvements branch from 0cf9ab8 to 2b8f924 on December 8, 2025 04:09
@jiemingz jiemingz changed the title from "draft: partial cudagraph scopes and improvements for training" to "partial cudagraph scopes and improvements for training" on Dec 8, 2025
@jiemingz jiemingz force-pushed the jiemingz/mcore_cudagraph_improvements branch from d86d6db to 9f268c1 on December 8, 2025 14:14
@jiemingz jiemingz force-pushed the jiemingz/mcore_cudagraph_improvements branch from 9f268c1 to 8b0fe08 on December 16, 2025 16:52
@mathemakitten
Contributor

mathemakitten commented Dec 16, 2025

Can we throw an error or at least a warning if HAVE_TE_GRAPHS = False, now that none of the new weakref-based code works without HAVE_TE_GRAPHS? Some of the older containers silently fail on `from transformer_engine.pytorch.utils import make_weak_ref` but proceed as usual; cudagraphs then never execute, and the logs give no obvious reason why.
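
A minimal sketch of the kind of guard being asked for (illustrative, not the PR's actual code), assuming a module-level HAVE_TE_GRAPHS flag around the Transformer Engine import:

```python
import warnings

try:
    # Newer Transformer Engine builds expose make_weak_ref; older containers
    # may not, even though transformer_engine itself imports fine.
    from transformer_engine.pytorch.utils import make_weak_ref

    HAVE_TE_GRAPHS = True
except ImportError:
    HAVE_TE_GRAPHS = False


def _check_te_graphs():
    """Fail loudly instead of silently skipping CUDA graph capture."""
    if not HAVE_TE_GRAPHS:
        warnings.warn(
            "transformer_engine.pytorch.utils.make_weak_ref is unavailable; "
            "weakref-based buffer reuse is disabled and CUDA graphs will not "
            "be captured."
        )
```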

@jiemingz jiemingz force-pushed the jiemingz/mcore_cudagraph_improvements branch 3 times, most recently from 5ad199d to 529107a on December 18, 2025 21:49
Contributor

@mathemakitten mathemakitten left a comment


This looks fine to me, modulo a pass for docstrings and some comments on clarity for the new CudagraphArtifacts class.

config: TransformerConfig,
base_module = None,
function_name = None,
need_backward = True,
Contributor

Why is this needed when we have base_module.training and/or torch.is_grad_enabled() available in the same scope?

Contributor Author

This is for when you want to capture something that doesn't have a backward pass but is still run inside a training loop. For instance, we might want to graph over a log_grad_norms call or something similar, where there isn't a backward pass to graph over. So this is a flag to do that without having to set base_module.training and/or torch.is_grad_enabled() every time we call that function.
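
To illustrate the distinction (a sketch with plain torch.cuda.CUDAGraph, not the wrapper added in this PR): the surrounding loop can have grad enabled and the module in training mode, yet the captured function itself is forward-only, which is what a need_backward=False flag expresses.

```python
import torch

@torch.no_grad()
def log_grad_norms(params):
    # Hypothetical forward-only helper: stack per-parameter grad norms.
    return torch.stack([p.grad.norm() for p in params if p.grad is not None])

def capture_forward_only(fn, *args):
    """Capture fn into a CUDA graph without capturing any backward pass."""
    # Warm up on a side stream first, as torch.cuda.graph expects.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        fn(*args)
    torch.cuda.current_stream().wait_stream(s)

    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        static_out = fn(*args)  # recorded once; rerun with graph.replay()
    return graph, static_out
```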


@jiemingz jiemingz requested a review from a team as a code owner December 23, 2025 16:43
Signed-off-by: Jimmy Zhang <[email protected]>
Signed-off-by: Jimmy Zhang <[email protected]>
Signed-off-by: Jieming Zhang <[email protected]>
@jiemingz jiemingz force-pushed the jiemingz/mcore_cudagraph_improvements branch from 825453d to f275189 on January 7, 2026 02:52
@jiemingz
Copy link
Contributor Author

jiemingz commented Jan 7, 2026

/ok to test f275189

@kvareddy kvareddy requested a review from fanshiqing January 7, 2026 06:11
@deepakn94 deepakn94 changed the title from "partial cudagraph scopes and improvements for training" to "Various CUDA graph improvements on capture time, replay time, memory footprint" on Jan 7, 2026
@jiemingz jiemingz force-pushed the jiemingz/mcore_cudagraph_improvements branch from 6ea46c5 to 6f13ce8 on January 7, 2026 18:10
@jiemingz
Copy link
Contributor Author

jiemingz commented Jan 7, 2026

/ok to test 6f13ce8
