
Merge main into dev #3865

Open
ilml wants to merge 154 commits into NVIDIA:dev from ilml:merge-main-into-dev

Conversation

@ilml
Contributor

ilml commented Mar 13, 2026

What does this PR do?

Sync the latest main branch changes into the development line and preserve ongoing dev-only work through the required conflict resolutions.

Summary

  • Pull the latest upstream main changes into the merge branch, including broad updates across CI/workflows, docs, examples, inference, FSDP/resharding, and test coverage.
  • Preserve active dev work around emerging optimizers and layer-wise optimizer refactoring, HyperConnection, Dynamic-CP / THD handling, MoE, MLA/MTP, and related attention and training paths.
  • Resolve merge conflicts in the key overlap areas so dev behavior is retained while adopting the latest main changes.
  • Keep the existing dev uv.lock during the merge because regenerating it with the installed uv currently fails on upstream nemo-run metadata.

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

maanug-nv and others added 30 commits February 25, 2026 00:59
@ilml
Contributor Author

ilml commented Mar 19, 2026

/ok to test 8096db6

Preseed uv-created environments with setuptools and related build tools so git-source dependencies resolve reliably in CI, align the Transformer Engine source pin with the preserved lockfile, and restore the grouped_gemm helper whose absence broke install-time imports.
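A minimal sketch of the kind of availability guard the restored grouped_gemm helper provides, assuming the usual optional-dependency pattern; the helper names and error message here are illustrative and may differ from the actual code in megatron/core.

```python
# Sketch only: the real helper restored by this commit may differ in detail.
try:
    import grouped_gemm  # optional extension used by the MoE grouped-GEMM path
except ImportError:
    grouped_gemm = None


def grouped_gemm_is_available() -> bool:
    """Return True when the optional grouped_gemm extension is importable."""
    return grouped_gemm is not None


def assert_grouped_gemm_is_available() -> None:
    """Fail with an actionable message instead of an install-time import crash."""
    assert grouped_gemm is not None, (
        "grouped_gemm is not available; install it with `pip install grouped_gemm`."
    )
```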

Made-with: Cursor
@ilml
Contributor Author

ilml commented Mar 19, 2026

/ok to test c2cbe54

- Add InferenceGroupedMLP class to experts.py from main (it was lost during
  merge conflict resolution while backends.py still imported it)
- Add megatron/core/inference/moe/ module from main (dependency of
  InferenceGroupedMLP)
- Update mxfp8_tensor.py and add mxfp8_quantize.py from main (needed
  for triton backend support in inference MoE)
- Fix duplicate autodoc items: remove duplicate _EMERGING_OPTIMIZERS
  placeholder, duplicate fsdp_all_gather_in_start_param_sync field,
  duplicate logger assignments in spec_utils.py and token_dispatcher.py
- Fix docs warnings: replace H3 heading in moe_utils.py docstring with
  bold text, remove orphaned docs/source/api-guide/router_replay.md,
  remove redundant docs/api-guide/fine_grained_activation_offloading.md,
  exclude deepseek reproduce guide from Sphinx toctree check

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@ilml
Contributor Author

ilml commented Mar 21, 2026

/ok to test 57a5344

- param_and_grad_buffer.py: keep layerwise optimizer all_gather path
  from main and dev's grad_enabled caching + no_grad wrapper for the
  standard distributed optimizer path
- transformer_config.py: keep both fused_residual_rmsnorm (main) and
  use_transformer_engine_op_fuser (dev) config fields (see the sketch after this list)
- test_mamba_moe_model.py: keep golden config entries from both branches
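A minimal sketch of how the merged transformer_config.py keeps both flags; only the two field names come from the commit message above, while the defaults and comments are assumptions.

```python
from dataclasses import dataclass


@dataclass
class TransformerConfigExcerpt:
    """Illustrative excerpt only, not the real TransformerConfig."""

    # retained from main: fuse the residual add with RMSNorm
    fused_residual_rmsnorm: bool = False

    # retained from dev: route eligible ops through the Transformer Engine op fuser
    use_transformer_engine_op_fuser: bool = False
```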

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@ilml
Contributor Author

ilml commented Mar 21, 2026

/ok to test d057244

The process_mtp_loss function now passes input_ as a keyword argument (from
the dev branch changes), but the test mock expected a positional 'hidden'
parameter. Updated the mock signature to match.
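A minimal sketch of the mock-signature change; only the input_ keyword and the old 'hidden' parameter name come from the comment above, everything else (function names, bodies) is illustrative.

```python
# Old stub: only accepted a positional `hidden` argument, so the dev-branch
# call pattern process_mtp_loss(..., input_=hidden_states) raised a TypeError.
def mock_process_mtp_loss_old(loss, hidden):
    return loss


# Updated stub: mirrors the dev-branch call site, which passes input_ by keyword.
def mock_process_mtp_loss_new(loss, *, input_=None, **kwargs):
    return loss
```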

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@ilml
Contributor Author

ilml commented Mar 21, 2026

/ok to test 1e9a599

@ilml
Contributor Author

ilml commented Mar 21, 2026

/ok to test 1e9a599

The float32 variant consistently times out with an NCCL ALLREDUCE timeout
(SeqNum=361) on some CI shards while passing on others. The test and fusion
code are identical to the dev branch, which points to a pre-existing
infrastructure issue with multi-GPU JIT compilation timing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@ilml
Contributor Author

ilml commented Mar 21, 2026

/ok to test fc2d334

…nction

router.py passes dense_output=True in inference mode, but the merge took
dev's version of moe_utils.py, which lacks this parameter. Added it back
from main to fix the TypeError in InferenceTopKRouter.
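A minimal sketch of what a dense_output flag on a top-k routing helper looks like; the function name and signature here are hypothetical, and only the parameter name and the InferenceTopKRouter consumer come from the comment above.

```python
import torch


def topk_routing(logits: torch.Tensor, topk: int, dense_output: bool = False):
    """Return routing probabilities and expert indices for each token.

    With dense_output=True, the per-token top-k probabilities are scattered
    back into a dense [num_tokens, num_experts] map, which is what an
    inference-time router such as InferenceTopKRouter would consume.
    """
    probs = torch.softmax(logits, dim=-1)
    topk_probs, topk_indices = torch.topk(probs, k=topk, dim=-1)
    if dense_output:
        dense = torch.zeros_like(probs)
        dense.scatter_(-1, topk_indices, topk_probs)
        return dense, topk_indices
    return topk_probs, topk_indices
```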

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@ilml
Contributor Author

ilml commented Mar 22, 2026

/ok to test 6521ee2

@ilml
Contributor Author

ilml commented Mar 22, 2026

/ok to test bd44a67

The test was passing layer_wise_distributed_optimizer as a keyword argument
to get_megatron_muon_optimizer(), but that function doesn't accept it. Set
it on the OptimizerConfig object instead.
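A minimal sketch of the test fix, assuming the factory takes an OptimizerConfig; the layer_wise_distributed_optimizer field and the factory name come from the comment above, while the import path of the factory and the remaining arguments are assumptions.

```python
from megatron.core.optimizer import OptimizerConfig

# Hypothetical import path for the factory mentioned above.
from megatron.core.optimizer.muon import get_megatron_muon_optimizer

config = OptimizerConfig(lr=1e-3)

# Previously passed as layer_wise_distributed_optimizer=True to the factory,
# which does not accept that keyword; set it on the config object instead.
config.layer_wise_distributed_optimizer = True

optimizer = get_megatron_muon_optimizer(config)  # further arguments omitted in this sketch
```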

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@ilml force-pushed the merge-main-into-dev branch from bd44a67 to 5c04917 on March 22, 2026 at 02:52
@ilml
Contributor Author

ilml commented Mar 22, 2026

/ok to test 5c04917

Pass async_allgather and model_chunks from optimizer config to
LayerWiseDistributedOptimizer constructor so overlap param gather
works correctly with layer-wise optimizers.
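A minimal sketch of the wiring described above; only the async_allgather and model_chunks parameter names come from the comment, while the constructor's remaining signature, the config field overlap_param_gather, and the surrounding variables are assumptions.

```python
# Hypothetical construction site for illustration only.
optimizer = LayerWiseDistributedOptimizer(
    optimizer=base_optimizer,
    config=optimizer_config,
    # forwarded from the optimizer config so overlapped parameter all-gather
    # is wired up for layer-wise optimizers
    async_allgather=optimizer_config.overlap_param_gather,
    model_chunks=model_chunks,
)
```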

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@ilml
Contributor Author

ilml commented Mar 22, 2026

/ok to test d8caf0a
