
Conversation

@daiyaanarfeen

What does this PR do?

Enables training with a reduced TP degree on DP ranks that have some failed GPUs. Failed ranks can be specified by their coordinates in the parallelism mapping. This lets training jobs continue without rescheduling and waiting for resources to become available.
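
For illustration, a minimal sketch of what specifying failed GPUs by their parallelism coordinates could look like. The names below (`FailedRank`, `surviving_tp_ranks`) are hypothetical and not part of this PR; how the reduced TP degree is actually chosen is up to the PR itself.

```python
# Hypothetical sketch: identify failed GPUs by (dp_rank, tp_rank) coordinates
# and work out which TP ranks remain usable on an affected DP rank. This only
# illustrates the coordinate-based bookkeeping, not the PR's actual interface.
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class FailedRank:
    dp_rank: int  # data-parallel coordinate of the failed GPU
    tp_rank: int  # tensor-parallel coordinate of the failed GPU


def surviving_tp_ranks(dp_rank: int, tp_size: int, failed: List[FailedRank]) -> List[int]:
    """TP ranks on the given DP rank whose GPUs did not fail."""
    dead = {f.tp_rank for f in failed if f.dp_rank == dp_rank}
    return [r for r in range(tp_size) if r not in dead]


# Example: with TP=8 and the GPU at (dp_rank=1, tp_rank=3) failed, DP rank 1
# keeps seven healthy GPUs and can fall back to a smaller TP degree.
print(surviving_tp_ranks(dp_rank=1, tp_size=8, failed=[FailedRank(1, 3)]))
# -> [0, 1, 2, 4, 5, 6, 7]
```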

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share and discuss a design doc with the team.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@copy-pr-bot

copy-pr-bot bot commented Nov 5, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ko3n1g
Contributor

ko3n1g commented Nov 6, 2025

/ok to test 7a33efd

@github-actions
Contributor

github-actions bot commented Nov 6, 2025

Thank you for your contribution!

NVIDIA Megatron-LM is currently transitioning to development on GitHub. We will aim to review your PR after we complete our transition and stabilize our GitHub development process.

Thank you for your understanding.


@skyw (Contributor) left a comment


Recommend to put everything into nonuniform_tp.py, inherit from core classes, and override member functions when needed. That way the code can be non-intrusive.
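
A rough sketch of what that single-module layout could look like, pulling together the suggestions in the inline comments below; every name here is hypothetical.

```python
"""nonuniform_tp.py (hypothetical layout).

All nonuniform-TP (NTP) behavior lives in one module and is introduced by
subclassing core classes, so the shared Megatron-core code paths stay untouched:

* NonuniformTPConfig                   - NTP-only settings (e.g. tp_base)
* NonuniformTPDistributedDataParallel  - overrides _make_backward_post_hook
* NonuniformTPBucketGroup              - overrides start_grad_sync
* _get_active_ranks_for_ntp            - rank filtering kept out of parallel_state
"""
```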

param.main_grad.add_(param.grad.data)
param.grad = None

# Nonuniform TP: gather grads from spare GPUs and scatter to core GPUs

Inherit from DDP, make a new class and override _make_backward_post_hook().
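
A hedged sketch of that suggestion, assuming Megatron-core's DistributedDataParallel exposes _make_backward_post_hook as referenced here (the exact hook signature varies between versions); _exchange_spare_grads is a hypothetical placeholder for the PR's gather/scatter step.

```python
# Sketch only, not a drop-in change: wrap the base backward post-hook and add
# the nonuniform-TP grad exchange afterwards, instead of editing core DDP.
from megatron.core.distributed import DistributedDataParallel


class NonuniformTPDistributedDataParallel(DistributedDataParallel):
    """DDP variant that runs the NTP grad exchange after the base hook."""

    def _make_backward_post_hook(self, param):
        base_hook = super()._make_backward_post_hook(param)

        def hook(*unused):
            # Base behavior: accumulate param.grad into main_grad, drop param.grad.
            base_hook(*unused)
            # Nonuniform TP: gather grads from spare GPUs and scatter to core GPUs.
            self._exchange_spare_grads(param)  # hypothetical helper

        return hook
```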

delay_wgrad_compute: bool = False
"""Delay the weight gradient computation to improve batch-level communication overlapping"""

tp_base: int = 8

Make a small config class just for NTP.
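
A minimal sketch of such a config; only tp_base comes from the diff above, the other field is illustrative.

```python
# Sketch of an NTP-only config object, kept separate from the shared DDP config.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class NonuniformTPConfig:
    tp_base: int = 8
    """Full tensor-parallel degree used by healthy DP ranks."""

    failed_rank_coords: List[Tuple[int, int]] = field(default_factory=list)
    """(dp_rank, tp_rank) coordinates of failed GPUs (illustrative field)."""
```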

ntp_map(layer.mlp, ddp_config, layer.mlp.config.ffn_hidden_size)


def test_ntp():

Move the test to the right place under /tests.

communication_group = self.data_parallel_group

# Coalesce communication kernels across buckets in the bucket group.
# NOTE: only sync on core GPUs (not spares) for nonuniform TP

Also subclass, then override start_grad_sync.
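
A hedged sketch of that override. The import path and class name are assumptions (they differ between Megatron-core versions), and is_core_gpu is a hypothetical flag marking whether this rank is a core (non-spare) GPU.

```python
# Sketch only: keep the "only sync on core GPUs" behavior in a subclass rather
# than in the shared bucket-group implementation.
from megatron.core.distributed.param_and_grad_buffer import _ParamAndGradBucketGroup


class NonuniformTPBucketGroup(_ParamAndGradBucketGroup):
    """Only core GPUs join the data-parallel grad sync; spare GPUs skip it."""

    def start_grad_sync(self):
        if not getattr(self, "is_core_gpu", True):  # hypothetical flag
            # Spare GPUs do not participate in the coalesced DP communication.
            return
        super().start_grad_sync()
```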

if hasattr(param, 'main_grad'):
    param.grad = param.main_grad
    # NOTE: need to make this contiguous for nonuniform TP
    param.grad = param.main_grad.contiguous()

Move this to the grad sync path; don't touch widely used code.
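
A hedged sketch of moving the contiguous copy into the NTP grad-sync path so the widely used `param.grad = param.main_grad` assignment stays untouched; the helper name and placement are hypothetical.

```python
import torch


def ntp_contiguous_main_grad(param) -> torch.Tensor:
    """Return a contiguous buffer of main_grad for the NTP grad exchange."""
    grad = param.main_grad
    # The gather/scatter with spare GPUs needs a contiguous buffer; callers in
    # the NTP sync path would use the returned tensor, leaving the optimizer's
    # generic grad-copy code unchanged.
    return grad if grad.is_contiguous() else grad.contiguous()
```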

nccl_comm_cfgs[pg_name][key_value_pair[0]] = key_value_pair[1]


def _get_active_ranks_for_ntp(

Don't touch parallel_state; move this to the NTP files.
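
A hedged sketch of keeping that helper in the NTP module and filtering the rank lists parallel_state already builds, rather than editing parallel_state itself; the signature is illustrative.

```python
# Sketch only: a rank-filtering helper that would live in nonuniform_tp.py; the
# real _get_active_ranks_for_ntp in this PR may take different arguments.
from typing import List, Sequence


def _get_active_ranks_for_ntp(ranks: Sequence[int], failed_ranks: Sequence[int]) -> List[int]:
    """Drop failed global ranks from a process-group rank list."""
    dead = set(failed_ranks)
    return [r for r in ranks if r not in dead]
```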

if rank in ranks:
_MODEL_PARALLEL_GROUP = group
_MODEL_PARALLEL_GLOBAL_RANKS = ranks
_CONTEXT_PARALLEL_GROUP = group

Update to the latest; it looks like you branched from very old code.
