nonuniform tensor parallelism #2149
base: main
Conversation
/ok to test 7a33efd

Thank you for your contribution! NVIDIA Megatron-LM is currently transitioning to development on GitHub. We will aim to review your PR after we complete our transition and stabilize our GitHub development process. Thank you for your understanding.
skyw left a comment:
Recommend putting everything into nonuniform_tp.py, inheriting from the core classes, and overriding member functions where needed. That way the code can be non-intrusive.
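A minimal sketch of that layout, assuming the import paths below; every module and class name here is illustrative rather than taken from the PR:

```python
# megatron/core/distributed/nonuniform_tp.py  (illustrative single home for the NTP code)
from megatron.core.distributed import DistributedDataParallel
from megatron.core.distributed.param_and_grad_buffer import _ParamAndGradBucketGroup


class NonuniformTPDDP(DistributedDataParallel):
    """Overrides only the gradient hooks that need NTP-specific behavior."""


class NTPBucketGroup(_ParamAndGradBucketGroup):
    """Overrides only the grad-sync entry points that need NTP-specific behavior."""
```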
param.main_grad.add_(param.grad.data)
param.grad = None

# Nonuniform TP: gather grads from spare GPUs and scatter to core GPUs
Inherit from DDP, make a new class and override _make_backward_post_hook().
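A rough sketch of that override, assuming `_make_backward_post_hook` builds one hook per parameter as in recent Megatron-LM; `_ntp_gather_and_scatter` is a hypothetical helper standing in for the PR's gather/scatter step:

```python
from megatron.core.distributed import DistributedDataParallel


class NonuniformTPDDP(DistributedDataParallel):
    def _make_backward_post_hook(self, param):
        # Reuse the stock hook instead of editing the core DDP class.
        base_hook = super()._make_backward_post_hook(param)

        def hook(*unused):
            # Nonuniform TP: gather grads from the spare GPUs and scatter them
            # to the core GPUs before the regular accumulation/sync path runs.
            self._ntp_gather_and_scatter(param)  # hypothetical helper
            base_hook(*unused)

        return hook
```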
delay_wgrad_compute: bool = False
"""Delay the weight gradient computation to improve batch-level communication overlapping"""

tp_base: int = 8
Make a small config class just for NTP.
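A possible shape for it; `tp_base` comes from the diff above, while `failed_ranks` is an assumed field suggested by the PR description:

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class NonuniformTPConfig:
    """NTP-only settings, kept out of the shared DDP/transformer configs."""

    tp_base: int = 8
    """Full tensor-parallel degree on healthy DP ranks."""

    failed_ranks: List[Tuple[int, ...]] = field(default_factory=list)
    """Coordinates of failed GPUs in the parallelism mapping (assumed field)."""
```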
ntp_map(layer.mlp, ddp_config, layer.mlp.config.ffn_hidden_size)


def test_ntp():
Move the test to the right place under /tests.
communication_group = self.data_parallel_group

# Coalesce communication kernels across buckets in the bucket group.
# NOTE: only sync on core GPUs (not spares) for nonuniform TP
Also subclass, then override start_grad_sync.
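For instance, something along these lines instead of the in-place edit; the base class name assumes the bucket-group class in param_and_grad_buffer.py, and `is_core_gpu` is a hypothetical flag set when the NTP buffers are built:

```python
from megatron.core.distributed.param_and_grad_buffer import _ParamAndGradBucketGroup


class NTPBucketGroup(_ParamAndGradBucketGroup):
    def start_grad_sync(self):
        # Nonuniform TP: the spare GPUs sit out of the coalesced grad reduce;
        # only the core GPUs launch the communication kernels.
        if not getattr(self, 'is_core_gpu', True):  # hypothetical flag
            return
        super().start_grad_sync()
```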
if hasattr(param, 'main_grad'):
    param.grad = param.main_grad
    # NOTE: need to make this contiguous for nonuniform TP
    param.grad = param.main_grad.contiguous()
Move this into the grad sync instead; don't touch widely used code.
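That is, the `.contiguous()` call could live in the NTP grad-sync path so the shared optimizer code keeps its original `param.grad = param.main_grad` line; the helper name below is made up for illustration:

```python
def _ntp_finish_grad_sync(param):
    # After gathering gradient shards from the spare GPUs, the reassembled
    # main_grad may be a non-contiguous view; fix that here so downstream
    # code can use it unchanged.
    if hasattr(param, 'main_grad'):
        param.main_grad = param.main_grad.contiguous()
```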
nccl_comm_cfgs[pg_name][key_value_pair[0]] = key_value_pair[1]


def _get_active_ranks_for_ntp(
Don't touch parallel_state; move this into the NTP files.
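For example, the helper could live in the NTP module and be imported where needed; the signature and body below are guesses at what `_get_active_ranks_for_ntp` does, based on the PR description:

```python
# megatron/core/distributed/nonuniform_tp.py  (illustrative location)
from typing import Callable, List, Set, Tuple


def get_active_ranks_for_ntp(
    ranks: List[int],
    rank_to_coord: Callable[[int], Tuple[int, ...]],
    failed_coords: Set[Tuple[int, ...]],
) -> List[int]:
    """Drop ranks whose parallelism coordinate is marked as failed."""
    return [r for r in ranks if rank_to_coord(r) not in failed_coords]
```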
if rank in ranks:
    _MODEL_PARALLEL_GROUP = group
    _MODEL_PARALLEL_GLOBAL_RANKS = ranks
    _CONTEXT_PARALLEL_GROUP = group
Update to latest; it looks like you branched from very old code?
What does this PR do?
Enables training with reduced TP degree on DP ranks with some failed GPUs. Failed ranks can be specified by their coordinate in the parallelism mapping. This can enable training jobs to continue without rescheduling and waiting for available resources.
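A toy illustration of the idea (the coordinate format and helper below are hypothetical, not this PR's actual interface): with tp_base = 8 and one failed GPU on a DP rank, that rank continues with the remaining core GPUs at a reduced TP degree while the other DP ranks keep TP = 8.

```python
tp_base = 8
failed = {(1, 3)}  # (data-parallel rank, tensor-parallel rank) of a failed GPU


def core_tp_ranks(dp_rank, tp_base, failed):
    """TP ranks that keep participating on a given DP rank."""
    return [t for t in range(tp_base) if (dp_rank, t) not in failed]


print(core_tp_ranks(0, tp_base, failed))  # [0, 1, 2, 3, 4, 5, 6, 7] -> full TP degree
print(core_tp_ranks(1, tp_base, failed))  # [0, 1, 2, 4, 5, 6, 7]    -> reduced TP degree
```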
Contribution process
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

Code review
The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into the `main` branch:

- (Step 1): Add PR label: add the Expert Review label when your PR is ready for review.
- (Step 2): Collect the expert reviewers' reviews. Final Review might get declined if these requirements are not fulfilled.
- (Step 3): Final Review: add the Final Review label.
- (Optional Step 4): Cherry-pick into release branch. If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into the `dev` branch:

The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.