add npu megatron multi-node example #7321
Conversation
Summary of Changes

Hello @addsubmuldiv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request expands the NPU (Ascend) training examples by adding new multi-node configurations for Megatron SFT. It also refactors the directory structure for the existing Qwen3 LoRA training scripts, consolidating them under a dedicated qwen3 directory.
Summary of Changes

This pull request significantly enhances the Ascend NPU training examples by adding multi-node support for Megatron, enabling distributed fine-tuning of large language models like Qwen3-8B. Concurrently, it improves the overall organization of the existing Qwen3 training scripts and updates one of them to use the latest Qwen3-8B model, streamlining the development and deployment of advanced AI models on Ascend hardware.
Code Review
This pull request introduces a multi-node training example for Megatron on Ascend NPUs, along with some file restructuring. The example scripts are a good starting point, but have some issues. I've pointed out a critical error in node1.sh where MASTER_ADDR is incorrectly set for a multi-node scenario. I've also suggested improvements for placeholder values to make the scripts more user-friendly. Furthermore, I've highlighted significant code duplication between node1.sh and node2.sh and recommended refactoring them into a single, parameterized script for better maintainability. The other changes in the PR are fine.
In node1.sh:

```shell
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
NNODES=2 \
NODE_RANK=0 \
MASTER_ADDR=127.0.0.1 \
```
MASTER_ADDR is set to 127.0.0.1, which is incorrect for a multi-node setup as it refers to the local machine. For this example to work across multiple nodes, this should be the IP address of the master node, which must be reachable from all other nodes. Please use a placeholder like in node2.sh.
Suggested change:

```diff
-MASTER_ADDR=127.0.0.1 \
+MASTER_ADDR=xxx.xxx.xxx.xxx \
```
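As a quick sanity check before launching, one might verify from the second node that the chosen master address and rendezvous port are actually reachable (hypothetical commands, not part of the example scripts; the address is a placeholder):

```shell
# Run on the worker node; replace the address with the master node's real IP.
ping -c 3 xxx.xxx.xxx.xxx

# Once node1.sh is running, confirm the rendezvous port (29500 in these scripts) is open.
nc -zv xxx.xxx.xxx.xxx 29500
```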
node1.sh:

```shell
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
NNODES=2 \
NODE_RANK=0 \
MASTER_ADDR=127.0.0.1 \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \
megatron sft \
    --model 'Qwen/Qwen3-8B' \
    --dataset 'AI-ModelScope/alpaca-gpt4-data-zh#1000' \
    --save './SAVE' \
    --train_type 'lora' \
    --lora_rank 8 \
    --lora_alpha 32 \
    --target_modules 'all-linear' \
    --tensor_model_parallel_size 2 \
    --pipeline_model_parallel_size 1 \
    --context_parallel_size 1 \
    --sequence_parallel true \
    --micro_batch_size 1 \
    --global_batch_size 64 \
    --recompute_granularity selective \
    --recompute_modules core_attn \
    --cross_entropy_loss_fusion true \
    --no_gradient_accumulation_fusion true \
    --lr 1e-4 \
    --lr_warmup_fraction 0.05 \
    --min_lr 1e-5 \
    --max_epochs 1 \
    --log_interval 5 \
    --num_workers 4
```
The scripts node1.sh and node2.sh are nearly identical, which introduces code duplication and can make maintenance difficult. Consider merging them into a single script that accepts NODE_RANK and MASTER_ADDR as command-line arguments. This would make the example cleaner, more robust, and easier for users to adapt. For example, a single run.sh could be used as bash run.sh <NODE_RANK> <MASTER_ADDR>.
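For illustration, a minimal sketch of such a merged script, assuming the same two-node, eight-card setup as node1.sh and node2.sh (the run.sh name and its argument handling are hypothetical, not part of this PR):

```shell
#!/bin/bash
# Hypothetical run.sh that merges node1.sh and node2.sh into one parameterized script.
# Usage: bash run.sh <NODE_RANK> <MASTER_ADDR>
#   master node : bash run.sh 0 192.168.1.10
#   second node : bash run.sh 1 192.168.1.10
set -e

if [ "$#" -ne 2 ]; then
    echo "Usage: bash run.sh <NODE_RANK> <MASTER_ADDR>"
    exit 1
fi

NODE_RANK=$1
MASTER_ADDR=$2

# HCCL_SOCKET_IFNAME below is still a placeholder; replace 'xxx' with the actual
# network interface used for inter-node communication (e.g. eth0, ens33).
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
NNODES=2 \
NODE_RANK=$NODE_RANK \
MASTER_ADDR=$MASTER_ADDR \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \
megatron sft \
    --model 'Qwen/Qwen3-8B' \
    --dataset 'AI-ModelScope/alpaca-gpt4-data-zh#1000' \
    --save './SAVE' \
    --train_type 'lora' \
    --lora_rank 8 \
    --lora_alpha 32 \
    --target_modules 'all-linear' \
    --tensor_model_parallel_size 2 \
    --pipeline_model_parallel_size 1 \
    --context_parallel_size 1 \
    --sequence_parallel true \
    --micro_batch_size 1 \
    --global_batch_size 64 \
    --recompute_granularity selective \
    --recompute_modules core_attn \
    --cross_entropy_loss_fusion true \
    --no_gradient_accumulation_fusion true \
    --lr 1e-4 \
    --lr_warmup_fraction 0.05 \
    --min_lr 1e-5 \
    --max_epochs 1 \
    --log_interval 5 \
    --num_workers 4
```

Both nodes would then run the same file and differ only in the NODE_RANK argument, which keeps the two launch commands from drifting apart as the hyperparameters evolve.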
In node2.sh:

```shell
MASTER_ADDR=xxx.xxx.xxx.xxx \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \
```
Pull request overview
This PR adds comprehensive NPU (Ascend) training examples for Qwen3 models, including a new multi-node Megatron setup and additional training configurations.
- Updates existing Qwen3 Megatron example from Qwen2.5-7B to Qwen3-8B
- Adds new FSDP training configuration for Qwen3-32B
- Adds new DeepSpeed Zero3 training configuration for Qwen3-32B
- Introduces multi-node Megatron training examples for 2-node setups with 8 cards per node
Reviewed changes
Copilot reviewed 3 out of 6 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| examples/ascend/train/qwen3/qwen3_lora_megatron.sh | Updates model reference from Qwen2.5-7B-Instruct to Qwen3-8B and corresponding output path |
| examples/ascend/train/qwen3/qwen3_lora_fsdp/train.sh | Adds new FSDP training script for Qwen3-32B with 8-device parallelism |
| examples/ascend/train/qwen3/qwen3_lora_fsdp/fsdp.json | Adds FSDP configuration with Qwen3DecoderLayer wrapping and full sharding strategy |
| examples/ascend/train/qwen3/qwen3_lora_deepspeed.sh | Adds new DeepSpeed Zero3 training script for Qwen3-32B |
| examples/ascend/multi-node/megatron/node1.sh | Adds multi-node master node script with tensor parallelism and sequence parallelism |
| examples/ascend/multi-node/megatron/node2.sh | Adds multi-node worker node script with matching configuration to node1 |
In node2.sh:

```shell
MASTER_ADDR=xxx.xxx.xxx.xxx \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \
```
Copilot AI (Jan 8, 2026):
The placeholder values 'xxx.xxx.xxx.xxx' for MASTER_ADDR and 'xxx' for HCCL_SOCKET_IFNAME need to be replaced with actual values. Consider adding a comment explaining that users must replace these placeholders with their actual master node IP address and network interface name (e.g., eth0, ens33).
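As a concrete illustration of that suggestion, the top of node2.sh could be annotated along these lines (a sketch only: NODE_RANK=1 on the worker node is an assumption, and the IP and interface names remain placeholders the user must fill in):

```shell
# MASTER_ADDR: IP address of the master node (the one launched with NODE_RANK=0);
#   it must be reachable from every node in the job, e.g. 192.168.1.10.
# HCCL_SOCKET_IFNAME: network interface used for inter-node HCCL communication,
#   e.g. eth0 or ens33 (check the host's interfaces with `ip addr`).
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
NNODES=2 \
NODE_RANK=1 \
MASTER_ADDR=xxx.xxx.xxx.xxx \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \
megatron sft \
    --model 'Qwen/Qwen3-8B' \
    --dataset 'AI-ModelScope/alpaca-gpt4-data-zh#1000' \
    --save './SAVE' \
    --train_type 'lora'
# (remaining megatron sft arguments identical to node1.sh above)
```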
In node1.sh:

```shell
NODE_RANK=0 \
MASTER_ADDR=127.0.0.1 \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
```
Copilot AI (Jan 8, 2026):
The placeholder value 'xxx' for HCCL_SOCKET_IFNAME needs to be replaced with the actual network interface name. Consider adding a comment explaining that users must replace this placeholder with their actual network interface name (e.g., eth0, ens33).
Suggested change:

```diff
 NPROC_PER_NODE=8 \
+# Replace 'xxx' with your actual network interface name (e.g., eth0, ens33).
```
PR type
PR information
Write the detailed information related to this PR.
Experiment results
Paste your experiment results here (if needed).