Commit f41a9f7

add ds doc

Signed-off-by: wangli <[email protected]>
1 parent 96089b5 commit f41a9f7

1 file changed: +227 −0 lines changed
# Multi-Node-DP (DeepSeek)

## Verify Multi-Node Communication Environment

### Physical Layer Requirements

- The physical machines must be located on the same LAN, with network connectivity between them.
- All NPUs must be connected with optical modules, and the link status of every module must be normal.

### Verification Process

Execute the following commands on each node in sequence. Every result must be `success` and every link status must be `UP`. The examples below loop over eight NPUs; adjust the `{0..7}` range to match the number of NPUs per node:

```bash
# Check the remote switch ports
for i in {0..7}; do hccn_tool -i $i -lldp -g | grep Ifname; done
# Get the link status of the Ethernet ports (UP or DOWN)
for i in {0..7}; do hccn_tool -i $i -link -g ; done
# Check the network health status
for i in {0..7}; do hccn_tool -i $i -net_health -g ; done
# View the network detected IP configuration
for i in {0..7}; do hccn_tool -i $i -netdetect -g ; done
# View gateway configuration
for i in {0..7}; do hccn_tool -i $i -gateway -g ; done
# View NPU network configuration
cat /etc/hccn.conf
```
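
If you want a single pass/fail summary instead of reading the raw output, the same checks can be wrapped in a short script. This is a minimal sketch built on the `hccn_tool` invocations above; the `grep` patterns assume the driver prints `UP` for a healthy link and `success` for a healthy network, so adjust them if your driver version formats its output differently:

```bash
#!/bin/bash
# Minimal sketch: summarize link and network health for NPUs 0-7.
fail=0
for i in {0..7}; do
  hccn_tool -i $i -link -g | grep -q "UP" || { echo "NPU $i: link is DOWN"; fail=1; }
  hccn_tool -i $i -net_health -g | grep -q "success" || { echo "NPU $i: net_health check failed"; fail=1; }
done
[ $fail -eq 0 ] && echo "All NPU network checks passed"
```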

### NPU Interconnect Verification

#### 1. Get NPU IP Addresses

```bash
for i in {0..7}; do hccn_tool -i $i -ip -g | grep ipaddr; done
```
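
The command prints one `ipaddr` line per NPU. The addresses below are illustrative only; note them down, as they are the targets for the cross-node PING test:

```
ipaddr:10.20.0.10
ipaddr:10.20.0.11
...
```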

#### 2. Cross-Node PING Test

```bash
# Execute on one node, pinging an NPU IP of the other node (replace with the actual IP)
hccn_tool -i 0 -ping -g address 10.20.0.20
```
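
For an exhaustive check, every local NPU should be able to reach every remote NPU IP. A minimal sketch, where `remote_ips` is a hypothetical variable you fill with the addresses gathered in step 1 on the other node:

```bash
#!/bin/bash
# Ping every remote NPU IP from every local NPU device.
# remote_ips is a placeholder: fill in the addresses from step 1.
remote_ips="10.20.0.20 10.20.0.21"
for i in {0..7}; do
  for ip in $remote_ips; do
    hccn_tool -i $i -ping -g address $ip | grep -q "success" \
      || echo "NPU $i -> $ip: ping failed"
  done
done
```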

## Run with Docker

Assume you have two nodes with 16 NPUs each (for example, Atlas 800 A3 64G*16; the device mappings and `--tensor-parallel-size 16` below require 16 NPUs per node), and want to deploy the `DeepSeek-V3.1` model across them, either in BF16 or as the `DeepSeek-V3.1-W8A8` quantized version.

```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:|vllm_ascend_version|
export NAME=vllm-ascend

# Run the container using the defined variables
# Note: if you run Docker with bridge networking, expose the ports needed for multi-node communication in advance
docker run --rm \
--name $NAME \
--net=host \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci8 \
--device /dev/davinci9 \
--device /dev/davinci10 \
--device /dev/davinci11 \
--device /dev/davinci12 \
--device /dev/davinci13 \
--device /dev/davinci14 \
--device /dev/davinci15 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /mnt/sfs_turbo/.cache:/root/.cache \
-it $IMAGE bash
```
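
Run this container on both nodes. Since `npu-smi` is mounted into the container, you can confirm that all 16 NPUs are visible before launching the server:

```bash
# Inside the container: every device mapped above should be listed
npu-smi info
```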

:::::{tab-set}
::::{tab-item} DeepSeek-V3.1-BF16

Run the following scripts on the two nodes respectively.

:::{note}
Before launching the inference server, ensure that the following environment variables are set for multi-node communication.
:::

**node0**

```shell
#!/bin/sh

# Both values can be obtained via ifconfig;
# nic_name is the network interface name corresponding to local_ip
nic_name="xxxx"
local_ip="xxxx"

export VLLM_USE_MODELSCOPE=True
export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export HCCL_BUFFSIZE=1024

vllm serve unsloth/DeepSeek-V3.1-BF16 \
--host 0.0.0.0 \
--port 8004 \
--data-parallel-size 2 \
--data-parallel-size-local 1 \
--data-parallel-address $local_ip \
--data-parallel-rpc-port 13389 \
--tensor-parallel-size 16 \
--seed 1024 \
--served-model-name deepseek_v3.1 \
--enable-expert-parallel \
--max-num-seqs 16 \
--max-model-len 32768 \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.9 \
--additional-config '{"ascend_scheduler_config":{"enabled":true},"torchair_graph_config":{"enabled":true}}'
```

**node1**

```shell
#!/bin/sh

# Both values can be obtained via ifconfig on node1
nic_name="xxx"
local_ip="xxx"

export VLLM_USE_MODELSCOPE=True
export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export HCCL_BUFFSIZE=1024

# Serve the same model as node0; replace { node0 ip } with node0's actual IP
vllm serve unsloth/DeepSeek-V3.1-BF16 \
--host 0.0.0.0 \
--port 8004 \
--headless \
--data-parallel-size 2 \
--data-parallel-size-local 1 \
--data-parallel-start-rank 1 \
--data-parallel-address { node0 ip } \
--data-parallel-rpc-port 13389 \
--tensor-parallel-size 16 \
--seed 1024 \
--served-model-name deepseek_v3.1 \
--max-num-seqs 16 \
--max-model-len 32768 \
--max-num-batched-tokens 32768 \
--enable-expert-parallel \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.92 \
--additional-config '{"ascend_scheduler_config":{"enabled":true},"torchair_graph_config":{"enabled":true}}'
```

::::

::::{tab-item} DeepSeek-V3.1-W8A8

```shell
#!/bin/sh

nic_name="xxx"
local_ip="xxx"

export VLLM_USE_MODELSCOPE=True
export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export HCCL_BUFFSIZE=1024

# The w8a8 weights can be obtained from https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V3.1-W8A8
# If you want to run the quantization manually, please refer to https://vllm-ascend.readthedocs.io/en/latest/user_guide/feature_guide/quantization.html
vllm serve vllm-ascend/DeepSeek-V3.1-W8A8 \
--host 0.0.0.0 \
--port 8004 \
--tensor-parallel-size 16 \
--seed 1024 \
--quantization ascend \
--served-model-name deepseek_v3.1 \
--max-num-seqs 16 \
--max-model-len 32768 \
--max-num-batched-tokens 32768 \
--enable-expert-parallel \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.92 \
--additional-config '{"ascend_scheduler_config":{"enabled":true},"torchair_graph_config":{"enabled":true}}'
```

::::
:::::

Once your server is started, you can query the model with input prompts:

```shell
curl http://{ node0 ip }:8004/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek_v3.1",
    "prompt": "The future of AI is",
    "max_tokens": 50,
    "temperature": 0
  }'
```
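
You can also confirm that the expected model name is being served via the OpenAI-compatible models endpoint:

```shell
# Lists the served models; the output should include "deepseek_v3.1"
curl http://{ node0 ip }:8004/v1/models
```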
