examples/sglang/multinode-examples.md
SGLang allows you to deploy models that span multiple nodes by adding the `dist-init-addr`, `nnodes`, and `node-rank` arguments. Below we demonstrate an example of deploying DeepSeek R1 for disaggregated serving across 4 nodes. This example requires 4 nodes of 8xH100 GPUs.

**Step 1**: Use the provided helper script to generate commands to start NATS/ETCD on your head prefill node. This script will also give you environment variables to export on each other node. You will need the IP addresses of your head prefill and head decode node to run this script.

```bash
./utils/gen_env_vars.sh
```
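
The variables the script emits should look roughly like the sketch below. The endpoint formats mirror the `NATS_SERVER`/`ETCD_ENDPOINTS` exports this guide previously had you set by hand; the IP addresses are placeholders, and the exact output of `gen_env_vars.sh` may differ:

```shell
# Hypothetical sketch of the exports gen_env_vars.sh generates for worker nodes.
HEAD_PREFILL_NODE_IP=10.0.0.1   # substitute your head prefill node IP
HEAD_DECODE_NODE_IP=10.0.0.2    # substitute your head decode node IP
export HEAD_PREFILL_NODE_IP HEAD_DECODE_NODE_IP

# NATS/ETCD run on the head prefill node and must be reachable from all nodes.
export NATS_SERVER="nats://${HEAD_PREFILL_NODE_IP}"
export ETCD_ENDPOINTS="${HEAD_PREFILL_NODE_IP}:2379"
```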
**Step 2**: Ensure that your configuration file has the required arguments. Here's an example configuration that runs the prefill workers with the model in TP16:
Node 2: Run the remaining 8 shards of the prefill worker

```bash
python3 components/worker.py \
  --model-path /model/ \
  --served-model-name deepseek-ai/DeepSeek-R1 \
  --tp 16 \
  --dp-size 16 \
  --dist-init-addr ${HEAD_PREFILL_NODE_IP}:29500 \
  --nnodes 2 \
  --node-rank 1 \
  --enable-dp-attention \
  --trust-remote-code \
  --skip-tokenizer-init \
  --disaggregation-mode prefill \
  --disaggregation-transfer-backend nixl \
  --disaggregation-bootstrap-port 30001 \
  --mem-fraction-static 0.82
```
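
The `--tp`/`--nnodes`/`--node-rank` split above can be sketched numerically. This assumes tensor-parallel shards are assigned contiguously per node, which is a simplification for illustration rather than a statement about SGLang internals:

```python
def node_tp_ranks(tp: int, nnodes: int, node_rank: int) -> list[int]:
    """Global TP shard indices hosted by one node, assuming an even contiguous split."""
    per_node = tp // nnodes            # 16 shards / 2 nodes = 8 shards per node
    start = node_rank * per_node
    return list(range(start, start + per_node))

# The head prefill node (--node-rank 0) hosts the first 8 shards; node 2
# (--node-rank 1) hosts the "remaining 8 shards" this step refers to.
print(node_tp_ranks(16, 2, 0))  # → [0, 1, 2, 3, 4, 5, 6, 7]
print(node_tp_ranks(16, 2, 1))  # → [8, 9, 10, 11, 12, 13, 14, 15]
```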
Node 3: Run the first 8 shards of the decode worker

```bash
python3 components/decode_worker.py \
  --model-path /model/ \
  --served-model-name deepseek-ai/DeepSeek-R1 \
  --tp 16 \
  --dp-size 16 \
  --dist-init-addr ${HEAD_DECODE_NODE_IP}:29500 \
  --nnodes 2 \
  --node-rank 0 \
  --enable-dp-attention \
  --trust-remote-code \
  --skip-tokenizer-init \
  --disaggregation-mode decode \
  --disaggregation-transfer-backend nixl \
  --disaggregation-bootstrap-port 30001 \
  --mem-fraction-static 0.82
```
Node 4: Run the remaining 8 shards of the decode worker

```bash
python3 components/decode_worker.py \
  --model-path /model/ \
  --served-model-name deepseek-ai/DeepSeek-R1 \
  --tp 16 \
  --dp-size 16 \
  --dist-init-addr ${HEAD_DECODE_NODE_IP}:29500 \
  --nnodes 2 \
  --node-rank 1 \
  --enable-dp-attention \
  --trust-remote-code \
  --skip-tokenizer-init \
  --disaggregation-mode decode \
  --disaggregation-transfer-backend nixl \
  --disaggregation-bootstrap-port 30001 \
  --mem-fraction-static 0.82
```
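
Before moving on, it can be worth confirming that every node can actually reach the coordination endpoints, since the workers will hang silently otherwise. Below is a small hypothetical helper (not part of this repo); the port list reflects the ports used above, plus the common defaults for etcd (2379) and NATS (4222):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    head_ip = "10.0.0.1"  # substitute ${HEAD_PREFILL_NODE_IP}
    # etcd, NATS, torch dist-init, disaggregation bootstrap
    for port in (2379, 4222, 29500, 30001):
        print(port, port_reachable(head_ip, port))
```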
**Step 3**: Run inference
SGLang typically requires a warmup period for the DeepGEMM kernels to compile and load. We recommend sending a few warmup requests and confirming that the DeepGEMM kernels have loaded before serving real traffic.
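
A minimal warmup client sketch is shown below. It assumes the frontend exposes an OpenAI-compatible `/v1/chat/completions` endpoint; the base URL, port, and payload shape are assumptions for illustration, not taken from this repo:

```python
import json
import urllib.request

def build_warmup_payload(model: str, max_tokens: int = 32) -> dict:
    """A short generation request; a few of these give the kernels time to load."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Warmup request, reply briefly."}],
        "max_tokens": max_tokens,
    }

def send_warmup(base_url: str, model: str, n: int = 3) -> None:
    """Send n warmup requests sequentially and discard the responses."""
    body = json.dumps(build_warmup_payload(model)).encode()
    for _ in range(n):
        req = urllib.request.Request(
            f"{base_url}/v1/chat/completions",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()

# Example (hypothetical frontend address and port):
# send_warmup("http://10.0.0.1:8000", "deepseek-ai/DeepSeek-R1")
```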