
Commit bf84f2d

[Doc] Support kimi-k2-w8a8 (#2162)
### What this PR does / why we need it?

The kimi-k2 model is similar to the deepseek model, so only a few changes are needed to support it. What this PR does:

1. Add a kimi-k2-w8a8 deployment doc
2. Update the quantization doc
3. Update the torchair support list

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.10.0
- vLLM main: vllm-project/vllm@9edd1db

Signed-off-by: wangli <[email protected]>
1 parent 875a86c commit bf84f2d

File tree: 8 files changed, +192 −38 lines

docs/source/assets/multi_node_dp.png

Binary file removed (−115 KB). Two new image assets (90.3 KB and 129 KB) are added alongside it: the DeepSeek and Kimi deployment diagrams referenced in the tutorials below.

docs/source/tutorials/index.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -13,4 +13,5 @@ multi_npu_qwen3_moe
 multi_npu_quantization
 single_node_300i
 multi_node
+multi_node_kimi
 :::
```

docs/source/tutorials/multi_node.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -90,12 +90,12 @@ docker run --rm \
   -it $IMAGE bash
 ```
 
+Run the following scripts on two nodes respectively
+
 :::{note}
-Before launch the inference server, ensure some environment variables are set for multi node communication
+Before launching the inference server, ensure the following environment variables are set for multi-node communication
 :::
 
-Run the following scripts on two nodes respectively
-
 **node0**
 
 ```shell
@@ -178,7 +178,7 @@ vllm serve /root/.cache/ds_v3 \
 ```
 
 The Deployment view looks like:
-![alt text](../assets/multi_node_dp.png)
+![alt text](../assets/multi_node_dp_deepseek.png)
 
 Once your server is started, you can query the model with input prompts:
````
docs/source/tutorials/multi_node_kimi.md

Lines changed: 153 additions & 0 deletions (new file)

# Multi-Node-DP (Kimi-K2)

## Verify Multi-Node Communication Environment

Refer to the verification process in [multi_node.md](https://vllm-ascend.readthedocs.io/en/latest/tutorials/multi_node.html#verification-process).
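
Before launching anything, it is worth confirming that the NPU NICs on every node are up and reachable. Below is a minimal sketch of that kind of check; it assumes the standard `hccn_tool` shipped with the Ascend driver and 16 NPUs per A3 node, so adjust the device range and consult the linked guide for the authoritative steps:

```bash
# Check link status and NIC IP of every NPU on this node (0-15 on an Atlas 800 A3)
for i in $(seq 0 15); do
    hccn_tool -i $i -link -g   # expect "link status: UP"
    hccn_tool -i $i -ip -g     # prints the NIC IP used for HCCL traffic
done
```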

## Run with docker

Assume you have two Atlas 800 A3 (64G*16) nodes (or four A2 nodes with 8 NPUs each) and want to deploy the `Kimi-K2-Instruct-W8A8` quantized model across the nodes.

```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:|vllm_ascend_version|
export NAME=vllm-ascend

# Run the container using the defined variables
# Note: if you are running a bridge network with docker, please expose available ports for multi-node communication in advance
docker run --rm \
--name $NAME \
--net=host \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci8 \
--device /dev/davinci9 \
--device /dev/davinci10 \
--device /dev/davinci11 \
--device /dev/davinci12 \
--device /dev/davinci13 \
--device /dev/davinci14 \
--device /dev/davinci15 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /mnt/sfs_turbo/.cache:/home/cache \
-it $IMAGE bash
```

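Inside the container, you can optionally confirm that every NPU on the node is visible before going further. This is just a sanity check using the `npu-smi` binary mounted above:

```bash
# Lists each NPU on the node along with its health status
npu-smi info
```
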
Run the following scripts on the two nodes respectively.

:::{note}
Before launching the inference server, ensure the following environment variables are set for multi-node communication.
:::

**node0**

```shell
#!/bin/sh

# Obtain these values via ifconfig
# nic_name is the network interface name corresponding to local_ip
nic_name="xxxx"
local_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export HCCL_BUFFSIZE=1024

# The w8a8 weights can be obtained from https://www.modelscope.cn/models/vllm-ascend/Kimi-K2-Instruct-W8A8
# If you want to do the quantization manually, please refer to https://vllm-ascend.readthedocs.io/en/latest/user_guide/feature_guide/quantization.html
vllm serve /home/cache/weights/Kimi-K2-Instruct-W8A8 \
--host 0.0.0.0 \
--port 8004 \
--data-parallel-size 4 \
--api-server-count 2 \
--data-parallel-size-local 2 \
--data-parallel-address $local_ip \
--data-parallel-rpc-port 13389 \
--seed 1024 \
--served-model-name kimi \
--quantization ascend \
--tensor-parallel-size 8 \
--enable-expert-parallel \
--max-num-seqs 16 \
--max-model-len 32768 \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.9 \
--additional-config '{"ascend_scheduler_config":{"enabled":true},"torchair_graph_config":{"enabled":true}}'
```

**node1**

```shell
#!/bin/sh

nic_name="xxxx"
local_ip="xxxx"
# IP address of node0 (the head node)
node0_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export HCCL_BUFFSIZE=1024

vllm serve /home/cache/weights/Kimi-K2-Instruct-W8A8 \
--host 0.0.0.0 \
--port 8004 \
--headless \
--data-parallel-size 4 \
--data-parallel-size-local 2 \
--data-parallel-start-rank 2 \
--data-parallel-address $node0_ip \
--data-parallel-rpc-port 13389 \
--seed 1024 \
--tensor-parallel-size 8 \
--served-model-name kimi \
--max-num-seqs 16 \
--max-model-len 32768 \
--quantization ascend \
--max-num-batched-tokens 4096 \
--enable-expert-parallel \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.92 \
--additional-config '{"ascend_scheduler_config":{"enabled":true},"torchair_graph_config":{"enabled":true}}'
```

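Wait for the engines on both nodes to finish loading the weights before sending traffic. A simple readiness probe, assuming vLLM's standard `/health` endpoint and run from any machine that can reach node0, looks like:

```bash
# Poll the API server on node0 until it reports healthy
until curl -sf http://<node0-ip>:8004/health > /dev/null; do
    echo "waiting for the vLLM server to come up..."
    sleep 10
done
echo "server is ready"
```
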
The deployment view looks like:

![alt text](../assets/multi_node_dp_kimi.png)

Once your server is started, you can query the model with input prompts:

```shell
curl http://<node0-ip>:8004/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "kimi",
        "prompt": "The future of AI is",
        "max_tokens": 50,
        "temperature": 0
    }'
```
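
The same deployment also serves the standard chat endpoint. A sketch of the equivalent chat-style request is shown below; the `kimi` name matches `--served-model-name` above:

```bash
curl http://<node0-ip>:8004/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "kimi",
        "messages": [{"role": "user", "content": "The future of AI is"}],
        "max_tokens": 50,
        "temperature": 0
    }'
```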

docs/source/user_guide/feature_guide/quantization.md

Lines changed: 33 additions & 33 deletions
````diff
@@ -8,54 +8,57 @@ Since 0.9.0rc2 version, quantization feature is experimentally supported in vLLM
 
 To quantize a model, users should install [ModelSlim](https://gitee.com/ascend/msit/blob/master/msmodelslim/README.md), which is the Ascend compression and acceleration tool. It is an affinity-based compression tool designed for acceleration, using compression as its core technology and built upon the Ascend platform.
 
-Currently, only the specific tag [modelslim-VLLM-8.1.RC1.b020_001](https://gitee.com/ascend/msit/blob/modelslim-VLLM-8.1.RC1.b020_001/msmodelslim/README.md) of modelslim works with vLLM Ascend. Please do not install other version until modelslim master version is available for vLLM Ascend in the future.
-
 Install modelslim:
 
 ```bash
-git clone https://gitee.com/ascend/msit -b modelslim-VLLM-8.1.RC1.b020_001
+git clone https://gitee.com/ascend/msit
+# Optional, this commit has been verified
+git checkout f8ab35a772a6c1ee7675368a2aa4bafba3bedd1a
 
 cd msit/msmodelslim
 bash install.sh
 pip install accelerate
 ```
 
 ## Quantize model
 
-Take [DeepSeek-V2-Lite](https://modelscope.cn/models/deepseek-ai/DeepSeek-V2-Lite) as an example, you just need to download the model, and then execute the convert command. The command is shown below. More info can be found in modelslim doc [deepseek w8a8 dynamic quantization docs](https://gitee.com/ascend/msit/blob/modelslim-VLLM-8.1.RC1.b020_001/msmodelslim/example/DeepSeek/README.md#deepseek-v2-w8a8-dynamic%E9%87%8F%E5%8C%96).
-
-```bash
-cd example/DeepSeek
-python3 quant_deepseek.py --model_path {original_model_path} --save_directory {quantized_model_save_path} --device_type cpu --act_method 2 --w_bit 8 --a_bit 8 --is_dynamic True
-```
-
 :::{note}
-You can also download the quantized model that we uploaded. Please note that these weights should be used for test only. For example, https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V2-Lite-W8A8
+You can either convert the model yourself or use the quantized model we uploaded,
+see https://www.modelscope.cn/models/vllm-ascend/Kimi-K2-Instruct-W8A8
+The conversion process requires a large amount of CPU memory; please ensure the RAM size is greater than 2 TB.
 :::
 
-Once convert action is done, there are two important files generated.
+### Adaptations and changes
+1. Ascend does not support the `flash_attn` library. To run the model, follow the [guide](https://gitee.com/ascend/msit/blob/master/msmodelslim/example/DeepSeek/README.md#deepseek-v3r1) and comment out the relevant parts of the code in `modeling_deepseek.py` located in the weights folder.
+2. The current version of transformers does not support loading weights in FP8 quantization format. Follow the [guide](https://gitee.com/ascend/msit/blob/master/msmodelslim/example/DeepSeek/README.md#deepseek-v3r1) and delete the quantization-related fields from `config.json` in the weights folder.
 
-1. [config.json](https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V2-Lite-W8A8/file/view/master/config.json?status=1). Please make sure that there is no `quantization_config` field in it.
+### Generate the w8a8 weights
 
-2. [quant_model_description.json](https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V2-Lite-W8A8/file/view/master/quant_model_description.json?status=1). All the converted weights info are recorded in this file.
+```bash
+cd example/DeepSeek
 
-Here is the full converted model files:
+export ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
+export PYTORCH_NPU_ALLOC_CONF=expandable_segments:False
+export MODEL_PATH="/root/.cache/Kimi-K2-Instruct"
+export SAVE_PATH="/root/.cache/Kimi-K2-Instruct-W8A8"
+
+python3 quant_deepseek_w8a8.py --model_path $MODEL_PATH --save_path $SAVE_PATH --batch_size 4
+```
+
+Here are the full converted model files, except the safetensors:
 
 ```bash
 .
-├── config.json
-├── configuration_deepseek.py
-├── configuration.json
-├── generation_config.json
-├── quant_model_description.json
-├── quant_model_weight_w8a8_dynamic-00001-of-00004.safetensors
-├── quant_model_weight_w8a8_dynamic-00002-of-00004.safetensors
-├── quant_model_weight_w8a8_dynamic-00003-of-00004.safetensors
-├── quant_model_weight_w8a8_dynamic-00004-of-00004.safetensors
-├── quant_model_weight_w8a8_dynamic.safetensors.index.json
-├── README.md
-├── tokenization_deepseek_fast.py
-├── tokenizer_config.json
-└── tokenizer.json
+|-- config.json
+|-- configuration.json
+|-- configuration_deepseek.py
+|-- generation_config.json
+|-- modeling_deepseek.py
+|-- quant_model_description.json
+|-- quant_model_weight_w8a8_dynamic.safetensors.index.json
+|-- tiktoken.model
+|-- tokenization_kimi.py
+`-- tokenizer_config.json
 ```
 
 ## Run the model
````
````diff
@@ -90,10 +93,7 @@ for output in outputs:
 
 ### Online inference
 
-```bash
-# Enable quantization by specifying `--quantization ascend`
-vllm serve {quantized_model_save_path} --served-model-name "deepseek-v2-lite-w8a8" --max-model-len 2048 --quantization ascend --trust-remote-code
-```
+Enable quantization by specifying `--quantization ascend`. For more details, see the DeepSeek-V3-W8A8 [tutorial](https://vllm-ascend.readthedocs.io/en/latest/tutorials/multi_node.html).
 
 ## FAQs
````
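
For quick reference, here is a minimal sketch of an online-serving invocation with quantization enabled; the model path and served name are placeholders, and a full multi-node Kimi-K2 deployment additionally needs the parallelism flags from the linked tutorial:

```bash
# Enable Ascend quantization for an already-converted w8a8 checkpoint
vllm serve {quantized_model_save_path} \
    --served-model-name "kimi-w8a8" \
    --quantization ascend \
    --max-model-len 2048 \
    --trust-remote-code
```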

vllm_ascend/ascend_config.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -17,7 +17,7 @@
 
 from vllm.logger import logger
 
-TORCHAIR_MODEL_LIST = ["deepseek", "pangu"]
+TORCHAIR_MODEL_LIST = ["deepseek", "pangu", "kimi_k2"]
 
 
 def _check_torchair_supported(model_type: str):
```
