[v0.9.1][doc] add 'How to get better performance in Non-MLA LLMs' in FAQs (#2730)
### What this PR does / why we need it?
This PR adds 'How to get better performance in Non-MLA LLMs' to the FAQs.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
CI passed with newly added and existing tests.
Signed-off-by: rjg-lyh <[email protected]>
`docs/source/faqs.md`: 23 additions & 15 deletions
@@ -50,7 +50,15 @@ There are many channels that you can communicate with our community developers /
Find more details [<u>here</u>](https://vllm-ascend.readthedocs.io/en/v0.9.1-dev/user_guide/support_matrix/supported_features.html).
-### 6. How to solve the problem of "Failed to infer device type" or "libatb.so: cannot open shared object file"?
+### 6. How to get better performance in Non-MLA LLMs?
+
+For `Non-MLA` LLMs, the `chunked prefill` feature is forcibly disabled, because the operators supporting it currently perform suboptimally. In this scenario we therefore enforce the `Ascend scheduler` and keep `chunked prefill` off. Note that when you launch a non-MLA model with a simple script, the underlying behavior deviates from vLLM's default of enabling chunked prefill: chunked prefill is effectively turned off, and prefill and decode are scheduled separately. As a result, inference performance may drop significantly below expectations.
+
+Accordingly, we recommend the following serving configuration to achieve optimal performance on a single node (an illustrative sketch follows the list):
+1. We recommend setting `--max-model-len` to a value just slightly larger than `max_input_len + max_output_len`; this reserves more KV-cache allocation headroom and reduces the risk of OOM.
+2. We recommend aligning `--max-num-batched-tokens` with `--max-model-len`, or setting it to a few times the average input length in your dataset; this helps maintain a good load balance between the prefill and decode phases.
+
+### 7. How to solve the problem of "Failed to infer device type" or "libatb.so: cannot open shared object file"?
Basically, the reason is that the NPU environment is not configured correctly. You can:
1. try `source /usr/local/Ascend/nnal/atb/set_env.sh` to enable NNAL package.
@@ -67,26 +75,26 @@ import vllm
If none of the above steps works, feel free to submit a GitHub issue.
-### 7. How does vllm-ascend perform?
+### 8. How does vllm-ascend perform?
Currently, only some models are well optimized, such as `Qwen2.5 VL`, `Qwen3`, and `Deepseek V3`; others are not yet good enough. Since 0.9.0rc2, Qwen and Deepseek work with graph mode and deliver good performance.
-### 8. How does vllm-ascend work with vllm?
+### 9. How does vllm-ascend work with vllm?
vllm-ascend is a plugin for vllm. Basically, the version of vllm-ascend is the same as the version of vllm. For example, if you use vllm 0.9.1, you should use vllm-ascend 0.9.1 as well. For the main branch, we make sure `vllm-ascend` and `vllm` are compatible for each commit.
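
As a quick sanity check, the installed versions can be compared via standard package metadata (a minimal sketch, not tied to any particular release):

```python
from importlib.metadata import version

# Both distributions should report matching release versions,
# e.g. vllm 0.9.1 alongside vllm-ascend 0.9.1.
print("vllm:       ", version("vllm"))
print("vllm-ascend:", version("vllm-ascend"))
```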
-### 9. Does vllm-ascend support Prefill Disaggregation feature?
+### 10. Does vllm-ascend support Prefill Disaggregation feature?
Yes, Prefill Disaggregation feature is supported on V1 Engine for NPND support.
-### 10. Does vllm-ascend support quantization method?
+### 11. Does vllm-ascend support quantization method?
w8a8 and w4a8 quantization is already supported by vllm-ascend originally on v0.8.4rc2 or higher,
-### 11. How to run w8a8 DeepSeek model?
+### 12. How to run w8a8 DeepSeek model?
Please follow the [inferencing tutorial](https://vllm-ascend.readthedocs.io/en/v0.9.1-dev/tutorials/multi_node.html) and replace the model with DeepSeek.
-### 12. How vllm-ascend is tested
+### 13. How vllm-ascend is tested
vllm-ascend is tested by functional tests, performance tests and accuracy tests.
@@ -98,10 +106,10 @@ vllm-ascend is tested by functional test, performance test and accuracy test.
Finally, we plan to publish the performance test and accuracy test reports for each release in the future.
-### 13. How to fix the error "InvalidVersion" when using vllm-ascend?
+### 14. How to fix the error "InvalidVersion" when using vllm-ascend?
It's usually because you have installed a dev/editable version of the vLLM package. In this case, we provide the env variable `VLLM_VERSION` to let users specify the version of the vLLM package to use. Please set `VLLM_VERSION` to the version of the vLLM package you have installed; the format should be `X.Y.Z`.
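
For example, a minimal sketch (the version string is a placeholder and must match the vLLM release you actually installed):

```python
import os

# Placeholder version string; use the vLLM release you actually installed.
# Equivalent shell form: export VLLM_VERSION=0.9.1
# Set it before vllm / vllm-ascend are imported so the version check sees it.
os.environ["VLLM_VERSION"] = "0.9.1"

import vllm  # imported after the override on purpose
```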
-### 14. How to handle Out Of Memory?
+### 15. How to handle Out Of Memory?
OOM errors typically occur when the model exceeds the memory capacity of a single NPU. For general guidance, you can refer to [vLLM's OOM troubleshooting documentation](https://docs.vllm.ai/en/latest/getting_started/troubleshooting.html#out-of-memory).
In scenarios where NPUs have limited HBM (High Bandwidth Memory) capacity, dynamic memory allocation/deallocation during inference can exacerbate memory fragmentation, leading to OOM. To address this:
@@ -110,7 +118,7 @@ In scenarios where NPUs have limited HBM (High Bandwidth Memory) capacity, dynam
- **Configure `PYTORCH_NPU_ALLOC_CONF`**: Set this environment variable to optimize NPU memory management. For example, you can `export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True` to enable the virtual memory feature and mitigate memory fragmentation caused by frequent dynamic memory size adjustments during runtime; see more details in [PYTORCH_NPU_ALLOC_CONF](https://www.hiascend.com/document/detail/zh/Pytorch/700/comref/Envvariables/Envir_012.html).
-### 15. Failed to enable NPU graph mode when running DeepSeek?
+### 16. Failed to enable NPU graph mode when running DeepSeek?
You may encounter the following error when running DeepSeek with NPU graph mode enabled. The allowed number of queries per KV head when enabling both MLA and graph mode is limited to {32, 64, 128}; **thus this is not supported for DeepSeek-V2-Lite**, as it only has 16 attention heads. NPU graph mode support for DeepSeek-V2-Lite will be added in the future.
And if you're using DeepSeek-V3 or DeepSeek-R1, please make sure that, after the tensor parallel split, `num_heads / num_kv_heads` is in {32, 64, 128} (a quick check is sketched after the error excerpt below).
@@ -120,10 +128,10 @@ And if you're using DeepSeek-V3 or DeepSeek-R1, please make sure after the tenso
[rank0]: EZ9999: [PID: 62938] 2025-05-27-06:52:12.455.807 numHeads / numKvHeads = 8, MLA only support {32, 64, 128}.[FUNC:CheckMlaAttrs][FILE:incre_flash_attention_tiling_check.cc][LINE:1218]
```
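
A quick way to sanity-check the constraint above (the head counts below are illustrative placeholders, not taken from any particular model config):

```python
# The graph-mode constraint: num_heads / num_kv_heads (after the tensor
# parallel split) must be one of {32, 64, 128}.
def graph_mode_ratio_ok(num_heads: int, num_kv_heads: int) -> bool:
    return num_heads // num_kv_heads in {32, 64, 128}

print(graph_mode_ratio_ok(num_heads=8, num_kv_heads=1))    # False -> ratio 8, as in the EZ9999 error above
print(graph_mode_ratio_ok(num_heads=128, num_kv_heads=1))  # True  -> ratio 128
```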
-### 16. Failed to reinstall vllm-ascend from source after uninstalling vllm-ascend?
+### 17. Failed to reinstall vllm-ascend from source after uninstalling vllm-ascend?
You may encounter a C compilation failure when reinstalling vllm-ascend from source using pip. If the installation fails, it is recommended to use `python setup.py install` to install, or `python setup.py clean` to clear the cache.
-### 17. How to generate deterministic results when using vllm-ascend?
+### 18. How to generate deterministic results when using vllm-ascend?
There are several factors that affect output certainty:
1. Sampling method: use **greedy sampling** by setting `temperature=0` in `SamplingParams`, e.g.:
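
A minimal sketch of greedy decoding (the model and prompt are placeholders):

```python
from vllm import LLM, SamplingParams

# temperature=0 makes the sampler pick the argmax token at every step
# (greedy decoding), removing sampling randomness from the output.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # placeholder model
params = SamplingParams(temperature=0, max_tokens=64)
outputs = llm.generate(["What is the capital of France?"], params)
print(outputs[0].outputs[0].text)
```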
-### 18. How to fix the error "ImportError: Please install vllm[audio] for audio support" for Qwen2.5-Omni model?
+### 19. How to fix the error "ImportError: Please install vllm[audio] for audio support" for Qwen2.5-Omni model?
The `Qwen2.5-Omni` model requires the `librosa` package. You need to install the `qwen-omni-utils` package (`pip install qwen-omni-utils`) to ensure all dependencies are met;
this package will install `librosa` and its related dependencies, resolving the `ImportError: No module named 'librosa'` issue and ensuring audio processing works correctly.
-### 19. Failed to run with `ray` distributed backend?
+### 20. Failed to run with `ray` distributed backend?
You might face the following errors when running with the ray backend in distributed scenarios:
```
@@ -184,7 +192,7 @@ This has been solved in `ray>=2.47.1`, thus we could solve this as following: