Commit 9e9ec57: update readme for vLLM 0.10.0 release on Intel GPU (#833)
Signed-off-by: Yan Ma <[email protected]> Co-authored-by: Srikanth Ramakrishna <[email protected]>
File: vllm/0.10.0-xpu.md

# Optimize LLM serving with vLLM on Intel® GPUs

vLLM is a fast and easy-to-use library for LLM inference and serving. It has evolved into a community-driven project with contributions from both academia and industry. Intel, as one of the community contributors, is working actively to bring competitive performance to vLLM on Intel® platforms, including Intel® Xeon® Scalable processors, Intel® discrete GPUs, and Intel® Gaudi® AI accelerators. This README focuses on Intel® discrete GPUs and provides the information you need to get workloads running well on your Intel® graphics cards.

The vLLM in this docker image is based on [v0.10.0](https://github.com/vllm-project/vllm/tree/v0.10.0) and uses the following best-known configuration (BKC):

| Ingredients | Version |
|-------------|---------|
| Host OS | Ubuntu 25.04 |
| Python | 3.12 |
| KMD driver | 6.14.0 |
| oneAPI | 2025.1.3-0 |
| PyTorch | 2.8 |
| IPEX | 2.8.10 |
| oneCCL | 2021.15.4 |

## 1. What's Supported?

Intel GPUs benefit from enhancements brought by the [vLLM V1 engine](https://blog.vllm.ai/2025/01/27/v1-alpha-release.html), including:

* Optimized Execution Loop & API Server
* Simple & Flexible Scheduler
* Zero-Overhead Prefix Caching
* Clean Architecture for Tensor-Parallel Inference
* Efficient Input Preparation

In addition, following the vLLM V1 design, corresponding optimized kernels and features have been implemented for Intel GPUs:

* chunked_prefill:

chunked_prefill is an optimization in vLLM that allows large prefill requests to be divided into smaller chunks and batched together with decode requests. This approach prioritizes decode requests, improving inter-token latency (ITL) and GPU utilization by combining compute-bound (prefill) and memory-bound (decode) requests in the same batch. The vLLM V1 engine is built on this feature, and in this release it is also supported on Intel GPUs by leveraging the corresponding kernel from Intel® Extension for PyTorch\* for model execution.
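The chunking-and-batching idea can be sketched independently of vLLM. The toy function below (an illustration only, not vLLM's actual scheduler) takes a fixed-size chunk from a long prefill and batches it with all pending decode requests:

```python
def schedule_step(prefill_tokens, decode_reqs, chunk_size):
    """Toy chunked-prefill step: take at most `chunk_size` prefill tokens
    and batch them with all pending decode requests (each decode request
    contributes one token). Returns (batch, remaining_prefill_tokens)."""
    chunk, rest = prefill_tokens[:chunk_size], prefill_tokens[chunk_size:]
    # Decodes go first: they are latency-sensitive (memory-bound), while
    # the prefill chunk (compute-bound) fills the remaining batch budget.
    batch = [("decode", r) for r in decode_reqs] + [("prefill", chunk)]
    return batch, rest

prefill = list(range(10))  # a 10-token prefill request
batch, rest = schedule_step(prefill, decode_reqs=["a", "b"], chunk_size=4)
print(batch)  # [('decode', 'a'), ('decode', 'b'), ('prefill', [0, 1, 2, 3])]
print(rest)   # [4, 5, 6, 7, 8, 9]
```

Each engine step consumes one more chunk, so decode requests keep making progress instead of waiting behind an entire long prefill.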

* FP8 W8A16 MatMul:

vLLM supports FP8 (8-bit floating point) weights using hardware acceleration on GPUs. We support weight-only online dynamic quantization with FP8, which allows a 2x reduction in model memory requirements and up to a 1.6x improvement in throughput with minimal impact on accuracy.

Dynamic quantization of an original-precision BF16/FP16 model to FP8 can be achieved with vLLM without any calibration data. You can enable the feature by specifying `--quantization="fp8"` on the command line or setting `quantization="fp8"` in the LLM constructor.

The FP8 types typically supported in hardware have two distinct representations, each useful in different scenarios:

* **E4M3**: Consists of 1 sign bit, 4 exponent bits, and 3 bits of mantissa. It can store values up to +/-448 and `nan`.
* **E5M2**: Consists of 1 sign bit, 5 exponent bits, and 2 bits of mantissa. It can store values up to +/-57344, +/-`inf`, and `nan`. The tradeoff for the increased dynamic range is lower precision of the stored values.

We support both representations through the environment variable `VLLM_XPU_FP8_DTYPE`, with default value `E5M2`.

:::{warning}
Currently, by default the model is loaded at its original precision before being quantized down to 8 bits, so you need enough device memory to hold the whole model. To avoid this, set `VLLM_OFFLOAD_WEIGHTS_BEFORE_QUANT=1` to offload weights to the CPU before quantization; the quantized weights are then kept on the device.
:::
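
The maxima quoted above follow directly from the bit layouts. As a quick sanity check (plain arithmetic, not vLLM code), the largest finite value of each format is the largest non-reserved significand times two to the largest usable exponent:

```python
# E4M3 (the common "fn" variant, bias 7): the all-ones mantissa combined
# with the maximum exponent field encodes NaN, so the largest finite
# significand is 1.110b = 1.75, at exponent 2**(15 - 7) = 2**8.
e4m3_max = (1 + 6 / 8) * 2 ** (15 - 7)

# E5M2 (IEEE-style, bias 15): the all-ones exponent field 11111b is
# reserved for inf/NaN, so the largest finite value uses exponent field
# 11110b (= 30) with the all-ones mantissa 1.11b = 1.75.
e5m2_max = (1 + 3 / 4) * 2 ** (30 - 15)

print(e4m3_max, e5m2_max)  # 448.0 57344.0
```

E5M2 trades one mantissa bit for a much wider dynamic range, which is why it is the safer default for unscaled weights.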

* Multi Modality Support

In this release, image/audio input can be processed using Qwen2.5-VL series models, like Qwen/Qwen2.5-VL-32B-Instruct on 4 BMG cards.

* Pooling Models Support

vLLM supports pooling models such as embedding, classification and reward models. All of these are now supported on Intel® GPUs. For detailed usage, refer to the [pooling models guide](https://docs.vllm.ai/en/latest/models/pooling_models.html).
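For background, a pooling model reduces per-token hidden states to a single vector. Mean pooling, one common strategy, is shown below as a toy sketch (plain lists stand in for tensors; this is not vLLM internals):

```python
def mean_pool(token_states):
    """Average a sequence of per-token hidden-state vectors into one
    fixed-size embedding by taking the mean over the sequence axis."""
    dim = len(token_states[0])
    n = len(token_states)
    return [sum(vec[d] for vec in token_states) / n for d in range(dim)]

states = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 tokens, hidden dim 2
print(mean_pool(states))  # [3.0, 4.0]
```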

* Pipeline Parallelism

Pipeline parallelism distributes model layers across multiple GPUs, with each GPU processing a different part of the model in sequence. For Intel® GPUs, we support it on a single node with `mp` as the backend.

We also support some experimental features, including:

* **torch.compile**: Can be enabled for the fp16/bf16 path.
* **speculative decoding**: Supports the `n-gram`, `EAGLE` and `EAGLE3` methods.
* **async scheduling**: Can be enabled with `--async-scheduling`. This may help reduce CPU overheads, leading to better latency and throughput. However, async scheduling is currently not supported together with some features such as structured outputs, speculative decoding, and pipeline parallelism.
* **MoE models**: Models with an MoE structure, like DeepSeek-V2-Lite and Qwen/Qwen3-30B-A3B, are now supported.
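
For intuition on the `n-gram` method above: it drafts tokens by matching the most recent n-gram against earlier context and proposing whatever followed it last time, with the target model then verifying the draft. A toy version (illustrative only, not vLLM's implementation):

```python
def ngram_draft(tokens, n=2, k=3):
    """Propose up to k draft tokens: find the most recent earlier
    occurrence of the last n tokens and return what followed it."""
    if len(tokens) < n:
        return []
    tail = tokens[-n:]
    # Scan right to left, skipping the trivial match with the tail itself.
    for i in range(len(tokens) - n - 1, -1, -1):
        if tokens[i:i + n] == tail:
            return tokens[i + n:i + n + k]
    return []

ctx = ["the", "cat", "sat", "on", "the", "cat"]
print(ngram_draft(ctx))  # ['sat', 'on', 'the']
```

Drafting is nearly free, and repetitive text (code, structured data) matches often, which is where this method pays off.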

## Optimizations

* Long context optimization: Decoding with 4K and longer contexts is significantly improved by an optimized flash attention kernel, with up to 3.7x better next-token latency and 2.4x higher output throughput for Llama-70B at 40K/512 input/output sequence length.
* Tensor parallel inference: Intel® oneAPI Collective Communications Library (oneCCL) is optimized to provide boosted performance on Intel® Arc™ B-Series graphics cards. For details, please refer to [2021.15.4](https://github.com/uxlfoundation/oneCCL/releases/tag/2021.15.4).
* oneDNN GEMM optimization: fp8 GEMM performance at batch sizes from 1 to 128 is optimized to above 80% TFLOPS efficiency.

## Supported Models

The table below lists models that have been verified by Intel. Other models supported by vLLM may also work on Intel® GPUs.

| Model Type | Model (company/model name) | FP16 | Dynamic Online FP8 |
|-----------------|-------------------------------------------|------|------|
| Text Generation | deepseek-ai/DeepSeek-R1-Distill-Llama-8B | ✅︎ | ✅︎ |
| Text Generation | deepseek-ai/DeepSeek-R1-Distill-Qwen-14B | ✅︎ | ✅︎ |
| Text Generation | deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | ✅︎ | ✅︎ |
| Text Generation | deepseek-ai/DeepSeek-R1-Distill-Llama-70B | ✅︎ | ✅︎ |
| Text Generation | Qwen/Qwen2.5-72B-Instruct | ✅︎ | ✅︎ |
| Text Generation | Qwen/Qwen3-14B | ✅︎ | ✅︎ |
| Text Generation | Qwen/Qwen3-32B | ✅︎ | ✅︎ |
| Text Generation | Qwen/Qwen3-30B-A3B | ✅︎ | ✅︎ |
| Text Generation | deepseek-ai/DeepSeek-V2-Lite | ✅︎ | ✅︎ |
| Text Generation | meta-llama/Llama-3.1-8B-Instruct | ✅︎ | ✅︎ |
| Text Generation | baichuan-inc/Baichuan2-13B-Chat | ✅︎ | ✅︎ |
| Text Generation | THUDM/GLM-4-9B-chat | ✅︎ | ✅︎ |
| Text Generation | THUDM/GLM-4v-9B-chat | ✅︎ | ✅︎ |
| Text Generation | THUDM/CodeGeex4-All-9B | ✅︎ | ✅︎ |
| Text Generation | chuhac/TeleChat2-35B | ✅︎ | ✅︎ |
| Text Generation | 01-ai/Yi1.5-34B-Chat | ✅︎ | ✅︎ |
| Text Generation | deepseek-ai/DeepSeek-Coder-33B-base | ✅︎ | ✅︎ |
| Text Generation | meta-llama/Llama-2-13b-chat-hf | ✅︎ | ✅︎ |
| Text Generation | Qwen/Qwen1.5-14B-Chat | ✅︎ | ✅︎ |
| Text Generation | Qwen/Qwen1.5-32B-Chat | ✅︎ | ✅︎ |
| Multi Modality | Qwen/Qwen2.5-VL-72B-Instruct | ✅︎ | ✅︎ |
| Multi Modality | Qwen/Qwen2.5-VL-32B-Instruct | ✅︎ | ✅︎ |
| Embedding Model | Qwen/Qwen3-Embedding-8B | ✅︎ | ✅︎ |
| Embedding Model | BAAI/bge-large-en-v1.5 | ✅︎ | ✅︎ |
| Embedding Model | BAAI/bge-large-zh-v1.5 | ✅︎ | ✅︎ |
| Embedding Model | BAAI/bge-m3 | ✅︎ | ✅︎ |
| Reranker Model | Qwen/Qwen3-Reranker-8B | ✅︎ | ✅︎ |
| Reranker Model | BAAI/bge-reranker-v2-m3 | ✅︎ | ✅︎ |
| Reranker Model | BAAI/bge-reranker-large | ✅︎ | ✅︎ |
| Reward Model | internlm/internlm2-1_8b-reward | ✅︎ | ✅︎ |

## 2. Limitations

Some vLLM V1 features may need extra support, including LoRA (Low-Rank Adaptation), pipeline parallelism on Ray, EP (Expert Parallelism)/TP (Tensor Parallelism) MoE (Mixture of Experts), DP (Data Parallelism) attention, and MLA (Multi-head Latent Attention).

Known issues:

* Qwen/Qwen3-30B-A3B needs `--gpu-memory-utilization=0.8` due to its high memory consumption.
* W8A8 models quantized with llm-compressor, like RedHatAI/DeepSeek-R1-Distill-Qwen-32B-FP8-dynamic, are not supported yet.

## 3. How to Get Started

### 3.1. Prerequisite

| OS | Hardware |
| ---------- | ---------- |
| Ubuntu 25.04 | Intel® Arc™ B-Series |

### 3.2. Prepare a Serving Environment

1. Get the released docker image with command `docker pull intel/vllm:0.10.0-xpu`
2. Instantiate a docker container with command `docker run -t -d --shm-size 10g --net=host --ipc=host --privileged -v /dev/dri/by-path:/dev/dri/by-path --name=vllm-test --device /dev/dri:/dev/dri --entrypoint= intel/vllm:0.10.0-xpu /bin/bash`
3. Source the oneAPI environment to ensure the correct variables are set with command `docker exec vllm-test /bin/bash -c "source /opt/intel/oneapi/setvars.sh --force"`
4. Run command `docker exec -it vllm-test bash` in 2 separate terminals to enter container environments for the server and the client respectively.

\* From here on, all commands are expected to be run inside the docker container unless explicitly noted otherwise.

In both environments, you may then wish to set a `HUGGING_FACE_HUB_TOKEN` environment variable to make sure necessary files can be downloaded from the HuggingFace website.

```bash
export HUGGING_FACE_HUB_TOKEN=xxxxxx
```

### 3.3. Launch Workloads

#### 3.3.1. Launch Server in the Server Environment

Command:

```bash
TORCH_LLM_ALLREDUCE=1 VLLM_USE_V1=1 VLLM_WORKER_MULTIPROC_METHOD=spawn python3 -m vllm.entrypoints.openai.api_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --dtype=float16 --device=xpu --enforce-eager --port 8000 --block-size 64 --gpu-memory-utilization 0.9 --no-enable-prefix-caching --trust-remote-code --disable-sliding-window --disable-log-requests --max-num-batched-tokens=8192 --max-model-len 4096 -tp=4 --quantization fp8
```

Note that by default fp8 online quantization uses `e5m2`; you can switch to `e4m3` by explicitly adding the environment variable `VLLM_XPU_FP8_DTYPE=e4m3`. If there is not enough memory to hold the whole model before quantization to fp8, you can use `VLLM_OFFLOAD_WEIGHTS_BEFORE_QUANT=1` to offload weights to the CPU first.

Expected output:

```bash
INFO 02-20 03:20:29 api_server.py:937] Starting vLLM API server on http://0.0.0.0:8000
INFO 02-20 03:20:29 launcher.py:23] Available routes are:
INFO 02-20 03:20:29 launcher.py:31] Route: /openapi.json, Methods: HEAD, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /docs, Methods: HEAD, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /redoc, Methods: HEAD, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /health, Methods: GET
INFO 02-20 03:20:29 launcher.py:31] Route: /ping, Methods: POST, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /tokenize, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /detokenize, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/models, Methods: GET
INFO 02-20 03:20:29 launcher.py:31] Route: /version, Methods: GET
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/chat/completions, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/completions, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/embeddings, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /pooling, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /score, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/score, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/audio/transcriptions, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /rerank, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/rerank, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v2/rerank, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /invocations, Methods: POST
INFO: Started server process [1636943]
INFO: Waiting for application startup.
INFO: Application startup complete.
```

Startup may take some time. The line `INFO: Application startup complete.` indicates that the server is ready.
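
Once the server is ready, you can also exercise the OpenAI-compatible `/v1/completions` endpoint directly. The helper below is only illustrative; it builds the request body (the model name and port match the launch command above), and the commented-out lines show how it would be sent with the standard library:

```python
import json

def completion_request(model, prompt, max_tokens=64):
    """Build the JSON body for an OpenAI-compatible /v1/completions call."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

body = completion_request("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "Hello,")
print(json.dumps(body))

# To send it to the server launched above (requires the server running):
# from urllib import request
# req = request.Request("http://localhost:8000/v1/completions",
#                       data=json.dumps(body).encode(),
#                       headers={"Content-Type": "application/json"})
# print(request.urlopen(req).read().decode())
```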

#### 3.3.2. Raise Requests for Benchmarking in the Client Environment

We leverage a [benchmarking script](https://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_serving.py) provided in vLLM to perform performance benchmarking. You can use your own client scripts as well.

Use the command below to send serving requests:

```bash
python3 benchmarks/benchmark_serving.py --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --dataset-name random --random-input-len=1024 --random-output-len=1024 --ignore-eos --num-prompt 1 --max-concurrency 16 --request-rate inf --backend vllm --port=8000 --host 0.0.0.0
```

The command uses model `deepseek-ai/DeepSeek-R1-Distill-Qwen-32B`. Both input and output token sizes are set to `1024`. At most `16` requests are processed concurrently by the server.

Expected output:

```bash
Maximum request concurrency: 16
============ Serving Benchmark Result ============
Successful requests: 1
Benchmark duration (s): xxx
Total input tokens: 1024
Total generated tokens: 1024
Request throughput (req/s): xxx
Output token throughput (tok/s): xxx
Total Token throughput (tok/s): xxx
---------------Time to First Token----------------
Mean TTFT (ms): xxx
Median TTFT (ms): xxx
P99 TTFT (ms): xxx
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): xxx
Median TPOT (ms): xxx
P99 TPOT (ms): xxx
---------------Inter-token Latency----------------
Mean ITL (ms): xxx
Median ITL (ms): xxx
P99 ITL (ms): xxx
==================================================
```
## 4. Need Assistance?

Should you encounter any issues or have any questions, please submit an issue ticket at [vLLM GitHub Issues](https://github.com/vllm-project/vllm/issues). Include the text `[Intel GPU]` in the issue title to ensure it gets noticed.
