update readme for vLLM 0.10.2 release on Intel GPU #869
yma11 wants to merge 3 commits into intel:main from yma11:2509
# Optimize LLM serving with vLLM on Intel® GPUs

vLLM is a fast and easy-to-use library for LLM inference and serving. It has evolved into a community-driven project with contributions from both academia and industry. Intel, as one of the community contributors, is working actively to bring competitive performance to vLLM on Intel® platforms, including Intel® Xeon® Scalable Processors, Intel® discrete GPUs, and Intel® Gaudi® AI accelerators. This readme focuses on Intel® discrete GPUs and provides the information needed to get workloads running well on your Intel® graphics cards.

The vLLM used in this docker image is based on [v0.10.2](https://github.com/vllm-project/vllm/tree/v0.10.2) and uses the following best-known configuration (BKC):

| Ingredients | Version |
|-------------|-----------|
| Host OS | Ubuntu 25.04 |
| Python | 3.12 |
| KMD Driver | 6.14.0 |
| OneAPI | 2025.1.3-0 |
| PyTorch | PyTorch 2.8 |
| IPEX | 2.8.10 |
| OneCCL | 2021.15.4 |
## 1. What's new in this release?

* GPT-OSS 20B and 120B are supported in MXFP4 with optimized performance.
* Attention kernel optimizations for the decoding phase bring >10% end-to-end throughput improvement on 10+ models with 1k/512 input/output lengths.
* MoE models are optimized with a persistent MoE GEMM kernel and a fused activation kernel to reduce kernel bubbles. Qwen3-30B-A3B achieved a 2.6x end-to-end improvement and DeepSeek-V2-Lite achieved a 1.5x end-to-end improvement.
* vLLM 0.10.2 brings new features: P/D disaggregation, data parallelism, tool calling, reasoning outputs, and structured outputs.
## 2. What's Supported?

Intel GPUs benefit from enhancements brought by the [vLLM V1 engine](https://blog.vllm.ai/2025/01/27/v1-alpha-release.html), including:

* Optimized Execution Loop & API Server
* Simple & Flexible Scheduler
* Zero-Overhead Prefix Caching
* Clean Architecture for Tensor-Parallel Inference
* Efficient Input Preparation

In addition, following the vLLM V1 design, corresponding optimized kernels and features are implemented for Intel GPUs.

* chunked_prefill:

chunked_prefill is an optimization feature in vLLM that allows large prefill requests to be divided into small chunks and batched together with decode requests. This approach prioritizes decode requests, improving inter-token latency (ITL) and GPU utilization by combining compute-bound (prefill) and memory-bound (decode) requests in the same batch. The vLLM V1 engine is built on this feature, and in this release it is also supported on Intel GPUs by leveraging the corresponding kernel from Intel® Extension for PyTorch\* for model execution.
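The batching idea behind chunked prefill can be illustrated with a toy scheduler sketch. This is a hypothetical simplification for intuition only, not vLLM's actual scheduler: each step has a token budget, decode requests (one token each) are admitted first, and the remaining budget is filled with a chunk of a pending prefill.

```python
# Toy illustration of chunked-prefill batching (NOT vLLM's real scheduler).
# Each step has a token budget. Decode requests cost 1 token each and are
# admitted first; the leftover budget is filled with a prefill chunk.

def schedule_step(decode_reqs, prefill_remaining, token_budget):
    """Return (decode tokens this step, prefill chunk size, prefill tokens left)."""
    decode_tokens = min(len(decode_reqs), token_budget)  # 1 token per decode
    chunk = min(prefill_remaining, token_budget - decode_tokens)
    return decode_tokens, chunk, prefill_remaining - chunk

# A 2048-token prefill processed alongside 8 in-flight decode requests,
# with a 512-token budget per step:
decodes = list(range(8))
remaining = 2048
steps = 0
while remaining > 0:
    d, chunk, remaining = schedule_step(decodes, remaining, token_budget=512)
    steps += 1
print(steps)  # 5 steps of 504 prefill tokens each (the last chunk is partial)
```

Because every step still emits one token per decode request, decode latency stays smooth while the large prefill makes steady progress in the gaps.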
* FP8 W8A16 MatMul:

vLLM supports FP8 (8-bit floating point) weights using hardware acceleration on GPUs. We support weight-only online dynamic quantization with FP8, which allows a 2x reduction in model memory requirements and up to a 1.6x improvement in throughput with minimal impact on accuracy.

Dynamic quantization of an original-precision BF16/FP16 model to FP8 can be performed by vLLM without any calibration data. You can enable the feature by specifying `--quantization="fp8"` on the command line or setting `quantization="fp8"` in the LLM constructor.

The FP8 types typically supported in hardware have two distinct representations, each useful in different scenarios:

* **E4M3**: Consists of 1 sign bit, 4 exponent bits, and 3 bits of mantissa. It can store values up to +/-448 and `nan`.
* **E5M2**: Consists of 1 sign bit, 5 exponent bits, and 2 bits of mantissa. It can store values up to +/-57344, +/- `inf`, and `nan`. The tradeoff for the increased dynamic range is lower precision of the stored values.

We support both representations through the environment variable `VLLM_XPU_FP8_DTYPE`, with default value `E5M2`.

:::{warning}
Currently, by default we load the model at its original precision before quantizing down to 8 bits, so you need enough memory to load the whole model. To avoid this, set `VLLM_OFFLOAD_WEIGHTS_BEFORE_QUANT=1` to offload weights to the CPU before quantization; the quantized weights are then kept on the device.
:::
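The +/-448 and +/-57344 maxima quoted above follow directly from the bit layouts. The sketch below derives them; it assumes E4M3 refers to the common `e4m3fn` variant, which gives up `inf` so that most of the top exponent remains usable for finite values, while E5M2 follows IEEE-style conventions.

```python
# Derive the max finite values of the two FP8 formats from their bit layouts.

def fp8_max(exp_bits, man_bits, ieee_like):
    """Max finite value of a float format with the given exponent/mantissa bits."""
    bias = 2 ** (exp_bits - 1) - 1
    if ieee_like:
        # E5M2: the all-ones exponent is reserved for inf/nan, so the largest
        # usable exponent is one below it, with a full all-ones mantissa.
        max_exp = (2 ** exp_bits - 2) - bias
        max_mantissa = (2 ** man_bits - 1) / 2 ** man_bits
    else:
        # E4M3 (fn variant): no inf; only all-ones exponent AND all-ones
        # mantissa encodes nan, so the top exponent is usable with the
        # second-largest mantissa.
        max_exp = (2 ** exp_bits - 1) - bias
        max_mantissa = (2 ** man_bits - 2) / 2 ** man_bits
    return (1 + max_mantissa) * 2 ** max_exp

print(fp8_max(exp_bits=4, man_bits=3, ieee_like=False))  # 448.0
print(fp8_max(exp_bits=5, man_bits=2, ieee_like=True))   # 57344.0
```

In short: E5M2 buys roughly 128x more dynamic range than E4M3 at the cost of one mantissa bit of precision, which is why the better choice depends on the value distribution of the workload.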
* Multi Modality Support

In this release, image/audio input can be processed using Qwen2.5-VL series models, like Qwen/Qwen2.5-VL-32B-Instruct on 4 BMG cards.

* Pooling Models Support

vLLM supports pooling models such as embedding, classification and reward models. All of these models are now supported on Intel® GPUs. For detailed usage, refer to the [guide](https://docs.vllm.ai/en/latest/models/pooling_models.html).

* Pipeline Parallelism

Pipeline parallelism distributes model layers across multiple GPUs, with each GPU processing different parts of the model in sequence. For Intel® GPUs, we support it on a single node with `mp` as the backend.

* Data Parallelism

vLLM supports [Data Parallel](https://docs.vllm.ai/en/latest/serving/data_parallel_deployment.html) deployment, where model weights are replicated across separate instances/GPUs to process independent batches of requests. This works with both dense and MoE models. Note that expert parallelism is being enabled and will be supported soon.

* MoE models

Models with an MoE structure, like GPT-OSS 20B/120B in MXFP4 format, DeepSeek-V2-Lite and Qwen/Qwen3-30B-A3B, are now supported.

Other features like [reasoning_outputs](https://docs.vllm.ai/en/latest/features/reasoning_outputs.html), [structured_outputs](https://docs.vllm.ai/en/latest/features/structured_outputs.html) and [tool calling](https://docs.vllm.ai/en/latest/features/tool_calling.html) are supported now. We also have some experimental features, including:

* **torch.compile**: Can be enabled for the fp16/bf16 path.
* **speculative decoding**: Supports the `n-gram`, `EAGLE` and `EAGLE3` methods.
* **async scheduling**: Can be enabled by `--async-scheduling`. This may help reduce CPU overheads, leading to better latency and throughput. However, async scheduling is currently not compatible with some features such as structured outputs, speculative decoding, and pipeline parallelism.
## Supported Models

The table below lists models that have been verified by Intel. However, a broader range of models supported by vLLM is expected to work on Intel® GPUs as well.

| Model Type | Model (company/model name) | FP16 | Dynamic Online FP8 | MXFP4 |
|-----------------|-------------------------------------------| --- | --- | --- |
| Text Generation | openai/gpt-oss-20b | | |✅︎|
| Text Generation | openai/gpt-oss-120b | | |✅︎|
| Text Generation | deepseek-ai/DeepSeek-R1-Distill-Llama-8B |✅︎|✅︎| |
| Text Generation | deepseek-ai/DeepSeek-R1-Distill-Qwen-14B |✅︎|✅︎| |
| Text Generation | deepseek-ai/DeepSeek-R1-Distill-Qwen-32B |✅︎|✅︎| |
| Text Generation | deepseek-ai/DeepSeek-R1-Distill-Llama-70B |✅︎|✅︎| |
| Text Generation | Qwen/Qwen2.5-72B-Instruct |✅︎|✅︎| |
| Text Generation | Qwen/Qwen3-14B |✅︎|✅︎| |
| Text Generation | Qwen/Qwen3-32B |✅︎|✅︎| |
| Text Generation | Qwen/Qwen3-30B-A3B |✅︎|✅︎| |
| Text Generation | Qwen/Qwen3-coder-30B-A3B-Instruct |✅︎|✅︎| |
| Text Generation | Qwen/QwQ-32B |✅︎|✅︎| |
| Text Generation | OpenGVLab/InternVL3_5-8B |✅︎|✅︎| |
| Text Generation | OpenGVLab/InternVL3_5-14B |✅︎|✅︎| |
| Text Generation | OpenGVLab/InternVL3_5-38B |✅︎|✅︎| |
| Text Generation | openbmb/MiniCPM-V-4 |✅︎|✅︎| |
| Text Generation | deepseek-ai/DeepSeek-V2-Lite |✅︎|✅︎| |
| Text Generation | meta-llama/Llama-3.1-8B-Instruct |✅︎|✅︎| |
| Text Generation | baichuan-inc/Baichuan2-13B-Chat |✅︎|✅︎| |
| Text Generation | THUDM/GLM-4-9B-chat |✅︎|✅︎| |
| Text Generation | THUDM/GLM-4v-9B-chat |✅︎|✅︎| |
| Text Generation | THUDM/CodeGeex4-All-9B |✅︎|✅︎| |
| Text Generation | chuhac/TeleChat2-35B |✅︎|✅︎| |
| Text Generation | 01-ai/Yi1.5-34B-Chat |✅︎|✅︎| |
| Text Generation | deepseek-ai/DeepSeek-Coder-33B-base |✅︎|✅︎| |
| Text Generation | meta-llama/Llama-2-13b-chat-hf |✅︎|✅︎| |
| Text Generation | Qwen/Qwen1.5-14B-Chat |✅︎|✅︎| |
| Text Generation | Qwen/Qwen1.5-32B-Chat |✅︎|✅︎| |
| Multi Modality | Qwen/Qwen2.5-VL-72B-Instruct |✅︎|✅︎| |
| Multi Modality | Qwen/Qwen2.5-VL-32B-Instruct |✅︎|✅︎| |
| Embedding Model | Qwen/Qwen3-Embedding-8B |✅︎|✅︎| |
| Reranker Model | Qwen/Qwen3-Reranker-8B |✅︎|✅︎| |
## 3. Limitations

Some vLLM V1 features may need extra support, including LoRA (Low-Rank Adaptation), pipeline parallelism on Ray, EP (Expert Parallelism) and MLA (Multi-head Latent Attention).

Known issues:

* Qwen/Qwen3-30B-A3B in FP16/BF16 needs `--gpu-memory-utilization=0.8` due to its high memory consumption.
* W8A8 models quantized through llm-compressor, like RedHatAI/DeepSeek-R1-Distill-Qwen-32B-FP8-dynamic, are not supported yet.
## 4. How to Get Started

### 4.1. Prerequisite

| OS | Hardware |
| ---------- | ---------- |
| Ubuntu 25.04 | Intel® Arc™ B-Series |

### 4.2. Prepare a Serving Environment

1. Get the released docker image with the command `docker pull intel/vllm:0.10.2-xpu`
2. Instantiate a docker container with the command `docker run -t -d --shm-size 10g --net=host --ipc=host --privileged -v /dev/dri/by-path:/dev/dri/by-path --name=vllm-test --device /dev/dri:/dev/dri --entrypoint= intel/vllm:0.10.2-xpu /bin/bash`
3. Run the command `docker exec -it vllm-test bash` in 2 separate terminals to enter container environments for the server and the client respectively.

\* Starting from here, all commands are expected to be run inside the docker container, unless explicitly noted otherwise.

In both environments, you may then wish to set a `HUGGING_FACE_HUB_TOKEN` environment variable to make sure necessary files can be downloaded from the HuggingFace website.

```bash
export HUGGING_FACE_HUB_TOKEN=xxxxxx
```
### 4.3. Launch Workloads

#### 4.3.1. Launch Server in the Server Environment

Command:

```bash
VLLM_WORKER_MULTIPROC_METHOD=spawn python3 -m vllm.entrypoints.openai.api_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --dtype=float16 --device=xpu --enforce-eager --port 8000 --block-size 64 --gpu-memory-util 0.9 --no-enable-prefix-caching --trust-remote-code --disable-sliding-window --disable-log-requests --max_num_batched_tokens=8192 --max_model_len 4096 -tp=4 --quantization fp8
```

Note that by default FP8 online quantization uses `e5m2`; you can switch to `e4m3` by explicitly adding the environment variable `VLLM_XPU_FP8_DTYPE=e4m3`. If there is not enough memory to hold the whole model before quantization to FP8, you can use `VLLM_OFFLOAD_WEIGHTS_BEFORE_QUANT=1` to offload weights to the CPU first.
Expected output:

```bash
INFO 02-20 03:20:29 api_server.py:937] Starting vLLM API server on http://0.0.0.0:8000
INFO 02-20 03:20:29 launcher.py:23] Available routes are:
INFO 02-20 03:20:29 launcher.py:31] Route: /openapi.json, Methods: HEAD, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /docs, Methods: HEAD, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /redoc, Methods: HEAD, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /health, Methods: GET
INFO 02-20 03:20:29 launcher.py:31] Route: /ping, Methods: POST, GET
INFO 02-20 03:20:29 launcher.py:31] Route: /tokenize, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /detokenize, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/models, Methods: GET
INFO 02-20 03:20:29 launcher.py:31] Route: /version, Methods: GET
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/chat/completions, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/completions, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/embeddings, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /pooling, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /score, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/score, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/audio/transcriptions, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /rerank, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v1/rerank, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /v2/rerank, Methods: POST
INFO 02-20 03:20:29 launcher.py:31] Route: /invocations, Methods: POST
INFO: Started server process [1636943]
INFO: Waiting for application startup.
INFO: Application startup complete.
```

Startup may take some time. The line `INFO: Application startup complete.` indicates that the server is ready.
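Once the server is ready, you can sanity-check it with a single completion request before running the full benchmark. Below is a minimal stdlib-only sketch against the `/v1/completions` route listed in the server log; it assumes the server above is listening on `localhost:8000`, and any OpenAI-compatible client works equally well.

```python
import json
import urllib.request

def completion_request(prompt, model, max_tokens=64):
    """Build the JSON body for an OpenAI-compatible /v1/completions call."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def send_completion(prompt, host="http://localhost:8000",
                    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"):
    """POST the request and return the generated text of the first choice."""
    body = json.dumps(completion_request(prompt, model)).encode()
    req = urllib.request.Request(
        f"{host}/v1/completions", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]

# Example (requires the server from section 4.3.1 to be up):
#   print(send_completion("What is chunked prefill?"))
```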
#### 4.3.2. Raise Requests for Benchmarking in the Client Environment

We leverage a [benchmarking script](https://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_serving.py) provided in vLLM to perform performance benchmarking. You can use your own client scripts as well.

Use the command below to send serving requests:

```bash
python3 -m vllm.entrypoints.cli.main bench serve --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --dataset-name random --random-input-len=1024 --random-output-len=1024 --ignore-eos --num-prompt 1 --max-concurrency 16 --request-rate inf --backend vllm --port=8000 --host 0.0.0.0 --ready-check-timeout-sec 1
```

The command uses the model `deepseek-ai/DeepSeek-R1-Distill-Qwen-32B`. Both input and output token lengths are set to `1024`. At most `16` requests are processed concurrently by the server.
Expected output:

```bash
Maximum request concurrency: 16
============ Serving Benchmark Result ============
Successful requests:                     1
Benchmark duration (s):                  xxx
Total input tokens:                      1024
Total generated tokens:                  1024
Request throughput (req/s):              xxx
Output token throughput (tok/s):         xxx
Total Token throughput (tok/s):          xxx
---------------Time to First Token----------------
Mean TTFT (ms):                          xxx
Median TTFT (ms):                        xxx
P99 TTFT (ms):                           xxx
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          xxx
Median TPOT (ms):                        xxx
P99 TPOT (ms):                           xxx
---------------Inter-token Latency----------------
Mean ITL (ms):                           xxx
Median ITL (ms):                         xxx
P99 ITL (ms):                            xxx
==================================================
```
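The latency metrics in the report relate to each other in a simple way. Given per-token arrival timestamps for one request, TTFT (time to first token), ITL (inter-token latency) and TPOT (time per output token, excluding the first) can be computed as below; this is a sketch of the definitions, not the benchmark script's exact code.

```python
# Compute TTFT, ITLs and TPOT from per-token arrival timestamps (seconds),
# measured from the moment the request was sent.

def latency_metrics(token_times):
    ttft = token_times[0]  # time until the first token arrives
    # One inter-token latency per gap between consecutive tokens.
    itls = [b - a for a, b in zip(token_times, token_times[1:])]
    # TPOT: average time per output token, excluding the first token.
    tpot = (token_times[-1] - token_times[0]) / (len(token_times) - 1)
    return ttft, itls, tpot

# 5 tokens: first after 0.5 s (prefill), then one every 0.1 s (decode).
ttft, itls, tpot = latency_metrics([0.5, 0.6, 0.7, 0.8, 0.9])
print(ttft)            # 0.5
print(round(tpot, 6))  # 0.1
```

Note that TPOT equals the mean ITL for a single request; the report's mean/median/P99 rows aggregate these values across all requests.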
## 5. Need Assistance?

Should you encounter any issues or have any questions, please submit an issue ticket at [vLLM GitHub Issues](https://github.com/vllm-project/vllm/issues). Include the text `[Intel GPU]` in the issue title to ensure it gets noticed.