update readme for vLLM 0.10.2 release on Intel GPU #869
Conversation
Signed-off-by: Yan Ma <[email protected]>
| OneAPI  | 2025.1.3-0 |
| PyTorch | 2.8        |
| IPEX    | 2.8.10     |
| OneCCL  | 2021.15.4  |
The oneCCL version is likely to change; keep this as a placeholder and update it when the BKC release happens.
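For a quick sanity check that the installed stack matches this table, something like the following can be used (a minimal sketch; it assumes PyTorch and IPEX are importable in the active environment):

```bash
# Print the installed PyTorch and IPEX versions to compare against the table
python -c "import torch; print('torch', torch.__version__)"
python -c "import intel_extension_for_pytorch as ipex; print('ipex', ipex.__version__)"
```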
vLLM supports pooling models such as embedding, classification, and reward models. All of these are now supported on Intel® GPUs. For detailed usage, refer to the [guide](https://docs.vllm.ai/en/latest/models/pooling_models.html).
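As a rough illustration of the usage (a sketch only; the model name is an example, not part of this release note):

```bash
# Serve an embedding (pooling) model, then query the OpenAI-compatible endpoint
vllm serve intfloat/e5-mistral-7b-instruct --task embed

curl http://localhost:8000/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "intfloat/e5-mistral-7b-instruct", "input": "Hello, Intel GPU"}'
```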
* Pipeline Parallelism
If we roll back the oneCCL release to 2021.15.3, PP falls back to the naive implementation without the performance gains, and we lose this feature.
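For reference, enabling PP at launch would look roughly like this (a sketch assuming two Intel GPUs; the model name is illustrative):

```bash
# Split the model into two pipeline stages across two GPUs
vllm serve meta-llama/Llama-3.1-8B-Instruct --pipeline-parallel-size 2
```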
vllm/0.10.2-xpu.md
* Data Parallelism
vLLM supports [Data Parallel](https://docs.vllm.ai/en/latest/serving/data_parallel_deployment.html) deployment, where model weights are replicated across separate instances/GPUs to process independent batches of requests. This will work with both dense and MoE models. But for Intel® GPUs, we currently don't support DP + EP for now.
"This will work with both dense and MoE models. But for Intel® GPUs, we currently don't support DP + EP for now."
-> This will work with both dense and MoE models. Note that expert parallelism (EP) is still being enabled and will be supported soon.
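For reference, a minimal DP launch sketch (model name illustrative): two replicas, each holding a full copy of the weights, serving independent batches:

```bash
# Run two independent engine replicas that split incoming requests
vllm serve meta-llama/Llama-3.1-8B-Instruct --data-parallel-size 2
```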
vllm/0.10.2-xpu.md
* **torch.compile**: Can be enabled for the fp16/bf16 path.
* **speculative decoding**: Supports the `n-gram`, `EAGLE`, and `EAGLE3` methods.
* **async scheduling**: Can be enabled by `--async-scheduling`. This may help reduce CPU overheads, leading to better latency and throughput. However, async scheduling is currently not supported with some features such as structured outputs, speculative decoding, and pipeline parallelism (see the launch sketches after this thread).
* **MoE models**: Models with an MoE structure, such as gpt-oss, Deepseek-v2-lite, and Qwen/Qwen3-30B-A3B, are now supported.
MoE models are officially supported in this release, not "experimental". They are actually one of the key model families we optimized, besides multimodality.
Let's move the MoE models to the official feature list. GPT-OSS 20B and 120B in the mxfp4 data type should be highlighted here.
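To make the flags above concrete, a few launch sketches (all model names illustrative, not prescriptive):

```bash
# Async scheduling (not combinable with structured outputs,
# speculative decoding, or pipeline parallelism)
vllm serve meta-llama/Llama-3.1-8B-Instruct --async-scheduling

# n-gram speculative decoding via a JSON speculative config
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --speculative-config '{"method": "ngram", "num_speculative_tokens": 3, "prompt_lookup_max": 3}'

# An MoE model such as gpt-oss (mxfp4 checkpoints, per the comment above)
vllm serve openai/gpt-oss-20b
```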
vllm/0.10.2-xpu.md
The following are known issues:
* Qwen/Qwen3-30B-A3B needs `--gpu-memory-utilization=0.8` due to its high memory consumption.
Is this still the case, or only for fp16/bf16? For fp8, my understanding is that it can work with `=0.9`.
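For reference, the workaround as currently documented (subject to the fp8 question above):

```bash
# Cap GPU memory utilization for Qwen3-30B-A3B, per the known issue
vllm serve Qwen/Qwen3-30B-A3B --gpu-memory-utilization 0.8
```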
vllm/0.10.2-xpu.md
## Optimizations
* FMHA Optimizations: XXXXX.
* Attention kernel optimizations for the decoding steps.
* MoE model optimizations using a persistent MoE GEMM kernel and a fused activation kernel to reduce kernel bubbles.
Signed-off-by: Yan Ma <[email protected]>
This PR provides release notes for the vLLM v0.10.2 release on Intel Multi-Arc, including key features, optimizations, and HowTos.