
Commit 81c57f6

[XPU] upgrade to torch 2.8 for XPU (#22300)
Signed-off-by: Kunshang Ji <[email protected]>
1 parent 311d875 commit 81c57f6

File tree

4 files changed: +15 −24 lines


docker/Dockerfile.xpu

Lines changed: 11 additions & 6 deletions
@@ -1,9 +1,12 @@
-# oneapi 2025.0.2 docker base image use rolling 2448 package. https://dgpu-docs.intel.com/releases/packages.html?release=Rolling+2448.13&os=Ubuntu+22.04, and we don't need install driver manually.
-FROM intel/deep-learning-essentials:2025.0.2-0-devel-ubuntu22.04 AS vllm-base
+FROM intel/deep-learning-essentials:2025.1.3-0-devel-ubuntu24.04 AS vllm-base
 
 RUN rm /etc/apt/sources.list.d/intel-graphics.list
 
-RUN apt-get update -y && \
+RUN apt clean && apt-get update -y && \
+    apt-get install -y software-properties-common && \
+    add-apt-repository ppa:deadsnakes/ppa && \
+    apt-get install -y python3.10 python3.10-distutils && \
+    curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10 && \
     apt-get install -y --no-install-recommends --fix-missing \
         curl \
         ffmpeg \
@@ -14,11 +17,13 @@ RUN apt-get update -y && \
         libgl1 \
         lsb-release \
         numactl \
-        python3 \
-        python3-dev \
-        python3-pip \
+        python3.10-dev \
         wget
 
+
+RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 1
+RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
+
 WORKDIR /workspace/vllm
 COPY requirements/xpu.txt /workspace/vllm/requirements/xpu.txt
 COPY requirements/common.txt /workspace/vllm/requirements/common.txt
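The two `update-alternatives` lines above make the deadsnakes `python3.10` the default interpreter behind `/usr/bin/python3` and `/usr/bin/python`. As a hedged sketch (not part of the commit), the selection rule in auto mode can be emulated in a few lines — `pick_alternative` is an illustrative helper, not a real tool:

```python
# Hedged sketch: update-alternatives keeps candidate paths with integer
# priorities and, in auto mode, points the symlink at the highest-priority
# candidate. The Dockerfile registers python3.10 with priority 1 for both
# /usr/bin/python3 and /usr/bin/python.

def pick_alternative(candidates: dict[str, int]) -> str:
    """Return the candidate path with the highest priority (auto mode)."""
    return max(candidates, key=candidates.get)

# With a single registered candidate, it wins regardless of priority.
print(pick_alternative({"/usr/bin/python3.10": 1}))  # /usr/bin/python3.10
```

With only one candidate registered, as here, priority 1 is enough; a later candidate with a higher priority would take over the symlink.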

requirements/xpu.txt

Lines changed: 3 additions & 8 deletions
@@ -10,15 +10,10 @@ wheel
 jinja2>=3.1.6
 datasets # for benchmark scripts
 numba == 0.60.0 # v0.61 doesn't support Python 3.9. Required for N-gram speculative decoding
-
-torch==2.7.0+xpu
+--extra-index-url=https://download.pytorch.org/whl/xpu
+torch==2.8.0+xpu
 torchaudio
 torchvision
 pytorch-triton-xpu
---extra-index-url=https://download.pytorch.org/whl/xpu
-
-# Please refer xpu doc, we need manually install intel-extension-for-pytorch 2.6.10+xpu due to there are some conflict dependencies with torch 2.6.0+xpu
-# FIXME: This will be fix in ipex 2.7. just leave this here for awareness.
-intel-extension-for-pytorch==2.7.10+xpu
-oneccl_bind_pt==2.7.0+xpu
 --extra-index-url=https://pytorch-extension.intel.com/whl/stable/xpu/us/
+intel-extension-for-pytorch==2.8.10+xpu

vllm/plugins/__init__.py

Lines changed: 0 additions & 9 deletions
@@ -4,8 +4,6 @@
 import logging
 from typing import Any, Callable
 
-import torch
-
 import vllm.envs as envs
 
 logger = logging.getLogger(__name__)
@@ -68,13 +66,6 @@ def load_general_plugins():
         return
     plugins_loaded = True
 
-    # some platform-specific configurations
-    from vllm.platforms import current_platform
-
-    if current_platform.is_xpu():
-        # see https://github.com/pytorch/pytorch/blob/43c5f59/torch/_dynamo/config.py#L158
-        torch._dynamo.config.disable = True
-
     plugins = load_plugins_by_group(group=DEFAULT_PLUGINS_GROUP)
     # general plugins, we only need to execute the loaded functions
     for func in plugins.values():

vllm/v1/worker/xpu_worker.py

Lines changed: 1 addition & 1 deletion
@@ -152,7 +152,7 @@ def init_device(self):
             raise RuntimeError(
                 f"Not support device type: {self.device_config.device}")
 
-        ENV_CCL_ZE_IPC_EXCHANGE = os.getenv("CCL_ZE_IPC_EXCHANGE", "drmfd")
+        ENV_CCL_ZE_IPC_EXCHANGE = os.getenv("CCL_ZE_IPC_EXCHANGE", "pidfd")
         ENV_CCL_ATL_TRANSPORT = os.getenv("CCL_ATL_TRANSPORT", "ofi")
         ENV_LOCAL_WORLD_SIZE = os.getenv("LOCAL_WORLD_SIZE",
                                          str(self.parallel_config.world_size))
