Commit 9452863

Revert "Revert #28875 (#29159)" (#29179)
Signed-off-by: DarkLight1337 <[email protected]>
1 parent 2b1b3df commit 9452863

3 files changed, 4 insertions(+), 25 deletions(-)

docker/Dockerfile

Lines changed: 0 additions & 17 deletions
@@ -56,7 +56,6 @@ ARG UV_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}
 
 # PyTorch provides its own indexes for standard and nightly builds
 ARG PYTORCH_CUDA_INDEX_BASE_URL=https://download.pytorch.org/whl
-ARG PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL=https://download.pytorch.org/whl/nightly
 
 # PIP supports multiple authentication schemes, including keyring
 # By parameterizing the PIP_KEYRING_PROVIDER variable and setting it to
@@ -98,7 +97,6 @@ RUN echo 'tzdata tzdata/Areas select America' | debconf-set-selections \
 ARG PIP_INDEX_URL UV_INDEX_URL
 ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL
 ARG PYTORCH_CUDA_INDEX_BASE_URL
-ARG PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL
 ARG PIP_KEYRING_PROVIDER UV_KEYRING_PROVIDER
 
 # Activate virtual environment and add uv to PATH
@@ -317,7 +315,6 @@ RUN echo 'tzdata tzdata/Areas select America' | debconf-set-selections \
 ARG PIP_INDEX_URL UV_INDEX_URL
 ARG PIP_EXTRA_INDEX_URL UV_EXTRA_INDEX_URL
 ARG PYTORCH_CUDA_INDEX_BASE_URL
-ARG PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL
 ARG PIP_KEYRING_PROVIDER UV_KEYRING_PROVIDER
 
 # Install uv for faster pip installs
@@ -337,20 +334,6 @@ ENV UV_LINK_MODE=copy
 # or future versions of triton.
 RUN ldconfig /usr/local/cuda-$(echo $CUDA_VERSION | cut -d. -f1,2)/compat/
 
-# arm64 (GH200) build follows the practice of "use existing pytorch" build,
-# we need to install torch and torchvision from the nightly builds first,
-# pytorch will not appear as a vLLM dependency in all of the following steps
-# after this step
-RUN --mount=type=cache,target=/root/.cache/uv \
-    if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
-        uv pip install --system \
-            --index-url ${PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') \
-            "torch==2.8.0.dev20250318+cu128" "torchvision==0.22.0.dev20250319" ; \
-        uv pip install --system \
-            --index-url ${PYTORCH_CUDA_NIGHTLY_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') \
-            --pre pytorch_triton==3.3.0+gitab727c40 ; \
-    fi
-
 # Install vllm wheel first, so that torch etc will be installed.
 RUN --mount=type=bind,from=build,src=/workspace/dist,target=/vllm-workspace/dist \
     --mount=type=cache,target=/root/.cache/uv \
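With the nightly index ARG and the arm64-only nightly install step removed, `PYTORCH_CUDA_INDEX_BASE_URL` is the only PyTorch index the image build consults. A minimal sketch of overriding the remaining ARG at build time; the mirror URL is hypothetical:

```bash
# Point the image build at a PyTorch wheel mirror (mirror URL is a placeholder).
DOCKER_BUILDKIT=1 docker build . \
    --file docker/Dockerfile \
    --build-arg PYTORCH_CUDA_INDEX_BASE_URL=https://mirror.example.com/pytorch/whl \
    -t vllm/vllm-openai:latest
```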

docs/deployment/docker.md

Lines changed: 3 additions & 4 deletions
@@ -82,8 +82,7 @@ DOCKER_BUILDKIT=1 docker build . \
 
 ## Building for Arm64/aarch64
 
-A docker container can be built for aarch64 systems such as the Nvidia Grace-Hopper. At time of this writing, this requires the use
-of PyTorch Nightly and should be considered **experimental**. Using the flag `--platform "linux/arm64"` will attempt to build for arm64.
+A docker container can be built for aarch64 systems such as the Nvidia Grace-Hopper. At time of this writing, this should be considered **experimental**. Using the flag `--platform "linux/arm64"` will attempt to build for arm64.
 
 !!! note
     Multiple modules must be compiled, so this process can take a while. Recommend using `--build-arg max_jobs=` & `--build-arg nvcc_threads=`
@@ -94,15 +93,15 @@ of PyTorch Nightly and should be considered **experimental**. Using the flag `--
 
 ```bash
 # Example of building on Nvidia GH200 server. (Memory usage: ~15GB, Build time: ~1475s / ~25 min, Image size: 6.93GB)
-python3 use_existing_torch.py
 DOCKER_BUILDKIT=1 docker build . \
     --file docker/Dockerfile \
     --target vllm-openai \
     --platform "linux/arm64" \
     -t vllm/vllm-gh200-openai:latest \
     --build-arg max_jobs=66 \
     --build-arg nvcc_threads=2 \
-    --build-arg torch_cuda_arch_list="9.0 10.0+PTX"
+    --build-arg torch_cuda_arch_list="9.0 10.0+PTX" \
+    --build-arg RUN_WHEEL_CHECK=false
 ```
 
 !!! note
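For reference, a sketch of serving from the arm64 image built above; the flags follow the usual vLLM Docker invocation and the model name is a placeholder:

```bash
# Serve a model from the GH200 image (model name is a placeholder).
docker run --runtime nvidia --gpus all \
    -p 8000:8000 \
    vllm/vllm-gh200-openai:latest \
    --model meta-llama/Llama-3.1-8B-Instruct
```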

docs/getting_started/installation/gpu.cuda.inc.md

Lines changed: 1 addition & 4 deletions
@@ -158,10 +158,7 @@ uv pip install -e .
 
 ##### Use an existing PyTorch installation
 
-There are scenarios where the PyTorch dependency cannot be easily installed with `uv`, e.g.:
-
-- Building vLLM with PyTorch nightly or a custom PyTorch build.
-- Building vLLM with aarch64 and CUDA (GH200), where the PyTorch wheels are not available on PyPI. Currently, only the PyTorch nightly has wheels for aarch64 with CUDA. You can run `uv pip install --index-url https://download.pytorch.org/whl/nightly/cu128 torch torchvision torchaudio` to [install PyTorch nightly](https://pytorch.org/get-started/locally/) and then build vLLM on top of it.
+There are scenarios where the PyTorch dependency cannot be easily installed with `uv`, for example, when building vLLM with non-default PyTorch builds (like nightly or a custom build).
 
 To build vLLM using an existing PyTorch installation:
 