docs/getting_started/installation/gpu.rocm.inc.md (74 additions, 49 deletions)
@@ -1,6 +1,6 @@
# --8<-- [start:installation]

-vLLM supports AMD GPUs with ROCm 6.3 or above.
+vLLM supports AMD GPUs with ROCm 6.3 or above, and torch 2.8.0 and above.

!!! tip
    [Docker](#set-up-using-docker) is the recommended way to use vLLM on ROCm.
@@ -28,57 +28,63 @@ Currently, there are no pre-built ROCm wheels.
# --8<-- [end:pre-built-wheels]
# --8<-- [start:build-wheel-from-source]

+!!! tip
+    If you find that the following installation steps do not work for you, please refer to [docker/Dockerfile.rocm_base](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm_base). A Dockerfile is a form of installation steps.
+
0. Install prerequisites (skip if you are already in an environment/docker with the following installed):

-    For installing PyTorch, you can start from a fresh docker image, e.g. `rocm/pytorch:rocm6.4.3_ubuntu24.04_py3.12_pytorch_release_2.6.0` or `rocm/pytorch-nightly`. If you are using a docker image, you can skip to Step 3.
+    For installing PyTorch, you can start from a fresh docker image, e.g. `rocm/pytorch:rocm7.0_ubuntu22.04_py3.10_pytorch_release_2.8.0` or `rocm/pytorch-nightly`. If you are using a docker image, you can skip to Step 3.

    Alternatively, you can install PyTorch using PyTorch wheels. You can check the PyTorch installation guide in PyTorch [Getting Started](https://pytorch.org/get-started/locally/). Example:
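The example wheel-install block itself is collapsed in this hunk. As a minimal sketch, assuming the ROCm 6.3 wheel index (pick the index that matches your ROCm install from the Getting Started selector), it is along these lines:

```bash
# Sketch only: install a ROCm build of PyTorch from the official wheel index.
# The index URL below is an assumption; use the one matching your ROCm version.
pip uninstall -y torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.3
```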
-1. Install [Triton for ROCm](https://github.com/triton-lang/triton)
+1. Install [Triton for ROCm](https://github.com/ROCm/triton.git)

-    Install ROCm's Triton (the default triton-mlir branch) following the instructions from [ROCm/triton](https://github.com/ROCm/triton/blob/triton-mlir/README.md)
-    If you see HTTP issues related to downloading packages during building triton, please try again, as the HTTP error is intermittent.
+    Install ROCm's Triton following the instructions from [ROCm/triton](https://github.com/ROCm/triton.git)
+
+    - The validated `$TRITON_BRANCH` can be found in [docker/Dockerfile.rocm_base](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm_base).
+    - If you see HTTP issues related to downloading packages while building triton, please try again, as the HTTP error is intermittent.
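As a hedged illustration (the commands and the `$TRITON_BRANCH` placeholder are assumptions, not the exact recipe from Dockerfile.rocm_base), a from-source Triton build typically looks like:

```bash
# Sketch only: build Triton from the ROCm fork at a validated branch.
# $TRITON_BRANCH is a placeholder; take the validated value from docker/Dockerfile.rocm_base.
git clone https://github.com/ROCm/triton.git
cd triton
git checkout "$TRITON_BRANCH"
pip install ninja cmake wheel pybind11   # typical build-time dependencies
# Depending on the branch, setup.py lives at the repo root or under python/.
cd python || true
pip install .
```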
-2. Optionally, if you choose to use CK flash attention, you can install [flash attention for ROCm](https://github.com/Dao-AILab/flash-attention)
+2. Optionally, if you choose to use CK flash attention, you can install [flash attention for ROCm](https://github.com/Dao-AILab/flash-attention.git)

-    Install ROCm's flash attention (v2.7.2) following the instructions from [ROCm/flash-attention](https://github.com/ROCm/flash-attention#amd-rocm-support)
-    Alternatively, wheels intended for vLLM use can be accessed under the releases.
+    Install ROCm's flash attention (v2.8.0) following the instructions from [ROCm/flash-attention](https://github.com/Dao-AILab/flash-attention#amd-rocm-support)

-    For example, for ROCm 6.3, suppose your gfx arch is `gfx90a`. To get your gfx architecture, run `rocminfo | grep gfx`.
+    For example, for ROCm 7.0, suppose your gfx arch is `gfx942`. To get your gfx architecture, run `rocminfo | grep gfx`.
    You might need to downgrade the "ninja" version to 1.10, as it is not used when compiling flash-attention-2 (e.g. `pip install ninja==1.10.2.4`)
+
+    - The validated `$FA_BRANCH` can be found in [docker/Dockerfile.rocm_base](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm_base).
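A hedged sketch of that from-source build (the `GPU_ARCHS` value and `$FA_BRANCH` are placeholders; the validated branch lives in Dockerfile.rocm_base):

```bash
# Sketch only: build CK flash-attention from source for a single gfx arch.
# $FA_BRANCH is a placeholder; use the validated value from docker/Dockerfile.rocm_base.
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
git checkout "$FA_BRANCH"
git submodule update --init
GPU_ARCHS="gfx942" python3 setup.py install
```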
3. If you choose to build AITER yourself to use a certain branch or commit, you can build AITER using the following steps:
@@ -92,11 +98,13 @@ Currently, there are no pre-built ROCm wheels.
    ```
!!! note
-    You will need to configure the `$AITER_BRANCH_OR_COMMIT` for your purpose.
+    - You will need to configure the `$AITER_BRANCH_OR_COMMIT` for your purpose.
+    - The validated `$AITER_BRANCH_OR_COMMIT` can be found in [docker/Dockerfile.rocm_base](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm_base).
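The command block referenced by step 3 is collapsed in this hunk; a hedged sketch of a typical from-source AITER build (the clone URL and flags are assumptions) is:

```bash
# Sketch only: build AITER from source at a pinned branch or commit.
# $AITER_BRANCH_OR_COMMIT is a placeholder; use the validated value from docker/Dockerfile.rocm_base.
git clone --recursive https://github.com/ROCm/aiter.git
cd aiter
git checkout "$AITER_BRANCH_OR_COMMIT"
git submodule sync && git submodule update --init --recursive
python3 setup.py develop
```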
-4. Build vLLM. For example, vLLM on ROCm 6.3 can be built with the following steps:
+4. Build vLLM. For example, vLLM on ROCm 7.0 can be built with the following steps:

-    ??? console "Commands"
+    ???+ console "Commands"

        ```bash
        pip install --upgrade pip
@@ -109,31 +117,48 @@ Currently, there are no pre-built ROCm wheels.
        scipy \
        huggingface-hub[cli,hf_transfer] \
        setuptools_scm
-       pip install "numpy<2"
        pip install -r requirements/rocm.txt

-       # Build vLLM for MI210/MI250/MI300.
-       export PYTORCH_ROCM_ARCH="gfx90a;gfx942"
+       # To build for a single architecture (e.g. MI300) for faster installation (recommended):
+       export PYTORCH_ROCM_ARCH="gfx942"
+
+       # To build vLLM for multiple archs (MI210/MI250/MI300), use this instead:
+       # export PYTORCH_ROCM_ARCH="gfx90a;gfx942"
+
        python3 setup.py develop
        ```

    This may take 5-10 minutes. Currently, `pip install .` does not work for ROCm installation.
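After the build completes, a quick sanity check (a minimal example, not part of this diff) is to import the package:

```bash
# Verify that the freshly built vLLM imports and reports a version.
python3 -c "import vllm; print(vllm.__version__)"
```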

!!! tip
-    - Triton flash attention is used by default. For benchmarking purposes, it is recommended to run a warm-up step before collecting perf numbers.
-    - Triton flash attention does not currently support sliding window attention. If using half precision, please use CK flash-attention for sliding window support.
-    - To use CK flash-attention or PyTorch naive attention, please use the flag `export VLLM_USE_TRITON_FLASH_ATTN=0` to turn off triton flash attention.
    - The ROCm version of PyTorch, ideally, should match the ROCm driver version.

!!! tip
    - For MI300x (gfx942) users, to achieve optimal performance, please refer to the [MI300x tuning guide](https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/index.html) for performance optimization and tuning tips on the system and workflow level.
-      For vLLM, please refer to [vLLM performance optimization](https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/workload.html#vllm-performance-optimization).
+      For vLLM, please refer to [vLLM performance optimization](https://rocm.docs.amd.com/en/latest/how-to/rocm-for-ai/inference-optimization/vllm-optimization.html).

# --8<-- [end:build-wheel-from-source]
# --8<-- [start:pre-built-images]

The [AMD Infinity hub for vLLM](https://hub.docker.com/r/rocm/vllm/tags) offers a prebuilt, optimized
docker image designed for validating inference performance on the AMD Instinct™ MI300X accelerator.
+AMD also offers a nightly prebuilt docker image on [Docker Hub](https://hub.docker.com/r/rocm/vllm-dev), which has vLLM and all its dependencies installed.
+
+???+ console "Commands"
+
+    ```bash
+    docker pull rocm/vllm-dev:nightly # to get the latest image
+    docker run -it --rm \
+        --network=host \
+        --group-add=video \
+        --ipc=host \
+        --cap-add=SYS_PTRACE \
+        --security-opt seccomp=unconfined \
+        --device /dev/kfd \
+        --device /dev/dri \
+        -v <path/to/your/models>:/app/models \
+        -e HF_HOME="/app/models" \
+        rocm/vllm-dev:nightly
+    ```
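Inside that container, vLLM is preinstalled; purely as an illustration (the model ID is a placeholder, not taken from this diff), an OpenAI-compatible server can be started with:

```bash
# Example only: start an OpenAI-compatible server from inside the nightly container.
# Replace the model ID with a model mounted under /app/models or one to pull from the Hub.
vllm serve Qwen/Qwen2.5-0.5B-Instruct --port 8000
```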

!!! tip
    Please check [LLM inference performance validation on AMD Instinct MI300X](https://rocm.docs.amd.com/en/latest/how-to/performance-validation/mi300x/vllm-benchmark.html)
@@ -144,29 +169,29 @@ docker image designed for validating inference performance on the AMD Instinct

Building the Docker image from source is the recommended way to use vLLM with ROCm.

-#### (Optional) Build an image with ROCm software stack
+??? info "(Optional) Build an image with ROCm software stack"

-Build a docker image from [docker/Dockerfile.rocm_base](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm_base) which sets up the ROCm software stack needed by vLLM.
-**This step is optional, as this rocm_base image is usually prebuilt and stored at [Docker Hub](https://hub.docker.com/r/rocm/vllm-dev) under the tag `rocm/vllm-dev:base` to speed up the user experience.**
-If you choose to build this rocm_base image yourself, the steps are as follows.
+    Build a docker image from [docker/Dockerfile.rocm_base](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm_base) which sets up the ROCm software stack needed by vLLM.
+    **This step is optional, as this rocm_base image is usually prebuilt and stored at [Docker Hub](https://hub.docker.com/r/rocm/vllm-dev) under the tag `rocm/vllm-dev:base` to speed up the user experience.**
+    If you choose to build this rocm_base image yourself, the steps are as follows.

-It is important that the user kicks off the docker build using BuildKit: either set DOCKER_BUILDKIT=1 as an environment variable when calling the docker build command, or set up BuildKit in the docker daemon configuration /etc/docker/daemon.json as follows and restart the daemon:
+    It is important that the user kicks off the docker build using BuildKit: either set DOCKER_BUILDKIT=1 as an environment variable when calling the docker build command, or set up BuildKit in the docker daemon configuration /etc/docker/daemon.json as follows and restart the daemon:
-```json
-{
-    "features": {
-        "buildkit": true
-    }
-}
-```
+    ```json
+    {
+        "features": {
+            "buildkit": true
+        }
+    }
+    ```

-To build vLLM on ROCm 6.3 for MI200 and MI300 series, you can use the default:
+    To build vLLM on ROCm 7.0 for MI200 and MI300 series, you can use the default:

-```bash
-DOCKER_BUILDKIT=1 docker build \
-    -f docker/Dockerfile.rocm_base \
-    -t rocm/vllm-dev:base .
-```
+    ```bash
+    DOCKER_BUILDKIT=1 docker build \
+        -f docker/Dockerfile.rocm_base \
+        -t rocm/vllm-dev:base .
+    ```

#### Build an image with vLLM

@@ -181,24 +206,24 @@ It is important that the user kicks off the docker build using buildkit. Either
}
```

-[docker/Dockerfile.rocm](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm) uses ROCm 6.3 by default, but also supports ROCm 5.7, 6.0, 6.1, and 6.2 in older vLLM branches.
+[docker/Dockerfile.rocm](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm) uses ROCm 7.0 by default, but also supports ROCm 5.7, 6.0, 6.1, 6.2, 6.3, and 6.4 in older vLLM branches.
It provides flexibility to customize the build of docker image using the following arguments:

- `BASE_IMAGE`: specifies the base image used when running `docker build`. The default value `rocm/vllm-dev:base` is an image published and maintained by AMD. It is being built using [docker/Dockerfile.rocm_base](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm_base)
- `ARG_PYTORCH_ROCM_ARCH`: allows overriding the gfx architecture values from the base docker image

Their values can be passed in when running `docker build` with `--build-arg` options.

-To build vLLM on ROCm 6.3 for MI200 and MI300 series, you can use the default:
+To build vLLM on ROCm 7.0 for MI200 and MI300 series, you can use the default:
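The default command itself falls outside this hunk; a sketch of what it presumably looks like, with an arbitrary image tag, is:

```bash
# Sketch only: default vLLM image build against docker/Dockerfile.rocm; the tag name is arbitrary.
DOCKER_BUILDKIT=1 docker build \
    -f docker/Dockerfile.rocm \
    -t vllm-rocm .
```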
docs/getting_started/installation/python_env_setup.inc.md (1 addition, 1 deletion)
@@ -1,4 +1,4 @@
-It's recommended to use [uv](https://docs.astral.sh/uv/), a very fast Python environment manager, to create and manage Python environments. Please follow the [documentation](https://docs.astral.sh/uv/#getting-started) to install `uv`. After installing `uv`, you can create a new Python environment using the following commands:
+On NVIDIA CUDA only, it's recommended to use [uv](https://docs.astral.sh/uv/), a very fast Python environment manager, to create and manage Python environments. Please follow the [documentation](https://docs.astral.sh/uv/#getting-started) to install `uv`. After installing `uv`, you can create a new Python environment using the following commands:
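The commands themselves sit outside this one-line hunk; as an illustrative sketch (the Python version is just an example), they are along these lines:

```bash
# Example: create and seed a fresh environment with uv, then activate it.
uv venv --python 3.12 --seed
source .venv/bin/activate
```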