Commit 457b969

docs: readme

Signed-off-by: thxCode <[email protected]>
1 parent: bd631b4

File tree: 1 file changed (+15, -9 lines)

README.md (15 additions, 9 deletions)
@@ -33,7 +33,8 @@ The following table lists the supported accelerated backends and their correspon
 and [#2795](https://github.com/vllm-project/vllm-ascend/issues/2795).
 
 > [!IMPORTANT]
-> - Update MindIE 2.2.rc1 and 2.1.rc2 with the [`av` package installed](https://github.com/gpustack/gpustack/issues/2016#issuecomment-3631228085) and the [ATB model patched](https://github.com/gpustack/gpustack/issues/2016#issuecomment-3646603380).
+> - Applied the [ATB model patch](https://github.com/gpustack/gpustack/issues/2016#issuecomment-3646603380) to MindIE 2.2.rc1/2.1.rc2.
+> - Applied the [`av` package](https://github.com/gpustack/gpustack/issues/2016#issuecomment-3631228085) to MindIE 2.2.rc1/2.1.rc2.
 > - Update vLLM 0.11.0 with the stable vLLM Ascend plugin.
 
 | CANN Version <br/> (Variant) | MindIE | vLLM | SGLang |
@@ -62,7 +63,8 @@ The following table lists the supported accelerated backends and their correspon
 `7.5 8.0+PTX 8.9 9.0+PTX`.
 
 > [!IMPORTANT]
-> - Update vLLM 0.11.2 with the [Qwen2.5 VL patch](https://github.com/gpustack/gpustack/issues/3606).
+> - Applied the [Qwen2.5 VL patch](https://github.com/gpustack/gpustack/issues/3606) to vLLM 0.11.2.
+> - Applied the [`vllm[audio]` packages](https://github.com/vllm-project/vllm/blob/275de34170654274616082721348b7edd9741d32/setup.py#L720-L724) to vLLM 0.11.2.
 
 | CUDA Version <br/> (Variant) | vLLM | SGLang | VoxBox |
 |------------------------------|----------------------------------------------------------------------------|-----------------------------------------------------------|----------|
@@ -93,15 +95,19 @@ The following table lists the supported accelerated backends and their correspon
 `gfx908 gfx90a gfx942 gfx1030 gfx1100`.
 
 > [!WARNING]
-> - ROCm 7.0 vLLM `0.11.2` and `0.11.0` reuse the official ROCm 6.4 PyTorch 2.9 wheel package rather than a ROCm
-    7.0-specific PyTorch build. Although ROCm 7.0 is supported, `gfx1150`/`gfx1151` are not supported yet.
+> - ROCm 7.0 vLLM `0.11.2`/`0.11.0` reuse the official ROCm 6.4 PyTorch 2.9 wheel package rather than a ROCm
+    7.0-specific PyTorch build. Although vLLM `0.11.2`/`0.11.0` support ROCm 7.0, `gfx1150`/`gfx1151` are not supported yet.
 > - SGLang supports `gfx942` only.
 
-| ROCm Version <br/> (Variant) | vLLM | SGLang |
-|------------------------------|------------------------------------|------------------------------|
-| 7.0 | `0.12.0`, `0.11.2`, <br/> `0.11.0` | `0.5.6.post2` |
-| 6.4 | `0.12.0`, `0.11.2`, <br/> `0.10.2` | `0.5.6.post2`, `0.5.5.post3` |
-| 6.3 | `0.10.1.1`, `0.10.0` | |
+> [!IMPORTANT]
+> - Applied the [`vllm[audio]` packages](https://github.com/vllm-project/vllm/blob/275de34170654274616082721348b7edd9741d32/setup.py#L720-L724) to vLLM 0.11.2.
+> - Applied the [petit-kernel package](https://github.com/vllm-project/vllm/blob/275de34170654274616082721348b7edd9741d32/setup.py#L728) to vLLM 0.11.2 and SGLang 0.5.5.post3.
+
+| ROCm Version <br/> (Variant) | vLLM | SGLang |
+|------------------------------|----------------------------------------|----------------------------------|
+| 7.0 | `0.12.0`, **`0.11.2`**, <br/> `0.11.0` | `0.5.6.post2` |
+| 6.4 | `0.12.0`, **`0.11.2`**, <br/> `0.10.2` | `0.5.6.post2`, **`0.5.5.post3`** |
+| 6.3 | `0.10.1.1`, `0.10.0` | |
 
 ## Directory Structure