
Commit cfc356f

[Docs] Update installation method. (vllm-project#448)
Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
1 parent c78c326 commit cfc356f

File tree

4 files changed: +21 -6 lines changed

docs/api/README.md

Lines changed: 0 additions & 2 deletions

````diff
@@ -4,7 +4,6 @@
 
 Main entry points for vLLM-Omni inference and serving.
 
-- [vllm_omni.entrypoints.async_diffusion.AsyncOmniDiffusion][]
 - [vllm_omni.entrypoints.async_omni.AsyncOmni][]
 - [vllm_omni.entrypoints.async_omni.AsyncOmniStageLLM][]
 - [vllm_omni.entrypoints.chat_utils.OmniAsyncMultiModalContentParser][]
@@ -18,7 +17,6 @@ Main entry points for vLLM-Omni inference and serving.
 - [vllm_omni.entrypoints.omni_llm.OmniLLM][]
 - [vllm_omni.entrypoints.omni_llm.OmniStageLLM][]
 - [vllm_omni.entrypoints.omni_stage.OmniStage][]
-- [vllm_omni.entrypoints.openai.serving_chat.OmniOpenAIServingChat][]
 
 ## Inputs
 
````

docs/getting_started/installation/gpu/cuda.inc.md

Lines changed: 2 additions & 0 deletions

````diff
@@ -17,6 +17,8 @@ Therefore, it is recommended to install vLLM and vLLM-Omni with a **fresh new**
 # --8<-- [start:pre-built-wheels]
 
 #### Installation of vLLM
+Note: Pre-built wheels are currently only available for vLLM-Omni 0.11.0rc1. For the latest version, please [build from source](https://docs.vllm.ai/projects/vllm-omni/en/latest/getting_started/installation/gpu/#build-wheel-from-source).
+
 
 vLLM-Omni is built based on vLLM. Please install it with command below.
 ```bash
````

docs/getting_started/quickstart.md

Lines changed: 6 additions & 3 deletions

````diff
@@ -12,14 +12,17 @@ This guide will help you quickly get started with vLLM-Omni to perform:
 
 ## Installation
 
-For installation on GPU using pre-built-wheel:
+For installation on GPU from source:
 
 ```bash
 uv venv --python 3.12 --seed
 source .venv/bin/activate
-uv pip install vllm==0.11.0 --torch-backend=auto
-uv pip install vllm-omni
+uv pip install vllm==0.12.0 --torch-backend=auto
+git clone https://github.com/vllm-project/vllm-omni.git
+cd vllm-omni
+uv pip install -e .
 ```
+
 For additional details—including alternative installation methods, installation on NPU and other platforms — please see the installation guide in [installation](installation/README.md)
 
 ## Offline Inference
````

docs/user_guide/examples/offline_inference/image_to_image.md

Lines changed: 13 additions & 1 deletion

```````diff
@@ -9,6 +9,14 @@ This example edits an input image with `Qwen/Qwen-Image-Edit` using the `image_e
 
 ### Single Image Editing
 
+Download the example image:
+
+```bash
+wget https://vllm-public-assets.s3.us-west-2.amazonaws.com/omni-assets/qwen-bear.png
+```
+
+Then run:
+
 ```bash
 python image_edit.py \
 --image qwen_bear.png \
@@ -20,7 +28,7 @@ python image_edit.py \
 
 ### Multiple Image Editing (Qwen-Image-Edit-2509)
 
-For multiple image inputs, use `Qwen/Qwen-Image-Edit-2509` or later version:
+For multiple image inputs, use `Qwen/Qwen-Image-Edit-2509` or `Qwen/Qwen-Image-Edit-2511`:
 
 ```bash
 python image_edit.py \
@@ -49,3 +57,7 @@ Key arguments:
 ``````py
 --8<-- "examples/offline_inference/image_to_image/image_edit.py"
 ``````
+??? abstract "run_qwen_image_edit_2511.sh"
+``````sh
+--8<-- "examples/offline_inference/image_to_image/run_qwen_image_edit_2511.sh"
+``````
```````
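One detail worth noting in the hunks above: the added download step saves the asset as `qwen-bear.png` (hyphen), while the existing example command passes `--image qwen_bear.png` (underscore). A minimal sketch that reconciles the two names before running, with the `wget` call left commented out so the sketch has no network dependency (filenames are taken from the diff; the rename direction is an assumption):

```shell
# Reconcile the downloaded asset name with the --image flag used in the docs.
# The wget call is commented out here so the sketch runs offline.
# wget https://vllm-public-assets.s3.us-west-2.amazonaws.com/omni-assets/qwen-bear.png
touch qwen-bear.png            # stand-in for the downloaded image
mv qwen-bear.png qwen_bear.png # match the name the example command expects
ls qwen_bear.png
```

Alternatively, `wget -O qwen_bear.png <url>` would download straight to the expected name.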
