|
3 | 3 | This document guides you through using vLLM with TRL for faster generation in online methods like GRPO and Online DPO. We first give a tl;dr on how to use vLLM with TRL, and then go into the details of how it works under the hood. |
4 | 4 |
|
5 | 5 | > [!WARNING] |
6 | | -> TRL currently only supports vLLM versions `0.10.2`, `0.11.0`, `0.11.1`, `0.11.2` and `0.12.0`. Please ensure you have one of these versions installed to avoid compatibility issues. |
| 6 | +> TRL currently only supports vLLM versions from `0.10.2` to `0.14.1`. Please ensure you have a version in this range installed to avoid compatibility issues. |
7 | 7 |
|
8 | 8 | > [!TIP] |
9 | 9 | > The following trainers currently support generation with vLLM: |
@@ -31,12 +31,12 @@ pip install "trl[vllm]" |
31 | 31 | Then run the server on specific GPUs (e.g., GPUs 0-3): |
32 | 32 |
|
33 | 33 | ```sh |
34 | | -CUDA_VISIBLE_DEVICES=0,1,2,3 trl vllm-serve --model Qwen/Qwen2.5-7B --tensor-parallel-size 2 --data-parallel-size 2 |
| 34 | +CUDA_VISIBLE_DEVICES=0,1,2,3 trl vllm-serve --model Qwen/Qwen2.5-7B --tensor-parallel-size 4 |
35 | 35 | ``` |
36 | 36 |
|
37 | 37 | Once the server is running, you can use it to generate completions for training. The example below shows how to use the supported trainers with the vLLM server for generation. The `--tensor-parallel-size` and `--data-parallel-size` arguments control how the model and data are sharded across GPUs. |
38 | 38 |
|
39 | | -In this example, we are sharding two copies of the model across 4 GPUs. Increasing data parallelism increases throughput, while increasing tensor parallelism allows for serving larger models. Then, run the training script on different GPUs (e.g., GPUs 4-7) by passing `use_vllm=True` in the training arguments as follows: |
| 39 | +In this example, we shard one model across 4 GPUs with tensor parallelism. Then, run the training script on different GPUs (e.g., GPUs 4-7) by passing `use_vllm=True` in the training arguments as follows: |
40 | 40 |
|
41 | 41 | Sample of a simple `train.py` script: |
42 | 42 |
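Below is a minimal sketch of what such a script might look like, assuming `GRPOTrainer` with `use_vllm=True` in `GRPOConfig`; the dataset and the toy reward function are placeholders chosen purely for illustration:

```python
# train.py - minimal GRPO example that delegates generation to the vLLM server.
# The dataset and reward function below are illustrative placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions close to 100 characters.
def reward_len(completions, **kwargs):
    return [-abs(100 - len(completion)) for completion in completions]

training_args = GRPOConfig(
    output_dir="Qwen2.5-7B-GRPO",
    use_vllm=True,  # generation is offloaded to the running `trl vllm-serve` instance
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

With the server occupying GPUs 0-3, this script would then typically be launched on the remaining GPUs, for example with `CUDA_VISIBLE_DEVICES=4,5,6,7 accelerate launch train.py`.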
|
@@ -166,19 +166,15 @@ If you've ever done autoregressive decoder training, you know all the input toke |
166 | 166 | When you run, for example: |
167 | 167 |
|
168 | 168 | ```sh |
169 | | -CUDA_VISIBLE_DEVICES=0,1,2,3 trl vllm-serve --model Qwen/Qwen2.5-7B --tensor-parallel-size 1 --data-parallel-size 4 |
| 169 | +CUDA_VISIBLE_DEVICES=0,1,2,3 trl vllm-serve --model Qwen/Qwen2.5-7B --tensor-parallel-size 4 |
170 | 170 | ``` |
171 | 171 |
|
172 | | -the following happens: |
| 172 | +1. vLLM first spawns multiple workers. The number of workers is the product of the `--tensor-parallel-size` and `--data-parallel-size` values; in this example, it spawns 4 workers (4 × 1). |
| 173 | +With `--data-parallel-size 1`, these workers are not independent model replicas: they form a single tensor-parallel group that serves every incoming request (the prompts sent to the server for generation) together. Workers only split the incoming requests into separate chunks when `--data-parallel-size` is greater than 1. |
173 | 174 |
|
174 | | - |
| 175 | +2. Once the incoming requests (prompts) arrive, the model starts generating completions. Internally, the model’s weights are split across multiple GPUs according to the `--tensor-parallel-size` argument; this is how tensor parallelism is handled, and all of the GPUs cooperate on each forward pass. |
175 | 176 |
|
176 | | -1. vLLM first spawns multiple workers to handle incoming requests in parallel. The number of workers is determined by multiplying the `--tensor-parallel-size` and `--data-parallel-size` values. In this example, it spawns 4 workers (1 × 4). |
177 | | -Each worker operates independently and processes a chunk of the incoming requests — which are basically the prompts sent to the server for generation. A key point to understand is that these 4 workers are running in parallel, and each one is responsible for handling a subset of the total incoming load. |
178 | | - |
179 | | -2. Once the incoming requests (prompts) are distributed across the workers, the model starts generating completions. Internally, the model’s weights are split across multiple GPUs based on the `--tensor-parallel-size` argument — this is how tensor parallelism is handled. Meanwhile, data parallelism (controlled by `--data-parallel-size`) ensures that different sets of requests are processed independently across the workers. In short: tensor parallelism splits the model across GPUs, and data parallelism splits the batch of requests across different model replicas. |
180 | | - |
181 | | -3. Although the GPUs process requests independently and in parallel, they still need to communicate with each other. Remember that each GPU handles only a slice of the incoming prompts (for example, with 4 GPUs and 8 prompts using `--data-parallel-size=4`, each GPU processes 2 prompts). |
| 177 | +3. Because the GPUs cooperate on every request, they still need to communicate with each other during generation. Remember that with tensor parallelism each GPU holds only a slice of the model weights (for example, with 4 GPUs and `--tensor-parallel-size=4`, each GPU stores roughly a quarter of the weights and all 4 of them jointly serve the 8 incoming prompts). |
182 | 178 | This GPU-to-GPU communication is managed efficiently by NVIDIA’s NCCL library. With tensor parallelism, the communication mainly exchanges partial activations between the GPUs during each forward pass, so that together they produce the same output as a single GPU holding the full model. |
183 | 179 | Separately, the number of completions to generate per prompt is controlled by the `num_generations` setting in the GRPO config. For instance, if you set `num_generations=2` (like in the picture above), each prompt will have 2 completions. So, with 8 prompts and `num_generations=2`, you would end up with 16 completions total — regardless of the number of GPUs or parallelism settings. |
184 | 180 |
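To make the arithmetic in the steps above concrete, here is a small illustrative snippet; the values simply mirror the example (hypothetical batch of 8 prompts, `num_generations=2`, `--tensor-parallel-size 4`):

```python
# Illustrative arithmetic only; the values mirror the example above.
tensor_parallel_size = 4   # model weights sharded across 4 GPUs
data_parallel_size = 1     # a single model replica (no data parallelism)
num_workers = tensor_parallel_size * data_parallel_size  # 4 workers are spawned

num_prompts = 8            # prompts sent to the server in one batch
num_generations = 2        # completions per prompt, from the GRPO config
total_completions = num_prompts * num_generations  # 16, regardless of tp/dp settings

print(num_workers, total_completions)  # 4 16
```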
|
@@ -224,7 +220,9 @@ options: |
224 | 220 | --tensor_parallel_size TENSOR_PARALLEL_SIZE, --tensor-parallel-size TENSOR_PARALLEL_SIZE |
225 | 221 | Number of tensor parallel workers to use. (default: 1) |
226 | 222 | --data_parallel_size DATA_PARALLEL_SIZE, --data-parallel-size DATA_PARALLEL_SIZE |
227 | | - Number of data parallel workers to use. (default: 1) |
| 223 | +                        Number of data parallel workers to use. For dense models, keep this at 1. Starting from vLLM `0.14.0`, setting |
| 224 | +                        this above `1` for dense models is no longer supported and will error out (see vLLM PR #30739). |
| 225 | + (default: 1) |
228 | 226 | --host HOST Host address to run the server on. (default: 0.0.0.0) |
229 | 227 | --port PORT Port to run the server on. (default: 8000) |
230 | 228 | --gpu_memory_utilization GPU_MEMORY_UTILIZATION, --gpu-memory-utilization GPU_MEMORY_UTILIZATION |
@@ -259,20 +257,8 @@ options: |
259 | 257 |  |
260 | 258 |  |
261 | 259 |
|
262 | | -First and foremost, always remember that the optimal setup depends on: |
263 | | - |
264 | | -- The model size |
265 | | -- The number of GPUs you have |
266 | | -- The GPU memory size |
267 | | -- The batch size you are using |
268 | | -- The number of requests you are sending to the server (prompts) |
269 | | -- The `max_model_len` you are using (this is the max length of the input sequence that the model can process, a.k.a. the context window size) |
270 | | -- The number of completions you are generating for each request (`num_generations`) |
271 | | - |
272 | | -Given these factors, our experiments on the Qwen model family (3B, 7B, 14B, 32B) using 8 H100 GPUs show that: |
273 | | - |
274 | | -- For reasonable-sized models (3B–14B) and a moderate context window (`max_len < 8k`), using full capacity for data parallelism gives better throughput. The setup `(tp=1, dp=8)` yields the best results. |
275 | | -- For larger models (32B) and longer context windows (`max_len > 8k`), a smaller DP size combined with some model-side parallelism performs better. For example, `(tp=2, dp=4)` is a good setup for 32B models with a larger context window. |
| 260 | +> [!WARNING] |
| 261 | +> The benchmark plots above were collected with older vLLM versions. Starting with [vLLM PR #30739](https://github.com/vllm-project/vllm/pull/30739) (released in `0.14.0`), offline data parallel scaling for non-MoE (dense) models is no longer supported. For dense models, keep `--data-parallel-size` at 1 and scale across GPUs with tensor parallelism instead. |
276 | 262 |
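In practice, with vLLM `0.14.0` and later this means scaling a dense model with tensor parallelism only. A hypothetical example on an 8-GPU node (model name and GPU count are illustrative):

```sh
# Dense model: keep --data-parallel-size at its default of 1 and
# shard the model across all GPUs with tensor parallelism instead.
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 trl vllm-serve --model Qwen/Qwen2.5-32B --tensor-parallel-size 8
```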
|
277 | 263 | ### vLLM with Transformers Backend |
278 | 264 |
|
|