
Commit 30e7752

WrRan and zhuohan123 authored
fix typo (#1184)
Co-authored-by: Zhuohan Li <[email protected]>
1 parent 21877b0 commit 30e7752

2 files changed: +5 −6 lines


vllm/engine/llm_engine.py

Lines changed: 2 additions & 2 deletions

@@ -54,8 +54,8 @@ class LLMEngine:
         scheduler_config: The configuration related to the request scheduler.
         distributed_init_method: The initialization method for distributed
             execution. See `torch.distributed.init_process_group` for details.
-        stage_devices: The list of devices for each stage. Each stage is a list
-            of (rank, node_resource, device) tuples.
+        placement_group: Ray placement group for distributed execution.
+            Required for distributed execution.
         log_stats: Whether to log statistics.
     """
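For context, the `placement_group` argument introduced in this docstring is a standard Ray placement group. A minimal sketch of creating one, assuming two single-GPU workers (the bundle shape and PACK strategy are illustrative assumptions, not part of this commit):

    import ray
    from ray.util.placement_group import placement_group

    ray.init()

    # One resource bundle per distributed worker; two single-GPU workers assumed.
    pg = placement_group([{"GPU": 1}] * 2, strategy="PACK")
    ray.get(pg.ready())  # Block until Ray has reserved the requested resources.

Per the updated docstring, such a group is required when running the engine in distributed mode.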

vllm/engine/ray_utils.py

Lines changed: 3 additions & 4 deletions

@@ -63,11 +63,10 @@ def initialize_cluster(
             the default Ray cluster address.

     Returns:
-        A tuple of (`distributed_init_method`, `all_stage_devices`). The
+        A tuple of (`distributed_init_method`, `placement_group`). The
         `distributed_init_method` is the address for initializing the
-        distributed backend. `all_stage_devices` includes device IDs for
-        each worker in each pipeline stage. Each device ID is a tuple of
-        (rank, node resource, device id).
+        distributed backend. `placement_group` includes the specification
+        of the resources for each distributed worker.
     """
     if parallel_config.worker_use_ray or engine_use_ray:
         if ray is None:
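A hedged usage sketch of the updated return value; the `ParallelConfig` constructor arguments below are assumptions for illustration and are not shown by this commit:

    from vllm.config import ParallelConfig
    from vllm.engine.ray_utils import initialize_cluster

    # Assumed signature: ParallelConfig(pipeline_parallel_size,
    # tensor_parallel_size, worker_use_ray) -- an illustration only.
    parallel_config = ParallelConfig(1, 2, True)

    # Unpack the tuple documented above: an address for initializing the
    # distributed backend, and the Ray placement group describing the
    # resources for each distributed worker.
    distributed_init_method, placement_group = initialize_cluster(parallel_config)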
