Commit 25d585a

[XPU] Enable external_launcher to serve as an executor via torchrun (vllm-project#21021)
Signed-off-by: chzhang <[email protected]>
1 parent 8d0a01a commit 25d585a

1 file changed (+3, −1 lines)

vllm/v1/worker/xpu_worker.py

Lines changed: 3 additions & 1 deletion
@@ -7,6 +7,7 @@
 
 import vllm.envs as envs
 from vllm.config import VllmConfig
+from vllm.distributed import get_world_group
 from vllm.logger import init_logger
 from vllm.model_executor import set_random_seed
 from vllm.platforms import current_platform
@@ -155,7 +156,8 @@ def init_device(self):
                                             current_platform.dist_backend)
 
         # global all_reduce needed for overall oneccl warm up
-        torch.distributed.all_reduce(torch.zeros(1).xpu())
+        torch.distributed.all_reduce(torch.zeros(1).xpu(),
+                                     group=get_world_group().device_group)
 
         # Set random seed.
         set_random_seed(self.model_config.seed)
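
Why the change: the oneCCL warm-up all_reduce previously ran on torch.distributed's default process group; with the explicit group=get_world_group().device_group argument it runs on the world group that vLLM itself sets up for its workers, which appears to be what makes the workers usable when process creation is owned by an external launcher such as torchrun. Below is a minimal usage sketch of the external_launcher path this commit enables on XPU; the script name, model, and world size are illustrative placeholders and are not part of the commit.

    # torchrun_xpu_example.py -- hypothetical file name
    # Launch with: torchrun --nproc-per-node=2 torchrun_xpu_example.py
    import torch.distributed as dist

    from vllm import LLM, SamplingParams

    llm = LLM(
        model="facebook/opt-125m",              # placeholder model
        tensor_parallel_size=2,                 # must match --nproc-per-node
        distributed_executor_backend="external_launcher",
    )

    outputs = llm.generate(["Hello, my name is"],
                           SamplingParams(max_tokens=16))

    # All ranks compute the same results; print from rank 0 only.
    if dist.get_rank() == 0:
        for out in outputs:
            print(out.outputs[0].text)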
