Commit b19a5d2

committed
add test for early return with dp size 1
Signed-off-by: Hao Wu <skyw@nvidia.com>
1 parent da249ce commit b19a5d2

File tree

1 file changed: 2 additions, 1 deletion


tests/unit_tests/distributed/test_param_and_grad_buffer.py

Lines changed: 2 additions & 1 deletion

@@ -382,7 +382,8 @@ def test_start_param_sync_dp_size_1():
     """When dp_size == 1 (e.g., expt_dp_size == 1), start_param_sync should set
     param_gather_dispatched=True and return immediately without launching any
     all-gather collective."""
-    Utils.initialize_model_parallel(tensor_model_parallel_size=8)
+    world_size = torch.distributed.get_world_size()
+    Utils.initialize_model_parallel(tensor_model_parallel_size=world_size)

     ddp_config = DistributedDataParallelConfig(
         grad_reduce_in_fp32=True,
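
The behavior this test exercises can be sketched in isolation. Below is a minimal, self-contained stand-in (not Megatron's actual `ParamAndGradBuffer` implementation; the `Bucket` class, `collective_launched` flag, and constructor signature are invented for illustration) showing the early-return pattern: when the data-parallel group has size 1 there are no peers to gather parameters from, so the dispatch flag is set and no collective is launched.

```python
class Bucket:
    """Hypothetical sketch of a parameter bucket with an all-gather dispatch flag."""

    def __init__(self, dp_size):
        self.dp_size = dp_size
        self.param_gather_dispatched = False
        self.collective_launched = False  # tracks whether a collective ran

    def start_param_sync(self):
        # Early return: with a data-parallel group of size 1 there are no
        # peers to gather from, so mark the gather as dispatched and skip
        # the collective entirely.
        if self.dp_size == 1:
            self.param_gather_dispatched = True
            return
        self.collective_launched = True  # stand-in for an all-gather call
        self.param_gather_dispatched = True


bucket = Bucket(dp_size=1)
bucket.start_param_sync()
print(bucket.param_gather_dispatched, bucket.collective_launched)
```

Setting `tensor_model_parallel_size=world_size` in the test forces the data-parallel size to 1 regardless of how many ranks the test runs on, which is what the commit changes from the hard-coded `8`.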

0 commit comments
