Commit 689b34f

bugfix: fix multi-gpu/node unit-test: skip when there aren't enough GPUs in test_trtllm_mnnvl_allreduce (#1627)
Similar to PR1600, `test_trtllm_mnnvl_allreduce.py` should skip when the MPI world size is too small instead of failing the unit test.

## 📌 Description

Replace the `sys.exit(1)` taken on under-provisioned runs with `pytest.skip(...)`, so the test is reported as skipped rather than failed when fewer than 2 MPI ranks are available, and drop the now-unused `import sys`.

## 🚀 Pull Request Checklist

### ✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit` (or used my preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

## 🧪 Tests

- [x] Tests have been added or updated as needed.
- [x] All tests are passing (`unittest`, etc.).
1 parent 9b861cd commit 689b34f

File tree

1 file changed: +1 −4 lines


tests/test_trtllm_mnnvl_allreduce.py

Lines changed: 1 addition & 4 deletions
```diff
@@ -1,5 +1,4 @@
 # Check torch version:
-import sys
 from typing import Tuple
 
 import pytest
@@ -179,9 +178,7 @@ def test_mnnvl_allreduce_full(
 
     # Ensure we have exactly 2 ranks for this test
     if world_size < 2:
-        if rank == 0:
-            print(f"ERROR: This test requires at least 2 MPI ranks, got {world_size}")
-        sys.exit(1)
+        pytest.skip(f"This test requires at least 2 MPI ranks, got {world_size}")
 
     mapping = Mapping(
         world_size=world_size,
```