
Commit c162321

fix failing test, typo
1 parent c069ad0 commit c162321

2 files changed: +2 additions, −2 deletions

.github/packaging/vllm_reqs.txt

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# These requirements were generated by running steps 1-3 of scripts/build_wheels.shell
+# These requirements were generated by running steps 1-3 of scripts/build_wheels.sh
 # then running pip freeze and manually removing the vllm dependency.
 # The intention of this file is to use these known requirements for a fixed
 # vLLM build to supplement a vLLM install from download.pytorch.org without
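The comment above describes how this pinned list was produced: run steps 1-3 of scripts/build_wheels.sh, pip freeze the resulting environment, and strip the vllm line by hand. A minimal sketch of that freeze-and-filter step, assuming it were scripted in Python rather than done manually (only the output path comes from this commit; everything else is illustrative):

# Hypothetical helper; the repository performs this step manually.
import subprocess

# Capture the fully pinned environment left by build_wheels.sh steps 1-3.
frozen = subprocess.run(
    ["pip", "freeze"], capture_output=True, text=True, check=True
).stdout.splitlines()

# Drop the vllm pin itself, since vLLM is installed separately from
# download.pytorch.org rather than from this requirements file.
reqs = [
    line for line in frozen
    if line.split("==")[0].strip().lower() != "vllm"
]

with open(".github/packaging/vllm_reqs.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(reqs) + "\n")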

tests/unit_tests/test_provisioner.py

Lines changed: 1 addition & 1 deletion
@@ -144,7 +144,7 @@ def test_provisioner_empty_cuda_visible_devices(self):
         available = local_gpu_manager.get_available_gpus()
         assert available == [str(i) for i in range(8)]

-    @mock.patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0,1,2"}, clear=True)
+    @mock.patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0,1,2"}, clear=False)
     @pytest.mark.asyncio
     async def test_get_proc_mesh_respects_cuda_visible_devices(self):
         """Test that get_proc_mesh uses CUDA_VISIBLE_DEVICES for local allocation."""

0 commit comments
