Commit 7bc948c

Don't use the default value for device fixture as this will break it. (#7675)
Example for XPU backend:

```bash
=========================================================================== short test summary info ============================================================================
FAILED python/test/unit/language/test_core.py::test_split_subview[128-128-64-64] - AssertionError: Torch not compiled with CUDA enabled
FAILED python/test/unit/language/test_core.py::test_split_subview[128-128-64-32] - AssertionError: Torch not compiled with CUDA enabled
FAILED python/test/unit/language/test_core.py::test_split_subview[128-64-64-32] - AssertionError: Torch not compiled with CUDA enabled
FAILED python/test/unit/language/test_core.py::test_split_subview[256-128-64-64] - AssertionError: Torch not compiled with CUDA enabled
============================================================================== 4 failed in 4.99s ==============================================================================
```
1 parent d4af439 commit 7bc948c

File tree

1 file changed: +1 −1 lines changed


python/test/unit/language/test_core.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -6515,7 +6515,7 @@ def test_local_load_store(M, N, K, dist_layout, shared_layout, device, tmp_path:
 @pytest.mark.parametrize("M, N, M_tile_size, N_tile_size",
                          [[128, 128, 64, 64], [128, 128, 64, 32], [128, 64, 64, 32], [256, 128, 64, 64]])
-def test_split_subview(M, N, M_tile_size, N_tile_size, device='cuda'):
+def test_split_subview(M, N, M_tile_size, N_tile_size, device):
     num_rows_per_warp = THREADS_PER_WARP // 4
     num_repeats_M = triton.cdiv(M, M_tile_size)
     num_repeats_N = triton.cdiv(N, N_tile_size)
```

0 commit comments
