Commit a3ae45a

[Misc] fix tests failure by using current_platform (vllm-project#25825)
Signed-off-by: Juechen Liu <[email protected]>
1 parent: 0307428

File tree: 1 file changed (+1 -1 lines changed)

vllm/attention/ops/triton_reshape_and_cache_flash.py

Lines changed: 1 addition & 1 deletion

@@ -137,7 +137,7 @@ def triton_reshape_and_cache_flash(
 
     # heuristics instead of autotuning
     TILE_SIZE = min(2048, triton.next_power_of_2(n))
-    if torch.version.hip or torch.version.xpu:
+    if current_platform.is_rocm() or current_platform.is_xpu():
         num_stages = 4
         num_warps = 8
     else:  # cuda
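Per the commit title, probing the torch build directly (`torch.version.hip` / `torch.version.xpu`) caused test failures; routing the check through `current_platform` centralizes backend detection behind vLLM's platform abstraction. A minimal sketch of that dispatch pattern follows, where `CurrentPlatform` and `use_wide_config` are illustrative stand-ins, not vLLM's actual implementation:

```python
# Sketch of the platform-dispatch pattern the commit adopts.
# "CurrentPlatform" is a hypothetical stand-in for vLLM's
# vllm.platforms.current_platform; real backend detection is more involved.

class CurrentPlatform:
    """Toy platform object exposing is_rocm()/is_xpu() predicates."""

    def __init__(self, device_type: str) -> None:
        self.device_type = device_type  # e.g. "cuda", "rocm", "xpu"

    def is_rocm(self) -> bool:
        return self.device_type == "rocm"

    def is_xpu(self) -> bool:
        return self.device_type == "xpu"


def use_wide_config(platform: CurrentPlatform) -> bool:
    # Mirrors the patched condition: ROCm and XPU take the
    # num_stages=4 / num_warps=8 branch of the launch heuristic.
    return platform.is_rocm() or platform.is_xpu()
```

Querying a single platform object rather than torch build attributes means the kernel heuristic follows whatever platform vLLM has actually selected, which matters in test environments where the installed torch build and the exercised backend can disagree.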

0 commit comments
