
Commit 4816fd9

liangan1 authored and pytorchmergebot committed

[xpu][test][FlexAttention] Enable test_GQA on Intel XPU (pytorch#166376)

Pull Request resolved: pytorch#166376. Approved by: https://github.com/drisspg, https://github.com/EikanWang

1 parent ed2e92b · commit 4816fd9

File tree: 1 file changed (+0, −1)

test/inductor/test_flex_attention.py

Lines changed: 0 additions & 1 deletion
@@ -1477,7 +1477,6 @@ def mask_mod(b, h, q, kv):
     @dtypesIfXPU(*device_configs["xpu"].dtypes_fast)
     @common_utils.parametrize("score_mod", test_score_mods)
     @skip_on_rocm  # TODO: NaNs on ROCM
-    @skip_on_xpu  # TODO: NaNs on XPU like ROCM, need another PR to fix.
     def test_GQA(self, device, dtype: torch.dtype, score_mod: Callable):
         inputs = (
             score_mod,
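
Removing the @skip_on_xpu decorator means test_GQA now runs on Intel XPU devices. For context, below is a minimal sketch (not taken from the test file or this commit) of the grouped-query attention path the test exercises, assuming a recent PyTorch build with the torch.nn.attention.flex_attention API; the device fallback, tensor shapes, and the identity score_mod are illustrative assumptions, while the real test parametrizes over test_score_mods and several dtypes.

    # A minimal sketch of the GQA call covered by test_GQA (assumptions noted above).
    import torch
    from torch.nn.attention.flex_attention import flex_attention

    # Run on XPU when available, otherwise fall back to CPU.
    device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

    # GQA: more query heads than key/value heads; Hq must be a multiple of Hkv.
    B, Hq, Hkv, S, D = 2, 8, 2, 128, 64
    q = torch.randn(B, Hq, S, D, device=device)
    k = torch.randn(B, Hkv, S, D, device=device)
    v = torch.randn(B, Hkv, S, D, device=device)

    def score_mod(score, b, h, q_idx, kv_idx):
        # Identity modification: leave each attention score unchanged.
        return score

    # enable_gqa=True broadcasts the Hkv key/value heads across the Hq query heads.
    out = flex_attention(q, k, v, score_mod=score_mod, enable_gqa=True)
    print(out.shape)  # torch.Size([2, 8, 128, 64])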
