Commit 9e5bd73

Fix Flex Attn benchmarks (#4842)
1 parent: b698f44

File tree

1 file changed (+1, -1)

benchmarks/triton_kernels_benchmark/flex_attention_benchmark_causal_mask.py

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@
 import torch._inductor
 import torch._inductor.lowering
 import torch._inductor.kernel
-import torch._inductor.kernel.flex_attention as flex_attn
+import torch._inductor.kernel.flex.flex_attention as flex_attn
 from torch._inductor.template_heuristics import FlexConfig, FlexDecodeConfig

 import triton_kernels_benchmark as benchmark_suit
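The diff shows that the flex attention lowering module moved from `torch._inductor.kernel.flex_attention` to `torch._inductor.kernel.flex.flex_attention` between PyTorch versions. If a benchmark needs to run against both layouts rather than pinning one, a fallback import is a common pattern. Below is a minimal sketch; `import_first` is a hypothetical helper name, not part of this commit or of PyTorch.

```python
import importlib


def import_first(*names):
    """Return the first module in `names` that imports successfully.

    Hypothetical helper for tolerating module relocations (such as the
    flex_attention move in this commit) across library versions.
    """
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {names!r} could be imported")


# Assumed usage against a PyTorch install (newer layout tried first):
# flex_attn = import_first(
#     "torch._inductor.kernel.flex.flex_attention",
#     "torch._inductor.kernel.flex_attention",
# )
```

The trade-off is that a silent fallback can mask a genuinely broken install, so the commit's approach of simply updating the single import is reasonable when the benchmark suite tracks one PyTorch version.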
