
Commit c626501

chore: update xfails report
Auto-generated xfails report based on current test suite markers.
1 parent: 2062dec · commit: c626501

File tree

1 file changed: +97, -0 lines changed


reports/xfails_report.txt

Lines changed: 97 additions & 0 deletions
@@ -0,0 +1,97 @@
Scanning for test files in /home/runner/work/flashinfer/flashinfer/tests...
Found 101 test files
Collecting xfail markers...
====================================================================================================
XFAILS REPORT
====================================================================================================

Total xfails: 10
Unique reasons: 8


----------------------------------------------------------------------------------------------------
Reason                                                      Count  Type
----------------------------------------------------------------------------------------------------
Expected failure for SM120/121 for now since the tile s...      2  decorator
NOTE(Zihao): temporarily disable cuda graph until we fu...      2  runtime
NOTE(Zihao): attention sink with sliding window and non...      1  runtime
seq_len=514 is known to fail in full test suite occasio...      1  parameter
nvidia-cutlass-dsl has issue when l=1                           1  runtime
str(e)                                                          1  runtime
Note(jimmzhou): Make MxFP4xBf16 nonfunctional on SM103 ...      1  runtime
Numerical accuracy issue on SM 121 (Spark)                      1  decorator
----------------------------------------------------------------------------------------------------


====================================================================================================
DETAILED BREAKDOWN BY REASON
====================================================================================================

[2 xfails] Expected failure for SM120/121 for now since the tile size/number of stages is too large.
----------------------------------------------------------------------------------------------------
• tests/attention/test_batch_attention.py:211
    Test: test_batch_attention_with_noncontiguous_q
    Type: decorator
    Condition: get_compute_capability(torch.device(device='cuda'))[0] == 12

• tests/attention/test_batch_attention.py:260
    Test: test_batch_attention_correctness
    Type: decorator
    Condition: get_compute_capability(torch.device(device='cuda'))[0] == 12


[2 xfails] NOTE(Zihao): temporarily disable cuda graph until we fully fix the workspace buffer overflow issue for prefill + cudagraph
----------------------------------------------------------------------------------------------------
• tests/attention/test_batch_prefill_kernels.py:81
    Test: test_batch_prefill_with_paged_kv_cache
    Type: runtime

• tests/attention/test_batch_prefill_kernels.py:321
    Test: test_batch_prefill_with_tuple_paged_kv_cache
    Type: runtime


[1 xfail] NOTE(Zihao): attention sink with sliding window and non-causal will fail after https://github.com/flashinfer-ai/flashinfer/pull/1661, temporarily xfail the test.
----------------------------------------------------------------------------------------------------
• tests/attention/test_attention_sink.py:643
    Test: test_attention_sink_chunk_prefill
    Type: runtime


[1 xfail] seq_len=514 is known to fail in full test suite occasionally
----------------------------------------------------------------------------------------------------
• tests/attention/test_xqa.py:138
    Test: test_xqa
    Type: parameter
    Strict: False


[1 xfail] nvidia-cutlass-dsl has issue when l=1
----------------------------------------------------------------------------------------------------
• tests/gemm/test_cute_dsl_blockscaled_gemm.py:93
    Test: test_blockscaled_gemm_python_interface
    Type: runtime


[1 xfail] str(e)
----------------------------------------------------------------------------------------------------
• tests/gemm/test_mm_fp4.py:92
    Test: _test_mm_fp4
    Type: runtime


[1 xfail] Note(jimmzhou): Make MxFP4xBf16 nonfunctional on SM103 to avoid B200 regression
----------------------------------------------------------------------------------------------------
• tests/moe/utils.py:103
    Test: skip_checks
    Type: runtime


[1 xfail] Numerical accuracy issue on SM 121 (Spark)
----------------------------------------------------------------------------------------------------
• tests/utils/test_jit_example.py:173
    Test: test_dump_logits
    Type: decorator
    Condition: get_compute_capability(torch.device('cuda:0')) == (12, 1)

====================================================================================================
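
For reference, the Type column above distinguishes how each xfail is declared. Below is a minimal pytest sketch of the three flavors, illustrative only: it assumes a hypothetical on_sm12x() helper in place of the suite's get_compute_capability(...) checks, and the test names and reasons are made up, not taken from the flashinfer suite.

import pytest
import torch


def on_sm12x() -> bool:
    # Hypothetical stand-in for the conditions recorded above, e.g.
    # get_compute_capability(torch.device(device='cuda'))[0] == 12.
    return torch.cuda.is_available() and torch.cuda.get_device_capability()[0] == 12


# "decorator": the marker is applied statically, optionally gated on a condition.
@pytest.mark.xfail(on_sm12x(), reason="Expected failure for SM120/121 for now")
def test_decorator_style():
    ...


# "runtime": the test body calls pytest.xfail(...) imperatively once it hits an
# unsupported configuration. A reason recorded literally as "str(e)" suggests a
# call like pytest.xfail(str(e)) inside an except block, where the scanner
# captured the source expression rather than its runtime value.
def test_runtime_style():
    pytest.xfail("temporarily disabled until the underlying issue is fixed")


# "parameter": the marker is attached to a single parametrized case; with
# strict=False an unexpected pass is reported as XPASS instead of a failure.
@pytest.mark.parametrize(
    "seq_len",
    [
        256,
        pytest.param(
            514,
            marks=pytest.mark.xfail(
                reason="seq_len=514 is known to fail occasionally", strict=False
            ),
        ),
    ],
)
def test_parameter_style(seq_len):
    ...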
