
Inductor tests segfault on 24.04 + Kobuk #5356

@kwasd

Description

Describe the bug

https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/18683016420/job/53268367779

The following tests failed consistently:

- test/inductor/test_max_autotune.py::TestMaxAutotune::test_max_autotune_addmm_persistent_tma_a_transposed_False_b_transposed_False_dynamic_True_tma_store_False
- test/inductor/test_max_autotune.py::TestMaxAutotune::test_max_autotune_addmm_persistent_tma_a_transposed_True_b_transposed_False_dynamic_False_tma_store_False
- test/inductor/test_max_autotune.py::TestMaxAutotune::test_max_autotune_addmm_persistent_tma_a_transposed_True_b_transposed_True_dynamic_False_tma_store_False
- test/inductor/test_max_autotune.py::TestMaxAutotune::test_max_autotune_regular_mm_persistent_tma_a_transposed_False_b_transposed_False_dynamic_True_tma_store_False
- test/inductor/test_max_autotune.py::TestMaxAutotune::test_max_autotune_regular_mm_persistent_tma_strided_a_transposed_False_b_transposed_False_dynamic_True
- test/inductor/test_max_autotune.py::TestPrologueFusion::test_preserves_zero_analysis

inductor/test_max_autotune.py::TestMaxAutotune::test_max_autotune_addmm_persistent_tma_a_transposed_True_b_transposed_False_dynamic_False_tma_store_False Fatal Python error: Segmentation fault

Thread 0x00007ff4d9f516c0 (most recent call first):
  File "/opt/hostedtoolcache/Python/3.10.19/x64/lib/python3.10/threading.py", line 324 in wait
  File "/opt/hostedtoolcache/Python/3.10.19/x64/lib/python3.10/threading.py", line 607 in wait
  File "/opt/hostedtoolcache/Python/3.10.19/x64/lib/python3.10/site-packages/tqdm/_monitor.py", line 60 in run
  File "/opt/hostedtoolcache/Python/3.10.19/x64/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
  File "/opt/hostedtoolcache/Python/3.10.19/x64/lib/python3.10/threading.py", line 973 in _bootstrap

Environment details

TIMESTAMP=20251021154824
JOB_NAME=
GITHUB_RUN_ID=18683016420
GITHUB_RUN_NUMBER=2049
GITHUB_RUN_ATTEMPT=1
PYTHON_VERSION=3.10
PYTORCH_REPO=pytorch/pytorch
PYTORCH_COMMIT_ID=37d57ac9cb7f538b812cf1d9851b55b46213fe15
TRITON_REPO=intel/intel-xpu-backend-for-triton
TRITON_COMMIT_ID=63e9873e0fe85f5aeb1fc8583912aa3df9cec568
TORCHVISION_COMMIT_ID=
LIBIGC1_VERSION=2.18.5-1188
LEVEL_ZERO_VERSION=1.24.1-1~24.04
GPU_DEVICE=Intel(R) Data Center GPU Max 1100
AGAMA_VERSION=1188
