1 change: 1 addition & 0 deletions sycl/test-e2e/Basic/buffer/reinterpret.cpp
@@ -2,6 +2,7 @@
// RUN: %{run} %t.out
//
// XFAIL: level_zero&&gpu
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14430

//==---------- reinterpret.cpp --- SYCL buffer reinterpret basic test ------==//
//
2 changes: 2 additions & 0 deletions sycl/test-e2e/Basic/queue/queue.cpp
@@ -2,6 +2,8 @@
// RUN: %{run} %t.out
//
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/16197

//==--------------- queue.cpp - SYCL queue test ----------------------------==//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
1 change: 1 addition & 0 deletions sycl/test-e2e/Basic/queue/release.cpp
@@ -2,6 +2,7 @@
// RUN: env SYCL_UR_TRACE=2 %{run} %t.out | FileCheck %s %if !windows %{--check-prefixes=CHECK-RELEASE%}
//
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/16197
Contributor: Shouldn't we remove XFAIL instead?

Contributor Author: Yes, this one slipped through the cracks. Fixed it now!
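
Presumably "fixed" here means dropping the stale hip_nvidia XFAIL rather than adding a tracker; a sketch of what the test header would then look like (not the actual final file contents):

// RUN: env SYCL_UR_TRACE=2 %{run} %t.out | FileCheck %s %if !windows %{--check-prefixes=CHECK-RELEASE%}
//
// (no XFAIL / XFAIL-TRACKER lines for hip_nvidia)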


#include <sycl/detail/core.hpp>
int main() {
1 change: 1 addition & 0 deletions sycl/test-e2e/Basic/span.cpp
@@ -3,6 +3,7 @@
//
// Fails to release USM pointer on HIP for NVIDIA
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14404
Contributor: Is this correct? The issue is about AMD HIP, not NVidia.

Contributor Author: Thanks for the catch, fixed it!

Contributor: I don't see how this issue is useful, though. If the HW is really unsupported, it should be a REQUIRES: !arch-<something> or something like that.

Or, going deeper, I don't see the value in just adding a bunch of formal links if they don't become actionable.

+ @AlexeySachkov

Contributor: I tend to agree with Andrei here. If we don't support this configuration (and AFAIK, we don't), then there is no reason for us to maintain those XFAILs; we can just drop them.
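
For illustration, the two directive styles being weighed above look like this in lit syntax (a sketch only; the arch feature placeholder is copied from the comment above, not from this PR):

// Style used by this PR: keep the expected failure and link a tracker.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14404
//
// Alternative raised above: if the configuration is simply unsupported,
// exclude it via a negative REQUIRES instead of maintaining an XFAIL.
// REQUIRES: !arch-<something>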

// REQUIRES: aspect-usm_shared_allocations
#include <numeric>

2 changes: 2 additions & 0 deletions sycl/test-e2e/Basic/stream/auto_flush.cpp
@@ -2,6 +2,8 @@
// RUN: %{run} %t.out %if !gpu || linux %{ | FileCheck %s %}
//
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/16198

//==-------------- copy.cpp - SYCL stream obect auto flushing test ---------==//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
2 changes: 2 additions & 0 deletions sycl/test-e2e/DeprecatedFeatures/queue_old_interop.cpp
@@ -4,6 +4,8 @@
// hip_nvidia has problems constructing queues due to `No device of requested
// type available`.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/16199

//==-------- queue_old_interop.cpp - SYCL queue OpenCL interop test --------==//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
1 change: 1 addition & 0 deletions sycl/test-e2e/DeviceCodeSplit/split-per-kernel.cpp
@@ -3,6 +3,7 @@
// RUN: %{run} %t.out
//
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/16201

#include <sycl/detail/core.hpp>
#include <sycl/kernel_bundle.hpp>
1 change: 1 addition & 0 deletions sycl/test-e2e/DeviceCodeSplit/split-per-source-main.cpp
@@ -3,6 +3,7 @@
// RUN: %{run} %t.out
//
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/16201

#include "Inputs/split-per-source.h"

2 changes: 2 additions & 0 deletions sycl/test-e2e/GroupAlgorithm/root_group.cpp
@@ -1,5 +1,7 @@
// Fails with opencl non-cpu, enable when fixed.
// XFAIL: (opencl && !cpu && !accelerator)
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14641

// RUN: %{build} -I . -o %t.out %if any-device-is-cuda %{ -Xsycl-target-backend=nvptx64-nvidia-cuda --cuda-gpu-arch=sm_70 %}
// RUN: %{run} %t.out

1 change: 1 addition & 0 deletions sycl/test-e2e/GroupLocalMemory/group_local_memory.cpp
@@ -2,6 +2,7 @@
// RUN: %{run} %t.out
//
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/16204

#include <sycl/detail/core.hpp>

1 change: 1 addition & 0 deletions sycl/test-e2e/GroupLocalMemory/no_early_opt.cpp
@@ -2,6 +2,7 @@
// RUN: %{run} %t.out
//
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/16204

// The test checks that multiple calls to the same template instantiation of a
// group local memory function result in separate allocations, even with device
@@ -1,6 +1,7 @@
// TODO: Passing/returning structures via invoke_simd() API is not implemented
// in GPU driver yet. Enable the test when GPU RT supports it.
// XFAIL: gpu && run-mode
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14543
//
// RUN: %{build} -DIMPL_SUBGROUP -fno-sycl-device-code-split-esimd -Xclang -fsycl-allow-func-ptr -o %t.out
// RUN: env IGC_VCSaveStackCallLinkage=1 IGC_VCDirectCallsOnly=1 %{run} %t.out
1 change: 1 addition & 0 deletions sycl/test-e2e/InvokeSimd/Feature/invoke_simd_struct.cpp
@@ -1,6 +1,7 @@
// TODO: Passing/returning structures via invoke_simd() API is not implemented
// in GPU driver yet. Enable the test when GPU RT supports it.
// XFAIL: gpu, run-mode
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14543
//
// RUN: %{build} -fno-sycl-device-code-split-esimd -Xclang -fsycl-allow-func-ptr -o %t.out
// RUN: env IGC_VCSaveStackCallLinkage=1 IGC_VCDirectCallsOnly=1 %{run} %t.out
1 change: 1 addition & 0 deletions sycl/test-e2e/InvokeSimd/Spec/ImplicitSubgroup/tuple.cpp
@@ -1,5 +1,6 @@
// TODO: enable when Jira ticket resolved
// XFAIL: *
// XFAIL-TRACKER: https://jira.devtools.intel.com/browse/GSD-4509
//
// Check that full compilation works:
// RUN: %clangxx -DIMPL_SUBGROUP -fsycl -fno-sycl-device-code-split-esimd -Xclang -fsycl-allow-func-ptr %S/../tuple.cpp -o %t.out
@@ -1,5 +1,6 @@
// TODO: enable when Jira ticket resolved
// XFAIL: *
// XFAIL-TRACKER: https://jira.devtools.intel.com/browse/GSD-4509
//
// Check that full compilation works:
// RUN: %clangxx -DIMPL_SUBGROUP -fsycl -fno-sycl-device-code-split-esimd -Xclang -fsycl-allow-func-ptr %S/../tuple_return.cpp -o %t.out
@@ -1,5 +1,6 @@
// TODO: enable when Jira ticket resolved
// XFAIL: *
// XFAIL-TRACKER: https://jira.devtools.intel.com/browse/GSD-4509
//
// Check that full compilation works:
// RUN: %clangxx -DIMPL_SUBGROUP -fsycl -fno-sycl-device-code-split-esimd -Xclang -fsycl-allow-func-ptr %S/../tuple_vadd.cpp -o %t.out
1 change: 1 addition & 0 deletions sycl/test-e2e/InvokeSimd/Spec/tuple.cpp
@@ -1,5 +1,6 @@
// TODO: enable when Jira ticket resolved
// XFAIL: *
// XFAIL-TRACKER: https://jira.devtools.intel.com/browse/GSD-4509
//
// Check that full compilation works:
// RUN: %{build} -fno-sycl-device-code-split-esimd -Xclang -fsycl-allow-func-ptr -o %t.out
1 change: 1 addition & 0 deletions sycl/test-e2e/InvokeSimd/Spec/tuple_return.cpp
@@ -1,5 +1,6 @@
// TODO: enable when Jira ticket resolved
// XFAIL: *
// XFAIL-TRACKER: https://jira.devtools.intel.com/browse/GSD-4509
//
// Check that full compilation works:
// RUN: %{build} -fno-sycl-device-code-split-esimd -Xclang -fsycl-allow-func-ptr -o %t.out
1 change: 1 addition & 0 deletions sycl/test-e2e/InvokeSimd/Spec/tuple_vadd.cpp
@@ -1,5 +1,6 @@
// TODO: enable when Jira ticket resolved
// XFAIL: *
// XFAIL-TRACKER: https://jira.devtools.intel.com/browse/GSD-4509
//
// Check that full compilation works:
// RUN: %{build} -fno-sycl-device-code-split-esimd -Xclang -fsycl-allow-func-ptr -o %t.out
@@ -3,6 +3,7 @@

// Group algorithms are not supported on NVidia.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// This test only checks that the method queue::parallel_for() accepting
// reduction, can be properly translated into queue::submit + parallel_for().
2 changes: 1 addition & 1 deletion sycl/test-e2e/Reduction/reduction_nd_conditional.cpp
@@ -5,7 +5,7 @@
// parallel_for with reduction requires work group size not bigger than 1` on
// Nvidia.
// XFAIL: hip_nvidia

// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973
// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows

1 change: 1 addition & 0 deletions sycl/test-e2e/Reduction/reduction_nd_dw.cpp
@@ -3,6 +3,7 @@
//
// Group algorithms are not supported on Nvidia.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows
2 changes: 1 addition & 1 deletion sycl/test-e2e/Reduction/reduction_nd_ext_double.cpp
@@ -6,7 +6,7 @@
// work group size not bigger than 1` on Nvidia.

// XFAIL: hip_nvidia

// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973
// This test performs basic checks of parallel_for(nd_range, reduction, func)
// used with 'double' type.

1 change: 1 addition & 0 deletions sycl/test-e2e/Reduction/reduction_nd_ext_half.cpp
@@ -6,6 +6,7 @@
// `The implementation handling parallel_for with reduction requires
// work group size not bigger than 1`.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows
1 change: 1 addition & 0 deletions sycl/test-e2e/Reduction/reduction_nd_queue_shortcut.cpp
@@ -3,6 +3,7 @@

// Group algorithms are not supported on NVidia.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows
1 change: 1 addition & 0 deletions sycl/test-e2e/Reduction/reduction_nd_rw.cpp
@@ -3,6 +3,7 @@
//
// `Group algorithms are not supported on host device.` on Nvidia.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows
@@ -3,6 +3,7 @@

// Group algorithms are not supported on NVidia.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows
1 change: 1 addition & 0 deletions sycl/test-e2e/Reduction/reduction_range_usm_dw.cpp
@@ -4,6 +4,7 @@
// Error message `Group algorithms are not
// supported on host device.` on Nvidia.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows
1 change: 1 addition & 0 deletions sycl/test-e2e/Reduction/reduction_span_pack.cpp
@@ -3,6 +3,7 @@
//
// `Group algorithms are not supported on host device.` on Nvidia.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows
1 change: 1 addition & 0 deletions sycl/test-e2e/Reduction/reduction_usm.cpp
@@ -3,6 +3,7 @@
//
// `Group algorithms are not supported on host device.` on Nvidia.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows
1 change: 1 addition & 0 deletions sycl/test-e2e/Reduction/reduction_usm_dw.cpp
@@ -3,6 +3,7 @@

// `Group algorithms are not supported on host device` on Nvidia.
// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/14973

// Windows doesn't yet have full shutdown().
// UNSUPPORTED: ze_debug && windows
@@ -51,7 +51,7 @@
// tests to match the required format and in that case you should just update
// (i.e. reduce) the number and the list below.
//
// NUMBER-OF-XFAIL-WITHOUT-TRACKER: 77
// NUMBER-OF-XFAIL-WITHOUT-TRACKER: 46
//
// List of improperly XFAIL-ed tests.
// Remove the CHECK once the test has been properly XFAIL-ed.
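
In other words, a test stops being counted here once every XFAIL is immediately followed by a tracker line, as in the additions throughout this PR, e.g.:

// XFAIL: hip_nvidia
// XFAIL-TRACKER: https://github.com/intel/llvm/issues/16197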