Conversation

@vchuravy (Member) commented Oct 15, 2025

@vchuravy changed the title from "Use ParallelTestRunner with a custom TestRecord" to "Use ParallelTestRunner" on Oct 16, 2025
codecov bot commented Oct 16, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 78.83%. Comparing base (53f450a) to head (3ace4c7).

Additional details and impacted files
@@             Coverage Diff             @@
##           master     #390       +/-   ##
===========================================
+ Coverage   34.49%   78.83%   +44.33%     
===========================================
  Files          11       11               
  Lines         629      671       +42     
===========================================
+ Hits          217      529      +312     
+ Misses        412      142      -270     


@maleadt (Member) commented Oct 16, 2025

An alternative approach, which I considered in order to get more visibility into which platform failed:

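# NB: this sketch assumes the usual runtests.jl preamble, e.g. something like
# `using Test, Preferences, ParallelTestRunner` and `using OpenCL` (so that `cl`,
# `load_preference`, and `runtests` are in scope), which is omitted here.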
# for each platform, come up with a short and friendly name.
# note that this name should also work with `cl.platform!`
platform_names = Dict()
for platform in cl.platforms()
    short_name = if occursin("Intel", platform.name)
        "intel"
    elseif occursin("NVIDIA", platform.name)
        "nvidia"
    elseif occursin("AMD", platform.name) || occursin("Advanced Micro Devices", platform.vendor)
        "amd"
    elseif occursin("Apple", platform.vendor)
        "apple"
    elseif occursin("Portable Computing Language", platform.name)
        "pocl"
    else
        lowercase(replace(platform.name, r"\W+" => "_"))
    end
    if haskey(platform_names, short_name)
        @warn "Multiple OpenCL platforms with the same short name '$short_name'. " *
              "Using the first one found: $(platform_names[short_name].name). " *
              "Ignoring: $(platform.name)"
        continue
    end
    platform_names[short_name] = platform
end

# for each device, determine a prefix and see if the device can execute IL
target_devices = Dict{String, Tuple{String, Int, Bool}}()
for (pname, platform) in platform_names
    cl.platform!(platform)

    for (i, device) in enumerate(cl.devices(platform))
        cl.device!(device)
        il = "cl_khr_il_program" in device.extensions
        dname = if length(cl.devices(platform)) == 1
            pname
        else
            "$pname$i"
        end
        target_devices[dname] = (pname, i, il)
    end
end

# discover tests
tests = ParallelTestRunner.find_tests(@__DIR__)
const GPUArraysTestSuite = let
    mod = @eval module $(gensym())
        using ..Test
        import GPUArrays
        gpuarrays = pathof(GPUArrays)
        gpuarrays_root = dirname(dirname(gpuarrays))
        include(joinpath(gpuarrays_root, "test", "testsuite.jl"))
    end
    mod.TestSuite
end
for name in keys(GPUArraysTestSuite.tests)
    test = "gpuarrays/$name"
    tests[test] = :(GPUArraysTestSuite.tests[$name](CLArray))
end

# transform test expressions to run on the appropriate device
custom_tests = Dict{String, Expr}()
for (test, expr) in tests, (dname, (pname, devidx, il)) in target_devices
    # some tests require native execution capabilities
    requires_il = test in ["atomics", "execution", "intrinsics", "kernelabstractions",
                           "statistics", "linalg"] || startswith(test, "gpuarrays/")
    if requires_il && !il
        continue
    end

    test_name = "$dname/$test"
    custom_tests[test_name] = quote
        cl.platform!($pname)
        cl.device!(cl.devices(cl.platform())[$devidx])
        $expr
    end
end

function test_filter(test)
    if load_preference(OpenCL, "default_memory_backend") == "svm" &&
       test == "gpuarrays/indexing scalar"
        # GPUArrays' scalar indexing tests assume that indexing is not supported
        return false
    end
    return true
end

const init_code = quote
    # the same
end


runtests(OpenCL, ARGS; discover_tests = false, custom_tests, test_filter, init_code)

But if @vchuravy figures out a way to preserve the testset failures, we may still want to consider this approach here.

@vchuravy (Member, Author) commented:

Test Summary:                                    |  Pass  Fail  Error  Broken  Total      Time
  Overall                                        | 12042     3      2       7  12054  12m39.2s
    gpuarrays/indexing scalar                    |   394     3      2            399     44.5s
      $(device.name)                             |   394     3      2            399     44.4s
        errors and warnings                      |     3     3                     6      1.3s

So the WorkerTestSet is working; I am just failing to interpolate the device name into the testset name.
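
For reference, a minimal sketch of splicing a value into a quoted @testset name (using a hypothetical device_name placeholder rather than the actual WorkerTestSet plumbing): when the value is interpolated with $ while the expression is built, the evaluated testset reports the real name instead of the literal $(device.name).

using Test

device_name = "pocl"             # hypothetical stand-in for device.name
ex = quote
    @testset $device_name begin  # splices the string into the quoted testset name
        @test 1 + 1 == 2
    end
end
eval(ex)                         # the summary now shows "pocl" as the testset name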

@maleadt (Member) commented Oct 20, 2025

Lots of Float16-related failures here. @simeonschaub Could this be due to the upgrade to PoCL 7.1? I've tried restricting to 7.0 in the latest commit here.
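
For context, a sketch of one way to restrict the version from the test environment, assuming the PoCL build comes from the pocl_jll package (the commit may instead use a [compat] bound, which is more permanent):

# Hypothetical: request a 7.0 build of pocl_jll in the active (test) environment.
using Pkg
Pkg.add(name = "pocl_jll", version = "7.0")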

@simeonschaub (Member) commented:

Yes, see #312 (comment): the jll builds are missing most of the math intrinsics. Restricting the version sounds OK for now, though we might want to instead just disable Float16 testing for the jll tests.
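
If we go the latter route, it could hook into the test_filter from the snippet above; a rough sketch, where the "pocl" test-name prefix and the Float16 name matching are assumptions:

# Hypothetical: skip Float16-heavy tests on PoCL devices instead of pinning the jll.
function test_filter(test)
    if startswith(test, "pocl") && occursin(r"float16"i, test)
        return false
    end
    return true
end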

@maleadt (Member) commented Oct 20, 2025

Windows is problematic again.

@maleadt (Member) commented Oct 22, 2025

Windows failures are #393
