
Commit 104aae0

Fix CudaOffloadLUFactorization to use @get_cacheval macro
When algorithms are part of the default solver system, they must use the @get_cacheval macro to properly retrieve cached values from the unified cache structure. Updated CudaOffloadLUFactorization to follow this pattern. BLISLUFactorization and MetalLUFactorization were already using the correct pattern.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
1 parent 8fd2724 commit 104aae0

File tree: 1 file changed (+3 -1)

ext/LinearSolveCUDAExt.jl: 3 additions & 1 deletion
@@ -38,11 +38,13 @@ end
 function SciMLBase.solve!(cache::LinearSolve.LinearCache, alg::CudaOffloadLUFactorization;
     kwargs...)
     if cache.isfresh
+        cacheval = LinearSolve.@get_cacheval(cache, :CudaOffloadLUFactorization)
         fact = lu(CUDA.CuArray(cache.A))
         cache.cacheval = fact
         cache.isfresh = false
     end
-    y = Array(ldiv!(CUDA.CuArray(cache.u), cache.cacheval, CUDA.CuArray(cache.b)))
+    fact = LinearSolve.@get_cacheval(cache, :CudaOffloadLUFactorization)
+    y = Array(ldiv!(CUDA.CuArray(cache.u), fact, CUDA.CuArray(cache.b)))
     cache.u .= y
     SciMLBase.build_linear_solution(alg, y, nothing, cache)
 end
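For context, here is a rough sketch of the idea behind the @get_cacheval pattern, not LinearSolve.jl's actual implementation: when an algorithm is used standalone, cache.cacheval holds its factorization directly, but inside the default solver the cacheval is a bundle with one slot per algorithm, so the algorithm's own slot has to be selected by name. The FakeDefaultCacheval type and get_cacheval function below are hypothetical stand-ins for illustration only.

# Rough sketch of the idea behind LinearSolve.@get_cacheval, not the package's
# actual implementation.

# Hypothetical stand-in for the default solver's per-algorithm cacheval bundle.
struct FakeDefaultCacheval
    CudaOffloadLUFactorization
    LUFactorization
end

# Standalone use: the cached value is the factorization itself.
get_cacheval(cacheval, algsym::Symbol) = cacheval

# Default-solver use: pull this algorithm's own slot out of the bundle.
get_cacheval(cacheval::FakeDefaultCacheval, algsym::Symbol) = getfield(cacheval, algsym)

# Standalone: the factorization comes back unchanged.
@assert get_cacheval("cuda_lu_fact", :CudaOffloadLUFactorization) == "cuda_lu_fact"

# Default solver: the correct slot is retrieved by algorithm name.
bundle = FakeDefaultCacheval("cuda_lu_fact", "cpu_lu_fact")
@assert get_cacheval(bundle, :CudaOffloadLUFactorization) == "cuda_lu_fact"

Reading cache.cacheval directly in the default-solver case would return the whole bundle rather than the LU factorization, which is the bug this commit fixes.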
