
Commit d1f7270

Merge pull request #391 from avik-pal/ap/patch
Patch CholeskyFactorization for Sparse Arrays
2 parents: fb97ea4 + 0f197b0

File tree: 2 files changed, +19 -14 lines

  src/factorization.jl
  test/sparse_vector.jl

src/factorization.jl (11 additions & 11 deletions)

Apart from the `do_factorization` fix in the third hunk below, the hunks in this file only strip trailing whitespace, which is why their - and + lines look identical.
@@ -29,7 +29,7 @@ end
 `LUFactorization(pivot=LinearAlgebra.RowMaximum())`

 Julia's built in `lu`. Equivalent to calling `lu!(A)`
-
+
 * On dense matrices, this uses the current BLAS implementation of the user's computer,
   which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
   system.
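
For context, every factorization type touched below plugs into the same LinearProblem interface. A minimal sketch of the dense LU path (the 4x4 system is made up for illustration):

using LinearSolve, LinearAlgebra

A = rand(4, 4) + 4I    # shift the spectrum so the matrix is well-conditioned
b = rand(4)
sol = solve(LinearProblem(A, b), LUFactorization())
sol.u ≈ A \ b          # true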
@@ -135,7 +135,7 @@ end
 `QRFactorization(pivot=LinearAlgebra.NoPivot(),blocksize=16)`

 Julia's built in `qr`. Equivalent to calling `qr!(A)`.
-
+
 * On dense matrices, this uses the current BLAS implementation of the user's computer
   which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
   system.
@@ -242,7 +242,7 @@ end
 function do_factorization(alg::CholeskyFactorization, A, b, u)
     A = convert(AbstractMatrix, A)
     if A isa SparseMatrixCSC
-        fact = cholesky!(A; shift = alg.shift, check = false, perm = alg.perm)
+        fact = cholesky(A; shift = alg.shift, check = false, perm = alg.perm)
     elseif alg.pivot === Val(false) || alg.pivot === NoPivot()
         fact = cholesky!(A, alg.pivot; check = false)
     else
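
This is the substantive change in the commit. The likely failure mode: SparseArrays provides an in-place `cholesky!` only for updating an existing CHOLMOD factor, not for a bare `SparseMatrixCSC` with these keyword arguments, so the old call errored on sparse input; the out-of-place `cholesky` dispatches to CHOLMOD and accepts `shift`, `check`, and `perm`. A minimal sketch of the repaired path (the 2x2 system is made up):

using LinearAlgebra, SparseArrays

A = sparse([4.0 1.0; 1.0 4.0])   # symmetric positive definite
b = [1.0, 2.0]

# Out-of-place cholesky goes through CHOLMOD and accepts these keywords:
F = cholesky(A; shift = 0.0, check = false)
F \ b ≈ Array(A) \ b             # true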
@@ -346,7 +346,7 @@ end
 `SVDFactorization(full=false,alg=LinearAlgebra.DivideAndConquer())`

 Julia's built in `svd`. Equivalent to `svd!(A)`.
-
+
 * On dense matrices, this uses the current BLAS implementation of the user's computer
   which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
   system.
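
Since QR and SVD share the interface shown above, switching to a more robust (and more expensive) factorization is a one-argument change. A sketch, with an ill-conditioned system made up for illustration:

using LinearSolve

A = [1.0 1.0; 1.0 1.0+1e-10]     # nearly singular
b = [1.0, 2.0]
prob = LinearProblem(A, b)

solve(prob, QRFactorization())    # more stable than LU on hard problems
solve(prob, SVDFactorization())   # most robust, typically slowest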
@@ -444,7 +444,7 @@ end
 `GenericFactorization(;fact_alg=LinearAlgebra.factorize)`: Constructs a linear solver from a generic
 factorization algorithm `fact_alg` which complies with the Base.LinearAlgebra
 factorization API. Quoting from Base:
-
+
 * If `A` is upper or lower triangular (or diagonal), no factorization of `A` is
   required. The system is then solved with either forward or backward substitution.
   For non-triangular square matrices, an LU factorization is used.
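
Because GenericFactorization only requires a callable that follows the `factorize` API, any such function should slot in. A sketch, assuming `fact_alg` is applied directly to the matrix:

using LinearSolve, LinearAlgebra

A = rand(4, 4); A = A' * A + 4I   # symmetric positive definite
b = rand(4)

solve(LinearProblem(A, b), GenericFactorization())                       # LinearAlgebra.factorize picks a method
solve(LinearProblem(A, b), GenericFactorization(; fact_alg = cholesky))  # swap in any factorize-like function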
@@ -666,7 +666,7 @@ end
 """
 `UMFPACKFactorization(;reuse_symbolic=true, check_pattern=true)`

-A fast sparse multithreaded LU-factorization which specializes on sparsity
+A fast sparse multithreaded LU-factorization which specializes on sparsity
 patterns with “more structure”.

 !!! note
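
For comparison with the CHOLMOD-based solvers in this commit, a sketch of the sparse LU path (the random system is made up; the diagonal shift keeps it comfortably nonsingular):

using LinearSolve, LinearAlgebra, SparseArrays

A = sprand(100, 100, 0.05) + 10.0I
b = rand(100)

# reuse_symbolic = true keeps the symbolic analysis when refactoring a
# matrix with the same sparsity pattern:
sol = solve(LinearProblem(A, b), UMFPACKFactorization(; reuse_symbolic = true))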
@@ -850,7 +850,7 @@ Only supports sparse matrices.

 ## Keyword Arguments

-* shift: the shift argument in CHOLMOD.
+* shift: the shift argument in CHOLMOD.
 * perm: the perm argument in CHOLMOD
 """
 Base.@kwdef struct CHOLMODFactorization{T} <: AbstractFactorization
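
A sketch of the keyword pass-through, assuming the `@kwdef` struct exposes `shift` (a diagonal shift applied inside CHOLMOD before factoring) as a constructor keyword; the small SPD matrix is made up:

using LinearSolve, SparseArrays

A = sparse([4.0 1.0 0.0; 1.0 4.0 1.0; 0.0 1.0 4.0])  # sparse SPD
b = [1.0, 2.0, 3.0]

sol = solve(LinearProblem(A, b), CHOLMODFactorization(; shift = 0.0))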
@@ -916,12 +916,12 @@ end
 ## RFLUFactorization

 """
-`RFLUFactorization()`
+`RFLUFactorization()`

 A fast pure Julia LU-factorization implementation
 using RecursiveFactorization.jl. This is by far the fastest LU-factorization
 implementation, usually outperforming OpenBLAS and MKL for smaller matrices
-(<500x500), but currently optimized only for Base `Array` with `Float32` or `Float64`.
+(<500x500), but currently optimized only for Base `Array` with `Float32` or `Float64`.
 Additional optimization for complex matrices is in the works.
 """
 struct RFLUFactorization{P, T} <: AbstractFactorization
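
A sketch of the pure-Julia path on the matrix sizes the docstring targets (Base `Array` with `Float64`, below the ~500x500 crossover):

using LinearSolve

A = rand(200, 200)
b = rand(200)
sol = solve(LinearProblem(A, b), RFLUFactorization())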
@@ -1179,7 +1179,7 @@ end
 # But I'm not sure it makes sense as a GenericFactorization
 # since it just uses `LAPACK.getrf!`.
 """
-`FastLUFactorization()`
+`FastLUFactorization()`

 The FastLapackInterface.jl version of the LU factorization. Notably,
 this version does not allow for choice of pivoting method.
@@ -1210,7 +1210,7 @@ function SciMLBase.solve!(cache::LinearCache, alg::FastLUFactorization; kwargs..
 end

 """
-`FastQRFactorization()`
+`FastQRFactorization()`

 The FastLapackInterface.jl version of the QR factorization.
 """

test/sparse_vector.jl (8 additions & 3 deletions)
@@ -2,7 +2,7 @@ using SparseArrays
 using LinearSolve
 using LinearAlgebra

-# Constructing sparse array
+# Constructing sparse array
 function hess_sparse(x::Vector{T}) where {T}
     return [
         -sin(x[1] + x[2]) + 1,
@@ -37,7 +37,12 @@ n = length(x0)
 hess_mat = sparse(rowval, colval, hess_sparse(x0), n, n)
 grad_vec = sparsevec(gradinds, grad_sparse(x0), n)

-# # Converting grad_vec to dense succeeds in solving
+# Converting grad_vec to dense succeeds in solving
 prob = LinearProblem(hess_mat, grad_vec)
-linsolve = init(prob)
+linsolve = init(prob);
 @test solve!(linsolve).u ≈ hess_mat \ Array(grad_vec)
+
+H = hess_mat' * hess_mat
+prob = LinearProblem(H, hess_mat' * grad_vec)
+linsolve = init(prob, CholeskyFactorization())
+VERSION >= v"1.8" && @test solve!(linsolve).u ≈ H \ Array(hess_mat' * grad_vec)
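
The added test drives the patched sparse branch: `H = hess_mat' * hess_mat` is symmetric positive semidefinite by construction (x'Hx = ||hess_mat * x||^2 >= 0), which is the structure `CholeskyFactorization` needs, and the `VERSION` gate suggests this path requires Julia >= 1.8. A standalone sketch of the same idea, with a made-up SPD tridiagonal system:

using LinearSolve, LinearAlgebra, SparseArrays, Test

H = spdiagm(-1 => fill(-1.0, 9), 0 => fill(2.0, 10), 1 => fill(-1.0, 9))  # 1-D Laplacian, SPD
b = rand(10)

linsolve = init(LinearProblem(H, b), CholeskyFactorization())
@test solve!(linsolve).u ≈ Array(H) \ b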
