Solves for ``Au=b`` in the problem defined by `prob` using the algorithm

## Recommended Methods

The default algorithm `nothing` is good for picking an algorithm that will work,
but one may need to change this to receive more performance or precision. If
more precision is necessary, `QRFactorization()` and `SVDFactorization()` are
the best choices, with SVD being the slowest but most precise.
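
For concreteness, here is a minimal sketch (with an arbitrary small dense system) of switching from the default choice to the more precise factorizations named above:

```julia
using LinearSolve

A = rand(4, 4)
b = rand(4)
prob = LinearProblem(A, b)

sol_default = solve(prob)                 # alg = nothing: let LinearSolve pick
sol_qr = solve(prob, QRFactorization())   # more precise on ill-conditioned systems
sol_svd = solve(prob, SVDFactorization()) # slowest but most precise

sol_qr.u                                  # the solution vector u with A*u ≈ b
```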
with CPUs and GPUs, and thus is the generally preferred form for Krylov methods.

Finally, a user can pass a custom function for handling the linear solve using
`LinearSolveFunction()` if existing solvers are not optimally suited for their application.
The interface is detailed [here](#passing-in-a-custom-linear-solver).
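
As a rough sketch of that route, under the assumption that the custom function takes the argument list described in the linked section (consult that section for the authoritative form):

```julia
using LinearSolve, LinearAlgebra

# Custom solver: ignore the preconditioners and solver data and just use `\`.
function my_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; kwargs...)
    u = A \ b
    return u
end

prob = LinearProblem(Diagonal(rand(4)), rand(4))
sol = solve(prob, LinearSolveFunction(my_linsolve))
```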
## Full List of Methods
customized per-package, details given below describe a subset of important array types
(`Matrix`, `SparseMatrixCSC`, `CuMatrix`, etc.); a brief usage sketch follows the list.

- `LUFactorization(pivot=LinearAlgebra.RowMaximum())`: Julia's built-in `lu`.
  - On dense matrices, this uses the current BLAS implementation of the user's computer,
    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
    system.
  - On sparse matrices, this will use UMFPACK from SuiteSparse. Note that this will not
    cache the symbolic factorization.
  - On CuMatrix, it will use a CUDA-accelerated LU from CuSolver.
  - On BandedMatrix and BlockBandedMatrix, it will use a banded LU.
- `QRFactorization(pivot=LinearAlgebra.NoPivot(),blocksize=16)`: Julia's built-in `qr`.
  - On dense matrices, this uses the current BLAS implementation of the user's computer,
    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
    system.
  - On sparse matrices, this will use SPQR from SuiteSparse.
  - On CuMatrix, it will use a CUDA-accelerated QR from CuSolver.
  - On BandedMatrix and BlockBandedMatrix, it will use a banded QR.
- `SVDFactorization(full=false,alg=LinearAlgebra.DivideAndConquer())`: Julia's built-in `svd`.
  - On dense matrices, this uses the current BLAS implementation of the user's computer,
    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
    system.
- `GenericFactorization(fact_alg)`: Constructs a linear solver from a generic
  factorization algorithm `fact_alg` which complies with the Base.LinearAlgebra
  factorization API. Quoting from Base:
  - If `A` is upper or lower triangular (or diagonal), no factorization of `A` is
    required. The system is then solved with either forward or backward substitution.
    For non-triangular square matrices, an LU factorization is used.
    For rectangular `A` the result is the minimum-norm least squares solution computed by a
    pivoted QR factorization of `A` and a rank estimate of `A` based on the R factor.
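
As a hedged sketch of how these choices dispatch on the array type (the variable names and test matrices below are illustrative, not from the original docs):

```julia
using LinearSolve, LinearAlgebra, SparseArrays

b = rand(100)

# Dense Matrix: LU through the active BLAS (OpenBLAS, or MKL after `using MKL`).
A_dense = rand(100, 100)
sol_dense = solve(LinearProblem(A_dense, b), LUFactorization())

# SparseMatrixCSC: the same algorithm choice routes to UMFPACK, per the list above.
A_sparse = sprand(100, 100, 0.05) + 10.0I
sol_sparse = solve(LinearProblem(A_sparse, b), LUFactorization())

# GenericFactorization wraps any Base-compatible factorization, e.g. `cholesky`
# for a symmetric positive definite matrix.
A_spd = Symmetric(A_dense' * A_dense + I)
sol_chol = solve(LinearProblem(A_spd, b), GenericFactorization(cholesky))
```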
LinearSolve.jl provides a wrapper to these routines in a way where an initialized solve
has a non-allocating LU factorization. In theory, this post-initialized solve should always
be faster than the Base.LinearAlgebra version; a short sketch of the pattern follows the list below.

- `FastLUFactorization`, the `FastLapackInterface` version of the LU factorization. Notably,
  this version does not allow for choice of pivoting method.
- `FastQRFactorization(pivot=NoPivot(),blocksize=32)`, the `FastLapackInterface` version of
  the QR factorization.
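
A minimal sketch of the intended pattern, assuming the `FastLapackInterface` wrappers are available in your LinearSolve installation:

```julia
using LinearSolve

A = rand(50, 50)
b = rand(50)
prob = LinearProblem(A, b)

# `init` allocates the LAPACK workspaces once; the subsequent solve reuses them,
# so the LU factorization itself does not allocate.
cache = init(prob, FastLUFactorization())
sol = solve!(cache)
```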
### SuiteSparse.jl

By default, the SuiteSparse.jl algorithms are implemented for efficiency by caching the
symbolic factorization. I.e., if `set_A` is used, it is expected that the new
`A` has the same sparsity pattern as the previous `A`. If this algorithm is to
be used in a context where that assumption does not hold, set `reuse_symbolic=false`
(a short example follows the list below).

- `KLUFactorization(;reuse_symbolic=true)`: A fast sparse LU-factorization which
  specializes on sparsity patterns with “less structure”.
- `UMFPACKFactorization(;reuse_symbolic=true)`: A fast sparse multithreaded
  LU-factorization which specializes on sparsity patterns with “more structure”.
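
For instance, a hedged sketch of opting out of the symbolic-factorization cache (the test matrix here is illustrative):

```julia
using LinearSolve, SparseArrays, LinearAlgebra

A = sprand(200, 200, 0.02) + 5.0I
prob = LinearProblem(A, rand(200))

# If the sparsity pattern of `A` may change between solves, disable the cached
# symbolic factorization.
sol = solve(prob, KLUFactorization(; reuse_symbolic = false))
```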
### Pardiso.jl
### CUDA.jl

Note that `CuArrays` are supported by `GenericFactorization` in the “normal” way.
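For example, a hedged sketch of a GPU-resident solve (assuming CUDA.jl is installed and a working GPU is available; per the dispatch list above, an LU on a `CuMatrix` is handled by CuSolver):

```julia
using CUDA, LinearSolve

# Move a small dense system to the GPU as CuArrays.
A = cu(rand(Float32, 100, 100))
b = cu(rand(Float32, 100))
prob = LinearProblem(A, b)

sol = solve(prob, LUFactorization())   # CUDA-accelerated LU via CuSolver
```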
The following are non-standard GPU factorization routines.
!!! note