`LUFactorization(pivot=LinearAlgebra.RowMaximum())`

Julia's built in `lu`. Equivalent to calling `lu!(A)`

* On dense matrices, this uses the current BLAS implementation of the user's computer,
  which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
  system.

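As a sketch of what "equivalent to calling `lu!(A)`" means, using only the stdlib `lu` (since `lu!` overwrites its argument, a copy is factored here):

```julia
using LinearAlgebra

# Factor A with row-maximum (partial) pivoting, the docstring's default,
# then solve A x = b through the factorization.
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
F = lu(copy(A), RowMaximum())   # lu! would overwrite A in place
x = F \ b
```
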
`QRFactorization(pivot=LinearAlgebra.NoPivot(),blocksize=16)`

Julia's built in `qr`. Equivalent to calling `qr!(A)`.

* On dense matrices, this uses the current BLAS implementation of the user's computer,
  which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
  system.

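A minimal sketch of the equivalent stdlib call: `qr` without column pivoting (the docstring's default) also handles tall, overdetermined systems via least squares:

```julia
using LinearAlgebra

# Factor a tall A; \ on a QR factorization returns the least-squares
# solution (exact here, since this system happens to be consistent).
A = [1.0 0.0; 0.0 1.0; 1.0 1.0]
b = [1.0, 2.0, 3.0]
F = qr(A)      # NoPivot is the default pivoting choice
x = F \ b
```
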
function do_factorization(alg::CholeskyFactorization, A, b, u)
    A = convert(AbstractMatrix, A)
    if A isa SparseMatrixCSC
        fact = cholesky(A; shift = alg.shift, check = false, perm = alg.perm)
    elseif alg.pivot === Val(false) || alg.pivot === NoPivot()
        fact = cholesky!(A, alg.pivot; check = false)
    else

`SVDFactorization(full=false,alg=LinearAlgebra.DivideAndConquer())`

Julia's built in `svd`. Equivalent to `svd!(A)`.

* On dense matrices, this uses the current BLAS implementation of the user's computer,
  which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
  system.

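The constructor's defaults map directly onto the stdlib `svd` keywords; a minimal sketch of the equivalent call:

```julia
using LinearAlgebra

# Thin SVD via the divide-and-conquer LAPACK driver, matching the
# docstring's defaults; U * Diagonal(S) * Vt reconstructs A.
A = [3.0 0.0; 4.0 5.0]
F = svd(A; full = false, alg = LinearAlgebra.DivideAndConquer())
```
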
`GenericFactorization(;fact_alg=LinearAlgebra.factorize)`: Constructs a linear solver from a generic
factorization algorithm `fact_alg` which complies with the Base.LinearAlgebra
factorization API. Quoting from Base:

* If `A` is upper or lower triangular (or diagonal), no factorization of `A` is
  required. The system is then solved with either forward or backward substitution.
  For non-triangular square matrices, an LU factorization is used.

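A small sketch of the Base behavior quoted above: `factorize` detects triangular structure, so no factorization is computed and `\` runs back substitution directly:

```julia
using LinearAlgebra

# A dense Matrix that happens to be upper triangular comes back from
# factorize wrapped as UpperTriangular rather than LU-factored.
A = [2.0 1.0; 0.0 3.0]
F = factorize(A)
x = F \ [4.0, 6.0]
```
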
"""
`UMFPACKFactorization(;reuse_symbolic=true, check_pattern=true)`

A fast sparse multithreaded LU-factorization which specializes on sparsity
patterns with “more structure”.

!!! note

Only supports sparse matrices.

## Keyword Arguments

* shift: the shift argument in CHOLMOD.
* perm: the perm argument in CHOLMOD.
"""
Base.@kwdef struct CHOLMODFactorization{T} <: AbstractFactorization

## RFLUFactorization

"""
`RFLUFactorization()`

A fast pure Julia LU-factorization implementation
using RecursiveFactorization.jl. This is by far the fastest LU-factorization
implementation, usually outperforming OpenBLAS and MKL for smaller matrices
(<500x500), but currently optimized only for Base `Array` with `Float32` or `Float64`.
Additional optimization for complex matrices is in the works.
"""
struct RFLUFactorization{P, T} <: AbstractFactorization

# But I'm not sure it makes sense as a GenericFactorization
# since it just uses `LAPACK.getrf!`.
"""
`FastLUFactorization()`

The FastLapackInterface.jl version of the LU factorization. Notably,
this version does not allow for choice of pivoting method.

end

"""
`FastQRFactorization()`

The FastLapackInterface.jl version of the QR factorization.
"""

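For context, these algorithm types plug into the standard LinearSolve.jl solve interface; a usage sketch, assuming the LinearSolve.jl package (not part of this excerpt) is installed:

```julia
using LinearSolve, LinearAlgebra

# Any factorization type defined in this file, including
# FastLUFactorization, is passed as the algorithm argument to solve.
A = [2.0 1.0; 1.0 3.0]
b = [3.0, 4.0]
prob = LinearProblem(A, b)
sol = solve(prob, FastLUFactorization())
sol.u   # the solution vector
```
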