Commit 10758e8

Improve a few docstrings
1 parent 74edf0b commit 10758e8

File tree

1 file changed: +16 −16 lines


src/factorization.jl

Lines changed: 16 additions & 16 deletions
@@ -30,13 +30,13 @@
 
 Julia's built in `lu`. Equivalent to calling `lu!(A)`
 
-  * On dense matrices, this uses the current BLAS implementation of the user's computer,
-    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
-    system.
-  * On sparse matrices, this will use UMFPACK from SuiteSparse. Note that this will not
-    cache the symbolic factorization.
-  * On CuMatrix, it will use a CUDA-accelerated LU from CuSolver.
-  * On BandedMatrix and BlockBandedMatrix, it will use a banded LU.
+  * On dense matrices, this uses the current BLAS implementation of the user's computer,
+    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
+    system.
+  * On sparse matrices, this will use UMFPACK from SuiteSparse. Note that this will not
+    cache the symbolic factorization.
+  * On CuMatrix, it will use a CUDA-accelerated LU from CuSolver.
+  * On BandedMatrix and BlockBandedMatrix, it will use a banded LU.
 
 ## Positional Arguments
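The docstring in this hunk describes `LUFactorization`, a wrapper over Julia's `lu!` that dispatches to BLAS, UMFPACK, CuSolver, or a banded LU depending on the matrix type. A minimal usage sketch, assuming the `LinearProblem`/`solve` interface of LinearSolve.jl (the package this file belongs to):

```julia
using LinearSolve, LinearAlgebra

# Dense system: the LU dispatches to the active BLAS
# (OpenBLAS by default, MKL after `using MKL`).
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]

prob = LinearProblem(A, b)
sol = solve(prob, LUFactorization())

# The LU-based solution should match the backslash solve.
sol.u ≈ A \ b
```

Passing a `SparseMatrixCSC` for `A` instead would route the same call through UMFPACK, per the docstring above.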
@@ -136,12 +136,12 @@
 
 Julia's built in `qr`. Equivalent to calling `qr!(A)`.
 
-  * On dense matrices, this uses the current BLAS implementation of the user's computer
-    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
-    system.
-  * On sparse matrices, this will use SPQR from SuiteSparse
-  * On CuMatrix, it will use a CUDA-accelerated QR from CuSolver.
-  * On BandedMatrix and BlockBandedMatrix, it will use a banded QR.
+  * On dense matrices, this uses the current BLAS implementation of the user's computer
+    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
+    system.
+  * On sparse matrices, this will use SPQR from SuiteSparse
+  * On CuMatrix, it will use a CUDA-accelerated QR from CuSolver.
+  * On BandedMatrix and BlockBandedMatrix, it will use a banded QR.
 """
 struct QRFactorization{P} <: AbstractFactorization
     pivot::P
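This hunk touches the `QRFactorization` docstring; the struct carries a `pivot` field that parameterizes the factorization. A hedged sketch of its use, again assuming the LinearSolve.jl `LinearProblem`/`solve` API:

```julia
using LinearSolve, LinearAlgebra

# QR is typically a more robust (if slower) choice than LU
# for badly scaled dense systems.
A = [1e-10 1.0; 1.0 1.0]
b = [1.0, 2.0]

prob = LinearProblem(A, b)
sol = solve(prob, QRFactorization())

sol.u ≈ A \ b
```

On a `SparseMatrixCSC` the same call would go through SPQR from SuiteSparse, as the docstring notes.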
@@ -324,9 +324,9 @@
 
 Julia's built in `svd`. Equivalent to `svd!(A)`.
 
-  * On dense matrices, this uses the current BLAS implementation of the user's computer
-    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
-    system.
+  * On dense matrices, this uses the current BLAS implementation of the user's computer
+    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
+    system.
 
 """
 struct SVDFactorization{A} <: AbstractFactorization
     full::Bool
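The final hunk edits the `SVDFactorization` docstring. A minimal sketch under the same assumed LinearSolve.jl interface:

```julia
using LinearSolve, LinearAlgebra

# SVD is the most robust dense factorization for near-singular
# systems, at the highest cost of the three.
A = [2.0 0.0; 0.0 1e-8]
b = [1.0, 1.0]

prob = LinearProblem(A, b)
sol = solve(prob, SVDFactorization())

sol.u ≈ A \ b
```

Choosing `SVDFactorization` over `LUFactorization` or `QRFactorization` trades speed for numerical robustness; per the docstring above, the dense path again runs on the active BLAS.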