
Commit 57203cf

Merge pull request #250 from ArnoStrouwen/LanguageTool
[skip ci] LanguageTool
2 parents e850e1e + 70e944b commit 57203cf

7 files changed: +39, -42 lines changed


docs/src/advanced/developing.md

Lines changed: 5 additions & 5 deletions
@@ -7,7 +7,7 @@ one of two ways:
 2. You can extend LinearSolve.jl's internal mechanisms.

 For developer ease, we highly recommend (2) as that will automatically make the
-caching API work. Thus this is the documentation for how to do that.
+caching API work. Thus, this is the documentation for how to do that.

 ## Developing New Linear Solvers with LinearSolve.jl Primitives

@@ -43,18 +43,18 @@ is what is called at `init` time to create the first `cacheval`. Note that this
 should match the type of the cache later used in `solve` as many algorithms, like
 those in OrdinaryDiffEq.jl, expect type-groundedness in the linear solver definitions.
 While there are cheaper ways to obtain this type for LU factorizations (specifically,
-`ArrayInterfaceCore.lu_instance(A)`), for a demonstration this just performs an
+`ArrayInterfaceCore.lu_instance(A)`), for a demonstration, this just performs an
 LU-factorization to get an `LU{T, Matrix{T}}` which it puts into the `cacheval`
-so its typed for future use.
+so it is typed for future use.

 After the `init_cacheval`, the only thing left to do is to define
 `SciMLBase.solve(cache::LinearCache, alg::MyLUFactorization)`. Many algorithms
-may use a lazy matrix-free representation of the operator `A`. Thus if the
+may use a lazy matrix-free representation of the operator `A`. Thus, if the
 algorithm requires a concrete matrix, like LU-factorization does, the algorithm
 should `convert(AbstractMatrix,cache.A)`. The flag `cache.isfresh` states whether
 `A` has changed since the last `solve`. Since we only need to factorize when
 `A` is new, the factorization part of the algorithm is done in a `if cache.isfresh`.
-`cache = set_cacheval(cache, fact)` puts the new factorization into the cache
+`cache = set_cacheval(cache, fact)` puts the new factorization into the cache,
 so it's updated for future solves. Then `y = ldiv!(cache.u, cache.cacheval, cache.b)`
 performs the solve and a linear solution is returned via
 `SciMLBase.build_linear_solution(alg,y,nothing,cache)`.
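
For context, the workflow this file documents can be sketched roughly as follows. This is a minimal sketch, not the listing from developing.md itself: the `MyLUFactorization` name is illustrative and the exact argument list of `init_cacheval` may differ between LinearSolve.jl versions.

```julia
using LinearAlgebra, SciMLBase, LinearSolve

# Illustrative algorithm type, mirroring the doc's MyLUFactorization example.
struct MyLUFactorization <: SciMLBase.AbstractLinearAlgorithm end

# Create the first `cacheval` at `init` time. Doing a real LU here keeps the cache
# type-grounded; `ArrayInterfaceCore.lu_instance(A)` would be a cheaper alternative.
# (Assumed signature; check the current developing.md for the exact argument list.)
function LinearSolve.init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr,
                                   maxiters, abstol, reltol, verbose)
    lu(convert(AbstractMatrix, A))
end

function SciMLBase.solve(cache::LinearSolve.LinearCache, alg::MyLUFactorization; kwargs...)
    if cache.isfresh  # only refactorize when `A` has changed since the last solve
        fact = lu(convert(AbstractMatrix, cache.A))
        cache = LinearSolve.set_cacheval(cache, fact)
    end
    y = ldiv!(cache.u, cache.cacheval, cache.b)
    SciMLBase.build_linear_solution(alg, y, nothing, cache)
end
```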

docs/src/basics/FAQ.md

Lines changed: 6 additions & 8 deletions
@@ -1,13 +1,11 @@
 # Frequently Asked Questions

-Ask more questions.
-
 ## How is LinearSolve.jl compared to just using normal \, i.e. A\b?

 Check out [this video from JuliaCon 2022](https://www.youtube.com/watch?v=JWI34_w-yYw) which goes
-into detail on how and why LinearSolve.jl is able to be a more general and efficient interface.
+into detail on how and why LinearSolve.jl can be a more general and efficient interface.

-Note that if `\` is good enough for you, great! We still tend to use `\` in the REPL all of the time!
+Note that if `\` is good enough for you, great! We still tend to use `\` in the REPL all the time!
 However, if you're building a package, you may want to consider using LinearSolve.jl for the improved
 efficiency and ability to choose solvers.

@@ -27,16 +25,16 @@ a few ways:
 faster (in a platform and matrix-specific way).
 2. Standard approaches to handling linear solves re-allocate the pivoting vector each time. This leads to GC pauses that
 can slow down calculations. LinearSolve.jl has proper caches for fully preallocated no-GC workflows.
-3. LinearSolve.jl makes a lot of other optimizations, like factorization reuse and symbolic factorization reuse, automatic.
+3. LinearSolve.jl makes many other optimizations, like factorization reuse and symbolic factorization reuse, automatic.
 Many of these optimizations are not even possible from the high-level APIs of things like Python's major libraries and MATLAB.
 4. LinearSolve.jl has a much more extensive set of sparse matrix solvers, which is why you see a major difference (2x-10x) for sparse
 matrices. Which sparse matrix solver between KLU, UMFPACK, Pardiso, etc. is optimal depends a lot on matrix sizes, sparsity patterns,
 and threading overheads. LinearSolve.jl's heuristics handle these kinds of issues.

 ## How do I use IterativeSolvers solvers with a weighted tolerance vector?

-IterativeSolvers.jl computes the norm after the application of the left precondtioner
-`Pl`. Thus in order to use a vector tolerance `weights`, one can mathematically
+IterativeSolvers.jl computes the norm after the application of the left preconditioner
+`Pl`. Thus, in order to use a vector tolerance `weights`, one can mathematically
 hack the system via the following formulation:

 ```@example FAQPrec
@@ -57,7 +55,7 @@ sol = solve(prob,IterativeSolversJL_GMRES(),Pl=Pl,Pr=Pr)
 sol.u
 ```

-If you want to use a "real" preconditioner under the norm `weights`, then one
+If you want to use a real preconditioner under the norm `weights`, then one
 can use `ComposePreconditioner` to apply the preconditioner after the application
 of the weights like as follows:
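
The `FAQPrec` example code itself is not part of this diff. A hedged sketch of the weighted-tolerance trick it describes is below; `LinearSolve.InvPreconditioner` and the exact construction are assumptions based on LinearSolve.jl's preconditioner utilities, so the real FAQ listing may differ.

```julia
using LinearSolve, LinearAlgebra

n = 4
A = rand(n, n); b = rand(n)
weights = rand(n)

# Fold the weights into the left/right preconditioners so the solver's
# internal residual norm becomes the weighted norm (assumed construction).
Pl = LinearSolve.InvPreconditioner(Diagonal(weights))
Pr = Diagonal(weights)

prob = LinearProblem(A, b)
sol = solve(prob, IterativeSolversJL_GMRES(), Pl = Pl, Pr = Pr)
sol.u
```

As the FAQ text notes, `ComposePreconditioner` can then combine a real preconditioner with these weight matrices.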

docs/src/basics/Preconditioners.md

Lines changed: 3 additions & 3 deletions
@@ -31,7 +31,7 @@ A two-sided preconditioned system is of the form:
 P_l^{-1}A P_r^{-1} (P_r u) = P_l^{-1}b
 ```

-By default, if no preconditioner is given the preconditioner is assumed to be
+By default, if no preconditioner is given, the preconditioner is assumed to be
 the identity ``I``.

 ### Using Preconditioners
@@ -99,8 +99,8 @@ The following preconditioners match the interface of LinearSolve.jl.
 - [LimitedLDLFactorizations.lldl](https://github.com/JuliaSmoothOptimizers/LimitedLDLFactorizations.jl):
   A limited-memory LDLᵀ factorization for symmetric matrices. Requires `A` as a
   `SparseMatrixCSC`. Applying `F = lldl(A); F.D .= abs.(F.D)` before usage as a preconditioner
-  makes the preconditioner symmetric postive definite and thus is required for Krylov methods which
+  makes the preconditioner symmetric positive definite and thus is required for Krylov methods which
   are specialized for symmetric linear systems.
 - [RandomizedPreconditioners.NystromPreconditioner](https://github.com/tjdiamandis/RandomizedPreconditioners.jl)
   A randomized sketching method for positive semidefinite matrices `A`. Builds a preconditioner ``P ≈ A + μ*I``
-  for the system ``(A + μ*I)x = b``
+  for the system ``(A + μ*I)x = b``.
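
A minimal sketch of passing a preconditioner to `solve`, using an assumed simple diagonal (Jacobi-style) left preconditioner rather than one of the packages listed above:

```julia
using LinearSolve, LinearAlgebra

A = rand(100, 100) + 10I   # shift to keep the system comfortably nonsingular
b = rand(100)
prob = LinearProblem(A, b)

# Any object supporting `ldiv!` can act as a preconditioner; a Diagonal is the simplest case.
Pl = Diagonal(diag(A))
sol = solve(prob, KrylovJL_GMRES(), Pl = Pl)
sol.u
```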

docs/src/basics/common_solver_opts.md

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 # Common Solver Options (Keyword Arguments for Solve)

 While many algorithms have specific arguments within their constructor,
-the keyword arguments for `solve` are common across all of the algorithms
+the keyword arguments for `solve` are common across all the algorithms
 in order to give composability. These are also the options taken at `init` time.
 The following are the options these algorithms take, along with their defaults.

@@ -23,5 +23,5 @@ solve completely. Error controls only apply to iterative solvers.
 - `abstol`: The absolute tolerance. Defaults to `√(eps(eltype(A)))`
 - `reltol`: The relative tolerance. Defaults to `√(eps(eltype(A)))`
 - `maxiters`: The number of iterations allowed. Defaults to `length(prob.b)`
-- `Pl,Pr`: The left and right preconditioners respectively. For more information
+- `Pl,Pr`: The left and right preconditioners, respectively. For more information,
   see [the Preconditioners page](@ref prec).
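
A short sketch of how these shared keyword arguments are passed at `solve` time (the tolerance and iteration values here are illustrative, not the defaults):

```julia
using LinearSolve

A = rand(4, 4); b = rand(4)
prob = LinearProblem(A, b)

# abstol, reltol, and maxiters are accepted by every algorithm; here they
# control the iterative GMRES solve.
sol = solve(prob, KrylovJL_GMRES(); abstol = 1e-10, reltol = 1e-10, maxiters = 200)
sol.u
```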

docs/src/solvers/solvers.md

Lines changed: 18 additions & 19 deletions
@@ -7,7 +7,7 @@ Solves for ``Au=b`` in the problem defined by `prob` using the algorithm

 ## Recommended Methods

-The default algorithm `nothing` is good for choosing an algorithm that will work,
+The default algorithm `nothing` is good for picking an algorithm that will work,
 but one may need to change this to receive more performance or precision. If
 more precision is necessary, `QRFactorization()` and `SVDFactorization()` are
 the best choices, with SVD being the slowest but most precise.
@@ -29,7 +29,7 @@ with CPUs and GPUs, and thus is the generally preferred form for Krylov methods.

 Finally, a user can pass a custom function for handling the linear solve using
 `LinearSolveFunction()` if existing solvers are not optimally suited for their application.
-The interface is detailed [here](#passing-in-a-custom-linear-solver)
+The interface is detailed [here](#passing-in-a-custom-linear-solver).

 ## Full List of Methods

@@ -49,29 +49,29 @@ customized per-package, details given below describe a subset of important array
 (`Matrix`, `SparseMatrixCSC`, `CuMatrix`, etc.)

 - `LUFactorization(pivot=LinearAlgebra.RowMaximum())`: Julia's built in `lu`.
-  - On dense matrices this uses the current BLAS implementation of the user's computer
+  - On dense matrices, this uses the current BLAS implementation of the user's computer,
     which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
     system.
-  - On sparse matrices this will use UMFPACK from SuiteSparse. Note that this will not
+  - On sparse matrices, this will use UMFPACK from SuiteSparse. Note that this will not
     cache the symbolic factorization.
-  - On CuMatrix it will use a CUDA-accelerated LU from CuSolver.
-  - On BandedMatrix and BlockBandedMatrix it will use a banded LU.
+  - On CuMatrix, it will use a CUDA-accelerated LU from CuSolver.
+  - On BandedMatrix and BlockBandedMatrix, it will use a banded LU.
 - `QRFactorization(pivot=LinearAlgebra.NoPivot(),blocksize=16)`: Julia's built in `qr`.
-  - On dense matrices this uses the current BLAS implementation of the user's computer
+  - On dense matrices, this uses the current BLAS implementation of the user's computer
     which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
     system.
-  - On sparse matrices this will use SPQR from SuiteSparse
-  - On CuMatrix it will use a CUDA-accelerated QR from CuSolver.
-  - On BandedMatrix and BlockBandedMatrix it will use a banded QR.
+  - On sparse matrices, this will use SPQR from SuiteSparse
+  - On CuMatrix, it will use a CUDA-accelerated QR from CuSolver.
+  - On BandedMatrix and BlockBandedMatrix, it will use a banded QR.
 - `SVDFactorization(full=false,alg=LinearAlgebra.DivideAndConquer())`: Julia's built in `svd`.
-  - On dense matrices this uses the current BLAS implementation of the user's computer
+  - On dense matrices, this uses the current BLAS implementation of the user's computer
     which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
     system.
 - `GenericFactorization(fact_alg)`: Constructs a linear solver from a generic
   factorization algorithm `fact_alg` which complies with the Base.LinearAlgebra
   factorization API. Quoting from Base:
   - If `A` is upper or lower triangular (or diagonal), no factorization of `A` is
-    required and the system is solved with either forward or backward substitution.
+    required. The system is then solved with either forward or backward substitution.
     For non-triangular square matrices, an LU factorization is used.
     For rectangular `A` the result is the minimum-norm least squares solution computed by a
     pivoted QR factorization of `A` and a rank estimate of `A` based on the R factor.
@@ -94,23 +94,22 @@ LinearSolve.jl provides a wrapper to these routines in a way where an initialize
 has a non-allocating LU factorization. In theory, this post-initialized solve should always
 be faster than the Base.LinearAlgebra version.

-- `FastLUFactorization` the `FastLapackInterface` version of the LU factorizaiton. Notably,
+- `FastLUFactorization` the `FastLapackInterface` version of the LU factorization. Notably,
   this version does not allow for choice of pivoting method.
 - `FastQRFactorization(pivot=NoPivot(),blocksize=32)`, the `FastLapackInterface` version of
-  the QR factorizaiton.
+  the QR factorization.

 ### SuiteSparse.jl

 By default, the SuiteSparse.jl are implemented for efficiency by caching the
-symbolic factorization. I.e. if `set_A` is used, it is expected that the new
+symbolic factorization. I.e., if `set_A` is used, it is expected that the new
 `A` has the same sparsity pattern as the previous `A`. If this algorithm is to
 be used in a context where that assumption does not hold, set `reuse_symbolic=false`.

 - `KLUFactorization(;reuse_symbolic=true)`: A fast sparse LU-factorization which
-  specializes on sparsity patterns with "less structure".
+  specializes on sparsity patterns with less structure.
 - `UMFPACKFactorization(;reuse_symbolic=true)`: A fast sparse multithreaded
-  LU-factorization which specializes on sparsity patterns that are more
-  structured.
+  LU-factorization which specializes on sparsity patterns with “more structure”.

 ### Pardiso.jl

@@ -150,7 +149,7 @@ end

 ### CUDA.jl

-Note that `CuArrays` are supported by `GenericFactorization` in the "normal" way.
+Note that `CuArrays` are supported by `GenericFactorization` in the normal way.
 The following are non-standard GPU factorization routines.

 !!! note
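
To make the algorithm selection concrete, a hedged sketch of explicitly requesting two of the factorizations listed in this file (the dense/sparse split and matrix sizes are illustrative):

```julia
using LinearSolve, LinearAlgebra, SparseArrays

# Dense system: explicitly request Julia's built-in LU.
A = rand(4, 4); b = rand(4)
sol_dense = solve(LinearProblem(A, b), LUFactorization())

# Sparse system: KLU specializes on sparsity patterns with less structure.
As = sprand(100, 100, 0.05) + I   # shift by I to keep it nonsingular
bs = rand(100)
sol_sparse = solve(LinearProblem(As, bs), KLUFactorization())
```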

docs/src/tutorials/caching_interface.md

Lines changed: 4 additions & 4 deletions
@@ -1,6 +1,6 @@
 # Linear Solve with Caching Interface

-In many cases one may want to cache information that is reused between different
+Often, one may want to cache information that is reused between different
 linear solves. For example, if one is going to perform:

 ```julia
@@ -52,9 +52,9 @@ sol3.u

 The factorization occurs on the first solve, and it stores the factorization in
 the cache. You can retrieve this cache via `sol.cache`, which is the same object
-as the `init` but updated to know not to re-solve the factorization.
+as the `init`, but updated to know not to re-solve the factorization.

 The advantage of course with using LinearSolve.jl in this form is that it is
 efficient while being agnostic to the linear solver. One can easily swap in
-iterative solvers, sparse solvers, etc. and it will do all of the tricks like
-caching symbolic factorizations if the sparsity pattern is unchanged.
+iterative solvers, sparse solvers, etc. and it will do all the tricks like
+caching the symbolic factorization if the sparsity pattern is unchanged.
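
A brief sketch of that workflow; `LinearSolve.set_b` mirrors the `set_A` mentioned in the solvers page and is an assumption about the exact setter name in the version being documented:

```julia
using LinearSolve

A = rand(4, 4)
b1 = rand(4); b2 = rand(4)

linsolve = init(LinearProblem(A, b1))
sol1 = solve(linsolve)                      # A is factorized on this first solve

# Swap in a new right-hand side; the cached factorization is reused.
linsolve = LinearSolve.set_b(sol1.cache, b2)
sol2 = solve(linsolve)
```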

docs/src/tutorials/linear.md

Lines changed: 1 addition & 1 deletion
@@ -32,6 +32,6 @@ sol = solve(prob,KrylovJL_GMRES())
 sol.u
 ```

-Thus a package which uses LinearSolve.jl simply needs to allow the user to
+Thus, a package which uses LinearSolve.jl simply needs to allow the user to
 pass in an algorithm struct and all wrapped linear solvers are immediately
 available as tweaks to the general algorithm.
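
For instance, a hypothetical downstream function that simply forwards a user-chosen algorithm might look like the following sketch (`solve_my_model` is illustrative, not part of any package):

```julia
using LinearSolve

# `alg = nothing` falls back to LinearSolve.jl's default algorithm choice.
function solve_my_model(A, b; alg = nothing)
    prob = LinearProblem(A, b)
    solve(prob, alg)
end

solve_my_model(rand(4, 4), rand(4); alg = KrylovJL_GMRES())
```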
