Commit 7a5eb95

Merge pull request #91 from SciML/docs
Document developing algorithms and keyword arguments
2 parents 885773b + 0faa602

4 files changed: +92 −1 lines changed

docs/make.jl

Lines changed: 4 additions & 0 deletions

```diff
@@ -19,12 +19,16 @@ makedocs(
     ],
     "Basics" => Any[
         "basics/LinearProblem.md",
+        "basics/common_solver_opts.md",
         "basics/CachingAPI.md",
         "basics/Preconditioners.md",
         "basics/FAQ.md"
     ],
     "Solvers" => Any[
         "solvers/solvers.md"
+    ],
+    "Advanced" => Any[
+        "advanced/developing.md"
     ]
 ]
)
```

docs/src/advanced/developing.md

Lines changed: 60 additions & 0 deletions (new file)

# Developing New Linear Solvers

Developing new or custom linear solvers for the SciML interface can be done in
one of two ways:

1. You can create a completely new set of dispatches for `init` and `solve`.
2. You can extend LinearSolve.jl's internal mechanisms.

For developer ease, we highly recommend (2), as that automatically makes the
caching API work. This page documents that approach.

## Developing New Linear Solvers with LinearSolve.jl Primitives

Let's create a new wrapper for a simple LU-factorization which uses only the
basic machinery. A simplified version is:

```julia
struct MyLUFactorization{P} <: SciMLBase.AbstractLinearAlgorithm end

# Called once at `init` time to create the first `cacheval`.
init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr, maxiters, abstol, reltol, verbose) =
    lu!(convert(AbstractMatrix, A))

function SciMLBase.solve(cache::LinearCache, alg::MyLUFactorization; kwargs...)
    if cache.isfresh # only refactorize when `A` has changed
        A = convert(AbstractMatrix, cache.A)
        fact = lu!(A)
        cache = set_cacheval(cache, fact)
    end
    y = ldiv!(cache.u, cache.cacheval, cache.b)
    SciMLBase.build_linear_solution(alg, y, nothing, cache)
end
```

The way this works is as follows. LinearSolve.jl has a `LinearCache` that everything
shares (this is what gives most of the ease of use). However, many algorithms
need to cache their own things, and so there's one value, `cacheval`, that is
reserved for the algorithms to modify. The function:

```julia
init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr, maxiters, abstol, reltol, verbose)
```

is what is called at `init` time to create the first `cacheval`. Note that this
should match the type of the cache later used in `solve`, as many algorithms, like
those in OrdinaryDiffEq.jl, expect type-groundedness in the linear solver definitions.
While there are cheaper ways to obtain this type for LU-factorizations (specifically,
`ArrayInterface.lu_instance(A)`), for a demonstration this simply performs an
LU-factorization to get an `LU{T, Matrix{T}}`, which it puts into the `cacheval`
so it's typed for future use.

After `init_cacheval`, the only thing left to do is to define
`SciMLBase.solve(cache::LinearCache, alg::MyLUFactorization)`. Many algorithms
may use a lazy matrix-free representation of the operator `A`, so if the
algorithm requires a concrete matrix, as LU-factorization does, it should call
`convert(AbstractMatrix, cache.A)`. The flag `cache.isfresh` states whether
`A` has changed since the last `solve`. Since we only need to factorize when
`A` is new, the factorization part of the algorithm is done inside an
`if cache.isfresh` block. `cache = set_cacheval(cache, fact)` puts the new
factorization into the cache so it's updated for future solves. Then
`y = ldiv!(cache.u, cache.cacheval, cache.b)` performs the solve, and a linear
solution is returned via `SciMLBase.build_linear_solution(alg, y, nothing, cache)`.
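To make the lifecycle described above concrete, here is a hedged usage sketch driven through the caching API. It uses the built-in `LUFactorization` in place of `MyLUFactorization` so the snippet is self-contained, and it assumes the `LinearProblem`, `init`, `solve`, and `LinearSolve.set_b` functions as exposed by LinearSolve.jl around the time of this PR:

```julia
using LinearSolve, LinearAlgebra

A = rand(4, 4) + 4I         # well-conditioned test matrix
b = rand(4)
prob = LinearProblem(A, b)

# `init` builds the shared LinearCache; `init_cacheval` runs here.
cache = init(prob, LUFactorization())
sol = solve(cache)          # cache.isfresh: factorizes, then solves

# Updating b leaves A untouched, so the cached factorization is reused.
cache = LinearSolve.set_b(cache, rand(4))
sol = solve(cache)          # no refactorization, just ldiv!
```

The second `solve` is where the `cacheval` mechanism pays off: only the cheap triangular solves run, not the O(n³) factorization.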

docs/src/basics/Preconditioners.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,4 +1,4 @@
-# Preconditioners
+# [Preconditioners](@id prec)
 
 Many linear solvers can be accelerated by using what is known as a **preconditioner**,
 an approximation to the matrix inverse action which is cheap to evaluate. These
```

docs/src/basics/common_solver_opts.md

Lines changed: 27 additions & 0 deletions (new file)

# Common Solver Options (Keyword Arguments to Solve)

While many algorithms have specific arguments within their constructors,
the keyword arguments for `solve` are common across all of the algorithms
in order to give composability. These are also the options accepted at `init` time.
The following are the options these algorithms take, along with their defaults.

## General Controls

- `alias_A`: Whether to alias the matrix `A` or use a copy by default. When `true`,
  algorithms like LU-factorization can be faster by reusing the memory via `lu!`,
  but care must be taken as the original input will be modified. Default is `false`.
- `alias_b`: Whether to alias the vector `b` or use a copy by default. When `true`,
  algorithms can write to and change `b` upon usage. Care must be taken as the
  original input will be modified. Default is `false`.
- `verbose`: Whether to print extra information. Defaults to `false`.

## Iterative Solver Controls

Error controls are not used by all algorithms. Specifically, direct solves always
solve completely; error controls only apply to iterative solvers.

- `abstol`: The absolute tolerance. Defaults to `√(eps(eltype(A)))`.
- `reltol`: The relative tolerance. Defaults to `√(eps(eltype(A)))`.
- `maxiters`: The maximum number of iterations allowed. Defaults to `length(prob.b)`.
- `Pl`, `Pr`: The left and right preconditioners, respectively. For more information,
  see [the Preconditioners page](@ref prec).
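As a hedged illustration of how these options behave in practice (the GMRES wrapper name `KrylovJL_GMRES` is assumed from LinearSolve.jl's solver list, and the tolerances here are arbitrary):

```julia
using LinearSolve, LinearAlgebra

A = rand(4, 4) + 4I         # diagonally dominated so GMRES converges quickly
b = rand(4)
prob = LinearProblem(A, b)

# Direct method: always solves completely; the error controls are ignored.
sol_direct = solve(prob, LUFactorization())

# Iterative method: the error controls and iteration cap apply.
sol_iter = solve(prob, KrylovJL_GMRES();
                 abstol = 1e-10, reltol = 1e-10, maxiters = 100, verbose = false)
```

Because the options are shared across algorithms, swapping `LUFactorization()` for `KrylovJL_GMRES()` requires no other changes to the call.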
