
Commit 3072f82

format
1 parent d38aeb6 commit 3072f82

File tree

12 files changed: +433 -392 lines changed

README.md

Lines changed: 21 additions & 20 deletions
@@ -7,26 +7,26 @@
 [![Build Status](https://github.com/SciML/LinearSolvers.jl/workflows/CI/badge.svg)](https://github.com/SciML/LinearSolvers.jl/actions?query=workflow%3ACI)
 [![Build status](https://badge.buildkite.com/74699764ce224514c9632e2750e08f77c6d174c5ba7cd38297.svg?branch=main)](https://buildkite.com/julialang/linearsolve-dot-jl)
 
-[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor's%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
+[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor%27s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
 [![SciML Code Style](https://img.shields.io/static/v1?label=code%20style&message=SciML&color=9558b2&labelColor=389826)](https://github.com/SciML/SciMLStyle)
 
 Fast implementations of linear solving algorithms in Julia that satisfy the SciML
 common interface. LinearSolve.jl makes it easy to define high level algorithms
 which allow for swapping out the linear solver that is used while maintaining
 maximum efficiency. Specifically, LinearSolve.jl includes:
 
-- Fast pure Julia LU factorizations which outperform standard BLAS
-- KLU for faster sparse LU factorization on unstructured matrices
-- UMFPACK for faster sparse LU factorization on matrices with some repeated structure
-- MKLPardiso wrappers for handling many sparse matrices faster than SuiteSparse (KLU, UMFPACK) methods
-- Sparspak.jl for sparse LU factorization in pure Julia for generic number types and for non-GPL distributions
-- GPU-offloading for large dense matrices
-- Wrappers to all of the Krylov implementations (Krylov.jl, IterativeSolvers.jl, KrylovKit.jl) for easy
-  testing of all of them. LinearSolve.jl handles the API differences, especially with the preconditioner
-  definitions
-- A polyalgorithm that smartly chooses between these methods
-- A caching interface which automates caching of symbolic factorizations and numerical factorizations
-  as optimally as possible
+  - Fast pure Julia LU factorizations which outperform standard BLAS
+  - KLU for faster sparse LU factorization on unstructured matrices
+  - UMFPACK for faster sparse LU factorization on matrices with some repeated structure
+  - MKLPardiso wrappers for handling many sparse matrices faster than SuiteSparse (KLU, UMFPACK) methods
+  - Sparspak.jl for sparse LU factorization in pure Julia for generic number types and for non-GPL distributions
+  - GPU-offloading for large dense matrices
+  - Wrappers to all of the Krylov implementations (Krylov.jl, IterativeSolvers.jl, KrylovKit.jl) for easy
    testing of all of them. LinearSolve.jl handles the API differences, especially with the preconditioner
    definitions
+  - A polyalgorithm that smartly chooses between these methods
+  - A caching interface which automates caching of symbolic factorizations and numerical factorizations
    as optimally as possible
 
 For information on using the package,
 [see the stable documentation](https://docs.sciml.ai/LinearSolve/stable/). Use the
@@ -37,8 +37,9 @@ the documentation which contains the unreleased features.
 
 ```julia
 n = 4
-A = rand(n,n)
-b1 = rand(n); b2 = rand(n)
+A = rand(n, n)
+b1 = rand(n);
+b2 = rand(n);
 prob = LinearProblem(A, b1)
 
 linsolve = init(prob)
@@ -53,7 +54,7 @@ sol1.u
 1.8385599677530706
 =#
 
-linsolve = LinearSolve.set_b(linsolve,b2)
+linsolve = LinearSolve.set_b(linsolve, b2)
 sol2 = solve(linsolve)
 
 sol2.u
@@ -65,8 +66,8 @@ sol2.u
 -0.4998342686003478
 =#
 
-linsolve = LinearSolve.set_b(linsolve,b2)
-sol2 = solve(linsolve,IterativeSolversJL_GMRES()) # Switch to GMRES
+linsolve = LinearSolve.set_b(linsolve, b2)
+sol2 = solve(linsolve, IterativeSolversJL_GMRES()) # Switch to GMRES
 sol2.u
 #=
 4-element Vector{Float64}:
@@ -76,8 +77,8 @@ sol2.u
 -0.4998342686003478
 =#
 
-A2 = rand(n,n)
-linsolve = LinearSolve.set_A(linsolve,A2)
+A2 = rand(n, n)
+linsolve = LinearSolve.set_A(linsolve, A2)
 sol3 = solve(linsolve)
 
 sol3.u
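Read end to end, the reformatted README example is the following workflow (a stitched-together sketch: the `using LinearSolve` line, the `sol1 = solve(linsolve)` call, and the `#= ... =#` output blocks sit outside or between the hunks shown above):

```julia
using LinearSolve

n = 4
A = rand(n, n)
b1 = rand(n);
b2 = rand(n);
prob = LinearProblem(A, b1)

# `init` builds a cache so that repeated solves can reuse work
linsolve = init(prob)
sol1 = solve(linsolve)
sol1.u

# Swap the right-hand side without refactorizing A
linsolve = LinearSolve.set_b(linsolve, b2)
sol2 = solve(linsolve)

# The same cache can be solved with a different algorithm
sol2 = solve(linsolve, IterativeSolversJL_GMRES()) # Switch to GMRES

# Replacing A marks the cache as stale, so the next solve refactorizes
A2 = rand(n, n)
linsolve = LinearSolve.set_A(linsolve, A2)
sol3 = solve(linsolve)
sol3.u
```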

docs/src/advanced/custom.md

Lines changed: 19 additions & 15 deletions
@@ -1,4 +1,5 @@
 # Passing in a Custom Linear Solver
+
 Julia users are building a wide variety of applications in the SciML ecosystem,
 often requiring problem-specific handling of their linear solves. As existing solvers in `LinearSolve.jl` may not
 be optimally suited for novel applications, it is essential for the linear solve
@@ -10,7 +11,7 @@ user can pass in their custom linear solve function, say `my_linsolve`, to
 ```@example advanced1
 using LinearSolve, LinearAlgebra
 
-function my_linsolve(A,b,u,p,newA,Pl,Pr,solverdata;verbose=true, kwargs...)
+function my_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwargs...)
     if verbose == true
         println("solving Ax=b")
     end
@@ -19,34 +20,37 @@ function my_linsolve(A,b,u,p,newA,Pl,Pr,solverdata;verbose=true, kwargs...)
 end
 
 prob = LinearProblem(Diagonal(rand(4)), rand(4))
-alg = LinearSolveFunction(my_linsolve)
-sol = solve(prob, alg)
+alg = LinearSolveFunction(my_linsolve)
+sol = solve(prob, alg)
 sol.u
 ```
+
 The inputs to the function are as follows:
-- `A`, the linear operator
-- `b`, the right-hand-side
-- `u`, the solution initialized as `zero(b)`,
-- `p`, a set of parameters
-- `newA`, a `Bool` which is `true` if `A` has been modified since last solve
-- `Pl`, left-preconditioner
-- `Pr`, right-preconditioner
-- `solverdata`, solver cache set to `nothing` if solver hasn't been initialized
-- `kwargs`, standard SciML keyword arguments such as `verbose`, `maxiters`, `abstol`, `reltol`
+
+  - `A`, the linear operator
+  - `b`, the right-hand-side
+  - `u`, the solution initialized as `zero(b)`,
+  - `p`, a set of parameters
+  - `newA`, a `Bool` which is `true` if `A` has been modified since last solve
+  - `Pl`, left-preconditioner
+  - `Pr`, right-preconditioner
+  - `solverdata`, solver cache set to `nothing` if solver hasn't been initialized
+  - `kwargs`, standard SciML keyword arguments such as `verbose`, `maxiters`, `abstol`, `reltol`
 
 The function `my_linsolve` must accept the above specified arguments, and return
 the solution, `u`. As memory for `u` is already allocated, the user may choose
 to modify `u` in place as follows:
+
 ```@example advanced1
-function my_linsolve!(A,b,u,p,newA,Pl,Pr,solverdata;verbose=true, kwargs...)
+function my_linsolve!(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwargs...)
     if verbose == true
         println("solving Ax=b")
     end
     u .= A \ b # in place
     return u
 end
 
-alg = LinearSolveFunction(my_linsolve!)
-sol = solve(prob, alg)
+alg = LinearSolveFunction(my_linsolve!)
+sol = solve(prob, alg)
 sol.u
 ```
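This page documents the `newA` and `solverdata` arguments without exercising them. As a complement, here is a sketch (mine, not part of the commit) of a custom function that only refactorizes when `newA` reports a changed operator; the factorization is held in a closure rather than in `solverdata`, since the examples above never initialize a solver cache:

```julia
using LinearSolve, LinearAlgebra

# Hypothetical helper: wraps a custom solve function around its own
# factorization cache, refactorizing only when `newA` is true.
function make_cached_linsolve()
    fact = nothing  # holds the LU factorization between calls
    function cached_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; kwargs...)
        if fact === nothing || newA
            fact = lu(convert(AbstractMatrix, A))
        end
        ldiv!(u, fact, b)  # in-place solve against the cached factorization
        return u
    end
    return cached_linsolve
end

prob = LinearProblem(rand(4, 4), rand(4))
alg = LinearSolveFunction(make_cached_linsolve())
sol = solve(prob, alg)
sol.u
```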

docs/src/advanced/developing.md

Lines changed: 63 additions & 60 deletions
@@ -1,60 +1,63 @@
-# Developing New Linear Solvers
-
-Developing new or custom linear solvers for the SciML interface can be done in
-one of two ways:
-
-1. You can either create a completely new set of dispatches for `init` and `solve`.
-2. You can extend LinearSolve.jl's internal mechanisms.
-
-For developer ease, we highly recommend (2) as that will automatically make the
-caching API work. Thus, this is the documentation for how to do that.
-
-## Developing New Linear Solvers with LinearSolve.jl Primitives
-
-Let's create a new wrapper for a simple LU-factorization which uses only the
-basic machinery. A simplified version is:
-
-```julia
-struct MyLUFactorization{P} <: SciMLBase.AbstractLinearAlgorithm end
-
-init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr, maxiters, abstol, reltol, verbose) = lu!(convert(AbstractMatrix,A))
-
-function SciMLBase.solve(cache::LinearCache, alg::MyLUFactorization; kwargs...)
-    if cache.isfresh
-        A = convert(AbstractMatrix,A)
-        fact = lu!(A)
-        cache = set_cacheval(cache, fact)
-    end
-    y = ldiv!(cache.u, cache.cacheval, cache.b)
-    SciMLBase.build_linear_solution(alg,y,nothing,cache)
-end
-```
-
-The way this works is as follows. LinearSolve.jl has a `LinearCache` that everything
-shares (this is what gives most of the ease of use). However, many algorithms
-need to cache their own things, and so there's one value `cacheval` that is
-for the algorithms to modify. The function:
-
-```julia
-init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr, maxiters, abstol, reltol, verbose)
-```
-
-is what is called at `init` time to create the first `cacheval`. Note that this
-should match the type of the cache later used in `solve` as many algorithms, like
-those in OrdinaryDiffEq.jl, expect type-groundedness in the linear solver definitions.
-While there are cheaper ways to obtain this type for LU factorizations (specifically,
-`ArrayInterfaceCore.lu_instance(A)`), for a demonstration, this just performs an
-LU-factorization to get an `LU{T, Matrix{T}}` which it puts into the `cacheval`
-so it is typed for future use.
-
-After the `init_cacheval`, the only thing left to do is to define
-`SciMLBase.solve(cache::LinearCache, alg::MyLUFactorization)`. Many algorithms
-may use a lazy matrix-free representation of the operator `A`. Thus, if the
-algorithm requires a concrete matrix, like LU-factorization does, the algorithm
-should `convert(AbstractMatrix,cache.A)`. The flag `cache.isfresh` states whether
-`A` has changed since the last `solve`. Since we only need to factorize when
-`A` is new, the factorization part of the algorithm is done in a `if cache.isfresh`.
-`cache = set_cacheval(cache, fact)` puts the new factorization into the cache,
-so it's updated for future solves. Then `y = ldiv!(cache.u, cache.cacheval, cache.b)`
-performs the solve and a linear solution is returned via
-`SciMLBase.build_linear_solution(alg,y,nothing,cache)`.
+# Developing New Linear Solvers
+
+Developing new or custom linear solvers for the SciML interface can be done in
+one of two ways:
+
+ 1. You can either create a completely new set of dispatches for `init` and `solve`.
+ 2. You can extend LinearSolve.jl's internal mechanisms.
+
+For developer ease, we highly recommend (2) as that will automatically make the
+caching API work. Thus, this is the documentation for how to do that.
+
+## Developing New Linear Solvers with LinearSolve.jl Primitives
+
+Let's create a new wrapper for a simple LU-factorization which uses only the
+basic machinery. A simplified version is:
+
+```julia
+struct MyLUFactorization{P} <: SciMLBase.AbstractLinearAlgorithm end
+
+function init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr, maxiters, abstol, reltol,
+                       verbose)
+    lu!(convert(AbstractMatrix, A))
+end
+
+function SciMLBase.solve(cache::LinearCache, alg::MyLUFactorization; kwargs...)
+    if cache.isfresh
+        A = convert(AbstractMatrix, A)
+        fact = lu!(A)
+        cache = set_cacheval(cache, fact)
+    end
+    y = ldiv!(cache.u, cache.cacheval, cache.b)
+    SciMLBase.build_linear_solution(alg, y, nothing, cache)
+end
+```
+
+The way this works is as follows. LinearSolve.jl has a `LinearCache` that everything
+shares (this is what gives most of the ease of use). However, many algorithms
+need to cache their own things, and so there's one value `cacheval` that is
+for the algorithms to modify. The function:
+
+```julia
+init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr, maxiters, abstol, reltol, verbose)
+```
+
+is what is called at `init` time to create the first `cacheval`. Note that this
+should match the type of the cache later used in `solve` as many algorithms, like
+those in OrdinaryDiffEq.jl, expect type-groundedness in the linear solver definitions.
+While there are cheaper ways to obtain this type for LU factorizations (specifically,
+`ArrayInterfaceCore.lu_instance(A)`), for a demonstration, this just performs an
+LU-factorization to get an `LU{T, Matrix{T}}` which it puts into the `cacheval`
+so it is typed for future use.
+
+After the `init_cacheval`, the only thing left to do is to define
+`SciMLBase.solve(cache::LinearCache, alg::MyLUFactorization)`. Many algorithms
+may use a lazy matrix-free representation of the operator `A`. Thus, if the
+algorithm requires a concrete matrix, like LU-factorization does, the algorithm
+should `convert(AbstractMatrix,cache.A)`. The flag `cache.isfresh` states whether
+`A` has changed since the last `solve`. Since we only need to factorize when
+`A` is new, the factorization part of the algorithm is done in a `if cache.isfresh`.
+`cache = set_cacheval(cache, fact)` puts the new factorization into the cache,
+so it's updated for future solves. Then `y = ldiv!(cache.u, cache.cacheval, cache.b)`
+performs the solve and a linear solution is returned via
+`SciMLBase.build_linear_solution(alg,y,nothing,cache)`.
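Two notes on the snippet this hunk reformats, neither of which the formatter could fix. First, `A = convert(AbstractMatrix, A)` inside `solve` references an undefined `A`; given the prose's `convert(AbstractMatrix,cache.A)`, the intended line is presumably `A = convert(AbstractMatrix, cache.A)`. Second, the cheaper `init_cacheval` the text alludes to would look something like this sketch (assuming `ArrayInterfaceCore.lu_instance`, which the text itself names):

```julia
using ArrayInterfaceCore

# Sketch: produce a correctly-typed `LU` instance for `cacheval` without
# paying for a real factorization at `init` time.
function init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr, maxiters, abstol, reltol,
                       verbose)
    ArrayInterfaceCore.lu_instance(convert(AbstractMatrix, A))
end
```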

docs/src/basics/CachingAPI.md

Lines changed: 9 additions & 9 deletions
@@ -1,9 +1,9 @@
-# Caching Interface API Functions
-
-```@docs
-LinearSolve.set_A
-LinearSolve.set_b
-LinearSolve.set_u
-LinearSolve.set_p
-LinearSolve.set_prec
-```
+# Caching Interface API Functions
+
+```@docs
+LinearSolve.set_A
+LinearSolve.set_b
+LinearSolve.set_u
+LinearSolve.set_p
+LinearSolve.set_prec
+```
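The rewrite in this file is whitespace-only; the visible text is identical on both sides. For reference, these setters follow the `(cache, value)` calling convention that the README diff above shows for `set_b` and `set_A`; a quick sketch of the remaining ones under that same assumption (`set_prec`'s exact signature is in the docstrings listed here):

```julia
using LinearSolve

prob = LinearProblem(rand(4, 4), rand(4))
linsolve = init(prob)

# Assumed (cache, value) signatures, mirroring the set_A/set_b usage:
linsolve = LinearSolve.set_u(linsolve, zeros(4))        # replace the solution vector
linsolve = LinearSolve.set_p(linsolve, (; scale = 2.0)) # replace the parameter object
sol = solve(linsolve)
```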

docs/src/basics/FAQ.md

Lines changed: 25 additions & 25 deletions
@@ -14,22 +14,22 @@ efficiency and ability to choose solvers.
 This is addressed in the [JuliaCon 2022 video](https://youtu.be/JWI34_w-yYw?t=182). This happens in
 a few ways:
 
-1. The Fortran/C code that NumPy/SciPy uses is actually slow. It's [OpenBLAS](https://github.com/xianyi/OpenBLAS),
-   a library developed in part by the Julia Lab back in 2012 as a fast open source BLAS implementation. Many
-   open source environments now use this build, including many R distributions. However, the Julia Lab has greatly
-   improved its ability to generate optimized SIMD in platform-specific ways. This, and improved multithreading support
-   (OpenBLAS's multithreading is rather slow), has led to pure Julia-based BLAS implementations which the lab now
-   works on. This includes [RecursiveFactorization.jl](https://github.com/JuliaLinearAlgebra/RecursiveFactorization.jl)
-   which generally outperforms OpenBLAS by 2x-10x depending on the platform. It even outperforms MKL for small matrices
-   (<100). LinearSolve.jl uses RecursiveFactorization.jl by default sometimes, but switches to BLAS when it would be
-   faster (in a platform and matrix-specific way).
-2. Standard approaches to handling linear solves re-allocate the pivoting vector each time. This leads to GC pauses that
-   can slow down calculations. LinearSolve.jl has proper caches for fully preallocated no-GC workflows.
-3. LinearSolve.jl makes many other optimizations, like factorization reuse and symbolic factorization reuse, automatic.
-   Many of these optimizations are not even possible from the high-level APIs of things like Python's major libraries and MATLAB.
-4. LinearSolve.jl has a much more extensive set of sparse matrix solvers, which is why you see a major difference (2x-10x) for sparse
-   matrices. Which sparse matrix solver between KLU, UMFPACK, Pardiso, etc. is optimal depends a lot on matrix sizes, sparsity patterns,
-   and threading overheads. LinearSolve.jl's heuristics handle these kinds of issues.
+ 1. The Fortran/C code that NumPy/SciPy uses is actually slow. It's [OpenBLAS](https://github.com/xianyi/OpenBLAS),
    a library developed in part by the Julia Lab back in 2012 as a fast open source BLAS implementation. Many
    open source environments now use this build, including many R distributions. However, the Julia Lab has greatly
    improved its ability to generate optimized SIMD in platform-specific ways. This, and improved multithreading support
    (OpenBLAS's multithreading is rather slow), has led to pure Julia-based BLAS implementations which the lab now
    works on. This includes [RecursiveFactorization.jl](https://github.com/JuliaLinearAlgebra/RecursiveFactorization.jl)
    which generally outperforms OpenBLAS by 2x-10x depending on the platform. It even outperforms MKL for small matrices
    (<100). LinearSolve.jl uses RecursiveFactorization.jl by default sometimes, but switches to BLAS when it would be
    faster (in a platform and matrix-specific way).
+ 2. Standard approaches to handling linear solves re-allocate the pivoting vector each time. This leads to GC pauses that
    can slow down calculations. LinearSolve.jl has proper caches for fully preallocated no-GC workflows.
+ 3. LinearSolve.jl makes many other optimizations, like factorization reuse and symbolic factorization reuse, automatic.
    Many of these optimizations are not even possible from the high-level APIs of things like Python's major libraries and MATLAB.
+ 4. LinearSolve.jl has a much more extensive set of sparse matrix solvers, which is why you see a major difference (2x-10x) for sparse
    matrices. Which sparse matrix solver between KLU, UMFPACK, Pardiso, etc. is optimal depends a lot on matrix sizes, sparsity patterns,
    and threading overheads. LinearSolve.jl's heuristics handle these kinds of issues.
 
 ## How do I use IterativeSolvers solvers with a weighted tolerance vector?
 
@@ -41,16 +41,15 @@ hack the system via the following formulation:
 using LinearSolve, LinearAlgebra
 
 n = 2
-A = rand(n,n)
+A = rand(n, n)
 b = rand(n)
 
 weights = [1e-1, 1]
 Pl = LinearSolve.InvPreconditioner(Diagonal(weights))
 Pr = Diagonal(weights)
 
-
-prob = LinearProblem(A,b)
-sol = solve(prob,IterativeSolversJL_GMRES(),Pl=Pl,Pr=Pr)
+prob = LinearProblem(A, b)
+sol = solve(prob, IterativeSolversJL_GMRES(), Pl = Pl, Pr = Pr)
 
 sol.u
 ```
@@ -63,14 +62,15 @@ of the weights like as follows:
 using LinearSolve, LinearAlgebra
 
 n = 4
-A = rand(n,n)
+A = rand(n, n)
 b = rand(n)
 
 weights = rand(n)
-realprec = lu(rand(n,n)) # some random preconditioner
-Pl = LinearSolve.ComposePreconditioner(LinearSolve.InvPreconditioner(Diagonal(weights)),realprec)
+realprec = lu(rand(n, n)) # some random preconditioner
+Pl = LinearSolve.ComposePreconditioner(LinearSolve.InvPreconditioner(Diagonal(weights)),
+                                       realprec)
 Pr = Diagonal(weights)
 
-prob = LinearProblem(A,b)
-sol = solve(prob,IterativeSolversJL_GMRES(),Pl=Pl,Pr=Pr)
+prob = LinearProblem(A, b)
+sol = solve(prob, IterativeSolversJL_GMRES(), Pl = Pl, Pr = Pr)
 ```
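A gloss on why the weighted-preconditioner trick works (my note, not part of the diff): `LinearSolve.InvPreconditioner(Diagonal(weights))` multiplies by `W = Diagonal(weights)` wherever the solver would divide by `Pl`, while `Pr = W` maps the Krylov iterate back through `inv(W)`. The method therefore iterates on `(W * A * inv(W)) v = W * b` with `u = inv(W) * v`, so its convergence test sees the weighted residual `norm(W * (b - A * u))`, which is exactly a weighted tolerance. A check sketch:

```julia
using LinearSolve, LinearAlgebra

n = 2
A = rand(n, n)
b = rand(n)
weights = [1e-1, 1]
W = Diagonal(weights)

prob = LinearProblem(A, b)
sol = solve(prob, IterativeSolversJL_GMRES(),
            Pl = LinearSolve.InvPreconditioner(W), Pr = W)

# The residual the solver actually drove below its tolerance:
norm(W * (b - A * sol.u))
```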

docs/src/basics/LinearProblem.md

Lines changed: 1 addition & 1 deletion
@@ -2,4 +2,4 @@
 
 ```@docs
 LinearProblem
-```
+```
