
Commit 6370728

Merge pull request #635 from SciML/explicitimports
Use explicit imports
2 parents aba7dd6 + d876514

File tree: 7 files changed, +83 −73 lines

docs/src/advanced/custom.md

Lines changed: 9 additions & 8 deletions
````diff
@@ -4,12 +4,13 @@ Julia users are building a wide variety of applications in the SciML ecosystem,
 often requiring problem-specific handling of their linear solves. As existing solvers in `LinearSolve.jl` may not
 be optimally suited for novel applications, it is essential for the linear solve
 interface to be easily extendable by users. To that end, the linear solve algorithm
-`LinearSolveFunction()` accepts a user-defined function for handling the solve. A
+`LS.LinearSolveFunction()` accepts a user-defined function for handling the solve. A
 user can pass in their custom linear solve function, say `my_linsolve`, to
-`LinearSolveFunction()`. A contrived example of solving a linear system with a custom solver is below.
+`LS.LinearSolveFunction()`. A contrived example of solving a linear system with a custom solver is below.
 
 ```@example advanced1
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA
 
 function my_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwargs...)
     if verbose == true
@@ -19,9 +20,9 @@ function my_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwarg
     return u
 end
 
-prob = LinearProblem(Diagonal(rand(4)), rand(4))
-alg = LinearSolveFunction(my_linsolve)
-sol = solve(prob, alg)
+prob = LS.LinearProblem(LA.Diagonal(rand(4)), rand(4))
+alg = LS.LinearSolveFunction(my_linsolve)
+sol = LS.solve(prob, alg)
 sol.u
 ```
 
@@ -50,7 +51,7 @@ function my_linsolve!(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwar
     return u
 end
 
-alg = LinearSolveFunction(my_linsolve!)
-sol = solve(prob, alg)
+alg = LS.LinearSolveFunction(my_linsolve!)
+sol = LS.solve(prob, alg)
 sol.u
 ```
````
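Since the hunks above elide the body of `my_linsolve`, here is a consolidated, runnable sketch of the updated example; the `u = A \ b` body is an illustrative stand-in for the elided lines, not necessarily the docs' exact code:

```julia
import LinearSolve as LS
import LinearAlgebra as LA

# A custom solve function receives the cached A, b, u, parameters, and solver
# state, and must return the solution vector.
function my_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwargs...)
    if verbose == true
        println("solving Au = b with a custom function")
    end
    u = A \ b  # stand-in for the elided body
    return u
end

prob = LS.LinearProblem(LA.Diagonal(rand(4)), rand(4))
alg = LS.LinearSolveFunction(my_linsolve)
sol = LS.solve(prob, alg)
sol.u
```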

docs/src/basics/FAQ.md

Lines changed: 12 additions & 10 deletions
````diff
@@ -50,17 +50,18 @@ Thus, in order to use a vector tolerance `weights`, one can mathematically
 hack the system via the following formulation:
 
 ```@example FAQPrec
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA
 
 n = 2
 A = rand(n, n)
 b = rand(n)
 
 weights = [1e-1, 1]
-precs = Returns((LinearSolve.InvPreconditioner(Diagonal(weights)), Diagonal(weights)))
+precs = Returns((LS.InvPreconditioner(LA.Diagonal(weights)), LA.Diagonal(weights)))
 
-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(precs))
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs))
 
 sol.u
 ```
@@ -70,18 +71,19 @@ can use `ComposePreconditioner` to apply the preconditioner after the applicatio
 of the weights like as follows:
 
 ```@example FAQ2
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA
 
 n = 4
 A = rand(n, n)
 b = rand(n)
 
 weights = rand(n)
-realprec = lu(rand(n, n)) # some random preconditioner
-Pl = LinearSolve.ComposePreconditioner(LinearSolve.InvPreconditioner(Diagonal(weights)),
+realprec = LA.lu(rand(n, n)) # some random preconditioner
+Pl = LS.ComposePreconditioner(LS.InvPreconditioner(LA.Diagonal(weights)),
     realprec)
-Pr = Diagonal(weights)
+Pr = LA.Diagonal(weights)
 
-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(precs = Returns((Pl, Pr))))
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = Returns((Pl, Pr))))
 ```
````
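Since the second hunk splits the `Pl` construction across diff lines, here is the composed-preconditioner example consolidated into one runnable sketch in the new explicit-import style:

```julia
import LinearSolve as LS
import LinearAlgebra as LA

n = 4
A = rand(n, n)
b = rand(n)

# Compose the inverse-weights preconditioner with a "real" preconditioner so
# the weighting and the preconditioner are applied together on the left side.
weights = rand(n)
realprec = LA.lu(rand(n, n)) # some random preconditioner
Pl = LS.ComposePreconditioner(LS.InvPreconditioner(LA.Diagonal(weights)), realprec)
Pr = LA.Diagonal(weights)

prob = LS.LinearProblem(A, b)
sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = Returns((Pl, Pr))))
sol.u
```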

docs/src/basics/Preconditioners.md

Lines changed: 17 additions & 14 deletions
````diff
@@ -38,16 +38,17 @@ the identity ``I``.
 In the following, we will use a left sided diagonal (Jacobi) preconditioner.
 
 ```@example precon1
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA
 n = 4
 
 A = rand(n, n)
 b = rand(n)
 
-Pl = Diagonal(A)
+Pl = LA.Diagonal(A)
 
-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(), Pl = Pl)
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(), Pl = Pl)
 sol.u
 ```
 
@@ -56,14 +57,15 @@ an iterative solver specification. This argument shall deliver a factory method
 parameter `p` to a tuple `(Pl,Pr)` consisting a left and a right preconditioner.
 
 ```@example precon2
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA
 n = 4
 
 A = rand(n, n)
 b = rand(n)
 
-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(precs = (A, p) -> (Diagonal(A), I)))
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = (A, p) -> (LA.Diagonal(A), LA.I)))
 sol.u
 ```
 
@@ -73,26 +75,27 @@ and to pass parameters to the constructor of the preconditioner instances. The
 to reuse the preconditioner once constructed for the subsequent solution of a modified problem.
 
 ```@example precon3
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA
 
 Base.@kwdef struct WeightedDiagonalPreconBuilder
     w::Float64
 end
 
-(builder::WeightedDiagonalPreconBuilder)(A, p) = (builder.w * Diagonal(A), I)
+(builder::WeightedDiagonalPreconBuilder)(A, p) = (builder.w * LA.Diagonal(A), LA.I)
 
 n = 4
-A = n * I - rand(n, n)
+A = n * LA.I - rand(n, n)
 b = rand(n)
 
-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))
 sol.u
 
 B = A .+ 0.1
 cache = sol.cache
-reinit!(cache, A = B, reuse_precs = true)
-sol = solve!(cache, KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))
+LS.reinit!(cache, A = B, reuse_precs = true)
+sol = LS.solve!(cache, LS.KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))
 sol.u
 ```
````
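For readability, the preconditioner-reuse pattern from the last hunk, consolidated into one runnable sketch:

```julia
import LinearSolve as LS
import LinearAlgebra as LA

# A callable struct that maps (A, p) to a (left, right) preconditioner tuple.
Base.@kwdef struct WeightedDiagonalPreconBuilder
    w::Float64
end
(builder::WeightedDiagonalPreconBuilder)(A, p) = (builder.w * LA.Diagonal(A), LA.I)

n = 4
A = n * LA.I - rand(n, n)
b = rand(n)

prob = LS.LinearProblem(A, b)
sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))

# Re-solve a perturbed system while reusing the already-built preconditioner.
B = A .+ 0.1
cache = sol.cache
LS.reinit!(cache, A = B, reuse_precs = true)
sol = LS.solve!(cache, LS.KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))
sol.u
```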

docs/src/solvers/solvers.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -1,6 +1,6 @@
 # [Linear System Solvers](@id linearsystemsolvers)
 
-`solve(prob::LinearProblem,alg;kwargs)`
+`LS.solve(prob::LS.LinearProblem,alg;kwargs)`
 
 Solves for ``Au=b`` in the problem defined by `prob` using the algorithm
 `alg`. If no algorithm is given, a default algorithm will be chosen.
@@ -11,7 +11,7 @@ Solves for ``Au=b`` in the problem defined by `prob` using the algorithm
 
 The default algorithm `nothing` is good for picking an algorithm that will work,
 but one may need to change this to receive more performance or precision. If
-more precision is necessary, `QRFactorization()` and `SVDFactorization()` are
+more precision is necessary, `LS.QRFactorization()` and `LS.SVDFactorization()` are
 the best choices, with SVD being the slowest but most precise.
 
 For efficiency, `RFLUFactorization` is the fastest for dense LU-factorizations until around
@@ -59,7 +59,7 @@ has, for example if positive definite then `Krylov_CG()`, but if no good propert
 use `Krylov_GMRES()`.
 
 Finally, a user can pass a custom function for handling the linear solve using
-`LinearSolveFunction()` if existing solvers are not optimally suited for their application.
+`LS.LinearSolveFunction()` if existing solvers are not optimally suited for their application.
 The interface is detailed [here](@ref custom).
 
 ### Lazy SciMLOperators
````
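As a quick illustration of the algorithm choices this page describes, a minimal sketch in the new style (the matrix size here is arbitrary):

```julia
import LinearSolve as LS

A = rand(100, 100)
b = rand(100)
prob = LS.LinearProblem(A, b)

sol_default = LS.solve(prob)                    # let LinearSolve pick an algorithm
sol_qr = LS.solve(prob, LS.QRFactorization())   # more robust for ill-conditioned A
sol_svd = LS.solve(prob, LS.SVDFactorization()) # slowest but most precise
sol_svd.u
```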

docs/src/tutorials/caching_interface.md

Lines changed: 9 additions & 8 deletions
````diff
@@ -11,7 +11,7 @@ A \ b2
 then it would be more efficient to LU-factorize one time and reuse the factorization:
 
 ```julia
-lu!(A)
+LA.lu!(A)
 A \ b1
 A \ b2
 ```
@@ -21,21 +21,22 @@ means of solving and resolving linear systems. To do this with LinearSolve.jl,
 you simply `init` a cache, `solve`, replace `b`, and solve again. This looks like:
 
 ```@example linsys2
-using LinearSolve
+import LinearSolve as LS
+import LinearAlgebra as LA
 
 n = 4
 A = rand(n, n)
 b1 = rand(n);
 b2 = rand(n);
-prob = LinearProblem(A, b1)
+prob = LS.LinearProblem(A, b1)
 
-linsolve = init(prob)
-sol1 = solve!(linsolve)
+linsolve = LS.init(prob)
+sol1 = LS.solve!(linsolve)
 ```
 
 ```@example linsys2
 linsolve.b = b2
-sol2 = solve!(linsolve)
+sol2 = LS.solve!(linsolve)
 
 sol2.u
 ```
@@ -45,7 +46,7 @@ Then refactorization will occur when a new `A` is given:
 ```@example linsys2
 A2 = rand(n, n)
 linsolve.A = A2
-sol3 = solve!(linsolve)
+sol3 = LS.solve!(linsolve)
 
 sol3.u
 ```
@@ -54,7 +55,7 @@ The factorization occurs on the first solve, and it stores the factorization in
 the cache. You can retrieve this cache via `sol.cache`, which is the same object
 as the `init`, but updated to know not to re-solve the factorization.
 
-The advantage of course with using LinearSolve.jl in this form is that it is
+The advantage of course with import LinearSolve.jl in this form is that it is
 efficient while being agnostic to the linear solver. One can easily swap in
 iterative solvers, sparse solvers, etc. and it will do all the tricks like
 caching the symbolic factorization if the sparsity pattern is unchanged.
````
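Pulling the hunks together, a minimal end-to-end sketch of the caching workflow in the new style:

```julia
import LinearSolve as LS

n = 4
A = rand(n, n)
b1 = rand(n)
b2 = rand(n)

prob = LS.LinearProblem(A, b1)
linsolve = LS.init(prob)   # build the cache; factorization happens on the first solve!
sol1 = LS.solve!(linsolve)

linsolve.b = b2            # swap the right-hand side; the factorization is reused
sol2 = LS.solve!(linsolve)

linsolve.A = rand(n, n)    # swapping A triggers refactorization on the next solve!
sol3 = LS.solve!(linsolve)
sol3.u
```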

docs/src/tutorials/gpu.md

Lines changed: 15 additions & 13 deletions
````diff
@@ -27,20 +27,20 @@ GPU offloading is simple as it's done simply by changing the solver algorithm. T
 example from the start of the documentation:
 
 ```julia
-using LinearSolve
+import LinearSolve as LS
 
 A = rand(4, 4)
 b = rand(4)
-prob = LinearProblem(A, b)
-sol = solve(prob)
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob)
 sol.u
 ```
 
 This computation can be moved to the GPU by the following:
 
 ```julia
 using CUDA # Add the GPU library
-sol = solve(prob, CudaOffloadFactorization())
+sol = LS.solve(prob, LS.CudaOffloadFactorization())
 sol.u
 ```
 
@@ -56,8 +56,8 @@ using CUDA
 
 A = rand(4, 4) |> cu
 b = rand(4) |> cu
-prob = LinearProblem(A, b)
-sol = solve(prob)
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob)
 sol.u
 ```
 
@@ -81,13 +81,13 @@ move things to CPU on command.
 However, this change in numerical precision needs to be accounted for in your mathematics
 as it could lead to instabilities. To disable this, use a constructor that is more
 specific about the bitsize, such as `CuArray{Float64}(A)`. Additionally, preferring more
-stable factorization methods, such as `QRFactorization()`, can improve the numerics in
+stable factorization methods, such as `LS.QRFactorization()`, can improve the numerics in
 such cases.
 
 Similarly to other use cases, you can choose the solver, for example:
 
 ```julia
-sol = solve(prob, QRFactorization())
+sol = LS.solve(prob, LS.QRFactorization())
 ```
 
 ## Sparse Matrices on GPUs
@@ -96,10 +96,12 @@ Currently, sparse matrix computations on GPUs are only supported for CUDA. This
 the `CUDA.CUSPARSE` sublibrary.
 
 ```julia
-using LinearAlgebra, CUDA.CUSPARSE
+import LinearAlgebra as LA
+import SparseArrays as SA
+import CUDA
 T = Float32
 n = 100
-A_cpu = sprand(T, n, n, 0.05) + I
+A_cpu = SA.sprand(T, n, n, 0.05) + LA.I
 x_cpu = zeros(T, n)
 b_cpu = rand(T, n)
 
@@ -112,7 +114,7 @@ In order to solve such problems using a direct method, you must add
 
 ```julia
 using CUDSS
-sol = solve(prob, LUFactorization())
+sol = LS.solve(prob, LS.LUFactorization())
 ```
 
 !!! note
@@ -122,13 +124,13 @@ sol = solve(prob, LUFactorization())
 Note that `KrylovJL` methods also work with sparse GPU arrays:
 
 ```julia
-sol = solve(prob, KrylovJL_GMRES())
+sol = LS.solve(prob, LS.KrylovJL_GMRES())
 ```
 
 Note that CUSPARSE also has some GPU-based preconditioners, such as a built-in `ilu`. However:
 
 ```julia
-sol = solve(prob, KrylovJL_GMRES(precs = (A, p) -> (CUDA.CUSPARSE.ilu02!(A, 'O'), I)))
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = (A, p) -> (CUDA.CUSPARSE.ilu02!(A, 'O'), LA.I)))
 ```
 
 However, right now CUSPARSE is missing the right `ldiv!` implementation for this to work
````
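For reference, a consolidated sketch of the GPU-offloading example in the new style; it assumes a CUDA-capable GPU and will not run without one:

```julia
import LinearSolve as LS
using CUDA # GPU backend; assumes a CUDA-capable device

A = rand(4, 4)
b = rand(4)
prob = LS.LinearProblem(A, b)

sol_cpu = LS.solve(prob)                                 # default CPU solve
sol_gpu = LS.solve(prob, LS.CudaOffloadFactorization())  # offload the factorization to the GPU
sol_gpu.u
```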
