diff --git a/docs/src/advanced/custom.md b/docs/src/advanced/custom.md
index 5f17b096c..ff0b41130 100644
--- a/docs/src/advanced/custom.md
+++ b/docs/src/advanced/custom.md
@@ -4,12 +4,13 @@
Julia users are building a wide variety of applications in the SciML ecosystem,
often requiring problem-specific handling of their linear solves. As existing solvers in
`LinearSolve.jl` may not be optimally suited for novel applications, it is essential
for the linear solve interface to be easily extendable by users. To that end, the
linear solve algorithm
-`LinearSolveFunction()` accepts a user-defined function for handling the solve. A
+`LS.LinearSolveFunction()` accepts a user-defined function for handling the solve. A
user can pass in their custom linear solve function, say `my_linsolve`, to
-`LinearSolveFunction()`. A contrived example of solving a linear system with a custom solver is below.
+`LS.LinearSolveFunction()`. A contrived example of solving a linear system with a custom solver is below.

```@example advanced1
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA

function my_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwargs...)
    if verbose == true
@@ -19,9 +20,9 @@ function my_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwarg
    return u
end

-prob = LinearProblem(Diagonal(rand(4)), rand(4))
-alg = LinearSolveFunction(my_linsolve)
-sol = solve(prob, alg)
+prob = LS.LinearProblem(LA.Diagonal(rand(4)), rand(4))
+alg = LS.LinearSolveFunction(my_linsolve)
+sol = LS.solve(prob, alg)
sol.u
```

@@ -50,7 +51,7 @@ function my_linsolve!(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwar
    return u
end

-alg = LinearSolveFunction(my_linsolve!)
-sol = solve(prob, alg)
+alg = LS.LinearSolveFunction(my_linsolve!)
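+# `my_linsolve!` (defined above) is assumed to write its result into the
+# preallocated `u` and return it, so repeated solves can reuse the same buffer.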
+sol = LS.solve(prob, alg)
sol.u
```
diff --git a/docs/src/basics/FAQ.md b/docs/src/basics/FAQ.md
index 5027b22e2..b5eb6e612 100644
--- a/docs/src/basics/FAQ.md
+++ b/docs/src/basics/FAQ.md
@@ -50,17 +50,18 @@
Thus, in order to use a vector tolerance `weights`, one can mathematically
hack the system via the following formulation:

```@example FAQPrec
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA

n = 2
A = rand(n, n)
b = rand(n)

weights = [1e-1, 1]
-precs = Returns((LinearSolve.InvPreconditioner(Diagonal(weights)), Diagonal(weights)))
+precs = Returns((LS.InvPreconditioner(LA.Diagonal(weights)), LA.Diagonal(weights)))

-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(precs))
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs))
sol.u
```

@@ -70,18 +71,19 @@ can use `ComposePreconditioner` to apply the preconditioner after the applicatio
of the weights as follows:

```@example FAQ2
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA

n = 4
A = rand(n, n)
b = rand(n)

weights = rand(n)
-realprec = lu(rand(n, n)) # some random preconditioner
-Pl = LinearSolve.ComposePreconditioner(LinearSolve.InvPreconditioner(Diagonal(weights)),
+realprec = LA.lu(rand(n, n)) # some random preconditioner
+Pl = LS.ComposePreconditioner(LS.InvPreconditioner(LA.Diagonal(weights)),
    realprec)
-Pr = Diagonal(weights)
+Pr = LA.Diagonal(weights)

-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(precs = Returns((Pl, Pr))))
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = Returns((Pl, Pr))))
```
diff --git a/docs/src/basics/Preconditioners.md b/docs/src/basics/Preconditioners.md
index d8c385c45..83619704d 100644
--- a/docs/src/basics/Preconditioners.md
+++ b/docs/src/basics/Preconditioners.md
@@ -38,16 +38,17 @@ the identity ``I``.
In the following, we will use a left sided diagonal (Jacobi) preconditioner.

```@example precon1
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA

n = 4
A = rand(n, n)
b = rand(n)

-Pl = Diagonal(A)
+Pl = LA.Diagonal(A)

-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(), Pl = Pl)
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(), Pl = Pl)
sol.u
```

@@ -56,14 +57,15 @@ an iterative solver specification. This argument shall deliver a factory method
parameter `p` to a tuple `(Pl,Pr)` consisting of a left and a right preconditioner.

```@example precon2
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA

n = 4
A = rand(n, n)
b = rand(n)

-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(precs = (A, p) -> (Diagonal(A), I)))
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = (A, p) -> (LA.Diagonal(A), LA.I)))
sol.u
```

@@ -73,26 +75,27 @@ and to pass parameters to the constructor of the preconditioner instances. The
to reuse the preconditioner once constructed for the subsequent solution of a modified problem.
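
+As a primer for the example below: any callable of the form `(A, p) -> (Pl, Pr)` can
+serve as a `precs` builder. A plain-function version of the diagonal builder (the
+name `diagonal_precs` is purely illustrative) might look like this sketch:
+
+```julia
+import LinearSolve as LS
+import LinearAlgebra as LA
+
+# Left Jacobi (diagonal) preconditioner, identity on the right.
+diagonal_precs(A, p) = (LA.Diagonal(A), LA.I)
+
+A = 4 * LA.I - rand(4, 4)
+b = rand(4)
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = diagonal_precs))
+sol.u
+```
+
+The struct-based builder in the following example implements the same calling
+convention while additionally carrying a weight parameter `w`.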
```@example precon3
-using LinearSolve, LinearAlgebra
+import LinearSolve as LS
+import LinearAlgebra as LA

Base.@kwdef struct WeightedDiagonalPreconBuilder
    w::Float64
end

-(builder::WeightedDiagonalPreconBuilder)(A, p) = (builder.w * Diagonal(A), I)
+(builder::WeightedDiagonalPreconBuilder)(A, p) = (builder.w * LA.Diagonal(A), LA.I)

n = 4
-A = n * I - rand(n, n)
+A = n * LA.I - rand(n, n)
b = rand(n)

-prob = LinearProblem(A, b)
-sol = solve(prob, KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))
sol.u

B = A .+ 0.1
cache = sol.cache
-reinit!(cache, A = B, reuse_precs = true)
-sol = solve!(cache, KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))
+LS.reinit!(cache, A = B, reuse_precs = true)
+sol = LS.solve!(cache, LS.KrylovJL_GMRES(precs = WeightedDiagonalPreconBuilder(w = 0.9)))
sol.u
```
diff --git a/docs/src/solvers/solvers.md b/docs/src/solvers/solvers.md
index f6c828edb..02825b3fa 100644
--- a/docs/src/solvers/solvers.md
+++ b/docs/src/solvers/solvers.md
@@ -1,6 +1,6 @@
# [Linear System Solvers](@id linearsystemsolvers)

-`solve(prob::LinearProblem,alg;kwargs)`
+`LS.solve(prob::LS.LinearProblem,alg;kwargs)`

Solves for ``Au=b`` in the problem defined by `prob` using the algorithm
`alg`. If no algorithm is given, a default algorithm will be chosen.

@@ -11,7 +11,7 @@
The default algorithm `nothing` is good for picking an algorithm that will
work, but one may need to change this to receive more performance or precision. If
-more precision is necessary, `QRFactorization()` and `SVDFactorization()` are
+more precision is necessary, `LS.QRFactorization()` and `LS.SVDFactorization()` are
the best choices, with SVD being the slowest but most precise.

For efficiency, `RFLUFactorization` is the fastest for dense LU-factorizations until around
@@ -59,7 +59,7 @@ has, for example if positive definite then `Krylov_CG()`, but if no good propert
use `Krylov_GMRES()`.

Finally, a user can pass a custom function for handling the linear solve using
-`LinearSolveFunction()` if existing solvers are not optimally suited for their application.
+`LS.LinearSolveFunction()` if existing solvers are not optimally suited for their application.
The interface is detailed [here](@ref custom).

### Lazy SciMLOperators
diff --git a/docs/src/tutorials/caching_interface.md b/docs/src/tutorials/caching_interface.md
index 9f43b3b5a..291209b85 100644
--- a/docs/src/tutorials/caching_interface.md
+++ b/docs/src/tutorials/caching_interface.md
@@ -11,7 +11,7 @@ A \ b2
then it would be more efficient to LU-factorize one time and reuse the factorization:

```julia
-lu!(A)
+LA.lu!(A)
A \ b1
A \ b2
```

@@ -21,21 +21,22 @@ means of solving and resolving linear systems. To do this with LinearSolve.jl,
you simply `init` a cache, `solve`, replace `b`, and solve again.
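
+Note that to actually reuse the factorization in the plain-Julia version, one
+would capture the object returned by `lu!` (a minimal sketch; the name `F` is
+illustrative):
+
+```julia
+import LinearAlgebra as LA
+
+F = LA.lu!(A)   # factorization object; `A`'s storage now holds the LU factors
+F \ b1
+F \ b2
+```
+
+LinearSolve.jl's caching interface automates exactly this bookkeeping.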
This looks like:

```@example linsys2
-using LinearSolve
+import LinearSolve as LS
+import LinearAlgebra as LA

n = 4
A = rand(n, n)
b1 = rand(n);
b2 = rand(n);

-prob = LinearProblem(A, b1)
+prob = LS.LinearProblem(A, b1)

-linsolve = init(prob)
-sol1 = solve!(linsolve)
+linsolve = LS.init(prob)
+sol1 = LS.solve!(linsolve)
```

```@example linsys2
linsolve.b = b2
-sol2 = solve!(linsolve)
+sol2 = LS.solve!(linsolve)
sol2.u
```

@@ -45,7 +46,7 @@ Then refactorization will occur when a new `A` is given:

```@example linsys2
A2 = rand(n, n)
linsolve.A = A2
-sol3 = solve!(linsolve)
+sol3 = LS.solve!(linsolve)
sol3.u
```

@@ -54,7 +55,7 @@ The factorization occurs on the first solve, and it stores the factorization in
the cache. You can retrieve this cache via `sol.cache`, which is the same object
as the `init`, but updated to know not to re-solve the factorization.

-The advantage of course with using LinearSolve.jl in this form is that it is
+The advantage of course with LinearSolve.jl in this form is that it is
efficient while being agnostic to the linear solver. One can easily swap in
iterative solvers, sparse solvers, etc. and it will do all the tricks like
caching the symbolic factorization if the sparsity pattern is unchanged.
diff --git a/docs/src/tutorials/gpu.md b/docs/src/tutorials/gpu.md
index 4717f2f16..d76456368 100644
--- a/docs/src/tutorials/gpu.md
+++ b/docs/src/tutorials/gpu.md
@@ -27,12 +27,12 @@ GPU offloading is simple as it's done simply by changing the solver algorithm. T
example from the start of the documentation:

```julia
-using LinearSolve
+import LinearSolve as LS

A = rand(4, 4)
b = rand(4)
-prob = LinearProblem(A, b)
-sol = solve(prob)
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob)
sol.u
```

@@ -40,7 +40,7 @@ This computation can be moved to the GPU by the following:

```julia
using CUDA # Add the GPU library
-sol = solve(prob, CudaOffloadFactorization())
+sol = LS.solve(prob, LS.CudaOffloadFactorization())
sol.u
```

@@ -56,8 +56,8 @@ using CUDA
A = rand(4, 4) |> cu
b = rand(4)
-prob = LinearProblem(A, b)
-sol = solve(prob)
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob)
sol.u
```

@@ -81,13 +81,13 @@ move things to CPU on command. However, this change in numerical precision needs
to be accounted for in your mathematics as it could lead to instabilities. To
disable this, use a constructor that is more specific about the bitsize, such as
`CuArray{Float64}(A)`. Additionally, preferring more
-    stable factorization methods, such as `QRFactorization()`, can improve the numerics in
+    stable factorization methods, such as `LS.QRFactorization()`, can improve the numerics in
    such cases.

Similarly to other use cases, you can choose the solver, for example:

```julia
-sol = solve(prob, QRFactorization())
+sol = LS.solve(prob, LS.QRFactorization())
```

## Sparse Matrices on GPUs
@@ -96,10 +96,12 @@ Currently, sparse matrix computations on GPUs are only supported for CUDA. This
the `CUDA.CUSPARSE` sublibrary.

```julia
-using LinearAlgebra, CUDA.CUSPARSE
+import LinearAlgebra as LA
+import SparseArrays as SA
+import CUDA
T = Float32
n = 100
-A_cpu = sprand(T, n, n, 0.05) + I
+A_cpu = SA.sprand(T, n, n, 0.05) + LA.I

x_cpu = zeros(T, n)
b_cpu = rand(T, n)

@@ -112,7 +114,7 @@ In order to solve such problems using a direct method, you must add

```julia
using CUDSS
-sol = solve(prob, LUFactorization())
+sol = LS.solve(prob, LS.LUFactorization())
```

!!! note
@@ -122,13 +124,13 @@ sol = solve(prob, LUFactorization())

Note that `KrylovJL` methods also work with sparse GPU arrays:

```julia
-sol = solve(prob, KrylovJL_GMRES())
+sol = LS.solve(prob, LS.KrylovJL_GMRES())
```

Note that CUSPARSE also has some GPU-based preconditioners, such as a built-in `ilu`. However:

```julia
-sol = solve(prob, KrylovJL_GMRES(precs = (A, p) -> (CUDA.CUSPARSE.ilu02!(A, 'O'), I)))
+sol = LS.solve(prob, LS.KrylovJL_GMRES(precs = (A, p) -> (CUDA.CUSPARSE.ilu02!(A, 'O'), LA.I)))
```

However, right now CUSPARSE is missing the right `ldiv!` implementation for this to work
diff --git a/docs/src/tutorials/linear.md b/docs/src/tutorials/linear.md
index a08bc5dfd..e600db987 100644
--- a/docs/src/tutorials/linear.md
+++ b/docs/src/tutorials/linear.md
@@ -8,16 +8,16 @@
The following defines a `Matrix` and a `LinearProblem` which is subsequently solved
by the default linear solver.

```@example linsys1
-using LinearSolve
+import LinearSolve as LS

A = rand(4, 4)
b = rand(4)
-prob = LinearProblem(A, b)
-sol = solve(prob)
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob)
sol.u
```

-Note that `solve(prob)` is equivalent to `solve(prob,nothing)` where `nothing`
+Note that `LS.solve(prob)` is equivalent to `LS.solve(prob,nothing)` where `nothing`
denotes the choice of the default linear solver. This is equivalent to the
Julia built-in `A\b`, where the solution is recovered via `sol.u`. The power
of this package comes into play when changing the algorithms. For example,
@@ -27,7 +27,7 @@ LinearSolve.jl, there is one interface and changing linear solvers is simply the
switch of the algorithm choice:

```@example linsys1
-sol = solve(prob, KrylovJL_GMRES())
+sol = LS.solve(prob, LS.KrylovJL_GMRES())
sol.u
```

@@ -38,23 +38,24 @@
available solvers, see [the solvers page](@ref linearsystemsolvers)

## Sparse and Structured Matrices

-There is no difference in the interface for using LinearSolve.jl on sparse
+There is no difference in the interface for LinearSolve.jl on sparse
and structured matrices. For example, the following now uses Julia's built-in
[SparseArrays.jl](https://docs.julialang.org/en/v1/stdlib/SparseArrays/)
-to define a sparse matrix (`SparseMatrixCSC`) and solve the system using LinearSolve.jl.
+to define a sparse matrix (`SparseMatrixCSC`) and solve the system with LinearSolve.jl.
Note that `sprand` is a shorthand for quickly creating a sparse random matrix
(see SparseArrays.jl for more details on defining sparse matrices).

```@example linsys1
-using LinearSolve, SparseArrays
+import LinearSolve as LS
+import SparseArrays as SA

-A = sprand(4, 4, 0.75)
+A = SA.sprand(4, 4, 0.75)
b = rand(4)
-prob = LinearProblem(A, b)
-sol = solve(prob)
+prob = LS.LinearProblem(A, b)
+sol = LS.solve(prob)
sol.u

-sol = solve(prob, KrylovJL_GMRES()) # Choosing algorithms is done the same way
+sol = LS.solve(prob, LS.KrylovJL_GMRES()) # Choosing algorithms is done the same way
sol.u
```

@@ -130,8 +131,8 @@ function Afunc!(v,u,p,t)
    w
end

-using SciMLOperators
-mfopA = FunctionOperator(Afunc!, zeros(5), zeros(5))
+import SciMLOperators as SMO
+mfopA = SMO.FunctionOperator(Afunc!, zeros(5), zeros(5))
```

Let's check these are the same:

@@ -146,8 +147,8 @@ we don't have a matrix, we can still solve linear systems defined by this operat

```@example linsys1
b = rand(5)
-prob = LinearProblem(mfopA, b)
-sol = solve(prob)
+prob = LS.LinearProblem(mfopA, b)
+sol = LS.solve(prob)
sol.u
```

@@ -158,6 +159,6 @@ mfopA * sol.u - b
```

!!! note
-    Note that not all methods can use a matrix-free operator. For example, `LUFactorization()` requires a matrix. If you use an
+    Note that not all methods can use a matrix-free operator. For example, `LS.LUFactorization()` requires a matrix. If you use an
    invalid method, you will get an error. The methods particularly from KrylovJL are the ones preferred for these cases (and are defaulted to).
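
+For instance, to explicitly request a Krylov method for the matrix-free operator
+above, the following minimal sketch (reusing `prob` and `mfopA` from this
+tutorial's session) should work:
+
+```julia
+sol = LS.solve(prob, LS.KrylovJL_GMRES())  # needs only the action `mfopA * u`
+sol.u
+```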