
Commit 7c7f405

More updates to the Getting Started page
@WalterMadelim did some edits, did this cover all the questions you had?
1 parent f26a4df commit 7c7f405

1 file changed (+28 -9 lines)


docs/src/tutorials/linear.md

Lines changed: 28 additions & 9 deletions
````diff
@@ -2,8 +2,9 @@
 
 A linear system $$Au=b$$ is specified by defining an `AbstractMatrix` or `AbstractSciMLOperator`.
 For the sake of simplicity, this tutorial will start by only showcasing concrete matrices.
+And specifically, we will start by using the basic Julia `Matrix` type.
 
-The following defines a matrix and a `LinearProblem` which is subsequently solved
+The following defines a `Matrix` and a `LinearProblem` which is subsequently solved
 by the default linear solver.
 
 ```@example linsys1
````
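The body of the `@example linsys1` block is not shown in this hunk; as a rough sketch of the setup the new text describes (the matrix, right-hand side, and sizes below are illustrative, not taken from the tutorial):

```julia
using LinearSolve

# Build a plain Julia `Matrix` and a dense right-hand side (illustrative values).
A = rand(4, 4)
b = rand(4)

# Wrap them in a `LinearProblem` and solve with the default linear solver choice.
prob = LinearProblem(A, b)
sol = solve(prob)
sol.u  # solution vector `u` with A * u ≈ b
```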
````diff
@@ -57,15 +58,33 @@ sol = solve(prob, KrylovJL_GMRES()) # Choosing algorithms is done the same way
 sol.u
 ```
 
-Similerly structure matrix types, like banded matrices, can be provided using special matrix
+Similarly, structured matrix types, like banded matrices, can be provided using special matrix
 types. While any `AbstractMatrix` type should be compatible via the general Julia interfaces,
-LinearSolve.jl specifically tests with the following cases:
-
-  * [BandedMatrices.jl](https://github.com/JuliaLinearAlgebra/BandedMatrices.jl)
-  * [BlockDiagonals.jl](https://github.com/JuliaArrays/BlockDiagonals.jl)
-  * [CUDA.jl](https://cuda.juliagpu.org/stable/) (CUDA GPU-based dense and sparse matrices)
-  * [FastAlmostBandedMatrices.jl](https://github.com/SciML/FastAlmostBandedMatrices.jl)
-  * [Metal.jl](https://metal.juliagpu.org/stable/) (Apple M-series GPU-based dense matrices)
+LinearSolve.jl specifically tests with the following cases:
+
+  * [LinearAlgebra.jl](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/)
+      * Symmetric
+      * Hermitian
+      * UpperTriangular
+      * UnitUpperTriangular
+      * LowerTriangular
+      * UnitLowerTriangular
+      * SymTridiagonal
+      * Tridiagonal
+      * Bidiagonal
+      * Diagonal
+  * [BandedMatrices.jl](https://github.com/JuliaLinearAlgebra/BandedMatrices.jl) `BandedMatrix`
+  * [BlockDiagonals.jl](https://github.com/JuliaArrays/BlockDiagonals.jl) `BlockDiagonal`
+  * [CUDA.jl](https://cuda.juliagpu.org/stable/) (CUDA GPU-based dense and sparse matrices) `CuArray` (`GPUArray`)
+  * [FastAlmostBandedMatrices.jl](https://github.com/SciML/FastAlmostBandedMatrices.jl) `FastAlmostBandedMatrix`
+  * [Metal.jl](https://metal.juliagpu.org/stable/) (Apple M-series GPU-based dense matrices) `MetalArray`
+
+!!! note
+
+    Choosing the most specific matrix structure that matches your system will give you the best performance.
+    Thus, if your matrix is symmetric, building it with `Symmetric(A)` will be faster than simply using `A`,
+    and will generally lead to better automatic linear solver choices. Note that you can also choose the type for `b`,
+    but generally a dense vector will be the fastest here, and many solvers will not support a sparse `b`.
 
 ## Using Matrix-Free Operators
 
````
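For the added note on structured matrices, a minimal sketch of the advice, assuming only the `LinearAlgebra` standard library and the `LinearProblem`/`solve` interface shown above (values are illustrative):

```julia
using LinearAlgebra, LinearSolve

# An illustrative symmetric system: wrapping in `Symmetric` declares the structure
# explicitly instead of passing the raw `Matrix`.
A = rand(4, 4)
Asym = Symmetric(A + A')
b = rand(4)  # a dense right-hand side, as the note recommends

prob = LinearProblem(Asym, b)
sol = solve(prob)
sol.u
```

Because `Symmetric` is among the wrapper types listed as tested, declaring the structure this way lets the automatic linear solver selection take it into account, as the note states.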