
Commit 013d932

Add comprehensive documentation for internal APIs and algorithm selection

- Create new Internal API documentation page with complete docstring coverage
  * Document abstract type hierarchy with detailed explanations
  * Add LinearCache and caching system documentation
  * Include trait functions and utility function docs
  * Cover solve functions and preconditioner infrastructure
- Add Algorithm Selection Guide for users
  * Explain automatic algorithm selection logic with examples
  * Provide performance guidance for different matrix types
  * Include decision flowchart and manual override examples
  * Cover dense, sparse, GPU, and iterative method selection
- Enhance existing solver documentation
  * Add DirectLdiv! and LinearSolveFunction to solver list
  * Update documentation navigation structure
- Update documentation navigation
  * Add internal_api.md to Advanced section
  * Add algorithm_selection.md to Basics section

This significantly improves both user and developer documentation by making the package's architecture and algorithm selection transparent and accessible.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
1 parent c303ed2 commit 013d932

File tree

- docs/pages.jl
- docs/src/advanced/internal_api.md
- docs/src/basics/algorithm_selection.md
- docs/src/solvers/solvers.md

4 files changed: +275 −2 lines

docs/pages.jl

Lines changed: 4 additions & 2 deletions

@@ -8,12 +8,14 @@ pages = ["index.md",
         "tutorials/gpu.md",
         "tutorials/autotune.md"],
     "Basics" => Any["basics/LinearProblem.md",
+        "basics/algorithm_selection.md",
         "basics/common_solver_opts.md",
         "basics/OperatorAssumptions.md",
         "basics/Preconditioners.md",
         "basics/FAQ.md"],
     "Solvers" => Any["solvers/solvers.md"],
-    "Advanced" => Any["advanced/developing.md"
-        "advanced/custom.md"],
+    "Advanced" => Any["advanced/developing.md",
+        "advanced/custom.md",
+        "advanced/internal_api.md"],
     "Release Notes" => "release_notes.md"
 ]

docs/src/advanced/internal_api.md

Lines changed: 106 additions & 0 deletions

@@ -0,0 +1,106 @@
# Internal API Documentation

This page documents LinearSolve.jl's internal API, which is useful for developers who want to understand the package's architecture, contribute to the codebase, or develop custom linear solver algorithms.

## Abstract Type Hierarchy

LinearSolve.jl uses a well-structured type hierarchy to organize different classes of linear solver algorithms:

```@docs
LinearSolve.SciMLLinearSolveAlgorithm
LinearSolve.AbstractFactorization
LinearSolve.AbstractDenseFactorization
LinearSolve.AbstractSparseFactorization
LinearSolve.AbstractKrylovSubspaceMethod
LinearSolve.AbstractSolveFunction
```

## Core Cache System

The caching system is central to LinearSolve.jl's performance and functionality:

```@docs
LinearSolve.LinearCache
LinearSolve.init_cacheval
```
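
For example, the public caching interface built on `LinearCache` lets you reuse a factorization across solves (a small sketch; the sizes are arbitrary):

```julia
using LinearSolve

A = rand(100, 100)
b = rand(100)
prob = LinearProblem(A, b)

cache = init(prob, LUFactorization())
sol1 = solve!(cache)        # factorizes A, then solves

cache.b = rand(100)         # new right-hand side: reuses the existing factorization
sol2 = solve!(cache)

cache.A = rand(100, 100)    # new matrix: marks the cache stale (`isfresh`), so the
sol3 = solve!(cache)        # next solve! refactorizes
```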

## Algorithm Selection

The automatic algorithm selection is one of LinearSolve.jl's key features:

```@docs
LinearSolve.defaultalg
```

## Trait Functions

These trait functions help determine algorithm capabilities and requirements:

```@docs
LinearSolve.needs_concrete_A
```
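
Roughly speaking, factorization algorithms need the matrix itself, while Krylov methods only need its action on vectors. A sketch of how this trait reads (the return values reflect the usual defaults and are illustrative, not a cross-version guarantee):

```julia
using LinearSolve

LinearSolve.needs_concrete_A(LUFactorization())  # true: requires a materialized matrix
LinearSolve.needs_concrete_A(KrylovJL_GMRES())   # false: matrix-vector products suffice
```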

## Utility Functions

Various utility functions support the core functionality:

```@docs
LinearSolve.default_tol
LinearSolve.default_alias_A
LinearSolve.default_alias_b
LinearSolve.__init_u0_from_Ab
```

## Solve Functions

For custom solving strategies:

```@docs
LinearSolve.LinearSolveFunction
LinearSolve.DirectLdiv!
```

## Preconditioner Infrastructure

The preconditioner system allows for flexible preconditioning strategies:

```@docs
LinearSolve.ComposePreconditioner
LinearSolve.InvPreconditioner
```
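
From the user side, preconditioners enter through the `Pl`/`Pr` keyword arguments, and types like `ComposePreconditioner` combine them internally. A minimal sketch, assuming a simple diagonal (Jacobi) left preconditioner:

```julia
using LinearSolve, LinearAlgebra

A = rand(100, 100) + 50I     # shifted to be comfortably well-conditioned
b = rand(100)
prob = LinearProblem(A, b)

Pl = Diagonal(A)             # Jacobi preconditioner: just the diagonal of A
sol = solve(prob, KrylovJL_GMRES(); Pl = Pl)
```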

## Internal Algorithm Types

These are internal algorithm implementations:

```@docs
LinearSolve.SimpleLUFactorization
LinearSolve.LUSolver
```

## Developer Notes

### Adding New Algorithms

When adding a new linear solver algorithm to LinearSolve.jl:

1. **Choose the appropriate abstract type**: Inherit from the most specific abstract type that fits your algorithm
2. **Implement required methods**: At minimum, implement `solve!` and possibly `init_cacheval` (a sketch follows this list)
3. **Consider trait functions**: Override trait functions like `needs_concrete_A` if needed
4. **Document thoroughly**: Add comprehensive docstrings following the patterns shown here
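
A condensed sketch of these steps (the algorithm name is hypothetical, and the exact `init_cacheval` signature should be checked against the current source for your version):

```julia
using LinearSolve, LinearAlgebra
import SciMLBase

# Hypothetical dense LU-style algorithm, for illustration only.
struct MyLUAlg <: LinearSolve.AbstractDenseFactorization end

function LinearSolve.init_cacheval(alg::MyLUAlg, A, b, u, Pl, Pr,
        maxiters, abstol, reltol, verbose, assumptions)
    lu(convert(AbstractMatrix, A))   # initial factorization stored as the cacheval
end

function SciMLBase.solve!(cache::LinearSolve.LinearCache, alg::MyLUAlg; kwargs...)
    if cache.isfresh                 # A changed since the last factorization
        cache.cacheval = lu(convert(AbstractMatrix, cache.A))
        cache.isfresh = false
    end
    y = ldiv!(cache.u, cache.cacheval, cache.b)
    SciMLBase.build_linear_solution(alg, y, nothing, cache)
end

sol = solve(LinearProblem(rand(4, 4), rand(4)), MyLUAlg())
```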

### Performance Considerations

- The `LinearCache` system is designed for efficient repeated solves
- Use `cache.isfresh` to avoid redundant computations when the matrix hasn't changed
- Consider implementing specialized `init_cacheval` for algorithms that need setup
- Leverage trait functions to optimize dispatch and memory usage

### Testing Guidelines

When adding new functionality:

- Test with various matrix types (dense, sparse, GPU arrays)
- Verify caching behavior works correctly
- Ensure trait functions return appropriate values
- Test integration with the automatic algorithm selection system
docs/src/basics/algorithm_selection.md

Lines changed: 163 additions & 0 deletions

@@ -0,0 +1,163 @@
# Algorithm Selection Guide

LinearSolve.jl automatically selects appropriate algorithms based on your problem characteristics, but understanding how this works can help you make better choices for your specific use case.

## Automatic Algorithm Selection

When you call `solve(prob)` without specifying an algorithm, LinearSolve.jl uses intelligent heuristics to choose the best solver:

```julia
using LinearSolve

# LinearSolve.jl automatically chooses the best algorithm
A = rand(100, 100)
b = rand(100)
prob = LinearProblem(A, b)
sol = solve(prob) # Automatic algorithm selection
```

The selection process considers:

- **Matrix type**: Dense vs. sparse vs. structured matrices
- **Matrix properties**: Square vs. rectangular, symmetric, positive definite
- **Size**: Small vs. large matrices for performance optimization
- **Hardware**: CPU vs. GPU arrays
- **Conditioning**: Well-conditioned vs. ill-conditioned systems
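
If you want to see which algorithm the heuristics would pick, you can query the internal entry point directly (a hedged sketch: `LinearSolve.defaultalg` is internal, so its signature and return type may change between versions):

```julia
using LinearSolve, SparseArrays

A = sprand(1000, 1000, 0.01)
b = rand(1000)

# Ask the heuristic what it would choose for this A and b (assuming a square system).
alg = LinearSolve.defaultalg(A, b, LinearSolve.OperatorAssumptions(true))
```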

## Algorithm Categories

LinearSolve.jl organizes algorithms into several categories:

### Factorization Methods

These algorithms decompose your matrix into simpler components:

- **Dense factorizations**: Best for matrices without special sparsity structure
  - `LUFactorization()`: General-purpose, good balance of speed and stability
  - `QRFactorization()`: More stable for ill-conditioned problems
  - `CholeskyFactorization()`: Fastest for symmetric positive definite matrices

- **Sparse factorizations**: Optimized for matrices with many zeros
  - `UMFPACKFactorization()`: General sparse LU with good fill-in control
  - `KLUFactorization()`: Optimized for circuit simulation problems

### Iterative Methods

These solve the system iteratively without explicit factorization (a usage sketch follows this list):

- **Krylov methods**: Memory-efficient for large sparse systems
  - `KrylovJL_GMRES()`: General-purpose iterative solver
  - `KrylovJL_CG()`: For symmetric positive definite systems
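
A small usage sketch for a large sparse system (the size and the diagonal shift are arbitrary choices that make convergence easy):

```julia
using LinearSolve, SparseArrays, LinearAlgebra

n = 10_000
A = sprand(n, n, 1e-4) + 10I   # diagonally dominant, so GMRES converges quickly
b = rand(n)
prob = LinearProblem(A, b)

sol = solve(prob, KrylovJL_GMRES(); abstol = 1e-8, reltol = 1e-8)
```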

### Direct Methods

Simple direct approaches (see the example after this list):

- `DirectLdiv!()`: Uses Julia's built-in `\` operator
- `DiagonalFactorization()`: Optimized for diagonal matrices
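
For example, `DirectLdiv!` is a natural fit when `\` already dispatches to a specialized routine, as with tridiagonal matrices (per the flowchart below, this path is preferred on Julia 1.11+):

```julia
using LinearSolve, LinearAlgebra

# Julia's `\` uses a specialized banded solver for Tridiagonal matrices.
A = Tridiagonal(rand(99), rand(100) .+ 2, rand(99))
b = rand(100)
sol = solve(LinearProblem(A, b), DirectLdiv!())
```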

## Performance Characteristics

### Dense Matrices

For dense matrices, algorithm choice depends on size and conditioning:

```julia
# Small matrices (< 100×100): SimpleLUFactorization often fastest
A_small = rand(50, 50)
sol = solve(LinearProblem(A_small, rand(50)), SimpleLUFactorization())

# Medium matrices (100–500): RFLUFactorization often optimal
A_medium = rand(200, 200)
sol = solve(LinearProblem(A_medium, rand(200)), RFLUFactorization())

# Large matrices (> 500×500): MKLLUFactorization or AppleAccelerate
A_large = rand(1000, 1000)
sol = solve(LinearProblem(A_large, rand(1000)), MKLLUFactorization())
```

### Sparse Matrices

For sparse matrices, structure matters:

```julia
using LinearSolve, SparseArrays

# General sparse matrices
A_sparse = sprand(1000, 1000, 0.01)
sol = solve(LinearProblem(A_sparse, rand(1000)), UMFPACKFactorization())

# Structured sparse (e.g., from discretized PDEs):
# KLUFactorization is often better for circuit-like problems
```

### GPU Acceleration

For very large problems, GPU offloading can be beneficial:

```julia
# Requires CUDA.jl
# A_gpu = CuArray(rand(Float32, 2000, 2000))
# sol = solve(LinearProblem(A_gpu, CuArray(rand(Float32, 2000))),
#             CudaOffloadLUFactorization())
```

## When to Override Automatic Selection

You might want to manually specify an algorithm when:

1. **You know your problem structure**: E.g., you know your matrix is positive definite

   ```julia
   sol = solve(prob, CholeskyFactorization()) # Faster for SPD matrices
   ```

2. **You need maximum stability**: For ill-conditioned problems

   ```julia
   sol = solve(prob, QRFactorization()) # More numerically stable
   ```

3. **You're doing many solves**: Factorization methods amortize cost over multiple solves

   ```julia
   cache = init(prob, LUFactorization())
   for i in 1:1000
       cache.b = new_rhs[i]  # new_rhs: a pre-built collection of right-hand sides
       sol = solve!(cache)
   end
   ```

4. **Memory constraints**: Iterative methods use less memory

   ```julia
   sol = solve(prob, KrylovJL_GMRES()) # Lower memory usage
   ```
## Algorithm Selection Flowchart

The automatic selection roughly follows this logic:

```
Is A diagonal? → DiagonalFactorization
Is A tridiagonal/bidiagonal? → DirectLdiv! (Julia 1.11+) or LUFactorization
Is A symmetric positive definite? → CholeskyFactorization
Is A symmetric indefinite? → BunchKaufmanFactorization
Is A sparse? → UMFPACKFactorization or KLUFactorization
Is A small dense? → RFLUFactorization or SimpleLUFactorization
Is A large dense? → MKLLUFactorization or AppleAccelerateLUFactorization
Is A a GPU array? → QRFactorization or LUFactorization
Is A an operator/function? → KrylovJL_GMRES
Is the system overdetermined? → QRFactorization or KrylovJL_LSMR
```

## Custom Functions

For specialized algorithms not covered by the built-in solvers:

```julia
function my_custom_solver(A, b, u, p, isfresh, Pl, Pr, cacheval; kwargs...)
    # Your custom solving logic here
    return A \ b # Simple example
end

sol = solve(prob, LinearSolveFunction(my_custom_solver))
```

See the [Custom Linear Solvers](@ref custom) section for more details.

docs/src/solvers/solvers.md

Lines changed: 2 additions & 0 deletions

@@ -135,6 +135,8 @@ LinearSolve.jl contains some linear solvers built in for specialized cases.
 SimpleLUFactorization
 DiagonalFactorization
 SimpleGMRES
+DirectLdiv!
+LinearSolveFunction
 ```

 ### FastLapackInterface.jl
