6 changes: 4 additions & 2 deletions docs/pages.jl
@@ -8,12 +8,14 @@ pages = ["index.md",
         "tutorials/gpu.md",
         "tutorials/autotune.md"],
     "Basics" => Any["basics/LinearProblem.md",
+        "basics/algorithm_selection.md",
         "basics/common_solver_opts.md",
         "basics/OperatorAssumptions.md",
         "basics/Preconditioners.md",
         "basics/FAQ.md"],
     "Solvers" => Any["solvers/solvers.md"],
-    "Advanced" => Any["advanced/developing.md"
-        "advanced/custom.md"],
+    "Advanced" => Any["advanced/developing.md",
+        "advanced/custom.md",
+        "advanced/internal_api.md"],
     "Release Notes" => "release_notes.md"
 ]
106 changes: 106 additions & 0 deletions docs/src/advanced/internal_api.md
@@ -0,0 +1,106 @@
# Internal API Documentation

This page documents LinearSolve.jl's internal API, which is useful for developers who want to understand the package's architecture, contribute to the codebase, or develop custom linear solver algorithms.

## Abstract Type Hierarchy

LinearSolve.jl uses a well-structured type hierarchy to organize different classes of linear solver algorithms:

```@docs
LinearSolve.SciMLLinearSolveAlgorithm
LinearSolve.AbstractFactorization
LinearSolve.AbstractDenseFactorization
LinearSolve.AbstractSparseFactorization
LinearSolve.AbstractKrylovSubspaceMethod
LinearSolve.AbstractSolveFunction
```

## Core Cache System

The caching system is central to LinearSolve.jl's performance and functionality:

```@docs
LinearSolve.LinearCache
LinearSolve.init_cacheval
```
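
As a usage sketch (public `init`/`solve!` interface only; the matrix size is arbitrary), the cache lets repeated solves reuse a single factorization:

```julia
using LinearSolve

A = rand(100, 100)
b1 = rand(100); b2 = rand(100)

cache = init(LinearProblem(A, b1), LUFactorization())
sol1 = solve!(cache)   # factorizes A, then solves
cache.b = b2           # updating b does not invalidate the factorization
sol2 = solve!(cache)   # only the triangular solves are redone
```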

## Algorithm Selection

The automatic algorithm selection is one of LinearSolve.jl's key features:

```@docs
LinearSolve.defaultalg
```

## Trait Functions

These trait functions help determine algorithm capabilities and requirements:

```@docs
LinearSolve.needs_concrete_A
```
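
For example, factorization methods operate on a concrete matrix, while Krylov methods only need an operator supporting `mul!`. The return values below are the expected defaults, shown here as an assumption to check against the source:

```julia
using LinearSolve

LinearSolve.needs_concrete_A(LUFactorization())   # expected: true
LinearSolve.needs_concrete_A(KrylovJL_GMRES())    # expected: false
```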

## Utility Functions

Various utility functions support the core functionality:

```@docs
LinearSolve.default_tol
LinearSolve.default_alias_A
LinearSolve.default_alias_b
LinearSolve.__init_u0_from_Ab
```

## Solve Functions

For custom solving strategies:

```@docs
LinearSolve.LinearSolveFunction
LinearSolve.DirectLdiv!
```
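
A minimal sketch of `DirectLdiv!` in action (the `Diagonal` matrix is just an example of a type that already has a fast `ldiv!`):

```julia
using LinearSolve, LinearAlgebra

A = Diagonal(rand(4) .+ 1)   # strictly positive diagonal, hence invertible
b = rand(4)
sol = solve(LinearProblem(A, b), DirectLdiv!())
```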

## Preconditioner Infrastructure

The preconditioner system allows for flexible preconditioning strategies:

```@docs
LinearSolve.ComposePreconditioner
LinearSolve.InvPreconditioner
```

## Internal Algorithm Types

These are internal algorithm implementations:

```@docs
LinearSolve.SimpleLUFactorization
LinearSolve.LUSolver
```

## Developer Notes

### Adding New Algorithms

When adding a new linear solver algorithm to LinearSolve.jl:

1. **Choose the appropriate abstract type**: Inherit from the most specific abstract type that fits your algorithm
2. **Implement required methods**: At minimum, implement `solve!` and possibly `init_cacheval` (see the sketch after this list)
3. **Consider trait functions**: Override trait functions like `needs_concrete_A` if needed
4. **Document thoroughly**: Add comprehensive docstrings following the patterns shown here
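
Below is a minimal sketch of these steps. It follows the patterns used in the LinearSolve.jl sources, but internal signatures such as that of `init_cacheval` can change between versions, so treat the details as illustrative rather than normative:

```julia
using LinearSolve, LinearAlgebra
import SciMLBase

# A toy algorithm that delegates to Julia's generic `\` (step 1: subtype the
# most specific abstract type that fits; here the generic root type).
struct BackslashAlgorithm <: LinearSolve.SciMLLinearSolveAlgorithm end

# Step 3: this algorithm needs the concrete matrix, not just an operator.
LinearSolve.needs_concrete_A(::BackslashAlgorithm) = true

# Step 2a: no setup work is needed, so the cache value is trivial.
# NOTE: the argument list shown here may differ across versions.
function LinearSolve.init_cacheval(::BackslashAlgorithm, A, b, u, Pl, Pr,
        maxiters, abstol, reltol, verbose, assumptions)
    nothing
end

# Step 2b: the actual solve.
function SciMLBase.solve!(cache::LinearSolve.LinearCache, alg::BackslashAlgorithm;
        kwargs...)
    u = cache.A \ cache.b
    return SciMLBase.build_linear_solution(alg, u, nothing, cache)
end

A = rand(4, 4) + 4I   # the shift keeps the random matrix well-conditioned
b = rand(4)
sol = solve(LinearProblem(A, b), BackslashAlgorithm())
```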

### Performance Considerations

- The `LinearCache` system is designed for efficient repeated solves
- Use `cache.isfresh` to avoid redundant factorizations when the matrix hasn't changed (illustrated below)
- Consider implementing specialized `init_cacheval` for algorithms that need setup
- Leverage trait functions to optimize dispatch and memory usage
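
As an illustration of the `isfresh` pattern, here is a hypothetical factorization-style `solve!` (paired with an `init_cacheval` method as in the sketch above). The direct field updates mirror what the built-in methods do, but verify against the current source:

```julia
using LinearSolve, LinearAlgebra
import SciMLBase

struct CachedLUAlgorithm <: LinearSolve.AbstractDenseFactorization end

function SciMLBase.solve!(cache::LinearSolve.LinearCache, alg::CachedLUAlgorithm;
        kwargs...)
    if cache.isfresh                   # A was replaced since the last factorization
        cache.cacheval = lu(cache.A)   # store the new factorization
        cache.isfresh = false
    end
    u = cache.cacheval \ cache.b       # repeat solves reuse the factorization
    return SciMLBase.build_linear_solution(alg, u, nothing, cache)
end
```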

### Testing Guidelines

When adding new functionality:

- Test with various matrix types (dense, sparse, GPU arrays)
- Verify caching behavior works correctly
- Ensure trait functions return appropriate values
- Test integration with the automatic algorithm selection system
163 changes: 163 additions & 0 deletions docs/src/basics/algorithm_selection.md
@@ -0,0 +1,163 @@
# Algorithm Selection Guide

LinearSolve.jl automatically selects appropriate algorithms based on your problem characteristics, but understanding how this works can help you make better choices for your specific use case.

## Automatic Algorithm Selection

When you call `solve(prob)` without specifying an algorithm, LinearSolve.jl uses heuristics to choose a suitable solver:

```julia
using LinearSolve

# LinearSolve.jl chooses a suitable algorithm automatically
A = rand(100, 100)
b = rand(100)
prob = LinearProblem(A, b)
sol = solve(prob) # Automatic algorithm selection
```

The selection process considers:

- **Matrix type**: Dense vs. sparse vs. structured matrices
- **Matrix properties**: Square vs. rectangular, symmetric, positive definite
- **Size**: Small vs. large matrices for performance optimization
- **Hardware**: CPU vs. GPU arrays
- **Conditioning**: Well-conditioned vs. ill-conditioned systems

## Algorithm Categories

LinearSolve.jl organizes algorithms into several categories:

### Factorization Methods

These algorithms decompose your matrix into simpler components:

- **Dense factorizations**: Best for matrices without special sparsity structure
- `LUFactorization()`: General-purpose, good balance of speed and stability
- `QRFactorization()`: More stable for ill-conditioned problems
- `CholeskyFactorization()`: Fastest for symmetric positive definite matrices

- **Sparse factorizations**: Optimized for matrices with many zeros
- `UMFPACKFactorization()`: General sparse LU with good fill-in control
- `KLUFactorization()`: Optimized for circuit simulation problems

### Iterative Methods

These solve the system iteratively without explicit factorization:

- **Krylov methods**: Memory-efficient for large sparse systems
- `KrylovJL_GMRES()`: General-purpose iterative solver (see the example below)
- `KrylovJL_CG()`: For symmetric positive definite systems
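
For instance, a large sparse system such as a 1-D discrete Laplacian can be solved without ever forming a factorization (tolerances here are illustrative):

```julia
using LinearSolve, SparseArrays

n = 10_000
# 1-D discrete Laplacian: sparse, tridiagonal, symmetric positive definite
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))
b = rand(n)

sol = solve(LinearProblem(A, b), KrylovJL_GMRES(); abstol = 1e-8, reltol = 1e-8)
# Since this A is SPD, KrylovJL_CG() would typically converge faster
```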

### Direct Methods

Simple direct approaches:

- `DirectLdiv!()`: Uses Julia's built-in `\` operator
- `DiagonalFactorization()`: Optimized for diagonal matrices (sketched below)
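
A quick sketch of the diagonal case (values are arbitrary):

```julia
using LinearSolve, LinearAlgebra

D = Diagonal(rand(100) .+ 1)   # strictly positive diagonal, hence invertible
b = rand(100)
sol = solve(LinearProblem(D, b), DiagonalFactorization())
```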

## Performance Characteristics

### Dense Matrices

For dense matrices, algorithm choice depends on size and conditioning:

```julia
# Small matrices (< 100×100): SimpleLUFactorization often fastest
A_small = rand(50, 50)
sol = solve(LinearProblem(A_small, rand(50)), SimpleLUFactorization())

# Medium matrices (100×100 to 500×500): RFLUFactorization is often optimal
# (may require `using RecursiveFactorization` on recent LinearSolve.jl versions)
A_medium = rand(200, 200)
sol = solve(LinearProblem(A_medium, rand(200)), RFLUFactorization())

# Large matrices (> 500×500): MKLLUFactorization or AppleAccelerateLUFactorization
A_large = rand(1000, 1000)
sol = solve(LinearProblem(A_large, rand(1000)), MKLLUFactorization())
```

### Sparse Matrices

For sparse matrices, structure matters:

```julia
using LinearSolve, SparseArrays, LinearAlgebra

# General sparse matrices (adding `I` keeps the random example safely nonsingular)
A_sparse = sprand(1000, 1000, 0.01) + I
sol = solve(LinearProblem(A_sparse, rand(1000)), UMFPACKFactorization())

# Structured sparse (e.g., from discretized PDEs)
# KLUFactorization often better for circuit-like problems
```

### GPU Acceleration

For very large problems, GPU offloading can be beneficial:

```julia
# Requires CUDA.jl
# A_gpu = CuArray(rand(Float32, 2000, 2000))
# sol = solve(LinearProblem(A_gpu, CuArray(rand(Float32, 2000))),
# CudaOffloadLUFactorization())
```

## When to Override Automatic Selection

You might want to manually specify an algorithm when:

1. **You know your problem structure**: E.g., you know your matrix is positive definite
```julia
sol = solve(prob, CholeskyFactorization()) # Faster for SPD matrices
```

2. **You need maximum stability**: For ill-conditioned problems
```julia
sol = solve(prob, QRFactorization()) # More numerically stable
```

3. **You're doing many solves**: Factorization methods amortize cost over multiple solves
```julia
cache = init(prob, LUFactorization())   # `prob` from the first example above
new_rhs = [rand(100) for _ in 1:1000]   # e.g. a stream of right-hand sides
for i in 1:1000
    cache.b = new_rhs[i]
    sol = solve!(cache)
end
```

4. **Memory constraints**: Iterative methods use less memory
```julia
sol = solve(prob, KrylovJL_GMRES()) # Lower memory usage
```

## Algorithm Selection Flowchart

The automatic selection roughly follows this logic:

```
Is A diagonal? → DiagonalFactorization
Is A tridiagonal/bidiagonal? → DirectLdiv! (Julia 1.11+) or LUFactorization
Is A symmetric positive definite? → CholeskyFactorization
Is A symmetric indefinite? → BunchKaufmanFactorization
Is A sparse? → UMFPACKFactorization or KLUFactorization
Is A small dense? → RFLUFactorization or SimpleLUFactorization
Is A large dense? → MKLLUFactorization or AppleAccelerateLUFactorization
Is A GPU array? → QRFactorization or LUFactorization
Is A an operator/function? → KrylovJL_GMRES
Is the system overdetermined? → QRFactorization or KrylovJL_LSMR
```
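
The operator branch can be exercised with SciMLOperators. The sketch below wraps a plain matrix in `MatrixOperator` purely for illustration; a real matrix-free application would define the operator's action directly:

```julia
using LinearSolve, LinearAlgebra
using SciMLOperators

n = 50
A_op = MatrixOperator(rand(n, n) + n * I)   # stands in for a matrix-free operator
b = rand(n)

# GMRES only needs the operator's `mul!`, never a concrete matrix
sol = solve(LinearProblem(A_op, b), KrylovJL_GMRES())
```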

## Custom Functions

For specialized algorithms not covered by the built-in solvers:

```julia
function my_custom_solver(A, b, u, p, isfresh, Pl, Pr, cacheval; kwargs...)
# Your custom solving logic here
return A \ b # Simple example
end

sol = solve(prob, LinearSolveFunction(my_custom_solver))
```

See the [Custom Linear Solvers](@ref custom) section for more details.
2 changes: 2 additions & 0 deletions docs/src/solvers/solvers.md
@@ -135,6 +135,8 @@ LinearSolve.jl contains some linear solvers built in for specialized cases.
 SimpleLUFactorization
 DiagonalFactorization
 SimpleGMRES
+DirectLdiv!
+LinearSolveFunction
 ```

### FastLapackInterface.jl