# Internal API Documentation

This page documents LinearSolve.jl's internal API. It is aimed at developers who want to understand the package's architecture, contribute to the codebase, or implement custom linear solver algorithms.

## Abstract Type Hierarchy

LinearSolve.jl organizes its linear solver algorithms into a type hierarchy:

```@docs
LinearSolve.SciMLLinearSolveAlgorithm
LinearSolve.AbstractFactorization
LinearSolve.AbstractDenseFactorization
LinearSolve.AbstractSparseFactorization
LinearSolve.AbstractKrylovSubspaceMethod
LinearSolve.AbstractSolveFunction
```

## Core Cache System

The caching system is central to LinearSolve.jl's performance and functionality:

```@docs
LinearSolve.LinearCache
LinearSolve.init_cacheval
```
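
As a quick orientation, the cache is created and reused through the public `init`/`solve!` interface. The sketch below uses a small dense system and an explicit algorithm choice:

```julia
using LinearSolve, LinearAlgebra

# Set up a small symmetric system and initialize a LinearCache.
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
prob = LinearProblem(A, b)
cache = init(prob, LUFactorization())  # cache isa LinearSolve.LinearCache

# solve! stores the factorization in the cache so later solves can reuse it.
sol = solve!(cache)
sol.u  # ≈ A \ b
```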

## Algorithm Selection

The automatic algorithm selection is one of LinearSolve.jl's key features:

```@docs
LinearSolve.defaultalg
LinearSolve.get_tuned_algorithm
LinearSolve.is_algorithm_available
LinearSolve.show_algorithm_choices
LinearSolve.make_preferences_dynamic!
```

### Preference System Architecture

The dual preference system selects the fastest tuned algorithm when it is available and falls back to an always-loaded alternative otherwise:

#### **Core Functions**
- **`get_tuned_algorithm`**: Retrieves tuned algorithm preferences based on matrix size and element type
- **`is_algorithm_available`**: Checks whether a specific algorithm is currently available (i.e., its extensions are loaded)
- **`show_algorithm_choices`**: Analysis function that displays the algorithm choices for all element types
- **`make_preferences_dynamic!`**: Testing function that enables runtime preference checking

#### **Size Categorization**
The system categorizes matrix sizes to match LinearSolveAutotune benchmarking:
- **tiny**: ≤20 elements (matrices ≤10 always override to GenericLU)
- **small**: 21-100 elements
- **medium**: 101-300 elements
- **large**: 301-1000 elements
- **big**: >1000 elements

#### **Dual Preference Structure**
For each category and element type (Float32, Float64, ComplexF32, ComplexF64):
- `best_algorithm_{type}_{size}`: Overall fastest algorithm from autotune
- `best_always_loaded_{type}_{size}`: Fastest always-available algorithm (fallback)

#### **Preference File Organization**
All preference-related functionality is consolidated in `src/preferences.jl`:

**Compile-Time Constants**:
- `AUTOTUNE_PREFS`: Preference structure loaded at package import
- `AUTOTUNE_PREFS_SET`: Fast-path check for whether any preferences are set
- `_string_to_algorithm_choice`: Mapping from preference strings to algorithm enums

**Runtime Functions**:
- `_get_tuned_algorithm_runtime`: Dynamic preference checking for testing
- `_choose_available_algorithm`: Algorithm availability and fallback logic
- `show_algorithm_choices`: Comprehensive analysis and display function

**Testing Infrastructure**:
- `make_preferences_dynamic!`: Eval-based function redefinition for testing
- Enables runtime preference verification without affecting production performance

#### **Testing Mode Operation**
The testing system uses an eval-based approach:
```julia
# Production: uses compile-time constants (maximum performance)
get_tuned_algorithm(Float64, Float64, 200) # → uses AUTOTUNE_PREFS constants

# Testing: redefines the function to use runtime checking
make_preferences_dynamic!()
get_tuned_algorithm(Float64, Float64, 200) # → uses runtime preference loading
```

This approach maintains type stability and inference while enabling comprehensive testing.

#### **Algorithm Support Scope**
The preference system focuses exclusively on LU algorithms for dense matrices:

**Supported LU Algorithms**:
- `LUFactorization`, `GenericLUFactorization`, `RFLUFactorization`
- `MKLLUFactorization`, `AppleAccelerateLUFactorization`
- `SimpleLUFactorization`, `FastLUFactorization` (both map to LU)
- GPU LU variants (CUDA, Metal, AMDGPU - all map to LU)

**Non-LU algorithms** (QR, Cholesky, SVD, etc.) are not included in the preference system
as they serve different use cases and are not typically the focus of dense matrix autotune optimization.

## Trait Functions

These trait functions help determine algorithm capabilities and requirements:

```@docs
LinearSolve.needs_concrete_A
```

## Utility Functions

Various utility functions support the core functionality:

```@docs
LinearSolve.default_tol
LinearSolve.default_alias_A
LinearSolve.default_alias_b
LinearSolve.__init_u0_from_Ab
```

## Solve Functions

For custom solving strategies:

```@docs
LinearSolve.LinearSolveFunction
LinearSolve.DirectLdiv!
```
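
A minimal custom solver wrapped in `LinearSolveFunction` follows the function signature from the LinearSolve.jl manual; the naive `A \ b` body here is just a placeholder for a structure-exploiting solve:

```julia
using LinearSolve, LinearAlgebra

# The custom function receives the cached data and returns the solution u.
function my_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; kwargs...)
    u = A \ b  # placeholder: a real solver would exploit the structure of A
    return u
end

prob = LinearProblem(Diagonal([1.0, 2.0, 4.0]), ones(3))
sol = solve(prob, LinearSolveFunction(my_linsolve))
sol.u  # → [1.0, 0.5, 0.25]
```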

## Preconditioner Infrastructure

The preconditioner system allows for flexible preconditioning strategies:

```@docs
LinearSolve.ComposePreconditioner
LinearSolve.InvPreconditioner
```
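
`ComposePreconditioner` chains two preconditioners internally; at the user level, preconditioners enter through the `Pl`/`Pr` keywords of `init`/`solve`. A sketch with a simple diagonal (Jacobi-style) left preconditioner:

```julia
using LinearSolve, LinearAlgebra

A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
prob = LinearProblem(A, b)

# Any object supporting `ldiv!` can serve as a preconditioner; Diagonal does.
Pl = Diagonal(A)  # uses the diagonal of A
sol = solve(prob, KrylovJL_GMRES(), Pl = Pl)
```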

## Internal Algorithm Types

These are internal algorithm implementations:

```@docs
LinearSolve.SimpleLUFactorization
LinearSolve.LUSolver
```

## Developer Notes

### Adding New Algorithms

When adding a new linear solver algorithm to LinearSolve.jl:

1. **Choose the appropriate abstract type**: Inherit from the most specific abstract type that fits your algorithm
2. **Implement required methods**: At minimum, implement `solve!` and possibly `init_cacheval`
3. **Consider trait functions**: Override trait functions like `needs_concrete_A` if needed
4. **Document thoroughly**: Add comprehensive docstrings following the patterns shown here
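
The steps above can be sketched as a skeleton. `MyFactorization` is a hypothetical algorithm name, and the `solve!`/`init_cacheval` bodies are intentionally omitted; consult an existing algorithm in the package for the full internal signatures:

```julia
using LinearSolve

# 1. Inherit from the most specific abstract type that fits (dense factorization here).
struct MyFactorization <: LinearSolve.AbstractDenseFactorization end

# 3. Override trait functions as needed; factorizations require a concrete matrix.
LinearSolve.needs_concrete_A(::MyFactorization) = true

# 2. Implement `init_cacheval` and `solve!` for the new algorithm
#    (bodies omitted; mirror an existing factorization such as LUFactorization).
```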

### Performance Considerations

- The `LinearCache` system is designed for efficient repeated solves
- Use `cache.isfresh` to avoid redundant computations when the matrix hasn't changed
- Consider implementing specialized `init_cacheval` for algorithms that need setup
- Leverage trait functions to optimize dispatch and memory usage
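
The cache-reuse pattern these points describe, using the public caching interface (updating `b` reuses the stored factorization, while assigning a new `A` marks the cache fresh so the next `solve!` refactorizes):

```julia
using LinearSolve, LinearAlgebra

A = [4.0 1.0; 1.0 3.0]
cache = init(LinearProblem(A, [1.0, 2.0]), LUFactorization())
solve!(cache)                  # factorizes A once

cache.b = [2.0, 1.0]           # new right-hand side: reuses the factorization
solve!(cache)

cache.A = [5.0 2.0; 2.0 4.0]   # new matrix: flags isfresh, refactorizes on next solve!
sol = solve!(cache)
```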

### Testing Guidelines

When adding new functionality:

- Test with various matrix types (dense, sparse, GPU arrays)
- Verify caching behavior works correctly
- Ensure trait functions return appropriate values
- Test integration with the automatic algorithm selection system