# LinearSolveAutotune.jl

Automatic benchmarking and tuning for LinearSolve.jl algorithms.

## Quick Start

```julia
using LinearSolve, LinearSolveAutotune

# Run benchmarks with default settings (small, medium, and large sizes)
results = autotune_setup()

# View a summary of results
display(results)

# Plot all benchmark results
plot(results)

# Share your results with the community (optional)
share_results(results)
```

## Features

- **Automatic Algorithm Benchmarking**: Tests all available LU factorization methods
- **Multi-size Testing**: Flexible size categories from small to very large matrices
- **Element Type Support**: Tests with Float32, Float64, ComplexF32, ComplexF64
- **GPU Support**: Automatically detects and benchmarks GPU algorithms if available
- **Performance Visualization**: Generate plots on demand with `plot(results)`
- **Community Sharing**: Optional telemetry to help improve algorithm selection

## Size Categories

The package uses flexible size categories:

- `:tiny` - Matrices from 5×5 to 20×20 (very small problems)
- `:small` - Matrices from 20×20 to 100×100 (small problems)
- `:medium` - Matrices from 100×100 to 300×300 (typical problems)
- `:large` - Matrices from 300×300 to 1000×1000 (larger problems)
- `:big` - Matrices from 10000×10000 to 100000×100000 (GPU/HPC)

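For example, to keep a run short on a modest laptop, you could restrict benchmarking to the quick categories (a sketch using the `sizes` keyword of `autotune_setup` documented in the API Reference):

```julia
using LinearSolve, LinearSolveAutotune

# Benchmark only the fast-to-run categories
results = autotune_setup(sizes = [:tiny, :small])
display(results)
```

Conversely, GPU/HPC users would typically pick `[:large, :big]`, since the small categories mostly measure launch overhead on such hardware.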

## Usage Examples

### Basic Benchmarking

```julia
# Default: small, medium, and large sizes
results = autotune_setup()

# Test all size ranges
results = autotune_setup(sizes = [:small, :medium, :large, :big])

# Large matrices only (for GPU systems)
results = autotune_setup(sizes = [:large, :big])

# Custom configuration
results = autotune_setup(
    sizes = [:medium, :large],
    samples = 10,
    seconds = 1.0,
    eltypes = (Float64, ComplexF64)
)

# View results and plot
display(results)
plot(results)
```

### Sharing Results

After running benchmarks, you can optionally share your results with the LinearSolve.jl community to help improve automatic algorithm selection:

```julia
# Share your benchmark results
share_results(results)
```

## Setting Up GitHub Authentication

To share results, you need GitHub authentication. We recommend using the GitHub CLI:

### Method 1: GitHub CLI (Recommended)

1. **Install GitHub CLI**
   - macOS: `brew install gh`
   - Windows: `winget install --id GitHub.cli`
   - Linux: See [cli.github.com](https://cli.github.com/manual/installation)

2. **Authenticate**
   ```bash
   gh auth login
   ```
   Follow the prompts to authenticate with your GitHub account.

3. **Verify authentication**
   ```bash
   gh auth status
   ```

### Method 2: GitHub Personal Access Token

If you prefer using a token:

1. Go to [GitHub Settings > Tokens](https://github.com/settings/tokens/new)
2. Add description: "LinearSolve.jl Telemetry"
3. Select scope: `public_repo`
4. Click "Generate token" and copy it
5. In Julia:
   ```julia
   ENV["GITHUB_TOKEN"] = "your_token_here"
   share_results(results)
   ```

## How It Works

1. **Benchmarking**: The `autotune_setup()` function runs comprehensive benchmarks of all available LinearSolve.jl algorithms across different matrix sizes and element types.

2. **Analysis**: Results are analyzed to find the best-performing algorithm for each size range and element type combination.

3. **Preferences**: Optionally sets Julia preferences to automatically use the best algorithms for your system.

4. **Sharing**: The `share_results()` function allows you to contribute your benchmark data to the community collection at [LinearSolve.jl Issue #669](https://github.com/SciML/LinearSolve.jl/issues/669).

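Step 3 uses Julia's standard package-preferences machinery, so the recorded choices persist across sessions in your environment's `LocalPreferences.toml`. Assuming the choices are stored as ordinary package preferences, you could inspect one with `Preferences.load_preference`; note the key name below is purely hypothetical, not a documented part of LinearSolve.jl:

```julia
using Preferences, LinearSolve

# Hypothetical key name for illustration only -- check your
# LocalPreferences.toml for the actual keys the autotune run wrote
load_preference(LinearSolve, "best_algorithm_Float64", nothing)
```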

## Privacy and Telemetry

- Sharing results is **completely optional**
- Only benchmark performance data and system specifications are shared
- No personal information is collected
- All shared data is publicly visible on GitHub
- You can review the exact data before sharing

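Since everything shared is publicly visible, it is worth inspecting the payload first. A minimal pre-flight check using only the documented API:

```julia
# Review exactly what would be shared: the benchmark data
# and system specifications contained in the results object
display(results)

# Only opt in once you are comfortable with what you saw
share_results(results)
```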
## API Reference

### `autotune_setup`

```julia
autotune_setup(;
    sizes = [:small, :medium, :large],
    set_preferences = true,
    samples = 5,
    seconds = 0.5,
    eltypes = (Float32, Float64, ComplexF32, ComplexF64),
    skip_missing_algs = false
)
```

**Parameters:**

- `sizes`: Vector of size categories to test
- `set_preferences`: Update LinearSolve preferences
- `samples`: Number of benchmark samples per test
- `seconds`: Maximum time per benchmark
- `eltypes`: Element types to benchmark
- `skip_missing_algs`: Continue if algorithms are missing

**Returns:**

- `results`: An `AutotuneResults` object containing benchmark data and system info

### `share_results`

```julia
share_results(results)
```

**Parameters:**

- `results`: `AutotuneResults` object from `autotune_setup`

## Contributing

Your benchmark contributions help improve LinearSolve.jl for everyone! By sharing results from diverse hardware configurations, we can build better automatic algorithm selection heuristics.

## License

Part of the SciML ecosystem. See LinearSolve.jl for license information.