
Commit dd9128b

Improve LinearSolveAutotune UI/UX

- Remove all token authentication code from main autotune flow
- Split autotuning and result sharing into separate functions
- Add flexible size categories (small/medium/large/big) replacing binary large_matrices flag
- Add clear gh CLI setup instructions in README
- Make telemetry opt-in via explicit share_results() call

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
1 parent fd44500 commit dd9128b

4 files changed (+316, -114 lines)

lib/LinearSolveAutotune/README.md

Lines changed: 168 additions & 0 deletions
# LinearSolveAutotune.jl

Automatic benchmarking and tuning for LinearSolve.jl algorithms.

## Quick Start

```julia
using LinearSolve, LinearSolveAutotune

# Run benchmarks with default settings (small and medium sizes)
results, sysinfo, plots = autotune_setup()

# Share your results with the community (optional)
share_results(results, sysinfo, plots)
```
## Features

- **Automatic Algorithm Benchmarking**: Tests all available LU factorization methods
- **Multi-size Testing**: Flexible size categories from small to very large matrices
- **Element Type Support**: Tests with Float32, Float64, ComplexF32, ComplexF64
- **GPU Support**: Automatically detects and benchmarks GPU algorithms if available
- **Performance Visualization**: Creates plots showing algorithm performance
- **Community Sharing**: Optional telemetry to help improve algorithm selection
## Size Categories

The package uses flexible size categories in place of a single binary `large_matrices` flag:

- `:small` - Matrices from 5×5 to 20×20 (quick tests)
- `:medium` - Matrices from 20×20 to 100×100 (typical problems)
- `:large` - Matrices from 100×100 to 1000×1000 (larger problems)
- `:big` - Matrices from 10000×10000 to 100000×100000 (GPU/HPC)
## Usage Examples

### Basic Benchmarking

```julia
# Default: small and medium sizes
results, sysinfo, plots = autotune_setup()

# Test all size ranges
results, sysinfo, plots = autotune_setup(sizes = [:small, :medium, :large, :big])

# Large matrices only (for GPU systems)
results, sysinfo, plots = autotune_setup(sizes = [:large, :big])

# Custom configuration
results, sysinfo, plots = autotune_setup(
    sizes = [:medium, :large],
    samples = 10,
    seconds = 1.0,
    eltypes = (Float64, ComplexF64)
)
```
### Sharing Results

After running benchmarks, you can optionally share your results with the LinearSolve.jl community to help improve automatic algorithm selection:

```julia
# Share your benchmark results
share_results(results, sysinfo, plots)
```
## Setting Up GitHub Authentication

To share results, you need GitHub authentication. We recommend using the GitHub CLI:

### Method 1: GitHub CLI (Recommended)

1. **Install GitHub CLI**
   - macOS: `brew install gh`
   - Windows: `winget install --id GitHub.cli`
   - Linux: See [cli.github.com](https://cli.github.com/manual/installation)

2. **Authenticate**
   ```bash
   gh auth login
   ```
   Follow the prompts to authenticate with your GitHub account.

3. **Verify authentication**
   ```bash
   gh auth status
   ```
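If you prefer to confirm from inside a Julia session that the CLI is installed and authenticated before calling `share_results`, a check along these lines works; it uses only Base Julia plus the external `gh` CLI, and LinearSolveAutotune is not assumed to provide such a helper.

```julia
# Optional sanity check from Julia before sharing results.
# Uses only Base Julia and the external `gh` CLI.
if Sys.which("gh") === nothing
    @warn "GitHub CLI (gh) not found on PATH; install it or set ENV[\"GITHUB_TOKEN\"] instead"
else
    run(`gh auth status`)  # prints the current authentication state
end
```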
### Method 2: GitHub Personal Access Token

If you prefer using a token:

1. Go to [GitHub Settings > Tokens](https://github.com/settings/tokens/new)
2. Add description: "LinearSolve.jl Telemetry"
3. Select scope: `public_repo`
4. Click "Generate token" and copy it
5. In Julia:
   ```julia
   ENV["GITHUB_TOKEN"] = "your_token_here"
   share_results(results, sysinfo, plots)
   ```
## How It Works

1. **Benchmarking**: The `autotune_setup()` function runs comprehensive benchmarks of all available LinearSolve.jl algorithms across different matrix sizes and element types.

2. **Analysis**: Results are analyzed to find the best-performing algorithm for each size range and element type combination.

3. **Preferences**: Optionally sets Julia preferences so that LinearSolve.jl automatically uses the best algorithms for your system.

4. **Sharing**: The `share_results()` function allows you to contribute your benchmark data to the community collection at [LinearSolve.jl Issue #669](https://github.com/SciML/LinearSolve.jl/issues/669).
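Putting the four steps together, a minimal end-to-end session looks roughly like this; whether the stored preferences actually steer the default algorithm choice in `solve` is assumed from step 3 above rather than spelled out in this README.

```julia
using LinearSolve, LinearSolveAutotune

# Steps 1-3: benchmark, analyze, and (with the default set_preferences = true)
# record the best algorithms for this machine as Julia preferences.
results, sysinfo, plots = autotune_setup(sizes = [:small, :medium])

# Later solves use LinearSolve.jl's default algorithm selection, which the
# tuned preferences are assumed to inform (see lead-in above).
A = rand(200, 200)
b = rand(200)
sol = solve(LinearProblem(A, b))

# Step 4: optionally contribute the data to the community collection.
share_results(results, sysinfo, plots)
```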
## Privacy and Telemetry

- Sharing results is **completely optional**
- Only benchmark performance data and system specifications are shared
- No personal information is collected
- All shared data is publicly visible on GitHub
- You can review the exact data before sharing
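Because `results` is a DataFrame and `sysinfo` a dictionary of system specifications, you can review exactly what would be posted before calling `share_results`; the snippet below uses only generic DataFrames.jl inspection and assumes no particular column names.

```julia
using DataFrames

# Look over the benchmark table that would be shared
names(results)       # column names of the results DataFrame
first(results, 10)   # first ten benchmark rows

# Look over the collected system specifications
sysinfo
```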
## API Reference

### `autotune_setup`

```julia
autotune_setup(;
    sizes = [:small, :medium],
    make_plot = true,
    set_preferences = true,
    samples = 5,
    seconds = 0.5,
    eltypes = (Float32, Float64, ComplexF32, ComplexF64),
    skip_missing_algs = false
)
```

**Parameters:**
- `sizes`: Vector of size categories to test
- `make_plot`: Generate performance plots
- `set_preferences`: Update LinearSolve.jl preferences based on the benchmark results
- `samples`: Number of benchmark samples per test
- `seconds`: Maximum time per benchmark
- `eltypes`: Element types to benchmark
- `skip_missing_algs`: Continue benchmarking if some algorithms are unavailable

**Returns:**
- `results_df`: DataFrame with benchmark results
- `sysinfo`: System information dictionary
- `plots`: Performance plots (if `make_plot = true`)
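The returned DataFrame can also be kept around for your own comparisons; CSV.jl below is an arbitrary choice for illustration, not a dependency of LinearSolveAutotune.

```julia
using CSV, DataFrames, LinearSolve, LinearSolveAutotune

results, sysinfo, plots = autotune_setup()

# Save the raw benchmark table locally for later comparison
CSV.write("autotune_results.csv", results)
```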
### `share_results`

```julia
share_results(results_df, sysinfo, plots = nothing)
```

**Parameters:**
- `results_df`: Benchmark results from `autotune_setup`
- `sysinfo`: System information from `autotune_setup`
- `plots`: Optional plots from `autotune_setup`
## Contributing

Your benchmark contributions help improve LinearSolve.jl for everyone! By sharing results from diverse hardware configurations, we can build better automatic algorithm selection heuristics.

## License

Part of the SciML ecosystem. See LinearSolve.jl for license information.
