LinearSolve.jl includes an automatic tuning system that benchmarks all available linear algebra algorithms on your specific hardware and automatically selects optimal algorithms for different problem sizes and data types. This tutorial will show you how to use the `LinearSolveAutotune` sublibrary to optimize your linear solve performance.
!!! warning

    This is still in development. At this point, tuning will not result in different settings,
    but it will run the benchmarks and create plots of the performance of the algorithms. A
    future version will use the results to set preferences for the algorithms.
## Quick Start
The simplest way to use the autotuner is to run it with default settings:
```julia
using LinearSolve, LinearSolveAutotune

results = autotune_setup()
```
This will:

- Benchmark 4 element types: `Float32`, `Float64`, `ComplexF32`, `ComplexF64`
- Test matrix sizes from small (4×4) and medium (500×500) up to large (10,000×10,000)
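If the defaults don't match your workload, `autotune_setup` can be customized. As a hedged sketch using the keyword arguments mentioned later in this tutorial (`large_matrices`, `samples`, `seconds`, `telemetry`); check the `LinearSolveAutotune` docstrings for the exact signature:

```julia
using LinearSolve, LinearSolveAutotune

# Hypothetical customized run; keyword names are taken from elsewhere in
# this tutorial, and the values shown are illustrative only.
results = autotune_setup(
    large_matrices = true,   # also benchmark the large size range
    samples = 10,            # more samples per benchmark for stable timings
    seconds = 1.0,           # time budget per benchmark
    telemetry = false,       # keep results local
)
```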
Usage of autotune preferences is still in development.
After running autotune, LinearSolve.jl will automatically use the optimal algorithms:
```julia
A_large = rand(300, 300)  # Different size range
b_large = rand(300)
prob_large = LinearProblem(A_large, b_large)
sol_large = solve(prob_large)  # May use different algorithm
```
## Best Practices
1. **Run autotune once per system**: Results are system-specific and should be rerun when hardware changes.
2. **Use appropriate matrix sizes**: Set `large_matrices = true` only if you regularly solve large systems.
3. **Consider element types**: Only benchmark the types you actually use to save time.
4. **Benchmark thoroughly for production**: Use higher `samples` and `seconds` values for production systems.
5. **Respect privacy**: Disable telemetry on sensitive or proprietary systems.
6. **Save results**: The returned DataFrame contains valuable performance data for analysis.
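As an example of the last point, since the returned `results` is a DataFrame, it can be persisted with standard tooling. A minimal sketch, assuming CSV.jl is installed in the active environment:

```julia
using CSV  # assumes CSV.jl is available

# `results` is the DataFrame returned by autotune_setup()
CSV.write("autotune_results.csv", results)
```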
## Troubleshooting
### No Algorithms Available
If you get "No algorithms found", ensure LinearSolve.jl is properly installed:

```julia
using Pkg
Pkg.test("LinearSolve")
```
### GPU Algorithms Missing
GPU algorithms require additional packages:

```julia
# For CUDA
using CUDA, LinearSolve

# For Metal (Apple Silicon)
using Metal, LinearSolve
```
### Preferences Not Applied
Restart Julia after running autotune for preferences to take effect, or check:

```julia
LinearSolveAutotune.show_current_preferences()
```
### Slow BigFloat Performance
317
-
This is expected - arbitrary precision arithmetic is much slower than hardware floating point. Consider using `DoubleFloats.jl` or `MultiFloats.jl` for better performance if extreme precision isn't required.
318
-
319
-
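As a hedged sketch of that workaround, DoubleFloats.jl provides a `Double64` type (roughly 32 significant decimal digits) that is typically much faster than `BigFloat`; this assumes LinearSolve's generic factorization fallbacks accept the element type:

```julia
using LinearSolve
using DoubleFloats  # provides Double64: extended (not arbitrary) precision

# Same kind of problem, but with extended-precision entries
A = Double64.(rand(50, 50))
b = Double64.(rand(50))
prob = LinearProblem(A, b)
sol = solve(prob)  # non-BLAS element types fall back to generic factorizations
```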
## Community and Telemetry
By default, autotune results are shared with the LinearSolve.jl community via public GitHub gists to help improve algorithm selection for everyone. The shared data includes:

- System information (OS, CPU, core count, etc.)
- Algorithm performance results
- NO personal information or sensitive data
Results are uploaded as public gists that can be easily searched and viewed by the community.
### GitHub Authentication for Telemetry
When telemetry is enabled, the system will prompt you to set up GitHub authentication if not already configured:

```julia
# This will prompt for GitHub token setup if GITHUB_TOKEN is not found
results = autotune_setup(telemetry = true)
```
The system will wait for you to create and paste a GitHub token. This helps the community by sharing performance data across different hardware configurations via easily discoverable GitHub gists.
**Interactive Setup:**

The autotune process will show step-by-step instructions and wait for you to:

1. Create a GitHub token at the provided link
2. Paste the token when prompted
3. Proceed with benchmarking and automatic result sharing
**Alternative - Pre-setup Environment Variable**:

```bash
export GITHUB_TOKEN=your_token_here
julia
```
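The same effect can be achieved from an already-running session. A sketch, assuming the package reads the token from `ENV` at the time `autotune_setup` is called:

```julia
# Set the token for the current Julia session only (assumption: LinearSolveAutotune
# looks up ENV["GITHUB_TOKEN"] when telemetry is enabled)
ENV["GITHUB_TOKEN"] = "your_token_here"

results = autotune_setup(telemetry = true)
```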
**Creating the GitHub Token:**
1. Open [https://github.com/settings/tokens?type=beta](https://github.com/settings/tokens?type=beta)
This helps the community understand performance across different hardware configurations and improves the default algorithm selection for future users, but participation is entirely optional.