4 changes: 4 additions & 0 deletions lib/LinearSolveAutotune/Project.toml
@@ -7,6 +7,7 @@ version = "1.0.0"
LinearSolve = "7ed4a6bd-45f5-4d41-b270-4a48e9bafcae"
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
Base64 = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
CPUSummary = "2a0fbf3d-bb9c-48f3-b0a9-814d99fd7ab9"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
GitHub = "bc5e4493-9b4d-5f90-b8aa-2b2bcaad7a26"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
@@ -18,6 +19,7 @@ LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
ProgressMeter = "92933f4c-e287-5a05-a399-4b506db050ca"
RecursiveFactorization = "f2c3362d-daeb-58d1-803e-2bc74f2840b4"
blis_jll = "6136c539-28a5-5bf0-87cc-b183200dce32"
LAPACK_jll = "51474c39-65e3-53ba-86ba-03b1b862ec14"
@@ -28,6 +30,7 @@ Metal = "dde4c033-4e86-420c-a63e-0dd931031962"
LinearSolve = "3"
BenchmarkTools = "1"
Base64 = "1"
CPUSummary = "0.2"
DataFrames = "1"
GitHub = "5"
Plots = "1"
@@ -39,6 +42,7 @@ LinearAlgebra = "1"
Printf = "1"
Dates = "1"
Test = "1"
ProgressMeter = "1"
RecursiveFactorization = "0.2"
blis_jll = "0.9.0"
LAPACK_jll = "3"
173 changes: 173 additions & 0 deletions lib/LinearSolveAutotune/README.md
@@ -0,0 +1,173 @@
# LinearSolveAutotune.jl

Automatic benchmarking and tuning for LinearSolve.jl algorithms.

## Quick Start

```julia
using LinearSolve, LinearSolveAutotune

# Run benchmarks with default settings (small, medium, and large sizes)
results = autotune_setup()

# View a summary of results
display(results)

# Plot all benchmark results
plot(results)

# Share your results with the community (optional)
share_results(results)
```

## Features

- **Automatic Algorithm Benchmarking**: Tests all available LU factorization methods
- **Multi-size Testing**: Flexible size categories from small to very large matrices
- **Element Type Support**: Tests with Float32, Float64, ComplexF32, ComplexF64
- **GPU Support**: Automatically detects and benchmarks GPU algorithms if available
- **Performance Visualization**: Generate plots on demand with `plot(results)`
- **Community Sharing**: Optional telemetry to help improve algorithm selection

## Size Categories

The package supports five flexible size categories:

- `:tiny` - Matrices from 5×5 to 20×20 (very small problems)
- `:small` - Matrices from 20×20 to 100×100 (small problems)
- `:medium` - Matrices from 100×100 to 300×300 (typical problems)
- `:large` - Matrices from 300×300 to 1000×1000 (larger problems)
- `:big` - Matrices from 10000×10000 to 100000×100000 (GPU/HPC)
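The categories above can be thought of as ranges of test dimensions. The following is a hypothetical sketch of such a mapping; the exact step sizes the package uses internally are an assumption here, only the endpoints come from the list above:

```julia
# Hypothetical mapping from size-category symbols to matrix dimensions.
# Endpoints follow the categories documented above; step sizes are assumed.
const SIZE_CATEGORIES = Dict(
    :tiny   => 5:5:20,
    :small  => 20:20:100,
    :medium => 100:50:300,
    :large  => 300:100:1000,
    :big    => 10_000:10_000:100_000,
)

# Collect the unique, sorted test sizes for a set of categories.
sizes_for(categories) =
    sort!(unique(reduce(vcat, (collect(SIZE_CATEGORIES[c]) for c in categories))))
```

For example, `sizes_for([:tiny, :small])` merges the two ranges and deduplicates the shared endpoint 20.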

## Usage Examples

### Basic Benchmarking

```julia
# Default: small, medium, and large sizes
results = autotune_setup()

# Test all size ranges
results = autotune_setup(sizes = [:small, :medium, :large, :big])

# Large matrices only (for GPU systems)
results = autotune_setup(sizes = [:large, :big])

# Custom configuration
results = autotune_setup(
sizes = [:medium, :large],
samples = 10,
seconds = 1.0,
eltypes = (Float64, ComplexF64)
)

# View results and plot
display(results)
plot(results)
```

### Sharing Results

After running benchmarks, you can optionally share your results with the LinearSolve.jl community to help improve automatic algorithm selection:

```julia
# Share your benchmark results
share_results(results)
```

## Setting Up GitHub Authentication

To share results, you need GitHub authentication. We recommend using the GitHub CLI:

### Method 1: GitHub CLI (Recommended)

1. **Install GitHub CLI**
- macOS: `brew install gh`
- Windows: `winget install --id GitHub.cli`
- Linux: See [cli.github.com](https://cli.github.com/manual/installation)

2. **Authenticate**
```bash
gh auth login
```
Follow the prompts to authenticate with your GitHub account.

3. **Verify authentication**
```bash
gh auth status
```

### Method 2: GitHub Personal Access Token

If you prefer using a token:

1. Go to [GitHub Settings > Tokens](https://github.com/settings/tokens/new)
2. Add description: "LinearSolve.jl Telemetry"
3. Select scope: `public_repo`
4. Click "Generate token" and copy it
5. In Julia:
```julia
ENV["GITHUB_TOKEN"] = "your_token_here"
share_results(results)
```

## How It Works

1. **Benchmarking**: The `autotune_setup()` function runs comprehensive benchmarks of all available LinearSolve.jl algorithms across different matrix sizes and element types.

2. **Analysis**: Results are analyzed to find the best-performing algorithm for each size range and element type combination.

3. **Preferences**: Optionally sets Julia preferences to automatically use the best algorithms for your system.

4. **Sharing**: The `share_results()` function allows you to contribute your benchmark data to the community collection at [LinearSolve.jl Issue #669](https://github.com/SciML/LinearSolve.jl/issues/669).
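The analysis step above boils down to grouping timings and keeping the fastest algorithm per group. Here is a rough sketch using DataFrames.jl; the column names and values are illustrative assumptions, not the package's actual result schema:

```julia
using DataFrames

# Toy benchmark table; columns are hypothetical stand-ins for the real schema.
df = DataFrame(
    eltype    = ["Float64", "Float64", "Float64", "Float64"],
    size      = [100, 100, 500, 500],
    algorithm = ["LUFactorization", "RFLUFactorization",
                 "LUFactorization", "RFLUFactorization"],
    gflops    = [35.0, 42.0, 60.0, 55.0],
)

# For each (eltype, size) combination, keep the highest-throughput algorithm.
best = combine(groupby(df, [:eltype, :size])) do g
    g[argmax(g.gflops), [:algorithm, :gflops]]
end
```

The preferences step then records choices like these via Preferences.jl so that LinearSolve.jl can pick them up at load time.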

## Privacy and Telemetry

- Sharing results is **completely optional**
- Only benchmark performance data and system specifications are shared
- No personal information is collected
- All shared data is publicly visible on GitHub
- You can review the exact data before sharing

## API Reference

### `autotune_setup`

```julia
autotune_setup(;
sizes = [:small, :medium, :large],
set_preferences = true,
samples = 5,
seconds = 0.5,
eltypes = (Float32, Float64, ComplexF32, ComplexF64),
skip_missing_algs = false
)
```

**Parameters:**
- `sizes`: Vector of size categories to test
- `set_preferences`: Update LinearSolve preferences
- `samples`: Number of benchmark samples per test
- `seconds`: Maximum time per benchmark
- `eltypes`: Element types to benchmark
- `skip_missing_algs`: Continue if algorithms are missing

**Returns:**
- `results`: AutotuneResults object containing benchmark data and system info

### `share_results`

```julia
share_results(results)
```

**Parameters:**
- `results`: AutotuneResults object from `autotune_setup`

## Contributing

Your benchmark contributions help improve LinearSolve.jl for everyone! By sharing results from diverse hardware configurations, we can build better automatic algorithm selection heuristics.

## License

Part of the SciML ecosystem. See LinearSolve.jl for license information.