
Commit 62b9932

Enhance UI/UX with progress bars and AutotuneResults object
- Add progress bar showing algorithm being benchmarked with percentage
- Adjust size ranges: medium now goes to 300, large is 300-1000
- Create AutotuneResults struct with nice display output
- Add plot() method for AutotuneResults to create composite plots
- Update default to include large matrices (small, medium, large)
- Add clear call-to-action in results display for sharing
- Add ProgressMeter dependency

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
1 parent dd9128b commit 62b9932

File tree: 4 files changed, +163 -68 lines

lib/LinearSolveAutotune/Project.toml

Lines changed: 2 additions & 0 deletions
````diff
@@ -18,6 +18,7 @@ LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
 Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
+ProgressMeter = "92933f4c-e287-5a05-a399-4b506db050ca"
 RecursiveFactorization = "f2c3362d-daeb-58d1-803e-2bc74f2840b4"
 blis_jll = "6136c539-28a5-5bf0-87cc-b183200dce32"
 LAPACK_jll = "51474c39-65e3-53ba-86ba-03b1b862ec14"
@@ -39,6 +40,7 @@ LinearAlgebra = "1"
 Printf = "1"
 Dates = "1"
 Test = "1"
+ProgressMeter = "1"
 RecursiveFactorization = "0.2"
 blis_jll = "0.9.0"
 LAPACK_jll = "3"
````
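The progress bar this dependency enables is driven from the benchmarking loop, which lives in the fourth changed file and is not shown in this diff. As a minimal sketch only — the algorithm list, size list, and loop body below are hypothetical stand-ins, not the package's actual internals — the ProgressMeter pattern described in the commit message looks like:

```julia
using ProgressMeter

# Hypothetical stand-ins; the real lists come from algorithms.jl.
algs = ["LUFactorization", "RFLUFactorization"]
sizes = [5, 10, 20, 50, 100, 300]

# One tick per (algorithm, size) pair; ProgressMeter renders the percentage.
prog = Progress(length(algs) * length(sizes); desc = "Benchmarking: ")
for alg in algs, n in sizes
    # ... run the benchmark for `alg` on an n×n problem here ...
    next!(prog; showvalues = [(:algorithm, alg), (:size, "$(n)×$(n)")])
end
```

`showvalues` prints the current algorithm and size beneath the bar, which matches the "showing algorithm being benchmarked with percentage" bullet in the commit message.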

lib/LinearSolveAutotune/README.md

Lines changed: 25 additions & 19 deletions
````diff
@@ -7,11 +7,17 @@ Automatic benchmarking and tuning for LinearSolve.jl algorithms.
 ```julia
 using LinearSolve, LinearSolveAutotune
 
-# Run benchmarks with default settings (small and medium sizes)
-results, sysinfo, plots = autotune_setup()
+# Run benchmarks with default settings (small, medium, and large sizes)
+results = autotune_setup()
+
+# View a summary of results
+display(results)
+
+# Plot all benchmark results
+plot(results)
 
 # Share your results with the community (optional)
-share_results(results, sysinfo, plots)
+share_results(results)
 ```
 
 ## Features
@@ -28,31 +34,35 @@ share_results(results, sysinfo, plots)
 The package now uses flexible size categories instead of a binary large_matrices flag:
 
 - `:small` - Matrices from 5×5 to 20×20 (quick tests)
-- `:medium` - Matrices from 20×20 to 100×100 (typical problems)
-- `:large` - Matrices from 100×100 to 1000×1000 (larger problems)
+- `:medium` - Matrices from 20×20 to 300×300 (typical problems)
+- `:large` - Matrices from 300×300 to 1000×1000 (larger problems)
 - `:big` - Matrices from 10000×10000 to 100000×100000 (GPU/HPC)
 
 ## Usage Examples
 
 ### Basic Benchmarking
 
 ```julia
-# Default: small and medium sizes
-results, sysinfo, plots = autotune_setup()
+# Default: small, medium, and large sizes
+results = autotune_setup()
 
 # Test all size ranges
-results, sysinfo, plots = autotune_setup(sizes = [:small, :medium, :large, :big])
+results = autotune_setup(sizes = [:small, :medium, :large, :big])
 
 # Large matrices only (for GPU systems)
-results, sysinfo, plots = autotune_setup(sizes = [:large, :big])
+results = autotune_setup(sizes = [:large, :big])
 
 # Custom configuration
-results, sysinfo, plots = autotune_setup(
+results = autotune_setup(
     sizes = [:medium, :large],
     samples = 10,
     seconds = 1.0,
     eltypes = (Float64, ComplexF64)
 )
+
+# View results and plot
+display(results)
+plot(results)
 ```
 
 ### Sharing Results
@@ -61,7 +71,7 @@ After running benchmarks, you can optionally share your results with the LinearS
 
 ```julia
 # Share your benchmark results
-share_results(results, sysinfo, plots)
+share_results(results)
 ```
 
 ## Setting Up GitHub Authentication
@@ -124,7 +134,7 @@ If you prefer using a token:
 
 ```julia
 autotune_setup(;
-    sizes = [:small, :medium],
+    sizes = [:small, :medium, :large],
     make_plot = true,
     set_preferences = true,
     samples = 5,
@@ -144,20 +154,16 @@ autotune_setup(;
 - `skip_missing_algs`: Continue if algorithms are missing
 
 **Returns:**
-- `results_df`: DataFrame with benchmark results
-- `sysinfo`: System information dictionary
-- `plots`: Performance plots (if `make_plot=true`)
+- `results`: AutotuneResults object containing benchmark data, system info, and plots
 
 ### `share_results`
 
 ```julia
-share_results(results_df, sysinfo, plots=nothing)
+share_results(results)
 ```
 
 **Parameters:**
-- `results_df`: Benchmark results from `autotune_setup`
-- `sysinfo`: System information from `autotune_setup`
-- `plots`: Optional plots from `autotune_setup`
+- `results`: AutotuneResults object from `autotune_setup`
 
 ## Contributing
````
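Migration note for existing users: `autotune_setup` no longer returns a three-tuple, so code written against the old API needs a small update. A sketch, with field names taken from the `AutotuneResults` struct defined in the source diff below:

```julia
# Old API (before this commit):
#   results_df, sysinfo, plots = autotune_setup()

# New API: one object whose fields carry the former return values.
results = autotune_setup()
results_df = results.results_df   # DataFrame of benchmark data
sysinfo = results.sysinfo         # Dict of system information
plots = results.plots             # Dict of plots per element type, or nothing
```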

lib/LinearSolveAutotune/src/LinearSolveAutotune.jl

Lines changed: 109 additions & 33 deletions
````diff
@@ -11,6 +11,7 @@ using LinearAlgebra
 using Printf
 using Dates
 using Base64
+using ProgressMeter
 
 # Hard dependency to ensure RFLUFactorization others solvers are available
 using RecursiveFactorization
@@ -24,7 +25,7 @@ using Metal
 using GitHub
 using Plots
 
-export autotune_setup, share_results
+export autotune_setup, share_results, AutotuneResults
 
 include("algorithms.jl")
 include("gpu_detection.jl")
@@ -33,9 +34,93 @@ include("plotting.jl")
 include("telemetry.jl")
 include("preferences.jl")
 
+# Define the AutotuneResults struct
+struct AutotuneResults
+    results_df::DataFrame
+    sysinfo::Dict
+    plots::Union{Nothing, Dict}
+end
+
+# Display method for AutotuneResults
+function Base.show(io::IO, results::AutotuneResults)
+    println(io, "="^60)
+    println(io, "LinearSolve.jl Autotune Results")
+    println(io, "="^60)
+
+    # System info summary
+    println(io, "\n📊 System Information:")
+    println(io, "  • CPU: ", get(results.sysinfo, "cpu_name", "Unknown"))
+    println(io, "  • OS: ", get(results.sysinfo, "os", "Unknown"))
+    println(io, "  • Julia: ", get(results.sysinfo, "julia_version", "Unknown"))
+    println(io, "  • Threads: ", get(results.sysinfo, "num_threads", "Unknown"))
+
+    # Results summary
+    successful_results = filter(row -> row.success, results.results_df)
+    if nrow(successful_results) > 0
+        println(io, "\n🏆 Top Performing Algorithms:")
+        summary = combine(groupby(successful_results, :algorithm),
+            :gflops => mean => :avg_gflops,
+            :gflops => maximum => :max_gflops,
+            nrow => :num_tests)
+        sort!(summary, :avg_gflops, rev = true)
+
+        # Show top 5
+        for (i, row) in enumerate(eachrow(first(summary, 5)))
+            println(io, "  ", i, ". ", row.algorithm, ": ",
+                @sprintf("%.2f GFLOPs avg", row.avg_gflops))
+        end
+    end
+
+    # Element types tested
+    eltypes = unique(results.results_df.eltype)
+    println(io, "\n🔬 Element Types Tested: ", join(eltypes, ", "))
+
+    # Matrix sizes tested
+    sizes = unique(results.results_df.size)
+    println(io, "📏 Matrix Sizes: ", minimum(sizes), "×", minimum(sizes),
+        " to ", maximum(sizes), "×", maximum(sizes))
+
+    # Call to action
+    println(io, "\n" * "="^60)
+    println(io, "💡 To share your results with the community, run:")
+    println(io, "   share_results(results)")
+    println(io, "\n📈 See community results at:")
+    println(io, "   https://github.com/SciML/LinearSolve.jl/issues/669")
+    println(io, "="^60)
+end
+
+# Plot method for AutotuneResults
+function Plots.plot(results::AutotuneResults; kwargs...)
+    if results.plots === nothing || isempty(results.plots)
+        @warn "No plots available in results. Run autotune_setup with make_plot=true"
+        return nothing
+    end
+
+    # Create a composite plot from all element type plots
+    plot_list = []
+    for (eltype_name, p) in results.plots
+        push!(plot_list, p)
+    end
+
+    # Create composite plot
+    n_plots = length(plot_list)
+    if n_plots == 1
+        return plot_list[1]
+    elseif n_plots == 2
+        return plot(plot_list..., layout=(1, 2), size=(1200, 500); kwargs...)
+    elseif n_plots <= 4
+        return plot(plot_list..., layout=(2, 2), size=(1200, 900); kwargs...)
+    else
+        ncols = ceil(Int, sqrt(n_plots))
+        nrows = ceil(Int, n_plots / ncols)
+        return plot(plot_list..., layout=(nrows, ncols),
+            size=(400*ncols, 400*nrows); kwargs...)
+    end
+end
+
 """
     autotune_setup(;
-        sizes = [:small, :medium],
+        sizes = [:small, :medium, :large],
         make_plot::Bool = true,
         set_preferences::Bool = true,
         samples::Int = 5,
````
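The new display path can be sanity-checked without running a benchmark by constructing the struct by hand. The column names below (`algorithm`, `size`, `eltype`, `gflops`, `success`) are the ones this `show` method reads; the data values are illustrative only:

```julia
using LinearSolveAutotune   # exports AutotuneResults
using DataFrames
using Plots                 # provides the plot(...) entry point extended above

df = DataFrame(
    algorithm = ["LUFactorization", "RFLUFactorization"],
    size = [100, 100],
    eltype = [Float64, Float64],
    gflops = [12.5, 18.3],
    success = [true, true])

info = Dict("cpu_name" => "Example CPU", "os" => "Linux",
    "julia_version" => "1.11.0", "num_threads" => 8)

results = AutotuneResults(df, info, nothing)
display(results)   # prints the formatted summary with the share_results call-to-action
plot(results)      # warns and returns nothing, since no plots are attached
```

For the composite layout, the fallback branch yields a near-square grid: with five element-type plots, `ncols = ceil(Int, sqrt(5)) = 3` and `nrows = ceil(Int, 5 / 3) = 2`, i.e. a 2×3 grid sized 1200×800 pixels.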
````diff
@@ -52,7 +137,7 @@ Run a comprehensive benchmark of all available LU factorization methods and opti
 
 # Arguments
 
-- `sizes = [:small, :medium]`: Size categories to test. Options: :small (5-20), :medium (20-100), :large (100-1000), :big (10000-100000)
+- `sizes = [:small, :medium, :large]`: Size categories to test. Options: :small (5-20), :medium (20-300), :large (300-1000), :big (10000-100000)
 - `make_plot::Bool = true`: Generate performance plots for each element type
 - `set_preferences::Bool = true`: Update LinearSolve preferences with optimal algorithms
 - `samples::Int = 5`: Number of benchmark samples per algorithm/size
@@ -62,31 +147,29 @@ Run a comprehensive benchmark of all available LU factorization methods and opti
 
 # Returns
 
-- `DataFrame`: Detailed benchmark results with performance data for all element types
-- `Dict`: System information about the benchmark environment
-- `Dict` or `Plot`: Performance visualizations by element type (if `make_plot=true`)
+- `AutotuneResults`: Object containing benchmark results, system info, and plots
 
 # Examples
 
 ```julia
 using LinearSolve
 using LinearSolveAutotune
 
-# Basic autotune with small and medium sizes
-results, sysinfo, plots = autotune_setup()
+# Basic autotune with default sizes
+results = autotune_setup()
 
 # Test all size ranges
-results, sysinfo, plots = autotune_setup(sizes = [:small, :medium, :large, :big])
+results = autotune_setup(sizes = [:small, :medium, :large, :big])
 
 # Large matrices only
-results, sysinfo, plots = autotune_setup(sizes = [:large, :big], samples = 10, seconds = 1.0)
+results = autotune_setup(sizes = [:large, :big], samples = 10, seconds = 1.0)
 
 # After running autotune, share results (requires gh CLI or GitHub token)
-share_results(results, sysinfo, plots)
+share_results(results)
 ```
 """
 function autotune_setup(;
-        sizes = [:small, :medium],
+        sizes = [:small, :medium, :large],
         make_plot::Bool = true,
         set_preferences::Bool = true,
         samples::Int = 5,
@@ -175,18 +258,12 @@ function autotune_setup(;
 
     sysinfo = get_detailed_system_info()
 
-    @info "To share your results with the community, run: share_results(results_df, sysinfo, plots_dict)"
-
-    # Return results and plots
-    if make_plot && plots_dict !== nothing && !isempty(plots_dict)
-        return results_df, sysinfo, plots_dict
-    else
-        return results_df, sysinfo, nothing
-    end
+    # Return AutotuneResults object
+    return AutotuneResults(results_df, sysinfo, plots_dict)
 end
 
 """
-    share_results(results_df::DataFrame, sysinfo::Dict, plots_dict=nothing)
+    share_results(results::AutotuneResults)
 
 Share your benchmark results with the LinearSolve.jl community to help improve
 automatic algorithm selection across different hardware configurations.
@@ -211,28 +288,27 @@ your results as a comment to the community benchmark collection issue.
 6. Run this function
 
 # Arguments
-- `results_df`: Benchmark results DataFrame from autotune_setup
-- `sysinfo`: System information Dict from autotune_setup
-- `plots_dict`: Optional plots dictionary from autotune_setup
+- `results`: AutotuneResults object from autotune_setup
 
 # Examples
 ```julia
 # Run benchmarks
-results, sysinfo, plots = autotune_setup()
+results = autotune_setup()
 
 # Share results with the community
-share_results(results, sysinfo, plots)
+share_results(results)
 ```
 """
-function share_results(results_df::DataFrame, sysinfo::Dict, plots_dict=nothing)
+function share_results(results::AutotuneResults)
    @info "📤 Preparing to share benchmark results with the community..."
 
-    # Get system info if not provided
-    system_info = if haskey(sysinfo, "os")
-        sysinfo
-    else
-        get_system_info()
-    end
+    # Extract from AutotuneResults
+    results_df = results.results_df
+    sysinfo = results.sysinfo
+    plots_dict = results.plots
+
+    # Get system info
+    system_info = sysinfo
 
     # Categorize results
     categories = categorize_results(results_df)
````
