Commit 5f88fbd

Merge branch 'JuliaSmoothOptimizers:main' into fix/link-checker
2 parents e6621fa + 02bb424 commit 5f88fbd

File tree: 11 files changed (+137, -63 lines)


.github/workflows/LinkChecker.yml — 1 addition, 1 deletion

@@ -17,7 +17,7 @@ jobs:
          ref: 'gh-pages'

      - name: Link Checker
-       uses: lycheeverse/lychee-action@v1.1.0
+       uses: lycheeverse/lychee-action@v2.0.2
        with:
          args: >-
            --verbose --no-progress **/*.html

tutorials/advanced-jsosolvers/index.md — 57 additions, 41 deletions
@@ -5,16 +5,16 @@
 \preamble{Tangi Migot}


-[![OptimizationProblems 0.7.3](https://img.shields.io/badge/OptimizationProblems-0.7.3-8b0000?style=flat-square&labelColor=cb3c33)](https://jso.dev/OptimizationProblems.jl/stable/)
-[![SolverBenchmark 0.6.0](https://img.shields.io/badge/SolverBenchmark-0.6.0-006400?style=flat-square&labelColor=389826)](https://jso.dev/SolverBenchmark.jl/stable/)
-![Plots 1.39.0](https://img.shields.io/badge/Plots-1.39.0-000?style=flat-square&labelColor=999)
-[![ADNLPModels 0.7.0](https://img.shields.io/badge/ADNLPModels-0.7.0-8b0000?style=flat-square&labelColor=cb3c33)](https://jso.dev/ADNLPModels.jl/stable/)
-[![Krylov 0.9.5](https://img.shields.io/badge/Krylov-0.9.5-4b0082?style=flat-square&labelColor=9558b2)](https://jso.dev/Krylov.jl/stable/)
-[![JSOSolvers 0.11.0](https://img.shields.io/badge/JSOSolvers-0.11.0-006400?style=flat-square&labelColor=389826)](https://jso.dev/JSOSolvers.jl/stable/)
+[![OptimizationProblems 0.7.4](https://img.shields.io/badge/OptimizationProblems-0.7.4-8b0000?style=flat-square&labelColor=cb3c33)](https://jso.dev/OptimizationProblems.jl/stable/)
+[![SolverBenchmark 0.6.2](https://img.shields.io/badge/SolverBenchmark-0.6.2-006400?style=flat-square&labelColor=389826)](https://jso.dev/SolverBenchmark.jl/stable/)
+![Plots 1.41.1](https://img.shields.io/badge/Plots-1.41.1-000?style=flat-square&labelColor=999)
+[![ADNLPModels 0.7.2](https://img.shields.io/badge/ADNLPModels-0.7.2-8b0000?style=flat-square&labelColor=cb3c33)](https://jso.dev/ADNLPModels.jl/stable/)
+[![Krylov 0.10.2](https://img.shields.io/badge/Krylov-0.10.2-4b0082?style=flat-square&labelColor=9558b2)](https://jso.dev/Krylov.jl/stable/)
+[![JSOSolvers 0.14.3](https://img.shields.io/badge/JSOSolvers-0.14.3-006400?style=flat-square&labelColor=389826)](https://jso.dev/JSOSolvers.jl/stable/)


-# Comparing subsolvers for nonlinear least squares JSOSolvers solvers
+# Comparing subsolvers for nonlinear least squares in JSOSolvers

 This tutorial showcases some advanced features of solvers in JSOSolvers.
@@ -25,20 +25,20 @@ using JSOSolvers


-We benchmark different subsolvers used in the solvers TRUNK for unconstrained nonlinear least squares problems.
+We benchmark different subsolvers used in the solver TRUNK for unconstrained nonlinear least squares problems.
 The first step is to select a set of problems that are nonlinear least squares.

 ```julia
 using ADNLPModels
 using OptimizationProblems
 using OptimizationProblems.ADNLPProblems
 df = OptimizationProblems.meta
-names = df[(df.objtype .== :least_squares) .& (df.contype .== :unconstrained), :name]
-ad_problems = (eval(Meta.parse(problem))(use_nls = true) for problem ∈ names)
+problem_names = df[(df.objtype .== :least_squares) .& (df.contype .== :unconstrained), :name]
+ad_problems = (eval(Meta.parse(problem))(use_nls = true) for problem ∈ problem_names)
 ```

 ```plaintext
-Base.Generator{Vector{String}, Main.var"##WeaveSandBox#292".var"#1#2"}(Main.var"##WeaveSandBox#292".var"#1#2"(), ["arglina", "arglinb", "bard", "bdqrtic", "beale", "bennett5", "boxbod", "brownal", "br
+Base.Generator{Vector{String}, Main.var"##WeaveSandBox#277".var"#2#3"}(Main.var"##WeaveSandBox#277".var"#2#3"(), ["arglina", "arglinb", "bard", "bdqrtic", "beale", "bennett5", "boxbod", "brownal", "br
 ownbs", "brownden" … "power", "rat42", "rat43", "rozman1", "sbrybnd", "spmsrtls", "thurber", "tquartic", "vibrbeam", "watson"])
 ```
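Editor's note: as a complement to the selection hunk above, a minimal sketch of how the chosen problem set could be inspected before instantiating the models. It assumes the `OptimizationProblems.meta` table also exposes an `:nvar` column, which is not shown in this diff.

```julia
using OptimizationProblems, DataFrames

df = OptimizationProblems.meta
mask = (df.objtype .== :least_squares) .& (df.contype .== :unconstrained)
println(sum(mask), " unconstrained nonlinear least-squares problems selected")
println(first(df[mask, [:name, :nvar]], 5))   # peek at the first few names and sizes
```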

@@ -69,62 +69,78 @@ JSOSolvers.trunkls_allowed_subsolvers
 ```

 ```plaintext
-4-element Vector{UnionAll}:
- Krylov.CglsSolver
- Krylov.CrlsSolver
- Krylov.LsqrSolver
- Krylov.LsmrSolver
+(:cgls, :crls, :lsqr, :lsmr)
 ```

-This benchmark could also be followed for the solver TRON where the following subsolver are available.
+This benchmark could also be followed for the solver TRON where the following subsolvers are available.

 ```julia
 JSOSolvers.tronls_allowed_subsolvers
 ```

 ```plaintext
-4-element Vector{UnionAll}:
- Krylov.CglsSolver
- Krylov.CrlsSolver
- Krylov.LsqrSolver
- Krylov.LsmrSolver
+(:cgls, :crls, :lsqr, :lsmr)
 ```

 These linear least squares solvers are implemented in the package [Krylov.jl](https://github.com/JuliaSmoothOptimizers/Krylov.jl).
+For detailed descriptions of each subsolver's algorithm and when to use it, see the [Krylov.jl documentation](https://jso.dev/Krylov.jl/stable/).
+
+We define a dictionary of the different solvers that will be benchmarked.
+We consider here four variants of TRUNK using the different subsolvers.
+
+For example, to call TRUNK with an explicit subsolver:

 ```julia
-using Krylov
+stats = trunk(nls, subsolver = :cgls)
 ```

+```plaintext
+"Execution stats: first-order stationary"
+```


-We define a dictionary of the different solvers that will be benchmarked.
-We consider here four variants of TRUNK using the different subsolvers.
+The same subsolver selection pattern applies to TRON's least-squares specialization:
+
+```julia
+stats_tron = tron(nls, subsolver = :lsmr)
+```
+
+```plaintext
+"Execution stats: first-order stationary"
+```
+
+Now we define the solver dictionary for benchmarking:

 ```julia
 solvers = Dict(
-  :trunk_cgls => model -> trunk(model, subsolver_type = CglsSolver),
-  :trunk_crls => model -> trunk(model, subsolver_type = CrlsSolver),
-  :trunk_lsqr => model -> trunk(model, subsolver_type = LsqrSolver),
-  :trunk_lsmr => model -> trunk(model, subsolver_type = LsmrSolver)
+  :trunk_cgls => model -> trunk(model, subsolver = :cgls),
+  :trunk_crls => model -> trunk(model, subsolver = :crls),
+  :trunk_lsqr => model -> trunk(model, subsolver = :lsqr),
+  :trunk_lsmr => model -> trunk(model, subsolver = :lsmr)
 )
 ```

 ```plaintext
 Dict{Symbol, Function} with 4 entries:
-  :trunk_lsqr => #5
-  :trunk_cgls => #3
-  :trunk_crls => #4
-  :trunk_lsmr => #6
+  :trunk_lsqr => #12
+  :trunk_cgls => #8
+  :trunk_crls => #10
+  :trunk_lsmr => #14
 ```
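Editor's note: the `nls` model used in the added `trunk`/`tron` calls is not defined anywhere in this excerpt. A self-contained sketch, assuming the `subsolver::Symbol` keyword behaves as the diff shows, could be:

```julia
using ADNLPModels, JSOSolvers

# Small nonlinear least-squares model (Rosenbrock residuals), zero residual at (1, 1).
F(x) = [x[1] - 1; 10 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2; 1.0], 2)     # 2 residuals, starting point (-1.2, 1.0)

stats = trunk(nls, subsolver = :cgls)   # any of :cgls, :crls, :lsqr, :lsmr
println(stats.status, " after ", stats.iter, " iterations")
```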

@@ -140,10 +156,10 @@ stats = bmark_solvers(solvers, ad_problems)

 ```plaintext
 Dict{Symbol, DataFrames.DataFrame} with 4 entries:
-  :trunk_lsqr => 66×39 DataFrame…
-  :trunk_cgls => 66×39 DataFrame…
-  :trunk_crls => 66×39 DataFrame…
-  :trunk_lsmr => 66×39 DataFrame…
+  :trunk_lsqr => 66×40 DataFrame…
+  :trunk_cgls => 66×40 DataFrame…
+  :trunk_crls => 66×40 DataFrame…
+  :trunk_lsmr => 66×40 DataFrame…
 ```
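Editor's note: each value in the `stats` dictionary above is a `DataFrame` with one row per problem. Assuming the usual `:status` and `:elapsed_time` columns (the same ones the tutorial's cost function uses below), a quick per-solver summary could be sketched as:

```julia
using DataFrames

for (solver, df) in stats
    nsolved = count(==(:first_order), df.status)        # runs that reached first-order stationarity
    total_t = round(sum(df.elapsed_time), digits = 2)   # total elapsed time over the test set
    println(solver, ": ", nsolved, "/", nrow(df), " solved, ", total_t, " s")
end
```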

@@ -162,8 +178,8 @@ costs = [df -> .!solved(df) .* Inf .+ df.elapsed_time]
 ```

 ```plaintext
-1-element Vector{Main.var"##WeaveSandBox#292".var"#11#12"}:
- #11 (generic function with 1 method)
+1-element Vector{Main.var"##WeaveSandBox#277".var"#17#18"}:
+ #17 (generic function with 1 method)
 ```
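Editor's note: `solved` and `costnames`, referenced in the hunk above, are defined earlier in the tutorial and are not part of this diff. A typical (assumed) definition treats first-order stationary runs as successes and gives failures an infinite cost:

```julia
# Assumption for illustration only; the tutorial's own definition is not shown in this excerpt.
solved(df) = df.status .== :first_order
costnames = ["elapsed time"]
costs = [df -> .!solved(df) .* Inf .+ df.elapsed_time]
```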

@@ -180,10 +196,10 @@ gr()
 profile_solvers(stats, costs, costnames)
 ```

-![](figures/index_10_1.png)
+![](figures/index_11_1.png)


 The CRLS and CGLS variants are the ones solving more problems, and even though the difference is rather small the CGLS variant is consistently faster which seems to indicate that it is the most appropriate subsolver for TRUNK.
-The size of the problems were rather small here, so this should be confirmed on larger instance.
+The size of the problems was rather small here, so this should be confirmed on larger instances.
 Moreover, the results may vary depending on the origin of the test problems.

tutorials/introduction-to-benchmarkprofiles/index.md — 41 additions, 5 deletions
@@ -5,17 +5,31 @@
 \preamble{Abel Soares Siqueira and Dominique Orban}


-![JSON 0.21.3](https://img.shields.io/badge/JSON-0.21.3-000?style=flat-square&labelColor=999)
-[![BenchmarkProfiles 0.4.3](https://img.shields.io/badge/BenchmarkProfiles-0.4.3-006400?style=flat-square&labelColor=389826)](https://juliasmoothoptimizers.github.io/BenchmarkProfiles.jl/stable/)
-![Plots 1.38.5](https://img.shields.io/badge/Plots-1.38.5-000?style=flat-square&labelColor=999)
+![JSON 0.21.4](https://img.shields.io/badge/JSON-0.21.4-000?style=flat-square&labelColor=999)
+[![BenchmarkProfiles 0.4.6](https://img.shields.io/badge/BenchmarkProfiles-0.4.6-006400?style=flat-square&labelColor=389826)](https://jso.dev/BenchmarkProfiles.jl/stable/)
+![Plots 1.41.1](https://img.shields.io/badge/Plots-1.41.1-000?style=flat-square&labelColor=999)
+![Random 1.11.0](https://img.shields.io/badge/Random-1.11.0-000?style=flat-square&labelColor=999)


-This tutorial is essentially a collection of examples.
+This tutorial demonstrates how to use BenchmarkProfiles.jl to visualize and compare solver performance across multiple test problems.

 ## Performance Profile

-Performance profiles are straightforward to use. The input is a matrix `T` with entries `T[i,j]` indicating the cost to solve problem `i` using solver `j`. Cost can be, for instance, elapsed time, or number of evaluations. The cost should be positive. If any cost is zero, all measures will be shifted by 1.
+Performance profiles, introduced by Dolan and Moré (2002), provide a graphical way to compare the performance of multiple solvers across a test set. They show the fraction of problems solved by each solver as a function of a performance tolerance.
+
+### Understanding Performance Profiles
+
+The input is a matrix `T` with entries `T[i,j]` indicating the cost to solve problem `i` using solver `j`. Cost can be, for instance, elapsed time, number of iterations, or function evaluations. The cost should be positive. If any cost is zero, all measures will be shifted by 1.
+
+The performance profile plots:
+- **x-axis (τ)**: Performance ratio - how much slower a solver is compared to the best solver for each problem. τ=1 means the solver was fastest, τ=2 means it took twice as long as the fastest solver.
+- **y-axis (ρ(τ))**: Fraction of problems solved within the performance ratio τ. ρ(2) = 0.8 means the solver solved 80% of problems within twice the time of the best solver.
+
+**Key interpretations**:
+- The height at τ=1 (left side) shows the fraction of problems where the solver was fastest
+- The right-side height (as τ→∞) shows the fraction of problems successfully solved (robustness)
+- Higher curves are better - the solver solves more problems with smaller performance ratios

 Basic usage:
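Editor's note: to make the τ and ρ(τ) interpretation added above concrete, here is a small worked sketch on a hypothetical 3-problem, 2-solver cost matrix (plain Julia, independent of BenchmarkProfiles):

```julia
# Rows are problems, columns are solvers; Inf marks a failed run.
# Problem 1: solver 1 fastest; problem 2: solver 2 fastest; problem 3: solver 2 failed.
T = [1.0 2.0; 4.0 2.0; 3.0 Inf]

ratios = T ./ minimum(T, dims = 2)                        # performance ratio of each solver on each problem
rho_at_2 = vec(sum(ratios .<= 2, dims = 1)) / size(T, 1)  # ρ(2) for each solver
println(rho_at_2)   # ≈ [1.0, 0.67]: solver 1 is within 2× the best on every problem, solver 2 on 2 of 3
```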

@@ -54,9 +68,19 @@ performance_profile(PlotsBackend(), T, ["Solver 1", "Solver 2", "Solver 3"])



+### Customization Options
+
 `Plots` arguments can be passed to `performance_profile()` or used as they normally would be with `Plots`.
 In the example below, we pass `xlabel` to `performance_profile` and set `ylabel` through `ylabel!`.

+Common customization options:
+- `lw`: Line width
+- `c` or `color`: Line colors
+- `linestyles`: Line styles (`:solid`, `:dash`, `:dot`, etc.)
+- `xlabel`, `ylabel`: Axis labels
+- `title`: Plot title
+- `legend`: Legend position (e.g., `:bottomright`, `:topleft`)
+
 ```julia
 using Plots

@@ -67,3 +91,15 @@ ylabel!("ρ(τ)")
 ```

 ![](figures/index_4_1.png)
+
+
+### Additional Parameters
+
+The `performance_profile` function accepts several optional keyword arguments:
+
+- `logscale::Bool=true`: Use logarithmic scale on the x-axis (default: true). Useful for viewing performance across a wide range of ratios.
+- `sampletol::Number=0`: Tolerance for sampling data points. Can reduce plot complexity for large datasets.
+- `title::String=""`: Title for the plot
+
+For more details on performance profiles, see: Dolan, E. D., & Moré, J. J. (2002). Benchmarking optimization software with performance profiles. Mathematical Programming, 91(2), 201-213.
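Editor's note: a hedged sketch combining the customization options and the `logscale` keyword documented in the added text above; the random matrix `T` is a stand-in, not the tutorial's data, and the extra keywords are assumed to be forwarded to `Plots` as described.

```julia
using BenchmarkProfiles, Plots

T = 10 .* rand(25, 3)   # stand-in cost matrix: 25 problems, 3 solvers
performance_profile(PlotsBackend(), T, ["Solver 1", "Solver 2", "Solver 3"],
                    lw = 2, linestyles = [:solid, :dash, :dot],
                    legend = :bottomright, title = "Performance profile (random data)",
                    logscale = false)   # linear τ axis instead of the default logarithmic one
ylabel!("ρ(τ)")
```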
