
Commit e501118

Add performance tips tutorial (#191)
* Add performance tip tutorials
1 parent d35eef0 commit e501118

3 files changed: +214 -0 lines changed

docs/Project.toml

Lines changed: 15 additions & 0 deletions
```diff
@@ -1,12 +1,27 @@
 [deps]
 ADNLPModels = "54578032-b7ea-4c30-94aa-7cbd1cce6c9a"
+BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
+DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 ManualNLPModels = "30dfa513-9b2f-4fb3-9796-781eabac1617"
 NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6"
+NLPModelsJuMP = "792afdf1-32c1-5681-94e0-d7bf7a5df49e"
+OptimizationProblems = "5049e819-d29b-5fba-b941-0eee7e64c1c6"
+Percival = "01435c0c-c90d-11e9-3788-63660f8fbccc"
+Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
+SolverBenchmark = "581a75fa-a23a-52d0-a590-d6201de2218a"
 Symbolics = "0c5d862f-8b57-4792-8d23-62f2024744c7"
+Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

 [compat]
+DataFrames = "1"
 Documenter = "0.27"
 ManualNLPModels = "0.1"
 NLPModels = "0.20"
+NLPModelsJuMP = "0.12"
+OptimizationProblems = "0.7"
+Percival = "0.7"
+Plots = "1"
+SolverBenchmark = "0.5"
 Symbolics = "5.3"
+Zygote = "0.6.62"
```

docs/make.jl

Lines changed: 1 addition & 0 deletions
```diff
@@ -17,6 +17,7 @@ makedocs(
     "Default backends" => "predefined.md",
     "Build a hybrid NLPModel" => "mixed.md",
     "Support multiple precision" => "generic.md",
+    "Performance tips" => "performance.md",
     "Reference" => "reference.md",
   ],
 )
```

docs/src/performance.md

Lines changed: 198 additions & 0 deletions
# Performance tips

The package `ADNLPModels.jl` is designed to easily model optimization problems and to allow efficient access to the [`NLPModel API`](https://github.com/JuliaSmoothOptimizers/NLPModels.jl).
In this tutorial, we go over a few tips to get the best performance out of the model.
## Use in-place constructors

When dealing with a constrained optimization problem, it is recommended to use in-place constraint functions.

```@example ex1
using ADNLPModels, NLPModels
f(x) = sum(x)
x0 = ones(2)
lcon = ucon = ones(1)

# out-of-place constraint: returns a newly allocated vector on each call
c_out(x) = [x[1]]
nlp_out = ADNLPModel(f, x0, c_out, lcon, ucon)

# in-place constraint: fills the preallocated vector `cx`
c_in(cx, x) = begin
  cx[1] = x[1]
  return cx
end
nlp_in = ADNLPModel!(f, x0, c_in, lcon, ucon)
```
```@example ex1
using BenchmarkTools
cx = rand(1)
x = 18 * ones(2)
@btime cons!(nlp_out, x, cx)
```

```@example ex1
@btime cons!(nlp_in, x, cx)
```
The difference between the two grows with the dimension, since the out-of-place constraint allocates a new output vector on every evaluation.
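As a rough illustration, here is a minimal self-contained sketch of the same comparison at a larger dimension; the problem and the `_n` names are ours, not part of the tutorial above, and the exact timings depend on your machine.

```julia
using ADNLPModels, NLPModels, BenchmarkTools

n = 1_000
f_n(x) = sum(x)
x0_n = ones(n)
lcon_n = ucon_n = zeros(n)

c_out_n(x) = x .- 1                  # out-of-place: allocates a fresh vector each call
c_in_n!(cx, x) = (cx .= x .- 1; cx)  # in-place: reuses the preallocated cx

nlp_out_n = ADNLPModel(f_n, x0_n, c_out_n, lcon_n, ucon_n)
nlp_in_n = ADNLPModel!(f_n, x0_n, c_in_n!, lcon_n, ucon_n)

cx_n = zeros(n)
@btime cons!(nlp_out_n, x0_n, cx_n)  # allocates on every evaluation
@btime cons!(nlp_in_n, x0_n, cx_n)   # allocation-free
```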
Note that the same applies to nonlinear least squares problems.
```@example ex1
# out-of-place residual: returns a newly allocated vector
F(x) = [
  x[1];
  x[1] + x[2]^2;
  sin(x[2]);
  exp(x[1] + 0.5)
]
x0 = ones(2)
nequ = 4
nls_out = ADNLSModel(F, x0, nequ)

# in-place residual: fills the preallocated vector `Fx`
F!(Fx, x) = begin
  Fx[1] = x[1]
  Fx[2] = x[1] + x[2]^2
  Fx[3] = sin(x[2])
  Fx[4] = exp(x[1] + 0.5)
  return Fx
end
nls_in = ADNLSModel!(F!, x0, nequ)
```
```@example ex1
Fx = rand(4)
@btime residual!(nls_out, x, Fx)
```

```@example ex1
@btime residual!(nls_in, x, Fx)
```
This phenomenon also extends to the related backends, for instance the Jacobian-vector product of the residual.
```@example ex1
Fx = rand(4)
v = ones(2)
@btime jprod_residual!(nls_out, x, v, Fx)
```

```@example ex1
@btime jprod_residual!(nls_in, x, v, Fx)
```
## Use only the needed operations

It is tempting to define the most generic and efficient `ADNLPModel`, with every backend prepared, from the start.

```@example ex2
using ADNLPModels, NLPModels, Symbolics
f(x) = (x[1] - x[2])^2
x0 = ones(2)
lcon = ucon = ones(1)
c_in(cx, x) = begin
  cx[1] = x[1]
  return cx
end
# show_time = true reports the time spent preparing each backend
nlp = ADNLPModel!(f, x0, c_in, lcon, ucon, show_time = true)
```
However, depending on the size of the problem, this may be time-consuming, as initializing each backend takes time.
Besides, some solvers do not require the whole API to solve a problem.
For instance, [`Percival.jl`](https://github.com/JuliaSmoothOptimizers/Percival.jl) is a matrix-free solver in the sense that it only uses `jprod`, `jtprod` and `hprod`.
```@example ex2
using Percival
stats = percival(nlp)
```

```@example ex2
nlp.counters
```

The counters show which parts of the API were actually called during the solve.
Therefore, in this case, it is more efficient to avoid preparing the Jacobian and Hessian backends altogether.
```@example ex2
nlp = ADNLPModel!(
  f, x0, c_in, lcon, ucon,
  jacobian_backend = ADNLPModels.EmptyADbackend,
  hessian_backend = ADNLPModels.EmptyADbackend,
  show_time = true,
)
```

or, equivalently, using the `matrix_free` keyword argument:

```@example ex2
nlp = ADNLPModel!(f, x0, c_in, lcon, ucon, show_time = true, matrix_free = true)
```
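Even with empty Jacobian and Hessian backends, the operator-style API remains available. Here is a minimal self-contained sketch (our addition, assuming the standard in-place product signatures from `NLPModels.jl`):

```julia
using ADNLPModels, NLPModels

f(x) = (x[1] - x[2])^2
x0 = ones(2)
lcon = ucon = ones(1)
c_in(cx, x) = (cx[1] = x[1]; cx)
nlp_mf = ADNLPModel!(f, x0, c_in, lcon, ucon, matrix_free = true)

v = ones(2)                   # direction in the variable space
Jv = zeros(1)
jprod!(nlp_mf, x0, v, Jv)     # Jacobian-vector product, no matrix is formed

y = ones(1)                   # Lagrange multipliers
Hv = zeros(2)
hprod!(nlp_mf, x0, y, v, Hv)  # Hessian-of-the-Lagrangian-vector product
```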
## Benchmarks

This package implements several backends for each method, and it is also possible to design your own backend (a hypothetical sketch follows).
One way to choose the most efficient backend for a given problem is to run benchmarks.
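Since the tutorial does not show one, here is a hypothetical sketch of a user-defined gradient backend. The interface we assume (subtyping `ADNLPModels.ADBackend`, a constructor taking `(nvar, f, ncon, c; kwargs...)`, and a method `ADNLPModels.gradient!(backend, g, f, x)`) should be checked against the documentation of your ADNLPModels version.

```julia
using ADNLPModels, ForwardDiff

# Hypothetical sketch: a custom gradient backend that delegates to ForwardDiff.
# The constructor signature and the `gradient!` hook are assumptions, not
# guaranteed by this tutorial.
struct MyGradientBackend <: ADNLPModels.ADBackend end
MyGradientBackend(nvar::Integer, f, ncon::Integer = 0, c = (args...) -> []; kwargs...) =
  MyGradientBackend()

function ADNLPModels.gradient!(::MyGradientBackend, g, f, x)
  ForwardDiff.gradient!(g, f, x)  # the actual differentiation
  return g
end
```

Such a backend could then be passed through the `gradient_backend` keyword argument, just like the predefined ones benchmarked below.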
```@example ex3
using ADNLPModels, NLPModels, OptimizationProblems
```
The package [`OptimizationProblems.jl`](https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl) provides a collection of optimization problems in JuMP and ADNLPModels syntax.
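As a minimal sketch (our addition), the same problem can be instantiated in both syntaxes; we use `arglina`, one of the problems in the collection, as an example.

```julia
using ADNLPModels, NLPModelsJuMP, OptimizationProblems

# The same test problem through both interfaces:
nlp_ad = OptimizationProblems.ADNLPProblems.arglina()                # an ADNLPModel
nlp_jump = MathOptNLPModel(OptimizationProblems.PureJuMP.arglina())  # a JuMP-backed NLPModel
```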
```@example ex3
meta = OptimizationProblems.meta;
```
We select the scalable problems, i.e. those whose size can be modified. By default, the size is close to `100`.
```@example ex3
# keep the constrained problems whose number of variables can be chosen
scalable_problems = meta[(meta.variable_nvar .== true) .& (meta.ncon .> 0), :name]
```
We compare three gradient backends, with the JuMP-based model from `NLPModelsJuMP.jl` as a reference.

```@example ex3
using NLPModelsJuMP, Zygote
list_backends = Dict(
  :forward => ADNLPModels.ForwardDiffADGradient,
  :reverse => ADNLPModels.ReverseDiffADGradient,
  :zygote => ADNLPModels.ZygoteADGradient,
)
```
```@example ex3
using DataFrames
# one DataFrame of results per backend, plus one for the JuMP reference model
nprob = length(scalable_problems)
stats = Dict{Symbol, DataFrame}()
for back in union(keys(list_backends), [:jump])
  stats[back] = DataFrame(
    "name" => scalable_problems,
    "time" => zeros(nprob),
    "allocs" => zeros(Int, nprob),
  )
end
```
```@example ex3
using BenchmarkTools
nscal = 1000
for name in scalable_problems
  n = eval(Meta.parse("OptimizationProblems.get_" * name * "_nvar(n = $(nscal))"))
  m = eval(Meta.parse("OptimizationProblems.get_" * name * "_ncon(n = $(nscal))"))
  @info " $(name) with $n vars and $m cons"
  global x = ones(n)
  global g = zeros(n)
  global pb = Meta.parse(name)
  # reference: the JuMP-based model
  global nlp = MathOptNLPModel(OptimizationProblems.PureJuMP.eval(pb)(n = nscal))
  b = @benchmark grad!(nlp, x, g)
  stats[:jump][stats[:jump].name .== name, :time] = [median(b.times)]
  stats[:jump][stats[:jump].name .== name, :allocs] = [median(b.allocs)]
  # same problem through ADNLPModels, one run per gradient backend
  for back in keys(list_backends)
    nlp = OptimizationProblems.ADNLPProblems.eval(pb)(n = nscal, gradient_backend = list_backends[back], matrix_free = true)
    b = @benchmark grad!(nlp, x, g)
    stats[back][stats[back].name .== name, :time] = [median(b.times)]
    stats[back][stats[back].name .== name, :allocs] = [median(b.allocs)]
  end
end
```
Finally, we compare the backends with performance profiles drawn by `SolverBenchmark.jl`.

```@example ex3
using Plots, SolverBenchmark
costnames = ["median time (in ns)", "median allocs"]
costs = [
  df -> df.time,
  df -> df.allocs,
]

gr()

profile_solvers(stats, costs, costnames)
```
