
Commit eaf2c37

tmigot and amontoison authored
Add tutorial with manual sparsity pattern (#209)
* Add tutorial with manual sparsity pattern --------- Co-authored-by: Alexis Montoison <[email protected]>
1 parent 3ef874f commit eaf2c37

File tree

2 files changed: +114 −0 lines changed

docs/make.jl

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@ makedocs(
     "Support multiple precision" => "generic.md",
     "Sparse Jacobian and Hessian" => "sparse.md",
     "Performance tips" => "performance.md",
+    "Provide sparsity pattern for sparse derivatives" => "sparsity_pattern.md",
     "Reference" => "reference.md",
   ],
 )

docs/src/sparsity_pattern.md

Lines changed: 113 additions & 0 deletions
# Improve sparse derivatives

In this tutorial, we show a feature of ADNLPModels.jl that can reduce the cost of computing sparse Jacobians and Hessians.

Our test problem is an academic investment control problem:

```math
\begin{aligned}
\min_{u,x} \quad & \int_0^1 (u(t) - 1) x(t) \, dt \\
\text{s.t.} \quad & \dot{x}(t) = \gamma u(t) x(t).
\end{aligned}
```
Using a simple quadrature formula for the objective functional and a forward finite difference for the differential equation, one can obtain a finite-dimensional continuous optimization problem.
One implementation is available in the package [`OptimizationProblems.jl`](https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl).
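Concretely, with step size `h = 1/N` and grid values `x[k] ≈ x(t_k)`, `u[k] ≈ u(t_k)` for `k = 1, …, N`, the discretized problem implemented by the functions `f` and `c!` below reads

```math
\begin{aligned}
\min_{x, u \in \mathbb{R}^N} \quad & \frac{h}{2} \sum_{k=1}^{N-1} \left( (u_k - 1) x_k + (u_{k+1} - 1) x_{k+1} \right) \\
\text{s.t.} \quad & x_{k+1} - x_k - \frac{h \gamma}{2} \left( u_k x_k + u_{k+1} x_{k+1} \right) = 0, \quad k = 1, \dots, N-1.
\end{aligned}
```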
```@example ex1
using ADNLPModels
using SparseArrays

T = Float64
n = 100000
N = div(n, 2)
h = 1 // N
x0 = 1
gamma = 3

# Trapezoidal quadrature of the objective functional ∫ (u - 1) x dt.
function f(y; N = N, h = h)
  @views x, u = y[1:N], y[(N + 1):end]
  return 1 // 2 * h * sum((u[k] - 1) * x[k] + (u[k + 1] - 1) * x[k + 1] for k = 1:(N - 1))
end

# Discretized dynamics: x[k + 1] - x[k] = h * gamma * (u[k] * x[k] + u[k + 1] * x[k + 1]) / 2.
function c!(cx, y; N = N, h = h, gamma = gamma)
  @views x, u = y[1:N], y[(N + 1):end]
  for k = 1:(N - 1)
    cx[k] = x[k + 1] - x[k] - 1 // 2 * h * gamma * (u[k] * x[k] + u[k + 1] * x[k + 1])
  end
  return cx
end

lvar = vcat(-T(Inf) * ones(T, N), zeros(T, N))
uvar = vcat(T(Inf) * ones(T, N), ones(T, N))
xi = vcat(ones(T, N), zeros(T, N))
lcon = ucon = vcat(one(T), zeros(T, N - 1))

@elapsed begin
  nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend)
end
```

`ADNLPModel` will automatically prepare an AD backend for computing sparse Jacobians and Hessians.
We disabled the Hessian backend here so that the timing above measures only the Jacobian preparation.
The keyword argument `show_time = true` can also be passed to the problem's constructor to get a detailed breakdown of the time spent preparing the AD backends.
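For instance, a minimal sketch of the same constructor call with timing output enabled (not evaluated here):

```julia
# Same model as above, but print how long each AD backend takes to set up.
nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon;
                  hessian_backend = ADNLPModels.EmptyADbackend,
                  show_time = true)
```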

```@example ex1
using NLPModels
x = sqrt(2) * ones(n)
jac_nln(nlp, x)
```
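To inspect the pattern that was detected automatically, the NLPModels API exposes the coordinate structure of the nonlinear Jacobian, for example:

```julia
rows, cols = jac_nln_structure(nlp)  # row and column indices of the structural nonzeros
length(rows)                         # number of structural nonzeros detected
```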

However, for a given function, determining the sparsity patterns of the Jacobian and of the Hessian of the Lagrangian can be rather costly.
The good news is that this pattern is often easy to determine a priori, especially for problems like our optimal control investment problem and, more generally, for problems whose constraints discretize differential equations.
Here, constraint `k` involves only `x[k]`, `x[k + 1]`, `u[k]`, and `u[k + 1]`, so row `k` of the Jacobian has exactly four nonzero entries.

The following example instantiates the Jacobian backend while manually providing this sparsity pattern.

```@example ex2
using ADNLPModels
using SparseArrays

T = Float64
n = 100000
N = div(n, 2)
h = 1 // N
x0 = 1
gamma = 3

function f(y; N = N, h = h)
  @views x, u = y[1:N], y[(N + 1):end]
  return 1 // 2 * h * sum((u[k] - 1) * x[k] + (u[k + 1] - 1) * x[k + 1] for k = 1:(N - 1))
end

function c!(cx, y; N = N, h = h, gamma = gamma)
  @views x, u = y[1:N], y[(N + 1):end]
  for k = 1:(N - 1)
    cx[k] = x[k + 1] - x[k] - 1 // 2 * h * gamma * (u[k] * x[k] + u[k + 1] * x[k + 1])
  end
  return cx
end

lvar = vcat(-T(Inf) * ones(T, N), zeros(T, N))
uvar = vcat(T(Inf) * ones(T, N), ones(T, N))
xi = vcat(ones(T, N), zeros(T, N))
lcon = ucon = vcat(one(T), zeros(T, N - 1))

@elapsed begin
  # Constraint k depends on x[k], x[k + 1], u[k], and u[k + 1]:
  # row k of the Jacobian has nonzeros in columns k, k + 1, N + k, and N + k + 1.
  Is = Vector{Int}(undef, 4 * (N - 1))
  Js = Vector{Int}(undef, 4 * (N - 1))
  Vs = ones(Bool, 4 * (N - 1))
  for i = 1:(N - 1)
    Is[((i - 1) * 4 + 1):(i * 4)] = [i; i; i; i]
    Js[((i - 1) * 4 + 1):(i * 4)] = [i; i + 1; N + i; N + i + 1]
  end
  J = sparse(Is, Js, Vs, N - 1, n)

  jac_back = ADNLPModels.SparseADJacobian(n, f, N - 1, c!, J)
  nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend, jacobian_backend = jac_back)
end
```

We recover the same Jacobian.

```@example ex2
using NLPModels
x = sqrt(2) * ones(n)
jac_nln(nlp, x)
```
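As a quick sanity check (continuing the example above), one way to verify that the provided pattern is the one actually used is to compare the number of structural nonzeros recorded in the model's metadata with the pattern `J`:

```julia
nlp.meta.nnzj == nnz(J)  # both should report 4 * (N - 1) structural nonzeros
```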

The same can be done for the Hessian of the Lagrangian.
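For this problem, the only second-order couplings are between `x[k]` and `u[k]`: both the objective and the constraints are bilinear in `x` and `u`, so the lower triangle of the Hessian of the Lagrangian has a single nonzero per pair, at row `N + k` and column `k`. Below is a minimal sketch, assuming a `SparseADHessian` backend that accepts a precomputed lower-triangular pattern in the same way `SparseADJacobian` does above (not evaluated here):

```julia
# Lower-triangular sparsity pattern of the Hessian of the Lagrangian:
# the only mixed second derivatives couple u[k] (index N + k) with x[k] (index k).
Is = [N + k for k = 1:N]
Js = collect(1:N)
Vs = ones(Bool, N)
H = sparse(Is, Js, Vs, n, n)

# Assumed constructor, mirroring ADNLPModels.SparseADJacobian above.
hess_back = ADNLPModels.SparseADHessian(n, f, N - 1, c!, H)
nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon;
                  jacobian_backend = jac_back, hessian_backend = hess_back)
```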
