Commit 6c5578d: Add a function get_sparsity_pattern

1 parent ffbaba9

File tree

8 files changed (+177, -33 lines)


docs/make.jl

Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ makedocs(
     "Support multiple precision" => "generic.md",
     "Sparse Jacobian and Hessian" => "sparse.md",
     "Performance tips" => "performance.md",
-    "Provide sparsity pattern for sparse derivatives" => "sparsity_pattern.md",
+    "Providing sparsity pattern for sparse derivatives" => "sparsity_pattern.md",
     "Reference" => "reference.md",
   ],
 )

docs/src/sparse.md

Lines changed: 79 additions & 31 deletions

@@ -1,14 +1,14 @@
-# Sparse Hessian and Jacobian computations
+# [Sparse Hessian and Jacobian computations](@id sparse)
 
-It is to be noted that by default the Jacobian and Hessian are sparse.
+By default, the Jacobian and Hessian are treated as sparse.
 
 ```@example ex1
 using ADNLPModels, NLPModels
 
 f(x) = (x[1] - 1)^2
 T = Float64
 x0 = T[-1.2; 1.0]
-lvar, uvar = zeros(T, 2), ones(T, 2) # must be of same type than `x0`
+lvar, uvar = zeros(T, 2), ones(T, 2)
 lcon, ucon = -T[0.5], T[0.5]
 c!(cx, x) = begin
   cx[1] = x[2]
@@ -18,7 +18,7 @@ nlp = ADNLPModel!(f, x0, lvar, uvar, c!, lcon, ucon, backend = :optimized)
 ```
 
 ```@example ex1
-(get_nnzj(nlp), get_nnzh(nlp)) # number of nonzeros elements in the Jacobian and Hessian
+(get_nnzj(nlp), get_nnzh(nlp)) # Number of nonzero elements in the Jacobian and Hessian
 ```
 
 ```@example ex1
@@ -31,24 +31,67 @@ x = rand(T, 2)
 H = hess(nlp, x)
 ```
 
-The available backends for sparse derivatives (`SparseADJacobian`, `SparseADHessian` and `SparseReverseADHessian`) have keyword arguments `detector` and `coloring_algorithm` to specify the sparsity pattern detector and the coloring algorithm, respectively.
+The backends available for sparse derivatives (`SparseADJacobian`, `SparseADHessian`, and `SparseReverseADHessian`) allow for customization through keyword arguments such as `detector` and `coloring_algorithm`.
+These arguments specify the sparsity pattern detector and the coloring algorithm, respectively.
 
 - A **`detector`** must be of type `ADTypes.AbstractSparsityDetector`.
-  The default detector is `TracerSparsityDetector()` from the package `SparseConnectivityTracer.jl`.
-  Prior to version 0.8.0, the default detector was `SymbolicSparsityDetector()` from `Symbolics.jl`.
+  The default detector is `TracerSparsityDetector()` from the package `SparseConnectivityTracer.jl`.
+  Prior to version 0.8.0, the default was `SymbolicSparsityDetector()` from `Symbolics.jl`.
 
 - A **`coloring_algorithm`** must be of type `SparseMatrixColorings.GreedyColoringAlgorithm`.
-  The default algorithm is `GreedyColoringAlgorithm{:direct}()` for `SparseADJacobian` and `SparseADHessian`, while it is `GreedyColoringAlgorithm{:substitution}()` for `SparseReverseADHessian`.
-  These algorithms are available in the package `SparseMatrixColorings.jl`.
+  The default algorithm is `GreedyColoringAlgorithm{:direct}()` for `SparseADJacobian` and `SparseADHessian`, while it is `GreedyColoringAlgorithm{:substitution}()` for `SparseReverseADHessian`.
+  These algorithms are provided by the package `SparseMatrixColorings.jl`.
 
 The `GreedyColoringAlgorithm{:direct}()` performs column coloring for Jacobians and star coloring for Hessians.
-In contrast, `GreedyColoringAlgorithm{:substitution}()` applies acyclic coloring for Hessians.
-The `:substitution` coloring mode usually finds fewer colors than the `:direct` mode and thus fewer directional derivatives are needed to recover all non-zeros of the sparse Hessian.
-However, it requires storing the compressed sparse Hessian, while `:direct` coloring only stores one column of the compressed Hessian.
+In contrast, `GreedyColoringAlgorithm{:substitution}()` applies acyclic coloring for Hessians. The `:substitution` mode generally requires fewer colors than `:direct`, so fewer directional derivatives are needed to reconstruct the sparse Hessian.
+However, it necessitates storing the compressed sparse Hessian, while `:direct` coloring only requires storage for one column of the compressed Hessian.
 
-The `:direct` coloring mode is numerically more stable and may be preferable for highly ill-conditioned Hessian as it doesn't require solving triangular systems to compute the non-zeros from the compressed Hessian.
+The `:direct` coloring mode is numerically more stable and may be preferable for highly ill-conditioned Hessians, as it avoids solving triangular systems to compute nonzero entries from the compressed Hessian.
+
+## Extracting sparsity patterns
+
+`ADNLPModels.jl` provides the function `get_sparsity_pattern` to retrieve the sparsity patterns of the Jacobian or Hessian from a model.
+
+```@docs
+get_sparsity_pattern
+```
+
+```@example ex3
+using SparseArrays, ADNLPModels, NLPModels
+
+nvar = 10
+ncon = 5
+
+f(x) = sum((x[i] - i)^2 for i = 1:nvar) + x[nvar] * sum(x[j] for j = 1:nvar-1)
+
+function c!(cx, x)
+  cx[1] = x[1] + x[2]
+  cx[2] = x[1] + x[2] + x[3]
+  cx[3] = x[2] + x[3] + x[4]
+  cx[4] = x[3] + x[4] + x[5]
+  cx[5] = x[4] + x[5]
+  return cx
+end
+
+T = Float64
+x0 = -ones(T, nvar)
+lvar = zeros(T, nvar)
+uvar = 2 * ones(T, nvar)
+lcon = -0.5 * ones(T, ncon)
+ucon = 0.5 * ones(T, ncon)
+
+nlp = ADNLPModel!(f, x0, lvar, uvar, c!, lcon, ucon)
+
+J = get_sparsity_pattern(nlp, :jacobian)
+H = get_sparsity_pattern(nlp, :hessian)
+```
+
+## Using known sparsity patterns
+
+If the sparsity pattern of the Jacobian or the Hessian is already known, you can provide it directly.
+This may happen when the pattern is derived from the application or has been computed previously and saved for reuse.
+Note that both the lower and upper triangular parts of the Hessian are required during the coloring phase.
 
-If the sparsity pattern of the Jacobian of the constraint or the Hessian of the Lagrangian is available, you can directly provide them.
 ```@example ex2
 using SparseArrays, ADNLPModels, NLPModels
@@ -57,17 +100,17 @@ ncon = 5
 
 f(x) = sum((x[i] - i)^2 for i = 1:nvar) + x[nvar] * sum(x[j] for j = 1:nvar-1)
 
-H = SparseMatrixCSC{Bool, Int64}(
-  [ 1 0 0 0 0 0 0 0 0 1 ;
-    0 1 0 0 0 0 0 0 0 1 ;
-    0 0 1 0 0 0 0 0 0 1 ;
-    0 0 0 1 0 0 0 0 0 1 ;
-    0 0 0 0 1 0 0 0 0 1 ;
-    0 0 0 0 0 1 0 0 0 1 ;
-    0 0 0 0 0 0 1 0 0 1 ;
-    0 0 0 0 0 0 0 1 0 1 ;
-    0 0 0 0 0 0 0 0 1 1 ;
-    1 1 1 1 1 1 1 1 1 1 ]
+H = SparseMatrixCSC{Bool,Int}(
+  [ 1 0 0 0 0 0 0 0 0 1 ;
+    0 1 0 0 0 0 0 0 0 1 ;
+    0 0 1 0 0 0 0 0 0 1 ;
+    0 0 0 1 0 0 0 0 0 1 ;
+    0 0 0 0 1 0 0 0 0 1 ;
+    0 0 0 0 0 1 0 0 0 1 ;
+    0 0 0 0 0 0 1 0 0 1 ;
+    0 0 0 0 0 0 0 1 0 1 ;
+    0 0 0 0 0 0 0 0 1 1 ;
+    1 1 1 1 1 1 1 1 1 1 ]
 )
 
 function c!(cx, x)
@@ -79,12 +122,12 @@ function c!(cx, x)
   return cx
 end
 
-J = SparseMatrixCSC{Bool, Int64}(
-  [ 1 1 0 0 0 ;
-    1 1 1 0 0 ;
-    0 1 1 1 0 ;
-    0 0 1 1 1 ;
-    0 0 0 1 1 ]
+J = SparseMatrixCSC{Bool,Int}(
+  [ 1 1 0 0 0 0 0 0 0 0 ;
+    1 1 1 0 0 0 0 0 0 0 ;
+    0 1 1 1 0 0 0 0 0 0 ;
+    0 0 1 1 1 0 0 0 0 0 ;
+    0 0 0 1 1 0 0 0 0 0 ]
 )
 
 T = Float64
@@ -100,6 +143,11 @@ H_backend = ADNLPModels.SparseADHessian(nvar, f, ncon, c!, H)
 nlp = ADNLPModel!(f, x0, lvar, uvar, c!, lcon, ucon, jacobian_backend=J_backend, hessian_backend=H_backend)
 ```
 
+The section ["Providing sparsity pattern for sparse derivatives"](@ref sparsity-pattern) illustrates this feature with a more advanced application.
+
+### Acknowledgements
+
 The package [`SparseConnectivityTracer.jl`](https://github.com/adrhill/SparseConnectivityTracer.jl) is used to compute the sparsity pattern of Jacobians and Hessians.
 The evaluation of the number of directional derivatives and the seeds required to compute compressed Jacobians and Hessians is performed using [`SparseMatrixColorings.jl`](https://github.com/gdalle/SparseMatrixColorings.jl).
 As of release v0.8.1, it has replaced [`ColPack.jl`](https://github.com/exanauts/ColPack.jl).
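The mechanics behind these coloring modes can be sketched outside the diff: columns sharing no row (structurally orthogonal columns) can receive the same color and be probed with a single directional derivative. The following minimal Julia example, a standalone sketch using only the standard `SparseArrays` library and a hand-chosen coloring (not produced by `SparseMatrixColorings.jl`), recovers a sparse Jacobian from its compressed form in `:direct` fashion:

```julia
using SparseArrays

# 5x5 banded pattern: column j touches rows j-1, j, j+1 (clipped to 1:5).
J = sparse([1, 2, 1, 2, 3, 2, 3, 4, 3, 4, 5, 4, 5],
           [1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5],
           [2.0, 1.0, 1.0, 2.0, 1.0, 1.0, 2.0, 1.0, 1.0, 2.0, 1.0, 1.0, 2.0])
n = size(J, 2)

# Hand-chosen valid coloring: columns with the same color share no row.
colors = [1, 2, 3, 1, 2]
ncolors = maximum(colors)

# Seed matrix: one probe direction per color.
seeds = zeros(n, ncolors)
for j in 1:n
  seeds[j, colors[j]] = 1.0
end

# Compressed Jacobian: ncolors directional derivatives instead of n.
B = Matrix(J) * seeds

# Direct recovery: each stored nonzero J[i, j] is read off column colors[j] of B.
J_rec = spzeros(5, 5)
for j in 1:n, k in nzrange(J, j)
  i = rowvals(J)[k]
  J_rec[i, j] = B[i, colors[j]]
end
```

Here 3 directional derivatives suffice to recover all 13 nonzeros, instead of the 5 a dense approach would use.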

docs/src/sparsity_pattern.md

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-# Improve sparse derivatives
+# [Improve sparse derivatives](@id sparsity-pattern)
 
 In this tutorial, we show a feature of ADNLPModels.jl to potentially improve the computation of sparse Jacobian and Hessian.
 

src/sparsity_pattern.jl

Lines changed: 62 additions & 0 deletions

@@ -1,3 +1,5 @@
+export get_sparsity_pattern
+
 """
     compute_jacobian_sparsity(c, x0; detector)
     compute_jacobian_sparsity(c!, cx, x0; detector)
@@ -51,3 +53,63 @@ function compute_hessian_sparsity(
   S = ADTypes.hessian_sparsity(lagrangian, x0, detector)
   return S
 end
+
+"""
+    S = get_sparsity_pattern(model::ADModel, derivate::Symbol)
+
+Retrieve the sparsity pattern of a Jacobian or Hessian from an `ADModel`.
+For the Hessian, only the lower triangular part of its sparsity pattern is returned.
+The user can reconstruct the upper triangular part by exploiting symmetry.
+
+To compute the sparsity pattern, the model must use a sparse backend.
+Supported backends include `SparseADJacobian`, `SparseADHessian`, and `SparseReverseADHessian`.
+
+#### Input arguments
+
+* `model`: An automatic differentiation model (either `AbstractADNLPModel` or `AbstractADNLSModel`).
+* `derivate`: The type of derivative for which the sparsity pattern is needed. The supported values are `:jacobian`, `:hessian`, `:jacobian_residual` and `:hessian_residual`.
+
+#### Output argument
+
+* `S`: A sparse matrix of type `SparseMatrixCSC{Bool,Int}` indicating the sparsity pattern of the requested derivative.
+"""
+function get_sparsity_pattern(model::ADModel, derivate::Symbol)
+  if (derivate != :jacobian) && (derivate != :hessian)
+    if model isa AbstractADNLPModel
+      error("The only supported sparse derivatives for an AbstractADNLPModel are `:jacobian` and `:hessian`.")
+    elseif (derivate != :jacobian_residual) && (derivate != :hessian_residual)
+      error("The only supported sparse derivatives for an AbstractADNLSModel are `:jacobian`, `:jacobian_residual`, `:hessian` and `:hessian_residual`.")
+    end
+  end
+  if (derivate == :jacobian) || (derivate == :jacobian_residual)
+    backend = derivate == :jacobian ? model.adbackend.jacobian_backend : model.adbackend.jacobian_residual_backend
+    if backend isa SparseADJacobian
+      m = derivate == :jacobian ? model.meta.ncon : model.nls_meta.nequ
+      n = model.meta.nvar
+      colptr = backend.colptr
+      rowval = backend.rowval
+      nnzJ = length(rowval)
+      nzval = ones(Bool, nnzJ)
+      J = SparseMatrixCSC(m, n, colptr, rowval, nzval)
+      return J
+    else
+      B = typeof(backend)
+      error("The current backend ($B) doesn't compute a sparse Jacobian.")
+    end
+  end
+  if (derivate == :hessian) || (derivate == :hessian_residual)
+    backend = derivate == :hessian ? model.adbackend.hessian_backend : model.adbackend.hessian_residual_backend
+    if (backend isa SparseADHessian) || (backend isa SparseReverseADHessian)
+      n = model.meta.nvar
+      colptr = backend.colptr
+      rowval = backend.rowval
+      nnzH = length(rowval)
+      nzval = ones(Bool, nnzH)
+      H = SparseMatrixCSC(n, n, colptr, rowval, nzval)
+      return H
+    else
+      B = typeof(backend)
+      error("The current backend ($B) doesn't compute a sparse Hessian.")
+    end
+  end
+end
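The CSC reconstruction at the heart of `get_sparsity_pattern` can be illustrated on its own: given the `colptr` and `rowval` fields stored by a sparse backend, a Boolean pattern matrix is assembled by pairing them with an all-`true` `nzval`. A minimal standalone sketch with hard-coded fields (the values here are illustrative, chosen to match the 3x2 Jacobian pattern used in the tests below):

```julia
using SparseArrays

# CSC fields of the 3x2 pattern [1 0; 1 1; 0 1]:
m, n = 3, 2
colptr = [1, 3, 5]      # column j occupies rowval[colptr[j]:colptr[j+1]-1]
rowval = [1, 2, 2, 3]   # row indices of the structural nonzeros
nzval = ones(Bool, length(rowval))  # every stored entry is `true`

S = SparseMatrixCSC(m, n, colptr, rowval, nzval)
```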

test/sparse_hessian.jl

Lines changed: 8 additions & 0 deletions

@@ -63,6 +63,14 @@ dt = (Float32, Float64)
   H = sparse(rows, cols, vals, nvar, nvar)
   @test H == [x[2] 0; x[1]+x[2] x[1]] + y[2] * [-20 0; 0 0]
 
+  if (backend == ADNLPModels.SparseADHessian) || (backend == ADNLPModels.SparseReverseADHessian)
+    H_sp = get_sparsity_pattern(nlp, :hessian)
+    @test H_sp == SparseMatrixCSC{Bool,Int}(
+      [ 1 0 ;
+        1 1 ]
+    )
+  end
+
   nlp = ADNLPModel!(
     x -> x[1] * x[2]^2 + x[1]^2 * x[2],
     x0,
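Since `get_sparsity_pattern` returns only the lower triangular part of the Hessian pattern, the full symmetric pattern can be recovered by OR-ing with the transpose, as the docstring suggests. A standalone sketch using the 2x2 pattern from this test:

```julia
using SparseArrays

# Lower triangular Hessian pattern, as returned by get_sparsity_pattern.
H_lower = SparseMatrixCSC{Bool,Int}(
  [ 1 0 ;
    1 1 ]
)

# Full symmetric pattern: a nonzero at (i, j) implies one at (j, i).
H_full = H_lower .| H_lower'
```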

test/sparse_hessian_nls.jl

Lines changed: 8 additions & 0 deletions

@@ -50,6 +50,14 @@ dt = (Float32, Float64)
   H = Symmetric(sparse(rows, cols, vals, nvar, nvar), :L)
   @test H == [-20*v[2] 0; 0 0]
 
+  if (backend == ADNLPModels.SparseADHessian) || (backend == ADNLPModels.SparseReverseADHessian)
+    H_sp = get_sparsity_pattern(nls, :hessian_residual)
+    @test H_sp == SparseMatrixCSC{Bool,Int}(
+      [ 1 0 ;
+        0 0 ]
+    )
+  end
+
   nls = ADNLPModels.ADNLSModel!(F!, x0, 3, matrix_free = true; kw...)
   @test nls.adbackend.hessian_backend isa ADNLPModels.EmptyADbackend
   @test nls.adbackend.hessian_residual_backend isa ADNLPModels.EmptyADbackend

test/sparse_jacobian.jl

Lines changed: 9 additions & 0 deletions

@@ -51,6 +51,15 @@ dt = (Float32, Float64)
     0 1
   ]
 
+  if backend == ADNLPModels.SparseADJacobian
+    J_sp = get_sparsity_pattern(nlp, :jacobian)
+    @test J_sp == SparseMatrixCSC{Bool,Int}(
+      [ 1 0 ;
+        1 1 ;
+        0 1 ]
+    )
+  end
+
   nlp = ADNLPModel!(x -> sum(x), x0, c!, zeros(T, ncon), zeros(T, ncon), matrix_free = true; kw...)
   @test nlp.adbackend.jacobian_backend isa ADNLPModels.EmptyADbackend
 end

test/sparse_jacobian_nls.jl

Lines changed: 9 additions & 0 deletions

@@ -43,6 +43,15 @@ dt = (Float32, Float64)
     0 1
   ]
 
+  if backend == ADNLPModels.SparseADJacobian
+    J_sp = get_sparsity_pattern(nls, :jacobian_residual)
+    @test J_sp == SparseMatrixCSC{Bool,Int}(
+      [ 1 0 ;
+        1 1 ;
+        0 1 ]
+    )
+  end
+
   nls = ADNLPModels.ADNLSModel!(F!, x0, 3, matrix_free = true; kw...)
   @test nls.adbackend.jacobian_backend isa ADNLPModels.EmptyADbackend
   @test nls.adbackend.jacobian_residual_backend isa ADNLPModels.EmptyADbackend
