Commit 33510b3

incorporate suggestions
1 parent 2bb4f05 commit 33510b3

2 files changed: +72 -30 lines changed

paper/paper.bib

Lines changed: 26 additions & 0 deletions
@@ -85,3 +85,29 @@ @article{bezanson-edelman-karpinski-shah-2017
   doi = {10.1137/141000671},
   publisher = {SIAM},
 }
+
+@Misc{orban-siqueira-cutest-2020,
+  author = {D. Orban and A. S. Siqueira and {contributors}},
+  title = {{CUTEst.jl}: {J}ulia's {CUTEst} interface},
+  month = {October},
+  url = {https://github.com/JuliaSmoothOptimizers/CUTEst.jl},
+  year = {2020},
+  doi = {10.5281/zenodo.1188851},
+}
+
+@Misc{orban-siqueira-nlpmodels-2020,
+  author = {D. Orban and A. S. Siqueira and {contributors}},
+  title = {{NLPModels.jl}: Data Structures for Optimization Models},
+  month = {July},
+  url = {https://github.com/JuliaSmoothOptimizers/NLPModels.jl},
+  year = {2020},
+  doi = {10.5281/zenodo.2558627},
+}
+
+@Misc{jso,
+  author = {T. Migot and D. Orban and A. S. Siqueira},
+  title = {The {JuliaSmoothOptimizers} Ecosystem for Linear and Nonlinear Optimization},
+  year = {2021},
+  url = {https://juliasmoothoptimizers.github.io/},
+  doi = {10.5281/zenodo.2655082},
+}

paper/paper.md

Lines changed: 46 additions & 30 deletions
@@ -56,28 +56,20 @@ Moreover, they can handle cases where Hessian approximations are unbounded[@diou
 There exists a way to solve \eqref{eq:nlp} in Julia via [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl).
 It implements several proximal algorithms for nonsmooth optimization.
 However, the available examples only consider convex instances of $h$, namely the $\ell_1$ norm, and there are no tests for memory allocations.
-Moreover, it implements only one quasi-Newton method (L-BFGS) and does not support Hessian approximations via linear operators.
-In contrast, **RegularizedOptimization.jl** leverages [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl)[@leconte_linearoperators_jl_linear_operators_2023] to represent a variety of Hessian approximations, such as L-SR1, L-BFGS, and diagonal approximations.
+Moreover, it implements only one quasi-Newton method (L-BFGS) and does not support other Hessian approximations.
 
-**RegularizedOptimization.jl** implements a broad class of regularization-based algorithms for solving problems of the form $f(x) + h(x)$, where $f$ is smooth and $h$ is nonsmooth.
+**RegularizedOptimization.jl**, in contrast, implements a broad class of regularization-based algorithms for solving problems of the form $f(x) + h(x)$, where $f$ is smooth and $h$ is nonsmooth.
 The package offers a consistent API to formulate optimization problems and apply different regularization methods.
-It enables researchers to:
+It integrates seamlessly with the [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers) ecosystem, an academic organization for nonlinear optimization software development, testing, and benchmarking.
+Specifically, **RegularizedOptimization.jl** interoperates with:
 
-- Test and compare algorithms within a unified framework.
-- Switch between exact Hessians, quasi-Newton updates, and diagonal Hessian approximations via [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl).
-- Incorporate nonsmooth terms $h$ through proximal mappings.
+- **Definition of smooth problems $f$** via [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl) [@orban-siqueira-nlpmodels-2020], which provides a standardized Julia API for representing nonlinear programming (NLP) problems.
+Large collections of such problems are available in [CUTEst.jl](https://github.com/JuliaSmoothOptimizers/CUTEst.jl) [@orban-siqueira-cutest-2020] and [OptimizationProblems.jl](https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl).
+Another option is [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/RegularizedProblems.jl), which provides instances commonly used in the nonsmooth optimization literature.
+- **Hessian approximations (quasi-Newton, diagonal approximations)** via [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl), which represents Hessians as linear operators and implements efficient Hessian–vector products.
+- **Definition of nonsmooth terms $h$** via [ProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ProximalOperators.jl), which offers a large collection of nonsmooth functions, and [ShiftedProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ShiftedProximalOperators.jl), which provides shifted proximal mappings for nonsmooth functions.
 
-The design of the package is motivated by recent advances in the complexity analysis of regularization and trust-region methods.
-
-## Compatibility with JuliaSmoothOptimizers ecosystem
-
-**RegularizedOptimization.jl** integrates seamlessly with other [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers) packages:
-
-- **Definition of $f$** via [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/RegularizedProblems.jl), which provides efficient implementations of smooth problems $f$ together with their gradients.
-- **Model Hessians (quasi-Newton, diagonal approximations)** via [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl), which represents Hessians as linear operators and implements efficient Hessian–vector products.
-- **Definition of $h$** via [ProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ProximalOperators.jl), which offers a large collection of nonsmooth terms $h$, and [ShiftedProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ShiftedProximalOperators.jl), which provides shifted proximal mappings.
-
-This modularity makes it easy to prototype, benchmark, and extend regularization-based methods [@diouane-habiboullah-orban-2024],[@aravkin-baraldi-orban-2022],[@aravkin-baraldi-orban-2024] and[@leconte-orban-2023-2].
+This modularity makes it easy to benchmark existing solvers available in the repository [@diouane-habiboullah-orban-2024], [@aravkin-baraldi-orban-2022], [@aravkin-baraldi-orban-2024], and [@leconte-orban-2023-2].
 
 ## Support for inexact subproblem solves
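A minimal sketch of how these pieces plug together (hypothetical two-variable smooth term; the model, regularizer, and solver calls mirror the examples later in this diff):

```julia
using ADNLPModels, NLPModelsModifiers, ProximalOperators, RegularizedOptimization

# Smooth term f: any NLPModels.jl-compatible model; here a toy Rosenbrock objective
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2, 1.0])

# Hessian approximation represented as a linear operator (LinearOperators.jl under the hood)
f = LBFGSModel(nlp)

# Nonsmooth term h from ProximalOperators.jl
h = NormL1(0.1)

# Combine f + h into a regularized model and solve with the trust-region solver
reg_nlp = RegularizedNLPModel(f, h)
solver = TRSolver(reg_nlp)
stats = RegularizedExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats, x = f.meta.x0, atol = 1e-6, rtol = 1e-6)
```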
@@ -94,14 +86,15 @@ In contrast, many problems admit efficient implementations of Hessian–vector o
 ## In-place methods
 
 All solvers in **RegularizedOptimization.jl** are implemented in an in-place fashion, minimizing memory allocations and improving performance.
-This is particularly important for large-scale problems where memory usage can be a bottleneck.
+This is particularly important for large-scale problems, where memory usage can become a bottleneck.
+Even in low-dimensional settings, Julia may exhibit significantly slower performance due to extra allocations, making the in-place design a key feature of the package.
 
 # Examples
 
-A simple example is the solution of a regularized quadratic problem with an $\ell_1$ penalty, as described in [@aravkin-baraldi-orban-2022].
+A simple example is the solution of a regularized quadratic problem with an $\ell_0$ penalty, as described in [@aravkin-baraldi-orban-2022].
 Such problems are common in statistical learning and compressed sensing applications. The formulation is
 $$
-\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2}\|Ax-b\|_2^2+\lambda\|x\|_1,
+\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2}\|Ax-b\|_2^2+\lambda\|x\|_0,
 $$
 where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $\lambda>0$ is a regularization parameter.
 
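As background for the $\ell_0$-regularized example that follows (a standard fact about proximal mappings): for $h(x) = \lambda\|x\|_0$ and a step parameter $\nu > 0$, the proximal mapping evaluated through `NormL0` is hard thresholding,

$$
\operatorname{prox}_{\nu h}(y)_i =
\begin{cases}
y_i & \text{if } y_i^2 > 2\nu\lambda, \\
0 & \text{otherwise},
\end{cases}
$$

which is the componentwise operation the solvers apply at each iteration to handle the nonsmooth term.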
@@ -115,31 +108,54 @@ Random.seed!(1234)
 
 # Define a basis pursuit denoising problem
 compound = 10
-bpdn, bpdn_nls, sol = bpdn_model(compound)
+bpdn, _, _ = bpdn_model(compound)
 
 # Define the Hessian approximation
-f = LSR1Model(bpdn)
+f = SpectralGradientModel(bpdn)
 
-# Define the nonsmooth regularizer (L1 norm)
-λ = 1.0
-h = NormL1(λ)
+# Define the nonsmooth regularizer (L0 norm)
+λ = norm(grad(bpdn, zeros(bpdn.meta.nvar)), Inf) / 10
+h = NormL0(λ)
 
 # Define the regularized NLP model
 reg_nlp = RegularizedNLPModel(f, h)
 
-# Choose a solver (R2N) and execution statistics tracker
-solver_r2N = R2NSolver(reg_nlp)
+# Choose a solver (R2DH) and execution statistics tracker
+solver_r2dh = R2DHSolver(reg_nlp)
 stats = RegularizedExecutionStats(reg_nlp)
 
 # Solve the problem
-solve!(solver_r2N, reg_nlp, stats, x = f.meta.x0, σk = 1.0, atol = 1e-8, rtol = 1e-8, verbose = 1)
+solve!(solver_r2dh, reg_nlp, stats, x = f.meta.x0, σk = 1.0, atol = 1e-8, rtol = 1e-8, verbose = 1)
+```
+
+Another example is the FitzHugh-Nagumo inverse problem with an $\ell_1$ penalty, as described in [@aravkin-baraldi-orban-2022] and [@aravkin-baraldi-orban-2024].
+
+```julia
+using LinearAlgebra
+using DifferentialEquations, ProximalOperators
+using ADNLPModels, NLPModels, NLPModelsModifiers, RegularizedOptimization, RegularizedProblems
 
-# Choose another solver (TR) and execution statistics tracker
+# Define the FitzHugh-Nagumo inverse problem
+data, simulate, resid, misfit = RegularizedProblems.FH_smooth_term()
+fh_model = ADNLPModel(misfit, ones(5))
+
+# Define the Hessian approximation
+f = LBFGSModel(fh_model)
+
+# Define the nonsmooth regularizer (L1 norm)
+λ = 0.1
+h = NormL1(λ)
+
+# Define the regularized NLP model
+reg_nlp = RegularizedNLPModel(f, h)
+
+# Choose a solver (TR) and execution statistics tracker
 solver_tr = TRSolver(reg_nlp)
-stats_tr = RegularizedExecutionStats(reg_nlp)
+stats = RegularizedExecutionStats(reg_nlp)
 
 # Solve the problem
-solve!(solver_tr, reg_nlp, stats_tr, x = f.meta.x0, Δk = 1.0, atol = 1e-8, rtol = 1e-8, verbose = 1)
+solve!(solver_tr, reg_nlp, stats, x = f.meta.x0, atol = 1e-3, rtol = 1e-4, verbose = 10, ν = 1.0e+2)
 ```
 
 # Acknowledgements
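As a follow-up to the in-place design discussed in paper.md, a quick sketch of how the allocation claim can be checked, reusing `solver_r2dh`, `reg_nlp`, `stats`, and `f` from the first example (`@allocated` is Base Julia):

```julia
# The first call includes compilation; measure a second, warmed-up call
solve!(solver_r2dh, reg_nlp, stats, x = f.meta.x0, σk = 1.0)
bytes = @allocated solve!(solver_r2dh, reg_nlp, stats, x = f.meta.x0, σk = 1.0)
println("bytes allocated by solve!: ", bytes)  # expected to be small for an in-place solver
```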
