Moreover, they can handle cases where Hessian approximations are unbounded [@diouane-habiboullah-orban-2024].
Problem \eqref{eq:nlp} can already be solved in Julia via [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl).
It implements several proximal algorithms for nonsmooth optimization.
However, the available examples only consider convex instances of $h$, namely the $\ell_1$ norm, and there are no tests for memory allocations.
Moreover, it implements only one quasi-Newton method (L-BFGS) and does not support other Hessian approximations.
**RegularizedOptimization.jl**, in contrast, implements a broad class of regularization-based algorithms for solving problems of the form $f(x) + h(x)$, where $f$ is smooth and $h$ is nonsmooth.
The package offers a consistent API to formulate optimization problems and apply different regularization methods.
It integrates seamlessly with the [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers) ecosystem, an academic organization for nonlinear optimization software development, testing, and benchmarking. The main integration points, combined in the short sketch after this list, are:
- **Definition of smooth problems $f$** via [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl) [@orban-siqueira-nlpmodels-2020], which provides a standardized Julia API for representing nonlinear programming (NLP) problems.
  Large collections of such problems are available in [CUTEst.jl](https://github.com/JuliaSmoothOptimizers/CUTEst.jl) [@orban-siqueira-cutest-2020] and [OptimizationProblems.jl](https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl).
  Another option is to use [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/RegularizedProblems.jl), which provides instances commonly used in the nonsmooth optimization literature.
- **Hessian approximations (quasi-Newton and diagonal)** via [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl) [@leconte_linearoperators_jl_linear_operators_2023], which represents Hessians as linear operators and implements efficient Hessian–vector products.
- **Definition of nonsmooth terms $h$** via [ProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ProximalOperators.jl), which offers a large collection of nonsmooth functions, and [ShiftedProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ShiftedProximalOperators.jl), which provides shifted proximal mappings for nonsmooth functions.
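
The following minimal sketch shows how these components compose; the objective and data here are arbitrary toy choices, not an example shipped with the package:

```julia
using ADNLPModels, NLPModelsModifiers, ProximalOperators

# Smooth term f, modeled through the NLPModels API with AD-based derivatives
f = ADNLPModel(x -> sum((x .- 1).^2), zeros(2))

# Swap the exact Hessian of f for a limited-memory SR1 approximation
f_lsr1 = LSR1Model(f)

# Nonsmooth term h: the ℓ1 norm with unit weight
h = NormL1(1.0)
```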
This modularity makes it easy to benchmark the solvers available in the package [@diouane-habiboullah-orban-2024], [@aravkin-baraldi-orban-2022], [@aravkin-baraldi-orban-2024], and [@leconte-orban-2023-2].
## Support for inexact subproblem solves
In contrast, many problems admit efficient implementations of Hessian–vector products.
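
For instance, the NLPModels API exposes Hessian–vector products directly, so the Hessian never needs to be formed; a small sketch on an assumed toy objective:

```julia
using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> x[1]^4 + x[2]^2, ones(2))  # toy smooth problem

x = nlp.meta.x0
v = ones(2)
Hv = hprod(nlp, x, v)  # Hessian–vector product without forming the Hessian
H = hess_op(nlp, x)    # the same Hessian wrapped as a linear operator
H * v                  # applies hprod under the hood
```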
## In-place methods
All solvers in **RegularizedOptimization.jl** are implemented in an in-place fashion, minimizing memory allocations and improving performance.
This is particularly important for large-scale problems, where memory usage can become a bottleneck.
Even in low-dimensional settings, Julia may exhibit significantly slower performance due to extra allocations, making the in-place design a key feature of the package.
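
As a generic illustration of the in-place style (not the package's actual solver internals), compare an allocating gradient-step update with its allocation-free counterpart:

```julia
# Allocating: builds a fresh vector on every call
step(x, g, α) = x .- α .* g

# In-place: overwrites a preallocated buffer instead
function step!(out, x, g, α)
    @. out = x - α * g  # fused broadcast, no temporary arrays
    return out
end

x, g = rand(1000), rand(1000)
buffer = similar(x)
step!(buffer, x, g, 0.1)  # buffer is reused across iterations
```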
# Examples
A simple example is the solution of a regularized quadratic problem with an $\ell_1$ penalty, as described in [@aravkin-baraldi-orban-2022].
Such problems are common in statistical learning and compressed sensing applications.
The formulation is
$$\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2} \|Ax - b\|_2^2 + \lambda \|x\|_1,$$
where $A$ and $b$ are given data and $\lambda > 0$ is the regularization weight.
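
Such an instance can be assembled in a few lines; the sketch below uses random data for illustration (RegularizedProblems.jl ships ready-made variants of this problem):

```julia
using ADNLPModels, LinearAlgebra, ProximalOperators

A, b, λ = randn(10, 20), randn(10), 0.1
f = ADNLPModel(x -> 0.5 * norm(A * x - b)^2, zeros(20))  # smooth quadratic misfit
h = NormL1(λ)                                            # ℓ1 penalty with weight λ
```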
Another example is the FitzHugh-Nagumo inverse problem with an $\ell_1$ penalty, as described in [@aravkin-baraldi-orban-2022] and [@aravkin-baraldi-orban-2024].
```julia
# Packages used to model and solve the FitzHugh-Nagumo inverse problem
using LinearAlgebra
using DifferentialEquations, ProximalOperators
using ADNLPModels, NLPModels, NLPModelsModifiers, RegularizedOptimization, RegularizedProblems
```