
Commit 7d73828

Merge branch 'JuliaSmoothOptimizers:paper' into paper
2 parents: c5f95a3 + 83e66ad

File tree

7 files changed: +23, -5 lines


CITATION.bib

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,5 +1,5 @@
-@Misc{baraldi-leconte-orban-regularized-optimization-2024,
-  author = {R. Baraldi and G. Leconte and D. Orban},
+@Misc{baraldi-diouane-gollier-habiboullah-leconte-orban-regularized-optimization-2024,
+  author = {R. Baraldi and Y. Diouane and M. Gollier and M. L. Habiboullah and G. Leconte and D. Orban},
   title = {{RegularizedOptimization.jl}: Algorithms for Regularized Optimization},
   month = {September},
   howpublished = {\url{https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl}},
```

Project.toml

Lines changed: 7 additions & 2 deletions
```diff
@@ -1,6 +1,11 @@
 name = "RegularizedOptimization"
-uuid = "196f2941-2d58-45ba-9f13-43a2532b2fa8"
-author = ["Robert Baraldi <[email protected]> and Dominique Orban <[email protected]>"]
+uuid = "20620ad1-4fe4-4467-ae46-fb087718fe7b"
+author = ["Robert Baraldi <[email protected]>",
+          "Youssef Diouane <[email protected]>",
+          "Maxence Gollier <[email protected]>",
+          "Mohamed Laghdaf Habiboullah <[email protected]>",
+          "Geoffroy Leconte <[email protected]>",
+          "Dominique Orban <[email protected]>"]
 version = "0.1.0"
 
 [deps]
```

README.md

Lines changed: 5 additions & 1 deletion
````diff
@@ -26,7 +26,7 @@ Both f and h can be nonconvex.
 
 To install the package, hit `]` from the Julia command line to enter the package manager and type
 ```julia
-pkg> add https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl
+pkg> add RegularizedOptimization
 ```
 
 ## What is Implemented?
@@ -56,3 +56,7 @@ Please refer to the documentation.
   abstract = { We develop a trust-region method for minimizing the sum of a smooth term (f) and a nonsmooth term (h), both of which can be nonconvex. Each iteration of our method minimizes a possibly nonconvex model of (f + h) in a trust region. The model coincides with (f + h) in value and subdifferential at the center. We establish global convergence to a first-order stationary point when (f) satisfies a smoothness condition that holds, in particular, when it has a Lipschitz-continuous gradient, and (h) is proper and lower semicontinuous. The model of (h) is required to be proper, lower semi-continuous and prox-bounded. Under these weak assumptions, we establish a worst-case (O(1/\epsilon^2)) iteration complexity bound that matches the best known complexity bound of standard trust-region methods for smooth optimization. We detail a special instance, named TR-PG, in which we use a limited-memory quasi-Newton model of (f) and compute a step with the proximal gradient method, resulting in a practical proximal quasi-Newton method. We establish similar convergence properties and complexity bound for a quadratic regularization variant, named R2, and provide an interpretation as a proximal gradient method with adaptive step size for nonconvex problems. R2 may also be used to compute steps inside the trust-region method, resulting in an implementation named TR-R2. We describe our Julia implementations and report numerical results on inverse problems from sparse optimization and signal processing. Both TR-PG and TR-R2 exhibit promising performance and compare favorably with two linesearch proximal quasi-Newton methods based on convex models. }
 }
 ```
+
+## Contributing
+
+Please refer to [this](https://jso.dev/contributing/) for contribution guidelines.
````
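The install hunk switches from adding the package by URL to adding it by name, which only works once the package is available in a registry. For non-interactive setups (scripts, CI), the equivalent Pkg API call is shown below; this is a minimal sketch using only standard Pkg functions:

```julia
# Non-interactive equivalent of `pkg> add RegularizedOptimization`.
using Pkg
Pkg.add("RegularizedOptimization")  # assumes the package is registered

using RegularizedOptimization  # load it after installation
```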

paper/examples/Project.toml

Lines changed: 5 additions & 0 deletions
```diff
@@ -1,7 +1,12 @@
 [deps]
+LaTeXStrings = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
 MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
 NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6"
 NLPModelsModifiers = "e01155f1-5c6f-4375-a9d8-616dd036575f"
 PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
+RegularizedOptimization = "196f2941-2d58-45ba-9f13-43a2532b2fa8"
 RegularizedProblems = "ea076b23-609f-44d2-bb12-a4ae45328278"
 ShiftedProximalOperators = "d4fd37fa-580c-4e43-9b30-361c21aae263"
+
+[sources]
+RegularizedOptimization = {path = "../.."}
```
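The new `[sources]` table (supported by Pkg since Julia 1.11) makes the examples environment resolve `RegularizedOptimization` to the local checkout two levels up instead of a registered release, so the paper examples always run against the in-repository code. A minimal sketch of using that environment, assuming it is executed from the repository root:

```julia
# Activate and instantiate the examples environment; the [sources]
# entry maps RegularizedOptimization to the local path ../.. relative
# to paper/examples, i.e. the repository root.
using Pkg
Pkg.activate("paper/examples")
Pkg.instantiate()
```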

paper/paper.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -127,6 +127,7 @@ solve!(solver, reg_nlp, stats; atol=1e-5, rtol=1e-5, verbose=1, sub_kwargs=(max_
 ## Numerical results
 
 We compare **TR**, **R2N**, **LM** and **LMTR** from our library on the SVM problem.
+Experiments were performed on macOS (arm64) on an Apple M2 (8-core) machine, using Julia 1.11.7.
 
 The table reports the convergence status of each solver, the number of evaluations of $f$, the number of evaluations of $\nabla f$, the number of proximal operator evaluations, the elapsed time and the final objective value.
 For TR and R2N, we use limited-memory SR1 Hessian approximations.
```

src/RegularizedOptimization.jl

Lines changed: 2 additions & 0 deletions
```diff
@@ -16,6 +16,8 @@ using LinearOperators,
   SolverCore
 using Percival: AugLagModel, update_y!, update_μ!
 
+import SolverCore.reset!
+
 const callback_docstring = "
 The callback is called at each iteration.
 The expected signature of the callback is `callback(nlp, solver, stats)`, and its output is ignored.
```
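`import SolverCore.reset!` brings the function name into the module so that new methods can be attached to it; a plain `using` would only allow calling it. A minimal sketch of the pattern this enables, with a hypothetical `ToySolver` type (illustrative only, not part of the library):

```julia
import SolverCore: reset!  # import, so new methods can be attached

mutable struct ToySolver  # hypothetical solver with a workspace vector
  x::Vector{Float64}
end

# Extend the shared SolverCore.reset! with a method for ToySolver:
# reinitialize the workspace in place and return the solver.
function reset!(solver::ToySolver)
  fill!(solver.x, 0.0)
  return solver
end
```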

test/test_allocs.jl

Lines changed: 1 addition & 0 deletions
```diff
@@ -33,6 +33,7 @@ macro wrappedallocs(expr)
   kwargs_dict = Dict{Symbol, Any}(a.args[1] => a.args[2] for a in kwargs if a.head == :kw)
   quote
     function g($(argnames...); kwargs_dict...)
+      $(Expr(expr.head, argnames..., kwargs...)) # Call the function twice to make the allocated macro more stable
       @allocated $(Expr(expr.head, argnames..., kwargs...))
     end
     $(Expr(:call, :g, [esc(a) for a in args]...))
```
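The inserted expression runs the call once before `@allocated` measures it, so compilation-time allocations from the first execution are not charged to the measurement. The same warm-up idiom in standalone form, as a minimal sketch (`scale_add!` is an illustrative name, not from the test suite):

```julia
# Warm-up pattern behind the change: call once to compile, then measure.
scale_add!(y, a, x) = (y .+= a .* x; y)  # in-place fused broadcast, no temporaries

function measured_allocs(y, a, x)
  scale_add!(y, a, x)              # warm-up call: compilation happens here
  @allocated scale_add!(y, a, x)   # steady-state allocation count
end

y = zeros(100); x = rand(100)
measured_allocs(y, 2.0, x)  # typically 0 for this in-place kernel
```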
