
Commit 157b986

minor changes
1 parent ad42d67

2 files changed: 14 additions & 2 deletions


paper/paper.bib

Lines changed: 12 additions & 0 deletions
@@ -137,3 +137,15 @@ @InProceedings{ stella-themelis-sopasakis-patrinos-2017
 Pages = {1939--1944},
 doi = {10.1109/CDC.2017.8263933},
 }
+
+@article{demarchi-jia-kanzow-mehlitz-2023,
+  author = {De~Marchi, Alberto and Jia, Xiaoxi and Kanzow, Christian and Mehlitz, Patrick},
+  title = {Constrained composite optimization and augmented {L}agrangian methods},
+  journal = {Mathematical Programming},
+  year = {2023},
+  month = {9},
+  volume = {201},
+  number = {1},
+  pages = {863--896},
+  doi = {10.1007/s10107-022-01922-4},
+}

paper/paper.md

Lines changed: 2 additions & 2 deletions
@@ -44,7 +44,7 @@ The library provides a modular and extensible framework for experimenting with n
 - **Trust-region methods (TR, TRDH)** [@aravkin-baraldi-orban-2022;@leconte-orban-2023],
 - **Quadratic regularization methods (R2, R2N)** [@diouane-habiboullah-orban-2024;@aravkin-baraldi-orban-2022],
 - **Levenberg-Marquardt methods (LM, LMTR)** [@aravkin-baraldi-orban-2024].
-- **Augmented Lagrangian methods (ALTR)** (cite?).
+- **Augmented Lagrangian methods (AL)** [@demarchi-jia-kanzow-mehlitz-2023].
 
 These methods rely solely on the gradient and Hessian(-vector) information of the smooth part $f$ and the proximal mapping of the nonsmooth part $h$ in order to compute steps.
 Then, the objective function $f + h$ is used only to accept or reject trial points.
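
A minimal sketch of this step-then-accept/reject mechanism, assuming an R2-style quadratically regularized model and, purely as an example, h(x) = λ‖x‖₁. The names `r2_step` and `prox_l1` and the parameters `σ` and `η` are illustrative only and not part of the RegularizedOptimization.jl API:

```julia
# Illustrative sketch only (not the RegularizedOptimization.jl API):
# one quadratically regularized (R2-style) step for min f(x) + h(x) with h = λ‖·‖₁.
using LinearAlgebra

# Proximal mapping of h(x) = λ‖x‖₁ with parameter 1/σ (soft-thresholding).
prox_l1(v, λ, σ) = sign.(v) .* max.(abs.(v) .- λ / σ, 0.0)

function r2_step(f, ∇f, h, x, λ; σ = 1.0, η = 1e-4)
    g = ∇f(x)                               # gradient of the smooth part
    s = prox_l1(x .- g ./ σ, λ, σ) .- x     # minimizer of the regularized model
    # Decrease predicted by the model m(s) = gᵀs + σ/2‖s‖² + h(x + s):
    pred = -dot(g, s) - σ / 2 * dot(s, s) + h(x) - h(x .+ s)
    # The objective f + h enters only here, to accept or reject the trial point.
    ared = (f(x) + h(x)) - (f(x .+ s) + h(x .+ s))
    accepted = ared ≥ η * pred
    return (accepted ? x .+ s : x), accepted
end
```

A full solver would also update σ based on the ratio of actual to predicted decrease, in the same spirit as a trust-region radius update; that bookkeeping is omitted here.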
@@ -330,7 +330,7 @@ However, **LM** requires significantly fewer function evaluations, which is expe
 
 The experiments highlight the effectiveness of the solvers implemented in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) compared to **PANOC** from [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl).
 
-The performance can be summarized as follows:
+On these examples, the performance of the solvers can be summarized as follows:
 
 - **Function and gradient evaluations:** **TR** and **R2N** are the most efficient choices when aiming to minimize both.
 - **Function evaluations only:** **LM** is preferable when the problem is a nonlinear least squares problem, as it achieves the lowest number of function evaluations.
