
Commit 9ac9e83

MaxenceGollier and dpo committed
Apply suggestions from code review
Co-authored-by: Dominique <[email protected]>
1 parent 915f3a1 commit 9ac9e83

File tree

1 file changed (+5, -6 lines)


paper/paper.md

Lines changed: 5 additions & 6 deletions
@@ -77,10 +77,9 @@ RegularizedOptimization.jl provides an API to formulate optimization problems an
 It integrates seamlessly with the [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers) [@jso] ecosystem.

 The smooth objective $f$ can be defined via [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl) [@orban-siqueira-nlpmodels-2020], which provides a standardized Julia API for representing nonlinear programming (NLP) problems.
+The nonsmooth term $h$ can be modeled using [ProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ProximalOperators.jl).

-The nonsmooth term $h$ can be modeled using [ProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ProximalOperators.jl), which provides a broad collection of regularizers and indicators of simple sets.
-
-With $f$ and $h$ modeled, the companion package [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/RegularizedProblems.jl) provides a way to pair them into a *Regularized Nonlinear Programming Model*
+Given $f$ and $h$, the companion package [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/RegularizedProblems.jl) provides a way to pair them into a *Regularized Nonlinear Programming Model*

 ```julia
 reg_nlp = RegularizedNLPModel(f, h)
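
For context, the pairing this hunk documents can be sketched end to end. In the sketch below, the concrete objective built with ADNLPModels.jl, the `NormL1` regularizer, and the starting point are illustrative assumptions; only `RegularizedNLPModel(f, h)` appears in the paper.

```julia
# Minimal sketch of pairing a smooth f with a nonsmooth h (assumed example data).
using ADNLPModels          # AD-based implementations of the NLPModels API
using ProximalOperators    # regularizers such as NormL1
using RegularizedProblems  # provides RegularizedNLPModel

# Smooth term f: a simple quadratic objective (hypothetical choice).
f = ADNLPModel(x -> sum((x .- 1) .^ 2), zeros(3))

# Nonsmooth term h: an l1-norm regularizer with weight 0.1 (hypothetical choice).
h = NormL1(0.1)

# Pair them into a regularized NLP model, as in the paper.
reg_nlp = RegularizedNLPModel(f, h)
```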
@@ -92,7 +91,7 @@ They can also be paired into a *Regularized Nonlinear Least Squares Model* if $f
 reg_nls = RegularizedNLSModel(f, h)
 ```

-RegularizedProblems.jl also provides a set of instances commonly used in data science and in the nonsmooth optimization, where several choices of $f$ can be paired with various nonsmooth terms $h$.
+RegularizedProblems.jl also provides a set of instances commonly used in data science and in nonsmooth optimization, where several choices of $f$ can be paired with various regularizers.
 This design makes for a convenient source of reproducible problem instances for benchmarking the solvers in [RegularizedOptimization.jl](https://www.github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl).

 ## Support for both exact and approximate Hessian
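
The least-squares pairing referenced in this hunk's header admits a similar sketch. The residual function and starting point below are illustrative assumptions (a Rosenbrock-style residual), with `ADNLSModel` from ADNLPModels.jl standing in for a least-squares $f$; only `RegularizedNLSModel(f, h)` comes from the paper.

```julia
# Sketch of the least-squares pairing (assumed residual and starting point).
using ADNLPModels, ProximalOperators, RegularizedProblems

# f as a nonlinear least-squares model with 2 residual equations (hypothetical).
F(x) = [x[1] - 1; 10 * (x[2] - x[1]^2)]
f = ADNLSModel(F, [-1.2; 1.0], 2)

h = NormL1(0.1)  # same kind of regularizer as before

# Pair into a regularized nonlinear least-squares model, as in the paper.
reg_nls = RegularizedNLSModel(f, h)
```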
@@ -130,12 +129,12 @@ solve!(solver, reg_nlp, stats; atol=1e-5, rtol=1e-5, verbose=1, sub_kwargs=(max_
 We compare **TR**, **R2N**, **LM** and **LMTR** from our library on the SVM problem.

 The table reports the convergence status of each solver, the number of evaluations of $f$, the number of evaluations of $\nabla f$, the number of proximal operator evaluations, the elapsed time and the final objective value.
-On the SVM and NNMF problems, we use limited-memory SR1 and BFGS Hessian approximations, respectively.
+We use limited-memory SR1 Hessian approximations.
 The subproblem solver is **R2**.

 \input{examples/Benchmark.tex}

-Note that for the **LM** and **LMTR** solvers, gradient evaluations count $\#\nabla f$ equals the number of Jacobian–vector and adjoint-Jacobian–vector products.
+For the **LM** and **LMTR** solvers, $\#\nabla f$ counts the number of Jacobian–vector and adjoint-Jacobian–vector products.

 All methods successfully reduced the optimality measure below the specified tolerance of $10^{-4}$, and thus converged to an approximate first-order stationary point.
 Note that the final objective values differ due to the nonconvexity of the problem.
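
The hunk header above shows the tail of the paper's `solve!` call; the `sub_kwargs` argument is truncated in the diff and therefore omitted below. A hedged sketch of the invocation follows: the solver and stats constructors are assumptions in the style of the JuliaSmoothOptimizers SolverCore API, not names confirmed by this diff.

```julia
# Hedged sketch of invoking a solver on reg_nlp (constructor names assumed;
# only the solve! keyword pattern appears in the hunk header above).
using RegularizedOptimization
using SolverCore

solver = TRSolver(reg_nlp)               # assumed: constructor for the TR solver
stats  = GenericExecutionStats(reg_nlp)  # SolverCore's standard stats container
solve!(solver, reg_nlp, stats; atol = 1e-5, rtol = 1e-5, verbose = 1)
```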
