paper/paper.md (+5 -6)
@@ -77,10 +77,9 @@ RegularizedOptimization.jl provides an API to formulate optimization problems an
 It integrates seamlessly with the [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers)[@jso] ecosystem.
 
 The smooth objective $f$ can be defined via [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl)[@orban-siqueira-nlpmodels-2020], which provides a standardized Julia API for representing nonlinear programming (NLP) problems.
+The nonsmooth term $h$ can be modeled using [ProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ProximalOperators.jl).
 
-The nonsmooth term $h$ can be modeled using [ProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ProximalOperators.jl), which provides a broad collection of regularizers and indicators of simple sets.
-
-With $f$ and $h$ modeled, the companion package [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/RegularizedProblems.jl) provides a way to pair them into a *Regularized Nonlinear Programming Model*
+Given $f$ and $h$, the companion package [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/RegularizedProblems.jl) provides a way to pair them into a *Regularized Nonlinear Programming Model*
 
 ```julia
 reg_nlp = RegularizedNLPModel(f, h)
@@ -92,7 +91,7 @@ They can also be paired into a *Regularized Nonlinear Least Squares Model* if $f
 reg_nls = RegularizedNLSModel(f, h)
 ```
 
-RegularizedProblems.jl also provides a set of instances commonly used in data science and in the nonsmooth optimization, where several choices of $f$ can be paired with various nonsmooth terms $h$.
+RegularizedProblems.jl also provides a set of instances commonly used in data science and in nonsmooth optimization, where several choices of $f$ can be paired with various regularizers.
 This design makes for a convenient source of reproducible problem instances for benchmarking the solvers in [RegularizedOptimization.jl](https://www.github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl).
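
To make the pairing described in these hunks concrete, here is a minimal sketch of building a regularized model. The toy objective, the starting point `zeros(2)`, and the choice of an $\ell_1$ regularizer are illustrative assumptions; only `RegularizedNLPModel(f, h)` comes from the paper itself.

```julia
# Minimal sketch, assuming illustrative choices of f and h.
using ADNLPModels, ProximalOperators, RegularizedProblems

# Smooth term f: a toy least-squares objective with starting point zeros(2).
f = ADNLPModel(x -> 0.5 * sum((x .- 1) .^ 2), zeros(2))

# Nonsmooth term h: the l1 norm with regularization weight 1.0.
h = NormL1(1.0)

# Pair them into a regularized model, as in the snippet above.
reg_nlp = RegularizedNLPModel(f, h)
```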
@@ -131,11 +130,11 @@ We compare **TR**, **R2N**, **LM** and **LMTR** from our library on the SVM problem.
 
 The table reports the convergence status of each solver, the number of evaluations of $f$, the number of evaluations of $\nabla f$, the number of proximal operator evaluations, the elapsed time and the final objective value.
-On the SVM and NNMF problems, we use limited-memory SR1 and BFGS Hessian approximations, respectively.
+We use limited-memory SR1 Hessian approximations.
 The subproblem solver is **R2**.
 
 \input{examples/Benchmark.tex}
 
-Note that for the **LM** and **LMTR** solvers, gradient evaluations count $\#\nabla f$ equals the number of Jacobian–vector and adjoint-Jacobian–vector products.
+For the **LM** and **LMTR** solvers, $\#\nabla f$ counts the number of Jacobian–vector and adjoint-Jacobian–vector products.
 
 All methods successfully reduced the optimality measure below the specified tolerance of $10^{-4}$, and thus converged to an approximate first-order stationary point.
 Note that the final objective values differ due to the nonconvexity of the problem.
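
As a companion to the benchmark setup described in this hunk, the sketch below shows how a limited-memory SR1 approximation could be attached to a smooth model before pairing it with $h$. `LSR1Model` is provided by NLPModelsModifiers.jl; the commented solver call is an assumed usage pattern rather than the paper's verbatim script, so consult the RegularizedOptimization.jl documentation for the exact entry point.

```julia
# Sketch under stated assumptions: reuses `f` and `h` from the previous example.
using NLPModelsModifiers, RegularizedProblems

f_lsr1 = LSR1Model(f)                     # limited-memory SR1 Hessian approximation of f
reg_nlp = RegularizedNLPModel(f_lsr1, h)  # regularized model handed to the solvers
# out = R2N(reg_nlp)  # assumed entry point; per the paper, R2 solves the subproblems
```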