-We compare **PANOC** [@stella-themelis-sopasakis-patrinos-2017] (from [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl)) against **TR**, **R2N**, and **LM** from our library.
-In order to do so, we implemented a wrapper for **PANOC** to make it compatible with our problem definition.
+We compare **TR**, **R2N**, and **LM** from our library.
We report the following solver statistics in the table: the convergence status of each solver, the number of evaluations of $f$, the number of evaluations of $\nabla f$, the number of proximal operator evaluations, the elapsed time in seconds, and the final objective value.
On the SVM and NNMF problems, we use limited-memory SR1 and BFGS Hessian approximations, respectively.
@@ -181,8 +180,9 @@ The subproblem solver is **R2**.
All methods successfully reduced the optimality measure below the specified tolerance of $10^{-4}$, and thus converged to an approximate first-order stationary point.
However, the final objective values differ due to the nonconvexity of the problems.
-- **SVM with $\ell^{1/2}$ penalty:** **TR** and **R2N** require far fewer function and gradient evaluations than **PANOC**, at the expense of more proximal iterations. Since each proximal step is inexpensive, **TR** and **R2N** are much faster overall.
-- **NNMF with constrained $\ell_0$ penalty:** **PANOC** is the fastest, even though it requires a larger number of function and gradient evaluations than **TR** and **R2N**. **LM** is competitive in terms of function calls but incurs many Jacobian–vector products; it nevertheless achieves the lowest objective value.
+- **SVM with $\ell^{1/2}$ penalty:** **R2N** is the fastest, requiring fewer function and gradient evaluations than **TR**.
+It requires more proximal evaluations, but these are inexpensive.
+- **NNMF with constrained $\ell_0$ penalty:** **TR** is the fastest and requires fewer function and gradient evaluations than **R2N**. **LM** is competitive in terms of function calls but incurs many Jacobian–vector products; it nevertheless achieves the lowest objective value.
Additional tests (e.g., other regularizers, constraint types, and scaling dimensions) have also been conducted, and a full benchmarking campaign is currently underway.
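The claim that proximal evaluations are inexpensive rests on the fact that separable regularizers such as the $\ell_0$ penalty admit entrywise closed-form proximal mappings. As an illustration only (not code from the library, whose solvers are implemented in Julia), a minimal Python sketch of the hard-thresholding prox for $v \mapsto t\lambda\|v\|_0$:

```python
import math

def prox_l0(x, lam, t):
    """Proximal mapping of v -> t * lam * ||v||_0 (hard thresholding).

    Each entry is treated independently: zeroing x_i costs x_i**2 / 2 in
    the quadratic term, while keeping it costs t * lam in the penalty, so
    x_i is kept only when x_i**2 > 2 * t * lam. A single O(n) pass over
    the vector, which is why each proximal evaluation is cheap.
    """
    thresh = math.sqrt(2.0 * t * lam)
    return [xi if abs(xi) > thresh else 0.0 for xi in x]

# With t = lam = 1, the threshold is sqrt(2) ~ 1.414:
print(prox_l0([3.0, 0.5, -2.0], lam=1.0, t=1.0))  # [3.0, 0.0, -2.0]
```

The constrained variant used in the NNMF experiment additionally projects onto the nonnegative orthant; the per-entry, closed-form character of the step is unchanged.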