To model $f$, we are going to use [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl).
For the nonsmooth regularizer, we observe that $h$ is readily available in [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl); refer to [this section](@ref regularizers) for a list of available regularizers.
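As a quick aside (not part of the tutorial's pipeline), it can help to see what the regularizer's proximal mapping computes. The sketch below evaluates the prox of `NormL1`, which is component-wise soft-thresholding; it assumes only that ProximalOperators.jl is installed:

```julia
using ProximalOperators

# ℓ1 norm with weight λ = 1
h = NormL1(1.0)

# prox(h, x, γ) returns the proximal point and the value of h there;
# with λγ = 1, each component is shrunk toward zero by 1.
y, hy = prox(h, [2.0, -0.5, 1.5], 1.0)
# y ≈ [1.0, 0.0, 0.5], hy ≈ 1.5
```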
We then wrap the smooth function and the regularizer in a `RegularizedNLPModel`:

```@example basic
f_model = ADNLPModel(f_fun, x0, name = "AD model of f")
h = NormL1(1.0)
regularized_pb = RegularizedNLPModel(f_model, h)
```
Suppose, for example, that we don't want to use a quasi-Newton approach, and that we either don't have access to the Hessian of $f$ or don't want to incur the cost of computing it.
In this case, the most appropriate solver is R2.
For this example, we also choose a relatively small tolerance by specifying the keyword arguments `atol` and `rtol` across all solvers.
```@example basic
using RegularizedOptimization
out = R2(regularized_pb, verbose = 10, atol = 1e-3, rtol = 1e-3)
```
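The solver's return value can then be inspected. The sketch below assumes that `R2` follows the JuliaSmoothOptimizers convention of returning a `GenericExecutionStats` from SolverCore.jl (an assumption, not something stated in this section), and continues from the `out` computed above:

```julia
# Assuming out is a SolverCore.GenericExecutionStats:
println(out.status)     # termination status, e.g. :first_order
println(out.solution)   # final iterate
println(out.objective)  # objective value at the solution
println(out.iter)       # number of iterations performed
```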