# A regularized least-squares problem

In this tutorial, we show how to model and solve the nonconvex, nonsmooth least-squares problem
```math
  \min_{x \in \mathbb{R}^n} \frac{1}{2} \|Ax - b\|_2^2 + \lambda \|x\|_0.
```

## Modelling the problem
We first formulate the objective function as the sum of a smooth function $f$ and a nonsmooth regularizer $h$:
```math
  \frac{1}{2} \|Ax - b\|_2^2 + \lambda \|x\|_0 = f(x) + h(x),
```
where
```math
\begin{align*}
f(x) &:= \frac{1}{2} \|Ax - b\|_2^2,\\
h(x) &:= \lambda\|x\|_0.
\end{align*}
```

To model $f$, we use [LLSModels.jl](https://github.com/JuliaSmoothOptimizers/LLSModels.jl).
For the nonsmooth regularizer, $h$ is readily available in [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl); see [this section](@ref regularizers) for a list of supported regularizers.
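Although $h$ is nonconvex, its proximal mapping has a simple closed form, which is what the proximal algorithms used below rely on: for a step size $\gamma > 0$, it acts componentwise as the hard-thresholding operator
```math
\operatorname{prox}_{\gamma h}(x)_i =
\begin{cases}
x_i & \text{if } |x_i| > \sqrt{2 \gamma \lambda}, \\
0 & \text{otherwise}.
\end{cases}
```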
We then wrap the smooth function and the regularizer in a `RegularizedNLPModel`.

```@example
using LLSModels
using ProximalOperators
using Random
using RegularizedProblems

Random.seed!(0)

# Generate A, b
m, n = 5, 10
A = randn((m, n))
b = randn(m)

# Choose a starting point for the optimization process
x0 = randn(n)

# Get an NLSModel corresponding to the smooth function f
f_model = LLSModel(A, b, x0 = x0, name = "NLS model of f")

# Get the regularizer from ProximalOperators
λ = 1.0
h = NormL0(λ)

# Wrap into a RegularizedNLPModel
regularized_pb = RegularizedNLPModel(f_model, h)
```
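
At this point, both pieces of the objective can already be queried separately.
The sketch below is only meant to illustrate these two building blocks, not the solver interface; it assumes that [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl) (which provides `obj` and `residual`) is available in the environment, and evaluates the proximal mapping of $h$ with an arbitrary step size.
```@example
using LLSModels
using NLPModels
using ProximalOperators
using Random

Random.seed!(0)
m, n = 5, 10
A = randn((m, n))
b = randn(m)
x0 = randn(n)

f_model = LLSModel(A, b, x0 = x0)
h = NormL0(1.0)

# Smooth part: objective value and residual through the NLPModels API
fx = obj(f_model, x0)       # 1/2 ‖A x0 - b‖²
Fx = residual(f_model, x0)  # A x0 - b

# Nonsmooth part: function value and proximal mapping with step size γ = 0.5
hx = h(x0)
y, hy = prox(h, x0, 0.5)

println("f(x0) = $fx, h(x0) = $hx, nonzeros in prox(h, x0): $(count(!iszero, y))")
```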

## Solving the problem
We can now choose one of the algorithms presented [here](@ref algorithms) to solve the problem defined above.
For least-squares problems such as this one, the Levenberg-Marquardt-type solvers LM and LMTR are usually the most appropriate choices, since they exploit the residual structure of $f$.
```@example
using LLSModels
using ProximalOperators
using Random
using RegularizedProblems

Random.seed!(0)

m, n = 5, 10
λ = 0.1
A = randn((m, n))
b = randn(m)

x0 = 10 * randn(n)

f_model = LLSModel(A, b, x0 = x0, name = "NLS model of f")
h = NormL0(λ)
regularized_pb = RegularizedNLPModel(f_model, h)

using RegularizedOptimization

# LM is a Levenberg-Marquardt method with quadratic regularization; we specify the solver's verbosity and tolerance
out = LM(regularized_pb, verbose = 1, atol = 1e-3)
println("LM converged after $(out.iter) iterations.")
println("--------------------------------------------------------------------------------------")

# We can choose LMTR instead, which is a trust-region method
out = LMTR(regularized_pb, verbose = 1, atol = 1e-3)
println("LMTR converged after $(out.iter) iterations.")
```
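
The returned object carries more information than the iteration count used above.
Assuming it is a standard `GenericExecutionStats` from [SolverCore.jl](https://github.com/JuliaSmoothOptimizers/SolverCore.jl), which is consistent with the `out.iter` field accessed above, the sketch below shows one way to inspect the final status, objective value, and sparsity of the solution; the field names are part of that assumption.
```@example
using LLSModels
using ProximalOperators
using Random
using RegularizedProblems
using RegularizedOptimization

Random.seed!(0)
m, n = 5, 10
A = randn((m, n))
b = randn(m)

f_model = LLSModel(A, b, x0 = 10 * randn(n), name = "NLS model of f")
regularized_pb = RegularizedNLPModel(f_model, NormL0(0.1))

out = LM(regularized_pb, atol = 1e-3)

# Assumed GenericExecutionStats fields: status, objective, solution
println("status    : $(out.status)")
println("objective : $(out.objective)")
println("nonzeros  : $(count(!iszero, out.solution))")
```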