Commit 62b105f

add least squares example
1 parent 13d63e1 commit 62b105f

File tree

2 files changed: +87 -1 lines changed

docs/Project.toml

Lines changed: 2 additions & 0 deletions
@@ -2,8 +2,10 @@
 ADNLPModels = "54578032-b7ea-4c30-94aa-7cbd1cce6c9a"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 DocumenterCitations = "daee34ce-89f3-4625-b898-19384cb65244"
+LLSModels = "39f5bc3e-5160-4bf8-ac48-504fd2534d24"
 NLPModelsModifiers = "e01155f1-5c6f-4375-a9d8-616dd036575f"
 ProximalOperators = "a725b495-10eb-56fe-b38b-717eba820537"
+Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 RegularizedProblems = "ea076b23-609f-44d2-bb12-a4ae45328278"

 [compat]
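
For reference, a rough sketch of how these two documentation dependencies could be added from the Julia REPL; the `docs/` environment path is taken from the file name above, and the exact workflow used in the commit is not shown:

```julia
# Activate the documentation environment and add the two new dependencies;
# Pkg records the corresponding UUID entries in docs/Project.toml.
using Pkg
Pkg.activate("docs")
Pkg.add(["LLSModels", "Random"])
```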

docs/src/examples/ls.md

Lines changed: 85 additions & 1 deletion
@@ -1 +1,85 @@

# A regularized least-squares problem

In this tutorial, we will show how to model and solve the nonconvex nonsmooth least-squares problem
```math
\min_{x \in \mathbb{R}^n} \frac{1}{2} \|Ax - b\|_2^2 + \lambda \|x\|_0,
```
where $\|x\|_0$ denotes the number of nonzero components of $x$.

## Modelling the problem
We first formulate the objective function as the sum of a smooth function $f$ and a nonsmooth regularizer $h$:
```math
\frac{1}{2} \|Ax - b\|_2^2 + \lambda \|x\|_0 = f(x) + h(x),
```
where
```math
\begin{align*}
f(x) &:= \frac{1}{2} \|Ax - b\|_2^2,\\
h(x) &:= \lambda \|x\|_0.
\end{align*}
```

To model $f$, we use [LLSModels.jl](https://github.com/JuliaSmoothOptimizers/LLSModels.jl).
The nonsmooth regularizer $h$ is readily available in [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl); see [this section](@ref regularizers) for a list of available regularizers.
We then wrap the smooth function and the regularizer in a `RegularizedNLPModel`.

```@example
using LLSModels
using ProximalOperators
using Random
using RegularizedProblems

Random.seed!(0)

# Generate A, b
m, n = 5, 10
A = randn((m, n))
b = randn(m)

# Choose a starting point for the optimization process
x0 = randn(n)

# Get an NLSModel corresponding to the smooth function f
f_model = LLSModel(A, b, x0 = x0, name = "NLS model of f")

# Get the regularizer from ProximalOperators
λ = 1.0
h = NormL0(λ)

# Wrap into a RegularizedNLPModel
regularized_pb = RegularizedNLPModel(f_model, h)
```

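For illustration only, here is a minimal sketch, not part of the example above, of how the smooth model and the regularizer can be evaluated separately at `x0`; it assumes the `obj` function from NLPModels.jl for the smooth model and relies on ProximalOperators functions being callable:

```julia
using NLPModels  # assumed available; provides obj for evaluating the smooth model

# Smooth part: f(x0) = 1/2 * ||A * x0 - b||^2, evaluated through the NLPModels API
fx = obj(f_model, x0)

# Nonsmooth part: h(x0) = λ * (number of nonzeros in x0), via the callable regularizer
hx = h(x0)

println("f(x0) = $fx, h(x0) = $hx, f(x0) + h(x0) = $(fx + hx)")
```
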
## Solving the problem
We can now choose one of the algorithms presented [here](@ref algorithms) to solve the problem we defined above.
For least-squares problems, LM or LMTR is usually the more appropriate choice.
```@example
using LLSModels
using ProximalOperators
using Random
using RegularizedProblems

Random.seed!(0)

m, n = 5, 10
λ = 0.1
A = randn((m, n))
b = randn(m)

x0 = 10 * randn(n)

f_model = LLSModel(A, b, x0 = x0, name = "NLS model of f")
h = NormL0(λ)
regularized_pb = RegularizedNLPModel(f_model, h)

using RegularizedOptimization

# LM is a quadratic regularization method; we specify the verbosity and the tolerance of the solver
out = LM(regularized_pb, verbose = 1, atol = 1e-3)
println("LM converged after $(out.iter) iterations.")
println("--------------------------------------------------------------------------------------")

# We can choose LMTR instead, which is a trust-region method
out = LMTR(regularized_pb, verbose = 1, atol = 1e-3)
println("LMTR converged after $(out.iter) iterations.")
```
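
As a final sketch, not part of the commit, the returned statistics can be inspected further; this assumes, as the `out.iter` accesses above suggest, that the solvers return a `SolverCore.GenericExecutionStats` with `status`, `objective`, and `solution` fields:

```julia
# Inspect the solver output (fields assumed from SolverCore.GenericExecutionStats)
println("status    : $(out.status)")
println("objective : $(out.objective)")
println("nonzeros  : $(count(!iszero, out.solution)) of $(length(out.solution)) entries of the solution")
```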
