Merged
Commits
434c15b
add index.md documentation
MaxenceGollier Sep 12, 2025
87affc1
add doc skeleton
MaxenceGollier Sep 12, 2025
9f96d15
remove contents from reference
MaxenceGollier Sep 12, 2025
4a4d899
add bibliography
MaxenceGollier Sep 12, 2025
8584c9e
algorithms in the general case
MaxenceGollier Sep 12, 2025
e4d3a6f
add nonlinear least-squares
MaxenceGollier Sep 12, 2025
cbc716d
add skeleton for introduction
MaxenceGollier Sep 12, 2025
0b4c4d6
fix make and refs
MaxenceGollier Sep 13, 2025
0286d75
update make for documenter 1.x
MaxenceGollier Sep 13, 2025
e0335d5
update index
MaxenceGollier Sep 13, 2025
b8d11a8
rework documentation
MaxenceGollier Sep 13, 2025
8ee36b7
add constrained algorithm doc
MaxenceGollier Sep 14, 2025
8f5c249
Merge branch 'JuliaSmoothOptimizers:master' into documentation
MaxenceGollier Sep 29, 2025
5e00a62
Merge branch 'JuliaSmoothOptimizers:master' into documentation
MaxenceGollier Sep 29, 2025
69fc0c9
update doc style
MaxenceGollier Sep 30, 2025
57559cf
remove red theme style from doc
MaxenceGollier Sep 30, 2025
d9cc27e
add basic example
MaxenceGollier Sep 30, 2025
685f911
move regularizers to a different section
MaxenceGollier Sep 30, 2025
444ffde
add empty examples and update make
MaxenceGollier Sep 30, 2025
6894b43
add deps for basic example
MaxenceGollier Sep 30, 2025
4e1dcf3
add reference to regularizers in index
MaxenceGollier Sep 30, 2025
6cd1cf8
remove older examples
MaxenceGollier Sep 30, 2025
f75f245
add RegularizedProblem dep
MaxenceGollier Sep 30, 2025
13d63e1
add JSO logo
MaxenceGollier Sep 30, 2025
62b105f
add least squares example
MaxenceGollier Sep 30, 2025
9859f1f
add custom regularizer example
MaxenceGollier Sep 30, 2025
eaa3a1c
add a few regularizers in the list
MaxenceGollier Sep 30, 2025
bdddf6a
documentation: apply suggestions from code review
MaxenceGollier Oct 1, 2025
3156e9b
remove boldface
MaxenceGollier Oct 1, 2025
983c2a8
remove sections in index.md
MaxenceGollier Oct 1, 2025
df2f5a0
rename "algorithms" as "solvers"
MaxenceGollier Oct 1, 2025
45ff78b
change "frac" to "tfrac"
MaxenceGollier Oct 1, 2025
4bb680d
improve basic example with standard rosenbrock function
MaxenceGollier Oct 1, 2025
8c8947b
update least squares example
MaxenceGollier Oct 1, 2025
d4755d4
remove regularizers
MaxenceGollier Oct 1, 2025
f56f77d
documentation: update index
MaxenceGollier Oct 1, 2025
ccb8b85
add reference to proximalOperators
MaxenceGollier Oct 1, 2025
93749c2
mention c(x) in index
MaxenceGollier Oct 1, 2025
2bee900
add plots to least squares
MaxenceGollier Oct 1, 2025
d6b8182
decrease tolerance in solvers
MaxenceGollier Oct 1, 2025
4 changes: 3 additions & 1 deletion docs/Project.toml
@@ -1,5 +1,7 @@
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
DocumenterCitations = "daee34ce-89f3-4625-b898-19384cb65244"

[compat]
Documenter = "~0.25"
Documenter = "1"
Copilot AI Sep 29, 2025

[nitpick] The version constraint for Documenter changed from ~0.25 to 1, which is a major version jump. Consider using 1.0 to be more explicit about the minimum required version, and ensure compatibility with the current codebase.

Suggested change
Documenter = "1"
Documenter = "1.0"

DocumenterCitations = "1.2"
20 changes: 17 additions & 3 deletions docs/make.jl
@@ -1,16 +1,30 @@
using Documenter, RegularizedOptimization
using Documenter, DocumenterCitations

using RegularizedOptimization

bib = CitationBibliography(joinpath(@__DIR__, "references.bib"))

makedocs(
modules = [RegularizedOptimization],
doctest = true,
# linkcheck = true,
strict = true,
warnonly = false,
Copilot AI Sep 29, 2025

[nitpick] The strict = true option was replaced with warnonly = false. However, warnonly is a newer Documenter.jl option. Consider using strict = [:missing_docs] or similar for more granular control over documentation strictness.

Suggested change
warnonly = false,
strict = [:missing_docs],

format = Documenter.HTML(
assets = ["assets/style.css"],
prettyurls = get(ENV, "CI", nothing) == "true",
),
sitename = "RegularizedOptimization.jl",
pages = Any["Home" => "index.md", "Tutorial" => "tutorial.md", "Reference" => "reference.md"],
pages = [
"Home" => "index.md",
"Algorithms" => "algorithms.md",
"Examples" => [
joinpath("examples", "bpdn.md"),
joinpath("examples", "fh.md")
],
"Reference" => "reference.md",
"Bibliography" => "bibliography.md"
],
plugins = [bib],
)

deploydocs(
56 changes: 56 additions & 0 deletions docs/references.bib
@@ -0,0 +1,56 @@
@Article{aravkin-baraldi-orban-2022,
author = {Aravkin, Aleksandr Y. and Baraldi, Robert and Orban, Dominique},
title = {A Proximal Quasi-Newton Trust-Region Method for Nonsmooth Regularized Optimization},
journal = {SIAM Journal on Optimization},
volume = {32},
number = {2},
pages = {900-929},
year = {2022},
doi = {10.1137/21M1409536}
}

@Article{aravkin-baraldi-orban-2024,
author = {Aravkin, Aleksandr Y. and Baraldi, Robert and Orban, Dominique},
title = {A Levenberg–Marquardt Method for Nonsmooth Regularized Least Squares},
journal = {SIAM Journal on Scientific Computing},
volume = {46},
number = {4},
pages = {A2557-A2581},
year = {2024},
doi = {10.1137/22M1538971},
}

@Article{leconte-orban-2025,
author = {Leconte, Geoffroy and Orban, Dominique},
title = {The indefinite proximal gradient method},
journal = {Computational Optimization and Applications},
volume = {91},
number = {2},
pages = {861-903},
year = {2025},
doi = {10.1007/s10589-024-00604-5}
}

@TechReport{diouane-gollier-orban-2024,
author = {Diouane, Youssef and Gollier, Maxence and Orban, Dominique},
title = {A nonsmooth exact penalty method for equality-constrained optimization: complexity and implementation},
institution = {Groupe d’études et de recherche en analyse des décisions},
year = {2024},
type = {Les Cahiers du GERAD},
number = {G-2024-65},
address = {Montreal, Canada},
doi = {10.48550/arxiv.2103.15993},
url = {https://www.gerad.ca/fr/papers/G-2024-65},
}

@TechReport{diouane-habiboullah-orban-2024,
author = {Diouane, Youssef and Laghdaf Habiboullah, Mohamed and Orban, Dominique},
title = {A proximal modified quasi-Newton method for nonsmooth regularized optimization},
institution = {Groupe d’études et de recherche en analyse des décisions},
year = {2024},
type = {Les Cahiers du GERAD},
number = {G-2024-64},
address = {Montreal, Canada},
doi = {10.48550/arxiv.2409.19428},
url = {https://www.gerad.ca/fr/papers/G-2024-64},
}
67 changes: 67 additions & 0 deletions docs/src/algorithms.md
@@ -0,0 +1,67 @@
# [Algorithms](@id algorithms)

## General case
The algorithms in this package are based upon the approach of [aravkin-baraldi-orban-2022](@cite).
Suppose we are given the general regularized problem
```math
\underset{x \in \mathbb{R}^n}{\text{minimize}} \quad f(x) + h(x),
```
where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and $h : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is lower semi-continuous.
Instead of solving this problem directly, which is often impossible, we repeatedly solve simpler subproblems until we reach a minimizer.
Given an iterate $x_0 \in \mathbb{R}^n$, we wish to compute a step $s_0 \in \mathbb{R}^n$ and update the iterate as $x_1 := x_0 + s_0$.
To do so, we approximate $f$ and $h$ around $x_0$ with simpler functions (models), denoted respectively $\varphi(\cdot; x_0)$ and $\psi(\cdot; x_0)$, so that
```math
\varphi(s; x_0) \approx f(x_0 + s) \quad \text{and} \quad \psi(s; x_0) \approx h(x_0 + s).
```
We then wish to compute the step as
```math
s_0 \in \underset{s \in \mathbb{R}^n}{\argmin} \ \varphi(s; x_0) + \psi(s; x_0).
```
To ensure convergence and to handle the potential nonconvexity of the objective, we either add a trust-region constraint,
```math
s_0 \in \underset{s \in \mathbb{R}^n}{\argmin} \ \varphi(s; x_0) + \psi(s; x_0) \quad \text{subject to} \ \|s\| \leq \Delta,
```
or a quadratic regularization
```math
s_0 \in \underset{s \in \mathbb{R}^n}{\argmin} \ \varphi(s; x_0) + \psi(s; x_0) + \sigma \|s\|^2_2.
```
The algorithms that use a trust region are [`TR`](@ref TR) and [`TRDH`](@ref TRDH), while those that use a quadratic regularization are [`R2`](@ref R2), [`R2N`](@ref R2N) and [`R2DH`](@ref R2DH).

The models of the smooth part $f$ used in this package are always quadratic models of the form
```math
\varphi(s; x_0) = f(x_0) + \nabla f(x_0)^T s + \frac{1}{2} s^T H(x_0) s,
```
where $H(x_0)$ is a symmetric matrix that can be either $0$, the Hessian of $f$ (if it exists), or a quasi-Newton approximation.
Some algorithms require a specific structure for $H$; refer to the table below for an overview.

The following table gives an overview of the available algorithms in the general case.

Algorithm | Quadratic Regularization | Trust Region | Quadratic term $H$ in $\varphi$ | Reference
----------|--------------------------|--------------|---------------------------------|----------
[`R2`](@ref R2) | Yes | No | $H = 0$ | [aravkin-baraldi-orban-2022; Algorithm 6.1](@cite)
[`R2N`](@ref R2N) | Yes | No | Any Symmetric | [diouane-habiboullah-orban-2024; Algorithm 1](@cite)
[`R2DH`](@ref R2DH) | Yes | No | Any Diagonal | [diouane-habiboullah-orban-2024; Algorithm 1](@cite)
[`TR`](@ref TR) | No | Yes | Any Symmetric | [aravkin-baraldi-orban-2022; Algorithm 3.1](@cite)
[`TRDH`](@ref TRDH) | No | Yes | Any Diagonal | [leconte-orban-2025; Algorithm 5.1](@cite)
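
As an illustration, here is a minimal sketch of how these solvers could be called. It assumes, as stated on the index page, that solvers accept a `RegularizedNLPModel` and return a `GenericExecutionStats`, and that `LBFGSModel` from NLPModelsModifiers.jl is used to supply a quasi-Newton $H$ for `R2N`; exact argument names may differ from the actual API.

```julia
using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization
using NLPModelsModifiers  # assumption: LBFGSModel provides the quasi-Newton H

# Smooth part: Rosenbrock; nonsmooth part: λ‖x‖₁.
f = ADNLPModel(x -> (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2, [-1.2, 1.0])
h = NormL1(1.0)

# R2 uses H = 0, so the plain model is enough.
stats = R2(RegularizedNLPModel(f, h))

# R2N and TR accept any symmetric H; here an L-BFGS approximation (assumption).
stats_qn = R2N(RegularizedNLPModel(LBFGSModel(f), h))
```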

## Nonlinear least-squares
This package provides two algorithms, [`LM`](@ref LM) and [`LMTR`](@ref LMTR), specialized for regularized nonlinear least-squares problems, that is, problems of the form
```math
\underset{x \in \mathbb{R}^n}{\text{minimize}} \quad \frac{1}{2}\|F(x)\|_2^2 + h(x),
```
where $F : \mathbb{R}^n \to \mathbb{R}^m$ is continuously differentiable and $h : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is lower semi-continuous.
In that case, the model $\varphi$ is defined as
```math
\varphi(s; x) = \frac{1}{2}\|F(x) + J(x)s\|_2^2,
```
where $J(x)$ is the Jacobian of $F$ at $x$.
As with the algorithms of the previous section, we either add a quadratic regularization to the model ([`LM`](@ref LM)) or a trust-region constraint ([`LMTR`](@ref LMTR)).
These algorithms are described in [aravkin-baraldi-orban-2024](@cite).
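
A hedged sketch of such a problem, built as an `ADNLSModel` from ADNLPModels.jl and solved with `LM` (again assuming the solvers accept a `RegularizedNLPModel`):

```julia
using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization

# Residuals F(x) of the Rosenbrock problem, so that ½‖F(x)‖² is the smooth part.
F(x) = [1 - x[1]; 10 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2, 1.0], 2)  # 2 residuals

h = NormL0(0.1)                      # sparsity-inducing regularizer
stats = LM(RegularizedNLPModel(nls, h))
```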

## Constrained Optimization
For constrained, regularized optimization,
```math
\underset{x \in \mathbb{R}^n}{\text{minimize}} \quad f(x) + h(x) \quad \text{subject to} \ l \leq x \leq u \ \text{and} \ c(x) = 0,
```
this package provides an augmented Lagrangian method, [`AL`](@ref AL).
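
A hedged sketch of the setup (the precise signature of `AL` is an assumption; only the fact that solvers take a `RegularizedNLPModel` is taken from the index page): the bounds and the equality constraint are declared on the smooth model, and the regularizer is attached as before.

```julia
using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization

# minimize (x₁ - 2)² + (x₂ - 2)² + λ‖x‖₁  subject to  0 ≤ x ≤ 1,  x₁ + x₂ - 1 = 0
f(x) = (x[1] - 2)^2 + (x[2] - 2)^2
c(x) = [x[1] + x[2] - 1]
nlp = ADNLPModel(f, [0.5, 0.5], zeros(2), ones(2), c, [0.0], [0.0])

h = NormL1(1.0)
stats = AL(RegularizedNLPModel(nlp, h))
```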
4 changes: 4 additions & 0 deletions docs/src/bibliography.md
@@ -0,0 +1,4 @@
# Bibliography

```@bibliography
```
Member

The bibliography looks funny. What are those newlines?

Collaborator Author

I did not find a fix, sorry.

Will try it in a future PR

Empty file added docs/src/examples/bpdn.md
Empty file.
Empty file added docs/src/examples/fh.md
Empty file.
108 changes: 108 additions & 0 deletions docs/src/index.md
@@ -1 +1,109 @@
# RegularizedOptimization.jl

This package implements a family of algorithms to solve nonsmooth optimization problems of the form

```math
\underset{x \in \mathbb{R}^n}{\text{minimize}} \quad f(x) + h(x),
```

where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and $h : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is lower semi-continuous and proper.
Both $f$ and $h$ may be **nonconvex**.

All solvers implemented in this package are **JuliaSmoothOptimizers-compliant**.
They take a [`RegularizedNLPModel`](https://jso.dev/RegularizedProblems.jl/dev/reference#RegularizedProblems.RegularizedNLPModel) as input and return a [`GenericExecutionStats`](https://jso.dev/SolverCore.jl/stable/reference/#SolverCore.GenericExecutionStats).

A [`RegularizedNLPModel`](https://jso.dev/RegularizedProblems.jl/stable/reference#RegularizedProblems.RegularizedNLPModel) contains:

- a smooth component `f` represented as an [`AbstractNLPModel`](https://github.com/JuliaSmoothOptimizers/NLPModels.jl),
- a nonsmooth regularizer `h`.

We refer to [jso.dev](https://jso.dev) for tutorials on the `NLPModel` API. This framework allows the use of models from

- AMPL ([AmplNLReader.jl](https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl)),
- CUTEst ([CUTEst.jl](https://github.com/JuliaSmoothOptimizers/CUTEst.jl)),
- JuMP ([NLPModelsJuMP.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl)),
- PDE-constrained problems ([PDENLPModels.jl](https://github.com/JuliaSmoothOptimizers/PDENLPModels.jl)),
- models defined with automatic differentiation ([ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl)).

We refer to [ManualNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ManualNLPModels.jl) for users interested in defining their own model.
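
As a minimal sketch of the typical workflow (assuming the constructor `RegularizedNLPModel(model, h)` from RegularizedProblems.jl and that solvers can be called directly on it, as stated above), one can pair an automatic-differentiation model with an ℓ₁ regularizer:

```julia
using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization

# Smooth part as an ADNLPModel, nonsmooth part as a regularizer from ProximalOperators.jl.
f = ADNLPModel(x -> (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2, [-1.2, 1.0])
h = NormL1(1.0)

rnlp = RegularizedNLPModel(f, h)
stats = R2(rnlp)                 # returns a GenericExecutionStats
println(stats.status)
```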

---

## Regularizers

Regularizers used in this package are based on the [ShiftedProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ShiftedProximalOperators.jl) API, which is related to [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl).

The solvers in this package work by approximating the regularizer with a *shifted model*.
That is, at each iterate $x_k$, we approximate $h(x_k + s)$ with a (simpler) function $\psi(s; x_k)$.
For example, if $h(x) = \|x\|$, then its *shifted model* is simply the function $h$ itself: $\psi(s; x_k) = \|x_k + s\|$.
On the other hand, if $h$ is the composition of a norm with a function, $h(x) = \|c(x)\|$, then its *shifted model* can be the approximation
```math
\psi(s; x_k) = \|c(x_k) + J(x_k)s\| \approx \|c(x_k + s) \| = h(x_k + s),
```
where $J(x_k)$ is the Jacobian of $c$ at the point $x_k$.

Concretely (a minimal sketch of this interface follows the proximal mapping definition below), we expect a regularizer `h::Foo` to

- Be callable with vectors, i.e. to implement `(h::Foo)(x::AbstractVector)`.
- Be *shiftable*, that is, to implement a function `shifted(h::Foo, x::AbstractVector)` that returns the shifted model `ψ::ShiftedFoo`.

Next, we expect the shifted model `ψ::ShiftedFoo` to

- Be callable with vectors, i.e. to implement `(ψ::ShiftedFoo)(x::AbstractVector)`.
- Be *shiftable*, that is, to implement a function `shifted(ψ::ShiftedFoo, x::AbstractVector)` that returns a shifted model `ψ'::ShiftedFoo`. Moreover, it should be possible to change the shift in place, that is, the function `shift!(ψ::ShiftedFoo, x::AbstractVector)` should be implemented as well.
- Be *proximable*, that is, to implement the in-place proximal mapping `prox!(y::AbstractVector, ψ::ShiftedFoo, q::AbstractVector, σ::Real)`.

The proximal mapping is defined as
```math
\text{prox}(\psi, q, \sigma) := \argmin_y \ \psi(y) + \frac{\sigma}{2} \|y - q\|_2^2.
```
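
Here is a minimal, self-contained sketch of this interface for a toy regularizer $h(x) = \lambda\|x\|_1$, following the conventions described above. The type names and the exact `prox!` convention are illustrative assumptions; in practice, regularizers should come from ProximalOperators.jl and ShiftedProximalOperators.jl.

```julia
# Toy regularizer h(x) = λ‖x‖₁ implementing the interface sketched above.
struct FooL1{T <: Real}
  λ::T
end
(h::FooL1)(x::AbstractVector) = h.λ * sum(abs, x)

# Shifted model ψ(s; x) = λ‖x + s‖₁, storing the current shift x.
struct ShiftedFooL1{T <: Real, V <: AbstractVector{T}}
  λ::T
  x::V
end
(ψ::ShiftedFooL1)(s::AbstractVector) = ψ.λ * sum(abs, ψ.x .+ s)

# h is shiftable: shifted(h, x) returns its shifted model.
shifted(h::FooL1, x::AbstractVector) = ShiftedFooL1(h.λ, copy(x))

# The shifted model is shiftable too, and its shift can be updated in place
# (here shift! is interpreted as replacing the shift, an assumption).
shifted(ψ::ShiftedFooL1, x::AbstractVector) = ShiftedFooL1(ψ.λ, ψ.x .+ x)
shift!(ψ::ShiftedFooL1, x::AbstractVector) = (ψ.x .= x; ψ)

# In-place proximal mapping, using the convention defined above:
#   prox(ψ, q, σ) = argmin_y ψ(y) + (σ/2)‖y - q‖²,
# whose closed form here is soft-thresholding of x + q with threshold λ/σ.
function prox!(y::AbstractVector, ψ::ShiftedFooL1, q::AbstractVector, σ::Real)
  @. y = sign(ψ.x + q) * max(abs(ψ.x + q) - ψ.λ / σ, 0) - ψ.x
  return y
end
```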

!!! note
The package [ShiftedProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ShiftedProximalOperators.jl) mostly implements the shifted models `ψ`.
For the unshifted versions, these are often implemented in [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl), so you might need to install the latter as well. For example, if you wish to use the L0 norm as a regularizer, define `h = NormL0(1.0)` with [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl); nothing else is needed in this case because the shifted model of the L0 norm is already implemented in [ShiftedProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ShiftedProximalOperators.jl).

!!! warning
Requiring the shifted model to be proximable means that our solvers cannot automagically handle any nonsmooth function that is given to them. Rather, the user is expected to provide an efficient routine for the proximal mapping.

The following table shows which regularizers are readily available and which dependency is required to use the regularizer (the shifted model is always in `ShiftedProximalOperators.jl`).

Regularizer | Shifted Model | Julia | Dependency
------------|---------------|-------|-----------
$\lambda ∥x∥_0$ | $\lambda ∥x + s∥_0$ | [`NormL0(λ)`](https://juliafirstorder.github.io/ProximalOperators.jl/stable/functions/#ProximalOperators.NormL0) | [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl)
$\lambda ∥x∥_1$ | $\lambda ∥x + s∥_1$ | [`NormL1(λ)`](https://juliafirstorder.github.io/ProximalOperators.jl/stable/functions/#ProximalOperators.NormL1) | [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl)
$\lambda ∥x∥_2$ | $\lambda ∥x + s∥_2$ | [`NormL2(λ)`](https://juliafirstorder.github.io/ProximalOperators.jl/stable/functions/#ProximalOperators.NormL2) | [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl)
$\lambda ∥c(x)∥_2$ | $\lambda ∥c(x) + J(x)s∥_2$ | [`CompositeNormL2(λ)`](https://jso.dev/ShiftedProximalOperators.jl/dev/reference/#ShiftedProximalOperators.CompositeNormL2) | [ShiftedProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ShiftedProximalOperators.jl)

---

## Algorithms

A presentation of each algorithm is given [here](@ref algorithms).

---

## Preallocating

All solvers in RegularizedOptimization.jl have **in-place versions**.
Users can preallocate a workspace and reuse it across solves to avoid memory allocations, which is useful in repetitive scenarios.
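
A hedged sketch of this pattern, following the usual `solve!` convention from SolverCore.jl (the constructor name `R2Solver` and the keyword `x` are assumptions):

```julia
using ADNLPModels, ProximalOperators, RegularizedProblems, RegularizedOptimization
using SolverCore  # assumption: provides solve! and GenericExecutionStats

f = ADNLPModel(x -> (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2, [-1.2, 1.0])
h = NormL1(1.0)
rnlp = RegularizedNLPModel(f, h)

solver = R2Solver(rnlp)           # preallocated workspace (assumed constructor name)
stats = GenericExecutionStats(f)  # preallocated output structure

# Reuse the same workspace across solves, e.g. from several starting points.
for x0 in ([-1.2, 1.0], [0.0, 0.0])
  solve!(solver, rnlp, stats; x = x0)  # keyword name `x` is an assumption
end
```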

---

## How to Install

RegularizedOptimization can be installed through the Julia package manager:

```julia
julia> ]
pkg> add https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl
```

---

## Bug reports and discussions

If you think you found a bug, please open an [issue](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl/issues).
Focused suggestions and requests can also be opened as issues. Before opening a pull request, we recommend starting an issue or a discussion first.

For general questions not suited for a bug report, feel free to start a discussion [here](https://github.com/JuliaSmoothOptimizers/Organization/discussions).
This forum is for questions and discussions about any of the [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers) packages.
6 changes: 0 additions & 6 deletions docs/src/reference.md
@@ -1,11 +1,5 @@
# Reference

## Contents

```@contents
Pages = ["reference.md"]
```

## Index

```@index
1 change: 0 additions & 1 deletion docs/src/tutorial.md

This file was deleted.
