
Commit eb9b89e

Update documentation

1 parent 2f8a5f0

6 files changed: +74 -61 lines changed

docs/src/benchmark.md

Lines changed: 3 additions & 2 deletions
@@ -4,7 +4,7 @@ Benchmarking is very important when researching new algorithms or selecting the
 
 The package [`SolverBenchmark`](https://github.com/JuliaSmoothOptimizers/SolverBenchmark.jl) exports the function [`bmark_solvers`](https://github.com/JuliaSmoothOptimizers/SolverBenchmark.jl/blob/main/src/bmark_solvers.jl) that runs a set of optimizers on a set of problems. `JSOSuite.jl` specializes this function, see `bmark_solvers`.
 
-The [JuliaSmoothOptimizers organization](https://juliasmoothoptimizers.github.io) contains several packages of test problems ready to use for benchmarking. The main ones are
+The [JuliaSmoothOptimizers organization](https://jso.dev) contains several packages of test problems ready to use for benchmarking. The main ones are
 - [`OptimizationProblems.jl`](https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl): This package provides a collection of optimization problems in JuMP and ADNLPModels syntax;
 - [`CUTEst.jl`](https://github.com/JuliaSmoothOptimizers/CUTEst.jl);
 - [`NLSProblems.jl`](https://github.com/JuliaSmoothOptimizers/NLSProblems.jl).
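
As context for this hunk, a minimal sketch of instantiating one of the test problems listed above; the problem name `arglina` is an illustrative pick, not something the diff mentions:

```julia
using ADNLPModels, OptimizationProblems
# Every problem in the ADNLPProblems submodule is a function returning an
# ADNLPModel; `arglina` is chosen here only for illustration.
nlp = OptimizationProblems.ADNLPProblems.arglina()
```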
@@ -27,7 +27,7 @@ selected_meta = selected_meta[.!selected_meta.has_bounds .&& (selected_meta.ncon
 list = selected_meta[!, :name]
 ```
 
-Then, we generate the list of problems using [`ADNLPModel`](https://juliasmoothoptimizers.github.io/ADNLPModels.jl/dev/reference/#ADNLPModels.ADNLPModel-Union{Tuple{S},%20Tuple{Any,%20S}}%20where%20S).
+Then, we generate the list of problems using [`ADNLPModel`](https://jso.dev/ADNLPModels.jl/dev/reference/#ADNLPModels.ADNLPModel-Union{Tuple{S},%20Tuple{Any,%20S}}%20where%20S).
 
 ```@example op
 ad_problems = [
@@ -39,6 +39,7 @@ length(ad_problems) # return the number of problems
 We now want to select appropriate optimizers using `JSOSuite.optimizers`.
 
 ```@example op
+using NLPModelsIpopt
 selected_optimizers = JSOSuite.optimizers
 # keep only optimizers that can solve a general `nlp`; some are specific to variants (NLS, ...)
 selected_optimizers = selected_optimizers[selected_optimizers.can_solve_nlp, :]
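
To connect this hunk to the benchmark itself, a hedged sketch of a `bmark_solvers` call using the generic dictionary-of-callables form from SolverBenchmark.jl; `ad_problems` comes from the snippet above, and the solver labels are illustrative:

```julia
using SolverBenchmark, JSOSuite
# Map a label to a callable solving one problem; both entries route through
# JSOSuite's `minimize` with an explicit solver name.
solvers = Dict(
    :lbfgs => nlp -> minimize("LBFGS", nlp, verbose = 0),
    :trunk => nlp -> minimize("TRUNK", nlp, verbose = 0),
)
stats = bmark_solvers(solvers, ad_problems)  # one DataFrame of results per solver
```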

docs/src/index.md

Lines changed: 12 additions & 11 deletions
@@ -9,45 +9,46 @@ All these solvers rely on the `NLPModel API` from [NLPModels.jl](https://github.
 \min \quad & f(x) \\
 & c_L \leq c(x) \leq c_U \\
 & l_A \leq Ax \leq u_A, \\
-& \ell \leq x \leq u,
+& \ell \leq x \leq u.
 \end{aligned}
 ```
 
 The package `JSOSuite` exports a function [`minimize`](@ref):
 ```
 output = minimize(args...; kwargs...)
 ```
-The arguments are used to define the problem, see [Tutorial](@ref tutorial-section).
+where the arguments define the problem, see [Tutorial](@ref tutorial-section).
 
 It is also possible to define an `NLPModel` or a `JuMP` model representing the problem, and then call `minimize`:
 ```
 output = minimize(nlpmodel; kwargs...)
+output = minimize(jump; kwargs...)
 ```
 
-The `NLPModel API` is a general consistent API for solvers to interact with models by providing flexible data types to represent the objective and constraint functions to evaluate their derivatives, and to provide essentially any information that a solver might request from a model. [JuliaSmoothOrganization's website](https://juliasmoothoptimizers.github.io) or [NLPModels.jl's documentation](https://juliasmoothoptimizers.github.io/NLPModels.jl/dev/) provide more tutorials on this topic.
+The `NLPModel API` is a general API for solvers to interact with models: it provides flexible data types to represent the objective and constraint functions, to evaluate their derivatives, and to supply essentially any information that a solver might request from a model. [JuliaSmoothOptimizers' website jso.dev](https://jso.dev) and [NLPModels.jl's documentation](https://jso.dev/NLPModels.jl/dev/) provide more tutorials on this topic.
 
 ### NLPModel
 
-JuliaSmoothOptimizers' compliant solvers accept any model compatible with the NLPModel API. See the [Tutorial](@ref tutorial-section) section for examples.
+JuliaSmoothOptimizers' compliant solvers accept any model compatible with the `NLPModel API`. See the [Tutorial](@ref tutorial-section) section for examples.
 
 Depending on the origin of the problem, several modeling tools are available. The following generic modeling tools are accepted:
-- `JuMP` models are internally made compatible with NLPModel via [NLPModelsJuMP.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl).
-- `Ampl` models stored in a `.nl` file can `AmplModel("name_of_file.nl")` using [AmplNLReader.jl](https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl).
-- [QPSReader.jl](https://github.com/JuliaSmoothOptimizers/QPSReader.jl) reads linear problems in MPS format and quadratic problems in QPS format.
-- Models using automatic differentiation can be generated using [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl).
+- `JuMP` models are internally made compatible with NLPModel via [NLPModelsJuMP.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl);
+- `Ampl` models stored in a `.nl` file can be instantiated with `AmplModel("name_of_file.nl")` using [AmplNLReader.jl](https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl);
+- [QPSReader.jl](https://github.com/JuliaSmoothOptimizers/QPSReader.jl) reads linear problems in MPS format and quadratic problems in QPS format;
+- Models using automatic differentiation can be generated using [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl);
 - Models with manually input derivatives can be defined using [ManualNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ManualNLPModels.jl).
 
-It is also possible to define your NLPModel variant. Several examples are available within JuliaSmoothOptimizers' umbrella:
+It is also possible to define your own `NLPModel` variant. Several examples are available under JuliaSmoothOptimizers' umbrella:
 - [KnetNLPModels.jl](https://github.com/JuliaSmoothOptimizers/KnetNLPModels.jl): An NLPModels interface to Knet.
 - [PDENLPModels.jl](https://github.com/JuliaSmoothOptimizers/PDENLPModels.jl): An NLPModel API for optimization problems with PDE constraints.
 
 A nonlinear least squares problem is a special case with the objective function defined as ``f(x) = \tfrac{1}{2}\|F(x)\|^2_2``.
 Although the problem can be solved using only ``f``, knowing ``F`` independently allows the development of more efficient methods.
-See the [Nonlinear Least Squares](@ref nls-section) for special treatment of these problems.
+See the [Nonlinear Least Squares](@ref nls-section) section for more on the special treatment of these problems.
 
 ### Output
 
-The value returned is a [`GenericExecutionStats`](https://juliasmoothoptimizers.github.io/SolverCore.jl/dev/reference/#SolverCore.GenericExecutionStats), which is a structure containing the available information at the end of the execution, such as a solver status, the objective function value, the norm of the residuals, the elapsed time, etc.
+The value returned is a [`GenericExecutionStats`](https://jso.dev/SolverCore.jl/dev/reference/#SolverCore.GenericExecutionStats), a structure containing the information available at the end of the execution, such as the solver status, the objective function value, the norm of the residuals, the elapsed time, etc.
 
 It contains the following fields:
 - `status`: Indicates the output of the solver. Use `show_statuses()` for the full list;
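
To ground the field list this hunk introduces, a small self-contained sketch (the Rosenbrock data is illustrative, not part of the diff):

```julia
using ADNLPModels, JSOSuite
# Build a tiny unconstrained model, solve it, and read a few of the
# GenericExecutionStats fields listed above.
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
stats = minimize(nlp, verbose = 0)
println(stats.status)        # e.g. :first_order
println(stats.objective)     # objective value at the final iterate
println(stats.elapsed_time)  # seconds spent in the solver
```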

docs/src/nls.md

Lines changed: 9 additions & 10 deletions
@@ -7,7 +7,7 @@ The nonlinear least squares (NLS) optimization problem is a specific case where
 \min \quad & f(x):=\tfrac{1}{2}\|F(x)\|^2_2 \\
 & c_L \leq c(x) \leq c_U \\
 & l_A \leq Ax \leq u_A, \\
-& \ell \leq x \leq u,
+& \ell \leq x \leq u.
 \end{aligned}
 ```
 
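The hunks below call `minimize(nls)` and `minimize(F, x0, nres, c, l, l)` without showing the definitions, which sit outside the diff context. A hypothetical reconstruction of such a constrained `ADNLSModel`, under the usual ADNLPModels.jl API:

```julia
using ADNLPModels
# Hypothetical data matching the calls below: a residual F with nres = 2
# components and the equality constraint c(x) = x[1] * x[2] fixed at 1.
F(x) = [x[1] - 1; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nres = 2
c(x) = [x[1] * x[2]]
l = [1.0]
nls = ADNLSModel(F, x0, nres, c, l, l)
```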
@@ -24,9 +24,7 @@ In this tutorial, we consider the following equality-constrained problem
 ```
 where ``1 \leq x[1] x[2] \leq 1`` implies that ``x[1] x[2] = 1``.
 
-There are two important challenges in solving an optimization problem: (i) model the problem, and (ii) solve the problem with an appropriate solve.
-
-Let's see two ways to model this problem exploiting the knowledge of the structure of the problem.
+In the rest of this tutorial, we will see two ways to model this problem, exploiting knowledge of its structure.
 
 ### NLS using automatic differentiation
 
@@ -53,19 +51,19 @@ stats = minimize(nls)
 stats = minimize(F, x0, nres, c, l, l)
 ```
 
-By default, `JSOSuite.solve` will use a solver tailored for nonlineat least squares problem.
+By default, `JSOSuite.minimize` will use a solver tailored for nonlinear least squares problems.
 Nevertheless, it is also possible to specify the solver to be used.
 
 ```@example ex1
 using NLPModelsIpopt
 stats = minimize("IPOPT", F, x0, nres, c, l, l)
 ```
 
-We refer to the documentation of [`ADNLPModels.jl`](https://juliasmoothoptimizers.github.io/ADNLPModels.jl/dev/backend/) for more details on the AD system use and how to modify it.
+We refer to the documentation of [`ADNLPModels.jl`](https://jso.dev/ADNLPModels.jl/dev/backend/) for more details on the AD system used and how to modify it.
 
 ### NLS using JuMP
 
-The package [NLPModelsJuMP.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl) exports a constructor, [`MathOptNLSModel`](https://juliasmoothoptimizers.github.io/NLPModelsJuMP.jl/dev/tutorial/#NLPModelsJuMP.MathOptNLSModel), to build an `AbstractNLSModel` using `JuMP`.
+The package [NLPModelsJuMP.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl) exports a constructor, [`MathOptNLSModel`](https://jso.dev/NLPModelsJuMP.jl/dev/tutorial/#NLPModelsJuMP.MathOptNLSModel), to build an `AbstractNLSModel` using `JuMP`.
 
 ```@example
 using JuMP, JSOSuite, NLPModelsJuMP
@@ -89,12 +87,13 @@ We show here how to find the feasible point of a given model.
 \begin{aligned}
 \min \quad & \tfrac{1}{2}\|s\|^2_2 \\
 & 0 \leq s - c(x) \leq 0 \\
-& \ell \leq x \leq u,
+& \ell \leq x \leq u.
 \end{aligned}
 ```
 
 This formulation can also be used to solve a set of nonlinear equations.
-Finding a feasible point of an optimization problem is useful to find the problem is feasible and it is a good practice to find an initial guess.
+Finding a feasible point of an optimization problem is useful to determine whether the problem is feasible or not.
+Moreover, it is also a good way to obtain an initial guess.
 
 ```@example feas
 using ADNLPModels, JSOSuite
@@ -107,7 +106,7 @@ nlp = ADNLPModel(f, x0, c, b, b)
 stats = feasible_point(nlp)
 ```
 
-Using the function `cons` from the NLPModel API, we can verify that the obtained solution is feasible.
+Using the function `cons` from the `NLPModel API`, we can verify that the obtained solution is feasible.
 
 ```@example feas
 using NLPModels
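
The `feas` example is cut off by the diff right after `using NLPModels`. A hedged guess at the feasibility check it performs, reusing `nlp`, `b`, and `stats` from the block above (assumed in scope):

```julia
using LinearAlgebra, NLPModels
# `cons` evaluates the constraints at a point; since the model was built
# with equal lower and upper bounds b, feasibility means c(x) ≈ b.
residual = norm(cons(nlp, stats.solution) - b)
residual ≤ √eps(Float64)  # true when the returned point is feasible
```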

docs/src/qp.md

Lines changed: 3 additions & 3 deletions
@@ -6,14 +6,14 @@ The quadratic model with linear constraints is another specific case where the o
 \begin{aligned}
 \min \quad & \tfrac{1}{2}x^TQx + c^Tx + c_0 \\
 & l_A \leq Ax \leq u_A, \\
-& \ell \leq x \leq u,
+& \ell \leq x \leq u.
 \end{aligned}
 ```
 
 This problem is convex whenever the matrix `Q` is positive semi-definite. A key aspect here is the modeling of the matrices `Q` and `A`.
 The main data structures available in Julia are: `LinearAlgebra.Matrix`, `SparseArrays.sparse`, `SparseMatricesCOO.sparse`, `LinearOperators.LinearOperator`.
 
-In JuliaSmoothOptimizers, the package [`QuadraticModels.jl`](https://github.com/JuliaSmoothOptimizers/QuadraticModels.jl) can be used to access the NLPModel API for such instance.
+In JuliaSmoothOptimizers, the package [`QuadraticModels.jl`](https://github.com/JuliaSmoothOptimizers/QuadraticModels.jl) can be used to access the `NLPModel API` for such instances.
 
 The function `minimize` with the following sets of arguments will automatically build a `QuadraticModel` and choose an adequate solver.
 
@@ -24,7 +24,7 @@ The function `minimize` with the following sets of arguments will automatically
 stats = minimize(c, H, lvar, uvar, A, lcon, ucon, c0 = c0, x0 = x0, name = name; kwargs...)
 ```
 
-## Example
+## Examples
 
 ```@example ex1
 using SparseArrays
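
The `ex1` block is truncated by the diff; a self-contained sketch of the call pattern shown above, with made-up data for a tiny convex QP:

```julia
using SparseArrays, JSOSuite
# minimize 1/2 x'Hx + c'x  s.t.  x₁ + x₂ ≤ 1,  x ≥ 0 (illustrative data).
c = [-1.0; -1.0]
H = sparse([2.0 0.0; 0.0 2.0])
lvar, uvar = zeros(2), fill(Inf, 2)
A = sparse([1.0 1.0])
lcon, ucon = [-Inf], [1.0]
stats = minimize(c, H, lvar, uvar, A, lcon, ucon, verbose = 0)
```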

docs/src/speed-up.md

Lines changed: 31 additions & 21 deletions
@@ -20,6 +20,8 @@ stats = minimize(f, x0, verbose = 0, highest_derivative_available = 1)
 stats
 ```
 
+This classification extends straightforwardly to handling constraints, with the Jacobian matrix available either explicitly or via matrix-vector products.
+
 ## Find a better initial guess
 
 The majority of derivative-based optimizers are local methods whose performance depends on the initial guess.
@@ -41,14 +43,29 @@ Once a solver has been chosen it is also possible to play with the key parameter
 
 Note that all optimizers presented here have been carefully tuned. All have different strengths. Trying another solver on the same problem sometimes provides a different solution.
 
-### Unconstrained/Bound-constrained
+### Unconstrained
 
-##### LBFGS
+##### LBFGS (1st order)
 
 - `mem::Int = 5`: memory parameter of the `lbfgs` algorithm;
 - `τ₁::T = T(0.9999)`: slope factor in the Wolfe condition when performing the line search;
 - `bk_max::Int = 25`: maximum number of backtracks when performing the line search.
 
+##### R2 (1st order)
+
+- `η1 = eps(T)^(1/4)`, `η2 = T(0.95)`: step acceptance parameters;
+- `γ1 = T(1/2)`, `γ2 = 1/γ1`: regularization update parameters;
+- `σmin = eps(T)`: step parameter for the R2 algorithm;
+- `β = T(0) ∈ [0,1]`: constant in the momentum term; if `β == 0`, R2 does not use momentum.
+
+##### TRUNK (matrix-free)
+
+- `bk_max::Int = 10`: algorithm parameter;
+- `monotone::Bool = true`: algorithm parameter;
+- `nm_itmax::Int = 25`: algorithm parameter.
+
+### Bound-constrained (matrix-free)
+
 ##### TRON
 
 - `μ₀::T = T(1e-2)`: algorithm parameter in (0, 0.5);
@@ -57,36 +74,29 @@ Note that all optimizers presented here have been carefully optimized. All have
 - `max_cgiter::Int = 50`: subproblem's iteration limit;
 - `cgtol::T = T(0.1)`: subproblem tolerance.
 
-##### TRUNK
-
-TODO
-
-##### R2
-
-- `η1 = eps(T)^(1/4)`, `η2 = T(0.95)`: step acceptance parameters;
-- `γ1 = T(1/2)`, `γ2 = 1/γ1`: regularization update parameters;
-- `σmin = eps(T)`: step parameter for R2 algorithm;
-- `β = T(0) ∈ [0,1]` is the constant in the momentum term. If `β == 0`, R2 does not use momentum.
-
 ### Constrained
 
-##### Percival
+##### RipQP (quadratic with linear constraints)
 
-- `μ::Real = T(10.0)`: Starting value of the penalty parameter.
+TODO
 
-##### CaNNOLeS
+##### CaNNOLeS (NLS with nonlinear equality constraints)
 
 - `linsolve::Symbol = :ma57`: solver to compute the LDLt factorization; available methods are `:ma57` and `:ldlfactorizations`;
 - `method::Symbol = :Newton`: available methods are `:Newton`, `:LM`, `:Newton_noFHess`, and `:Newton_vanishing`.
 
-See [CaNNOLeS.jl tutorial](https://juliasmoothoptimizers.github.io/CaNNOLeS.jl/dev/tutorial/).
+See the [CaNNOLeS.jl tutorial](https://jso.dev/CaNNOLeS.jl/dev/tutorial/).
 
-##### DCISolver
+##### DCISolver (nonlinear equality constraints)
 
 - `linear_solver = :ldlfact`: solver for the factorization; option `:ma57` is available if `HSL.jl` is available.
 
-See [`fine-tuneDCI`](https://juliasmoothoptimizers.github.io/DCISolver.jl/dev/fine-tuneDCI/).
+See the [`fine-tuneDCI` tutorial](https://jso.dev/DCISolver.jl/dev/fine-tuneDCI/).
 
-##### RipQP
+##### FletcherPenaltySolver (nonlinear equality constraints)
 
-TODO
+See the [`fine-tuneFPS` tutorial](https://jso.dev/FletcherPenaltySolver.jl/dev/fine-tuneFPS/).
+
+##### Percival
+
+- `μ::Real = T(10.0)`: starting value of the penalty parameter.
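
As a usage note for the parameter lists above, a sketch of forwarding one such keyword through `minimize`; the `mem` value is arbitrary:

```julia
using ADNLPModels, JSOSuite
# Solver-specific keywords from the lists above pass straight through
# `minimize` to the chosen solver; here the LBFGS memory parameter.
nlp = ADNLPModel(x -> sum((x .- 1) .^ 2), zeros(5))
stats = minimize("LBFGS", nlp, mem = 10, verbose = 0)
```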

docs/src/tutorial.md

Lines changed: 16 additions & 14 deletions
@@ -2,7 +2,7 @@
 
 In this tutorial, we provide examples of usage of the `minimize` function exported by `JSOSuite.jl`.
 
-There are two important challenges in solving an optimization problem: (i) model the problem, and (ii) solve the problem with an appropriate solver.
+There are two important challenges in solving an optimization problem: (i) model the problem, and (ii) solve the problem with an appropriate optimizer.
 
 ## Modeling
 
@@ -13,7 +13,7 @@ All these optimizers rely on the `NLPModel API` from [NLPModels.jl](https://gith
 \min \quad & f(x) \\
 & c_L \leq c(x) \leq c_U \\
 & l_A \leq Ax \leq u_A, \\
-& \ell \leq x \leq u,
+& \ell \leq x \leq u.
 \end{aligned}
 ```
 
@@ -24,7 +24,7 @@ output = minimize(nlpmodel::AbstractNLPModel; kwargs...)
 
 In the rest of this section, we focus on examples using generic modeling tools.
 
-It is generally of great interest if available to use a modeling that handles the structure of the problem, see [Nonlinear Least Squares](@ref nls-section) for an example with nonlinear least squares.
+When available, it is generally of great interest to use a modeling tool that exploits the structure of the problem; see [Nonlinear Least Squares](@ref nls-section) for an example with nonlinear least squares.
 
 ### JuMP Model
 
@@ -38,11 +38,11 @@ model = Model()
 minimize(model)
 ```
 
-We refer to [`JuMP tutorial`](https://jump.dev/JuMP.jl/stable/).
+We refer to the [`JuMP` tutorial](https://jump.dev/JuMP.jl/stable/) for more on modeling problems with JuMP.
 
 ### NLPModel with Automatic Differentiation
 
-We refer to [`ADNLPModel`](https://juliasmoothoptimizers.github.io/ADNLPModels.jl/dev/reference/#ADNLPModels.ADNLPModel-Union{Tuple{S},%20Tuple{Any,%20S}}%20where%20S) for the description of the different constructors.
+We refer to [`ADNLPModel`](https://jso.dev/ADNLPModels.jl/dev/reference/#ADNLPModels.ADNLPModel-Union{Tuple{S},%20Tuple{Any,%20S}}%20where%20S) for the description of the different constructors.
 
 #### Unconstrained
 
@@ -125,9 +125,9 @@ l = ones(2)
 stats = minimize(f, x0, A, c, l, l, verbose = 0)
 ```
 
-## Solving
+## Optimizing
 
-Internally, the `minimize` function selects optimizers according to the problem's property and JSO-compliant optimizers available.
+Internally, the `minimize` function selects optimizers according to the problem's properties and the optimizers' availability.
 
 ### Available optimizers
 
@@ -157,13 +157,15 @@ JSOSuite.select_optimizers(nlp)
 ### Fine-tune solve call
 
 All the keyword arguments are passed to the solver.
-Keywords available for all the optimizers are given below:
-
-- `atol`: absolute tolerance;
-- `rtol`: relative tolerance;
-- `max_time`: maximum number of seconds;
-- `max_eval`: maximum number of cons + obj evaluations;
-- `verbose::Int = 0`: if > 0, display iteration details every `verbose` iteration.
+Keywords available for all the solvers are given below:
+
+- `atol::T = √eps(T)`: absolute tolerance;
+- `rtol::T = √eps(T)`: relative tolerance;
+- `max_time::Float64 = 300.0`: maximum number of seconds;
+- `max_iter::Int = typemax(Int)`: maximum number of iterations;
+- `max_eval::Int = 10_000`: maximum number of constraint and objective function evaluations;
+- `callback = (args...) -> nothing`: callback called at each iteration;
+- `verbose::Int = 0`: if > 0, display iteration details every `verbose` iterations.
 
 ```@example
 using ADNLPModels, JSOSuite
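
The final example block is truncated by the diff; a hedged completion showing the common keywords in action (problem data is illustrative):

```julia
using ADNLPModels, JSOSuite
# Tighten tolerances and cap the time budget with the keywords listed above.
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 4 * (x[2] + 3)^2, zeros(2))
stats = minimize(nlp, atol = 1e-8, rtol = 1e-10, max_time = 60.0, verbose = 0)
```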
