Commit 7d3edcb

Merge pull request #1001 from SebastianM-C/smc/ipopt2
Add docs for OptimizationIpopt
2 parents 8f8e4ff + d773461 commit 7d3edcb

File tree

5 files changed: +273 −5 lines changed

docs/Project.toml

Lines changed: 5 additions & 3 deletions

```diff
@@ -10,9 +10,9 @@ Ipopt_jll = "9cc047cb-c261-5740-88fc-0cf96f7bdcc7"
 IterTools = "c8e1da08-722c-5040-9ed9-7db0dc04731e"
 Juniper = "2ddba703-00a4-53a7-87a5-e8b9971dde84"
 Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
+MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
 Manifolds = "1cead3c2-87b3-11e9-0ccd-23c62b72b94e"
 Manopt = "0fc0a36d-df90-57f3-8f93-d78a9fc72bb5"
-MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
 ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
 NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6"
 NLPModelsTest = "7998695d-6960-4d3a-85c4-e1bceb8cd856"
@@ -23,6 +23,7 @@ OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb"
 OptimizationCMAEvolutionStrategy = "bd407f91-200f-4536-9381-e4ba712f53f8"
 OptimizationEvolutionary = "cb963754-43f6-435e-8d4b-99009ff27753"
 OptimizationGCMAES = "6f0a0517-dbc2-4a7a-8a20-99ae7f27e911"
+OptimizationIpopt = "43fad042-7963-4b32-ab19-e2a4f9a67124"
 OptimizationMOI = "fd9f6733-72f4-499f-8506-86b2bdd0dea1"
 OptimizationManopt = "e57b7fff-7ee7-4550-b4f0-90e9476e9fb6"
 OptimizationMetaheuristics = "3aafef2f-86ae-4776-b337-85a36adf0b55"
@@ -56,10 +57,10 @@ Ipopt = "1"
 IterTools = "1"
 Juniper = "0.9"
 Lux = "1"
+MLUtils = "0.4.4"
 Manifolds = "0.9"
 Manopt = "0.4"
-MLUtils = "0.4.4"
-ModelingToolkit = "9"
+ModelingToolkit = "10"
 NLPModels = "0.21"
 NLPModelsTest = "0.10"
 NLopt = "0.6, 1"
@@ -69,6 +70,7 @@ OptimizationBase = "2"
 OptimizationCMAEvolutionStrategy = "0.3"
 OptimizationEvolutionary = "0.4"
 OptimizationGCMAES = "0.3"
+OptimizationIpopt = "0.1"
 OptimizationMOI = "0.5"
 OptimizationManopt = "0.0.4"
 OptimizationMetaheuristics = "0.3"
```

docs/pages.jl

Lines changed: 1 addition & 0 deletions

```diff
@@ -29,6 +29,7 @@ pages = ["index.md",
 "CMAEvolutionStrategy.jl" => "optimization_packages/cmaevolutionstrategy.md",
 "Evolutionary.jl" => "optimization_packages/evolutionary.md",
 "GCMAES.jl" => "optimization_packages/gcmaes.md",
+"Ipopt.jl" => "optimization_packages/ipopt.md",
 "Manopt.jl" => "optimization_packages/manopt.md",
 "MathOptInterface.jl" => "optimization_packages/mathoptinterface.md",
 "Metaheuristics.jl" => "optimization_packages/metaheuristics.md",
```
docs/src/optimization_packages/ipopt.md

Lines changed: 265 additions & 0 deletions (new file)
# OptimizationIpopt.jl

[`OptimizationIpopt.jl`](https://github.com/SciML/Optimization.jl/tree/master/lib/OptimizationIpopt) is a wrapper package that integrates [`Ipopt.jl`](https://github.com/jump-dev/Ipopt.jl) with the [`Optimization.jl`](https://github.com/SciML/Optimization.jl) ecosystem. This allows you to use the powerful Ipopt (Interior Point OPTimizer) solver through Optimization.jl's unified interface.

Ipopt is a software package for large-scale nonlinear optimization, designed to find (local) solutions of mathematical optimization problems of the form

```math
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & g_L \leq g(x) \leq g_U \\
& x_L \leq x \leq x_U
\end{aligned}
```

where ``f(x): \mathbb{R}^n \to \mathbb{R}`` is the objective function, ``g(x): \mathbb{R}^n \to \mathbb{R}^m`` are the constraint functions, ``g_L`` and ``g_U`` are the lower and upper bounds on the constraints, and ``x_L`` and ``x_U`` are the bounds on the variables ``x``.

## Installation: OptimizationIpopt.jl

To use this package, install the OptimizationIpopt package:

```julia
import Pkg
Pkg.add("OptimizationIpopt")
```

## Methods

OptimizationIpopt.jl provides the `IpoptOptimizer` algorithm, which wraps the Ipopt.jl solver for use with Optimization.jl. This is an interior-point algorithm that uses a filter line-search method and is particularly effective for:

- Large-scale nonlinear problems
- Problems with nonlinear constraints
- Problems requiring high-accuracy solutions

### Algorithm Requirements

`IpoptOptimizer` requires:

- Gradient information (via automatic differentiation or user-provided)
- Hessian information (can be approximated or provided)
- Constraint Jacobians (for constrained problems)
- Constraint Hessians (for constrained problems)

The algorithm supports:

- Box constraints via `lb` and `ub` in the `OptimizationProblem`
- General nonlinear equality and inequality constraints via `lcons` and `ucons`

## Options and Parameters

### Common Options

The following options can be passed as keyword arguments to `solve`:

- `maxiters`: Maximum number of iterations (maps to Ipopt's `max_iter`)
- `maxtime`: Maximum wall time in seconds (maps to Ipopt's `max_wall_time`)
- `abstol`: Absolute tolerance (not directly used by Ipopt)
- `reltol`: Convergence tolerance (maps to Ipopt's `tol`)
- `verbose`: Controls output verbosity (maps to Ipopt's `print_level`)
  - `false` or `0`: no output
  - `true` or `5`: standard output
  - Integer values 0-12: increasing verbosity levels
- `hessian_approximation`: Method for Hessian computation
  - `"exact"` (default): use the exact Hessian
  - `"limited-memory"`: use an L-BFGS approximation

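The keyword-to-option translation described above can be sketched as a small helper. This is an illustrative, hypothetical function (`map_common_options` is not part of the package; the real translation happens inside the wrapper's solve machinery), assuming only the correspondences listed in the bullets:

```julia
# Hypothetical sketch of how common `solve` keywords translate to Ipopt
# option names, per the list above. For illustration only; the wrapper
# performs this mapping internally.
function map_common_options(; maxiters = nothing, maxtime = nothing,
        reltol = nothing, verbose = false)
    opts = Dict{String, Any}()
    maxiters === nothing || (opts["max_iter"] = maxiters)
    maxtime === nothing || (opts["max_wall_time"] = float(maxtime))
    reltol === nothing || (opts["tol"] = reltol)
    # `verbose` may be a Bool or an integer print level in 0:12
    opts["print_level"] = verbose isa Bool ? (verbose ? 5 : 0) : verbose
    return opts
end

map_common_options(maxiters = 1000, reltol = 1e-8, verbose = true)
```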
### Advanced Ipopt Options

Any Ipopt option can be passed directly as a keyword argument. The full list of available options is documented in the [Ipopt Options Reference](https://coin-or.github.io/Ipopt/OPTIONS.html). Common options include:

#### Convergence Options

- `tol`: Desired relative convergence tolerance
- `dual_inf_tol`: Dual infeasibility tolerance
- `constr_viol_tol`: Constraint violation tolerance
- `compl_inf_tol`: Complementarity tolerance

#### Algorithm Options

- `linear_solver`: Linear solver to use
  - Default: `"mumps"` (included with Ipopt)
  - HSL solvers: `"ma27"`, `"ma57"`, `"ma86"`, `"ma97"` (require [separate installation](https://github.com/jump-dev/Ipopt.jl?tab=readme-ov-file#linear-solvers))
  - Others: `"pardiso"`, `"spral"` (also require [separate installation](https://github.com/jump-dev/Ipopt.jl?tab=readme-ov-file#linear-solvers))
- `nlp_scaling_method`: Scaling method (`"gradient-based"`, `"none"`, `"equilibration-based"`)
- `limited_memory_max_history`: History size for L-BFGS (when using `hessian_approximation = "limited-memory"`)
- `mu_strategy`: Update strategy for the barrier parameter (`"monotone"`, `"adaptive"`)

#### Line Search Options

- `line_search_method`: Line search method (`"filter"`, `"penalty"`)
- `alpha_for_y`: Step size for constraint multipliers
- `recalc_y`: Controls when multipliers are recalculated

#### Output Options

- `print_timing_statistics`: Print detailed timing information (`"yes"`/`"no"`)
- `print_info_string`: Print user-defined info string (`"yes"`/`"no"`)

Example with advanced options:

```julia
sol = solve(prob, IpoptOptimizer();
    maxiters = 1000,
    tol = 1e-8,
    linear_solver = "ma57",
    mu_strategy = "adaptive",
    print_timing_statistics = "yes"
)
```

## Examples

### Basic Unconstrained Optimization

The Rosenbrock function can be minimized using `IpoptOptimizer`:

```@example Ipopt1
using Optimization, OptimizationIpopt
using Zygote

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

# Ipopt requires gradient information, supplied here via automatic differentiation
optfunc = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p)
sol = solve(prob, IpoptOptimizer())
```

### Box-Constrained Optimization

Adding box constraints to limit the search space:

```@example Ipopt2
using Optimization, OptimizationIpopt
using Zygote

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

optfunc = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p;
    lb = [-1.0, -1.0],
    ub = [1.5, 1.5])
sol = solve(prob, IpoptOptimizer())
```

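As a quick sanity check, independent of Ipopt: the unconstrained Rosenbrock minimizer `(1, 1)` lies strictly inside these bounds, so the box constraints are inactive at the optimum and the solver should return essentially the same point as in the unconstrained case:

```julia
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
p = [1.0, 100.0]
xstar = [1.0, 1.0]  # known global minimizer of Rosenbrock for these parameters

rosenbrock(xstar, p) == 0.0                 # objective vanishes at the minimizer
all([-1.0, -1.0] .<= xstar .<= [1.5, 1.5])  # the bounds are inactive here
```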

### Nonlinear Constrained Optimization

Solving problems with nonlinear equality and inequality constraints:

```@example Ipopt3
using Optimization, OptimizationIpopt
using Zygote

# Objective: minimize x[1]^2 + x[2]^2
objective(x, p) = x[1]^2 + x[2]^2

# Constraints: x[1]^2 + x[2]^2 - 2*x[1] = 0 (equality)
# and x[1] + x[2] >= 1 (inequality)
function constraints(res, x, p)
    res[1] = x[1]^2 + x[2]^2 - 2 * x[1]  # equality constraint
    res[2] = x[1] + x[2]                 # inequality constraint
end

x0 = [0.5, 0.5]
optfunc = OptimizationFunction(objective, AutoZygote(); cons = constraints)

# First constraint is an equality (lcons = ucons = 0);
# second constraint is an inequality (lcons = 1, ucons = Inf)
prob = OptimizationProblem(optfunc, x0;
    lcons = [0.0, 1.0],
    ucons = [0.0, Inf])

sol = solve(prob, IpoptOptimizer())
```

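For this particular problem the solution can be worked out by hand: on the circle ``x_1^2 + x_2^2 = 2x_1`` the objective equals ``2x_1``, and the inequality ``x_1 + x_2 \geq 1`` is active at the optimum, giving the candidate ``x^* = (1 - \sqrt{2}/2, \sqrt{2}/2)`` with objective value ``2 - \sqrt{2} \approx 0.586``. A quick solver-free check that this candidate is feasible (assuming the analysis above; Ipopt should converge to this point):

```julia
xstar = [1 - sqrt(2) / 2, sqrt(2) / 2]  # hand-derived candidate optimum

# Equality constraint holds up to floating-point error
abs(xstar[1]^2 + xstar[2]^2 - 2 * xstar[1]) < 1e-12

# Inequality constraint is active: x[1] + x[2] == 1
isapprox(xstar[1] + xstar[2], 1.0; atol = 1e-12)

# Objective value at the candidate: 2 - sqrt(2)
isapprox(xstar[1]^2 + xstar[2]^2, 2 - sqrt(2); atol = 1e-12)
```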

### Using Limited-Memory BFGS Approximation

For large-scale problems where computing the exact Hessian is expensive:

```@example Ipopt4
using Optimization, OptimizationIpopt
using Zygote

# Large-scale problem
n = 100
rosenbrock_nd(x, p) = sum(p[2] * (x[i + 1] - x[i]^2)^2 + (p[1] - x[i])^2
                          for i in 1:(length(x) - 1))

x0 = zeros(n)
p = [1.0, 100.0]

# Using automatic differentiation for gradients only
optfunc = OptimizationFunction(rosenbrock_nd, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p)

# Use an L-BFGS approximation for the Hessian
sol = solve(prob, IpoptOptimizer();
    hessian_approximation = "limited-memory",
    limited_memory_max_history = 10,
    maxiters = 1000)
```

### Portfolio Optimization Example

A practical example of portfolio optimization with constraints:

```@example Ipopt5
using Optimization, OptimizationIpopt
using Zygote
using LinearAlgebra

# Portfolio optimization: minimize risk subject to a return constraint
n_assets = 5
μ = [0.05, 0.10, 0.15, 0.08, 0.12]  # Expected returns
Σ = [0.05 0.01 0.02 0.01 0.00;      # Covariance matrix
     0.01 0.10 0.03 0.02 0.01;
     0.02 0.03 0.15 0.02 0.03;
     0.01 0.02 0.02 0.08 0.02;
     0.00 0.01 0.03 0.02 0.06]

target_return = 0.10

# Objective: minimize portfolio variance
portfolio_risk(w, p) = dot(w, Σ * w)

# Constraints: weights sum to 1, expected return >= target
function portfolio_constraints(res, w, p)
    res[1] = sum(w) - 1.0               # Sum to 1 (equality)
    res[2] = dot(μ, w) - target_return  # Minimum return (inequality)
end

optfunc = OptimizationFunction(portfolio_risk, AutoZygote();
    cons = portfolio_constraints)
w0 = fill(1.0 / n_assets, n_assets)

prob = OptimizationProblem(optfunc, w0;
    lb = zeros(n_assets),  # No short selling
    ub = ones(n_assets),   # No single asset > 100%
    lcons = [0.0, 0.0],    # Lower bounds: equality = 0, return slack >= 0
    ucons = [0.0, Inf])    # Upper bounds: equality = 0, return slack unbounded

sol = solve(prob, IpoptOptimizer();
    tol = 1e-8,
    print_level = 5)

println("Optimal weights: ", sol.u)
println("Expected return: ", dot(μ, sol.u))
println("Portfolio variance: ", sol.objective)
```

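Before handing the problem to the solver, it is easy to verify, without Ipopt, that the equal-weight starting point `w0` is feasible: the weights sum to one, and for this particular `μ` the expected return of equal weights already meets the 10% target exactly:

```julia
using LinearAlgebra

μ = [0.05, 0.10, 0.15, 0.08, 0.12]
w0 = fill(1.0 / 5, 5)  # equal-weight starting portfolio

sum(w0) ≈ 1.0      # budget (equality) constraint satisfied
dot(μ, w0) ≈ 0.10  # expected return meets the target exactly
```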

## Tips and Best Practices

1. **Scaling**: Ipopt performs better when variables and constraints are well scaled. Consider normalizing your problem if variables have very different magnitudes.

2. **Initial Points**: Provide good initial guesses when possible. Ipopt is a local optimizer, and the solution quality depends on the starting point.

3. **Hessian Approximation**: For large problems, or when Hessian computation is expensive, use `hessian_approximation = "limited-memory"`.

4. **Linear Solver Selection**: The choice of linear solver can significantly impact performance. For large problems, consider the HSL solvers (ma27, ma57, ma86, ma97); they require [separate installation](https://github.com/jump-dev/Ipopt.jl?tab=readme-ov-file#linear-solvers) as described in the Ipopt.jl documentation. The default MUMPS solver works well for small to medium problems.

5. **Constraint Formulation**: Ipopt handles equality constraints well. When possible, formulate constraints as equalities rather than pairs of inequalities.

6. **Warm Starting**: When solving a sequence of similar problems, use the solution from the previous problem as the initial point for the next.

## References

For more detailed information about Ipopt's algorithms and options, consult:

- [Ipopt Documentation](https://coin-or.github.io/Ipopt/)
- [Ipopt Options Reference](https://coin-or.github.io/Ipopt/OPTIONS.html)
- [Ipopt Implementation Paper](https://link.springer.com/article/10.1007/s10107-004-0559-y)

lib/OptimizationIpopt/Project.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 name = "OptimizationIpopt"
 uuid = "43fad042-7963-4b32-ab19-e2a4f9a67124"
 authors = ["Sebastian Micluța-Câmpeanu <[email protected]> and contributors"]
-version = "0.1.0"
+version = "0.1.1"

 [deps]
 Ipopt = "b6b21f68-93f8-5de0-b562-5493be1d77c9"
```

lib/OptimizationIpopt/src/OptimizationIpopt.jl

Lines changed: 1 addition & 1 deletion

```diff
@@ -104,7 +104,7 @@ function __map_optimizer_args(cache,
         Ipopt.AddIpoptIntOption(prob, "max_iter", maxiters)
     end
     if !isnothing(maxtime)
-        Ipopt.AddIpoptNumOption(prob, "max_cpu_time", maxtime)
+        Ipopt.AddIpoptNumOption(prob, "max_wall_time", float(maxtime))
     end
     if !isnothing(reltol)
         Ipopt.AddIpoptNumOption(prob, "tol", reltol)
```
