2 changes: 1 addition & 1 deletion docs/pages.jl
@@ -24,7 +24,7 @@ pages = ["index.md",
"API/modelingtoolkit.md",
"API/FAQ.md"
],
"Optimizer Packages" => [
"Optimizer Packages" => [
"BlackBoxOptim.jl" => "optimization_packages/blackboxoptim.md",
"CMAEvolutionStrategy.jl" => "optimization_packages/cmaevolutionstrategy.md",
"Evolutionary.jl" => "optimization_packages/evolutionary.md",
6 changes: 3 additions & 3 deletions docs/src/getting_started.md
@@ -37,7 +37,7 @@ Tada! That's how you do it. Now let's dive in a little more into what each part

## Understanding the Solution Object

The solution object is a `SciMLBase.AbstractNoTimeSolution`, and thus it follows the
[SciMLBase Solution Interface for non-timeseries objects](https://docs.sciml.ai/SciMLBase/stable/interfaces/Solutions/) and is documented at the [solution type page](@ref solution).
However, for simplicity let's show a bit of it in action.

@@ -61,13 +61,13 @@ rosenbrock(sol.u, p)
sol.objective
```
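
The minimizer itself is stored in `sol.u`; as a small additional check (reusing the same example problem):

```@example intro
sol.u
```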

The `sol.retcode` gives us more information about the solution process.

```@example intro
sol.retcode
```

Here it says `ReturnCode.Success`, which means that the problem was solved successfully. We can learn more about the different return codes at
[the ReturnCode part of the SciMLBase documentation](https://docs.sciml.ai/SciMLBase/stable/interfaces/Solutions/#retcodes).

If we are interested in some of the statistics of the solving process, for example to help choose a better solver, we can investigate the `sol.stats` field:
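
A minimal sketch of what that looks like (the exact fields reported depend on the chosen solver):

```@example intro
sol.stats
```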
10 changes: 5 additions & 5 deletions docs/src/optimization_packages/pycma.md
@@ -31,9 +31,9 @@ sol = solve(prob, PyCMAOpt())

## Passing solver-specific options

Any keyword that `Optimization.jl` does not interpret is forwarded directly to PyCMA.

In the event an `Optimization.jl` keyword overlaps with a `PyCMA` keyword, the `Optimization.jl` keyword takes precedence.
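
As a hedged illustration (the option names here are assumptions, not taken from this page): `maxiters` is a standard `Optimization.jl` keyword and is handled by the wrapper, while an option it does not recognize, such as PyCMA's `popsize`, is forwarded unchanged.

```julia
# Sketch: `maxiters` is an Optimization.jl keyword and is consumed by the wrapper;
# `popsize` is assumed here to be a PyCMA option and is passed through as-is.
sol = solve(prob, PyCMAOpt(), maxiters = 500, popsize = 24)
```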

An exhaustive list of keyword arguments can be found by running the following Python script:

@@ -44,6 +44,7 @@ print(options)
```

An example passing the `PyCMA` keywords `verbose` and `seed`:

```julia
sol = solve(prob, PyCMAOpt(), verbose = -9, seed = 42)
```
@@ -54,10 +55,9 @@ The original Python result object is attached to the solution in the `original`

```julia
sol = solve(prob, PyCMAOpt())
println(sol.original)
```

## Contributing

Bug reports and feature requests are welcome in the [Optimization.jl](https://github.com/SciML/Optimization.jl) issue tracker. Pull requests that improve either the Julia wrapper or the documentation are highly appreciated.
48 changes: 24 additions & 24 deletions docs/src/optimization_packages/scipy.md
@@ -3,6 +3,7 @@
[`SciPy`](https://scipy.org/) is a mature Python library that offers a rich family of optimization, root–finding and linear‐programming algorithms. `OptimizationSciPy.jl` gives access to these routines through the unified `Optimization.jl` interface just like any native Julia optimizer.

!!! note

`OptimizationSciPy.jl` relies on [`PythonCall`](https://github.com/cjdoris/PythonCall.jl). A minimal Python distribution containing SciPy will be installed automatically on first use, so no manual Python set-up is required.

## Installation: OptimizationSciPy.jl
@@ -20,37 +21,37 @@ Below is a catalogue of the solver families exposed by `OptimizationSciPy.jl` to

#### Derivative-Free

- `ScipyNelderMead()` – Simplex Nelder–Mead algorithm
- `ScipyPowell()` – Powell search along conjugate directions
- `ScipyCOBYLA()` – Linear approximation of constraints (supports nonlinear constraints)

#### Gradient-Based

- `ScipyCG()` – Non-linear conjugate gradient
- `ScipyBFGS()` – Quasi-Newton BFGS
- `ScipyLBFGSB()` – Limited-memory BFGS with simple bounds
- `ScipyNewtonCG()` – Newton-conjugate gradient (requires Hessian-vector products)
- `ScipyTNC()` – Truncated Newton with bounds
- `ScipySLSQP()` – Sequential least-squares programming (supports constraints)
- `ScipyTrustConstr()` – Trust-region method for non-linear constraints

#### Hessian-Based / Trust-Region

- `ScipyDogleg()`, `ScipyTrustNCG()`, `ScipyTrustKrylov()`, `ScipyTrustExact()` – Trust-region algorithms that optionally use or build Hessian information

### Global Optimizer

- `ScipyDifferentialEvolution()` – Differential evolution (requires bounds)
- `ScipyBasinhopping()` – Basin-hopping with local search
- `ScipyDualAnnealing()` – Dual annealing simulated annealing
- `ScipyShgo()` – Simplicial homology global optimisation (supports constraints)
- `ScipyDirect()` – Deterministic `DIRECT` algorithm (requires bounds)
- `ScipyBrute()` – Brute-force grid search (requires bounds)
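
Several of the global methods above require box bounds. A minimal sketch of a bounded solve with `ScipyDifferentialEvolution()` (the objective and bound values are illustrative, not taken from this page):

```julia
using Optimization, OptimizationSciPy

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

# Differential evolution is derivative-free, so no AD backend is needed,
# but it does require finite lower/upper bounds on every variable.
f = OptimizationFunction(rosenbrock)
prob = OptimizationProblem(f, x0, p; lb = [-2.0, -2.0], ub = [2.0, 2.0])
sol = solve(prob, ScipyDifferentialEvolution())
```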

### Linear & Mixed-Integer Programming

- `ScipyLinprog("highs")` – LP solvers from the HiGHS project and legacy interior-point/simplex methods
- `ScipyMilp()` – Mixed-integer linear programming via HiGHS branch-and-bound

### Root Finding & Non-Linear Least Squares *(experimental)*

@@ -65,9 +66,9 @@
using Optimization, OptimizationSciPy
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
p = [1.0, 100.0]

f = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
f = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
prob = OptimizationProblem(f, x0, p)

sol = solve(prob, ScipyBFGS())
Expand All @@ -85,7 +86,7 @@ obj(x, p) = (x[1] + x[2] - 1)^2
# Single non-linear constraint: x₁² + x₂² ≈ 1 (with small tolerance)
cons(res, x, p) = (res .= [x[1]^2 + x[2]^2 - 1.0])

x0 = [0.5, 0.5]
x0 = [0.5, 0.5]
prob = OptimizationProblem(
OptimizationFunction(obj; cons = cons),
x0, nothing, lcons = [-1e-6], ucons = [1e-6]) # Small tolerance instead of exact equality
@@ -129,5 +130,4 @@ If SciPy raises an error it is re-thrown as a Julia `ErrorException` carrying th

## Contributing

Bug reports and feature requests are welcome in the [Optimization.jl](https://github.com/SciML/Optimization.jl) issue tracker. Pull requests that improve either the Julia wrapper or the documentation are highly appreciated.
26 changes: 13 additions & 13 deletions docs/src/tutorials/reusage_interface.md
@@ -1,6 +1,5 @@
# Optimization Problem Reusage and Caching Interface


## Reusing Optimization Caches with `reinit!`

The `reinit!` function allows you to efficiently reuse an existing optimization cache with new parameters or initial values. This is particularly useful when solving similar optimization problems repeatedly with different parameter values, as it avoids the overhead of creating a new cache from scratch.
@@ -30,12 +29,12 @@ sol2 = Optimization.solve!(cache)

The `reinit!` function supports updating various fields of the optimization cache; a short usage sketch follows the list below:

- `u0`: New initial values for the optimization variables
- `p`: New parameter values
- `lb`: New lower bounds (if applicable)
- `ub`: New upper bounds (if applicable)
- `lcons`: New lower bounds for constraints (if applicable)
- `ucons`: New upper bounds for constraints (if applicable)
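
A minimal sketch, continuing from the `cache` created above (the new values are illustrative):

```julia
# Update the starting point and parameters in the existing cache, then re-solve.
# `reinit!` mutates the cache and returns it, so the call can be chained.
cache = Optimization.reinit!(cache; u0 = [0.5, 0.5], p = [2.0])
sol_new = Optimization.solve!(cache)
```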

### Example: Parameter Sweep

@@ -75,12 +74,13 @@ end
### Performance Benefits

Using `reinit!` is more efficient than creating a new problem and cache for each parameter value, especially when:

- The optimization algorithm maintains internal state that can be reused
- The problem structure remains the same (only parameter values change)

### Notes

- The `reinit!` function modifies the cache in-place and returns it for convenience
- Not all fields need to be specified; only provide the ones you want to update
- The function is particularly useful in iterative algorithms, parameter estimation, and when solving families of related optimization problems
- For creating a new problem with different parameters (rather than modifying a cache), use `remake` on the `OptimizationProblem` instead
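
As a sketch of that last point (the parameter value is illustrative):

```julia
# `remake` builds a fresh OptimizationProblem with updated fields,
# leaving the original `prob` and any existing cache untouched.
prob2 = remake(prob; p = [3.0])
```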
4 changes: 2 additions & 2 deletions lib/OptimizationBBO/src/OptimizationBBO.jl
@@ -30,12 +30,12 @@ function decompose_trace(opt::BlackBoxOptim.OptRunController, progress)
if iszero(max_time)
# we stop at either convergence or max_steps
n_steps = BlackBoxOptim.num_steps(opt)
Base.@logmsg(Base.LogLevel(-1), msg, progress=n_steps/maxiters,
_id=:OptimizationBBO)
else
# we stop at either convergence or max_time
elapsed = BlackBoxOptim.elapsed_time(opt)
Base.@logmsg(Base.LogLevel(-1), msg, progress=elapsed/max_time,
_id=:OptimizationBBO)
end
end