Commit 3b2d34e

a few fixes
1 parent 50b060c commit 3b2d34e

3 files changed: 30 additions, 38 deletions

docs/Project.toml

Lines changed: 2 additions & 0 deletions

@@ -21,6 +21,8 @@ JLD2 = "033835bb-8acc-5ee8-8aae-3f567f8a3819"
 JumpProblemLibrary = "faf0f6d7-8cee-47cb-b27c-1eb80cef534e"
 ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
 ODEProblemLibrary = "fdc4e326-1af4-4b90-96e7-779fcce2daa5"
+Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
+OptimizationNLopt = "4e6fcdb7-1186-4e1f-a706-475e75c168bb"
 OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
 Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
 SDEProblemLibrary = "c72e72a9-a271-4b2b-8966-303ed956772e"
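
For anyone reproducing this environment change locally, a sketch of the equivalent Pkg commands (not part of the commit; Pkg resolves the package names and writes the `[deps]` UUID entries shown above automatically):

```julia
# Sketch: adding the two new docs dependencies from the Julia REPL.
using Pkg
Pkg.activate("docs")
Pkg.add(["Optimization", "OptimizationNLopt"])
```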

docs/src/examples/diffusion_implicit_heat_equation.md

Lines changed: 1 addition & 1 deletion

@@ -172,7 +172,7 @@ prob = SplitODEProblem(
     tspan,
     params
 )
-alg = IMEXEuler(linsolve=LinSolveFactorize(lu!))
+alg = IMEXEuler()
 println("Solving...")
 sol = solve(
     prob,
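
The removed keyword used the retired pre-LinearSolve.jl linear solver interface, and `IMEXEuler()`'s default now picks a suitable factorization on its own. If one still wanted to pin the dense LU explicitly, the modern equivalent would look roughly like this (a sketch assuming OrdinaryDiffEq v6+ with LinearSolve.jl; not part of the commit):

```julia
# Sketch only: the LinearSolve.jl-era way to request a dense LU factorization.
# IMEXEuler()'s default linear solver is typically equivalent for this problem.
using OrdinaryDiffEq, LinearSolve
alg = IMEXEuler(linsolve = LUFactorization())
```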

docs/src/examples/min_and_max.md

Lines changed: 27 additions & 37 deletions
@@ -2,7 +2,9 @@

 ### Setup

-In this tutorial we will show how to use Optim.jl to find the maxima and minima of solutions. Let's take a look at the double pendulum:
+In this tutorial we will show how to use
+[Optimization.jl](https://docs.sciml.ai/Optimization/stable/) to find the maxima and minima
+of solutions. Let's take a look at the double pendulum:

 ```@example minmax
 #Constants and setup
@@ -52,25 +54,31 @@ Let's fine out what some of the local maxima and minima are. Optim.jl can be use
 f = (t) -> sol(t,idxs=4)
 ```

-`first(t)` is the same as `t[1]` which transforms the array of size 1 into a number. `idxs=4` is the same as `sol(first(t))[4]` but does the calculation without a temporary array and thus is faster. To find a local minima, we can simply call Optim on this function. Let's find a local minimum:
+`first(t)` is the same as `t[1]` which transforms the array of size 1 into a number. `idxs=4` is the same as `sol(first(t))[4]` but does the calculation without a temporary array and thus is faster. To find a local minimum, we can solve the optimization problem where the loss
+function is `f`:

 ```@example minmax
-using Optim
-opt = optimize(f,18.0,22.0)
+using Optimization, OptimizationNLopt
+optf = OptimizationFunction(f, AutoForwardDiff())
+min_guess = 18.0
+optprob = OptimizationProblem(optf, min_guess)
+opt = solve(optprob, NLopt.LD_LBFGS())
 ```

-From this printout we see that the minimum is at `t=18.63` and the value is `-2.79e-2`. We can get these in code-form via:
+From this printout we see that the minimum is at `t=18.63` and the value is `-2.79e-2`. We
+can get these in code-form via:

 ```@example minmax
-println(opt.minimizer)
-println(opt.minimum)
+println(opt.u)
 ```

 To get the maximum, we just minimize the negative of the function:

 ```@example minmax
-f = (t) -> -sol(first(t),idxs=4)
-opt2 = optimize(f,0.0,22.0)
+optf = OptimizationFunction(f, AutoForwardDiff())
+min_guess = 22.0
+optprob2 = OptimizationProblem(optf, min_guess)
+opt2 = solve(optprob2, NLopt.LD_LBFGS())
 ```

 Let's add the maxima and minima to the plots:
@@ -81,39 +89,21 @@ scatter!([opt.minimizer],[opt.minimum],label="Local Min")
 scatter!([opt2.minimizer],[-opt2.minimum],label="Local Max")
 ```

-Brent's method will locally minimize over the full interval. If we instead want a local maxima nearest to a point, we can use `BFGS()`. In this case, we need to optimize a vector `[t]`, and thus dereference it to a number using `first(t)`.
-
-```@example minmax
-f = (t) -> -sol(first(t),idxs=4)
-opt = optimize(f,[20.0],BFGS())
-```
-
 ### Global Optimization

-If we instead want to find global maxima and minima, we need to look somewhere else. For this there are many choices. A pure Julia option is BlackBoxOptim.jl, but I will use NLopt.jl. Following the NLopt.jl tutorial but replacing their function with out own:
+If we instead want to find global maxima and minima, we need to look somewhere else. For
+this there are many choices. A pure Julia option is to use the
+[BlackBoxOptim solvers within Optimization.jl](https://docs.sciml.ai/Optimization/stable/optimization_packages/blackboxoptim/),
+but I will continue the story with the OptimizationNLopt methods. To do this, we simply
+swap to one of the
+[global optimizers in the list](https://docs.sciml.ai/Optimization/stable/optimization_packages/nlopt/).
+Let's try `GN_ORIG_DIRECT_L`:

 ```@example minmax
-import NLopt, ForwardDiff
-
-count = 0 # keep track of # function evaluations
+opt = solve(optprob, NLopt.GN_ORIG_DIRECT_L())
+opt2 = solve(optprob2, NLopt.GN_ORIG_DIRECT_L())

-function g(t::Vector, grad::Vector)
-    if length(grad) > 0
-        #use ForwardDiff for the gradients
-        grad[1] = ForwardDiff.derivative((t)->sol(first(t),idxs=4),t)
-    end
-    sol(first(t),idxs=4)
-end
-opt = NLopt.Opt(:GN_ORIG_DIRECT_L, 1)
-NLopt.lower_bounds!(opt, [0.0])
-NLopt.upper_bounds!(opt, [40.0])
-NLopt.xtol_rel!(opt,1e-8)
-NLopt.min_objective!(opt, g)
-(minf,minx,ret) = NLopt.optimize(opt,[20.0])
-println(minf," ",minx," ",ret)
-NLopt.max_objective!(opt, g)
-(maxf,maxx,ret) = NLopt.optimize(opt,[20.0])
-println(maxf," ",maxx," ",ret)
+@show opt.u, opt2.u
 ```

 ```@example minmax
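
One editorial note on the new tutorial code: as committed, `f = (t) -> sol(t,idxs=4)` keeps the one-argument form while Optimization.jl objectives take `(u, p)`; the maximum search no longer negates the objective; the unchanged plotting lines still read `opt.minimizer`/`opt.minimum` rather than `opt.u`/`opt.objective`; and `GN_ORIG_DIRECT_L` is a global method that needs box bounds on the `OptimizationProblem`. A minimal self-contained sketch of the intended workflow under those assumptions (variable names here are illustrative, not from the commit):

```julia
# Sketch of the workflow this commit appears to be moving toward; assumes
# `sol` is the double-pendulum ODESolution built earlier in the tutorial.
# Names like `optf_min` are illustrative, not from the commit.
using Optimization, OptimizationNLopt

f = (t, p) -> sol(first(t), idxs = 4)   # Optimization.jl objectives take (u, p)
optf_min = OptimizationFunction(f, Optimization.AutoForwardDiff())
optf_max = OptimizationFunction((t, p) -> -f(t, p), Optimization.AutoForwardDiff())

# Local searches from a starting guess; u0 is a 1-element vector.
opt = solve(OptimizationProblem(optf_min, [18.0]), NLopt.LD_LBFGS())
opt2 = solve(OptimizationProblem(optf_max, [22.0]), NLopt.LD_LBFGS())
println(opt.u)          # minimizer, replacing opt.minimizer
println(opt.objective)  # minimum value, replacing opt.minimum

# The global DIRECT method requires box bounds on t.
gprob_min = OptimizationProblem(optf_min, [20.0]; lb = [0.0], ub = [40.0])
gprob_max = OptimizationProblem(optf_max, [20.0]; lb = [0.0], ub = [40.0])
gopt = solve(gprob_min, NLopt.GN_ORIG_DIRECT_L())
gopt2 = solve(gprob_max, NLopt.GN_ORIG_DIRECT_L())
@show gopt.u, gopt2.u
```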
