# Getting Started with Optimization.jl

In this tutorial, we introduce the basics of Optimization.jl by showing
how to easily mix local and global optimizers on the Rosenbrock equation.
The simplest copy-pasteable code, using a quasi-Newton method (LBFGS) to solve the Rosenbrock problem, is the following:

```@example intro
# Import the package (with Zygote for automatic differentiation) and define the problem
using Optimization, Zygote
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
u0 = zeros(2)
p = [1.0, 100.0]

# Wrap the objective in an OptimizationFunction so gradients come from Zygote
optf = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optf, u0, p)

# Solve with the built-in LBFGS quasi-Newton method
sol = solve(prob, Optimization.LBFGS())
```
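As a quick sanity check, the returned solution object follows the standard SciML interface, so the minimizer can be inspected directly (the Rosenbrock minimum is at `(1.0, 1.0)`):

```@example intro
# The minimizer found by LBFGS (should be near [1.0, 1.0])
sol.u
```

The objective value at the minimizer and the convergence status are likewise available as `sol.objective` and `sol.retcode`.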

## Import a different solver package and solve the problem

Optimization.jl is the core glue package that holds all the common pieces, but specific solvers live in wrapper packages: OptimizationOptimJL wraps [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) and OptimizationBBO wraps [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl).

First, let's use `NelderMead`, a derivative-free solver from Optim.jl:

```@example intro
using OptimizationOptimJL
sol = solve(prob, Optim.NelderMead())
```
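Whichever solver is used, it is worth confirming that the optimizer actually converged before trusting the result; the return code on the solution object reports this:

```@example intro
# ReturnCode.Success indicates the optimizer converged
sol.retcode
```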

BlackBoxOptim.jl offers derivative-free global optimization solvers that require the bounds to be set via `lb` and `ub` in the `OptimizationProblem`. Let's use the `BBO_adaptive_de_rand_1_bin_radiuslimited()` solver:

```@example intro
using OptimizationBBO
prob = OptimizationProblem(rosenbrock, u0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
```

The solution from the original solver can always be obtained via `original`:
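For example, for the BlackBoxOptim solve above, this returns the underlying BlackBoxOptim result object:

```@example intro
sol.original
```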