# Optimization Problem Reuse and Caching Interface

## Reusing Optimization Caches with `reinit!`

The `reinit!` function allows you to efficiently reuse an existing optimization cache with new parameters or initial values. This is particularly useful when solving similar optimization problems repeatedly with different parameter values, as it avoids the overhead of creating a new cache from scratch.

### Basic Usage

```@example reinit
# Create initial problem and cache
using Optimization, OptimizationOptimJL
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
u0 = zeros(2)
p = [1.0, 100.0]

optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, u0, p)

# Initialize cache and solve
cache = Optimization.init(prob, Optim.BFGS())
sol = Optimization.solve!(cache)

# Reinitialize cache with new parameters
cache = Optimization.reinit!(cache; p = [2.0, 50.0])
sol2 = Optimization.solve!(cache)
```

### Supported Arguments

The `reinit!` function supports updating various fields of the optimization cache; a combined sketch follows this list:

- `u0`: New initial values for the optimization variables
- `p`: New parameter values
- `lb`: New lower bounds (if applicable)
- `ub`: New upper bounds (if applicable)
- `lcons`: New lower bounds for constraints (if applicable)
- `ucons`: New upper bounds for constraints (if applicable)

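The sketch below combines several of these keyword arguments in a single `reinit!` call on a box-constrained variant of the problem above. It is illustrative only: the `prob_boxed` and `cache_boxed` names are made up here, `Optim.Fminbox(Optim.BFGS())` is just one solver choice that accepts bounds, and which keywords are meaningful depends on how the cache was constructed.

```julia
# Hypothetical box-constrained setup; names are illustrative, not from the text above
prob_boxed = OptimizationProblem(optf, u0, p; lb = [-1.0, -1.0], ub = [1.5, 1.5])
cache_boxed = Optimization.init(prob_boxed, Optim.Fminbox(Optim.BFGS()))

# Update the start point, parameters, and bounds in one call, then re-solve
cache_boxed = Optimization.reinit!(cache_boxed;
    u0 = [0.5, 0.5],
    p = [2.0, 100.0],
    lb = [-2.0, -2.0],
    ub = [2.0, 2.0])
sol_boxed = Optimization.solve!(cache_boxed)
```
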
### Example: Parameter Sweep

```@example reinit
# Solve for multiple parameter values efficiently by reusing one cache
p_values = [[1.0, 100.0], [2.0, 100.0], [3.0, 100.0]]

# Create the initial cache once
cache = Optimization.init(prob, Optim.BFGS())

# Collect the results locally and return them rather than pushing to a global
function sweep(cache, p_values)
    results = []
    for p in p_values
        cache = Optimization.reinit!(cache; p = p)
        sol = Optimization.solve!(cache)
        push!(results, (p = p, u = sol.u, objective = sol.objective))
    end
    return results
end

results = sweep(cache, p_values)
```

### Example: Updating Initial Values

```julia
# Warm-start the same cache from several different initial points.
# `reinit!` mutates the cache in place, so no reassignment is needed.
u0_values = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]

for u0 in u0_values
    Optimization.reinit!(cache; u0 = u0)
    sol = Optimization.solve!(cache)
    println("Starting from ", u0, " converged to ", sol.u)
end
```

### Performance Benefits

Using `reinit!` is more efficient than creating a new problem and cache for each parameter value (see the sketch after this list), especially when:

- The optimization algorithm maintains internal state that can be reused
- The problem structure remains the same (only parameter values change)

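As a rough illustration of the difference (not a rigorous benchmark), the sketch below compares rebuilding the problem for every parameter value against reusing a single cache with `reinit!`. It reuses `optf` and `u0` from the basic-usage example; the `solve_fresh` and `solve_reused` helpers are made-up names, and actual timings depend on the solver and problem size.

```julia
p_values = [[a, 100.0] for a in range(0.5, 2.0; length = 50)]

# Baseline: construct a fresh problem (and solver state) for every parameter value
function solve_fresh(optf, u0, p_values)
    for p in p_values
        solve(OptimizationProblem(optf, u0, p), Optim.BFGS())
    end
end

# Reuse a single cache across all parameter values
function solve_reused(cache, p_values)
    for p in p_values
        Optimization.reinit!(cache; p = p)
        Optimization.solve!(cache)
    end
end

cache = Optimization.init(OptimizationProblem(optf, u0, first(p_values)), Optim.BFGS())

@time solve_fresh(optf, u0, p_values)
@time solve_reused(cache, p_values)
```
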
### Notes

- The `reinit!` function modifies the cache in place and returns it for convenience
- Not all fields need to be specified; only provide the ones you want to update
- The function is particularly useful in iterative algorithms, parameter estimation, and when solving families of related optimization problems
- For creating a new problem with different parameters (rather than modifying a cache), use `remake` on the `OptimizationProblem` instead, as sketched below
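
For contrast with `reinit!`, here is a minimal sketch of the `remake` route mentioned in the last bullet: it builds a fresh `OptimizationProblem` with updated fields and solves it from scratch instead of mutating an existing cache. It reuses `prob` from the basic-usage example; the new values are arbitrary.

```julia
# Build an independent problem with a new start point and parameters, then solve it
prob2 = remake(prob; u0 = [0.5, 0.5], p = [2.0, 50.0])
sol_remake = solve(prob2, Optim.BFGS())
```

In general, `remake` gives you an independent problem object (for example, to try a different algorithm), while `reinit!` keeps reusing the solver state already stored in the cache.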