Callbacks are functions that are called at certain points during the optimization process. They can be used to monitor progress, log information, or implement custom stopping criteria. Callbacks are called each **iteration** of an algorithm. By iteration, we mean each time the algorithm updates its current estimate of the solution and checks for convergence. This structure is not necessarily uniquely defined for all algorithms. For example, we could in principle call the callback function within the line search algorithm, or for each sampled point in a derivative-free algorithm.
### Callback Function Example
We show a simple example of a callback function that prints the current iteration number and objective value at each iteration.
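The sketch below illustrates the idea, assuming the standard `Optim.Options(callback = ...)` mechanism and that the object passed to the callback exposes `iteration` and `value` fields (as the per-iteration trace states in Optim.jl do); the function and starting point are arbitrary illustrations.

```julia
using Optim

# Callback: print the iteration number and current objective value.
# Returning `true` would stop the optimization early.
function progress_callback(state)
    println("iteration ", state.iteration, ": f(x) = ", state.value)
    return false
end

# Classic Rosenbrock test function as an example objective
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

result = optimize(rosenbrock, zeros(2), NelderMead(),
                  Optim.Options(callback = progress_callback))
```

The callback is invoked once per iteration, so for long runs you may want to print only every `n`-th iteration or log to a file instead.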
Each algorithm in Optim.jl maintains an optimization state that encapsulates all relevant information about the current iteration of the optimization process. This state is represented by the sub-types of `Optim.OptimizationState` and contains various fields that provide insights into the progress of the optimization and any information needed to maintain and update the search direction.
### Exceptions
Currently, there are two main exceptions to this structure:
- **SAMIN**: This algorithm is currently not written using the main `optimize` loop and does not maintain an `OptimizationState`.
- **Univariate Optimization Algorithms**: These algorithms do not use the `OptimizationState` structure, as they also do not use the main `optimize` loop.
These exceptions matter mostly for users who want to pre-allocate the `OptimizationState` for performance reasons; in those cases, check the documentation of the specific algorithm to see whether it supports pre-allocation. They also matter for users of the callback functionality, since callback functions receive the `OptimizationState` as an argument. If the algorithm does not use the `OptimizationState`, the callback instead receives a `NamedTuple` with the relevant information, so callback functions should not type-annotate their arguments based on the `OptimizationState` hierarchy.
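A callback that works in both cases can simply leave its argument untyped and access properties defensively; a minimal sketch (the field name `iteration` is an assumption about what both objects expose):

```julia
# Note: no `::OptimizationState` annotation on the argument, so this
# also accepts the NamedTuple passed by SAMIN and the univariate solvers.
function robust_callback(state)
    if hasproperty(state, :iteration)
        println("iteration ", state.iteration)
    end
    return false  # do not stop the optimization
end

# Works with any object that may or may not carry an `iteration` field:
robust_callback((iteration = 3,))
robust_callback((x = [1.0, 2.0],))
```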
### Using the Optimization State
As mentioned above, the optimization state is passed to callback functions during the optimization process. Users can access various fields of the state to monitor progress or implement custom logic based on the current state of the optimization. It is also possible to pre-allocate the optimization state if users wish to reuse it across multiple optimization runs for performance reasons. This can be done using the `initial_state` function, which takes the optimization method, options, differentiable object, and initial parameters as arguments.
```julia
# Pre-allocate the state so it can be reused across runs
optstate = Optim.initial_state(method, options, d, initial_x)

# Verify that the state has the properties f_x and x
hasproperty(optstate, :f_x) # true
hasproperty(optstate, :x) # true

result = optimize(d, initial_x, method, options, optstate)
```
After the optimization is complete, the state has been updated as part of the optimization process and contains information about the final iteration. Users can access fields of the state to retrieve information about the final state. For example, we can verify that the final objective value matches the value stored in the state.
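A self-contained sketch of that check, assuming the internal `Optim.initial_state` constructor and an `f_x` field on the state as described above (the quadratic objective and starting point are arbitrary illustrations):

```julia
using Optim

f(x) = sum(abs2, x)            # simple quadratic objective, minimum at the origin
initial_x = [1.0, 2.0]
method = BFGS()
options = Optim.Options()
d = OnceDifferentiable(f, initial_x)   # differentiable wrapper around f

optstate = Optim.initial_state(method, options, d, initial_x)
result = optimize(d, initial_x, method, options, optstate)

# The state was mutated during optimization; its stored objective value
# should agree with the minimum reported in the result (guarded, since
# not every algorithm's state carries `f_x`).
if hasproperty(optstate, :f_x)
    @assert optstate.f_x ≈ Optim.minimum(result)
end
```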