
Commit 0e00937

Commit message: up
1 parent 67dae03 commit 0e00937

File tree: 1 file changed (+7, −7 lines)


docs/src/inverse_problems/behaviour_optimization.md

Lines changed: 7 additions & 7 deletions
@@ -2,7 +2,7 @@
 In previous tutorials we have described how to use [PEtab.jl](@ref petab_parameter_fitting) and [Optimization.jl](@ref optimization_parameter_fitting) for parameter fitting. This involves solving an optimisation problem (to find the parameter set yielding the best model-to-data fit). There are, however, other situations that require solving optimisation problems. Typically, these involve the creation of a custom cost function, whose optimum can then be found using Optimization.jl. In this tutorial we will describe this process, demonstrating how parameter space can be searched to find values that achieve a desired system behaviour. A more thorough description of how to solve these problems is provided by [Optimization.jl's documentation](https://docs.sciml.ai/Optimization/stable/) and the literature [^1].
 
 ## Maximising the pulse amplitude of an incoherent feedforward loop
-Incoherent feedforward loops (network motifs where a single component both activates and deactivates a downstream component) are able to generate pulses in response to step inputs [^2]. In this tutorial we will consider such an incoherent feedforward loop, attempting to generate a system with as prominent a response pulse as possible. Our model consists of 3 species: $X$ (the input node), $Y$ (an intermediary), and $Z$ (the output node). In it, $X$ activates the production of both $Y$ and $Z$, with $Y$ also deactivating $Z$. When $X$ is activated, there will be a brief time window where $Y$ is still inactive, and $Z$ is activated. However, as $Y$ becomes active, it will turn $Z$ off, creating a pulse.
+Incoherent feedforward loops (network motifs where a single component both activates and deactivates a downstream component) are able to generate pulses in response to step inputs [^2]. In this tutorial we will consider such an incoherent feedforward loop, attempting to generate a system with as prominent a response pulse as possible. Our model consists of 3 species: $X$ (the input node), $Y$ (an intermediary), and $Z$ (the output node). In it, $X$ activates the production of both $Y$ and $Z$, with $Y$ also deactivating $Z$. When $X$ is activated, there will be a brief time window where $Y$ is still inactive, and $Z$ is activated. However, as $Y$ becomes active, it will turn $Z$ off. This creates a pulse of $Z$ activity.
 ```@example behaviour_optimization
 using Catalyst
 incoherent_feed_forward = @reaction_network begin
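
[Editorial aside: the diff truncates the `@reaction_network` block here. Below is a minimal sketch of how such an incoherent feedforward loop could be written in Catalyst; the parameter names (`pX`, `pY`, `pZ`) and the Hill repression term are assumptions, not the file's actual code.]

```julia
using Catalyst

# Hypothetical sketch of the truncated model (names and rates are assumptions).
iffl_sketch = @reaction_network begin
    pX, 0 --> X                           # production of the input X
    pY*X, 0 --> Y                         # X activates production of Y
    pZ*X*hillr(Y, 1.0, 1.0, 3), 0 --> Z   # X activates Z; Y represses Z (the incoherent arm)
    1.0, (X, Y, Z) --> 0                  # first-order degradation of all species
end
```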
@@ -40,7 +40,7 @@ function pulse_amplitude(p, _)
 end
 nothing # hide
 ```
-The cost function takes two arguments (a parameter value `p`, and an additional one which we will ignore here). It first calculates the new initial steady concentration (for the given parameter set), and then creates an updated `ODEProblem` using it as initial conditions and the, to the cost function provided, input parameter set. While we could create a new `ODEProblem` within the cost function, cost functions are often called a large number of times during the optimisation process (making performance important). Here, using [`remake` on a previously created `ODEProblem`](@id simulation_structure_interfacing_remake) is more performant than creating a new one. Next, we simulate our, remembering to use the activation callback. Just like [when using Optimization.jl to fit parameters](), we use the `verbose=false` options to prevent unnecessary printouts, and a reduced `maxiters` value to reduce time spent simulating (to the model) unsuitable parameter sets. We also use `SciMLBase.successful_retcode(sol)` to check whether the simulation return code indicates a successful simulation (and if it did not, returns a large cost function value). Finally, Optimization.jl finds the function's *minimum value*, so to find the *maximum* relative pulse amplitude, we make our cost function return the negation of that value.
+This cost function takes two arguments (a parameter value `p`, and an additional one which we will ignore here but discuss later). It first calculates the new initial steady-state concentration (for the given parameter set), and then creates an updated `ODEProblem` using it as the initial condition, together with the input parameter set provided to the cost function. While we could create a new `ODEProblem` within the cost function, cost functions are often called a large number of times during the optimisation process (making performance important). Here, using [`remake` on a previously created `ODEProblem`](@ref simulation_structure_interfacing_remake) is more performant than creating a new one. Next, we simulate our model (remembering to use the activation callback). Just like [when using Optimization.jl to fit parameters to data](@ref optimization_parameter_fitting), we use the `verbose = false` option to prevent unnecessary printouts, and a reduced `maxiters` value to reduce the time spent simulating parameter sets that are unsuitable for the model. We also use `SciMLBase.successful_retcode(sol)` to check whether the simulation's return code indicates a successful simulation (and if it did not, we return a large cost value). Finally, Optimization.jl finds the function's *minimum value*, so to find the *maximum* relative pulse amplitude, we make our cost function return the negative pulse amplitude (a schematic sketch of such a cost function follows below).
 
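[Editorial aside: the diff shows only the tail of `pulse_amplitude`. A schematic sketch matching the description above; `base_oprob` (a previously created `ODEProblem`), `activation_cb` (the activation callback), and the helper `initial_steady_state` are assumed names, not the file's actual code.]

```julia
import SciMLBase
using OrdinaryDiffEq

# Schematic cost function (assumed names throughout).
function pulse_amplitude_sketch(p, _)
    u0 = initial_steady_state(p)          # hypothetical helper: pre-activation steady state
    prob = remake(base_oprob; u0, p)      # cheaper than creating a new ODEProblem
    sol = solve(prob, Tsit5(); callback = activation_cb,
                verbose = false, maxiters = 10000)
    SciMLBase.successful_retcode(sol) || return Inf  # large cost for failed simulations
    Z0 = sol.u[1][3]                      # Z's pre-pulse (steady-state) value
    return -(maximum(sol[:Z]) - Z0) / Z0  # negative relative pulse amplitude
end
```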
 Just like for [parameter fitting](@ref optimization_parameter_fitting), we create an `OptimizationProblem` using our cost function, and some initial guess of the parameter value. We also set upper and lower bounds for each parameter using the `lb` and `ub` optional arguments (in this case limiting each parameter's value to the interval $(0.1, 10.0)$).
 ```@example behaviour_optimization
@@ -49,9 +49,9 @@ initial_guess = [1.0, 1.0, 1.0]
 opt_prob = OptimizationProblem(pulse_amplitude, initial_guess; lb = [1e-1, 1e-1, 1e-1], ub = [1e1, 1e1, 1e1])
 ```
 !!! note
-    As described in a [previous section on Optimization.jl](), `OptimizationProblem`s do not support setting parameter values using maps. We must instead set `initial_guess`'s values using a vector. Here, the i'th value corresponds to the value of the i'th parameter in the `parameters(incoherent_feed_forward)` vector.
+    As described in a [previous section on Optimization.jl](@ref optimization_parameter_fitting), `OptimizationProblem`s do not support setting parameter values using maps. We must instead set `initial_guess`'s values using a vector. Here, the i'th value corresponds to the value of the i'th parameter in the `parameters(incoherent_feed_forward)` vector.
 
-As [previously described](), Optimization.jl supports a wide range of optimisation algorithms. Here we use one from [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl)
+As [previously described](@ref optimization_parameter_fitting), Optimization.jl supports a wide range of optimisation algorithms. Here we use one from [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl):
 ```@example behaviour_optimization
 using OptimizationBBO
 opt_sol = solve(opt_prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
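
[Editorial aside: the parameter ordering that the note above refers to can be checked directly:]

```julia
# Lists the parameters in the order in which `initial_guess`'s entries are interpreted.
parameters(incoherent_feed_forward)
```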
@@ -71,14 +71,14 @@ For this model, it turns out that $Z$'s maximum pulse amplitude is equal to twic
 There are several modifications to our problem under which it would actually have parameters. E.g. our model might have additional parameters (e.g. a degradation rate) which we would like to keep fixed throughout the optimisation process. If we then wished to run the optimisation for several different values of these fixed parameters, we could make them parameters of our `OptimizationProblem` (with their values provided as a third argument, after `initial_guess`; see the editorial sketch below).
 
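[Editorial sketch of the fixed-parameter variant described above. The value `0.1`, and reusing `pulse_amplitude`'s second (currently ignored) argument to carry the fixed parameter, are assumptions:]

```julia
# A fixed value (e.g. a degradation rate) is supplied as the third positional
# argument; the cost function receives it as its second argument.
fixed_degradation = 0.1
opt_prob_fixed = OptimizationProblem(pulse_amplitude, initial_guess, fixed_degradation;
                                     lb = [1e-1, 1e-1, 1e-1], ub = [1e1, 1e1, 1e1])
```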
 ## Utilising automatic differentiation
-Optimisation methods can be divided into differentiation-free and differentiation-based optimisation methods. E.g. consider finding the minimum of the function $f(x) = x^2$, given some initial condition. Here, we can simply compute the differential and descend along it until we find $x=0$ (admittedly, for this simple problem the minimum can be computed directly). This principle forms the basis of optimisation methods such as gradient descent, which utilises information of a function's differential to minimise it. When attempting to find a global minimum, to avoid getting stuck in local minimums, these methods are often augmented by additional routines. While the differention of most algebraic functions is trivial, it turns out that even complicated functions (such as the one we used above) can be differentiated through the use of [*automatic differentiation* (AD)](https://en.wikipedia.org/wiki/Automatic_differentiation).
+Optimisation methods can be divided into differentiation-free and differentiation-based methods. E.g. consider finding the minimum of the function $f(x) = x^2$, given some initial guess of $x$. Here, we can simply compute the derivative and descend along it until we find $x=0$ (admittedly, for this simple problem the minimum can be computed directly). This principle forms the basis of optimisation methods such as gradient descent, which utilise information about a function's derivatives to minimise it. When attempting to find a global minimum, these methods are often augmented by additional routines to avoid getting stuck in local minima. While the differentiation of most algebraic functions is trivial, it turns out that even complicated functions (such as the one we used above) can be differentiated computationally through the use of [*automatic differentiation* (AD)](https://en.wikipedia.org/wiki/Automatic_differentiation) (a toy illustration follows after this diff).
 
-Through packages such as [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl), [ReverseDiff.jl](https://github.com/JuliaDiff/ReverseDiff.jl), and [Zygote.jl](https://github.com/FluxML/Zygote.jl), Julia supports AD for most code. Specifically for code including simulation of differential equations, differentiation is supported by [SciMLSensitivity.jl](https://github.com/SciML/SciMLSensitivity.jl). Generally, AD can be used without specific knowledge from the user, however, it requires an additional step in the construction of our `OptimizationProblem`. Here, we create a [specialised `OptimizationFunction` from our cost function](https://docs.sciml.ai/Optimization/stable/API/optimization_function/#optfunction). To it, we will also provide our choice of AD method. There are [several alternatives](https://docs.sciml.ai/Optimization/stable/API/optimization_function/#Automatic-Differentiation-Construction-Choice-Recommendations), and in our case we will use `AutoForwardDiff()` (a good choice for small optimisation problems). We then create a new `OptimizationProblem` using our updated cast function:
+Through packages such as [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl), [ReverseDiff.jl](https://github.com/JuliaDiff/ReverseDiff.jl), and [Zygote.jl](https://github.com/FluxML/Zygote.jl), Julia supports AD for most code. Specifically for code including simulation of differential equations, differentiation is supported by [SciMLSensitivity.jl](https://github.com/SciML/SciMLSensitivity.jl). Generally, AD can be used without specific knowledge from the user; however, it requires an additional step in the construction of our `OptimizationProblem`. Here, we create a [specialised `OptimizationFunction` from our cost function](https://docs.sciml.ai/Optimization/stable/API/optimization_function/#optfunction). To it, we will also provide our choice of AD method. There are [several alternatives](https://docs.sciml.ai/Optimization/stable/API/optimization_function/#Automatic-Differentiation-Construction-Choice-Recommendations), and in our case we will use `AutoForwardDiff()` (a good choice for small optimisation problems). We can then create a new `OptimizationProblem` using our updated cost function:
 ```@example behaviour_optimization
 opt_func = OptimizationFunction(pulse_amplitude, AutoForwardDiff())
 opt_prob = OptimizationProblem(opt_func, initial_guess; lb = [1e-1, 1e-1, 1e-1], ub = [1e1, 1e1, 1e1])
 ```
-Finally, we can find the optimum using some differentiation-based optimisation methods. Here we will use [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl)'s `BFGS` method
+Finally, we can find the optimum using some differentiation-based optimisation method. Here we will use [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl)'s `BFGS` method:
 ```@example behaviour_optimization
 using OptimizationOptimJL
 opt_sol = solve(opt_prob, OptimizationOptimJL.BFGS())
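
[Editorial aside: a self-contained toy illustration of the gradient-descent idea on $f(x) = x^2$, using ForwardDiff.jl directly:]

```julia
using ForwardDiff

# Plain gradient descent on f(x) = x^2: repeatedly step against the derivative.
f(x) = x^2
xmin = let x = 2.0
    for _ in 1:100
        x -= 0.1 * ForwardDiff.derivative(f, x)
    end
    x
end
xmin  # ≈ 0.0, the minimizer of f
```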
