Commit b823256

Merge branch 'add-matrices' of github.com:vyudu/Catalyst.jl into add-matrices

pull changes from remote

2 parents: aa12d24 + 2d1c9c8

29 files changed: +573, −395 lines

.github/workflows/Documentation.yml

Lines changed: 7 additions & 8 deletions

```diff
@@ -22,13 +22,13 @@ jobs:
       - name: Checkout
        uses: actions/checkout@v4
 
-      # Install all required X11 and OpenGL libraries
-      # These are needed for GLMakie to work in a headless environment
-      - name: Install binary dependencies
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y xvfb libgl1 mesa-utils freeglut3-dev xorg-dev \
-            libxrandr-dev libxinerama-dev libxcursor-dev libxi-dev libxext-dev
+      # # Install all required X11 and OpenGL libraries
+      # # These are needed for GLMakie to work in a headless environment
+      # - name: Install binary dependencies
+      #   run: |
+      #     sudo apt-get update
+      #     sudo apt-get install -y xvfb libgl1 mesa-utils freeglut3-dev xorg-dev \
+      #       libxrandr-dev libxinerama-dev libxcursor-dev libxi-dev libxext-dev
 
       # # Start Xvfb explicitly instead of using xvfb-run
       # # This gives us more control and visibility into the virtual display setup
@@ -81,4 +81,3 @@ jobs:
           files: lcov.info
           token: ${{ secrets.CODECOV_TOKEN }}
           fail_ci_if_error: false
-
```
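An aside for context (not part of the commit): the step being commented out installed the X11/OpenGL stack that GLMakie needs for headless rendering. Where GLMakie genuinely is required in CI, a common alternative is to run the build under a virtual display, e.g. `xvfb-run -a julia --project=docs docs/make.jl` (a hypothetical invocation, not taken from this workflow). Here the step appears obsolete, as this same commit also drops GLMakie from `docs/Project.toml` (see below).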

Project.toml

Lines changed: 3 additions & 1 deletion

```diff
@@ -32,12 +32,13 @@ BifurcationKit = "0f109fa4-8a5d-4b75-95aa-f515264e7665"
 CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
 GraphMakie = "1ecd5474-83a3-4783-bb4f-06765db800d2"
 HomotopyContinuation = "f213a82b-91d6-5c5d-acf7-10f1c761b327"
+NetworkLayout = "46757867-2c16-5918-afeb-47bfcb05e46a"
 # StructuralIdentifiability = "220ca800-aa68-49bb-acd8-6037fa93a544"
 
 [extensions]
 CatalystBifurcationKitExtension = "BifurcationKit"
 CatalystCairoMakieExtension = "CairoMakie"
-CatalystGraphMakieExtension = "GraphMakie"
+CatalystGraphMakieExtension = ["GraphMakie", "NetworkLayout"]
 CatalystHomotopyContinuationExtension = "HomotopyContinuation"
 # CatalystStructuralIdentifiabilityExtension = "StructuralIdentifiability"
 
@@ -58,6 +59,7 @@ LaTeXStrings = "1.3.0"
 Latexify = "0.16.5"
 MacroTools = "0.5.5"
 ModelingToolkit = "9.32"
+NetworkLayout = "0.4.7"
 Parameters = "0.12"
 Reexport = "0.2, 1.0"
 Requires = "1.0"
```
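For orientation, a brief hedged sketch (not part of the diff) of what the updated `[extensions]` entry means in practice: when an extension lists a vector of weak dependencies, Julia's package extension mechanism loads it only once *all* of them are present in the session.

```julia
using Catalyst        # CatalystGraphMakieExtension not yet loaded
using GraphMakie      # still not loaded (NetworkLayout is now also required)
using NetworkLayout   # both triggers present, so the extension now loads
```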

docs/Project.toml

Lines changed: 2 additions & 4 deletions

```diff
@@ -5,11 +5,9 @@ CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
 Catalyst = "479239e8-5488-4da2-87a7-35f2df7eef83"
 DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 DiffEqBase = "2b5f629d-d688-5b77-993f-72d75c75574e"
-DiffEqParamEstim = "1130ab10-4a5a-5621-a13d-e4788d82bd4c"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 DynamicalSystems = "61744808-ddfa-5f27-97ff-6e42cc95d634"
-GLMakie = "e9467ef8-e4e7-5192-8a1a-b1aee30e663a"
 GlobalSensitivity = "af5da776-676b-467e-8baf-acd8249e4f0f"
 GraphMakie = "1ecd5474-83a3-4783-bb4f-06765db800d2"
 Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
@@ -24,6 +22,7 @@ NonlinearSolve = "8913a72c-1f9b-4ce2-8d82-65094dcecaec"
 Optim = "429524aa-4258-5aef-a3af-852621145aeb"
 Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
 OptimizationBBO = "3e6eede4-6085-4f62-9a71-46d9bc1eb92b"
+OptimizationEvolutionary = "cb963754-43f6-435e-8d4b-99009ff27753"
 OptimizationNLopt = "4e6fcdb7-1186-4e1f-a706-475e75c168bb"
 OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
 OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1"
@@ -51,11 +50,9 @@ CairoMakie = "0.12"
 Catalyst = "14.4"
 DataFrames = "1.6"
 DiffEqBase = "6.159.0"
-DiffEqParamEstim = "2.2"
 Distributions = "0.25"
 Documenter = "1.4.1"
 DynamicalSystems = "3.3"
-GLMakie = "0.10"
 GlobalSensitivity = "2.6"
 GraphMakie = "0.5"
 Graphs = "1.11.1"
@@ -69,6 +66,7 @@ NonlinearSolve = "3.12, 4"
 Optim = "1.9"
 Optimization = "4"
 OptimizationBBO = "0.4"
+OptimizationEvolutionary = "0.4"
 OptimizationNLopt = "0.3"
 OptimizationOptimJL = "0.4"
 OptimizationOptimisers = "0.3"
```
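For orientation: `OptimizationEvolutionary` joins the docs environment, presumably for a parameter-fitting tutorial elsewhere in this commit. A minimal, hedged sketch of how it plugs into the common Optimization.jl interface (the objective is a toy example; the `CMAES` settings follow the package's documented usage):

```julia
using Optimization, OptimizationEvolutionary

# Toy objective: a Rosenbrock-style function of two variables.
objective(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

opt_prob = OptimizationProblem(objective, zeros(2), [1.0, 100.0])
opt_sol = solve(opt_prob, Evolutionary.CMAES(μ = 40, λ = 100))
```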

docs/pages.jl

Lines changed: 1 addition & 1 deletion

```diff
@@ -45,8 +45,8 @@ pages = Any[
         "steady_state_functionality/dynamical_systems.md"
     ],
     "Inverse problems" => Any[
-        "inverse_problems/optimization_ode_param_fitting.md",
         "inverse_problems/petab_ode_param_fitting.md",
+        "inverse_problems/optimization_ode_param_fitting.md",
         "inverse_problems/behaviour_optimisation.md",
         # "inverse_problems/structural_identifiability.md",
         "inverse_problems/global_sensitivity_analysis.md",
```

docs/src/devdocs/dev_guide.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -15,7 +15,7 @@ continuing stability of releases. Before making a release one should
    the artifact in the doc build that the docs actually look ok (since sometimes
    issues can arise that do not lead to actual errors in the doc CI).
 4. Release via the [registration
-   issue](https://github.com/SciML/JumpProcesses.jl/issues/73) with the
+   issue](https://github.com/SciML/Catalyst.jl/issues/127) with the
    command:
 
    ```
````
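For orientation: the fenced block at the end of this hunk is cut off at the hunk boundary. The command posted on such registration issues is the standard JuliaRegistrator trigger, presumably along the lines of:

```
@JuliaRegistrator register
```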

docs/src/inverse_problems/behaviour_optimisation.md

Lines changed: 16 additions & 31 deletions

````diff
@@ -1,5 +1,5 @@
 # [Optimization for non-data fitting purposes](@id behaviour_optimisation)
-In previous tutorials we have described how to use PEtab.jl and [Optimization.jl](@ref optimization_parameter_fitting) for parameter fitting. This involves solving an optimisation problem (to find the parameter set yielding the best model-to-data fit). There are, however, other situations that require solving optimisation problems. Typically, these involve the creation of a custom cost function, which optimum can then be found using Optimization.jl. In this tutorial we will describe this process, demonstrating how parameter space can be searched to find values that achieve a desired system behaviour. A more throughout description on how to solve these problems is provided by [Optimization.jl's documentation](https://docs.sciml.ai/Optimization/stable/) and the literature[^1].
+In previous tutorials we have described how to use [PEtab.jl](@ref petab_parameter_fitting) and [Optimization.jl](@ref optimization_parameter_fitting) for parameter fitting. This involves solving an optimisation problem (to find the parameter set yielding the best model-to-data fit). There are, however, other situations that require solving optimisation problems. Typically, these involve the creation of a custom objective function, whose minimizer can then be found using Optimization.jl. In this tutorial we will describe this process, demonstrating how parameter space can be searched to find values that achieve a desired system behaviour. Many options used here are described in more detail in [the tutorial on using Optimization.jl for parameter fitting](@ref optimization_parameter_fitting). A more thorough description of how to solve these problems is provided by [Optimization.jl's documentation](https://docs.sciml.ai/Optimization/stable/) and the literature[^1].
 
 ## [Maximising the pulse amplitude of an incoherent feed forward loop](@id behaviour_optimisation_IFFL_example)
 Incoherent feedforward loops (network motifs where a single component both activates and deactivates a downstream component) are able to generate pulses in response to step inputs[^2]. In this tutorial we will consider such an incoherent feedforward loop, attempting to generate a system with as prominent a response pulse as possible.
````
````diff
@@ -17,40 +17,42 @@ end
 ```
 To demonstrate this pulsing behaviour we will simulate the system for an example parameter set. We select an initial condition (`u0`) so the system begins in a steady state.
 ```@example behaviour_optimization
-using OrdinaryDiffEqTsit5, Plots
+using OrdinaryDiffEqDefault, Plots
 example_p = [:pX => 0.1, :pY => 1.0, :pZ => 1.0]
 tspan = (0.0, 50.0)
 example_u0 = [:X => 0.1, :Y => 0.1, :Z => 1.0]
 
 oprob = ODEProblem(incoherent_feed_forward, example_u0, tspan, example_p)
-sol = solve(oprob, Tsit5())
+sol = solve(oprob)
 plot(sol)
 ```
 Here we note that, while $X$ and $Y$ reach new steady state levels in response to the increase in $pX$, $Z$ returns to its initial concentration after the pulse.
 
-We will now attempt to find the parameter set $(pX,pY,pZ)$ which maximises the response pulse amplitude (defined by the maximum activity of $Z$ subtracted by its steady state activity). To do this, we create a custom cost function:
+We will now attempt to find the parameter set $(pX,pY,pZ)$ which maximises the response pulse amplitude (defined as the maximum activity of $Z$ minus its steady-state activity). To do this, we create a custom objective function:
 ```@example behaviour_optimization
 function pulse_amplitude(p, _)
-    ps = Dict([:pX => p[1], :pY => p[2], :pZ => p[2]])
-    u0_new = [:X => ps[:pX], :Y => ps[:pX]*ps[:pY], :Z => ps[:pZ]/ps[:pY]^2]
-    oprob_local = remake(oprob; u0= u0_new, p = ps)
-    sol = solve(oprob_local, Tsit5(); verbose = false, maxiters = 10000)
+    p = Dict([:pX => p[1], :pY => p[2], :pZ => p[3]])
+    u0 = [:X => p[:pX], :Y => p[:pX]*p[:pY], :Z => p[:pZ]/p[:pY]^2]
+    oprob_local = remake(oprob; u0, p)
+    sol = solve(oprob_local; verbose = false, maxiters = 10000)
     SciMLBase.successful_retcode(sol) || return Inf
     return -(maximum(sol[:Z]) - sol[:Z][1])
 end
 nothing # hide
 ```
-This cost function takes two arguments (a parameter value `p`, and an additional one which we will ignore here but discuss later). It first calculates the new initial steady state concentration for the given parameter set. Next, it creates an updated `ODEProblem` using the steady state as initial conditions and the, to the cost function provided, input parameter set. While we could create a new `ODEProblem` within the cost function, cost functions are often called a large number of times during the optimisation process (making performance important). Here, using [`remake` on a previously created `ODEProblem`](@ref simulation_structure_interfacing_problems_remake) is more performant than creating a new one. Just like [when using Optimization.jl to fit parameters to data](@ref optimization_parameter_fitting), we use the `verbose = false` option to prevent unnecessary simulation printouts, and a reduced `maxiters` value to reduce time spent simulating (for the model) unsuitable parameter sets. We also use `SciMLBase.successful_retcode(sol)` to check whether the simulation return code indicates a successful simulation (and if it did not, returns a large cost function value). Finally, Optimization.jl finds the function's *minimum value*, so to find the *maximum* relative pulse amplitude, we make our cost function return the negative pulse amplitude.
+This objective function takes two arguments (a parameter value `p`, and an additional argument that we ignore; see the note in [the Optimization.jl tutorial](@ref optimization_parameter_fitting_basics)). It first calculates the new initial steady-state concentration for the given parameter set. Next, it creates an updated `ODEProblem` using that steady state as the initial condition and the input parameter set provided to the objective function. Finally, Optimization.jl finds the function's *minimum value*, so to find the *maximum* relative pulse amplitude, we make our objective function return the negative pulse amplitude.
 
-Just like for [parameter fitting](@ref optimization_parameter_fitting), we create a `OptimizationProblem` using our cost function, and some initial guess of the parameter value. We also set upper and lower bounds for each parameter using the `lb` and `ub` optional arguments (in this case limiting each parameter's value to the interval $(0.1,10.0)$).
+As described [in our tutorial on parameter fitting using Optimization.jl](@ref optimization_parameter_fitting_basics), we use `remake`, `verbose = false`, `maxiters = 10000`, and a check on the simulation's return code, all of which benefit the optimisation procedure (as explained in that tutorial).
+
+Just like for [parameter fitting](@ref optimization_parameter_fitting_basics), we create an `OptimizationProblem` using our objective function and some initial guess of the parameter values. We also [set upper and lower bounds](@ref optimization_parameter_fitting_constraints) for each parameter using the `lb` and `ub` optional arguments (in this case limiting each parameter's value to the interval $(0.1,10.0)$).
 ```@example behaviour_optimization
 using Optimization
 initial_guess = [1.0, 1.0, 1.0]
 opt_prob = OptimizationProblem(pulse_amplitude, initial_guess; lb = [1e-1, 1e-1, 1e-1], ub = [1e1, 1e1, 1e1])
 nothing # hide
 ```
 !!! note
-    As described in a [previous section on Optimization.jl](@ref optimization_parameter_fitting), `OptimizationProblem`s do not support setting parameter values using maps. We must instead set `initial_guess` values using a vector. Next, in the first line of our cost function, we reshape the parameter values to the common form used across Catalyst (e.g. `[:pX => p[1], :pY => p[2], :pZ => p[2]]`, however, here we use a dictionary to easier compute the steady state initial condition). We also note that the order used in this array corresponds to the order we give each parameter's bounds in `lb` and `ub`, and the order in which their values occur in the output solution.
+    As described in a [previous section on Optimization.jl](@ref optimization_parameter_fitting), `OptimizationProblem`s do not support setting parameter values using maps. We must instead set `initial_guess` values using a vector. Next, in the first line of our objective function, we reshape the parameter values into the common form used across Catalyst (e.g. `[:pX => p[1], :pY => p[2], :pZ => p[3]]`; here, however, we use a dictionary to more easily compute the steady-state initial condition). We also note that the order used in this array corresponds to the order in which we give each parameter's bounds in `lb` and `ub`, and the order in which their values occur in the output solution.
 
 As [previously described](@ref optimization_parameter_fitting), Optimization.jl supports a wide range of optimisation algorithms. Here we use one from [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl):
 ```@example behaviour_optimization
````
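(The `solve` call itself falls between this hunk and the next and is therefore not shown. For orientation, a hedged sketch of a typical BlackBoxOptim-based solve; the specific solver named here is an assumption, not read from the file:)

```julia
using OptimizationBBO
# A differentiation-free global optimiser from BlackBoxOptim.jl.
opt_sol = solve(opt_prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
```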
````diff
@@ -63,30 +65,13 @@ Finally, we plot a simulation using the found parameter set (stored in `opt_sol.
 ps_res = Dict([:pX => opt_sol.u[1], :pY => opt_sol.u[2], :pZ => opt_sol.u[3]])
 u0_res = [:X => ps_res[:pX], :Y => ps_res[:pX]*ps_res[:pY], :Z => ps_res[:pZ]/ps_res[:pY]^2]
 oprob_res = remake(oprob; u0 = u0_res, p = ps_res)
-sol_res = solve(oprob_res, Tsit5())
+sol_res = solve(oprob_res)
 plot(sol_res; idxs = :Z)
 ```
 For this model, it turns out that $Z$'s maximum pulse amplitude is equal to twice its steady state concentration. Hence, the maximisation of its pulse amplitude is equivalent to maximising its steady state concentration.
 
-!!! note
-    Especially if you check Optimization.jl's documentation, you will note that cost functions have the `f(u,p)` form. This is because `OptimizationProblem`s (like e.g. `ODEProblem`s) can take both variables (which can be varied in the optimisation problem), but also parameters that are fixed. In our case, the *optimisation variables* correspond to our *model parameters*. Hence, our model parameter values are the `u` input. This is also why we find the optimisation solution (our optimised parameter set) in `opt_sol`'s `u` field. Our optimisation problem does not actually have any parameters, hence, the second argument of `pulse_amplitude` is unused (that is why we call it `_`, a name commonly indicating unused function arguments).
-
-    There are several modifications to our problem where it would actually have parameters. E.g. our model might have had additional parameters (e.g. a degradation rate) which we would like to keep fixed throughout the optimisation process. If we then would like to run the optimisation process for several different values of these fixed parameters, we could have made them parameters to our `OptimizationProblem` (and their values provided as a third argument, after `initial_guess`).
-
-## [Utilising automatic differentiation](@id behaviour_optimisation_AD)
-Optimisation methods can be divided into differentiation-free and differentiation-based optimisation methods. E.g. consider finding the minimum of the function $f(x) = x^2$, given some initial guess of $x$. Here, we can simply compute the differential and descend along it until we find $x=0$ (admittedly, for this simple problem the minimum can be computed directly). This principle forms the basis of optimisation methods such as [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent), which utilises information of a function's differential to minimise it. When attempting to find a global minimum, to avoid getting stuck in local minimums, these methods are often augmented by additional routines. While the differentiation of most algebraic functions is trivial, it turns out that even complicated functions (such as the one we used above) can be differentiated computationally through the use of [*automatic differentiation* (AD)](https://en.wikipedia.org/wiki/Automatic_differentiation).
-
-Through packages such as [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl), [ReverseDiff.jl](https://github.com/JuliaDiff/ReverseDiff.jl), and [Zygote.jl](https://github.com/FluxML/Zygote.jl), Julia supports AD for most code. Specifically for code including simulation of differential equations, differentiation is supported by [SciMLSensitivity.jl](https://github.com/SciML/SciMLSensitivity.jl). Generally, AD can be used without specific knowledge from the user, however, it requires an additional step in the construction of our `OptimizationProblem`. Here, we create a [specialised `OptimizationFunction` from our cost function](https://docs.sciml.ai/Optimization/stable/API/optimization_function/#optfunction). To it, we will also provide our choice of AD method. There are [several alternatives](https://docs.sciml.ai/Optimization/stable/API/optimization_function/#Automatic-Differentiation-Construction-Choice-Recommendations), and in our case we will use `AutoForwardDiff()` (a good choice for small optimisation problems). We can then create a new `OptimizationProblem` using our updated cost function:
-```@example behaviour_optimization
-opt_func = OptimizationFunction(pulse_amplitude, AutoForwardDiff())
-opt_prob = OptimizationProblem(opt_func, initial_guess; lb = [1e-1, 1e-1, 1e-1], ub = [1e1, 1e1, 1e1])
-nothing # hide
-```
-Finally, we can find the optimum using some differentiation-based optimisation methods. Here we will use [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl)'s `BFGS` method:
-```julia
-using OptimizationOptimJL
-opt_sol = solve(opt_prob, OptimizationOptimJL.BFGS())
-```
+## [Other optimisation options](@id behaviour_optimisation_options)
+How to use Optimization.jl is discussed in more detail in [this tutorial](@ref optimization_parameter_fitting). This includes options such as using [automatic differentiation](@ref optimization_parameter_fitting_AD), [setting constraints](@ref optimization_parameter_fitting_constraints), and setting [optimisation solver options](@ref optimization_parameter_fitting_solver_options). Finally, it discusses [carrying out the fitting in logarithmic space](@ref optimization_parameter_fitting_log_scale), which can be advantageous for the problem described above as well.
 
 ---
 ## [Citation](@id structural_identifiability_citation)
````
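For orientation, a minimal sketch of the log-space idea referenced in the new closing paragraph (hedged: the wrapper below is illustrative, not taken from the linked tutorial):

```julia
# Optimise over log10-scaled parameters; exponentiate before simulating.
pulse_amplitude_log(logp, arg) = pulse_amplitude(10 .^ logp, arg)

opt_prob_log = OptimizationProblem(pulse_amplitude_log, log10.([1.0, 1.0, 1.0]);
    lb = fill(-1.0, 3), ub = fill(1.0, 3))
```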
