
Commit 3b91b28 (commit message: "up")
1 parent: b0a00f8

9 files changed: 35 additions, 31 deletions

9 files changed

+35
-31
lines changed

docs/Project.toml

Lines changed: 6 additions & 0 deletions

@@ -29,6 +29,7 @@ QuasiMonteCarlo = "8a4e6c94-4038-4cdc-81c3-7e6ffdb2a71b"
 SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462"
 SciMLSensitivity = "1ed8b502-d754-442c-8d5d-10ac956f44a1"
 SpecialFunctions = "276daf66-3868-5448-9aa4-cd146d93841b"
+StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
 SteadyStateDiffEq = "9672c7b4-1e72-59bd-8a11-6ac3964bc41f"
 StochasticDiffEq = "789caeaf-c7a9-5a7d-9973-96adeb23e2a0"
 StructuralIdentifiability = "220ca800-aa68-49bb-acd8-6037fa93a544"

@@ -54,12 +55,17 @@ ModelingToolkit = "9.15"
 NonlinearSolve = "3.12"
 Optim = "1.9"
 Optimization = "3.25"
+OptimizationBBO = "0.2.1"
+OptimizationNLopt = "0.2.1"
+OptimizationOptimJL = "0.3.1"
+OptimizationOptimisers = "0.2.1"
 OrdinaryDiffEq = "6.80.1"
 Plots = "1.40"
 QuasiMonteCarlo = "0.3"
 SciMLBase = "2.39"
 SciMLSensitivity = "7.60"
 SpecialFunctions = "2.4"
+StaticArrays = "1.9"
 SteadyStateDiffEq = "2.2"
 StochasticDiffEq = "6.65"
 StructuralIdentifiability = "0.5.7"

docs/pages.jl

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@ pages = Any[
     "Home" => "home.md",
     "Introduction to Catalyst" => Any[
         "introduction_to_catalyst/catalyst_for_new_julia_users.md",
-        "introduction_to_catalyst/introduction_to_catalyst.md"
+        # "introduction_to_catalyst/introduction_to_catalyst.md"
         # Advanced introduction.
     ],
     "Model Creation and Properties" => Any[

docs/src/api.md

Lines changed: 0 additions & 3 deletions

@@ -152,16 +152,13 @@ accessor functions.
 ```@docs
 species
 nonspecies
-reactionsystemparams
 reactions
 nonreactions
 numspecies
 numparams
 numreactions
-numreactionsystemparams
 speciesmap
 paramsmap
-reactionsystemparamsmap
 isspecies
 isautonomous
 Catalyst.isconstant
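The accessor functions remaining in this `@docs` block can be exercised on any `ReactionSystem`; a minimal sketch (the two-species reversible reaction is illustrative and not part of this commit):

```julia
# Sketch: Catalyst's ReactionSystem accessor functions on a toy network.
using Catalyst

rn = @reaction_network begin
    (k1, k2), A <--> B
end

species(rn)      # the network's symbolic species (A and B)
numspecies(rn)   # 2
numreactions(rn) # 2 (the reversible arrow expands to two reactions)
speciesmap(rn)   # Dict mapping each species to its index
```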

docs/src/inverse_problems/global_sensitivity_analysis.md

Lines changed: 2 additions & 2 deletions

@@ -11,7 +11,7 @@ A related concept to global sensitivity is *local sensitivity*. This, rather tha
 While local sensitivities are primarily used as a subroutine of other methodologies (such as optimisation schemes), local sensitivity analysis also has direct uses. E.g., in the context of fitting parameters to data, it can be used to [determine the cost function's sensitivity to the system parameters](@ref ref) at the parameter set of the optimal fit.
 
 ## [Basic example](@id global_sensitivity_analysis_basic_example)
-We will consider a simple [SEIR model of an infectious disease](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology). This is an expansion of the classic [SIR model](@ref ref) with an additional *exposed* state, $E$, denoting individuals who are latently infected but currently unable to transmit their infection to others.
+We will consider a simple [SEIR model of an infectious disease](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology). This is an expansion of the classic [SIR model](@ref basic_CRN_library_sir) with an additional *exposed* state, $E$, denoting individuals who are latently infected but currently unable to transmit their infection to others.
 ```@example gsa_1
 using Catalyst
 seir_model = @reaction_network begin

@@ -53,7 +53,7 @@ on the domain $10^β ∈ (-3.0,-1.0)$, $10^a ∈ (-2.0,0.0)$, $10^γ ∈ (-2.0,0
 !!! note
     We should make a couple of notes about the example above:
     - Here, we write our parameters in the forms $10^β$, $10^a$, and $10^γ$, which transforms them into log-space. As [previously described](@ref optimization_parameter_fitting_logarithmic_scale), this is advantageous in the context of inverse problems such as this one.
-    - For GSA, where a function is evaluated a large number of times, it should be written to be as performant as possible. Hence, we initially create a base `ODEProblem`, and then apply the [`remake`](@ref ref) function to it in each evaluation of `peak_cases` to generate a problem which is solved for that specific parameter set.
+    - For GSA, where a function is evaluated a large number of times, it should be written to be as performant as possible. Hence, we initially create a base `ODEProblem`, and then apply the [`remake`](@ref simulation_structure_interfacing_problems_remake) function to it in each evaluation of `peak_cases` to generate a problem which is solved for that specific parameter set.
     - Again, as [previously described in other inverse problem tutorials](@ref optimization_parameter_fitting_basics), when exploring a function over large parameter spaces, we will likely simulate our model for unsuitable parameter sets. To reduce time spent on these, and to avoid excessive warning messages, we provide the `maxiters = 100000` and `verbose = false` arguments to `solve`.
     - As we have encountered in [a few other cases](@ref optimization_parameter_fitting_basics), the `gsa` function cannot take parameter inputs in the map form usually used for Catalyst. Hence, as a first step in `peak_cases`, we convert the parameter vector to this form. The order of the parameters when we e.g. evaluate the GSA output, or set the parameter bounds, corresponds to the order used in `ps = [:β => p[1], :a => p[2], :γ => p[3]]`.
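The `remake`-inside-the-objective pattern discussed in this file's note can be sketched as follows. This is a minimal illustration only: the parameter names `β`, `a`, `γ` and the objective name `peak_cases` come from the text, but the SEIR reaction structure, rates, and initial condition here are assumed stand-ins, not the tutorial's actual code.

```julia
# Sketch: reusing one base ODEProblem inside a GSA objective via remake.
using Catalyst, OrdinaryDiffEq

seir = @reaction_network begin
    10^β, S + I --> E + I  # exposure (rates in log10-space, per the tutorial)
    10^a, E --> I          # activation
    10^γ, I --> R          # recovery
end
u0 = [:S => 999.0, :E => 0.0, :I => 1.0, :R => 0.0]
p0 = [:β => -2.0, :a => -1.0, :γ => -1.0]
oprob_base = ODEProblem(seir, u0, (0.0, 100.0), p0)

# Evaluated many times by `gsa`: remake the cheap problem, never rebuild the model.
function peak_cases(p)
    ps = [:β => p[1], :a => p[2], :γ => p[3]]
    sol = solve(remake(oprob_base; p = ps), Tsit5(); maxiters = 100000, verbose = false)
    return maximum(sol[:I])
end
```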

docs/src/inverse_problems/optimization_ode_param_fitting.md

Lines changed: 14 additions & 14 deletions

@@ -1,11 +1,11 @@
-# [Parameter Fitting for ODEs using SciML/Optimization.jl and DiffEqParamEstim.jl](@id optimization_parameter_fitting)
+# [Parameter Fitting for ODEs using Optimization.jl and DiffEqParamEstim.jl](@id optimization_parameter_fitting)
 Fitting parameters to data involves solving an optimisation problem (that is, finding the parameter set that optimally fits your model to your data, typically by minimising a cost function). The SciML ecosystem's primary package for solving optimisation problems is [Optimization.jl](https://github.com/SciML/Optimization.jl). It provides access to a variety of solvers via a single common interface by wrapping a large number of optimisation libraries that have been implemented in Julia.
 
-This tutorial demonstrates both how to create parameter fitting cost functions using the [DiffEqParamEstim.jl](https://github.com/SciML/DiffEqParamEstim.jl) package, and how to use Optimization.jl to minimise these. Optimization.jl can also be used in other contexts, such as [finding parameter sets that maximise the magnitude of some system behaviour](@ref ref). More details on how to use these packages can be found in their [respective](https://docs.sciml.ai/Optimization/stable/) [documentations](https://docs.sciml.ai/DiffEqParamEstim/stable/).
+This tutorial demonstrates both how to create parameter fitting cost functions using the [DiffEqParamEstim.jl](https://github.com/SciML/DiffEqParamEstim.jl) package, and how to use Optimization.jl to minimise these. Optimization.jl can also be used in other contexts, such as [finding parameter sets that maximise the magnitude of some system behaviour](@ref behaviour_optimisation). More details on how to use these packages can be found in their [respective](https://docs.sciml.ai/Optimization/stable/) [documentations](https://docs.sciml.ai/DiffEqParamEstim/stable/).
 
 ## [Basic example](@id optimization_parameter_fitting_basics)
 
-Let us consider a [Michaelis-Menten enzyme kinetics model](@ref ref), where an enzyme ($E$) converts a substrate ($S$) into a product ($P$):
+Let us consider a [Michaelis-Menten enzyme kinetics model](@ref basic_CRN_library_mm), where an enzyme ($E$) converts a substrate ($S$) into a product ($P$):
 ```@example diffeq_param_estim_1
 using Catalyst
 rn = @reaction_network begin

@@ -30,8 +30,8 @@ data_vals = (0.8 .+ 0.4*rand(10)) .* data_sol[:P][2:end]
 
 # Plots the true solutions and the (synthetic) data measurements.
 using Plots
-plot(true_sol; idxs=:P, label="True solution", lw=8)
-plot!(data_ts, data_vals; label="Measurements", seriestype=:scatter, ms=6, color=:blue)
+plot(true_sol; idxs = :P, label = "True solution", lw = 8)
+plot!(data_ts, data_vals; label = "Measurements", seriestype = :scatter, ms = 6, color = :blue)
 ```
 
 Next, we will use DiffEqParamEstim to build a loss function to measure how well our model's solutions fit the data.

@@ -45,7 +45,7 @@ nothing # hide
 ```
 To `build_loss_objective` we provide the following arguments:
 - `oprob`: The `ODEProblem` with which we simulate our model (using some dummy parameter values, since we do not know these).
-- `Tsit5()`: The [numeric integrator](@ref ref) we wish to simulate our model with.
+- `Tsit5()`: The [numeric solver](@ref simulation_intro_solver_options) we wish to simulate our model with.
 - `L2Loss(data_ts, data_vals)`: Defines the loss function. While [other alternatives](https://docs.sciml.ai/DiffEqParamEstim/stable/getting_started/#Alternative-Cost-Functions-for-Increased-Robustness) are available, `L2Loss` is the simplest one (measuring the sum of squared distances between model simulations and data measurements). Its first argument is the time points at which the data is collected, and the second is the data's values.
 - `Optimization.AutoForwardDiff()`: Our choice of [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) framework.

@@ -63,27 +63,27 @@ nothing # hide
 !!! note
     `OptimizationProblem` cannot currently accept parameter values in the form of a map (e.g. `[:kB => 1.0, :kD => 1.0, :kP => 1.0]`). These must be provided as individual values (using the same order as the parameters occur in the `parameters(rs)` vector). Similarly, `build_loss_objective`'s `save_idxs` uses the species' indexes, rather than the species directly. These inconsistencies should be remedied in future DiffEqParamEstim releases.
 
-Finally, we can optimise `optprob` to find the parameter set that best fits our data. Optimization.jl only provides a few optimisation methods natively. However, for each supported optimisation package, it provides a corresponding wrapper-package to import that optimisation package for use with Optimization.jl. E.g., if we wish to use [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl)'s [Nelder-Mead](https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method) method, we must install and import the OptimizationOptimJL package. A summary of all optimisation packages supported by Optimization.jl can be found [here](https://docs.sciml.ai/Optimization/stable/#Overview-of-the-Optimizers). Here, we import the Optim.jl package and use it to minimise our cost function (thus finding a parameter set that fits the data):
+Finally, we can optimise `optprob` to find the parameter set that best fits our data. Optimization.jl only provides a few optimisation methods natively. However, for each supported optimisation package, it provides a corresponding wrapper-package to import that optimisation package for use with Optimization.jl. E.g., if we wish to use [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl)'s [Nelder-Mead](https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method) method, we must install and import the OptimizationNLopt package. A summary of all optimisation packages supported by Optimization.jl can be found [here](https://docs.sciml.ai/Optimization/stable/#Overview-of-the-Optimizers). Here, we import the NLopt.jl package and use it to minimise our cost function (thus finding a parameter set that fits the data):
 ```@example diffeq_param_estim_1
-using OptimizationOptimJL
-optsol = solve(optprob, Optim.NelderMead())
+using OptimizationNLopt
+optsol = solve(optprob, NLopt.LN_NELDERMEAD())
 nothing # hide
 ```
 
 We can now simulate our model for the corresponding parameter set, checking that it fits our data.
 ```@example diffeq_param_estim_1
-oprob_fitted = remake(oprob; p=optsol.u)
+oprob_fitted = remake(oprob; p = optsol.u)
 fitted_sol = solve(oprob_fitted, Tsit5())
-plot!(fitted_sol; idxs=:P, label="Fitted solution", linestyle=:dash, lw=6, color=:lightblue)
+plot!(fitted_sol; idxs = :P, label = "Fitted solution", linestyle = :dash, lw = 6, color = :lightblue)
 ```
 
 !!! note
     Here, a good exercise is to check the resulting parameter set and note that, while it creates a good fit to the data, it does not actually correspond to the original parameter set. [Identifiability](@ref structural_identifiability) is a concept that studies how to deal with this problem.
 
-Say that we instead would like to use the [Broyden–Fletcher–Goldfarb–Shanno](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm) algorithm, as implemented by the [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) package. In this case we would run:
+Say that we instead would like to use the [Broyden–Fletcher–Goldfarb–Shanno](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm) algorithm, as implemented by the [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) package. In this case we would run:
 ```@example diffeq_param_estim_1
-using OptimizationNLopt
-sol = solve(optprob, NLopt.LD_LBFGS())
+using OptimizationOptimJL
+sol = solve(optprob, Optim.LBFGS())
 nothing # hide
 ```
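The solver swap this file performs (NLopt's Nelder-Mead for the main example, Optim's LBFGS for the alternative) relies on Optimization.jl's common interface: the same problem object is passed to any wrapped backend. A minimal sketch, assuming the tutorial's `optprob` is in scope:

```julia
# Sketch: one OptimizationProblem, several wrapped optimiser backends.
# Assumes `optprob` has been constructed as in the tutorial above.
using OptimizationNLopt, OptimizationOptimJL

optsol_nm   = solve(optprob, NLopt.LN_NELDERMEAD())  # derivative-free simplex search
optsol_bfgs = solve(optprob, Optim.LBFGS())          # quasi-Newton; uses the AutoForwardDiff gradient
optsol_nm.u                                          # fitted parameters, ordered as in parameters(rn)
```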

docs/src/steady_state_functionality/dynamical_systems.md

Lines changed: 1 addition & 1 deletion

@@ -54,7 +54,7 @@ More information on how to compute basins of attractions for ODEs using Dynamica
 
 While Lyapunov exponents can be used for other purposes, they are primarily used to characterise [*chaotic behaviours*](https://en.wikipedia.org/wiki/Chaos_theory) (where small changes in initial conditions have large effects on the resulting trajectories). Generally, an ODE exhibits chaotic behaviour if its attractor(s) have *at least one* positive Lyapunov exponent. Practically, Lyapunov exponents can be computed using DynamicalSystems.jl's `lyapunovspectrum` function. Here we will use it to investigate two models, one which exhibits chaos and one which does not.
 
-First, let us consider the [Willamowski–Rössler model](@ref ref), which is known to exhibit chaotic behaviour.
+First, let us consider the [Willamowski–Rössler model](@ref basic_CRN_library_wr), which is known to exhibit chaotic behaviour.
 ```@example dynamical_systems_lyapunov
 using Catalyst
 wr_model = @reaction_network begin
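The `lyapunovspectrum` workflow described above can be sketched as follows. This is illustrative only: the model here is the Brusselator (oscillatory, not chaotic), chosen as a compact stand-in for the Willamowski–Rössler network, and the `CoupledODEs(::ODEProblem)` constructor is assumed from DynamicalSystems.jl v3.

```julia
# Sketch: computing a Lyapunov spectrum for a Catalyst model with DynamicalSystems.jl.
using Catalyst, OrdinaryDiffEq, DynamicalSystems

brusselator = @reaction_network begin
    A, ∅ --> X
    1, 2X + Y --> 3X
    B, X --> Y
    1, X --> ∅
end
u0 = [:X => 1.0, :Y => 1.0]
ps = [:A => 1.0, :B => 4.0]
oprob = ODEProblem(brusselator, u0, (0.0, 100.0), ps)

ds = CoupledODEs(oprob)            # wrap the ODE as a dynamical system
λs = lyapunovspectrum(ds, 10_000)  # approximate the spectrum over 10_000 steps
any(>(0), λs)                      # a positive exponent would indicate chaos
```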

docs/src/steady_state_functionality/homotopy_continuation.md

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ integer Hill exponents). The roots of these can reliably be found through a
 Catalyst contains a special homotopy continuation extension that is loaded whenever HomotopyContinuation.jl is. This exports a single function, `hc_steady_states`, that can be used to find the steady states of any `ReactionSystem` structure.
 
-For this tutorial, we will use the [Wilhelm model](@ref ref) (which demonstrates bistability in a small chemical reaction network). We declare the model and the parameter set for which we want to find the steady states:
+For this tutorial, we will use the [Wilhelm model](@ref basic_CRN_library_wilhelm) (which demonstrates bistability in a small chemical reaction network). We declare the model and the parameter set for which we want to find the steady states:
 ```@example hc_basics
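The `hc_steady_states` usage described above can be sketched as follows. The network and parameter values are illustrative stand-ins for the tutorial's Wilhelm model, not copied from this commit:

```julia
# Sketch: finding steady states via Catalyst's HomotopyContinuation extension.
using Catalyst
import HomotopyContinuation  # loading this package activates the extension

wilhelm_like = @reaction_network begin
    k1, Y --> 2X
    k2, 2X --> X + Y
    k3, X + Y --> Y
    k4, X --> 0
end
ps = [:k1 => 8.0, :k2 => 2.0, :k3 => 1.0, :k4 => 1.5]
steady_states = hc_steady_states(wilhelm_like, ps)  # vector of steady-state species values
```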
