docs/src/inverse_problems/examples/ode_fitting_oscillation.md: 2 additions, 2 deletions
@@ -1,5 +1,5 @@
# [Fitting Parameters for an Oscillatory System](@id parameter_estimation)
- In this examples we will use [Optimization.jl](https://github.com/SciML/Optimization.jl) to fit the parameters of an oscillatory system (the Brusselator). Here, special consideration is taken to avoid reaching a local minimum. Instead of fitting the entire time series directly, we will start with fitting parameter values for the first period, and then use those as an initial guess for fitting the next. As we complete this procedure (which can be advantageous for oscillatory systems) we reach a global optimum.
+ In this example we will use [Optimization.jl](https://github.com/SciML/Optimization.jl) to fit the parameters of an oscillatory system (the Brusselator) to data. Here, special consideration is taken to avoid reaching a local minimum. Instead of fitting the entire time series directly, we will start with fitting parameter values for the first period, and then use those as an initial guess for fitting the next (and then these to find the next one, and so on). Using this procedure is advantageous for oscillatory systems, and enables us to reach the global optimum.

First, we fetch the required packages.
```@example pe1
@@ -124,7 +124,7 @@ As previously mentioned, the reason we chose to fit the model on a smaller inter
then extend the interval, is to avoid getting stuck in a local minimum. Here
specifically, we chose our initial interval to be smaller than a full cycle of
the oscillation. If we had chosen to fit a parameter set on the full interval
- immediately we would have received an inferior solution.
+ immediately we would have received the wrong solution.
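To make the expanding-window strategy described above concrete, here is a minimal sketch of the loop it implies. It is not the tutorial's actual code: the helper `make_loss(t_end)` (assumed to return a sum-of-squares cost over the data up to time `t_end`), the window lengths, and the initial guess are all placeholders for illustration.

```julia
using Optimization, OptimizationOptimJL

# Assumed helper (not shown here): builds a cost function comparing the simulated
# Brusselator to the measured data on the interval (0, t_end).
# make_loss(t_end) = ...

function fit_in_stages(make_loss, windows, p0)
    p_guess = p0
    for t_end in windows
        optprob = OptimizationProblem(make_loss(t_end), p_guess)
        # The best fit on this (shorter) window seeds the fit on the next, longer one.
        p_guess = solve(optprob, Optim.NelderMead()).u
    end
    return p_guess
end

# p_fitted = fit_in_stages(make_loss, (1.0, 2.0, 4.0, 8.0, 16.0), [1.0, 1.0])
```

Because each window's optimum starts close to the optimum of the next, slightly longer window, this staged fit is far less likely to end up in a local minimum than a single fit over the full time series.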
docs/src/inverse_problems/optimization_ode_param_fitting.md: 6 additions, 6 deletions
@@ -1,11 +1,11 @@
# [Parameter Fitting for ODEs using SciML/Optimization.jl and DiffEqParamEstim.jl](@id optimization_parameter_fitting)
- Fitting parameters to data involves solving an optimisation problem (that is, finding the parameter set that optimally fits you model to your data, typically by minimising a cost function). The SciML ecosystem's primary package for solving optimisation problems is [Optimization.jl](https://github.com/SciML/Optimization.jl). It provides access to a variety of solvers from a single common interface, wrapping a large number of optimisation methods that have been implemented in Julia into a single common interface.
+ Fitting parameters to data involves solving an optimisation problem (that is, finding the parameter set that optimally fits your model to your data, typically by minimising a cost function). The SciML ecosystem's primary package for solving optimisation problems is [Optimization.jl](https://github.com/SciML/Optimization.jl). It provides access to a variety of solvers from a single common interface, wrapping a large number of optimisation methods that have been implemented in Julia into this interface.
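Since that common interface is central to everything below, here is a tiny self-contained illustration of it (the toy cost function and starting point are assumptions unrelated to the tutorial's model): define a cost, wrap it in an `OptimizationProblem`, and hand it to any wrapped solver.

```julia
using Optimization, OptimizationOptimJL

# Minimise a toy cost function through Optimization.jl's common interface.
cost(p, _) = (p[1] - 1.0)^2 + (p[2] + 2.0)^2
prob = OptimizationProblem(cost, [0.0, 0.0])
sol = solve(prob, Optim.NelderMead())
sol.u  # ≈ [1.0, -2.0]
```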
This tutorial both demonstrates how to create parameter fitting cost functions using the [DiffEqParamEstim.jl](https://github.com/SciML/DiffEqParamEstim.jl) package, and how to use Optimization.jl to minimise these. Optimization.jl can also be used in other contexts, such as finding parameter sets that maximise the magnitude of some system behaviour. More details on how to use these packages can be found in their [respective](https://docs.sciml.ai/Optimization/stable/) [documentations](https://docs.sciml.ai/DiffEqParamEstim/stable/).
## Basic example
- Let us consider a simple catalysis network, where an enzyme (*E*) turns a substrate (*S*) into a product (*P*):
+ Let us consider a simple catalysis network, where an enzyme ($E$) turns a substrate ($S$) into a product ($P$):
```@example diffeq_param_estim_1
using Catalyst
rn = @reaction_network begin
@@ -60,9 +60,9 @@ nothing # hide
```
!!! note
- `OptimizationProblem` cannot currently accept parameter values in the form of a map (e.g. `[:kB => 1.0, :kD => 1.0, :kP => 1.0]`). These must be provided as individual values (using the same order as the parameters occur in in the `parameters(rs)` vector).
+ `OptimizationProblem` cannot currently accept parameter values in the form of a map (e.g. `[:kB => 1.0, :kD => 1.0, :kP => 1.0]`). These must be provided as individual values (using the same order as the parameters occur in the `parameters(rs)` vector). Similarly, `build_loss_objective`'s `save_idxs` uses the species index, rather than the species directly. These inconsistencies should be remedied in future package releases.
- Finally, we can optimise `optprob` to find the parameter set that best fits our data. Optimization.jl does not provide any optimisation methods by default. Instead, for each supported optimisation package, it provides a corresponding wrapper-package to import that optimisation package for using with Optimization. E.g., if we wish to [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl)'s [Nelder-Mead](https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method) method, we must install and import the OptimizationOptimJL package. A summary of all, by Optimization.jl, supported optimisation packages can be found [here](https://docs.sciml.ai/Optimization/stable/#Overview-of-the-Optimizers). Here, we import the Optim.jl package and uses it to minimise our cost function (thus finding a parameter set that fits the data):
+ Finally, we can optimise `optprob` to find the parameter set that best fits our data. Optimization.jl only provides a few optimisation methods natively. However, for each supported optimisation package, it provides a corresponding wrapper-package to import that optimisation package for use with Optimization.jl. E.g., if we wish to use [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl)'s [Nelder-Mead](https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method) method, we must install and import the OptimizationOptimJL package. A summary of all optimisation packages supported by Optimization.jl can be found [here](https://docs.sciml.ai/Optimization/stable/#Overview-of-the-Optimizers). Here, we import the Optim.jl package and use it to minimise our cost function (thus finding a parameter set that fits the data):
```@example diffeq_param_estim_1
using OptimizationOptimJL
optsol = solve(optprob, Optim.NelderMead())
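To illustrate the note above, the following is a hedged sketch of how `optprob` is assumed to have been constructed earlier in the tutorial. The `oprob`, `data_ts`, and `data_vals` objects, the `Tsit5()` solver choice, the keyword values, and the guess `[1.0, 1.0, 1.0]` are assumptions; the point is that the initial guess is a plain vector ordered as the parameters appear in `parameters(rn)`, and that `save_idxs` takes an integer species index.

```julia
using OrdinaryDiffEq, DiffEqParamEstim, Optimization

# Assumed to exist from earlier in the tutorial: `oprob` (an ODEProblem of `rn`),
# `data_ts` (measurement times), and `data_vals` (measured values of P).
loss_function = build_loss_objective(oprob, Tsit5(), L2Loss(data_ts, data_vals),
                                     Optimization.AutoForwardDiff();
                                     maxiters = 10000, verbose = false,
                                     save_idxs = 4)  # assumed integer index of P in species(rn), not the species itself

# Guesses are given as a plain vector, in the order kB, kD, kP (as in parameters(rn)):
optprob = OptimizationProblem(loss_function, [1.0, 1.0, 1.0])
```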
@@ -102,7 +102,7 @@ In this case we would have to use the `L2Loss(data_ts, hcat(data_vals_S, data_va
- Here, `Array(hcat(data_vals_S, data_vals_P)')` is required to pu the data in the right form (in this case, a 2x10 matrix).
+ Here, `Array(hcat(data_vals_S, data_vals_P)')` is required to put the data in the right form (in this case, a 2x10 matrix).
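As a quick illustration of that reshaping step (with placeholder vectors standing in for the tutorial's measurements):

```julia
data_vals_S = rand(10)  # placeholder measurements of S
data_vals_P = rand(10)  # placeholder measurements of P

hcat(data_vals_S, data_vals_P)          # 10x2 matrix: one column per species
Array(hcat(data_vals_S, data_vals_P)')  # 2x10 matrix: one row per species, as L2Loss expects
```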
We can now fit our model to data and plot the results:
```@example diffeq_param_estim_1
@@ -128,7 +128,7 @@ If we from previous knowledge know that *kD = 0.1*, and only would like to fit t
fixed_p_prob_generator(prob, p) = remake(prob; p = vcat(p[1], 0.1, p[2]))
nothing # hide
```
- Here, it takes the `ODEProblem` (`prob`) we simulates, and the parameter set used (`p`), during the optimisation process, and creates a modified `ODEProblem` (by setting a customised parameter vector [using `remake`](@ref simulation_structure_interfacing_remake)). Now we create our modified loss function:
+ Here, it takes the `ODEProblem` (`prob`) we simulate, and the parameter set used during the optimisation process (`p`), and creates a modified `ODEProblem` (by setting a customised parameter vector [using `remake`](@ref simulation_structure_interfacing_remake)). Now we create our modified loss function:
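A hedged sketch of how that generator could then be plugged in (again, `oprob`, `data_ts`, `data_vals`, and the other keyword values are assumed from earlier, and the result names are illustrative; `prob_generator` is DiffEqParamEstim's keyword for supplying a custom problem constructor, and the two-element guess now covers only kB and kP):

```julia
using OrdinaryDiffEq, DiffEqParamEstim, Optimization, OptimizationOptimJL

# `fixed_p_prob_generator` is the function defined in the code block above (kD fixed at 0.1).
loss_function_fixed_kD = build_loss_objective(oprob, Tsit5(), L2Loss(data_ts, data_vals),
                                              Optimization.AutoForwardDiff();
                                              prob_generator = fixed_p_prob_generator,
                                              maxiters = 10000, verbose = false,
                                              save_idxs = 4)

# Only two values are optimised; kD is inserted by the generator.
optprob_fixed_kD = OptimizationProblem(loss_function_fixed_kD, [1.0, 1.0])
optsol_fixed_kD = solve(optprob_fixed_kD, Optim.NelderMead())
```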