Commit 0251301 ("up")
1 parent bfb83f6

3 files changed (+5 additions, -6 deletions)

docs/pages.jl (1 addition, 2 deletions)

```diff
@@ -16,10 +16,9 @@ pages = Any["Home" => "index.md",
              "catalyst_applications/homotopy_continuation.md",
              "catalyst_applications/nonlinear_solve.md",
              "catalyst_applications/bifurcation_diagrams.md"],
-         "inverse_problems/petab_ode_param_fitting.md",
-
          "Inverse Problems" => Any["inverse_problems/optimization_ode_param_fitting.md",
              "inverse_problems/petab_ode_param_fitting.md",
              "inverse_problems/structural_identifiability.md",
              "Inverse problem examples" => Any["inverse_problems/examples/ode_fitting_oscillation.md"]],
+         "FAQs" => "faqs.md",
          "API" => "api/catalyst_api.md"]
```

docs/src/inverse_problems/examples/ode_fitting_oscillation.md (1 addition, 1 deletion)

````diff
@@ -124,7 +124,7 @@ As previously mentioned, the reason we chose to fit the model on a smaller interval
 then extend the interval, is to avoid getting stuck in a local minimum. Here
 specifically, we chose our initial interval to be smaller than a full cycle of
 the oscillation. If we had chosen to fit a parameter set on the full interval
-immediately we would have obtained a poorer fit and less accurate estimate for the parameters.
+immediately, we would have obtained a poor fit and an inaccurate estimate for the parameters.
 ```@example pe1
 p_estimate = optimise_p([5.0,5.0], 30.0)
````

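The strategy this hunk describes (fit on a short window first, then extend the interval) can be sketched generically. Everything below is a hypothetical stand-in, not the tutorial's `optimise_p` or its model: we recover the frequency of a sinusoid with a naive grid-search least-squares fit, first broadly on a short window, then refined on the full window near the short-window estimate.

```julia
# Estimate the frequency w of y(t) = sin(w * t): broad search on a short
# window, then a refined search on the full window near that estimate.
target_w = 2.0
y(t) = sin(target_w * t)

# Naive grid-search least squares over candidate frequencies `ws`.
function fit_w(tmax, ws)
    ts = range(0.0, tmax; length = 200)
    loss(w) = sum(abs2, sin.(w .* ts) .- y.(ts))
    argmin(loss, ws)  # grid point with the smallest squared error
end

w_short = fit_w(1.0, 0.1:0.01:5.0)                            # short window, coarse grid
w_full  = fit_w(10.0, (w_short - 0.5):0.005:(w_short + 0.5))  # full window, fine grid
```

Starting the full-window search near `w_short` plays the same role as passing the short-interval estimate as the initial guess when the tutorial extends the fitting interval.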
docs/src/inverse_problems/optimization_ode_param_fitting.md (3 additions, 3 deletions)

````diff
@@ -1,7 +1,7 @@
 # [Parameter Fitting for ODEs using SciML/Optimization.jl and DiffEqParamEstim.jl](@id optimization_parameter_fitting)
 Fitting parameters to data involves solving an optimisation problem (that is, finding the parameter set that optimally fits your model to your data, typically by minimising a cost function). The SciML ecosystem's primary package for solving optimisation problems is [Optimization.jl](https://github.com/SciML/Optimization.jl). It provides access to a variety of solvers via a single common interface by wrapping a large number of optimisation libraries that have been implemented in Julia.
 
-This tutorial demonstrates both how to create parameter fitting cost functions using the [DiffEqParamEstim.jl](https://github.com/SciML/DiffEqParamEstim.jl) package, and how to use Optimization.jl to minimise these. Optimization.jl can also be used in other contexts, such as finding parameter sets that maximise the magnitude of some system behavior. More details on how to use these packages can be found in their [respective](https://docs.sciml.ai/Optimization/stable/) [documentations](https://docs.sciml.ai/DiffEqParamEstim/stable/).
+This tutorial demonstrates both how to create parameter fitting cost functions using the [DiffEqParamEstim.jl](https://github.com/SciML/DiffEqParamEstim.jl) package, and how to use Optimization.jl to minimise these. Optimization.jl can also be used in other contexts, such as finding parameter sets that maximise the magnitude of some system behaviour. More details on how to use these packages can be found in their [respective](https://docs.sciml.ai/Optimization/stable/) [documentations](https://docs.sciml.ai/DiffEqParamEstim/stable/).
 
 ## Basic example
 
@@ -124,7 +124,7 @@ nothing # hide
 In addition to boundaries, Optimization.jl also supports setting [linear and non-linear constraints](https://docs.sciml.ai/Optimization/stable/tutorials/constraints/#constraints) on its output solution for some optimizers.
 
 ## Parameter fitting with known parameters
-If from previous knowledge we know that $kD = 0.1$, and only want to fit the values of $kD$ and $kP$, this can be achieved through `build_loss_objective`'s `prob_generator` argument. First, we create a function (`fixed_p_prob_generator`) that modifies our `ODEProblem` to incorporate this knowledge:
+If from previous knowledge we know that $kD = 0.1$, and only want to fit the values of $kB$ and $kP$, this can be achieved through `build_loss_objective`'s `prob_generator` argument. First, we create a function (`fixed_p_prob_generator`) that modifies our `ODEProblem` to incorporate this knowledge:
 ```@example diffeq_param_estim_1
 fixed_p_prob_generator(prob, p) = remake(prob; p = vcat(p[1], 0.1, p[2]))
 nothing # hide
@@ -135,7 +135,7 @@ loss_function_fixed_kD = build_loss_objective(oprob, Tsit5(), L2Loss(data_ts, da
 nothing # hide
 ```
 
-We can create an optimisation problem from this one like previously, but keep in mind that it (and its output results) only contains two parameter values (*kB* and *kP*):
+We can create an optimisation problem from this one like previously, but keep in mind that it (and its output results) only contains two parameter values ($kB$ and $kP$):
 ```@example diffeq_param_estim_1
 optprob_fixed_kD = OptimizationProblem(loss_function_fixed_kD, [1.0, 1.0])
 optsol_fixed_kD = solve(optprob_fixed_kD, Optim.NelderMead())
````

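The `prob_generator` pattern in the hunks above works by splicing the known value into the full parameter vector before each cost evaluation, so the optimiser only ever sees the free parameters. A minimal standalone sketch of that mechanism, using a hypothetical one-state ODE in place of the tutorial's model (`f`, the initial condition, and the parameter values here are illustrative, not from the docs):

```julia
using OrdinaryDiffEq  # re-exports ODEProblem and remake from SciMLBase

# Hypothetical three-parameter model; the middle parameter is the one we
# pretend to know in advance (kD = 0.1 in the tutorial's notation).
f(u, p, t) = p[1] * u - p[2] * u + p[3]
prob = ODEProblem(f, 1.0, (0.0, 1.0), [1.0, 1.0, 1.0])

# Same shape as the commit's fixed_p_prob_generator: take the two free
# parameter values and splice the fixed value 0.1 between them.
fixed_p_prob_generator(prob, p) = remake(prob; p = vcat(p[1], 0.1, p[2]))

newprob = fixed_p_prob_generator(prob, [2.0, 3.0])
newprob.p  # full parameter vector with the fixed value in the middle
```

Passing such a function as `build_loss_objective`'s `prob_generator` keyword means the resulting loss (and the optimisation problem built from it) operates on the reduced two-element parameter vector.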