The uncertainty in the solution can be reduced when data are available. The data can come in either batch or incremental form:
- batch form: we have a set of data and we look for their explanation
- incremental: common e.g. in temporal evolution, when the data are measured on the fly and systems can change in time (stochastic ODE)
- if done right, the incremental solution also solves the batch problem.
## Fitting ODE solution to data
Since the ODE solver is a function like any other, it is possible to use general-purpose optimizers to optimize parameters of the ODE to match the output.
```julia
using Optim
```
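A minimal sketch of how such a loss and the optimizer call could look. The signature ``loss(θin, prob::ODEProblem, Y)`` appears in the lecture; the body, the solver choice, and the names ``tsteps`` (measurement times) and ``θ₀`` (initial guess) are assumptions here.

```julia
using OrdinaryDiffEq, Optim

function loss(θin, prob::ODEProblem, Y)
    # re-solve the ODE with candidate parameters θin; u0 is converted so that
    # dual numbers from automatic differentiation can pass through the solver
    _prob = remake(prob; u0 = eltype(θin).(prob.u0), p = θin)
    sol = solve(_prob, Tsit5(); saveat = tsteps)   # tsteps: measurement times (assumed)
    sum(abs2, Array(sol) .- Y)                     # squared error against the data Y
end

res = Optim.optimize(θ -> loss(θ, prob, Y), θ₀, NelderMead())
```

A derivative-free method (Nelder-Mead) is used in the sketch so that it runs without any gradient code; gradient-based optimizers rely on one of the approaches in the bullets below.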
- using the power of automatic differentiation (of the numerical solver); see the gradient sketch below
- in the case of ODE, the gradients can be modified to use the information about exact derivatives (adjoints)
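For illustration, a forward-mode gradient of the loss sketched above can be taken directly through the numerical solver; ForwardDiff is an assumption here, the lecture may rely on a different AD or adjoint package.

```julia
using ForwardDiff

# forward-mode AD pushes dual numbers through every solver step;
# adjoint (reverse-mode) sensitivities scale better for many parameters
∇loss = ForwardDiff.gradient(θ -> loss(θ, prob, Y), θ₀)
```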
## Extending the ODE
The previous approach will work only if the data were generated by the exact ODE. If the structure of the ODE is different, e.g. some terms are missing, we can never find an exact fit.
```math
\begin{align}
\dot{x}&=\alpha x-\beta xy,\\
\dot{y}&=-\delta y+\gamma xy,
\end{align}
```
We could "guess" what is the missing term or add a black box (neural network). The whole problem will become finding parameters ``\theta = [\theta_{ODE},\theta_{NN}]``.
Here, ``h()`` is a function transforming the ODE solution to observations (e.g. identity, or a selection of the relevant observations).
Worked out in the lab.
Can be combined with Neural ODE.
So far, we have seen optimizations of the ODEs in the form of a point estimate. We may need to quantify the uncertainty of the solution when:
- the measurements are uncertain, with possibly large errors
- the number of measurements is insufficient to fit the model.
Consider the Monte Carlo simulation from the previous lecture, extended to the unknown parameter:
```julia
K=100
X0 = [x0 .+0.1*randn(2) for k=1:K]
```
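A sketch of the extension, continuing the block above; ``θ`` (nominal parameter values), ``prob``, and ``tsteps`` are assumed from earlier in the lecture, and perturbing the parameters with the same 0.1 standard deviation is an assumption.

```julia
using OrdinaryDiffEq

Θ = [θ .+ 0.1*randn(length(θ)) for k = 1:K]      # draws of the unknown parameter
sols = [solve(remake(prob; u0 = X0[k], p = Θ[k]), Tsit5(); saveat = tsteps) for k = 1:K]
```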
The point estimate is the trajectory with the thick color.
- it is the one with minimum error
- is it really the solution?
Let's select all trajectories within a selected tolerance:
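One way the selection could be written, continuing the sketch above; the squared-error criterion against the data ``Y`` and the tolerance of 1.2× the best error are assumptions.

```julia
err(sol) = sum(abs2, Array(sol) .- Y)       # distance of a trajectory from the data
tol = 1.2 * minimum(err.(sols))             # tolerance relative to the best (point-estimate) trajectory
accepted = [s for s in sols if err(s) <= tol]
```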
When the data are collected sequentially, the process of reduction of the uncertainty has two steps:
1. prediction - use ODE with uncertainty propagation to the next step,
2. correction - use the acquired measurement to reduce the uncertainty
In mathematics, it is a direct application of the Bayes rule:
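As a sketch in standard filtering notation (``x_t`` for the state and ``y_{1:t}`` for the measurements up to time ``t`` are notational assumptions here), the two steps read:

```math
\begin{align}
\text{prediction:}\quad & p(x_t|y_{1:t-1}) = \int p(x_t|x_{t-1})\,p(x_{t-1}|y_{1:t-1})\,\mathrm{d}x_{t-1},\\
\text{correction:}\quad & p(x_t|y_{1:t}) \propto p(y_t|x_t)\,p(x_t|y_{1:t-1}).
\end{align}
```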
- An implementation of the whole procedure can be written at a general level using types for probability distributions and operations on them (a minimal sketch follows after this list).
- How exactly these steps are implemented depends on the assumptions made about the type of model uncertainty (initial conditions, parameters, noise) and the measurement uncertainty (noise).
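A minimal illustration of that idea for the Gaussian case; this is not the lecture's API, and the linear observation matrix ``H`` and scalar noise ``σy`` are assumptions (the prediction step would be the uncertainty propagation from the previous lecture).

```julia
using LinearAlgebra

struct Gauss
    μ::Vector{Float64}
    Σ::Matrix{Float64}
end

# correction step: condition the prior on a measurement y observed through H with noise σy
function correct(p::Gauss, y, H, σy)
    S = H*p.Σ*H' + σy^2*I          # innovation covariance
    K = p.Σ*H'/S                   # Kalman gain
    Gauss(p.μ + K*(y - H*p.μ), (I - K*H)*p.Σ)
end
```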
We have already done propagation of the Gaussian uncertainty through an ODE (GaussNum, Cubature rules). We will complement it with the correction step here.
- marginal distributions are unaffected by the correlation
- the correlation determines the reduction of uncertainty in the conditional case (see the conditional Gaussian formula below)
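Both points follow from conditioning in a joint Gaussian; a sketch in generic block notation (the symbols below are assumptions, not necessarily the lecture's):

```math
\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix}
\sim \mathcal{N}\!\left(
\begin{bmatrix} \mu_x \\ \mu_y \end{bmatrix},
\begin{bmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{bmatrix}
\right)
\quad\Rightarrow\quad
\mathbf{x}|\mathbf{y} \sim \mathcal{N}\!\left(
\mu_x + \Sigma_{xy}\Sigma_{yy}^{-1}(\mathbf{y}-\mu_y),\;
\Sigma_{xx} - \Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{yx}
\right)
```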
We have uncertainty in all our unknowns ``p(\mathbf{x})`` in the form of quadrature points. We assume that the observation probability ``p(\mathbf{y}|\mathbf{x})`` has mean given by ``\mathbf{x}`` and variance ``\sigma_y``.
Hence, the means can be obtained from the empirical samples of the cubature points ``X_p`` and the measurements corresponding to the cubature points.
```math
\hat{\mu}_x = \frac{1}{K}\sum_{k=1}^{K} X_p^{(k)},\qquad
\hat{\mu}_y = \frac{1}{K}\sum_{k=1}^{K} h\!\left(X_p^{(k)}\right)
```

The covariance matrices can be obtained by empirical samples:
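A sketch of what these empirical estimates could look like; equal cubature weights are assumed, ``K`` here denotes the number of cubature points (not the Monte Carlo count above), and the added measurement-noise term ``\sigma_y^2 I`` in ``\hat\Sigma_{yy}`` is an assumption.

```math
\begin{align}
\hat{\Sigma}_{xx} &= \frac{1}{K}\sum_{k=1}^{K}\left(X_p^{(k)}-\hat{\mu}_x\right)\left(X_p^{(k)}-\hat{\mu}_x\right)^{\top},\\
\hat{\Sigma}_{xy} &= \frac{1}{K}\sum_{k=1}^{K}\left(X_p^{(k)}-\hat{\mu}_x\right)\left(h(X_p^{(k)})-\hat{\mu}_y\right)^{\top},\\
\hat{\Sigma}_{yy} &= \frac{1}{K}\sum_{k=1}^{K}\left(h(X_p^{(k)})-\hat{\mu}_y\right)\left(h(X_p^{(k)})-\hat{\mu}_y\right)^{\top} + \sigma_y^2 I.
\end{align}
```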
The uncertainty reduction is then an application of the conditional distribution using the obtained means and variances. A common trick is to define the Kalman gain:
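In the notation of the empirical estimates above (``K`` now denoting the gain matrix, not a count), a standard form of the gain and the resulting update, assumed here rather than quoted from the lecture, is:

```math
K = \hat{\Sigma}_{xy}\hat{\Sigma}_{yy}^{-1},\qquad
\mu_{x|y} = \hat{\mu}_x + K\,(\mathbf{y}-\hat{\mu}_y),\qquad
\Sigma_{x|y} = \hat{\Sigma}_{xx} - K\hat{\Sigma}_{yy}K^{\top}.
```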