Fix emphasis vs definitions in kalman and ifp_advanced
Changes per #721:
- prior (kalman.md)
- filtering distribution (kalman.md)
- predictive (kalman.md)
- Kalman gain (kalman.md)
- predictive distribution (kalman.md)
- savings (ifp_advanced.md)
All terms were changed from italic to bold, since they are definitions per the style guide.
lectures/kalman.md (5 additions & 5 deletions)
@@ -85,7 +85,7 @@ One way to summarize our knowledge is a point prediction $\hat x$
 * Then it is better to summarize our initial beliefs with a bivariate probability density $p$
 * $\int_E p(x)dx$ indicates the probability that we attach to the missile being in region $E$.
 
-The density $p$ is called our *prior* for the random variable $x$.
+The density $p$ is called our **prior** for the random variable $x$.
 
 To keep things tractable in our example, we assume that our prior is Gaussian.
 
@@ -317,7 +317,7 @@ We have obtained probabilities for the current location of the state (missile) g
 This is called "filtering" rather than forecasting because we are filtering
 out noise rather than looking into the future.
 
-* $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ is called the *filtering distribution*
+* $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ is called the **filtering distribution**
 
 But now let's suppose that we are given another task: to predict the location of the missile after one unit of time (whatever that may be) has elapsed.
 
@@ -331,7 +331,7 @@ Let's suppose that we have one, and that it's linear and Gaussian. In particular
-Our aim is to combine this law of motion and our current distribution $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ to come up with a new *predictive* distribution for the location in one unit of time.
+Our aim is to combine this law of motion and our current distribution $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ to come up with a new **predictive** distribution for the location in one unit of time.
 
 In view of {eq}`kl_xdynam`, all we have to do is introduce a random vector $x^F \sim N(\hat x^F, \Sigma^F)$ and work out the distribution of $A x^F + w$ where $w$ is independent of $x^F$ and has distribution $N(0, Q)$.
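The context line above notes that the predictive distribution comes from working out the law of $A x^F + w$; since an affine map of a Gaussian plus independent Gaussian noise is Gaussian, this is $N(A \hat x^F, A \Sigma^F A' + Q)$. A quick Monte Carlo sketch of that fact, using made-up matrices (not values from the lecture):

```python
# Monte Carlo check that A x^F + w, with x^F ~ N(x_hat_F, Sigma_F) and
# w ~ N(0, Q) independent, has law N(A x_hat_F, A Sigma_F A' + Q).
import numpy as np

rng = np.random.default_rng(42)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])           # law-of-motion matrix (made up)
x_hat_F = np.array([0.5, -0.2])      # filtering mean (made up)
Sigma_F = np.array([[0.4, 0.1],
                    [0.1, 0.3]])     # filtering covariance (made up)
Q = 0.2 * np.eye(2)                  # shock covariance (made up)

n = 200_000
x_F = rng.multivariate_normal(x_hat_F, Sigma_F, size=n)
w = rng.multivariate_normal(np.zeros(2), Q, size=n)
samples = x_F @ A.T + w              # draws of A x^F + w

mean_theory = A @ x_hat_F            # predicted mean
cov_theory = A @ Sigma_F @ A.T + Q   # predicted covariance

print(np.allclose(samples.mean(axis=0), mean_theory, atol=0.01))
print(np.allclose(np.cov(samples.T), cov_theory, atol=0.01))
```

With 200,000 draws the sampling error is well inside the 0.01 tolerance, so both checks pass.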
@@ -356,7 +356,7 @@ $$
 $$
 
 The matrix $A \Sigma G' (G \Sigma G' + R)^{-1}$ is often written as
-$K_{\Sigma}$ and called the *Kalman gain*.
+$K_{\Sigma}$ and called the **Kalman gain**.
 
 * The subscript $\Sigma$ has been added to remind us that $K_{\Sigma}$ depends on $\Sigma$, but not $y$ or $\hat x$.
 
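For reference, the Kalman gain defined in the hunk above is a one-liner in NumPy. The matrices below are made-up illustrations, not values from the lecture:

```python
# Sketch of the Kalman gain K_Sigma = A Sigma G' (G Sigma G' + R)^{-1}.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])           # state transition (made up)
G = np.array([[1.0, 0.0]])           # observe the first coordinate only
Sigma = np.array([[0.4, 0.1],
                  [0.1, 0.3]])       # current state covariance (made up)
R = np.array([[0.05]])               # measurement-noise covariance (made up)

# Solve K_Sigma (G Sigma G' + R) = A Sigma G' rather than forming an
# explicit inverse, for numerical stability.
S = G @ Sigma @ G.T + R
K_Sigma = np.linalg.solve(S.T, (A @ Sigma @ G.T).T).T

print(K_Sigma.shape)   # (2, 1): maps the scalar innovation into state space
```

As the subscript note says, $K_{\Sigma}$ is computed from $A$, $G$, $\Sigma$, and $R$ alone; neither $y$ nor $\hat x$ appears.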
@@ -373,7 +373,7 @@ Our updated prediction is the density $N(\hat x_{new}, \Sigma_{new})$ where
 \end{aligned}
 ```
 
-* The density $p_{new}(x) = N(\hat x_{new}, \Sigma_{new})$ is called the *predictive distribution*
+* The density $p_{new}(x) = N(\hat x_{new}, \Sigma_{new})$ is called the **predictive distribution**
 
 The predictive distribution is the new density shown in the following figure, where
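The equation block for $\hat x_{new}$ and $\Sigma_{new}$ sits outside the diff context above, so the lecture's exact notation isn't visible here; the sketch below uses the standard textbook form of the gain-based update, $\hat x_{new} = A \hat x + K_{\Sigma}(y - G \hat x)$ and $\Sigma_{new} = A \Sigma A' - K_{\Sigma} G \Sigma A' + Q$, and checks it against the two-step filter-then-predict route. All matrices are made up:

```python
# Gain-based predictive update vs. filter-then-predict (made-up matrices).
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
G = np.array([[1.0, 0.0]])
Q = 0.2 * np.eye(2)
R = np.array([[0.05]])
Sigma = np.array([[0.4, 0.1],
                  [0.1, 0.3]])
x_hat = np.array([0.5, -0.2])
y = np.array([0.8])                  # current observation (made up)

S_inv = np.linalg.inv(G @ Sigma @ G.T + R)

# One-step update via the Kalman gain K_Sigma = A Sigma G' S^{-1}:
K = A @ Sigma @ G.T @ S_inv
x_hat_new = A @ x_hat + K @ (y - G @ x_hat)
Sigma_new = A @ Sigma @ A.T - K @ G @ Sigma @ A.T + Q

# Two-step route: condition on y (filtering), then apply the law of motion.
x_hat_F = x_hat + Sigma @ G.T @ S_inv @ (y - G @ x_hat)
Sigma_F = Sigma - Sigma @ G.T @ S_inv @ G @ Sigma

print(np.allclose(x_hat_new, A @ x_hat_F))          # same mean
print(np.allclose(Sigma_new, A @ Sigma_F @ A.T + Q))  # same covariance
```

Both routes agree by construction, which is a useful sanity check when implementing the recursion.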