
Commit 0fb1c34

add figure metadata cell
1 parent f8c8f48 commit 0fb1c34

File tree

1 file changed: +49 -1


lectures/kalman.md

Lines changed: 49 additions & 1 deletion

@@ -123,6 +123,13 @@ $2 \times 2$ covariance matrix. In our simulations, we will suppose that
 This density $p(x)$ is shown below as a contour map, with the center of the red ellipse being equal to $\hat{x}$.
 
 ```{code-cell} ipython3
+---
+mystnb:
+  figure:
+    caption: |
+      Prior distribution
+    name: fig_prior
+---
 # Set up the Gaussian prior density p
 Σ = jnp.array([[0.4, 0.3],
                [0.3, 0.45]])
@@ -202,6 +209,13 @@ The next figure shows the original prior $p(x)$ and the new reported
 location $y$.
 
 ```{code-cell} ipython3
+---
+mystnb:
+  figure:
+    caption: |
+      Prior distribution and observation
+    name: fig_obs
+---
 # The observed value of y
 y = jnp.array([[2.3],
                [-1.9]])
@@ -277,6 +291,13 @@ This new density $p(x \,|\, y) = N(\hat{x}^F, \Sigma^F)$ is shown in the next fi
 The original density is left in as contour lines for comparison
 
 ```{code-cell} ipython3
+---
+mystnb:
+  figure:
+    caption: |
+      Updated distribution from observation
+    name: fig_update_obs
+---
 # Define the matrices G and R from the equation y = G x + N(0, R)
 G = jnp.array([[1, 0],
                [0, 1]])
@@ -395,6 +416,13 @@ Q = 0.3 * \Sigma
 $$
 
 ```{code-cell} ipython3
+---
+mystnb:
+  figure:
+    caption: |
+      Updated distribution from transition
+    name: fig_update_trans
+---
 # The matrices A and Q
 A = jnp.array([[1.2, 0],
                [0, -0.2]])
@@ -591,6 +619,13 @@ Your figure should -- modulo randomness -- look something like this
 ```
 
 ```{code-cell} ipython3
+---
+mystnb:
+  figure:
+    caption: |
+      First 5 densities when θ=10.0
+    name: fig_5_density
+---
 # Parameters
 θ = 10 # Constant value of state x_t
 A, C, G, H = 1, 0, 1, 1
@@ -617,7 +652,6 @@ for i in range(N):
     ax.plot(xgrid, norm.pdf(xgrid, loc=m, scale=jnp.sqrt(v)), label=f'$t={i}$')
     kalman.update(y[i])
 
-ax.set_title(f'First {N} densities when $\\theta = {θ:.1f}$')
 ax.legend(loc='upper left')
 plt.show()
 ```
@@ -657,6 +691,13 @@ Your figure should show error erratically declining something like this
 ```
 
 ```{code-cell} ipython3
+---
+mystnb:
+  figure:
+    caption: |
+      Probability differences
+    name: fig_convergence
+---
 ϵ = 0.1
 θ = 10 # Constant value of state x_t
 A, C, G, H = 1, 0, 1, 1
@@ -762,6 +803,13 @@ Observe how, after an initial learning period, the Kalman filter performs quite
 ```
 
 ```{code-cell} ipython3
+---
+mystnb:
+  figure:
+    caption: |
+      Kalman filter vs conditional expectation
+    name: fig_compare
+---
 # Define A, C, G, H
 G = jnp.eye(2)
 H = jnp.sqrt(0.5) * G
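
Every hunk above applies the same MyST-NB pattern: a YAML metadata block at the top of a `{code-cell}` wraps the cell's output in a numbered figure with the given `caption`, and the `name` key turns that figure into a cross-reference target. As a minimal sketch of how such a target could then be used from prose elsewhere in the lecture, assuming numbered figures are enabled in the Sphinx configuration (the `{numref}` sentence below is illustrative and not part of this commit):

````md
```{code-cell} ipython3
---
mystnb:
  figure:
    caption: |
      Prior distribution
    name: fig_prior
---
# plotting code producing the contour map goes here
```

As {numref}`fig_prior` shows, the red ellipse is centered at $\hat{x}$.
````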
