`lectures/kalman_2.md` (8 additions, 16 deletions)
```diff
@@ -4,7 +4,7 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.14.4
+    jupytext_version: 1.16.7
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
```
```diff
@@ -237,7 +237,7 @@ for t in range(1, T):
     x_hat, Σ = kalman.x_hat, kalman.Sigma
     Σ_t[:, :, t-1] = Σ
     x_hat_t[:, t-1] = x_hat.reshape(-1)
-    y_hat_t[t-1] = worker.G @ x_hat
+    [y_hat_t[t-1]] = worker.G @ x_hat

 x_hat_t = np.concatenate((x[:, 1][:, np.newaxis],
                           x_hat_t), axis=1)
```
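The one functional change in this hunk swaps plain assignment for sequence unpacking. A minimal sketch of the difference, assuming (as in the lecture) that `worker.G` has a single row, so the product has exactly one entry; the values of `G` and `x_hat` below are hypothetical:

```python
import numpy as np

G = np.array([[1.0, 1.0]])    # hypothetical 1 x 2 observation matrix
x_hat = np.array([2.0, 3.0])  # hypothetical state estimate

y_hat_t = np.empty(5)

# Old form: `G @ x_hat` is a size-1 array (shape (1,)), not a scalar, and
# storing it in a scalar slot relies on an implicit array-to-scalar
# conversion that NumPy 1.25+ deprecates:
# y_hat_t[0] = G @ x_hat      # DeprecationWarning on recent NumPy

# New form: sequence unpacking extracts the single entry explicitly.
[y_hat_t[0]] = G @ x_hat
print(y_hat_t[0])             # 5.0
```

The bracketed target is ordinary Python unpacking, equivalent to `y_hat_t[0], = G @ x_hat`, so the right-hand side must have exactly one element.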
````diff
@@ -253,7 +253,6 @@ We also plot $E [u_0 | y^{t-1}]$, which is the firm inference about a worker's
 We can watch as the firm's inference $E [u_0 | y^{t-1}]$ of the worker's work ethic converges toward the hidden $u_0$, which is not directly observed by the firm.

 ```{code-cell} ipython3
-
 fig, ax = plt.subplots(1, 2)

 ax[0].plot(y_hat_t, label=r'$E[y_t| y^{t-1}]$')
````
````diff
@@ -273,6 +272,7 @@ ax[1].legend()
 fig.tight_layout()
 plt.show()
 ```
+
 ## Some Computational Experiments

 Let's look at $\Sigma_0$ and $\Sigma_T$ in order to see how much the firm learns about the hidden state during the horizon we have set.
````
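The code that actually prints these matrices sits outside the hunk. A short sketch of the comparison the sentence describes, reusing the `Σ_t` array stacked in the filtering loop above (so it is not standalone), might look like:

```python
# Assumes Σ_t from the filtering loop above, with shape (2, 2, T-1)
import numpy as np

Σ_0, Σ_T = Σ_t[:, :, 0], Σ_t[:, :, -1]
print("Σ_0:\n", Σ_0)   # conditional covariance early in the sample
print("Σ_T:\n", Σ_T)   # conditional covariance at the end of the horizon

# Diagonal entries (conditional variances) should shrink as the firm learns
print(np.diag(Σ_T) < np.diag(Σ_0))
```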
```diff
@@ -290,7 +290,6 @@ Evidently, entries in the conditional covariance matrix become smaller over time
 It is enlightening to portray how conditional covariance matrices $\Sigma_t$ evolve by plotting confidence ellipsoids around $E[x_t | y^{t-1}]$ at various $t$'s.
```
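The plotting code for these ellipsoids is not part of the diff. A self-contained sketch of the standard construction (stretch a unit circle by the square roots of the eigenvalues of $\Sigma$, rotate by its eigenvectors, recenter on the state estimate), with hypothetical values standing in for $E[x_t | y^{t-1}]$ and $\Sigma_t$:

```python
import numpy as np
import matplotlib.pyplot as plt

def confidence_ellipse(mean, cov, n_std=2.0, ax=None, **kwargs):
    """Plot an n_std confidence ellipse for a bivariate Gaussian."""
    ax = ax or plt.gca()
    vals, vecs = np.linalg.eigh(cov)      # squared axis lengths, orientation
    theta = np.linspace(0, 2 * np.pi, 200)
    circle = np.vstack([np.cos(theta), np.sin(theta)])
    # unit circle -> stretched by sqrt(eigenvalues) -> rotated -> recentered
    xy = vecs @ (n_std * np.sqrt(vals)[:, None] * circle) + np.asarray(mean)[:, None]
    ax.plot(xy[0], xy[1], **kwargs)

# Hypothetical stand-ins for E[x_t | y^{t-1}] and Σ_t
mean = np.array([1.0, 0.5])
cov = np.array([[0.4, 0.1],
                [0.1, 0.2]])

fig, ax = plt.subplots()
confidence_ellipse(mean, cov, n_std=2.0, ax=ax, label=r'$2\sigma$ ellipsoid')
ax.scatter(*mean, marker='x')
ax.legend()
plt.show()
```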
0 commit comments