lectures/kalman_2.md
In this quantecon lecture {doc}`A First Look at the Kalman filter <kalman>`, we used a Kalman filter to estimate locations of a rocket.
In this lecture, we'll use the Kalman filter to infer a worker's human capital and the effort that the worker devotes to accumulating human capital, neither of which the firm observes directly.
The firm learns about those things only by observing a history of the output that the worker generates for the firm, and from understanding how that output depends on the worker's human capital and how human capital evolves as a function of the worker's effort.

We'll posit a rule that expresses how much the firm pays the worker each period as a function of the firm's information each period.

In addition to what's in Anaconda, this lecture will need the following libraries:

To conduct simulations, we bring in these imports, as in {doc}`A First Look at the Kalman filter <kalman>`:

```{code-cell} ipython3
import matplotlib.pyplot as plt
import numpy as np
import jax
import jax.numpy as jnp
from quantecon import Kalman, LinearStateSpace
from collections import namedtuple
from typing import NamedTuple
from scipy.stats import multivariate_normal
import matplotlib as mpl

mpl.rcParams['text.usetex'] = True
```
Please note how the `WorkerModel` namedtuple creates all of the objects required to compute an associated state-space representation {eq}`ssrepresent`.

This is handy, because in order to simulate a history $\{y_t, h_t\}$ for a worker, we'll want to form a state-space system for him/her by using the [`LinearStateSpace`](https://quanteconpy.readthedocs.io/en/latest/tools/lss.html) class.
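The lecture's `create_worker` is defined earlier and not shown in this excerpt. As a purely hypothetical sketch consistent with the fields used below (`A`, `C`, `G`, `R`, `xhat_0`, `Σ_0`), it might look like this; the dynamics and parameter values here are illustrative assumptions, not the lecture's actual definitions:

```python
from collections import namedtuple

import numpy as np

# Hypothetical state: x_t = (h_t, u_t)', where h_t is human capital and
# u_t is effort. All functional forms and numbers below are assumptions.
WorkerModel = namedtuple("WorkerModel", ("A", "C", "G", "R", "xhat_0", "Σ_0"))

def create_worker(α=0.8, β=0.2, c=0.2, g=1.0, R=0.5,
                  hhat_0=4.0, uhat_0=4.0, σ_h=4.0, σ_u=4.0):
    A = np.array([[α, β],      # h_{t+1} = α h_t + β u_t + c w_{t+1}
                  [0., 1.]])   # u_{t+1} = u_t  (effort is hard-wired)
    C = np.array([[c],
                  [0.]])
    G = np.array([[g, 0.]])    # y_t = g h_t + v_t, with Var(v_t) = R
    xhat_0 = np.array([[hhat_0],    # firm's prior mean for (h_0, u_0)
                       [uhat_0]])
    Σ_0 = np.array([[σ_h, 0.],      # firm's prior covariance
                    [0., σ_u]])
    return WorkerModel(A=A, C=C, G=G, R=np.array([[R]]),
                       xhat_0=xhat_0, Σ_0=Σ_0)
```

With a definition along these lines, `create_worker()` returns a namedtuple whose fields plug directly into the `LinearStateSpace` and `Kalman` constructors used below.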
```{code-cell} ipython3
# TODO write it into a function
# Define A, C, G, R, xhat_0, Σ_0
worker = create_worker()
A, C, G, R = worker.A, worker.C, worker.G, worker.R
xhat_0, Σ_0 = worker.xhat_0, worker.Σ_0

# Create a LinearStateSpace object
ss = LinearStateSpace(A, C, G, jnp.sqrt(R),
                      mu_0=xhat_0, Sigma_0=Σ_0)

T = 100
seed = 1234
x, y = ss.simulate(T, seed)
y = y.flatten()

h_0, u_0 = x[0, 0], x[1, 0]
```

Next, to compute the firm's policy for setting the log wage based on the information it has about the worker, we use the Kalman filter described in this quantecon lecture {doc}`A First Look at the Kalman filter <kalman>`.

In particular, we want to compute all of the objects in an "innovations representation".
## An Innovations Representation

We have all the objects in hand required to form an innovations representation for the output process $\{y_t\}_{t=0}^T$ for a worker.
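As a reminder, an innovations representation takes the standard form (stated here in this lecture's notation, consistent with the $A$, $G$, and $K_t$ objects referenced below):

$$
\begin{aligned}
\hat x_{t+1} &= A \hat x_t + K_t a_t \\
y_t &= G \hat x_t + a_t
\end{aligned}
$$

where $a_t := y_t - G \hat x_t$ is the innovation, i.e., the part of $y_t$ that cannot be predicted from the history $y^{t-1}$.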
Let's code that up now.

In the innovations representation, $K_t$ is the Kalman gain matrix at time $t$.

We accomplish this in the following code that uses the [`Kalman`](https://quanteconpy.readthedocs.io/en/latest/tools/kalman.html) class.
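The code cell itself is elided in this excerpt. For orientation, here is a self-contained numpy sketch of the recursions that the `Kalman` class iterates; the system matrices below are illustrative stand-ins, not the lecture's calibrated values:

```python
import numpy as np

# Illustrative system matrices (assumptions, not the lecture's values)
A = np.array([[0.8, 0.2],
              [0.0, 1.0]])
C = np.array([[0.2],
              [0.0]])
G = np.array([[1.0, 0.0]])
R = np.array([[0.5]])        # measurement-noise variance

def kalman_step(xhat, Σ, y):
    """One Kalman recursion: given xhat_t = E[x_t | y^{t-1}] and
    Σ_t = Var(x_t | y^{t-1}), return K_t, xhat_{t+1}, Σ_{t+1}."""
    S = G @ Σ @ G.T + R                   # innovation variance
    K = A @ Σ @ G.T @ np.linalg.inv(S)    # Kalman gain K_t
    a = y - G @ xhat                      # innovation a_t
    xhat_next = A @ xhat + K @ a
    Σ_next = A @ Σ @ A.T - K @ G @ Σ @ A.T + C @ C.T
    return K, xhat_next, Σ_next

xhat = np.array([[4.0], [4.0]])
Σ = np.eye(2)
K, xhat, Σ = kalman_step(xhat, Σ, y=np.array([[3.5]]))
```

The `Kalman` class performs the same updates internally; iterating `kalman_step` over an observed history $\{y_t\}$ produces the sequences $\{K_t\}$ and $\{\hat x_t\}$ that appear in the innovations representation.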
For a draw of $h_0, u_0$, we plot $E[y_t] = G \hat x_t$ where $\hat x_t = E [x_t | y^{t-1}]$.

We also plot $\hat u_t = E [u_0 | y^{t-1}]$, which is the firm's inference about a worker's hard-wired "work ethic" $u_0$, conditioned on information $y^{t-1}$ that it has about him or her coming into period $t$.

We can watch as the firm's inference $E [u_0 | y^{t-1}]$ of the worker's work ethic converges toward the hidden $u_0$, which is not directly observed by the firm.
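This convergence can be checked numerically. The following self-contained sketch uses a hypothetical parameterization (the lecture's actual values may differ), runs the filtering recursions by hand, and confirms that $\hat u_t = E[u_0 \mid y^{t-1}]$ moves toward the hidden $u_0$:

```python
import numpy as np

rng = np.random.default_rng(1234)

# Hypothetical system: x_t = (h_t, u_t)' with u_t constant over time;
# all values here are illustrative assumptions
A = np.array([[0.8, 0.2], [0.0, 1.0]])
C = np.array([[0.05], [0.0]])     # small state noise
G = np.array([[1.0, 0.0]])
R = np.array([[0.05]])            # small measurement noise

T = 100
x = np.array([[1.0], [1.0]])      # true h_0 = 1, hidden u_0 = 1
u_0 = x[1, 0]

xhat = np.array([[1.0], [0.0]])   # firm's prior guesses u_0 = 0
Σ = np.eye(2)

u_hats = [xhat[1, 0]]             # track \hat u_t = E[u_0 | y^{t-1}]
for t in range(T):
    y = G @ x + np.sqrt(R) @ rng.standard_normal((1, 1))
    S = G @ Σ @ G.T + R
    K = A @ Σ @ G.T @ np.linalg.inv(S)
    xhat = A @ xhat + K @ (y - G @ xhat)
    Σ = A @ Σ @ A.T - K @ G @ Σ @ A.T + C @ C.T
    x = A @ x + C @ rng.standard_normal((1, 1))
    u_hats.append(xhat[1, 0])

err_start, err_end = abs(u_hats[0] - u_0), abs(u_hats[-1] - u_0)
```

Since $u_t$ has no innovation of its own, the conditional variance of $u_0$ shrinks as observations accumulate, and the final inference error `err_end` ends up far below the initial prior error `err_start`.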