
Commit d3912a0

check grammar mistakes and typos
1 parent b1808f5 commit d3912a0

File tree

1 file changed (+17, -17 lines)


lectures/kalman_2.md

Lines changed: 17 additions & 17 deletions
@@ -68,7 +68,7 @@ mpl.rcParams['text.latex.preamble'] = r'\usepackage{{amsmath}}'
 
 A representative worker is permanently employed at a firm.
 
-The workers' output is described by the following dynamic process:
+The worker's output is described by the following dynamic process:
 
 ```{math}
 :label: worker_model
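For context, the `worker_model` block that this hunk truncates describes dynamics of roughly the following form, where $h_t$ is the worker's human capital and $u_t$ is a fixed "work ethic"; this is a sketch inferred from the state $x_t = (h_t, u_t)$ and the parameters $\alpha, \beta, c$ that appear in later hunks, not a verbatim copy of the lecture's equation:

$$
h_{t+1} = \alpha h_t + \beta u_t + c\, w_{t+1}, \qquad
u_{t+1} = u_t, \qquad
y_t = g\, h_t + v_t
$$

with $w_{t+1}$ and $v_t$ Gaussian shocks, so the worker's output $y_t$ is a noisy observation of a hidden state that the firm must infer.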
@@ -166,7 +166,7 @@ x_t = \begin{bmatrix} h_{t} \cr u_{t} \end{bmatrix} , \quad
 0 & \sigma_{u,0} \end{bmatrix}
 ```
 
-To compute the firm's wage setting policy, we first we create a `NamedTuple` to store the parameters of the model
+To compute the firm's wage-setting policy, we first create a `NamedTuple` to store the parameters of the model
 
 ```{code-cell} ipython3
 class WorkerModel(NamedTuple):
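As a minimal sketch of the pattern this paragraph describes (a `NamedTuple` of state-space objects built by a small factory function), something along the following lines would do; the field names and the extra defaults (`R`, `g`, `μ_h`, `μ_u`, `σ_h`, `σ_u`) are illustrative assumptions, while `α=.8, β=.2, c=.2` come from the hunk below:

```python
from typing import NamedTuple
import numpy as np

class WorkerModel(NamedTuple):
    A: np.ndarray      # state transition for x_t = (h_t, u_t)
    C: np.ndarray      # loading on the shock w_{t+1}
    G: np.ndarray      # observation matrix in y_t = G x_t + v_t
    R: float           # observation noise variance
    μ_0: np.ndarray    # prior mean of x_0
    Σ_0: np.ndarray    # prior covariance of x_0

def create_worker(α=.8, β=.2, c=.2, R=.5, g=1.0,
                  μ_h=4.0, μ_u=4.0, σ_h=4.0, σ_u=4.0):
    A = np.array([[α, β],
                  [0., 1.]])          # u_t is constant over time
    C = np.array([[c],
                  [0.]])
    G = np.array([[g, 0.]])           # only h_t affects observed output
    μ_0 = np.array([[μ_h],
                    [μ_u]])
    Σ_0 = np.array([[σ_h, 0.],
                    [0., σ_u]])
    return WorkerModel(A, C, G, R, μ_0, Σ_0)
```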
@@ -202,7 +202,7 @@ def create_worker(α=.8, β=.2, c=.2,
 
 Please note how the `WorkerModel` namedtuple creates all of the objects required to compute an associated state-space representation {eq}`ssrepresent`.
 
-This is handy, because in order to simulate a history $\{y_t, h_t\}$ for a worker, we'll want to form state space system for him/her by using the [`LinearStateSpace`](https://quanteconpy.readthedocs.io/en/latest/tools/lss.html) class.
+This is handy, because in order to simulate a history $\{y_t, h_t\}$ for a worker, we'll want to form a state space system for him/her by using the [`LinearStateSpace`](https://quanteconpy.readthedocs.io/en/latest/tools/lss.html) class.
 
 ```{code-cell} ipython3
 # Define A, C, G, R, μ_0, Σ_0
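A rough sketch of the simulation step this paragraph alludes to, assuming the `A, C, G, R, μ_0, Σ_0` objects produced by `create_worker`; the lecture's own code cell may differ in detail:

```python
import numpy as np
import quantecon as qe

worker = create_worker()

# x_{t+1} = A x_t + C w_{t+1},  y_t = G x_t + sqrt(R) v_t
ss = qe.LinearStateSpace(worker.A, worker.C, worker.G, np.sqrt(worker.R),
                         mu_0=worker.μ_0, Sigma_0=worker.Σ_0)

T = 100
x, y = ss.simulate(T)   # x: 2 x T path of (h_t, u_t); y: 1 x T observed output
y = y.flatten()
```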
@@ -241,9 +241,9 @@ y_{t} & = G \hat x_t + a_t
 where $K_t$ is the Kalman gain matrix at time $t$.
 
 
-We accomplish this in the following code that uses the [`Kalman`](https://quanteconpy.readthedocs.io/en/latest/tools/kalman.html) class.
+We accomplish this in the following code that uses the [`Kalman`](https://quanteconpy.readthedocs.io/en/latest/tools/kalman.html) class.
 
-Suppose the belief of firm coincides with the real distribution of $x_0$.
+Suppose the belief of the firm coincides with the real distribution of $x_0$.
 
 ```{code-cell} ipython3
 x_hat_0, Σ_hat_0 = worker.μ_0, worker.Σ_0
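The code cell this hunk opens feeds these objects to the [`Kalman`](https://quanteconpy.readthedocs.io/en/latest/tools/kalman.html) class; a stripped-down sketch of that kind of filtering loop, assuming the `ss`, `y`, `x_hat_0`, `Σ_hat_0` objects above (the file's actual cell stores the results in preallocated JAX arrays instead of lists):

```python
import numpy as np
from quantecon import Kalman

# firm's prior coincides with the true distribution of x_0
kalman = Kalman(ss, x_hat_0, Σ_hat_0)

x_hat_path, Σ_hat_path = [], []
for t in range(T):
    # record E[x_t | y^{t-1}] and its covariance before seeing y_t ...
    x_hat_path.append(np.asarray(kalman.x_hat).flatten())
    Σ_hat_path.append(np.asarray(kalman.Sigma))
    # ... then update to E[x_{t+1} | y^t] using the period-t observation
    kalman.update(y[t])
```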
@@ -259,7 +259,6 @@ for t in range(1, T):
     # x_hat_t = E(x_t | y^{t-1})
     x_hat = x_hat.at[:, t].set(x_hat_t.reshape(-1))
     Σ_hat = Σ_hat.at[:, :, t].set(Σ_hat_t)
-    y_hat = y_hat.at[t].set((worker.G @ x_hat_t).item())
 
 # Add the initial
 x_hat = x_hat.at[:, 0].set(x_hat_0.reshape(-1))
@@ -270,11 +269,11 @@ y_hat = worker.G @ x_hat
 u_hat = x_hat[1, :]
 ```
 
-For a draw of $h_0, u_0$, we plot $E[y_t | y^{t-1}] = G \hat x_t $ where $\hat x_t = E [x_t | y^{t-1}]$.
+For a draw of $h_0, u_0$, we plot $E[y_t | y^{t-1}] = G \hat x_t $ where $\hat x_t = E [x_t | y^{t-1}]$.
 
-We also plot $\hat u_t = E [u_t | y^{t-1}]$, which is the firm inference about a worker's hard-wired "work ethic" $u_0$, conditioned on information $y^{t-1}$ that it has about him or her coming into period $t$.
+We also plot $\hat u_t = E [u_t | y^{t-1}]$, which is the firm's inference about a worker's hard-wired "work ethic" $u_0$, conditioned on information $y^{t-1}$ that it has about him or her coming into period $t$.
 
-We can watch as the firm's inference $E [u_t | y^{t-1}]$ of the worker's work ethic converges toward the hidden $u_0$, which is not directly observed by the firm.
+We can watch as the firm's inference $E [u_t | y^{t-1}]$ of the worker's work ethic converges toward the hidden $u_0$, which is not directly observed by the firm.
 
 ```{code-cell} ipython3
 fig, ax = plt.subplots(1, 2)
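The two panels built in this cell show $E[y_t \mid y^{t-1}]$ and $\hat u_t$; a bare-bones version of the second panel, assuming the `u_hat` series computed above and the simulated state path `x` (the variable names here are illustrative, not the file's exact code):

```python
import matplotlib.pyplot as plt

u_0 = x[1, 0]    # the worker's true, hidden work ethic from the simulated state path

fig, ax = plt.subplots()
ax.plot(u_hat, label=r'$E[u_t \mid y^{t-1}]$')        # firm's inference
ax.axhline(y=u_0, color='grey', linestyle='dashed',
           label=r'$u_0$ (unobserved)')
ax.set_xlabel(r'$t$')
ax.legend()
plt.show()
```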
@@ -309,9 +308,9 @@ print(Σ_hat[:, :, 0])
 print(Σ_hat[:, :, -1])
 ```
 
-Evidently, entries in the conditional covariance matrix become smaller over time.
+Evidently, entries in the conditional covariance matrix become smaller over time.
 
-It is enlightening to portray how conditional covariance matrices $\Sigma_t$ evolve by plotting confidence ellipsoides around $E [x_t |y^{t-1}] $ at various $t$'s.
+It is enlightening to portray how conditional covariance matrices $\Sigma_t$ evolve by plotting confidence ellipsoids around $E [x_t |y^{t-1}] $ at various $t$'s.
 
 ```{code-cell} ipython3
 # Create a grid of points for contour plotting
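One way to produce the confidence ellipsoids described here is to contour the bivariate normal density $N(E[x_t \mid y^{t-1}], \Sigma_t)$ over a grid; a sketch assuming the `x_hat` and `Σ_hat` arrays from the filtering loop (the file's own contour code continues in this hunk):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

t = 0                                       # which conditional distribution to draw
mean = np.asarray(x_hat[:, t])
cov = np.asarray(Σ_hat[:, :, t])

# evaluate the N(mean, cov) density on a grid around the conditional mean
h_vals = np.linspace(mean[0] - 5, mean[0] + 5, 100)
u_vals = np.linspace(mean[1] - 5, mean[1] + 5, 100)
H, U = np.meshgrid(h_vals, u_vals)
Z = multivariate_normal(mean, cov).pdf(np.dstack((H, U)))

fig, ax = plt.subplots()
ax.contour(H, U, Z)                         # level curves are confidence ellipsoids
ax.set_xlabel(r'$h_t$')
ax.set_ylabel(r'$u_t$')
plt.show()
```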
@@ -341,9 +340,9 @@ for i, t in enumerate(np.linspace(0, T-1, 3, dtype=int)):
     axs[i].set_xlabel(r'$h_{{{}}}$'.format(str(t+1)))
     axs[i].set_ylabel(r'$u_{{{}}}$'.format(str(t+1)))
 
-    cov_latex = r'$\Sigma_{{{}}}= \begin{{bmatrix}} {:.2f} & {:.2f} \\ {:.2f} & {:.2f} \end{{bmatrix}}$'.format(
-        t+1, cov[0, 0], cov[0, 1], cov[1, 0], cov[1, 1]
-    )
+    cov_latex = (r'$\Sigma_{{{}}}= \begin{{bmatrix}} {:.2f} & {:.2f} \\ '
+                 r'{:.2f} & {:.2f} \end{{bmatrix}}$'.format(
+                 t+1, cov[0, 0], cov[0, 1], cov[1, 0], cov[1, 1]))
 
     axs[i].text(0.33, -0.15, cov_latex, transform=axs[i].transAxes)
 
 
@@ -463,7 +462,7 @@ Here is an example.
 
 ```{code-cell} ipython3
 # We can set these parameters when creating a worker -- just like classes!
-hard_working_worker = create_worker(α=.4, β=.8,
+hard_working_worker = create_worker(α=.4, β=.8,
                                     μ_h=7.0, μ_u=100, σ_h=2.5, σ_u=3.2)
 
 print(hard_working_worker)
@@ -517,8 +516,9 @@ def simulate_workers(worker, T, ax, ss_μ=None, ss_Σ=None,
     y_hat = G @ x_hat
     u_hat = x_hat[1, :]
 
-    if diff :
-        title = ('Difference between inferred and true work ethic over time' if title is None else title)
+    if diff:
+        title = ('Difference between inferred and true work ethic over time'
+                 if title is None else title)
 
         ax.plot(u_hat - u_0, alpha=.5)
         ax.axhline(y=0, color='grey', linestyle='dashed')
