
Commit 6e56226

fix typo
1 parent 7f6a7b2 commit 6e56226

File tree

1 file changed: +11 -11 lines changed


lectures/mle.md

Lines changed: 11 additions & 11 deletions
@@ -94,7 +94,7 @@ The number of billionaires is integer-valued.
 Hence we consider distributions that take values only in the nonnegative integers.
 
 (This is one reason least squares regression is not the best tool for the present problem, since the dependent variable in linear regression is not restricted
-to integer values)
+to integer values.)
 
 One integer distribution is the [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution), the probability mass function (pmf) of which is
 
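(The pmf itself is cut off by the diff context. As an editorial aside, a minimal sketch of evaluating a Poisson pmf with `jax.scipy.stats`; the grid and rate below are illustrative, not the lecture's values:)

```python
# Poisson pmf f(y; mu) = mu^y e^(-mu) / y! evaluated on a grid of outcomes
import jax.numpy as jnp
from jax.scipy.stats import poisson

y = jnp.arange(10)         # nonnegative integer outcomes 0, 1, ..., 9
mu = 3.0                   # illustrative mean/rate parameter
print(poisson.pmf(y, mu))  # probabilities; they sum to ~1 as the grid grows
```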
@@ -176,7 +176,7 @@ In Treisman's paper, the dependent variable --- the number of billionaires $y_i$
 
 Hence, the distribution of $y_i$ needs to be conditioned on the vector of explanatory variables $\mathbf{x}_i$.
 
-The standard formulation --- the so-called *poisson regression* model --- is as follows:
+The standard formulation --- the so-called *Poisson regression* model --- is as follows:
 
 ```{math}
 :label: poissonreg
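
(The math block is cut off by the diff context; for the reader's reference, the Poisson regression model is conventionally written as

$$
f(y_i \mid \mathbf{x}_i) = \frac{\mu_i^{y_i}}{y_i!} e^{-\mu_i},
\qquad \mu_i = \exp(\mathbf{x}_i' \boldsymbol{\beta}),
$$

linking the conditional mean of $y_i$ to the regressors through an exponential link.)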
@@ -322,7 +322,7 @@ $$
 $$
 
 In doing so it is generally easier to maximize the log-likelihood (consider
-differentiating $f(x) = x \exp(x)$ vs. $f(x) = \log(x) + x$).
+differentiating $f(x) = x \exp(x)$ vs. $f(x) = \log(x) + x$).
 
 Given that taking a logarithm is a monotone increasing transformation, a maximizer of the likelihood function will also be a maximizer of the log-likelihood function.

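(An editorial aside, not part of the diff: the derivative comparison the context alludes to is

$$
\frac{d}{dx}\, x e^{x} = (1 + x) e^{x},
\qquad
\frac{d}{dx}\left[\log x + x\right] = \frac{1}{x} + 1,
$$

and since $\log(x e^x) = \log x + x$, the logged form is plainly the easier one to differentiate and set to zero.)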
@@ -350,7 +350,7 @@ $$
 \end{split}
 $$
 
-The MLE of the Poisson to the Poisson for $\hat{\beta}$ can be obtained by solving
+The MLE of the Poisson for $\hat{\beta}$ can be obtained by solving
 
 $$
 \underset{\beta}{\max} \Big(
@@ -386,7 +386,7 @@ def logL(β):
     return -((β - 10) ** 2) - 10
 ```
 
-To find the value of gradient of the above function, we can use [jax.grad](https://jax.readthedocs.io/en/latest/_autosummary/jax.grad.html) which auto-differentiates the given function.
+To find the value of the gradient of the above function, we can use [jax.grad](https://jax.readthedocs.io/en/latest/_autosummary/jax.grad.html) which auto-differentiates the given function.
 
 We further use [jax.vmap](https://jax.readthedocs.io/en/latest/_autosummary/jax.vmap.html) which vectorizes the given function i.e. the function acting upon scalar inputs can now be used with vector inputs.
 
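A sketch of the pattern these context lines describe, reusing the `logL` shown above (the grid of β values is illustrative):

```python
# Differentiate the scalar log-likelihood with jax.grad, then vectorize the
# resulting derivative with jax.vmap so it accepts an array of β values.
import jax
import jax.numpy as jnp

def logL(β):
    return -((β - 10) ** 2) - 10

dlogL = jax.vmap(jax.grad(logL))   # elementwise d logL / dβ

β_grid = jnp.linspace(1.0, 20.0, 5)
print(dlogL(β_grid))               # equals -2(β - 10); zero at the maximizer β = 10
```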
@@ -443,17 +443,17 @@ guess), then
 \end{aligned}
 $$
 
-1. Check whether $\boldsymbol{\beta}_{(k+1)} - \boldsymbol{\beta}_{(k)} < tol$
+2. Check whether $\boldsymbol{\beta}_{(k+1)} - \boldsymbol{\beta}_{(k)} < tol$
    - If true, then stop iterating and set
      $\hat{\boldsymbol{\beta}} = \boldsymbol{\beta}_{(k+1)}$
    - If false, then update $\boldsymbol{\beta}_{(k+1)}$
 
 As can be seen from the updating equation,
 $\boldsymbol{\beta}_{(k+1)} = \boldsymbol{\beta}_{(k)}$ only when
-$G(\boldsymbol{\beta}_{(k)}) = 0$ ie. where the first derivative is equal to 0.
+$G(\boldsymbol{\beta}_{(k)}) = 0$ i.e. where the first derivative is equal to 0.
 
 (In practice, we stop iterating when the difference is below a small
-tolerance threshold)
+tolerance threshold.)
 
 Let's have a go at implementing the Newton-Raphson algorithm.
 
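A minimal sketch of the update rule described above, under assumed names (the toy concave objective and the helpers `G` and `H` are illustrative, not the lecture's implementation):

```python
# Newton-Raphson: β_(k+1) = β_(k) - H(β_(k))^{-1} G(β_(k)), stopping once
# the step size falls below a small tolerance threshold.
import jax
import jax.numpy as jnp

def logL(β):
    return -((β - 10) ** 2).sum() - 10   # toy concave objective, maximized at β = 10

G = jax.grad(logL)      # gradient vector
H = jax.hessian(logL)   # Hessian matrix

def newton_raphson(β, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        β_new = β - jnp.linalg.solve(H(β), G(β))  # the updating equation
        if jnp.linalg.norm(β_new - β) < tol:      # stop when the step is tiny
            return β_new
        β = β_new
    return β

print(newton_raphson(jnp.array([1.0, 2.0])))      # converges to [10., 10.]
```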
@@ -639,7 +639,7 @@ Before we begin, let's re-estimate our simple model with `statsmodels`
 to confirm we obtain the same coefficients and log-likelihood value.
 
 Now, as `statsmodels` accepts only NumPy arrays, we can use the `__array__` method
-of JAX arrays to convert it to NumPy arrays.
+of JAX arrays to convert them to NumPy arrays.
 
 ```{code-cell} ipython3
 X = jnp.array([[1, 2, 5], [1, 1, 3], [1, 4, 2], [1, 5, 2], [1, 3, 1]])
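A brief sketch of the conversion this hunk refers to, reusing the `X` shown above:

```python
# A JAX array's __array__ method lets NumPy (and hence statsmodels) consume it.
import numpy as np
import jax.numpy as jnp

X = jnp.array([[1, 2, 5], [1, 1, 3], [1, 4, 2], [1, 5, 2], [1, 3, 1]])
X_np = np.asarray(X)   # invokes X.__array__() under the hood
print(type(X_np))      # <class 'numpy.ndarray'>
```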
@@ -757,7 +757,7 @@ capitalization, and negatively correlated with top marginal income tax
 rate.
 
 To analyze our results by country, we can plot the difference between
-the predicted an actual values, then sort from highest to lowest and
+the predicted and actual values, then sort from highest to lowest and
 plot the first 15
 
 ```{code-cell} ipython3
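# An editorial sketch of the residual analysis just described, with a toy
# DataFrame; the column names 'predicted' and 'actual' and all numbers are
# illustrative, not the lecture's data or code.
import pandas as pd

df = pd.DataFrame(
    {"predicted": [32.0, 8.0, 6.0], "actual": [25, 12, 5]},
    index=["United States", "Germany", "Japan"],   # toy country index
)
resid = (df["predicted"] - df["actual"]).sort_values(ascending=False)
resid.head(15).plot(kind="bar")   # largest over-predictions first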
@@ -846,7 +846,7 @@ To begin, find the log-likelihood function and derive the gradient and
 Hessian.
 
 The `jax.scipy.stats` module `norm` contains the functions needed to
-compute the cmf and pmf of the normal distribution.
+compute the cdf and pdf of the normal distribution.
 ```
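
A quick illustration of the `norm` helpers the exercise points to (the input values are arbitrary):

```python
# jax.scipy.stats.norm provides the cdf and pdf of the standard normal,
# the ingredients of a Probit-style log-likelihood (the exercise's target
# model is assumed here, not stated in this hunk).
import jax.numpy as jnp
from jax.scipy.stats import norm

z = jnp.array([-1.0, 0.0, 1.0])
print(norm.cdf(z))   # Φ(z) ≈ [0.1587, 0.5, 0.8413]
print(norm.pdf(z))   # φ(z) ≈ [0.2420, 0.3989, 0.2420]
```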
 
 ```{solution-start} mle_ex1
