
Commit 5262f1e

Author: Christophe Pouzat (committed)
Commit message: 2 errors corrected: Fischer -> Fisher and r(tau) def.
1 parent: 4d43692

File tree: 2 files changed, +4 -4 lines

Lectures/Statistics/Pouzat_Lascon2018_Statistics_slides.org

Lines changed: 4 additions & 4 deletions
@@ -327,7 +327,7 @@ Example of simulated data with $b=10$, $\Delta=90$, $\tau=1$.
  - The log-likelihood is then a function of a single variable, $\tau$.
  - We will also proceed as if only two times had been used, $t_1$ and $t_2$, leading to observations $x_1$ and $x_2$ on the previous figure.
  - The log-likelihood is then: \[l(\tau) = x_1\, \log s(t_1,\tau) - s(t_1,\tau) + x_2\, \log s(t_2,\tau) - s(t_2,\tau) \, .\]
- - To make comparisons with subsequent simulations in the same setting, we will show the graph of: \[r(\tau) = l(\hat{\tau}) - l(\tau) \, ,\] where $\hat{\tau}$ is the location of the maximum of $l(\tau)$.
+ - To make comparisons with subsequent simulations in the same setting, we will show the graph of: \[r(\tau) = l(\tau) - l(\hat{\tau}) \, ,\] where $\hat{\tau}$ is the location of the maximum of $l(\tau)$.

  **
  #+ATTR_LATEX: :width 0.8\textwidth
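The corrected definition $r(\tau) = l(\tau) - l(\hat{\tau})$ makes $r$ non-positive with its maximum, 0, at $\hat{\tau}$. A minimal Python sketch of $l(\tau)$ and $r(\tau)$, assuming a mono-exponential intensity $s(t,\tau) = b + \Delta\,e^{-t/\tau}$ with the slide's $b=10$ and $\Delta=90$ (the specific form of $s$, the observation times, and the counts below are hypothetical illustrations, not the lecture's data):

```python
import math

def s(t, tau, b=10.0, delta=90.0):
    # Assumed intensity model: b + Delta * exp(-t / tau), with the
    # slide's values b = 10 and Delta = 90.
    return b + delta * math.exp(-t / tau)

def log_likelihood(tau, times, counts):
    """Poisson log-likelihood l(tau) = sum_i x_i log s(t_i,tau) - s(t_i,tau)."""
    return sum(x * math.log(s(t, tau)) - s(t, tau)
               for t, x in zip(times, counts))

def relative_log_likelihood(tau, tau_hat, times, counts):
    """r(tau) = l(tau) - l(tau_hat); always <= 0, equal to 0 at tau_hat."""
    return (log_likelihood(tau, times, counts)
            - log_likelihood(tau_hat, times, counts))

# Two hypothetical observation times t1, t2 with hypothetical counts x1, x2:
times, counts = [0.5, 2.0], [60, 22]
# Crude grid search for the maximizer tau_hat:
grid = [0.2 + 0.01 * i for i in range(300)]
tau_hat = max(grid, key=lambda tau: log_likelihood(tau, times, counts))
print(tau_hat, relative_log_likelihood(0.5, tau_hat, times, counts))
```

Plotting $r(\tau)$ over the grid reproduces the kind of figure the slide refers to.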
@@ -368,7 +368,7 @@ As the sample size increases:
  * The Maximum Likelihood Estimator :export:

  ** The MLE
- - In 1922 Fischer proposed to take as an estimator $\hat{\theta}$ the $\theta$ that maximizes $l(\theta)$.
+ - In 1922 Fisher proposed to take as an estimator $\hat{\theta}$ the $\theta$ that maximizes $l(\theta)$.
  - In that he was essentially following and generalizing Daniel Bernoulli, Lambert and Gauss.
  - But he went much further, claiming that when the maximum was a smooth maximum, obtained by taking the derivative / gradient with respect to $\theta$ and setting it equal to 0, then:
  + The accuracy (standard error of the estimate) can be found to a good approximation from the curvature of $l(\theta)$ at its maximum.
@@ -383,7 +383,7 @@ The [[https://en.wikipedia.org/wiki/Maximum_likelihood_estimation][Wikipedia]] p

  ** Some remarks
  - The MLE is just the value of the parameter that makes the observations most probable /a posteriori/.
- - Some technical precautions are required in order to fulfill all of Fischer's promises; they are referred to as "the appropriate smoothness conditions" in the literature.
+ - Some technical precautions are required in order to fulfill all of Fisher's promises; they are referred to as "the appropriate smoothness conditions" in the literature.
  - They are cumbersome to state and a real pain to check (that's why no one checks them)!
  - My recommendation is to go ahead and, after the MLE $\hat{\theta}$ has been found, do a parametric bootstrap:
  + take $\hat{\theta}$ as the "true" value and simulate 500 to 1000 samples from $\mathcal{M}(\hat{\theta})$,
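The parametric-bootstrap recipe in the slide can be sketched as follows. This is a sketch under assumptions, not the lecture's code: it uses NumPy, the mono-exponential intensity $s(t,\tau)=b+\Delta e^{-t/\tau}$ with the slide's $b=10$, $\Delta=90$, and hypothetical observation times and counts.

```python
import numpy as np

rng = np.random.default_rng(0)

def s(t, tau, b=10.0, delta=90.0):
    # Assumed intensity model: b + Delta * exp(-t / tau)
    return b + delta * np.exp(-t / tau)

def mle(times, counts, grid):
    # Grid-search MLE of tau for Poisson counts with mean s(t, tau)
    ll = [np.sum(counts * np.log(s(times, tau)) - s(times, tau))
          for tau in grid]
    return grid[int(np.argmax(ll))]

times = np.array([0.5, 2.0])            # hypothetical observation times
grid = np.linspace(0.2, 3.0, 281)

# "Observed" data (hypothetical counts) and its MLE tau_hat:
counts = np.array([60, 22])
tau_hat = mle(times, counts, grid)

# Parametric bootstrap: take tau_hat as the "true" value, simulate
# replicate samples from the fitted model, re-fit each, and look at
# the spread of the re-fitted estimates.
boot = np.array([mle(times, rng.poisson(s(times, tau_hat)), grid)
                 for _ in range(500)])
print("tau_hat:", tau_hat, "bootstrap SE:", boot.std())
```

The standard deviation of the bootstrap replicates gives the standard-error estimate the slide's recipe is after, without checking any smoothness conditions by hand.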
@@ -396,7 +396,7 @@ The [[https://en.wikipedia.org/wiki/Maximum_likelihood_estimation][Wikipedia]] p

  ** Functions associated with the Likelihood
  - The /score function/ is defined by: $S(\theta) \equiv \frac{\partial{}l(\theta)}{\partial{}\theta}$.
  - The /observed information/ is defined by: $\mathcal{J}(\theta) \equiv - \nabla \, \nabla^{T}l(\theta)$.
- - The /Fischer information/ is defined by: $\mathcal{I}(\theta) \equiv \mathtt{E} \mathcal{J}(\theta) / n$.
+ - The /Fisher information/ is defined by: $\mathcal{I}(\theta) \equiv \mathtt{E} \mathcal{J}(\theta) / n$.

  ** Asymptotic properties of the MLE
  Under the "appropriate smoothness conditions" (see the [[https://en.wikipedia.org/wiki/Maximum_likelihood_estimation][Wikipedia]] page for a full statement), we have:
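Fisher's curvature claim, combined with the observed-information definition above, gives the familiar approximation $\mathrm{se}(\hat{\tau}) \approx 1/\sqrt{\mathcal{J}(\hat{\tau})}$ in the one-parameter case. A sketch with a finite-difference second derivative, again assuming the hypothetical mono-exponential model $s(t,\tau)=b+\Delta e^{-t/\tau}$ ($b=10$, $\Delta=90$ from the earlier slide) and made-up data:

```python
import math

def s(t, tau, b=10.0, delta=90.0):
    # Assumed intensity model: b + Delta * exp(-t / tau)
    return b + delta * math.exp(-t / tau)

def l(tau, times, counts):
    # Poisson log-likelihood from the earlier slide
    return sum(x * math.log(s(t, tau)) - s(t, tau)
               for t, x in zip(times, counts))

def observed_information(tau, times, counts, h=1e-3):
    # J(tau) = -l''(tau), here via a central finite difference
    return -(l(tau + h, times, counts) - 2 * l(tau, times, counts)
             + l(tau - h, times, counts)) / h**2

times, counts = [0.5, 2.0], [60, 22]   # hypothetical data
grid = [0.2 + 0.005 * i for i in range(600)]
tau_hat = max(grid, key=lambda tau: l(tau, times, counts))
se = 1.0 / math.sqrt(observed_information(tau_hat, times, counts))
print("tau_hat:", tau_hat, "curvature-based SE:", se)
```

At an interior maximum the second derivative is negative, so $\mathcal{J}(\hat{\tau})>0$ and the square root is well defined; this is exactly where the "appropriate smoothness conditions" below come in.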
Binary file changed (-19 Bytes), not shown.
