Lectures/Statistics/Pouzat_Lascon2018_Statistics_slides.org
Example of simulated data with $b=10$, $\Delta=90$, $\tau=1$.
- The log-likelihood is then a function of a single variable $\tau$.
- We will also proceed as if only two times had been used, $t_1$ and $t_2$, leading to the observations $x_1$ and $x_2$ in the previous figure.
- The log-likelihood is then: \[l(\tau) = x_1\, \log s(t_1,\tau) - s(t_1,\tau) + x_2\, \log s(t_2,\tau) - s(t_2,\tau) \, .\]
- To allow comparison with subsequent simulations in the same setting we will show the graph of: \[r(\tau) = l(\tau) - l(\hat{\tau}) \, ,\] where $\hat{\tau}$ is the location of the maximum of $l(\tau)$.
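The two-observation log-likelihood and the relative log-likelihood $r(\tau)$ can be sketched numerically. The model form $s(t,\tau) = b + \Delta\, e^{-t/\tau}$, the two observation times, and the two counts below are illustrative assumptions, chosen to be consistent with the simulated-data example ($b=10$, $\Delta=90$, $\tau=1$):

#+BEGIN_SRC python
import math

# Illustrative assumptions: decaying-rate model s(t, tau) = b + Delta*exp(-t/tau)
# with b = 10 and Delta = 90, observed at two hypothetical times t1 and t2.
b, Delta = 10.0, 90.0
t1, t2 = 0.5, 2.0
x1, x2 = 65, 22          # hypothetical counts, roughly matching tau = 1

def s(t, tau):
    return b + Delta * math.exp(-t / tau)

def l(tau):
    # Poisson log-likelihood of (x1, x2), additive constants dropped
    return (x1 * math.log(s(t1, tau)) - s(t1, tau)
            + x2 * math.log(s(t2, tau)) - s(t2, tau))

# r(tau) = l(tau) - l(tau_hat), with tau_hat located by a simple grid search
grid = [0.5 + 0.005 * i for i in range(301)]     # tau in [0.5, 2.0]
l_hat = max(l(u) for u in grid)
r = [l(u) - l_hat for u in grid]                 # r(tau) <= 0 everywhere
#+END_SRC

Because $\hat{\tau}$ is taken over the same grid, $r$ is non-positive and reaches $0$ exactly at the grid maximizer, which is the shape shown on the slide's figure.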
**
#+ATTR_LATEX: :width 0.8\textwidth
* The Maximum Likelihood Estimator :export:
** The MLE
- In 1922 Fisher proposed to take as an estimator $\hat{\theta}$ the $\theta$ that maximizes $l(\theta)$.
- In this he was essentially following and generalizing Daniel Bernoulli, Lambert, and Gauss.
- But he went much further, claiming that when the maximum is a smooth maximum, obtained by taking the derivative / gradient with respect to $\theta$ and setting it equal to 0, then:
  + The accuracy (standard error of the estimate) can be found to a good approximation from the curvature of $l(\theta)$ at its maximum.
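This curvature claim can be checked numerically on a model where the answer is known in closed form. The model, data, and step size below are illustrative assumptions, not the slides' running example: for $n$ i.i.d. Poisson counts with rate $\theta$, the curvature-based standard error should reproduce $\sqrt{\hat{\theta}/n}$.

#+BEGIN_SRC python
import math

# Illustrative sketch (assumed model and data): n i.i.d. Poisson counts
# with rate theta; the MLE is the sample mean.
xs = [3, 7, 4, 6, 5, 5, 4, 6]
n, sx = len(xs), sum(xs)

def l(theta):
    # Poisson log-likelihood, additive constants dropped
    return sx * math.log(theta) - n * theta

theta_hat = sx / n                  # Poisson-rate MLE
h = 1e-4                            # central-difference step
curvature = (l(theta_hat + h) - 2 * l(theta_hat) + l(theta_hat - h)) / h**2
se = 1.0 / math.sqrt(-curvature)    # standard error from the curvature
# for this model the analytic answer is sqrt(theta_hat / n)
#+END_SRC

The numeric second derivative of $l$ at $\hat{\theta}$ recovers the analytic standard error to several decimal places, which is the sense in which the curvature "can be found to a good approximation".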
** Some remarks
- The MLE is just the value of the parameter that makes the observations most probable /a posteriori/.
- Some technical precautions are required in order to fulfill all of Fisher's promises; they are referred to as "the appropriate smoothness conditions" in the literature.
- They are heavy to state and a real pain to check (that's why no one checks them)!
- My recommendation is to go ahead and, after the MLE $\hat{\theta}$ has been found, do a parametric bootstrap:
  + take $\hat{\theta}$ as the "true" value and simulate 500 to 1000 samples from $\mathcal{M}(\hat{\theta})$,
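This parametric bootstrap can be sketched in a few lines. The model $\mathcal{M}(\theta)$ below, i.i.d. Poisson counts with rate $\theta$, and the data are illustrative assumptions standing in for whatever model one actually fitted:

#+BEGIN_SRC python
import math
import random
import statistics

# Minimal sketch of the recommended parametric bootstrap, for an assumed
# model M(theta): i.i.d. Poisson counts with rate theta.
random.seed(0)
data = [3, 7, 4, 6, 5, 5, 4, 6]      # hypothetical observed counts
theta_hat = sum(data) / len(data)    # Poisson-rate MLE = sample mean

def rpois(lam):
    # Poisson sampler (Knuth's multiplication method; fine for small rates)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

boot = []
for _ in range(1000):                         # simulate from M(theta_hat) ...
    sim = [rpois(theta_hat) for _ in data]
    boot.append(sum(sim) / len(sim))          # ... and re-estimate on each sample

se_boot = statistics.stdev(boot)   # bootstrap standard error of theta_hat
#+END_SRC

For this model the bootstrap standard error should land close to the analytic $\sqrt{\hat{\theta}/n}$, giving a quick sanity check of the whole procedure; with a real model one simply swaps in its simulator and its fitting routine.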
** Functions associated with the Likelihood
- The /score function/ is defined by: $S(\theta) \equiv \frac{\partial{}l(\theta)}{\partial{}\theta}$.
- The /observed information/ is defined by: $\mathcal{J}(\theta) \equiv - \nabla \, \nabla^{T}l(\theta)$.
- the /Fisher information/ is defined by: $\mathcal{I}(\theta) \equiv \mathtt{E} \mathcal{J}(\theta) / n$.
** Asymptotic properties of the MLE
Under the "appropriate smoothness conditions" (see the [[https://en.wikipedia.org/wiki/Maximum_likelihood_estimation][Wikipedia]] page for a full statement), we have: