Commit 8426638

DOC: Correct some typos regarding a/an usage (#190)

* DOC: a MCMC -> an MCMC
* DOC: a ESS -> an ESS
* DOC: Remove an extra "a" in the sentence

1 parent 85e5797

3 files changed: +3 −3 lines changed

Chapters/DataTree.qmd — 1 addition, 1 deletion

@@ -47,7 +47,7 @@ This is an HTML representation of a DataTree, so if you are reading this from a

  An important concept is that of `dimensions` and `coordinates`. If your data was geographical data, like things related to maps, then `dimensions` would be like latitude and longitude, and `coordinates` would be the actual values of latitude and longitude. The xarray documentation is full of examples related to maps. But the idea is very general and applies to any kind of data.

- Let's see the dimensions and coordinates for the `posterior` in our `dt` object. We can see 3 dimensions `chain`, `draw`, and `school`. As usual the posterior group of a DataTree object will be generated from a MCMC sampler. The `chain` dimension is used to index the different chains of the MCMC sampler, the `draw` dimension is used to index the different samples generated by the MCMC sampler. The `chain` and `draw` dimensions are ubiquitous, you will see them essentially in any DataTree object when working with ArviZ. The coordinates for `chain` are the integers `[0, 1, 2, 3]` and for `draw` are the integers `[0, 1, 2, ..., 499]`. Then, we also have the `school` dimension. This problem-specific, we have it here because the model from which the posterior was generated has a parameter conditional on school. The coordinates for `school` are the names of the schools, if you click on the {{< fa database >}} symbol by the `school` coordinate, you will be able to see the names of each school. They are: `['Choate', 'Deerfield', 'Phillips Andover', 'Phillips Exeter', 'Hotchkiss', 'Lawrenceville', "St. Paul's", 'Mt. Hermon']`
+ Let's see the dimensions and coordinates for the `posterior` in our `dt` object. We can see 3 dimensions `chain`, `draw`, and `school`. As usual the posterior group of a DataTree object will be generated from an MCMC sampler. The `chain` dimension is used to index the different chains of the MCMC sampler, the `draw` dimension is used to index the different samples generated by the MCMC sampler. The `chain` and `draw` dimensions are ubiquitous, you will see them essentially in any DataTree object when working with ArviZ. The coordinates for `chain` are the integers `[0, 1, 2, 3]` and for `draw` are the integers `[0, 1, 2, ..., 499]`. Then, we also have the `school` dimension. This problem-specific, we have it here because the model from which the posterior was generated has a parameter conditional on school. The coordinates for `school` are the names of the schools, if you click on the {{< fa database >}} symbol by the `school` coordinate, you will be able to see the names of each school. They are: `['Choate', 'Deerfield', 'Phillips Andover', 'Phillips Exeter', 'Hotchkiss', 'Lawrenceville', "St. Paul's", 'Mt. Hermon']`

  ### Get the dataset corresponding to a single group
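The changed passage describes `chain`, `draw`, and `school` as named dimensions with labeled coordinates. A minimal numpy-only sketch of that idea (the array, seed, and `schools` labels are illustrative; the real posterior lives in an xarray-backed DataTree, which does the label-to-index mapping for you):

```python
import numpy as np

# Toy stand-in for the `posterior` group: a (chain, draw, school) array.
# Dimensions are the named axes; coordinates are the labels along each axis.
schools = ["Choate", "Deerfield", "Phillips Andover", "Phillips Exeter",
           "Hotchkiss", "Lawrenceville", "St. Paul's", "Mt. Hermon"]
rng = np.random.default_rng(0)
theta = rng.normal(size=(4, 500, len(schools)))  # 4 chains, 500 draws

chain_coords = np.arange(4)   # coordinates of the `chain` dimension
draw_coords = np.arange(500)  # coordinates of the `draw` dimension

# Select every draw of chain 0 for "Deerfield" by mapping the label
# back to its positional index by hand.
deerfield = theta[0, :, schools.index("Deerfield")]
print(theta.shape, deerfield.shape)  # (4, 500, 8) (500,)
```

With xarray the same selection is done by name, e.g. `sel(chain=0, school="Deerfield")`, which is why labeled coordinates matter.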

Chapters/MCMC_diagnostics.qmd — 1 addition, 1 deletion

@@ -285,7 +285,7 @@ azp.ess(sample)

  One way to use the ESS is as a minimum requirement for trustworthy MCMC samples. It is recommended the ESS to be greater than 100 per chain. That is, for 4 chains we want a minimum of 400 effective samples.

  ::: {.callout-note}
- The ESS can also be used as a metric of the efficiency of MCMC sampling methods. For instance, we may want to measure the ESS per sample (ESS/n), a sampler that generates a ESS/n closer to 1 is more efficient than a sampler that generates values closer to 0. Other common metrics are the ESS per second, and the ESS per likelihood evaluation.
+ The ESS can also be used as a metric of the efficiency of MCMC sampling methods. For instance, we may want to measure the ESS per sample (ESS/n), a sampler that generates an ESS/n closer to 1 is more efficient than a sampler that generates values closer to 0. Other common metrics are the ESS per second, and the ESS per likelihood evaluation.
  :::

  We see that `azp.summary(⋅)` returns two ESS values, `ess_bulk` and `ess_tail`. This is because different regions of the parameter space may have different ESS values since not all regions are sampled with the same efficiency. Intuitively, one may think that when sampling a distribution like a Gaussian it is easier to obtain better sample quality around the mean than around the tails, simply because we have more samples from that region. For some models, it could be the other way around, but the take-home message remains, not all regions are necessarily sampled with the same efficiency
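The ESS/n efficiency metric mentioned in the callout can be illustrated with a rough single-chain estimator (this simplified truncated-autocorrelation formula is a sketch for intuition, not the bulk/tail estimators ArviZ actually implements; the AR(1) chain and seed are made up for the demo):

```python
import numpy as np

def ess_simple(x):
    """Rough single-chain ESS: n / (1 + 2 * sum of positive autocorrelations)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x = x - x.mean()
    acov = np.correlate(x, x, mode="full")[n - 1:] / n
    rho = acov / acov[0]
    tau = 1.0
    for t in range(1, n):
        if rho[t] < 0:       # truncate at the first negative autocorrelation
            break
        tau += 2.0 * rho[t]
    return n / tau

rng = np.random.default_rng(42)
iid = rng.normal(size=2000)          # independent draws: ESS/n near 1
ar = np.empty(2000)                  # sticky AR(1) chain: ESS/n well below 1
ar[0] = rng.normal()
for i in range(1, 2000):
    ar[i] = 0.9 * ar[i - 1] + rng.normal()

print("iid ESS/n:", ess_simple(iid) / iid.size)
print("AR(1) ESS/n:", ess_simple(ar) / ar.size)
```

The correlated chain yields far fewer effective samples per draw, which is exactly the efficiency comparison the callout describes.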

Chapters/Sensitivity_checks.qmd — 1 addition, 1 deletion

@@ -421,7 +421,7 @@ Sensitivity diagnostic values are given for both prior and likelihood sensitivit

  **These diagnostic messages do not necessarily indicate problems with the model**. They are informative messages that describe the interplay between the chosen prior and likelihood. If your prior is meant to be informative, influence on the posterior is desired and prior-data conflict may not be an issue. However, if you did not put much effort into choosing the priors, these messages can let you know if you should be more deliberate in your prior specification.

  * **Strong prior / weak likelihood**. This can occur when:
-   * The prior is completely dominating the likelihood such that changing the likelihood strength has little to no impact on the posterior. The prior may be extremely informative and a using a weaker prior may remove this domination.
+   * The prior is completely dominating the likelihood such that changing the likelihood strength has little to no impact on the posterior. The prior may be extremely informative and using a weaker prior may remove this domination.

    * The likelihood is uninformative and no information is gained by increasing the strength of the likelihood. The prior will always have an effect in this case.
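The "strong prior / weak likelihood" case can be made concrete with a conjugate Normal-Normal model where the likelihood is power-scaled by a factor alpha, the same kind of perturbation sensitivity checks apply (all numbers here are illustrative, and this closed-form sketch is not the package's actual power-scaling machinery):

```python
import numpy as np

def post_mean(mu0, s0, ybar, n, sigma, alpha):
    """Posterior mean of a conjugate Normal model (known sigma) with the
    likelihood power-scaled by alpha; alpha=1 is the unscaled model."""
    prec0 = 1.0 / s0**2          # prior precision
    precl = alpha * n / sigma**2 # (scaled) likelihood precision
    return (prec0 * mu0 + precl * ybar) / (prec0 + precl)

# Strong prior (tiny s0) vs weak likelihood (few, noisy observations):
# scaling the likelihood up or down barely moves the posterior mean.
for alpha in (0.8, 1.0, 1.2):
    print(alpha, post_mean(mu0=0.0, s0=0.1, ybar=5.0, n=3, sigma=10.0, alpha=alpha))
```

Despite the data mean being 5, the posterior mean stays pinned near the prior mean 0 for every alpha, which is the "changing the likelihood strength has little to no impact" pattern the bullet describes.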
