
Commit 50070e2

Author: Chris Fonnesbeck
Merge pull request #1311 from pymc-devs/ppc
DOCS: Updating the ppc file - improving readability
2 parents a58ff5d + a46d013 commit 50070e2

File tree

1 file changed (+18 −1)


docs/source/notebooks/posterior_predictive.ipynb

Lines changed: 18 additions & 1 deletion
@@ -6,7 +6,10 @@
  "source": [
  "# Posterior Predictive Checks\n",
  "\n",
- "PPCs are a great way to validate a model. The idea is to generate data sets from the model using parameter settings from draws from the posterior.\n",
+ "PPCs are a great way to validate a model. The idea is to generate data sets from the model using parameter settings from draws from the posterior. \n",
+ "\n",
+ "To elaborate slightly: posterior predictive checks (PPCs) analyze the degree to which data generated from the model deviate from data generated from the true distribution. Often you will want to know whether, for example, your posterior distribution approximates the underlying distribution. The visual aspect of this model evaluation method also makes it a good sense check, and helps when explaining your model to others and inviting criticism. \n",
+ "\n",
  "\n",
  "`PyMC3` has random number support thanks to [Mark Wibrow](https://github.com/mwibrow) as implemented in [PR784](https://github.com/pymc-devs/pymc3/pull/784).\n",
  "\n",
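The idea described in the added paragraph can be sketched without PyMC3 at all: draw parameter settings from the posterior, simulate a replicated data set per draw, and compare a test statistic against the observed data. A minimal NumPy illustration, where the "posterior" draws are faked near the true values purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: 100 draws from an unknown Gaussian.
observed = rng.normal(loc=1.0, scale=2.0, size=100)

# Stand-ins for posterior draws of (mu, sigma). In practice these
# would come from MCMC sampling; here we fake them for illustration.
mu_post = rng.normal(1.0, 0.2, size=500)
sigma_post = np.abs(rng.normal(2.0, 0.2, size=500))

# Posterior predictive: one replicated data set per posterior draw.
ppc = np.array([rng.normal(m, s, size=observed.size)
                for m, s in zip(mu_post, sigma_post)])

# Compare a test statistic: how often is the replicated mean at least
# as large as the observed mean? Values near 0 or 1 signal misfit.
p_value = np.mean(ppc.mean(axis=1) >= observed.mean())
print(ppc.shape, p_value)
```

This is only a sketch of the concept; the notebook itself uses PyMC3's random-number support to draw the replicated data sets from the fitted model.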
@@ -184,6 +187,20 @@
  "ax.set(title='Posterior predictive of the mean', xlabel='mean(x)', ylabel='Frequency');"
  ]
  },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Comparison between PPCs and other model evaluation methods\n",
+ "An excellent introduction to this is given in the [Edward documentation](http://edwardlib.org/tut_PPC), and since I can't put it any better, I'll simply quote it: \n",
+ "\"PPCs are an excellent tool for revising models, simplifying or expanding the current model as one examines how well it fits the data. They are inspired by prior checks and classical hypothesis testing, under the philosophy that models should be criticized under the frequentist perspective of large sample assessment.\n",
+ "\n",
+ "PPCs can also be applied to tasks such as hypothesis testing, model comparison, model selection, and model averaging. It's important to note that while they can be applied as a form of Bayesian hypothesis testing, hypothesis testing is generally not recommended: binary decision making from a single test is not as common a use case as one might believe. We recommend performing many PPCs to get a holistic understanding of the model fit.\" \n",
+ "\n",
+ "An important lesson for anyone using probabilistic programming is not to overfit your understanding or your criticism of models to a single metric. Model evaluation is a skill that can be honed with practice. \n",
+ "\n"
+ ]
+ },
  {
  "cell_type": "markdown",
  "metadata": {},
