Commit 33d641d

JoKeyser authored and twiecki committed
Fix two typos.
1 parent 14a10b7 commit 33d641d

File tree

1 file changed: +2 −2 lines changed

docs/source/learn/core_notebooks/pymc_overview.ipynb

Lines changed: 2 additions & 2 deletions
@@ -3342,7 +3342,7 @@
 "\n",
 "This is a more realistic problem than the first regression example, as we are now dealing with a **multivariate regression** model. However, while there are several potential predictors in the LSL-DR dataset, it is difficult *a priori* to determine which ones are relevant for constructing an effective statistical model. There are a number of approaches for conducting variable selection, but a popular automated method is *regularization* (a form of penalization), whereby ineffective covariates are shrunk towards zero if they do not contribute to predicting outcomes.\n",
 "\n",
-"You may have heard of regularization from machine learning or classical statistics applications, where methods like the lasso or ridge regression shrink parameters towards zero by applying a penalty to the size of the regression parameters. In a Bayesian context, we apply an appropriate prior distribution to the regression coefficients. One such prior is the *hierarchical regularized horseshoe*, which uses two regularization strategies, one global and a set of local local parameters, one for each coefficient. The key to making this work is by selecting a long-tailed distribution as the shrinkage priors, which allows some to be nonzero, while pushing the rest towards zero.\n",
+"You may have heard of regularization from machine learning or classical statistics applications, where methods like the lasso or ridge regression shrink parameters towards zero by applying a penalty to the size of the regression parameters. In a Bayesian context, we apply an appropriate prior distribution to the regression coefficients. One such prior is the *hierarchical regularized horseshoe*, which uses two regularization strategies, one global and a set of local parameters, one for each coefficient. The key to making this work is by selecting a long-tailed distribution as the shrinkage priors, which allows some to be nonzero, while pushing the rest towards zero.\n",
 "\n",
 "The horseshoe prior for each regression coefficient $\\beta_i$ looks like this:\n",
 "\n",
@@ -3396,7 +3396,7 @@
 "source": [
 "### Model Specification\n",
 "\n",
-"Specifying the model in PyMC mirrors its statistical specification. This model employs a couple of new distributions: the {class}`~pymc.HalfStudentT` distribution for the $\\tau$ and $\\lambda$ priors, and the `InverseGamma` distribution for the $c2$ variable.\n",
+"Specifying the model in PyMC mirrors its statistical specification. This model employs a couple of new distributions: the {class}`~pymc.HalfStudentT` distribution for the $\\tau$ and $\\lambda$ priors, and the `InverseGamma` distribution for the $c^2$ variable.\n",
 "\n",
 "In PyMC, variables with purely positive priors like {class}`~pymc.InverseGamma` are transformed with a log transform. This makes sampling more robust. Behind the scenes, a variable in the unconstrained space (named `<variable-name>_log`) is added to the model for sampling. Variables with priors that constrain them on two sides, like {class}`~pymc.Beta` or {class}`~pymc.Uniform`, are also transformed to be unconstrained but with a log odds transform.\n",
 "\n",
