Commit a7435fb: Spellcheck all notebooks (pymc-devs#492)
Parent: 2c721e6

26 files changed, +36 −36 lines changed
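The fixes in this commit are simple word-level substitutions across notebook files. A minimal sketch of how such a pass could be scripted over `.ipynb` files follows; the actual tool behind the commit is not stated here (a dictionary-based checker such as codespell is the usual choice), and the `FIXES` table below only lists typos visible in this diff.

```python
import json
import re

# Typos visible in this diff; a real pass would use a full dictionary
# rather than a hand-made table.
FIXES = {
    "workins": "workings",
    "overal": "overall",
    "informatino": "information",
    "disadvanteged": "disadvantaged",
}
PATTERN = re.compile(r"\b(" + "|".join(FIXES) + r")\b")


def fix_text(text):
    # Word-boundary matching so an already-correct "overall" is not
    # re-expanded into "overalll".
    return PATTERN.sub(lambda m: FIXES[m.group(1)], text)


def fix_notebook(path):
    # Rewrite every cell's source lines in place.
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        cell["source"] = [fix_text(line) for line in cell["source"]]
    with open(path, "w", encoding="utf-8") as f:
        json.dump(nb, f, indent=1)
```

Note that a plain `str.replace` would not be safe here: replacing `overal` with `overall` inside the already-correct word `overall` would corrupt it, which is why the sketch anchors each typo at word boundaries.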

examples/case_studies/BART_introduction.ipynb (1 addition, 1 deletion)

````diff
@@ -279,7 +279,7 @@
 "id": "8633b7b4",
 "metadata": {},
 "source": [
-"The next figure shows 3 trees. As we can see these are very simple function and definitely not very good approximators by themselves. Inspecting individuals trees is generally not necessary when working with BART, we are showing them just so we can gain further intuition on the inner workins of BART."
+"The next figure shows 3 trees. As we can see these are very simple function and definitely not very good approximators by themselves. Inspecting individuals trees is generally not necessary when working with BART, we are showing them just so we can gain further intuition on the inner workings of BART."
 ]
 },
 {
````

examples/case_studies/BART_introduction.myst.md (1 addition, 1 deletion)

````diff
@@ -117,7 +117,7 @@ The following figure shows two samples of $\mu$ from the posterior.
 plt.step(x_data, idata_coal.posterior["μ"].sel(chain=0, draw=[3, 10]).T);
 ```
 
-The next figure shows 3 trees. As we can see these are very simple function and definitely not very good approximators by themselves. Inspecting individuals trees is generally not necessary when working with BART, we are showing them just so we can gain further intuition on the inner workins of BART.
+The next figure shows 3 trees. As we can see these are very simple function and definitely not very good approximators by themselves. Inspecting individuals trees is generally not necessary when working with BART, we are showing them just so we can gain further intuition on the inner workings of BART.
 
 ```{code-cell} ipython3
 bart_trees = μ_.owner.op.all_trees
````

examples/case_studies/binning.ipynb (1 addition, 1 deletion)

````diff
@@ -758,7 +758,7 @@
 "id": "71c3cf64",
 "metadata": {},
 "source": [
-"Pretty good! And we can access the posterior mean estimates (stored as [xarray](http://xarray.pydata.org/en/stable/index.html) types) as below. The MCMC samples arrive back in a 2D matrix with one dimension for the MCMC chain (`chain`), and one for the sample number (`draw`). We can calculate the overal posterior average with `.mean(dim=[\"draw\", \"chain\"])`."
+"Pretty good! And we can access the posterior mean estimates (stored as [xarray](http://xarray.pydata.org/en/stable/index.html) types) as below. The MCMC samples arrive back in a 2D matrix with one dimension for the MCMC chain (`chain`), and one for the sample number (`draw`). We can calculate the overall posterior average with `.mean(dim=[\"draw\", \"chain\"])`."
 ]
 },
 {
````

examples/case_studies/binning.myst.md (1 addition, 1 deletion)

````diff
@@ -299,7 +299,7 @@ Recall that we used `mu = -2` and `sigma = 2` to generate the data.
 az.plot_posterior(trace1, var_names=["mu", "sigma"], ref_val=[true_mu, true_sigma]);
 ```
 
-Pretty good! And we can access the posterior mean estimates (stored as [xarray](http://xarray.pydata.org/en/stable/index.html) types) as below. The MCMC samples arrive back in a 2D matrix with one dimension for the MCMC chain (`chain`), and one for the sample number (`draw`). We can calculate the overal posterior average with `.mean(dim=["draw", "chain"])`.
+Pretty good! And we can access the posterior mean estimates (stored as [xarray](http://xarray.pydata.org/en/stable/index.html) types) as below. The MCMC samples arrive back in a 2D matrix with one dimension for the MCMC chain (`chain`), and one for the sample number (`draw`). We can calculate the overall posterior average with `.mean(dim=["draw", "chain"])`.
 
 ```{code-cell} ipython3
 :tags: []
````
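The `.mean(dim=[...])` idiom in the passage above works on any xarray object, not only a PyMC trace. A small self-contained sketch (synthetic samples standing in for the notebook's actual `trace1`, centred on its true value `mu = -2`):

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for a posterior variable: 4 chains x 1000 draws.
rng = np.random.default_rng(0)
mu_samples = xr.DataArray(
    rng.normal(loc=-2.0, scale=0.1, size=(4, 1000)),
    dims=["chain", "draw"],
    name="mu",
)

per_chain = mu_samples.mean(dim="draw")            # one mean per chain
overall = mu_samples.mean(dim=["draw", "chain"])   # scalar posterior mean
```

Reducing over `draw` alone keeps the `chain` dimension (useful for eyeballing between-chain agreement), while reducing over both dimensions collapses everything into the overall posterior average.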

examples/case_studies/conditional-autoregressive-model.ipynb (2 additions, 2 deletions)

````diff
@@ -286,11 +286,11 @@
 "        self.mode = 0.0\n",
 "\n",
 "    def get_mu(self, x):\n",
-"        def weigth_mu(w, a):\n",
+"        def weight_mu(w, a):\n",
 "            a1 = tt.cast(a, \"int32\")\n",
 "            return tt.sum(w * x[a1]) / tt.sum(w)\n",
 "\n",
-"        mu_w, _ = scan(fn=weigth_mu, sequences=[self.w, self.a])\n",
+"        mu_w, _ = scan(fn=weight_mu, sequences=[self.w, self.a])\n",
 "\n",
 "        return mu_w\n",
 "\n",
````

examples/case_studies/conditional-autoregressive-model.myst.md (2 additions, 2 deletions)

````diff
@@ -220,11 +220,11 @@ class CAR(distribution.Continuous):
         self.mode = 0.0
 
     def get_mu(self, x):
-        def weigth_mu(w, a):
+        def weight_mu(w, a):
             a1 = tt.cast(a, "int32")
             return tt.sum(w * x[a1]) / tt.sum(w)
 
-        mu_w, _ = scan(fn=weigth_mu, sequences=[self.w, self.a])
+        mu_w, _ = scan(fn=weight_mu, sequences=[self.w, self.a])
 
         return mu_w
 
````
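The renamed helper computes, for each areal unit, the weighted average of its neighbours' values, which is the conditional mean used by the CAR term. A NumPy sketch of the same computation (toy data invented here for illustration; the original runs inside Theano's `scan`):

```python
import numpy as np

def weight_mu(w, a, x):
    # Weighted mean of neighbour values x[a]: sum(w * x[a]) / sum(w),
    # mirroring the Theano helper fixed in this commit.
    w = np.asarray(w, dtype=float)
    a = np.asarray(a, dtype=int)
    return np.sum(w * x[a]) / np.sum(w)

# Toy data: values at 4 sites; one site's neighbours are sites 0 and 2,
# with equal adjacency weights.
x = np.array([1.0, 2.0, 3.0, 4.0])
mu1 = weight_mu([1.0, 1.0], [0, 2], x)  # (1.0 + 3.0) / 2 = 2.0
```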

examples/case_studies/hierarchical_partial_pooling.ipynb (1 addition, 1 deletion)

````diff
@@ -42,7 +42,7 @@
 "\n",
 "It may be possible to cluster groups of \"similar\" players, and estimate group averages, but using a hierarchical modeling approach is a natural way of sharing information that does not involve identifying *ad hoc* clusters.\n",
 "\n",
-"The idea of hierarchical partial pooling is to model the global performance, and use that estimate to parameterize a population of players that accounts for differences among the players' performances. This tradeoff between global and individual performance will be automatically tuned by the model. Also, uncertainty due to different number of at bats for each player (*i.e.* informatino) will be automatically accounted for, by shrinking those estimates closer to the global mean.\n",
+"The idea of hierarchical partial pooling is to model the global performance, and use that estimate to parameterize a population of players that accounts for differences among the players' performances. This tradeoff between global and individual performance will be automatically tuned by the model. Also, uncertainty due to different number of at bats for each player (*i.e.* information) will be automatically accounted for, by shrinking those estimates closer to the global mean.\n",
 "\n",
 "For far more in-depth discussion please refer to Stan [tutorial](http://mc-stan.org/documentation/case-studies/pool-binary-trials.html) {cite:p}`carpenter2016hierarchical` on the subject. The model and parameter values were taken from that example."
 ]
````

examples/case_studies/hierarchical_partial_pooling.myst.md (1 addition, 1 deletion)

````diff
@@ -45,7 +45,7 @@ Of course, neither approach is realistic. Clearly, all players aren't equally sk
 
 It may be possible to cluster groups of "similar" players, and estimate group averages, but using a hierarchical modeling approach is a natural way of sharing information that does not involve identifying *ad hoc* clusters.
 
-The idea of hierarchical partial pooling is to model the global performance, and use that estimate to parameterize a population of players that accounts for differences among the players' performances. This tradeoff between global and individual performance will be automatically tuned by the model. Also, uncertainty due to different number of at bats for each player (*i.e.* informatino) will be automatically accounted for, by shrinking those estimates closer to the global mean.
+The idea of hierarchical partial pooling is to model the global performance, and use that estimate to parameterize a population of players that accounts for differences among the players' performances. This tradeoff between global and individual performance will be automatically tuned by the model. Also, uncertainty due to different number of at bats for each player (*i.e.* information) will be automatically accounted for, by shrinking those estimates closer to the global mean.
 
 For far more in-depth discussion please refer to Stan [tutorial](http://mc-stan.org/documentation/case-studies/pool-binary-trials.html) {cite:p}`carpenter2016hierarchical` on the subject. The model and parameter values were taken from that example.
````

examples/case_studies/item_response_nba.ipynb (2 additions, 2 deletions)

````diff
@@ -248,7 +248,7 @@
 "output_type": "stream",
 "text": [
 "Number of observed plays: 46861\n",
-"Number of disadvanteged players: 770\n",
+"Number of disadvantaged players: 770\n",
 "Number of committing players: 789\n",
 "Global probability of a foul being called: 23.3%\n",
 "\n",
@@ -378,7 +378,7 @@
 "\n",
 "# Display of main dataframe with some statistics\n",
 "print(f\"Number of observed plays: {len(df)}\")\n",
-"print(f\"Number of disadvanteged players: {len(disadvantaged)}\")\n",
+"print(f\"Number of disadvantaged players: {len(disadvantaged)}\")\n",
 "print(f\"Number of committing players: {len(committing)}\")\n",
 "print(f\"Global probability of a foul being called: \" f\"{100*round(df.foul_called.mean(),3)}%\\n\\n\")\n",
 "df.head()"
````

examples/case_studies/item_response_nba.myst.md (1 addition, 1 deletion)

````diff
@@ -140,7 +140,7 @@ df.index.name = "play_id"
 
 # Display of main dataframe with some statistics
 print(f"Number of observed plays: {len(df)}")
-print(f"Number of disadvanteged players: {len(disadvantaged)}")
+print(f"Number of disadvantaged players: {len(disadvantaged)}")
 print(f"Number of committing players: {len(committing)}")
 print(f"Global probability of a foul being called: " f"{100*round(df.foul_called.mean(),3)}%\n\n")
 df.head()
````
