
Commit 1c88b69

fixed docs
Parent: 89bc2aa


19 files changed (+19, -41 lines)

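Most of the hunks below delete or reorder Quarto cell options such as `#| echo: false`, `#| eval: false`, and `#| output: false`. For orientation, here is a hedged sketch of such an executable cell as it appears in a `.qmd` file; the explanatory comments are illustrative and not part of any file in this commit:

```julia
#| output: false
# The `#| key: value` lines at the top of a `{julia}` cell are Quarto cell options:
# `echo: false` hides the cell's source in the rendered page, `eval: false` skips
# executing it, and `output: false` suppresses its printed output.
setprogress!(false)  # as in the hunks below: silence Turing's progress logging
```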

tutorials/00-introduction/index.qmd

Lines changed: 0 additions & 1 deletion

@@ -196,7 +196,6 @@ Now we can build our plot:
 
 <!-- ```{julia}
 #| echo=false
-#| output: true
 @assert isapprox(mean(chain, :p), 0.5; atol=0.1) "Estimated mean of parameter p: $(mean(chain, :p)) - not in [0.4, 0.6]!"
 ``` -->

tutorials/01-gaussian-mixture-model/index.qmd

Lines changed: 0 additions & 2 deletions

@@ -107,13 +107,11 @@ We use a `Gibbs` sampler that combines a [particle Gibbs](https://www.stats.ox.a
 We generate multiple chains in parallel using multi-threading.
 
 ```{julia}
-#| echo: false
 #| output: false
 setprogress!(false)
 ```
 
 ```{julia}
-#| output: false
 sampler = Gibbs(PG(100, :k), HMC(0.05, 10, :μ, :w))
 nsamples = 100
 nchains = 3
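The context above mentions drawing several chains in parallel with multi-threading; as a hedged sketch of how that setup is typically completed in Turing (the model constructor and data names here are assumptions, not taken from this diff):

```julia
# Minimal sketch, assuming a model constructor and data that this diff does not show.
using Turing

model = gaussian_mixture_model(x)  # hypothetical model instance
# One chain per thread, using the combined particle-Gibbs + HMC sampler above.
chains = sample(model, sampler, MCMCThreads(), nsamples, nchains)
```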

tutorials/02-logistic-regression/index.qmd

Lines changed: 0 additions & 1 deletion

@@ -121,7 +121,6 @@ end;
 Now we can run our sampler. This time we'll use [`NUTS`](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS) to sample from our posterior.
 
 ```{julia}
-#| echo: false
 #| output: false
 setprogress!(false)
 ```
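For completeness, the NUTS sampling call this cell prepares for usually looks like the sketch below; the model instance name and draw count are assumptions rather than values taken from this diff:

```julia
# Hedged sketch: `logreg_model` and the number of draws are assumed, not from this diff.
using Turing

chain = sample(logreg_model, NUTS(), 1_000)
```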

tutorials/03-bayesian-neural-network/index.qmd

Lines changed: 1 addition & 1 deletion

@@ -107,7 +107,6 @@ end;
 Inference can now be performed by calling `sample`. We use the `NUTS` Hamiltonian Monte Carlo sampler here.
 
 ```{julia}
-#| echo: false
 #| output: false
 setprogress!(false)
 ```
@@ -183,6 +182,7 @@ contour!(x1_range, x2_range, Z)
 
 Suppose we are interested in how the predictive power of our Bayesian neural network evolved between samples. In that case, the following graph displays an animation of the contour plot generated from the network weights in samples 1 to 1,000.
 
+
 ```{julia}
 # Number of iterations to plot.
 n_end = 500

tutorials/04-hidden-markov-model/index.qmd

Lines changed: 0 additions & 2 deletions

@@ -123,7 +123,6 @@ The parameter `s` is not a continuous variable. It is a vector of **integers**,
 Time to run our sampler.
 
 ```{julia}
-#| echo: false
 #| output: false
 setprogress!(false)
 ```
@@ -140,7 +139,6 @@ It's a bit easier to show how our model performed graphically.
 The code below generates an animation showing the graph of the data above, and the data our model generates in each sample.
 
 ```{julia}
-
 # Extract our m and s parameters from the chain.
 m_set = MCMCChains.group(chn, :m).value
 s_set = MCMCChains.group(chn, :s).value

tutorials/05-linear-regression/index.qmd

Lines changed: 0 additions & 6 deletions

@@ -11,7 +11,6 @@ This tutorial covers how to implement a linear regression model in Turing.
 We begin by importing all the necessary libraries.
 
 ```{julia}
-#| eval: false
 # Import Turing.
 using Turing
@@ -39,8 +38,6 @@ Random.seed!(0);
 ```
 
 ```{julia}
-#| eval: false
-#| echo: false
 #| output: false
 setprogress!(false)
 ```
@@ -52,7 +49,6 @@ We want to know if we can construct a Bayesian linear regression model to predic
 Let us take a look at the data we have.
 
 ```{julia}
-#| eval: false
 # Load the dataset.
 data = RDatasets.dataset("datasets", "mtcars")
@@ -61,7 +57,6 @@ first(data, 6)
 ```
 
 ```{julia}
-#| eval: false
 size(data)
 ```
 
@@ -118,7 +113,6 @@ We do not know that our coefficients are different from zero, and we don't know
 Lastly, each observation $y_i$ is distributed according to the calculated `mu` term given by $\alpha + \boldsymbol{\beta}^\mathsf{T}\boldsymbol{X_i}$.
 
 ```{julia}
-#| eval: false
 # Bayesian linear regression.
 @model function linear_regression(x, y)
     # Set variance prior.
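The last hunk cuts off just inside the model definition. Purely as a hedged sketch of how such a model can be written in Turing (the specific priors below are assumptions, not necessarily the tutorial's exact code):

```julia
using Turing
using LinearAlgebra

# Illustrative Bayesian linear regression; priors are assumptions, not the tutorial's code.
@model function linear_regression_sketch(x, y)
    # Variance prior.
    σ² ~ truncated(Normal(0, 100); lower=0)
    # Intercept and coefficient priors.
    intercept ~ Normal(0, sqrt(3))
    coefficients ~ MvNormal(zeros(size(x, 2)), 10.0 * I)
    # Each observation is normally distributed around mu = α + Xᵢᵀβ.
    mu = intercept .+ x * coefficients
    return y ~ MvNormal(mu, σ² * I)
end
```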

tutorials/06-infinite-mixture-model/index.qmd

Lines changed: 0 additions & 1 deletion

@@ -222,7 +222,6 @@ data /= std(data);
 Next, we'll sample from our posterior using SMC.
 
 ```{julia}
-#| echo: false
 #| output: false
 setprogress!(false)
 ```

tutorials/07-poisson-regression/index.qmd

Lines changed: 2 additions & 3 deletions

@@ -69,7 +69,7 @@ df[sample(1:nrow(df), 5; replace=false), :]
 We plot the distribution of the number of sneezes for the 4 different cases taken above. As expected, the person sneezes the most when he has taken alcohol and not taken his medicine. He sneezes the least when he doesn't consume alcohol and takes his medicine.
 
 ```{julia}
-#Data Plotting
+# Data Plotting
 
 p1 = Plots.histogram(
     df[(df[:, :alcohol_taken] .== 0) .& (df[:, :nomeds_taken] .== 0), 1];
@@ -102,7 +102,7 @@ data
 We must recenter our data about 0 to help the Turing sampler in initialising the parameter estimates. So, normalising the data in each column by subtracting the mean and dividing by the standard deviation:
 
 ```{julia}
-# # Rescale our matrices.
+# Rescale our matrices.
 data = (data .- mean(data; dims=1)) ./ std(data; dims=1)
 ```
 
@@ -213,4 +213,3 @@ plot(chains_new)
 ```
 
 As can be seen from the numeric values and the plots above, the standard deviation values have decreased and all the plotted values are from the estimated posteriors. The exponentiated mean values, with the warmup samples removed, have not changed by much and they are still in accordance with their intuitive meanings as described earlier.
-

tutorials/08-multinomial-logistic-regression/index.qmd

Lines changed: 0 additions & 1 deletion

@@ -119,7 +119,6 @@ end;
 Now we can run our sampler. This time we'll use [`NUTS`](https://turinglang.org/stable/docs/library/#Turing.Inference.NUTS) to sample from our posterior.
 
 ```{julia}
-#| echo: false
 #| output: false
 setprogress!(false)
 ```

tutorials/09-variational-inference/index.qmd

Lines changed: 2 additions & 2 deletions

@@ -73,7 +73,6 @@ We'll produce 10 000 samples with 200 steps used for adaptation and a target acc
 If you don't understand what "adaptation" or "target acceptance rate" refers to, all you really need to know is that `NUTS` is known to be one of the most accurate and efficient samplers (when applicable) while requiring little to no hand-tuning to work well.
 
 ```{julia}
-#| echo: false
 #| output: false
 setprogress!(false)
 ```
@@ -601,6 +600,7 @@ Test set:
 OLS loss: $ols_loss2")
 ```
 
+
 Interestingly the squared difference between true- and mean-prediction on the test-set is actually *better* for the mean-field variational posterior than for the "true" posterior obtained by MCMC sampling using `NUTS`. But, as Bayesians, we know that the mean doesn't tell the entire story. One quick check is to look at the mean predictions ± standard deviation of the two different approaches:
 
 ```{julia}
@@ -797,8 +797,8 @@ Test set:
 ```
 
 ```{julia}
-#| echo: false
 #| eval: false
+#| echo: false
 # Verify the loss on the test set.
 @assert vi_loss2 < 0.01 "VI loss on the test set: $(vi_loss2)"
 @assert bayes_loss2 < 0.000001 "Bayes loss on the test set: $(bayes_loss2)"
