tutorials/00-introduction/index.qmd (33 additions, 55 deletions)

---
title: Introduction to Turing
engine: julia
---

### Introduction

This is the first of a series of tutorials on the universal probabilistic programming language **Turing**.

Turing is a probabilistic programming system written entirely in Julia. It has an intuitive modelling syntax and supports a wide range of sampling-based inference algorithms.

Familiarity with Julia is assumed throughout this tutorial. If you are new to Julia, [Learning Julia](https://julialang.org/learning/) is a good starting point.

For users new to Bayesian machine learning, please consider more thorough introductions to the field, such as [Pattern Recognition and Machine Learning](https://www.springer.com/us/book/9780387310732). This tutorial tries to provide an intuition for Bayesian inference and gives a simple example of how to use Turing. Note that this tutorial is not a comprehensive introduction to Bayesian machine learning.

### Coin Flipping Without Turing

The following example illustrates the effect of updating our beliefs with every piece of new evidence we observe.

Assume that we are unsure about the probability of heads in a coin flip. To get an intuitive understanding of what "updating our beliefs" means, we will visualize the probability of heads in a coin flip after each observed piece of evidence.

First, let us load some packages that we need to simulate a coin flip

```{julia}
using Distributions

using Random
Random.seed!(12); # Set seed for reproducibility
```

and to visualize our results.

```{julia}
using StatsPlots
```

Note that Turing is not loaded here — we do not use it in this example. If you are already familiar with posterior updates, you can proceed to the next step.

Next, we configure the data generating model. Let us set the true probability that a coin flip turns up heads

```{julia}
p_true = 0.5;
```

and set the number of coin flips we will show our model.

```{julia}
N = 100;
```

We simulate `N` coin flips by drawing `N` random samples from the Bernoulli distribution with success probability `p_true`. The draws are collected in a variable called `data`:

```{julia}
data = rand(Bernoulli(p_true), N);
```

Here is what the first five coin flips look like:

```{julia}
data[1:5]
```

Next, we specify a prior belief about the distribution of heads and tails in a coin toss. Here we choose a [Beta](https://en.wikipedia.org/wiki/Beta_distribution) distribution as the prior distribution for the probability of heads. Before any coin flip is observed, we assume a uniform distribution $\operatorname{U}(0, 1) = \operatorname{Beta}(1, 1)$ of the probability of heads, i.e., every probability is equally likely initially.

```{julia}
prior_belief = Beta(1, 1);
```

With our priors set and our data at hand, we can perform Bayesian inference.

This is a fairly simple process. We expose one additional coin flip to our model in every iteration, such that the first iteration only sees the first coin flip, while the last iteration sees all the coin flips. In each iteration we update our belief to an updated version of the original Beta distribution that accounts for the new proportion of heads and tails. The update is particularly simple since our prior distribution is a [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior). Note that a closed-form expression for the posterior (implemented in the `updated_belief` function below) is not accessible in general and usually does not exist for more interesting models.
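
Concretely, the conjugate update has a simple closed form (standard Beta-Bernoulli conjugacy, spelled out here for reference): if the prior is $\operatorname{Beta}(\alpha, \beta)$ and we have observed $h$ heads and $t$ tails, the posterior is again a Beta distribution,

$$
p \mid \text{data} \sim \operatorname{Beta}(\alpha + h, \beta + t).
$$

The `updated_belief` function below implements this update.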

```{julia}
function updated_belief(prior_belief::Beta, data::AbstractArray{Bool})
    # Count the number of heads and tails.
    heads = sum(data)
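    tails = length(data) - heads

    # The rest of this function is omitted from the diff hunk; the return below
    # is a sketch of the standard conjugate Beta update described in the text.
    return Beta(prior_belief.α + heads, prior_belief.β + tails)
end
```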
@@ -143,19 +125,19 @@ We now move away from the closed-form expression above.

We use **Turing** to specify the same model and to approximate the posterior distribution with samples.
To do so, we first need to load `Turing`.

```{julia}
using Turing
```

Additionally, we load `MCMCChains`, a library for analyzing and visualizing the samples with which we approximate the posterior distribution.

```{julia}
using MCMCChains
```

First, we define the coin-flip model using Turing.

```{julia}
# Unconditioned coinflip model with `N` observations.
@model function coinflip(; N::Int)
    # Our prior belief about the probability of heads in a coin toss.
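    p ~ Beta(1, 1)

    # The rest of the model body is omitted from the diff hunk; the lines below
    # are a hedged sketch only, and the use of `filldist` is an assumption.
    y ~ filldist(Bernoulli(p), N)

    return y
end
```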
@@ -174,15 +156,15 @@ The `@model` macro modifies the body of the Julia function `coinflip` and, e.g.,

Here we defined a model that is not conditioned on any specific observations, as this allows us to easily obtain samples of both `p` and `y` with

```{julia}
rand(coinflip(; N))
```

The model can be conditioned on some observations with `|`.
See the [documentation of the `condition` syntax](https://turinglang.github.io/DynamicPPL.jl/stable/api/#Condition-and-decondition) in `DynamicPPL.jl` for more details.
In the conditioned `model` the observations `y` are fixed to `data`.
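
As a rough sketch of what this conditioning can look like (the exact line is omitted from this excerpt; the names `coinflip`, `N`, and `data` follow the surrounding text):

```julia
# Fix the observations `y` to the simulated coin flips in `data`.
model = coinflip(; N) | (; y=data)
```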
The `sample` function and common keyword arguments are explained more extensively in the documentation of [AbstractMCMC.jl](https://turinglang.github.io/AbstractMCMC.jl/dev/api/).
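
For instance, drawing posterior samples might look like the following (a sketch; the choice of sampler and the sample count are assumptions, and `chain` matches the name used in the plots below):

```julia
# Approximate the posterior of `p` with the No-U-Turn sampler (NUTS).
chain = sample(model, NUTS(), 2_000)
```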

After finishing the sampling process, we can visually compare the closed-form posterior distribution with the approximation obtained with Turing.

```{julia}
histogram(chain)
```

Now we can build our plot:

<!-- ```{julia}
#| echo: false
#| output: true
@assert isapprox(mean(chain, :p), 0.5; atol=0.1) "Estimated mean of parameter p: $(mean(chain, :p)) - not in [0.4, 0.6]!"
``` -->

```{julia}
# Visualize a blue density plot of the approximate posterior distribution using HMC (see Chain 1 in the legend).
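# The remainder of this cell is omitted from the diff; the lines below are a
# hedged sketch of how such a comparison plot could be built with StatsPlots,
# not necessarily the tutorial's exact code.
density(vec(chain[:p]); label="Approximate posterior (HMC)", legend=:topleft)

# Overlay the closed-form posterior from `updated_belief` and the true value.
plot!(0:0.01:1, pdf.(updated_belief(prior_belief, data), 0:0.01:1); label="Closed-form posterior")
vline!([p_true]; label="True probability of heads")
```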