tutorials/docs-09-using-turing-advanced/index.qmd (5 additions, 238 deletions)

@@ -3,242 +3,9 @@ title: Advanced Usage
engine: julia
---

```{julia}
#| echo: false
#| output: false
using Pkg;
Pkg.instantiate();
```

This page has been separated into new sections. Please update any bookmarks you might have:

```{julia}
#| echo: false
using Distributions, Turing, Random, Bijectors
```

## How to Define a Customized Distribution

`Turing.jl` supports the use of distributions from the Distributions.jl package. By extension, it also supports customized distributions, defined as subtypes of the `Distribution` type from Distributions.jl together with implementations of the corresponding functions.

The workflow below shows how to define a customized distribution, using our own implementation of a simple `Uniform` distribution as an example.

### 1. Define the Distribution Type

First, define a type for the distribution, as a subtype of the corresponding distribution type in the Distributions.jl package.

```{julia}
struct CustomUniform <: ContinuousUnivariateDistribution end
```

### 2. Implement Sampling and Evaluation of the log-pdf

Second, define `rand` and `logpdf`, which will be used to run the model.
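
For `CustomUniform`, a minimal sketch of these definitions (assuming the standard Distributions.jl interface; the uniform density on `[0, 1]` is 1 on its support, so its log-pdf is 0):

```{julia}
# Draw a sample in [0, 1].
Distributions.rand(rng::AbstractRNG, d::CustomUniform) = rand(rng)

# p(x) = 1 on the support, so log p(x) = 0.
Distributions.logpdf(d::CustomUniform, x::Real) = zero(x)
```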

### 3. Define Helper Functions

In most cases, it may be required to define some helper functions.

#### 3.1 Domain Transformation

Certain samplers, such as `HMC`, require the domain of the priors to be unbounded. Therefore, to use our `CustomUniform` as a prior in a model we also need to define how to transform samples from `[0, 1]` to `ℝ`. To do this, we simply need to define the corresponding `Bijector` from `Bijectors.jl`, which is what `Turing.jl` uses internally to deal with constrained distributions.
To transform from `[0, 1]` to `ℝ` we can use the `Logit` bijector:
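
A sketch of the corresponding definition (`Logit(a, b)` from Bijectors.jl maps `[a, b]` to `ℝ`):

```{julia}
Bijectors.bijector(d::CustomUniform) = Logit(0.0, 1.0)
```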
You'd do the exact same thing for `ContinuousMultivariateDistribution` and `ContinuousMatrixDistribution`. For example, `Wishart` defines a distribution over positive-definite matrices and so `bijector` returns a `PDBijector` when called with a `Wishart` distribution as an argument. For discrete distributions, there is no need to define a bijector; the `Identity` bijector is used by default.

Alternatively, for a `UnivariateDistribution` we can define the `minimum` and `maximum` of the distribution:

```{julia}
Distributions.minimum(d::CustomUniform) = 0.0
Distributions.maximum(d::CustomUniform) = 1.0
```

and `Bijectors.jl` will return a default `Bijector` called `TruncatedBijector`, which makes use of `minimum` and `maximum` to derive the correct transformation.
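
As a quick sanity check (a sketch; the exact bijector type returned is an implementation detail of Bijectors.jl):

```{julia}
#| eval: false
b = bijector(CustomUniform())  # derived from `minimum` and `maximum`
b(0.5)                         # ≈ 0.0: the midpoint of [0, 1] maps to 0 in ℝ
```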
Internally, Turing basically does the following when it needs to convert a constrained distribution to an unconstrained distribution, e.g. when sampling using `HMC`:

```{julia}
dist = Gamma(2, 3)
b = bijector(dist)
transformed_dist = transformed(dist, b)  # results in a distribution with transformed support + logpdf correction
```

and then we can call `rand` and `logpdf` as usual, where

- `rand(transformed_dist)` returns a sample in the unconstrained space, and
- `logpdf(transformed_dist, y)` returns the log density of the original distribution, but with `y` living in the unconstrained space.
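
For example (a sketch; we assume Bijectors.jl's `inverse` for inverting the bijector):

```{julia}
#| eval: false
y = rand(transformed_dist)   # a draw living in ℝ rather than in the support of Gamma
logpdf(transformed_dist, y)  # log density of the original distribution, with Jacobian correction
x = inverse(b)(y)            # map the draw back to the constrained space
```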
To read more about Bijectors.jl, check out [the project README](https://github.com/TuringLang/Bijectors.jl).
## Update the accumulated log probability in the model definition

Turing accumulates log probabilities in an internal data structure that is accessible through the variable `__varinfo__` inside of the model definition (see below for more details about model internals). However, since users should not have to deal with internal data structures, a macro `Turing.@addlogprob!` is provided that increases the accumulated log probability. For instance, this allows you to [include arbitrary terms in the likelihood](https://github.com/TuringLang/Turing.jl/issues/1332).
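
A minimal sketch of how this can look inside a model (`myloglikelihood` is a hypothetical user-defined function standing in for an arbitrary likelihood term):

```{julia}
#| eval: false
@model function demo(x)
    μ ~ Normal()
    # Add an arbitrary term to the accumulated log probability.
    Turing.@addlogprob! myloglikelihood(x, μ)
end
```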

```{julia}
#| eval: false
# Instantiate a Model object with our data variables.
model = gdemo([1.5, 2.0])
```
### Reparametrization and generated_quantities

Often, the most natural parameterization for a model is not the most computationally feasible. Consider the following (efficiently reparametrized) implementation of Neal's funnel [(Neal, 2003)](https://arxiv.org/abs/physics/0009028):

```{julia}
#| eval: false
@model function Neal()
    # Raw draws
    y_raw ~ Normal(0, 1)
    x_raw ~ arraydist([Normal(0, 1) for i in 1:9])

    # Transform:
    y = 3 * y_raw
    x = exp.(y ./ 2) .* x_raw

    # Return:
    return [x; y]
end
```
In this case, the random variables exposed in the chain (`x_raw`, `y_raw`) are not in a helpful form — what we're after are the deterministically transformed variables `x` and `y`.
More generally, there are often quantities in our models that we might be interested in viewing, but which are not explicitly present in our chain.

We can generate draws from these variables — in this case, `x, y` — by adding them to the return statement of the model and then calling `generated_quantities(model, chain)`. This function outputs an array of the values specified in the return statement of the model.
210
-
211
-
For example, in the above reparametrization, we sample from our model:
212
-
213
-
```{julia}
214
-
#| eval: false
215
-
chain = sample(Neal(), NUTS(), 1000)
216
-
```
and then call:

```{julia}
#| eval: false
generated_quantities(Neal(), chain)
```
to return an array for each posterior sample containing `x1, x2, ... x9, y`.
In this case, it might be useful to reorganize our output into a matrix for plotting:
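
One way to build such a matrix (a sketch: we assume each generated quantity is the 10-element vector `[x; y]`, so that after concatenation and transposition columns 1 through 9 hold `x` and column 10 holds `y`):

```{julia}
#| eval: false
reparam_chain = reduce(hcat, generated_quantities(Neal(), chain))'
```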

We can then recover a vector of our samples as follows:

```{julia}
#| eval: false
x1_samples = reparam_chain[:, 1]
y_samples = reparam_chain[:, 10]
```

## Task Copying

Turing [copies](https://github.com/JuliaLang/julia/issues/4085) Julia tasks to deliver efficient inference algorithms, but it also provides an alternative, slower implementation as a fallback. Task copying is enabled by default. Task copying requires us to use the `TapedTask` facility, which is provided by [Libtask](https://github.com/TuringLang/Libtask.jl), to create tasks.
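
A sketch of the task-copying idea using Libtask's classic `produce`/`consume` API (the exact API varies across Libtask versions, so treat this as illustrative only):

```{julia}
#| eval: false
using Libtask

function producer()
    for t in 1:3
        produce(t)  # yield `t` and pause the task here
    end
end

ttask = TapedTask(producer)
consume(ttask)        # 1
ttask2 = copy(ttask)  # copy the task mid-execution
consume(ttask)        # 2
consume(ttask2)       # 2 (the copy resumes from the same point)
```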

tutorials/docs-16-using-turing-external-samplers/index.qmd (2 additions, 2 deletions)

@@ -1,5 +1,5 @@
---
-title: Using External Sampler
+title: Using External Samplers
engine: julia
---
@@ -176,4 +176,4 @@ For practical examples of how to adapt a sampling library to the `AbstractMCMC`
[^1]: Xu et al., [AdvancedHMC.jl: A robust, modular and efficient implementation of advanced HMC algorithms](http://proceedings.mlr.press/v118/xu20a/xu20a.pdf), 2019
[^2]: Zhang et al., [Pathfinder: Parallel quasi-Newton variational inference](https://arxiv.org/abs/2108.03782), 2021
[^3]: Robnik et al., [Microcanonical Hamiltonian Monte Carlo](https://arxiv.org/abs/2212.08549), 2022
-[^4]: Robnik and Seljak, [Langevin Hamiltonian Monte Carlo](https://arxiv.org/abs/2303.18221), 2023
+[^4]: Robnik and Seljak, [Langevin Hamiltonian Monte Carlo](https://arxiv.org/abs/2303.18221), 2023