Commit 60e9ecf
Add page on submodels
1 parent 6df0913 commit 60e9ecf

File tree

- _quarto.yml
- core-functionality/index.qmd
- developers/contexts/submodel-condition/index.qmd
- usage/submodels/index.qmd

4 files changed: +321 -1 lines changed

_quarto.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -57,6 +57,7 @@ website:
       collapse-level: 1
       contents:
         - usage/automatic-differentiation/index.qmd
+        - usage/submodels/index.qmd
         - usage/custom-distribution/index.qmd
         - usage/probability-interface/index.qmd
         - usage/modifying-logprob/index.qmd
@@ -196,6 +197,7 @@ usage-modifying-logprob: usage/modifying-logprob
 usage-performance-tips: usage/performance-tips
 usage-probability-interface: usage/probability-interface
 usage-sampler-visualisation: usage/sampler-visualisation
+usage-submodels: usage/submodels
 usage-tracking-extra-quantities: usage/tracking-extra-quantities
 usage-troubleshooting: usage/troubleshooting
```

core-functionality/index.qmd

Lines changed: 1 addition & 1 deletion

```diff
@@ -45,7 +45,7 @@ end
 
 Indexing and field access is supported, so that `x[i] ~ distr` and `x.field ~ distr` are valid statements.
 However, in these cases, `x` must be defined in the scope of the model function.
-`distr` is typically either a distribution from Distributions.jl (see [this page]({{< meta usage-custom-distribution >}}) for implementing custom distributions), or another Turing model wrapped in `to_submodel()`.
+`distr` is typically either a distribution from Distributions.jl (see [this page]({{< meta usage-custom-distribution >}}) for implementing custom distributions), or another Turing model wrapped in `to_submodel()` (see [this page]({{< meta usage-submodels >}}) for submodels).
 
 There are two classes of tilde-statements: _observe_ statements, where the left-hand side contains an observed value, and _assume_ statements, where the left-hand side is not observed.
 These respectively correspond to likelihood and prior terms.
```

developers/contexts/submodel-condition/index.qmd

Lines changed: 4 additions & 0 deletions

```diff
@@ -3,6 +3,10 @@ title: "Conditioning and fixing in submodels"
 engine: julia
 ---
 
+This page is a technical explanation of how contexts are managed for submodels.
+
+A user-facing guide to submodels is available on [this page]({{< meta usage-submodels >}}).
+
 ## PrefixContext
 
 Submodels in DynamicPPL come with the notion of _prefixing_ variables: under the hood, this is implemented by adding a `PrefixContext` to the context stack.
```

usage/submodels/index.qmd

Lines changed: 314 additions & 0 deletions

---
title: Submodels
engine: julia
---

```{julia}
using Turing
using Random: Xoshiro, seed!
seed!(468)
```

In Turing.jl, you can define models and use them as components of larger models (i.e., _submodels_) using the `to_submodel` function.
In this way, you can (for example) define a model once and use it in multiple places:

```{julia}
@model function inner()
    a ~ Normal()
    return a + 100
end

@model function outer()
    # This line adds the variable `x.a` to the chain.
    # The inner variable `a` is prefixed with the
    # left-hand side of the `~` operator, i.e. `x`.
    x ~ to_submodel(inner())
    # Here, the value of x will be `a + 100` because
    # that is the return value of the submodel.
    b ~ Normal(x)
end
```

If we sample from this model, we expect `x.a` to be close to zero, and `b` to be close to 100:

```{julia}
rand(outer())
```

## Manipulating submodels

### Conditioning

In general, everything that can be done to a model 'carries over' to when it is used as a submodel.
For example, you can condition a variable in a submodel in two ways:

```{julia}
# From the outside; the prefix `x` must be applied because
# from the perspective of `outer`, the variable is called
# `x.a`.
outer_conditioned1 = outer() | (@varname(x.a) => 1);
rand(Xoshiro(468), outer_conditioned1)
```

Or equivalently, from the inside:

```{julia}
@model function outer_conditioned2()
    # The prefix doesn't need to be applied here because
    # `inner` itself has no knowledge of the prefix.
    x ~ to_submodel(inner() | (@varname(a) => 1))
    b ~ Normal(x)
end
rand(Xoshiro(468), outer_conditioned2())
```

In both cases, the variable `x.a` does not appear in the sampled output.

Note, however, that you cannot condition on the return value of a submodel.
Thus, for example, if we had:

```{julia}
@model function inner_sensible()
    a ~ Normal()
    return a
end

@model function outer()
    x ~ to_submodel(inner_sensible())
    b ~ Normal(x)
end
```

and we tried to condition on `x`, it would be silently ignored, even though `x` is equal to `a`.
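
We can check this quickly: conditioning `outer` on `x` and then sampling still draws `x.a` at random (a sanity check of the behaviour described above):

```{julia}
# Conditioning on the return value `x` is silently ignored;
# `x.a` is still sampled as usual.
rand(Xoshiro(468), outer() | (@varname(x) => 1.0))
```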

The reason for this is that it is entirely coincidental that the return value of the submodel is equal to `a`.
In general, a return value can be anything, and conditioning on it is not a meaningful operation.

### Prefixing

Prefixing is the only place where submodel behaviour is 'special' compared to that of ordinary models.

By default, all variables in a submodel are prefixed with the left-hand side of the tilde-statement.
This is done to avoid clashes if the same submodel is used multiple times in a model.
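
For example, here is a sketch (reusing the `inner` model from above) in which the same submodel appears twice; prefixing keeps the two copies of `a` distinct as `x.a` and `y.a`:

```{julia}
@model function outer_twice()
    # Each tilde-statement applies its own prefix, so the two
    # draws are stored separately as `x.a` and `y.a`.
    x ~ to_submodel(inner())
    y ~ to_submodel(inner())
    b ~ Normal(x + y)
end
rand(outer_twice())
```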

You can disable the prefixing behaviour by passing `false` as the second argument to `to_submodel`:

```{julia}
@model function outer_no_prefix()
    x ~ to_submodel(inner(), false)
    b ~ Normal(x)
end
rand(outer_no_prefix())
```

## Accessing submodel variables

In all of the examples above, `x` is equal to `a + 100` because that is the return value of the submodel.
To access the actual latent variables in the submodel itself, the simplest option is to include the variable in the return value of the submodel:

```{julia}
@model function inner_with_retval()
    a ~ Normal()
    # You can return anything you like from the model,
    # but if you want to access the latent variables, they
    # should be included in the return value.
    return (; a=a, a_plus_100=a + 100)
end

@model function outer_with_retval()
    # The variable `x` will now contain the return value of the submodel,
    # which is a named tuple with `a` and `a_plus_100`.
    x ~ to_submodel(inner_with_retval())
    # You can access the value of x.a directly, because
    # x is a NamedTuple which contains `a`. Since `b` is
    # centred on `x.a`, it should be close to 0, not 100.
    b ~ Normal(x.a)
end
rand(Xoshiro(468), outer_with_retval())
```

You can also manually access the value by looking inside the special `__varinfo__` object.

::: {.callout-warning}
This relies on DynamicPPL internals and we do not recommend doing this unless you have no other option, e.g., if the submodel is defined in a different package which you do not control.
:::

```{julia}
@model function outer_with_varinfo()
    x ~ to_submodel(inner())
    # Access the value of x.a via the internal VarInfo.
    a_value = __varinfo__[@varname(x.a)]
    b ~ Normal(a_value)
end
rand(Xoshiro(468), outer_with_varinfo())
```

## Example: linear models

Here is a motivating example for the use of submodels.
Suppose we want to fit a (very simplified) regression model to some data $x$ and $y$, where

$$\begin{align}
c_0 &\sim \text{Normal}(0, 5) \\
c_1 &\sim \text{Normal}(0, 5) \\
\mu &= c_0 + c_1 x \\
y &\sim d(\mu)
\end{align}$$

where $d$ is _some_ distribution parameterised by the value of $\mu$, whose exact form we don't know.

In practice, what we would do is write several different models, one for each choice of $d$:

```julia
@model function normal(x, y)
    c0 ~ Normal(0, 5)
    c1 ~ Normal(0, 5)
    mu = c0 + c1 * x
    # Assume that y is centred on mu, and that the noise in y
    # is normally distributed with standard deviation sigma.
    sigma ~ truncated(Cauchy(0, 3); lower=0)
    y ~ Normal(mu, sigma)
end

@model function logpoisson(x, y)
    c0 ~ Normal(0, 5)
    c1 ~ Normal(0, 5)
    mu = c0 + c1 * x
    # Exponentiate mu because the rate parameter of
    # a Poisson distribution must be positive.
    y ~ Poisson(exp(mu))
end

# and so on...
```

We would then fit all of our models and use some criterion to decide which model is most suitable (see e.g. [Wikipedia](https://en.wikipedia.org/wiki/Model_selection), or section 3.4 of Bishop's *Pattern Recognition and Machine Learning*).

However, the code above is quite repetitive.
For example, if we wanted to adjust the priors on `c0` and `c1`, we would have to do so in each model separately.
If this were any other kind of code, we would naturally think of extracting the common parts into a separate function.
In this case we can do exactly that with a submodel:

```julia
@model function priors(x)
    c0 ~ Normal(0, 5)
    c1 ~ Normal(0, 5)
    mu = c0 + c1 * x
    return (; c0=c0, c1=c1, mu=mu)
end

@model function normal(x, y)
    # `~` runs the submodel and binds its return value (the
    # NamedTuple returned by `priors`) to `ps`; note that a
    # plain `=` would not execute the submodel.
    ps ~ to_submodel(priors(x))
    sigma ~ truncated(Cauchy(0, 3); lower=0)
    y ~ Normal(ps.mu, sigma)
end

@model function logpoisson(x, y)
    ps ~ to_submodel(priors(x))
    y ~ Poisson(exp(ps.mu))
end
```

One could go even further and extract the `y` section into its own submodel as well, which would bring us to a generalised linear modelling interface that does not actually require the user to define their own Turing models at all:

```julia
@model function normal_family(mu, y)
    sigma ~ truncated(Cauchy(0, 3); lower=0)
    y ~ Normal(mu, sigma)
    return nothing
end

@model function logpoisson_family(mu, y)
    y ~ Poisson(exp(mu))
    return nothing
end

# An end-user could just use this function. Of course,
# a more thorough interface would also allow the user to
# specify priors, etc.
function make_model(x, y, family::Symbol)
    if family == :normal
        family_model = normal_family
    elseif family == :logpoisson
        family_model = logpoisson_family
    else
        error("unknown family: `$family`")
    end

    @model function general(x, y)
        ps ~ to_submodel(priors(x))
        _n ~ to_submodel(family_model(ps.mu, y))
    end
    return general(x, y)
end
```

While this final example really showcases the composability of submodels, it also illustrates a minor syntactic drawback.
In the case of the `family_model`, we do not care about its return value because it is not used anywhere else in the model.
Ideally, we would therefore not need to place anything on the left-hand side of `to_submodel`.
However, because the special behaviour of `to_submodel` relies on the tilde operator, and the tilde operator requires a left-hand side, we have to use a dummy variable (here `_n`).
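
As a usage sketch (with made-up scalar data, since the simplified models above take scalar `x` and `y`), an end-user could then fit either family without writing any Turing code themselves:

```julia
# Hypothetical scalar data, purely for illustration.
x_data, y_data = 1.5, 7.0

model = make_model(x_data, y_data, :normal)
chain = sample(model, NUTS(), 1000)

# Swapping the likelihood family is a one-word change:
model2 = make_model(x_data, y_data, :logpoisson)
```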

## Submodels versus distributions

Finally, we end with a discussion of why some of the submodel behaviour described above has come about.
This is slightly more behind-the-scenes and will therefore likely be of most interest to Turing developers.

Fundamentally, submodels are to be compared against distributions: both of them can appear on the right-hand side of a tilde-statement.
However, distributions only have one 'output', i.e., the value that is sampled from them:

```{julia}
dist = Normal()
rand(dist)
```

Another point to bear in mind is that, given a sample from `dist`, asking for its log-probability is a meaningful calculation:

```{julia}
logpdf(dist, rand(dist))
```

In contrast, models (and hence submodels) have two different outputs: the latent variables, and the return value.
These are accessed using `rand(model)` and `model()` respectively:

```{julia}
@model function f()
    a ~ Normal()
    return "hello, world."
end

model = f()
```

```{julia}
# Latent variables
rand(model)
```

```{julia}
# Return value
model()
```

Just like for distributions, one can indeed ask for the log-probability of the latent variables (although we have to specify whether we want the joint, likelihood, or prior probability):

```{julia}
logjoint(model, rand(model))
```
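
The prior and likelihood components can be queried separately with `logprior` and `loglikelihood` (a small illustration; since `model` conditions on no data, its likelihood term is zero):

```{julia}
latents = rand(model)
(logprior(model, latents), loglikelihood(model, latents))
```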

But it does not make sense to ask for the log-probability of the return value (which, in this case, is a string, and in general could be literally any object).

The fact that we have what looks like a unified notation for these two constructs is a bit of a lie, since it hides this distinction.
In particular, for `x ~ distr`, `x` is assigned the value of `rand(distr)`; but for `y ~ submodel`, `y` is assigned the value of `submodel()`.
This is why, for example, it is impossible to condition on `y` in `y ~ submodel`; we can only condition on `x` in `x ~ distr`.

Eventually, we would like to make this more logically consistent.
In particular, it is clear that `y ~ submodel` should return not one but two objects: the latent variables and the return value.
Furthermore, it should be possible to condition on the latent variables, but not on the return value.
See [this issue](https://github.com/TuringLang/Turing.jl/issues/2485) for an ongoing discussion of the best way to accomplish this.

It should be mentioned that extracting the latent variables from a submodel is not entirely trivial, since the submodel is run using the same `VarInfo` as the parent model (i.e., we would have to do a before-and-after comparison to see which _new_ variables were added by the submodel).

Also, we are still working out the exact data structure that should be used to represent the latent variables.
In the examples above, `rand(model)` returns a `NamedTuple`, but this actually causes a loss of information, because the keys of a `NamedTuple` are `Symbol`s, whereas we really want to use `VarName`s.
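
As a small illustration of this information loss, note how the prefixed variable `x.a` from the earlier `outer` model is flattened into a single `Symbol`:

```{julia}
# `x.a` survives only as Symbol("x.a"), not as a structured VarName.
keys(rand(outer()))
```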

See [this issue](https://github.com/TuringLang/DynamicPPL.jl/issues/900) for a current proposal.
