Commit b22e287: Actually run the linear model bit
1 parent 60e9ecf

1 file changed: usage/submodels/index.qmd (+35, -18 lines)
````diff
@@ -142,7 +142,6 @@ end
 rand(Xoshiro(468), outer_with_varinfo())
 ```
 
-
 ## Example: linear models
 
 Here is a motivating example for the use of submodels.
@@ -159,67 +158,83 @@ where $d$ is _some_ distribution parameterised by the value of $\mu$, which we d
 
 In practice, what we would do is to write several different models, one for each function $f$:
 
-```julia
+```{julia}
 @model function normal(x, y)
     c0 ~ Normal(0, 5)
     c1 ~ Normal(0, 5)
-    mu = c0 + c1 * x
+    mu = c0 .+ c1 .* x
     # Assume that y = mu, and that the noise in `y` is
     # normally distributed with standard deviation sigma
     sigma ~ truncated(Cauchy(0, 3); lower=0)
-    y ~ Normal(mu, sigma)
+    for i in eachindex(y)
+        y[i] ~ Normal(mu[i], sigma)
+    end
 end
 
 @model function logpoisson(x, y)
     c0 ~ Normal(0, 5)
     c1 ~ Normal(0, 5)
-    mu = c0 + c1 * x
+    mu = c0 .+ c1 .* x
     # exponentiate mu because the rate parameter of
     # a Poisson distribution must be positive
-    y ~ Poisson(exp(mu))
+    for i in eachindex(y)
+        y[i] ~ Poisson(exp(mu[i]))
+    end
 end
 
 # and so on...
 ```
 
+::: {.callout-note}
+You could use `arraydist` to avoid the loops: for example, in `logpoisson`, one could write `y ~ arraydist(Poisson.(exp.(mu)))`, but for simplicity in this tutorial we spell it out fully.
+:::
+
 We would then fit all of our models and use some criterion to test which model is most suitable (see e.g. [Wikipedia](https://en.wikipedia.org/wiki/Model_selection), or section 3.4 of Bishop's *Pattern Recognition and Machine Learning*).
 
 However, the code above is quite repetitive.
 For example, if we wanted to adjust the priors on `c0` and `c1`, we would have to do it in each model separately.
 If this was any other kind of code, we would naturally think of extracting the common parts into a separate function.
 In this case we can do exactly that with a submodel:
 
-```julia
+```{julia}
 @model function priors(x)
     c0 ~ Normal(0, 5)
     c1 ~ Normal(0, 5)
-    mu = c0 + c1 * x
+    mu = c0 .+ c1 .* x
     return (; c0=c0, c1=c1, mu=mu)
 end
 
 @model function normal(x, y)
-    (; c0, c1, mu) ~ to_submodel(priors(x))
+    ps ~ to_submodel(priors(x))
     sigma ~ truncated(Cauchy(0, 3); lower=0)
-    y ~ Normal(mu, sigma)
+    for i in eachindex(y)
+        y[i] ~ Normal(ps.mu[i], sigma)
+    end
 end
 
 @model function logpoisson(x, y)
-    (; c0, c1, mu) ~ to_submodel(priors(x))
-    y ~ Poisson(exp(mu))
+    ps ~ to_submodel(priors(x))
+    for i in eachindex(y)
+        y[i] ~ Poisson(exp(ps.mu[i]))
+    end
 end
 ```
 
 One could go even further and extract the `y` section into its own submodel as well, which would bring us to a generalised linear modelling interface that does not actually require the user to define their own Turing models at all:
 
-```julia
+```{julia}
 @model function normal_family(mu, y)
     sigma ~ truncated(Cauchy(0, 3); lower=0)
-    y ~ Normal(mu, sigma)
+    for i in eachindex(y)
+        y[i] ~ Normal(mu[i], sigma)
+    end
     return nothing
 end
 
 @model function logpoisson_family(mu, y)
-    y ~ Poisson(exp(mu))
+    for i in eachindex(y)
+        y[i] ~ Poisson(exp(mu[i]))
+    end
     return nothing
 end
 
@@ -236,16 +251,18 @@ function make_model(x, y, family::Symbol)
     end
 
     @model function general(x, y)
-        (; c0, c1, mu) ~ to_submodel(priors(x))
-        _n ~ to_submodel(family_model(mu, y))
+        ps ~ to_submodel(priors(x), false)
+        _n ~ to_submodel(family_model(ps.mu, y), false)
     end
     return general(x, y)
 end
+
+sample(make_model([1, 2, 3], [1, 2, 3], :normal), NUTS(), 1000; progress=false)
 ```
 
 While this final example really showcases the composability of submodels, it also illustrates a minor syntactic drawback.
 In the case of the `family_model`, we do not care about its return value because it is not used anywhere else in the model.
-Ideally, we would therefore not need to place anything on the left-hand side of `to_submodel`.
+Ideally, we should therefore not need to place anything on the left-hand side of `to_submodel`.
 However, because the special behaviour of `to_submodel` relies on the tilde operator, and the tilde operator requires a left-hand side, we have to use a dummy variable (here `_n`).
 
 
````
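The callout note added by this commit points at `arraydist` as a loop-free alternative to the elementwise observation statements. As a rough sketch of what that vectorised form could look like (assuming Turing.jl and Distributions.jl are loaded; this block is illustrative and is not part of the committed tutorial):

```julia
# Illustrative sketch only, not part of the commit: the `arraydist`
# form mentioned in the callout note, applied to the `logpoisson` model.
using Turing

@model function logpoisson_vec(x, y)
    c0 ~ Normal(0, 5)
    c1 ~ Normal(0, 5)
    mu = c0 .+ c1 .* x
    # A single joint observation over the whole vector `y`,
    # replacing the explicit `for i in eachindex(y)` loop.
    y ~ arraydist(Poisson.(exp.(mu)))
end
```

Separately, the `false` passed to `to_submodel` in the `general` model appears to disable automatic prefixing of the submodel's variable names, so `c0` and `c1` keep their unprefixed names in the resulting chain.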
