Merged
Changes from 18 commits
414 changes: 218 additions & 196 deletions Manifest.toml

Large diffs are not rendered by default.

3 changes: 1 addition & 2 deletions Project.toml
Original file line number Diff line number Diff line change
@@ -56,5 +56,4 @@ Turing = "fce5fe82-541a-59a6-adf8-730c64b5f9a0"
UnPack = "3a884ed6-31ef-47d7-9d2a-63182c4928ed"

[compat]
Turing = "0.39"
DelayDiffEq = "~5.56"
Turing = "0.40"
2 changes: 1 addition & 1 deletion _quarto.yml
@@ -43,7 +43,7 @@ website:
href: https://turinglang.org/team/
right:
# Current version
- text: "v0.39"
- text: "v0.40"
menu:
- text: Changelog
href: https://turinglang.org/docs/changelog.html
2 changes: 2 additions & 0 deletions developers/compiler/minituring-contexts/index.qmd
@@ -14,6 +14,8 @@ Pkg.instantiate();

In the [Mini Turing]({{< meta minituring >}}) tutorial we developed a miniature version of the Turing language, to illustrate its core design. A passing mention was made of contexts. In this tutorial we develop that aspect of our mini Turing language further to demonstrate how and why contexts are an important part of Turing's design.

Note: The way Turing uses contexts changed somewhat in releases 0.39 and 0.40. The content of this page remains relevant: the principles of how contexts operate are unchanged, and concepts like leaf and parent contexts still exist. However, contexts are no longer used for quite as many things as they once were; most importantly, the choice of whether to accumulate the log joint, log prior, or log likelihood is no longer made by switching contexts. Keep this in mind as you read: the principles remain, but the details have changed. We will update this page once the refactoring of internals happening around releases 0.39 and 0.40 is complete.

# Mini Turing expanded, now with more contexts

If you haven't read [Mini Turing]({{< meta minituring >}}) yet, you should do that first. We start by repeating verbatim much of the code from there. Define the type for holding values for variables:
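The note about leaf and parent contexts can be made concrete with a small sketch (an illustration only, not part of this diff; it assumes DynamicPPL is loaded and that `SamplingContext` is a parent context whose default child is `DefaultContext`):

```julia
using DynamicPPL

# SamplingContext is a parent context; by default it wraps a DefaultContext leaf.
ctx = SamplingContext()

# leafcontext unwraps parent contexts until it reaches the innermost leaf.
leaf = DynamicPPL.leafcontext(ctx)
leaf isa DefaultContext  # expected to hold
```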
8 changes: 4 additions & 4 deletions developers/compiler/model-manual/index.qmd
@@ -33,21 +33,21 @@ Taking the `gdemo` model above as an example, the macro-based definition can be
using DynamicPPL

# Create the model function.
function gdemo2(model, varinfo, context, x)
function gdemo2(model, varinfo, x)
# Assume s² has an InverseGamma distribution.
s², varinfo = DynamicPPL.tilde_assume!!(
context, InverseGamma(2, 3), @varname(s²), varinfo
model.context, InverseGamma(2, 3), @varname(s²), varinfo
)

# Assume m has a Normal distribution.
m, varinfo = DynamicPPL.tilde_assume!!(
context, Normal(0, sqrt(s²)), @varname(m), varinfo
model.context, Normal(0, sqrt(s²)), @varname(m), varinfo
)

# Observe each value of x[i] according to a Normal distribution.
for i in eachindex(x)
_retval, varinfo = DynamicPPL.tilde_observe!!(
context, Normal(m, sqrt(s²)), x[i], @varname(x[i]), varinfo
model.context, Normal(m, sqrt(s²)), x[i], @varname(x[i]), varinfo
)
end

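For completeness, the manually defined evaluator can be wrapped into a `DynamicPPL.Model` together with its data — a minimal sketch assuming the `gdemo2` definition and imports from the diff above (the exact constructor call is an assumption based on the surrounding tutorial, not part of this diff):

```julia
# Wrap the evaluator function and its data into a Model; the NamedTuple keys
# must match the extra arguments of gdemo2 (here, x).
model = DynamicPPL.Model(gdemo2, (; x=[1.5, 4.0]))

# The result can then be sampled like any macro-defined model, e.g.:
# chain = sample(model, NUTS(), 1000)
```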
6 changes: 3 additions & 3 deletions developers/contexts/submodel-condition/index.qmd
@@ -108,13 +108,13 @@ unwrap_sampling_context(ctx::DynamicPPL.SamplingContext) = ctx.context
unwrap_sampling_context(ctx::DynamicPPL.AbstractContext) = ctx

@model function inner()
println("inner context: $(unwrap_sampling_context(__context__))")
println("inner context: $(unwrap_sampling_context(__model__.context))")
x ~ Normal()
return y ~ Normal()
end

@model function outer()
println("outer context: $(unwrap_sampling_context(__context__))")
println("outer context: $(unwrap_sampling_context(__model__.context))")
return a ~ to_submodel(inner())
end

@@ -124,7 +124,7 @@ with_outer_cond = outer() | (@varname(a.x) => 1.0)
# 'Inner conditioning'
inner_cond = inner() | (@varname(x) => 1.0)
@model function outer2()
println("outer context: $(unwrap_sampling_context(__context__))")
println("outer context: $(unwrap_sampling_context(__model__.context))")
return a ~ to_submodel(inner_cond)
end
with_inner_cond = outer2()
2 changes: 1 addition & 1 deletion developers/transforms/dynamicppl/index.qmd
@@ -351,7 +351,7 @@ Hence, one might expect that if we try to evaluate the model using this `VarInfo`
Here, `evaluate!!` returns two things: the model's return value itself (which we defined above to be a `NamedTuple`), and the resulting `VarInfo` post-evaluation.

```{julia}
retval, ret_varinfo = DynamicPPL.evaluate!!(model, vi_linked, DefaultContext())
retval, ret_varinfo = DynamicPPL.evaluate!!(model, vi_linked)
getlogp(ret_varinfo)
```

14 changes: 7 additions & 7 deletions tutorials/variational-inference/index.qmd
@@ -182,8 +182,8 @@ Usually, `q_avg` will perform better than the last-iterate `q_last`.
For instance, we can compare the ELBO of the two:
```{julia}
@info("Objective of q_avg and q_last",
ELBO_q_avg = estimate_objective(AdvancedVI.RepGradELBO(32), q_avg, Turing.Variational.make_logdensity(m)),
ELBO_q_last = estimate_objective(AdvancedVI.RepGradELBO(32), q_last, Turing.Variational.make_logdensity(m))
ELBO_q_avg = estimate_objective(AdvancedVI.RepGradELBO(32), q_avg, LogDensityFunction(m)),
ELBO_q_last = estimate_objective(AdvancedVI.RepGradELBO(32), q_last, LogDensityFunction(m))
)
```
We can see that `ELBO_q_avg` is slightly higher.
@@ -205,9 +205,9 @@ For example, the following callback function estimates the ELBO on `q_avg` every
```{julia}
function callback(; stat, averaged_params, restructure, kwargs...)
if mod(stat.iteration, 10) == 1
q_avg = restructure(averaged_params)
obj = AdvancedVI.RepGradELBO(128)
elbo_avg = estimate_objective(obj, q_avg, Turing.Variational.make_logdensity(m))
q_avg = restructure(averaged_params)
obj = AdvancedVI.RepGradELBO(128)
elbo_avg = estimate_objective(obj, q_avg, LogDensityFunction(m))
(elbo_avg = elbo_avg,)
else
nothing
@@ -223,7 +223,7 @@ q_mf, _, info_mf, _ = vi(m, q_init, n_iters; show_progress=false, callback=callb

Let's plot the result:
```{julia}
iters = 1:10:length(info_mf)
elbo_mf = [i.elbo_avg for i in info_mf[iters]]
Plots.plot!(iters, elbo_mf, xlabel="Iterations", ylabel="ELBO", label="callback", ylims=(-200,Inf))
```
@@ -247,7 +247,7 @@ _, _, info_adam, _ = vi(m, q_init, n_iters; show_progress=false, callback=callba
```

```{julia}
iters = 1:10:length(info_mf)
elbo_adam = [i.elbo_avg for i in info_adam[iters]]
Plots.plot(iters, elbo_mf, xlabel="Iterations", ylabel="ELBO", label="DoWG")
Plots.plot!(iters, elbo_adam, xlabel="Iterations", ylabel="ELBO", label="Adam")
2 changes: 1 addition & 1 deletion usage/automatic-differentiation/index.qmd
@@ -91,7 +91,7 @@ model = gdemo(1.5, 2)

for adtype in [AutoForwardDiff(), AutoReverseDiff()]
result = run_ad(model, adtype; benchmark=true)
@show result.time_vs_primal
@show result.grad_time / result.primal_time
end
```

9 changes: 3 additions & 6 deletions usage/modifying-logprob/index.qmd
@@ -47,13 +47,10 @@ using LinearAlgebra
end
```

Note that `@addlogprob!` always increases the accumulated log probability, regardless of the provided
sampling context.
For instance, if you do not want to apply `@addlogprob!` when evaluating the prior of your model but only when computing the log likelihood and the log joint probability, then you should [check the type of the internal variable `__context_`](https://github.com/TuringLang/DynamicPPL.jl/issues/154), as in the following example:
Note that `@addlogprob!` increases the accumulated log likelihood.
If instead you want to add to the log prior, you can use

```{julia}
#| eval: false
if DynamicPPL.leafcontext(__context__) !== Turing.PriorContext()
@addlogprob! myloglikelihood(x, μ)
end
@addlogprob! (; logprior=value_goes_here)
```