Support DPPL 0.37 #2550

Merged: 52 commits, Aug 12, 2025
Changes from 15 commits

Commits (52 total)
cea1f7d
First efforts towards DPPL 0.37 compat, WIP
mhauru May 15, 2025
5d860d9
More DPPL 0.37 compat work, WIP
mhauru May 20, 2025
c7c4638
Add [sources] for [email protected]
penelopeysm Jul 17, 2025
f16a5cf
Remove context argument from `LogDensityFunction`
penelopeysm Jul 19, 2025
98d5e7a
Fix MH
penelopeysm Jul 19, 2025
73e127b
Remove spurious logging
penelopeysm Jul 19, 2025
ce0c782
Remove residual OptimizationContext
penelopeysm Jul 19, 2025
4d03c07
Delete files that were removed in previous releases
penelopeysm Jul 19, 2025
06fec2d
Fix typo
penelopeysm Jul 19, 2025
0af8725
Simplify ESS
penelopeysm Jul 19, 2025
3d44c12
Fix LDF
penelopeysm Jul 19, 2025
a1837b5
Fix Prior(), fix a couple more imports
penelopeysm Jul 19, 2025
17efb8c
fixes
penelopeysm Jul 19, 2025
d62ad82
actually fix prior
penelopeysm Jul 19, 2025
aac93f1
Remove extra return value from tilde_assume
penelopeysm Jul 19, 2025
e903d1c
fix ldf
penelopeysm Jul 19, 2025
fd5a815
actually fix prior
penelopeysm Jul 19, 2025
10a130a
fix HMC log-density
penelopeysm Jul 20, 2025
c630723
fix ldf
penelopeysm Jul 20, 2025
9cbb2e9
fix make_evaluate_...
penelopeysm Jul 20, 2025
335cd2a
more fixes for evaluate!!
penelopeysm Jul 20, 2025
c912fb9
fix hmc
penelopeysm Jul 20, 2025
195f819
fix run_ad
penelopeysm Jul 20, 2025
cd52e9f
even more fixes (oh goodness when will this end)
penelopeysm Jul 20, 2025
9360f18
more fixes
penelopeysm Jul 20, 2025
64ebd92
fix
penelopeysm Jul 20, 2025
283d4dd
more fix fix fix
penelopeysm Jul 20, 2025
b346198
fix return values of tilde pipeline
penelopeysm Jul 20, 2025
9012774
even more fixes
penelopeysm Jul 20, 2025
e600589
Fix missing import
penelopeysm Jul 20, 2025
3d5072f
More MH fixes
penelopeysm Jul 20, 2025
37466cc
Fix conversion
penelopeysm Jul 20, 2025
1b73e5a
don't think it really needs those type params
penelopeysm Jul 20, 2025
66a8544
implement copy for LogPriorWithoutJacAcc
penelopeysm Jul 20, 2025
98e70c2
Even more fixes
penelopeysm Jul 20, 2025
d2c1c92
More fixes; I think the remaining failures are pMCMC related
penelopeysm Jul 20, 2025
a21f24d
Merge branch 'breaking' into mhauru/dppl-0.37
penelopeysm Jul 21, 2025
11a2a31
Fix merge
penelopeysm Jul 21, 2025
7ca59ce
Merge branch 'breaking' into mhauru/dppl-0.37
penelopeysm Jul 28, 2025
c062867
DPPL 0.37 compat for particle MCMC (#2625)
mhauru Jul 31, 2025
7124864
"Fixes" for PG-in-Gibbs (#2629)
penelopeysm Jul 31, 2025
8fdecc0
Use accumulators to fix all logp calculations when sampling (#2630)
penelopeysm Aug 1, 2025
27aab23
Merge branch 'breaking' into mhauru/dppl-0.37
penelopeysm Aug 1, 2025
119c818
InitContext isn't for 0.37, update comments
penelopeysm Aug 1, 2025
b41a4b1
Fix merge
penelopeysm Aug 1, 2025
d92fd56
Do not re-evaluate model for Prior (#2644)
penelopeysm Aug 5, 2025
806c82d
No need to test AD for SamplingContext{<:HMC} (#2645)
penelopeysm Aug 5, 2025
5743ff7
change breaking -> main
penelopeysm Aug 7, 2025
57e6f9c
Remove calls to resetlogp!! & add changelog (#2650)
penelopeysm Aug 11, 2025
bb21e1e
Remove `[sources]`
penelopeysm Aug 11, 2025
1bc2fbf
Unify Turing `Transition`s, fix some tests (#2651)
penelopeysm Aug 12, 2025
247aee9
Update changelog for PG in Gibbs
penelopeysm Aug 12, 2025
HISTORY.md: 52 additions & 1 deletion
@@ -1,6 +1,57 @@
# 0.40.0

[...]
## Breaking changes

**DynamicPPL 0.37**

Turing.jl v0.40 updates DynamicPPL compatibility to 0.37.
The summary of the changes provided here is intended for end-users of Turing.
If you are a package developer, or would otherwise like to understand these changes in-depth, please see [the DynamicPPL changelog](https://github.com/TuringLang/DynamicPPL.jl/blob/main/HISTORY.md#0370).

- **`@submodel`** is now completely removed; please use `to_submodel`.

- **Prior and likelihood calculations** are now completely separated in Turing. Previously, the log-density was accumulated in a single field, so there was no clear way to separate the prior and likelihood components.

  + **`@addlogprob! f`**, where `f` is a float, now adds to the likelihood by default.
  + You can instead use **`@addlogprob! (; logprior=x, loglikelihood=y)`** to control which log-density component to add to (see the example after this list).
  + This means that usage of `PriorContext` and `LikelihoodContext` is no longer needed, and these have now been removed.
- The special **`__context__`** variable has been removed. If you still need to access the evaluation context, it is now available as `__model__.context`.
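
A minimal sketch of the new `@addlogprob!` behaviour (the model `demo`, its distributions, and the data are made up for illustration, not part of the changelog):

```julia
using Turing

@model function demo(y)
    x ~ Normal()
    # A plain float now adds to the likelihood by default:
    @addlogprob! -0.5 * abs2(y - x)
    # To target a specific log-density component, pass a NamedTuple:
    @addlogprob! (; logprior=0.0, loglikelihood=logpdf(Normal(x, 1), y))
end
```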

**Log-density in chains**

When sampling from a Turing model, the resulting `MCMCChains.Chains` object now contains not only the log-joint (accessible via `chain[:lp]`) but also the log-prior and log-likelihood (`chain[:logprior]` and `chain[:loglikelihood]` respectively).

These values now correspond to the log-density of the sampled variables exactly as written in the model definition (i.e. in the user's parameterisation), and thus ignore any linking (transformation to unconstrained space).
For example, if the model is `@model f() = x ~ LogNormal()`, `chain[:lp]` would always contain the value of `logpdf(LogNormal(), x)` for each sampled value of `x`.
Previously these values could be incorrect if linking had occurred: some samplers would return `logpdf(Normal(), log(x))`, i.e. the log-density with respect to the transformed distribution.
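
A minimal sketch using the model above (the sampler choice and sample count are arbitrary):

```julia
using Turing

@model f() = x ~ LogNormal()

chain = sample(f(), NUTS(), 1000)
chain[:lp]             # log-joint: logpdf(LogNormal(), x) for each sampled x
chain[:logprior]       # log-prior, in the model's own parameterisation
chain[:loglikelihood]  # log-likelihood (zero here, as the model has no observations)
```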

**Gibbs sampler**

When using Turing's Gibbs sampler, e.g. `Gibbs(:x => MH(), :y => HMC(0.1, 20))`, the conditioned variables (for example `y` during the MH step, or `x` during the HMC step) are treated as true observations.
Thus the log-density associated with them is added to the likelihood.
Previously these would effectively be added to the prior (in the sense that if `LikelihoodContext` was used they would be ignored).
This is unlikely to affect users but we mention it here to be explicit.
This change only affects the log probabilities as the Gibbs component samplers see them; the resulting chain will include the usual log prior, likelihood, and joint, as described above.
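
As a concrete illustration (the model `gdemo` and the sampler settings are made up for this sketch):

```julia
using Turing

@model function gdemo()
    x ~ Normal()
    y ~ Normal(x)
end

# x is updated by MH, y by HMC. During the MH step for x, y is held fixed and
# its log-density contribution is now counted towards the likelihood (and vice
# versa during the HMC step for y).
chain = sample(gdemo(), Gibbs(:x => MH(), :y => HMC(0.1, 20)), 500)
```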

**Particle Gibbs**

Previously, only 'true' observations (i.e., `x ~ dist` where `x` is a model argument or conditioned upon) would trigger resampling of particles.
Specifically, there were two cases where resampling would not be triggered:

- Calls to `@addlogprob!`
- Gibbs-conditioned variables: e.g. `y` in `Gibbs(:x => PG(20), :y => MH())`

Turing 0.40 changes this such that both of the above cause resampling.
(The second case follows from the changes to the Gibbs sampler, see above.)

This release also fixes a bug where, if the model ended with one of these statements, their contribution to the particle weight would be ignored, leading to incorrect results.
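
A small sketch of a model affected by this change (the model `pgdemo` and its data are made up for illustration):

```julia
using Turing

@model function pgdemo(y)
    x ~ Normal()
    # In Turing 0.40 this @addlogprob! call triggers particle resampling;
    # previously it did not, and its contribution to the particle weight could
    # be lost entirely if it was the model's final statement.
    @addlogprob! logpdf(Normal(x, 1), y)
end

chain = sample(pgdemo(2.0), PG(20), 200)
```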

## Other changes

- Sampling using `Prior()` should now be about twice as fast, because the model is no longer evaluated twice on every iteration.
- `Turing.Inference.Transition` now has different fields.
If `t isa Turing.Inference.Transition`, `t.stat` is always a NamedTuple, not `nothing` (if it genuinely has no information then it's an empty NamedTuple).
Furthermore, `t.lp` has now been split up into `t.logprior` and `t.loglikelihood` (see also 'Log-density in chains' section above).

# 0.39.9

Project.toml: 0 additions & 3 deletions
@@ -90,6 +90,3 @@ julia = "1.10.8"
[extras]
DynamicHMC = "bbc10e6e-7c05-544b-b16e-64fede858acb"
Optim = "429524aa-4258-5aef-a3af-852621145aeb"

[sources]
DynamicPPL = {url = "https://github.com/TuringLang/DynamicPPL.jl", rev = "breaking"}
ext/TuringDynamicHMCExt.jl: 5 additions & 13 deletions
@@ -63,7 +63,7 @@ function DynamicPPL.initialstep(

# Define log-density function.
ℓ = DynamicPPL.LogDensityFunction(
-        model, DynamicPPL.getlogjoint, vi; adtype=spl.alg.adtype
+        model, DynamicPPL.getlogjoint_internal, vi; adtype=spl.alg.adtype
)

# Perform initial step.
@@ -73,14 +73,9 @@
steps = DynamicHMC.mcmc_steps(results.sampling_logdensity, results.final_warmup_state)
Q, _ = DynamicHMC.mcmc_next_step(steps, results.final_warmup_state.Q)

-    # Update the variables.
-    vi = DynamicPPL.unflatten(vi, Q.q)
-    # TODO(DPPL0.37/penelopeysm): This is obviously incorrect. Fix this.
-    vi = DynamicPPL.setloglikelihood!!(vi, Q.ℓq)
-    vi = DynamicPPL.setlogprior!!(vi, 0.0)
-
-    # Create first sample and state.
-    sample = Turing.Inference.Transition(model, vi)
+    vi = DynamicPPL.unflatten(vi, Q.q)
+    sample = Turing.Inference.Transition(model, vi, nothing)
state = DynamicNUTSState(ℓ, vi, Q, steps.H.κ, steps.ϵ)

return sample, state
Expand All @@ -99,12 +94,9 @@ function AbstractMCMC.step(
steps = DynamicHMC.mcmc_steps(rng, spl.alg.sampler, state.metric, ℓ, state.stepsize)
Q, _ = DynamicHMC.mcmc_next_step(steps, state.cache)

-    # Update the variables.
-    vi = DynamicPPL.unflatten(vi, Q.q)
-    vi = DynamicPPL.setlogp!!(vi, Q.ℓq)
-
-    # Create next sample and state.
-    sample = Turing.Inference.Transition(model, vi)
+    vi = DynamicPPL.unflatten(vi, Q.q)
+    sample = Turing.Inference.Transition(model, vi, nothing)
newstate = DynamicNUTSState(ℓ, vi, Q, state.metric, state.stepsize)

return sample, newstate
ext/TuringOptimExt.jl: 3 additions & 3 deletions
@@ -102,7 +102,7 @@ function Optim.optimize(
options::Optim.Options=Optim.Options();
kwargs...,
)
-    f = Optimisation.OptimLogDensity(model, Optimisation.getlogjoint_without_jacobian)
+    f = Optimisation.OptimLogDensity(model, DynamicPPL.getlogjoint)
init_vals = DynamicPPL.getparams(f.ldf)
optimizer = Optim.LBFGS()
return _map_optimize(model, init_vals, optimizer, options; kwargs...)
@@ -124,7 +124,7 @@ function Optim.optimize(
options::Optim.Options=Optim.Options();
kwargs...,
)
-    f = Optimisation.OptimLogDensity(model, Optimisation.getlogjoint_without_jacobian)
+    f = Optimisation.OptimLogDensity(model, DynamicPPL.getlogjoint)
init_vals = DynamicPPL.getparams(f.ldf)
return _map_optimize(model, init_vals, optimizer, options; kwargs...)
end
@@ -140,7 +140,7 @@ function Optim.optimize(
end

function _map_optimize(model::DynamicPPL.Model, args...; kwargs...)
-    f = Optimisation.OptimLogDensity(model, Optimisation.getlogjoint_without_jacobian)
+    f = Optimisation.OptimLogDensity(model, DynamicPPL.getlogjoint)
return _optimize(f, args...; kwargs...)
end
