
Commit 57e6f9c

penelopeysm and mhauru authored

Remove calls to resetlogp!! & add changelog (#2650)

* Remove calls to resetlogp!!
* Add a changelog for 0.40
* Update HISTORY.md

Co-authored-by: Markus Hauru <[email protected]>

1 parent 5743ff7 commit 57e6f9c

File tree

2 files changed: +52 −4 lines changed


HISTORY.md

Lines changed: 52 additions & 1 deletion
@@ -1,6 +1,57 @@
# 0.40.0

## Breaking changes

**DynamicPPL 0.37**

Turing.jl v0.40 updates DynamicPPL compatibility to 0.37.
The summary of the changes provided here is intended for end-users of Turing.
If you are a package developer, or would otherwise like to understand these changes in depth, please see [the DynamicPPL changelog](https://github.com/TuringLang/DynamicPPL.jl/blob/main/HISTORY.md#0370).
- **`@submodel`** is now completely removed; please use `to_submodel` instead.

- **Prior and likelihood calculations** are now completely separated in Turing. Previously, the log-density was accumulated in a single field, so there was no clear way to separate the prior and likelihood components.

  + **`@addlogprob! f`**, where `f` is a float, now adds to the likelihood by default.
  + To control which log-density component to add to, use **`@addlogprob! (; logprior=x, loglikelihood=y)`** instead.
  + Consequently, `PriorContext` and `LikelihoodContext` are no longer needed, and both have been removed.

- The special **`__context__`** variable has been removed. If you still need to access the evaluation context, it is now available as `__model__.context`.
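The two forms of `@addlogprob!` under the new behaviour can be sketched as follows; the `demo` model and its penalty terms are purely illustrative, not part of this change:

```julia
using Turing

@model function demo(y)
    x ~ Normal()
    # A plain float now adds to the *likelihood* by default:
    @addlogprob! -0.5 * (y - x)^2
    # A NamedTuple controls which component each term is added to:
    @addlogprob! (; logprior=-abs(x), loglikelihood=0.0)
end
```

When sampled, the contributions show up under the corresponding `chain[:logprior]` and `chain[:loglikelihood]` entries described below.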
**Log-density in chains**

When sampling from a Turing model, the resulting `MCMCChains.Chains` object now contains not only the log-joint (accessible via `chain[:lp]`) but also the log-prior and log-likelihood (`chain[:logprior]` and `chain[:loglikelihood]`, respectively).

These values now correspond to the log-density of the sampled variables exactly as per the model definition / user parameterisation, and thus ignore any linking (transformation to unconstrained space).
For example, if the model is `@model f() = x ~ LogNormal()`, `chain[:lp]` would always contain the value of `logpdf(LogNormal(), x)` for each sampled value of `x`.
Previously these values could be incorrect if linking had occurred: some samplers would return `logpdf(Normal(), log(x))`, i.e. the log-density with respect to the transformed distribution.
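The discrepancy this fixes can be seen with `Distributions` alone (a standalone sketch, not Turing code; the value of `x` is arbitrary):

```julia
using Distributions

x = 2.5  # a sampled value on the original (constrained) scale
lp_model  = logpdf(LogNormal(), x)    # what chain[:lp] now reports
lp_linked = logpdf(Normal(), log(x))  # what some samplers used to report
# The two differ by the log-Jacobian of the link transform, here log(x):
@assert lp_linked - lp_model ≈ log(x)
```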
**Gibbs sampler**

When using Turing's Gibbs sampler, e.g. `Gibbs(:x => MH(), :y => HMC(0.1, 20))`, the conditioned variables (for example `y` during the MH step, or `x` during the HMC step) are treated as true observations.
Thus, the log-density associated with them is added to the likelihood.
Previously, it was effectively added to the prior (in the sense that it was ignored when `LikelihoodContext` was used).
This is unlikely to affect users, but we mention it here to be explicit.
This change only affects the log-probabilities as the Gibbs component samplers see them; the resulting chain still includes the usual log-prior, log-likelihood, and log-joint, as described above.
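As a minimal sketch of the pair syntax mentioned above (the `gdemo` model is hypothetical):

```julia
using Turing

@model function gdemo()
    x ~ Normal()
    y ~ Normal(x)
end

# Alternate an MH update for x with an HMC update for y; during each
# component step, the other variable is treated as a true observation,
# so its log-density now counts towards the likelihood.
chain = sample(gdemo(), Gibbs(:x => MH(), :y => HMC(0.1, 20)), 100)
```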
**Particle Gibbs**

Previously, only 'true' observations (i.e., `x ~ dist` where `x` is a model argument or conditioned upon) would trigger resampling of particles.
Specifically, there were two cases where resampling would not be triggered:

- Calls to `@addlogprob!`
- Gibbs-conditioned variables, e.g. `y` in `Gibbs(:x => PG(20), :y => MH())`

Turing 0.40 changes this such that both of the above now cause resampling.
(The second case follows from the changes to the Gibbs sampler, see above.)

This release also fixes a bug where, if the model ended with one of these statements, its contribution to the particle weight would be ignored, leading to incorrect results.
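For instance, a model such as the following (a hypothetical `pgdemo`, shown only for illustration) now resamples particles at the `@addlogprob!` call when sampled with `PG`:

```julia
using Turing

@model function pgdemo(y)
    x ~ Normal()
    # Under Turing 0.40 this call also triggers particle resampling,
    # and its weight is counted even as the model's final statement.
    @addlogprob! logpdf(Normal(x), y)
end

chain = sample(pgdemo(1.0), PG(20), 100)
```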
## Other changes

- Sampling with `Prior()` should now be about twice as fast, because we now avoid evaluating the model twice on every iteration.
- `Turing.Inference.Transition` now has different fields.
  If `t isa Turing.Inference.Transition`, then `t.stat` is always a `NamedTuple`, never `nothing` (if it genuinely carries no information, it is an empty `NamedTuple`).
  Furthermore, `t.lp` has been split into `t.logprior` and `t.loglikelihood` (see also the 'Log-density in chains' section above).

# 0.39.9

src/mcmc/particle_mcmc.jl

Lines changed: 0 additions & 3 deletions
@@ -211,7 +211,6 @@ function DynamicPPL.initialstep(
     # Reset the VarInfo.
     vi = DynamicPPL.setacc!!(vi, ProduceLogLikelihoodAccumulator())
     set_all_del!(vi)
-    vi = DynamicPPL.resetlogp!!(vi)
     vi = DynamicPPL.empty!!(vi)

     # Create a new set of particles.
@@ -339,7 +338,6 @@ function DynamicPPL.initialstep(
     vi = DynamicPPL.setacc!!(vi, ProduceLogLikelihoodAccumulator())
     # Reset the VarInfo before new sweep
     set_all_del!(vi)
-    vi = DynamicPPL.resetlogp!!(vi)

     # Create a new set of particles
     num_particles = spl.alg.nparticles
@@ -370,7 +368,6 @@ function AbstractMCMC.step(
     # Reset the VarInfo before new sweep.
     vi = state.vi
     vi = DynamicPPL.setacc!!(vi, ProduceLogLikelihoodAccumulator())
-    vi = DynamicPPL.resetlogp!!(vi)

     # Create reference particle for which the samples will be retained.
     unset_all_del!(vi)
