breaking - v0.43 (#2733)

Merged
penelopeysm merged 58 commits into main from breaking on Mar 5, 2026

Conversation

@penelopeysm
Member

@penelopeysm penelopeysm commented Dec 5, 2025

This minor version does the following:

  • DynamicPPL v0.40
    • Docs are already written up in the main docs repo and just need a release
  • New optimisation interface
    • Docs are updated locally in the Turing repo -- we can shift over to the Quarto docs repo after releasing
  • New MH sampler interface

Things that NEED to be fixed before releasing:

  • Abstract type in SMC TracedModel -- worked around using a tactic based on VectorParamAccumulator; I benchmarked it and it isn't a problem.
  • Optimisation bug with parameter names / values (specifically for LKJCholesky(3, x), where it generates 6 names but 9 values) -- the Turing code for this has already been written, and generally works, but requires upstream changes

@github-actions
Contributor

github-actions bot commented Dec 5, 2025

Turing.jl documentation for PR #2733 is available at:
https://TuringLang.github.io/Turing.jl/previews/PR2733/

@codecov

codecov bot commented Jan 7, 2026

Codecov Report

❌ Patch coverage is 90.35917% with 51 lines in your changes missing coverage. Please review.
✅ Project coverage is 86.32%. Comparing base (184d592) to head (427fab9).
⚠️ Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
src/optimisation/init.jl 81.45% 23 Missing ⚠️
src/optimisation/Optimisation.jl 74.24% 17 Missing ⚠️
src/optimisation/stats.jl 94.36% 4 Missing ⚠️
src/mcmc/gibbs_conditional.jl 93.02% 3 Missing ⚠️
src/mcmc/particle_mcmc.jl 71.42% 2 Missing ⚠️
src/common.jl 96.00% 1 Missing ⚠️
src/mcmc/mh.jl 99.18% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2733      +/-   ##
==========================================
- Coverage   87.03%   86.32%   -0.72%     
==========================================
  Files          20       22       +2     
  Lines        1296     1433     +137     
==========================================
+ Hits         1128     1237     +109     
- Misses        168      196      +28     


Closes #2232
Closes #2363
Closes #2745
Closes #2732
Closes #1775
Closes #2735
Closes #2734
Closes #2634

This PR redesigns Turing's optimisation interface as discussed in #2634.

# How to review?

The key things that are changed here are:

- The fields of the `ModeResult` struct are slightly different. We get
rid of the old `NamedArray` that was stored in there. Instead, users who
really, really need it can generate it themselves via
`StatsBase.coef(result)`. The primary interface for getting parameters
will be `result.params`, which is a `Dict{VarName}` (obviously can be
changed to VNT later).

- Linking is now user-controlled and depends on the `link` parameter
passed in.

- Constraints are now passed in as either a NamedTuple or a
Dict{VarName} (again pending VNT). Values are always provided in
unlinked space.

- Initial parameters are now supplied as an `AbstractInitStrategy`. The
code in `src/optimisation/init.jl` will attempt to sample initial
parameters using that strategy, but will also make sure that the sampled
value obeys the constraints.

The most meaningful section of code is the definition of
`estimate_mode`, plus `src/optimisation/init.jl` which handles
everything to do with initialisation and constraints. The rest of the
changes are just tweaks to fit the new interface.
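As a rough sketch of the new surface area (the model below and the way the result is consumed are illustrative assumptions, not a definitive statement of the final API):

```julia
using Turing, StatsBase

# Hypothetical toy model for illustration only.
@model function demo()
    σ ~ truncated(Normal(0, 1); lower=0)
    x ~ Normal(0, σ)
end

result = maximum_a_posteriori(demo())

# The primary interface for parameters is now a Dict keyed by VarName
# (values always in unlinked space), rather than the old NamedArrays storage:
result.params

# Users who really need the old coefficient table can build it on demand:
StatsBase.coef(result)
```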
The usual game of trying to see if DPPL breaks anything upstream.

(20 minutes later)

Okay, maybe '*if* DPPL breaks anything upstream' was a bit optimistic.
More like seeing *what* DPPL broke upstream.
@penelopeysm penelopeysm marked this pull request as draft February 7, 2026 15:48
@penelopeysm
Member Author

This isn't yet ready for release; DPPL isn't released yet, and the MH bit needs a review. But tests are passing.

Member

@sunxd3 sunxd3 left a comment


the logjac is tricky stuff, I think it's correct, but I guess only tests and time will tell

thanks for this, Penny!

function DynamicPPL.init(
rng::Random.AbstractRNG, vn::VarName, prior::Distribution, strategy::InitFromProposals
)
if haskey(strategy.proposals, vn)
Member


haskey feels a bit dangerous, what if proposals have x but vn is x[1]?

also, if this doesn't match, it might silently fall through to the else branch?
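A small illustration of the exact-match behaviour being discussed (a sketch assuming `@varname` from AbstractPPL; the proposal value is a placeholder):

```julia
using AbstractPPL: @varname

proposals = Dict{Any,Any}(@varname(x) => :some_proposal)

# Dict lookup uses exact VarName equality, not subsumption, so a proposal
# registered under `x` is not found when the model yields `x[1]`:
haskey(proposals, @varname(x))     # true
haskey(proposals, @varname(x[1]))  # false -- silently falls to the else branch
```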

Member Author


It would silently do the else branch, that's true. I need to sit down and work out these edge cases. (To be fair, the old one had even weirder behaviour, but that's not an excuse.)

Member Author

@penelopeysm penelopeysm Feb 23, 2026


I think it's very hard to do the correct thing though. In the else branch we could kind of do a 'stepping upwards' loop, like if vn is x[1].a we check whether x[1] exists or x exists and if so warn. But I think that would just create other edge cases, and it would also be bad for performance.

Maybe the best answer is before we start sampling (somewhere in the first AbstractMCMC.step) we should do one model evaluation where we iterate through the model and print out the proposal for each parameter? The user should be able to silence that with verbose=false, and if not then it's a good check for them.

Member Author


vns_with_proposal::Set{VarName}
end
function (s::StoreUnspecifiedPriors)(val, tval, logjac, vn, dist::Distribution)
return if vn in s.vns_with_proposal
Member


similar to the question above, what to do in the case of subsumption?

# Convert all keys to VarNames.
vn, proposal = pair
vn = _to_varname(vn)
if !haskey(raw_vals, vn)
Member


I think the behavior is correct -- if the user doesn't give init info, then we skip.

But it also makes me think: what if the user gives the wrong key for initialization? Then this is kind of silent.

)

# Calculate the log-acceptance probability.
log_a = (
Member


maybe some NaN/Inf handling? (can lp get these values?)

Member Author

@penelopeysm penelopeysm Feb 23, 2026


Depends. Sometimes -Inf is fine in the sense that you just make it reject the sample (which can happen either by sampling a parameter value outside the support, or if the user just wants to signal some failure condition with @addlogprob! -Inf). The main problem I discovered with this was when the initial parameters already gave an lp of -Inf, which is why I inserted the special handler on line 286 onwards.

NaN on the other hand I think can only happen if the value itself is already a NaN or if there's a bug in the implementation of logpdf, which should probably not happen. But we could insert a warning here? I think the current behaviour is that it will never be accepted (because comparisons against NaN always return false), which is fairly sensible, but I could see a warning being nice.
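A sketch of why a NaN log-acceptance probability can never be accepted (plain Julia, no Turing involved):

```julia
# Every ordered comparison involving NaN evaluates to false, so an
# acceptance check of the form `log_a >= log(rand())` always rejects
# when log_a is NaN:
log_a = NaN
println(log_a >= log(rand()))  # false -- NaN never wins the comparison
println(NaN >= -Inf)           # false, even against -Inf
println(NaN == NaN)            # false
```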

Member Author


Added a warning (also in #2774)

src/mcmc/mh.jl Outdated

proposals = NamedTuple{tuple(prop_syms...)}(tuple(props...))
**Note that when using conditional proposals, the values obtained by indexing into the
`VarNamedTuple` are always in unlinked space.** Sometimes, you may want to define a random-walk
Member


side question: do we have a convention on the use of link vs transform vs constrained?

Member Author


Unfortunately not. I like transform the best out of them, so I'll try to stick to that where possible in the docstrings, but a lot of the code still uses link.

My perspective on this is that transform is generic and can be anything, but link is the special transform that maps to unconstrained space. That might be a bit questionable because in theory there are many transforms that will map to unconstrained space, so in practice what it really means is whatever Bijectors.bijector returns.
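To make the distinction concrete (a sketch assuming Bijectors.jl's public `bijector` and `inverse` functions):

```julia
using Bijectors, Distributions

# In this convention, "link" is the particular transform that
# Bijectors.bijector returns: the canonical map from a distribution's
# support to unconstrained space.
b = bijector(Beta(2, 1))   # a logit-style map from (0, 1) to the reals
y = b(0.5)                 # unconstrained ("linked") value
x = inverse(b)(y)          # back to constrained space, ≈ 0.5
```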

@penelopeysm penelopeysm marked this pull request as ready for review March 4, 2026 01:55
@penelopeysm
Member Author

This is pretty much ready to go once CI passes.

Confusingly, there's a case where CI on 1.10 hangs midway through the MH tests. I have no idea why.

@penelopeysm
Member Author

Oh, I thought I would finally go to bed, but actually I can reproduce it locally just by running `import Pkg; Pkg.test("Turing"; test_args=["mh"])`.

Ctrl-C'ing at the place where it hangs gives the following stack trace:

inference with samplers: Error During Test at /Users/pyong/ppl/lib/test/runtests.jl:54
  Got exception outside of a @test
  LoadError: InterruptException:
  Stacktrace:
    [1] lookup(pointer::Ptr{Nothing})
      @ Base.StackTraces ./stacktraces.jl:108
    [2] firstcaller
      @ ./deprecated.jl:157 [inlined]
    [3] firstcaller
      @ ./deprecated.jl:152 [inlined]
    [4] macro expansion
      @ ./deprecated.jl:132 [inlined]
    [5] macro expansion
      @ ./logging.jl:390 [inlined]
    [6] depwarn(msg::String, funcsym::Symbol; force::Bool)
      @ Base ./deprecated.jl:127
    [7] depwarn(msg::String, funcsym::Symbol)
      @ Base ./deprecated.jl:121
    [8] Distributions.MvNormal(μ::Vector{Float64}, σ::Float64)
      @ Distributions ./deprecated.jl:104
    [9] init_strategy_constructor
      @ ~/ppl/lib/src/mcmc/mh.jl:233 [inlined]
   [10] step(rng::StableRNGs.LehmerRNG, model::DynamicPPL.Model{Main.MHTests.var"#f#8", (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext, false}, spl::Turing.Inference.MH{Turing.Inference.var"#init_strategy_constructor#44"{Tuple{Pair{AbstractPPL.VarName{:x, AbstractPPL.Iden}, Turing.Inference.LinkedRW{Float64}}}}, DynamicPPL.LinkSome{Set{AbstractPPL.VarName}, DynamicPPL.UnlinkAll}}, old_vi::DynamicPPL.VarInfo{DynamicPPL.UnlinkSome{Set{AbstractPPL.VarName{:y, AbstractPPL.Iden}}, DynamicPPL.LinkAll}, DynamicPPL.VarNamedTuples.VarNamedTuple{(:x, :y), Tuple{DynamicPPL.LinkedVectorValue{Vector{Float64}, Bijectors.VectorBijectors.OnlyWrap{Bijectors.VectorBijectors.TypedIdentity}}, DynamicPPL.VectorValue{Vector{Float64}, Bijectors.VectorBijectors.OnlyWrap{Bijectors.VectorBijectors.TypedIdentity}}}}, DynamicPPL.AccumulatorTuple{6, @NamedTuple{LogPrior::DynamicPPL.LogPriorAccumulator{Float64}, LogJacobian::DynamicPPL.LogJacobianAccumulator{Float64}, LogLikelihood::DynamicPPL.LogLikelihoodAccumulator{Float64}, RawValues::DynamicPPL.VNTAccumulator{:RawValues, DynamicPPL.GetRawValues, DynamicPPL.VarNamedTuples.VarNamedTuple{(:x, :y), Tuple{Float64, Float64}}}, MHLinkedValues::DynamicPPL.VNTAccumulator{:MHLinkedValues, Turing.Inference.StoreLinkedValues, DynamicPPL.VarNamedTuples.VarNamedTuple{(:x,), Tuple{Vector{Float64}}}}, MHUnspecifiedPriors::DynamicPPL.VNTAccumulator{:MHUnspecifiedPriors, Turing.Inference.StoreUnspecifiedPriors, DynamicPPL.VarNamedTuples.VarNamedTuple{(:y,), Tuple{Distributions.Beta{Float64}}}}}}}; discard_sample::Bool, kwargs::@Kwargs{initial_params::DynamicPPL.InitFromPrior})
      @ Turing.Inference ~/ppl/lib/src/mcmc/mh.jl:351
   [11] _step_or_step_warmup(::Int64, ::Int64, ::StableRNGs.LehmerRNG, ::Vararg{Any}; kwargs::@Kwargs{discard_sample::Bool, initial_params::DynamicPPL.InitFromPrior})
      @ AbstractMCMC ~/.julia/packages/AbstractMCMC/oqm6Y/src/sample.jl:0
   [12] _step_or_step_warmup
      @ ~/.julia/packages/AbstractMCMC/oqm6Y/src/sample.jl:124 [inlined]
   [13] macro expansion
      @ ~/.julia/packages/AbstractMCMC/oqm6Y/src/sample.jl:315 [inlined]
   [14] (::AbstractMCMC.var"#31#32"{Nothing, Int64, Int64, Int64, UnionAll, Nothing, @Kwargs{initial_params::DynamicPPL.InitFromPrior}, StableRNGs.LehmerRNG, DynamicPPL.Model{Main.MHTests.var"#f#8", (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext, false}, Turing.Inference.MH{Turing.Inference.var"#init_strategy_constructor#44"{Tuple{Pair{AbstractPPL.VarName{:x, AbstractPPL.Iden}, Turing.Inference.LinkedRW{Float64}}}}, DynamicPPL.LinkSome{Set{AbstractPPL.VarName}, DynamicPPL.UnlinkAll}}, Int64, Float64, Int64})()
      @ AbstractMCMC ~/.julia/packages/AbstractMCMC/oqm6Y/src/logging.jl:134
   [15] with_logstate(f::Function, logstate::Any)
      @ Base.CoreLogging ./logging.jl:517
   [16] with_logger(f::Function, logger::LoggingExtras.TeeLogger{Tuple{LoggingExtras.EarlyFilteredLogger{TerminalLoggers.TerminalLogger, AbstractMCMC.var"#1#3"{Module}}, LoggingExtras.EarlyFilteredLogger{Logging.ConsoleLogger, AbstractMCMC.var"#2#4"{Module}}}})
      @ Base.CoreLogging ./logging.jl:630
   [17] with_progresslogger(f::Function, _module::Module, logger::Logging.ConsoleLogger)
      @ AbstractMCMC ~/.julia/packages/AbstractMCMC/oqm6Y/src/logging.jl:157
   [18] macro expansion
      @ ~/.julia/packages/AbstractMCMC/oqm6Y/src/logging.jl:133 [inlined]
   [19] mcmcsample(rng::StableRNGs.LehmerRNG, model::DynamicPPL.Model{Main.MHTests.var"#f#8", (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext, false}, sampler::Turing.Inference.MH{Turing.Inference.var"#init_strategy_constructor#44"{Tuple{Pair{AbstractPPL.VarName{:x, AbstractPPL.Iden}, Turing.Inference.LinkedRW{Float64}}}}, DynamicPPL.LinkSome{Set{AbstractPPL.VarName}, DynamicPPL.UnlinkAll}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, num_warmup::Int64, discard_initial::Int64, thinning::Int64, chain_type::Type, initial_state::Nothing, kwargs::@Kwargs{initial_params::DynamicPPL.InitFromPrior})
      @ AbstractMCMC ~/.julia/packages/AbstractMCMC/oqm6Y/src/sample.jl:204
   [20] sample(rng::StableRNGs.LehmerRNG, model::DynamicPPL.Model{Main.MHTests.var"#f#8", (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext, false}, spl::Turing.Inference.MH{Turing.Inference.var"#init_strategy_constructor#44"{Tuple{Pair{AbstractPPL.VarName{:x, AbstractPPL.Iden}, Turing.Inference.LinkedRW{Float64}}}}, DynamicPPL.LinkSome{Set{AbstractPPL.VarName}, DynamicPPL.UnlinkAll}}, N::Int64; initial_params::DynamicPPL.InitFromPrior, check_model::Bool, chain_type::Type, kwargs::@Kwargs{})
      @ Turing.Inference ~/ppl/lib/src/mcmc/abstractmcmc.jl:55
   [21] sample(rng::StableRNGs.LehmerRNG, model::DynamicPPL.Model{Main.MHTests.var"#f#8", (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext, false}, spl::Turing.Inference.MH{Turing.Inference.var"#init_strategy_constructor#44"{Tuple{Pair{AbstractPPL.VarName{:x, AbstractPPL.Iden}, Turing.Inference.LinkedRW{Float64}}}}, DynamicPPL.LinkSome{Set{AbstractPPL.VarName}, DynamicPPL.UnlinkAll}}, N::Int64)
      @ Turing.Inference ~/ppl/lib/src/mcmc/abstractmcmc.jl:44
   [22] macro expansion
      @ ~/ppl/lib/test/mcmc/mh.jl:48 [inlined]
   [23] macro expansion
      @ ~/.julia/juliaup/julia-1.10.10+0.aarch64.apple.darwin14/share/julia/stdlib/v1.10/Test/src/Test.jl:1527 [inlined]
   [24] (::Main.MHTests.var"#test_mean_and_std#9")(spl::Turing.Inference.MH{Turing.Inference.var"#init_strategy_constructor#44"{Tuple{Pair{AbstractPPL.VarName{:x, AbstractPPL.Iden}, Turing.Inference.LinkedRW{Float64}}}}, DynamicPPL.LinkSome{Set{AbstractPPL.VarName}, DynamicPPL.UnlinkAll}})
      @ Main.MHTests ~/ppl/lib/test/mcmc/mh.jl:47
   [25] include(fname::String)
      @ Base.MainInclude ./client.jl:494
   [26] include(fname::String)
      @ Base.MainInclude ./client.jl:494
   [27] eval
      @ ./boot.jl:385 [inlined]
   [28] exec_options(opts::Base.JLOptions)
      @ Base ./client.jl:296
   [29] _start()
      @ Base ./client.jl:557
  in expression starting at

Member

@mhauru mhauru left a comment


Very excited to see this go live!

However, please note that this only applies to **containers that contain random variables on the left-hand side of tilde-statements.**
In general, there are no restrictions on containers of *observed* data, or containers that are not used in tilde-statements.

- Likewise, arrays of random variables should ideally have a constant size from iteration to iteration. That means a model like this will fail sometimes (*but* see below):
Member


Would it be worth pointing out either here or above at the beginning of these bullet points that this only applies when using indexing, and doing the multivariate distribution version of the below is entirely fine?

Member Author


Tweaked wording

HISTORY.md Outdated
Specifically, the types of **containers that can include random variables** are now more limited:
if `x[i] ~ dist` is a random variable, then `x` must obey the following criteria:

- It must be an array. Dicts and other containers are currently unsupported (we have [an issue to track this](https://github.com/TuringLang/DynamicPPL.jl/issues/1263)). If you really need this functionality, please open an issue and let us know; we can try to make it a priority.
Member


Worth specifying what we mean by "arrays"? Any subtype of AbstractArray?

Member Author


Yeah, changed to AbstractArray.

Member Author


Technically, it's contingent on it implementing enough of an AbstractArray interface, plus it working with BangBang. But well.

These two convenience functions are now imported and re-exported from DynamicPPL, rather than DistributionsAD.jl.
They are now just wrappers around `Distributions.product_distribution`, instead of the specialised implementations that were in DistributionsAD.jl.
DistributionsAD.jl is for all intents and purposes deprecated: it is no longer a dependency in the Turing stack.

Member


Sounds like at least Gibbs may have had a serious performance boost. Is it worth talking about performance improvements here?

Member Author

@penelopeysm penelopeysm Mar 4, 2026


Added a note. Turns out runtimes are indeed a lot better too: 0.11 seconds on this branch vs 0.18 seconds on `main`.

using Turing

@model function f()
    x ~ Normal(0, 1)
    y = zeros(10)
    for i in 1:10
        y[i] ~ Normal(x, 1)
    end
    z ~ Normal(sum(y), 1)
end

@time sample(f(), Gibbs(:x => HMC(0.1, 10), :y => HMC(0.1, 10), :z => HMC(0.1, 10)), 1000; chain_type=Any);

These two convenience functions are now imported and re-exported from DynamicPPL, rather than DistributionsAD.jl.
They are now just wrappers around `Distributions.product_distribution`, instead of the specialised implementations that were in DistributionsAD.jl.
DistributionsAD.jl is for all intents and purposes deprecated: it is no longer a dependency in the Turing stack.

Member


Any other advantages other than performance that would be worth raising here? Improvements to fix and conditional?

Member Author


I added a paragraph about this. I'm still not entirely sure I like where we are. We still don't have well-defined semantics for this. It's probably much closer to what we think should be correct, but there isn't a formal statement of what is correct, and consequently it's hard to meaningfully judge how much closer we are to that. Of course, there are lots of individual cases where we can say the behaviour is more intuitive, but lots of individual cases don't together make a formal specification.

Member Author


Still, that's something for another time.

@yebai
Member

yebai commented Mar 4, 2026

Looks great!

One minor comment: If we drop deps on DistributionsAD, would we lose any rules for ForwardDiff (default Turing AD) in TuringLang/DistributionsAD.jl#280?

@penelopeysm
Member Author

@yebai I doubt it. At the very least, since CI is passing, we can be sure that there's nothing in Turing's CI suite that depended on that.

The CI suite does contain this list of distributions that we check against:

@testset "single distribution correctness" begin
    n_samples = 10_000
    mean_tol = 0.1
    var_atol = 1.0
    var_tol = 0.5
    multi_dim = 4
    # 1. UnivariateDistribution
    # NOTE: Noncentral distributions are commented out because of
    # AD incompatibility of their logpdf functions
    dist_uni = [
        Arcsine(1, 3),
        Beta(2, 1),
        # NoncentralBeta(2, 1, 1),
        BetaPrime(1, 1),
        Biweight(0, 1),
        Chi(7),
        Chisq(7),
        # NoncentralChisq(7, 1),
        Cosine(0, 1),
        Epanechnikov(0, 1),
        Erlang(2, 3),
        Exponential(0.1),
        FDist(7, 7),
        # NoncentralF(7, 7, 1),
        Frechet(2, 0.5),
        Normal(0, 1),
        GeneralizedExtremeValue(0, 1, 0.5),
        GeneralizedPareto(0, 1, 0.5),
        Gumbel(0, 0.5),
        InverseGaussian(1, 1),
        Kolmogorov(),
        # KSDist(2), # no pdf function defined
        # KSOneSided(2), # no pdf function defined
        Laplace(0, 0.5),
        Levy(0, 1),
        Logistic(0, 1),
        LogNormal(0, 1),
        Gamma(2, 3),
        InverseGamma(3, 1),
        NormalCanon(0, 1),
        NormalInverseGaussian(0, 2, 1, 1),
        Pareto(1, 1),
        Rayleigh(1),
        SymTriangularDist(0, 1),
        TDist(2.5),
        # NoncentralT(2.5, 1),
        TriangularDist(1, 3, 2),
        Triweight(0, 1),
        Uniform(0, 1),
        # VonMises(0, 1), # WARNING: commented out because the test is broken
        Weibull(2, 1),
        # Cauchy(0, 1), # mean and variance are undefined for Cauchy
    ]
    # 2. MultivariateDistribution
    dist_multi = [
        MvNormal(zeros(multi_dim), I),
        MvNormal(zeros(2), [2.0 1.0; 1.0 4.0]),
        Dirichlet(multi_dim, 2.0),
    ]
    # 3. MatrixDistribution
    dist_matrix = [
        Wishart(7, [1.0 0.5; 0.5 1.0]), InverseWishart(7, [1.0 0.5; 0.5 1.0])
    ]
    @testset "Correctness test for single distributions" begin
        for (dist_set, dist_list) in [
            ("UnivariateDistribution", dist_uni),
            ("MultivariateDistribution", dist_multi),
            ("MatrixDistribution", dist_matrix),
        ]
            @testset "$(string(dist_set))" begin
                for dist in dist_list
                    @testset "$(string(typeof(dist)))" begin
                        @info "Distribution(params)" dist
                        @model m() = x ~ dist
                        seed = if dist isa GeneralizedExtremeValue
                            # GEV is prone to giving really wacky results that
                            # are quite seed-dependent.
                            StableRNG(469)
                        else
                            StableRNG(468)
                        end
                        chn = sample(seed, m(), HMC(0.05, 20), n_samples)
                        # Numerical tests.
                        check_dist_numerical(
                            dist,
                            chn;
                            mean_tol=mean_tol,
                            var_atol=var_atol,
                            var_tol=var_tol,
                        )
                    end
                end
            end
        end
    end
end

I think this list is probably outdated and could be expanded. Bijectors now contains very thorough testing for differentiability of `with_logabsdet_jacobian` on a ton of distributions, and the remaining thing would be to test differentiability of `logpdf`. (That should arguably be in Distributions.jl, but well... we might have to stick it in DynamicPPL or something.)

But I'm pretty sure that anything that used to work before should continue to work.
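A minimal version of the kind of `logpdf`-differentiability check described above might look like this (a sketch assuming ForwardDiff as the AD backend; the choice of distribution and point are arbitrary):

```julia
using Distributions, ForwardDiff

# Differentiate logpdf with respect to the value and compare against the
# known analytic gradient of the standard normal: d/dx logpdf(N(0,1), x) = -x.
g = ForwardDiff.derivative(x -> logpdf(Normal(0, 1), x), 1.0)
@assert g ≈ -1.0
```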

@penelopeysm
Member Author

penelopeysm commented Mar 4, 2026

CI failure is due to a Julia GC segfault -- really?!....

Edit: It is nondeterministic on CI and I couldn't reproduce it locally, so will just ignore it going forwards

@penelopeysm penelopeysm merged commit 158b2a0 into main Mar 5, 2026
29 checks passed
@penelopeysm penelopeysm deleted the breaking branch March 5, 2026 00:57