Use typed varinfo in Prior #2649
base: main
Conversation
* Allow Prior to skip model re-evaluation
* remove unneeded `default_chain_type` method
* add a test
* add a likelihood term too
* why not test correctness while we're at it
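For context, a minimal sketch of the workflow this PR affects: drawing from a model's prior with `Prior()`. The model below is illustrative, not taken from the PR.

```julia
using Turing

# Illustrative static model (assumed for demonstration).
@model function demo()
    x ~ Normal()
    y ~ Normal(x, 1)
end

# Prior() samples from the prior; per the description above, this PR lets
# it reuse a typed VarInfo instead of re-evaluating the model each step.
chain = sample(demo(), Prior(), 1_000)
```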
Turing.jl documentation for PR #2649 is available at:
Codecov Report: ❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2649       +/-  ##
==========================================
- Coverage   84.88%    0.00%   -84.89%
==========================================
  Files          22       22
  Lines        1475     1449       -26
==========================================
- Hits         1252        0     -1252
- Misses        223     1449     +1226
```
force-pushed from 61db751 to faf1318
force-pushed from 806c82d to 8af36e1
force-pushed from faf1318 to 422fc68
force-pushed from 8af36e1 to 806c82d
Pull Request Test Coverage Report for Build 16801697729 (Coveralls)
Am I correct in thinking that this breaks prior sampling for

```julia
@model function f()
    x ~ Normal()
    if x > 0
        y ~ Normal()
    else
        z ~ Normal()
    end
end
```

but not for

```julia
@model function f()
    x ~ Normal()
    if x > 0
        y.a ~ Normal()
    else
        y.b ~ Normal()
    end
end
```

?

If yes, I'm happy.
Actually, both are broken, for different reasons. The first one seems to never change; the second just doesn't run. I'll have to investigate this further, as I'm a bit confused about why either of them is broken.
This works:

```julia
@model function f()
    x ~ Normal()
    if x > 0
        y ~ Normal()
    else
        z ~ Normal()
    end
end
```

(My earlier confusion was because MH and NUTS don't work on this model; only Prior does.)

This doesn't work:

```julia
@model function f()
    x ~ Normal()
    y = (; a = 1.0, b = 1.0) # need this line so that it can assign to y.a or y.b
    if x > 0
        y.a ~ Normal()
    else
        y.b ~ Normal()
    end
end
```

The key point is that MH and NUTS also don't work on this model.
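For concreteness, a hypothetical script for the checks being described (the failing calls are left commented out; exact errors depend on the Turing version in use):

```julia
using Turing

@model function f()
    x ~ Normal()
    if x > 0
        y ~ Normal()
    else
        z ~ Normal()
    end
end

chain = sample(f(), Prior(), 100)  # works: Prior copes with the changing variable set
# sample(f(), MH(), 100)           # errors: MH doesn't work on this model (see above)
# sample(f(), NUTS(), 100)         # errors: NUTS doesn't work on it either
```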
I confirmed that both example models work on the current release when sampling with `Prior()`.

I don't really mind losing

```julia
@model function f()
    x ~ Normal()
    if x > 0
        y ~ Normal()
    else
        z ~ Normal()
    end
end
```

I think that's syntax that we mostly don't support anyway. The one I mostly care about is actually

```julia
@model function f()
    x ~ Normal()
    y = Vector{Float64}(undef, x > 0 ? 1 : 2)
    if x > 0
        y[1] ~ Normal()
    else
        y[1] ~ Normal()
        y[2] ~ Normal()
    end
end
```

That might be fine on this PR as-is? I'm not sure if I care about

```julia
@model function f()
    x ~ Normal()
    y = (; a = 1.0, b = 1.0) # need this line so that it can assign to y.a or y.b
    if x > 0
        y.a ~ Normal()
    else
        y.b ~ Normal()
    end
end
```

I just didn't realise that it would behave differently from the Vector version when I proposed it as a test case.
Yup, that still works on this PR. The x/y/z model also works on this PR, but x/y.a/y.b doesn't.

I think in terms of the code, this PR is mostly good to go. (I believe CI failures will be fixed by #2650.) But I'm a bit hesitant to merge into the upcoming minor version, because it does technically remove support for a few models.

In general, for arbitrary dynamic models like these, I think the solution to make sampling work is to use untyped VarInfo. That's what Prior used to do, which is also the only reason why it 'supported' all these things. The long term solution would be to let people sample with whatever kind of varinfo they want. That was (is?) the purpose of the sample-with-LDF PRs I still have sitting around.

But right now I'm kind of leaning towards not merging this until those are in, just in the interests of not gratuitously breaking things. The flip side of course is that this PR does improve performance by a lot, and it will probably be months until sample-with-LDF is merged.
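To illustrate the typed-vs-untyped trade-off, here is a conceptual sketch using plain Julia containers (not DynamicPPL's actual VarInfo types): untyped storage behaves like a Dict and can pick up new variable names between model evaluations, while typed storage bakes the variable set into its type, which is what makes it fast but rigid.

```julia
# Conceptual sketch only; DynamicPPL's VarInfo implementations are more involved.

# Untyped-style storage: a Dict can gain new keys between evaluations,
# so a run that hits the `z` branch instead of the `y` branch is fine.
untyped = Dict{Symbol,Float64}()
untyped[:x] = randn()
untyped[:z] = randn()

# Typed-style storage: the variable set is part of the type, giving fast,
# type-stable access, but a run that needs :z instead of :y no longer fits.
typed = (x = randn(), y = randn())
typed.x       # fast field access
# typed.z     # ERROR: type NamedTuple has no field z
```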
Without thinking this through carefully, I wonder if the performance gains are more important than losing support for some pretty niche models. Actually, now that I said that, maybe those niche models include all models with dynamic use of submodels? Still kinda niche though.

Regardless, happy to leave it out of this upcoming release with DPPL 0.37.
Base branch:
This PR:
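A hypothetical way to run such a comparison, assuming BenchmarkTools and an illustrative model not taken from the thread:

```julia
using Turing, BenchmarkTools

# Illustrative model (assumed); the thread does not show the benchmark code.
@model function demo()
    x ~ Normal()
    y ~ Normal(x, 1)
end

# Run on the base branch and on this PR's branch, then compare timings.
@btime sample(demo(), Prior(), 10_000);
```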
I was too lazy to retroactively apply this fix to 0.39 (there's a lot of code that was changed and I honestly can't remember how it used to work). I think it's fine since we're close enough to merging 0.40.