Fix `resume_from` for parallel sampling #1035
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
```julia
resume_from=nothing,
initial_state=loadstate(resume_from),
```
Now if you call `sample(..., MCMCThreads(), ...; resume_from=chn)`, then `initial_state` will be correctly loaded before being passed to AbstractMCMC. The only remaining requirement is that `initial_state` must be a vector containing exactly `nchains` final states. For MCMCChains this is not currently the case, but it will be fixed by TuringLang/MCMCChains.jl#488, hence the need for that PR.
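
To make the intended workflow concrete, here is a minimal sketch, assuming Turing.jl's standard `sample` interface; the `demo` model, the NUTS sampler, and the chain lengths are illustrative and not taken from this PR:

```julia
using Turing

# Hypothetical model, for illustration only.
@model function demo()
    x ~ Normal()
end

nchains = 4

# First run: `save_state=true` stores each chain's final sampler state
# in the returned chain object.
chn = sample(demo(), NUTS(), MCMCThreads(), 100, nchains; save_state=true)

# Resumed run: with this change, `resume_from=chn` is translated into
# `initial_state=loadstate(chn)` before being passed to AbstractMCMC,
# which expects a vector of exactly `nchains` per-chain states.
chn2 = sample(demo(), NUTS(), MCMCThreads(), 100, nchains; resume_from=chn)
```

Until TuringLang/MCMCChains.jl#488 lands, `loadstate` on an MCMCChains chain does not yet produce such a vector of `nchains` states, which is why that PR is a prerequisite.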
Benchmark Report for Commit c84699d
DynamicPPL.jl documentation for PR #1035 is available at:
Codecov Report
✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main    #1035      +/-   ##
==========================================
+ Coverage   82.26%   82.34%   +0.08%
==========================================
  Files          38       38
  Lines        3947     3949       +2
==========================================
+ Hits         3247     3252       +5
+ Misses        700      697       -3
```
Pull Request Test Coverage Report for Build 17460622460
💛 - Coveralls
@joelkandiah I think this should make `resume_from` work for parallel sampling.
Closes #1033