Make benchmarks not depend on TuringBenchmarking.jl, and run `]dev ..` #834
Merged
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

@@           Coverage Diff            @@
##  tor/benchmark-update    #834  +/- ##
========================================
  Coverage       84.60%   84.60%
========================================
  Files              34       34
  Lines            3832     3832
========================================
  Hits             3242     3242
  Misses            590      590
[Benchmark bot comment: Computer Information and Benchmark Report table omitted in this view.]
Looks good to me after your latest commit on this; I ran it locally too and it ran fine! Thank you @mhauru!
shravanngoswamii approved these changes on Mar 6, 2025.
github-merge-queue bot pushed a commit that referenced this pull request on Mar 12, 2025:
* bigboy update to benchmarks
* make models return random variables as NamedTuple, as it can be useful for downstream tasks
* add benchmarking of evaluation with SimpleVarInfo with NamedTuple
* added some information about the execution environment
* added judgementtable_single
* added benchmarking of SimpleVarInfo, if present
* added ComponentArrays benchmarking for SimpleVarInfo
* formatting
* Apply suggestions from code review (Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>)
* Update benchmarks/benchmarks.jmd (Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>)
* Benchmarking CI
* Julia script for benchmarking on top of current setup
* keep old results for reference
* updated benchmarking setup
* applied suggested changes
* updated benchmarks/README.md
* setup benchmarking CI
* Update benchmark models (#826)
* Update models to benchmark plus small style changes
* Make benchmark times relative. Add benchmark documentation.
* Choose whether to show linked or unlinked benchmark times
* Make table header more concise
* Make benchmarks not depend on TuringBenchmarking.jl, and run `]dev ..` (#834)
* Make benchmarks not depend on TuringBenchmarking.jl
* Make benchmarks.jl dev the local DPPL version
* Add benchmarks compat bounds
* Use ForwardDiff with dynamic benchmark model
* Benchmarking.yml: now comments raw markdown table enclosed in triple backticks
* Benchmarking.yml: now includes the SHA of the DynamicPPL commit in the Benchmark Results comment
* Benchmark more with Mooncake
* Add model dimension to benchmark table
* Add info print
* Fix type instability in benchmark model
* Remove done TODO note
* Apply suggestions from code review (Co-authored-by: Penelope Yong <[email protected]>)
* Fix table formatting bug
* Simplify benchmark suite code
* Use StableRNG

---------

Co-authored-by: Hong Ge <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Shravan Goswami <[email protected]>
Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Penelope Yong <[email protected]>
Co-authored-by: Shravan Goswami <[email protected]>
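Two of the items above, "Simplify benchmark suite code" and "Use StableRNG", concern how the suite is seeded and assembled. The commit itself isn't shown here, so the following is only a minimal, hypothetical sketch of that pattern with BenchmarkTools and StableRNGs; none of the names or values are taken from the PR.

```julia
# Hypothetical sketch: a reproducible BenchmarkTools suite seeded via StableRNG.
# Names and the seed value are illustrative, not the PR's actual code.
using BenchmarkTools, StableRNGs

rng = StableRNG(23)           # fixed seed so results are comparable across CI runs
data = randn(rng, 1_000)      # synthetic data a benchmark model might consume

suite = BenchmarkGroup()
suite["models"] = BenchmarkGroup()
# Interpolating `data` with `$` keeps setup cost out of the measured time.
suite["models"]["sum"] = @benchmarkable sum($data)

results = run(suite; verbose=true)
```

In a setup like this, re-running the suite on two commits with the same seed keeps the relative timings in the generated report comparable.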
Note that this is a PR into #346, not into main. It turns out we can't depend on TuringBenchmarking.jl because of annoying reverse-dependency problems, as discussed in #346, hence these changes. I made this by literally copying the code from TuringBenchmarking.jl and removing all of its features that weren't useful here. I also put in the one-liner that will hopefully make CI run against the current DPPL version rather than fetching one from the package registry, and added some compat bounds.
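The "one-liner" mentioned above isn't quoted in the thread. Assuming it follows the usual Pkg pattern for `]dev ..` (the path and its placement in benchmarks.jl are assumptions here, not the PR's actual code), it would look roughly like this:

```julia
# Rough sketch of dev'ing the local DynamicPPL checkout for the benchmark
# environment, i.e. the programmatic equivalent of running `]dev ..`.
# Path and placement are assumptions, not taken from the PR diff.
using Pkg

Pkg.activate(@__DIR__)                      # activate the benchmarks project
Pkg.develop(path=joinpath(@__DIR__, ".."))  # use the checked-out DynamicPPL, not a registered release
Pkg.instantiate()                           # resolve and install the remaining dependencies
```

This is what lets CI benchmark the code in the PR itself rather than the latest released DynamicPPL.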