Update benchmark models #826
Conversation
[Benchmark report comment: the collapsed "Computer Information" and "Benchmark Report" tables were not captured in this extract.]
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

| Coverage Diff | tor/benchmark-update | #826 | +/- |
|---|---|---|---|
| Coverage | 84.60% | 84.60% | |
| Files | 34 | 34 | |
| Lines | 3832 | 3832 | |
| Hits | 3242 | 3242 | |
| Misses | 590 | 590 | |

☔ View full report in Codecov by Sentry.
Everything looks good to me! Thank you so much @mhauru!
* bigboy update to benchmarks
* make models return random variables as a NamedTuple, as it can be useful for downstream tasks
* add benchmarking of evaluation with SimpleVarInfo with NamedTuple
* added some information about the execution environment
* added judgementtable_single
* added benchmarking of SimpleVarInfo, if present
* added ComponentArrays benchmarking for SimpleVarInfo (see the sketch after this list)
* formatting
* Apply suggestions from code review
* Update benchmarks/benchmarks.jmd
* Benchmarking CI
* Julia script for benchmarking on top of current setup
* keep old results for reference
* updated benchmarking setup
* applied suggested changes
* updated benchmarks/README.md
* set up benchmarking CI
* Update benchmark models (#826)
* Update models to benchmark plus small style changes
* Make benchmark times relative. Add benchmark documentation.
* Choose whether to show linked or unlinked benchmark times
* Make table header more concise
* Make benchmarks not depend on TuringBenchmarking.jl, and run `]dev ..` (#834)
* Make benchmarks.jl dev the local DynamicPPL version
* Add benchmarks compat bounds
* Use ForwardDiff with dynamic benchmark model
* Benchmarking.yml: now comments the raw markdown table enclosed in triple backticks
* Benchmarking.yml: now includes the SHA of the DynamicPPL commit in the Benchmark Results comment
* Benchmark more with Mooncake
* Add model dimension to benchmark table
* Add info print
* Fix type instability in benchmark model
* Remove done TODO note
* Apply suggestions from code review
* Fix table formatting bug
* Simplify benchmark suite code
* Use StableRNG

Co-authored-by: Hong Ge, github-actions[bot], Shravan Goswami, Markus Hauru, Penelope Yong
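To make the SimpleVarInfo items in the list above concrete, here is a minimal sketch of benchmarking model evaluation with a NamedTuple-backed versus a ComponentArrays-backed SimpleVarInfo. The model `demo`, its data, and the seed are illustrative assumptions, not code from this PR; it assumes a DynamicPPL version that exports `logjoint` and `SimpleVarInfo`.

```julia
# Minimal sketch, not the PR's actual benchmark suite: compare evaluating a
# model's log joint with SimpleVarInfo backed by a NamedTuple vs a ComponentArray.
# `demo`, the data value 1.5, and the seed are illustrative assumptions.
using BenchmarkTools, ComponentArrays, Distributions, DynamicPPL, StableRNGs

@model function demo(x)
    mu ~ Normal()
    x ~ Normal(mu, 1.0)
    # Return the random variables as a NamedTuple for downstream use.
    return (mu=mu,)
end

model = demo(1.5)
nt = (mu=rand(StableRNG(23), Normal()),)  # fixed values for the random variables

svi_nt = SimpleVarInfo(nt)                  # NamedTuple-backed storage
svi_ca = SimpleVarInfo(ComponentArray(nt))  # ComponentArrays-backed storage

@btime logjoint($model, $svi_nt)
@btime logjoint($model, $svi_ca)
```

Comparing the two timings is what the ComponentArrays benchmarking item is about: the same model evaluation, differing only in the storage backing the SimpleVarInfo.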
Note that this is a PR into #346, not into main.
This puts the benchmark models in their own module and rewrites them. It also makes a few other small changes to benchmarking: stylistic tweaks and minor feature additions.
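As a rough illustration of the models-in-their-own-module layout, here is a hedged sketch. The module name `Models`, the model `simple_assume_observe`, and all variable names are hypothetical stand-ins, not the PR's actual contents.

```julia
# Hypothetical sketch of the layout only; `Models` and `simple_assume_observe`
# are illustrative names, not the PR's actual code.
module Models

using Distributions: Normal, truncated
using DynamicPPL: @model

export simple_assume_observe

@model function simple_assume_observe(x)
    mu ~ Normal()
    sigma ~ truncated(Normal(); lower=0.0)
    x ~ Normal(mu, sigma)
    # Return the random variables as a NamedTuple so downstream benchmark
    # code can reuse them.
    return (mu=mu, sigma=sigma)
end

end # module
```

A benchmark script could then do `using .Models` and iterate over the exported models to build its suite, keeping model definitions separate from the benchmarking machinery.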