Conversation

@mhauru (Member) commented on Mar 3, 2025

Note that this is a PR into #346, not into main.

This puts the benchmark models in their own module and rewrites them. It also makes a few other small changes to the benchmarking setup: stylistic tweaks and minor feature additions.
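For a sense of what a dedicated benchmark-models module looks like, here is a minimal sketch assuming DynamicPPL's `@model` macro and Distributions.jl. The module name and the model itself are illustrative placeholders, not the actual contents of this PR:

```julia
# Hedged sketch only: a self-contained module of benchmark models.
# `BenchmarkModels` and `simple_assume_observe` are hypothetical names.
module BenchmarkModels

using DynamicPPL: @model
using Distributions: Normal

# A toy assume-observe model: one latent variable, one observation.
# Returning the random variables as a NamedTuple makes them easy to
# inspect in downstream code.
@model function simple_assume_observe(x)
    mu ~ Normal()
    x ~ Normal(mu)
    return (; mu)
end

end # module
```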

@mhauru requested a review from @shravanngoswamii on March 3, 2025 at 12:36
github-actions bot (Contributor) commented on Mar 3, 2025

Computer Information

Julia Version 1.11.3
Commit d63adeda50d (2025-01-21 19:42 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 4 × AMD EPYC 7763 64-Core Processor
  WORD_SIZE: 64
  LLVM: libLLVM-16.0.6 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)

Benchmark Report

| Model | AD Backend | VarInfo Type | Linked | Eval Time / Ref Time | AD Time / Eval Time |
|---|---|---|---|---|---|
| Simple assume observe | forwarddiff | typed | false | 8.5 | 1.4 |
| Smorgasbord | forwarddiff | typed | false | 1443.0 | 29.5 |
| Smorgasbord | forwarddiff | simple_namedtuple | true | 873.0 | 40.8 |
| Smorgasbord | forwarddiff | untyped | true | 2371.3 | 20.6 |
| Smorgasbord | forwarddiff | simple_dict | true | 1773.7 | 29.0 |
| Smorgasbord | reversediff | typed | true | 1881.2 | 24.3 |
| Loop univariate 1k | reversediff | typed | true | 5631.0 | 47.3 |
| Multivariate 1k | reversediff | typed | true | 1079.0 | 69.5 |
| Loop univariate 10k | reversediff | typed | true | 63368.8 | 44.4 |
| Multivariate 10k | reversediff | typed | true | 9059.6 | 83.7 |
| Dynamic | reversediff | typed | true | 125.5 | 35.9 |
| Submodel | reversediff | typed | true | 24.7 | 13.0 |
| LDA | reversediff | typed | true | 368.2 | 6.0 |
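Both timing columns are ratios rather than absolute times: model evaluation time relative to a reference computation, and gradient (AD) time relative to evaluation time; "Linked" indicates whether the VarInfo was benchmarked in linked (unconstrained) space. A minimal sketch of how such ratios could be computed with BenchmarkTools.jl is below; the three timed functions are hypothetical stand-ins, not the suite's real API:

```julia
using BenchmarkTools

# Hypothetical placeholders for the three quantities being timed;
# the real benchmark suite's functions differ.
reference_computation() = sum(abs2, rand(1000))  # baseline, "Ref Time"
evaluate_model()        = sum(log, rand(1000))   # model evaluation, "Eval Time"
gradient_of_model()     = sum(rand(1000, 10))    # AD gradient, "AD Time"

# @belapsed returns the minimum elapsed time in seconds.
t_ref  = @belapsed reference_computation()
t_eval = @belapsed evaluate_model()
t_ad   = @belapsed gradient_of_model()

# The two relative columns in the table above:
eval_over_ref = t_eval / t_ref
ad_over_eval  = t_ad / t_eval
```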

codecov bot commented on Mar 3, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 84.60%. Comparing base (1d1b11e) to head (37f1e93).
Report is 1 commit behind head on tor/benchmark-update.

Additional details and impacted files
@@                  Coverage Diff                  @@
##           tor/benchmark-update     #826   +/-   ##
=====================================================
  Coverage                 84.60%   84.60%           
=====================================================
  Files                        34       34           
  Lines                      3832     3832           
=====================================================
  Hits                       3242     3242           
  Misses                      590      590           


@shravanngoswamii (Member) left a comment:

Everything looks good to me! Thank you so much @mhauru!

@mhauru merged commit ad4175a into tor/benchmark-update on Mar 3, 2025
16 of 17 checks passed
@mhauru deleted the mhauru/benchmark-update branch on March 3, 2025 at 12:58
github-merge-queue bot pushed a commit that referenced this pull request on Mar 12, 2025:
* bigboy update to benchmarks

* make models return random variables as NamedTuple as it can be useful for downstream tasks

* add benchmarking of evaluation with SimpleVarInfo with NamedTuple

* added some information about the execution environment

* added judgementtable_single

* added benchmarking of SimpleVarInfo, if present

* added ComponentArrays benchmarking for SimpleVarInfo

* formatting

* Apply suggestions from code review

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Update benchmarks/benchmarks.jmd

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Benchmarking CI

* Julia script for benchmarking on top of current setup

* keep old results for reference

* updated benchmarking setup

* applied suggested changes

* updated benchmarks/README.md

* setup benchmarking CI

* Update benchmark models (#826)

* Update models to benchmark plus small style changes

* Make benchmark times relative. Add benchmark documentation.

* Choose whether to show linked or unlinked benchmark times

* Make table header more concise

* Make benchmarks not depend on TuringBenchmarking.jl, and run `]dev ..` (#834)

* Make benchmarks not depend on TuringBenchmarking.jl

* Make benchmarks.jl dev the local DynamicPPL version

* Add benchmarks compat bounds

* Use ForwardDiff with dynamic benchmark model

* Benchmarking.yml: now comments raw markdown table enclosed in triple backticks

* Benchmarking.yml: now includes the SHA of the DynamicPPL commit in Benchmark Results comment

* Benchmark more with Mooncake

* Add model dimension to benchmark table

* Add info print

* Fix type instability in benchmark model

* Remove done TODO note

* Apply suggestions from code review

Co-authored-by: Penelope Yong <[email protected]>

* Fix table formatting bug

* Simplify benchmark suite code

* Use StableRNG

---------

Co-authored-by: Hong Ge <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Shravan Goswami <[email protected]>
Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Penelope Yong <[email protected]>
Co-authored-by: Shravan Goswami <[email protected]>