21 changes: 21 additions & 0 deletions HISTORY.md
@@ -1,5 +1,26 @@
# DynamicPPL Changelog

## 0.36.0

### BenchmarkTools extension

DynamicPPL now contains a BenchmarkTools extension.
If you have both packages loaded, the extension exports a single function, `make_benchmark_suite`, which returns a `BenchmarkTools.BenchmarkGroup` object.
Running this suite gives you the time taken to evaluate the log density of the model ("evaluation"), as well as the time taken to evaluate its gradient.

Note that benchmarking both of these has in general become much easier since the changes in 0.35.0.
If you just want to run a single model, the easiest way is to do this:

```julia
using DynamicPPL, BenchmarkTools, LogDensityProblems

@model f() = ...
ldf = LogDensityFunction(f(); adtype=AutoMyBackend())  # AutoMyBackend() is a stand-in for e.g. AutoForwardDiff()
params = ldf.varinfo[:]
@btime LogDensityProblems.logdensity($ldf, $params)
@btime LogDensityProblems.logdensity_and_gradient($ldf, $params)
```

The `make_benchmark_suite` function is essentially a nice wrapper around this.
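
If you want the full suite instead, here is a minimal sketch of calling it directly. It assumes the positional signature `(rng, model, varinfo_choice, adtype, islinked)` and the `"evaluation"` key, both taken from the usage in `benchmarks/benchmarks.jl` below; the one-variable model `g` is hypothetical:

```julia
using DynamicPPL, BenchmarkTools, ADTypes
using Distributions: Normal
using ForwardDiff: ForwardDiff   # the chosen AD backend package must be loaded
using StableRNGs: StableRNG

@model g() = x ~ Normal()        # hypothetical single-variable model

# :typed VarInfo, ForwardDiff gradients, linked (unconstrained) parameters
suite = make_benchmark_suite(StableRNG(23), g(), :typed, AutoForwardDiff(), true)
results = run(suite)
median(results["evaluation"]).time   # median evaluation time, in ns
```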

## 0.35.5

Several internal methods have been removed:
3 changes: 3 additions & 0 deletions Project.toml
@@ -25,6 +25,7 @@ Requires = "ae029012-a4dd-5104-9daa-d747884805df"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[weakdeps]
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
EnzymeCore = "f151be2c-9106-41f4-ab19-57ee4f262869"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
@@ -34,6 +35,7 @@ MCMCChains = "c7f686f2-ff18-58e9-bc7b-31028e88f75d"
Mooncake = "da2b9cff-9c12-43a0-ae48-6db2b0edb7d6"

[extensions]
DynamicPPLBenchmarkToolsExt = ["BenchmarkTools"]
DynamicPPLChainRulesCoreExt = ["ChainRulesCore"]
DynamicPPLEnzymeCoreExt = ["EnzymeCore"]
DynamicPPLForwardDiffExt = ["ForwardDiff"]
@@ -47,6 +49,7 @@ AbstractMCMC = "5"
AbstractPPL = "0.10.1"
Accessors = "0.1"
BangBang = "0.4.1"
BenchmarkTools = "1.6.0"
Bijectors = "0.13.18, 0.14, 0.15"
ChainRulesCore = "1"
Compat = "4"
File renamed without changes.
83 changes: 59 additions & 24 deletions benchmarks/benchmarks.jl
@@ -2,23 +2,43 @@ using Pkg
# To ensure we benchmark the local version of DynamicPPL, dev the folder above.
Pkg.develop(; path=joinpath(@__DIR__, ".."))

using DynamicPPLBenchmarks: Models, make_suite, model_dimension
using DynamicPPL: DynamicPPL, make_benchmark_suite, VarInfo
using ADTypes
using BenchmarkTools: @benchmark, median, run
using PrettyTables: PrettyTables, ft_printf
using ForwardDiff: ForwardDiff
using Mooncake: Mooncake
using ReverseDiff: ReverseDiff
using StableRNGs: StableRNG

rng = StableRNG(23)
include("Models.jl")

"""
model_dimension(model, islinked)

Return the dimension of `model`, accounting for linking, if any.
"""
function model_dimension(model, islinked)
vi = VarInfo()
model(vi)
if islinked
vi = DynamicPPL.link(vi, model)
end
return length(vi[:])
end

# Create DynamicPPL.Model instances to run benchmarks on.
smorgasbord_instance = Models.smorgasbord(randn(rng, 100), randn(rng, 100))
smorgasbord_instance = Models.smorgasbord(
    randn(StableRNG(23), 100), randn(StableRNG(23), 100)
)
loop_univariate1k, multivariate1k = begin
    data_1k = randn(rng, 1_000)
    data_1k = randn(StableRNG(23), 1_000)
    loop = Models.loop_univariate(length(data_1k)) | (; o=data_1k)
    multi = Models.multivariate(length(data_1k)) | (; o=data_1k)
    loop, multi
end
loop_univariate10k, multivariate10k = begin
    data_10k = randn(rng, 10_000)
    data_10k = randn(StableRNG(23), 10_000)
> **Comment on lines -15 to +41** (Member Author):
>
> There are a bunch of changes to rng here. The point of using a fresh RNG object for every sampling call is to make sure that the values sampled don't change when we add or remove models.
>
> We discussed this previously on Turing's test suite, cf. TuringLang/Turing.jl#2433 (comment)
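
A minimal sketch of that point, assuming `StableRNGs` is loaded: with a shared `rng`, each draw depends on how many draws came before it, so inserting or removing a model shifts every later value, whereas a fresh `StableRNG(23)` per call makes each draw independent of call order:

```julia
using StableRNGs: StableRNG

rng = StableRNG(23)
a = randn(rng)            # first draw from the shared stream
b = randn(rng)            # would change if any draw were inserted before it

x = randn(StableRNG(23))  # always the same value, regardless of other calls
y = randn(StableRNG(23))  # identical to x by construction
```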

    loop = Models.loop_univariate(length(data_10k)) | (; o=data_10k)
    multi = Models.multivariate(length(data_10k)) | (; o=data_10k)
    loop, multi
@@ -29,43 +49,58 @@ lda_instance = begin
    Models.lda(2, d, w)
end

# AD types setup
fd = AutoForwardDiff()
rd = AutoReverseDiff()
mc = AutoMooncake(; config=nothing)
"""
get_adtype_shortname(adtype::ADTypes.AbstractADType)

Get the package name that corresponds to the the AD backend `adtype`. Only used
for pretty-printing.
"""
get_adtype_shortname(::AutoMooncake) = "Mooncake"
get_adtype_shortname(::AutoForwardDiff) = "ForwardDiff"
get_adtype_shortname(::AutoReverseDiff{false}) = "ReverseDiff"
get_adtype_shortname(::AutoReverseDiff{true}) = "ReverseDiff:Compiled"

# Specify the combinations to test:
# (Model Name, model instance, VarInfo choice, AD backend, linked)
chosen_combinations = [
    (
        "Simple assume observe",
        Models.simple_assume_observe(randn(rng)),
        Models.simple_assume_observe(randn(StableRNG(23))),
        :typed,
        :forwarddiff,
        fd,
        false,
    ),
    ("Smorgasbord", smorgasbord_instance, :typed, :forwarddiff, false),
    ("Smorgasbord", smorgasbord_instance, :simple_namedtuple, :forwarddiff, true),
    ("Smorgasbord", smorgasbord_instance, :untyped, :forwarddiff, true),
    ("Smorgasbord", smorgasbord_instance, :simple_dict, :forwarddiff, true),
    ("Smorgasbord", smorgasbord_instance, :typed, :reversediff, true),
    ("Smorgasbord", smorgasbord_instance, :typed, :mooncake, true),
    ("Loop univariate 1k", loop_univariate1k, :typed, :mooncake, true),
    ("Multivariate 1k", multivariate1k, :typed, :mooncake, true),
    ("Loop univariate 10k", loop_univariate10k, :typed, :mooncake, true),
    ("Multivariate 10k", multivariate10k, :typed, :mooncake, true),
    ("Dynamic", Models.dynamic(), :typed, :mooncake, true),
    ("Submodel", Models.parent(randn(rng)), :typed, :mooncake, true),
    ("LDA", lda_instance, :typed, :reversediff, true),
    ("Smorgasbord", smorgasbord_instance, :typed, fd, false),
    ("Smorgasbord", smorgasbord_instance, :simple_namedtuple, fd, true),
    ("Smorgasbord", smorgasbord_instance, :untyped, fd, true),
    ("Smorgasbord", smorgasbord_instance, :simple_dict, fd, true),
    ("Smorgasbord", smorgasbord_instance, :typed, rd, true),
    ("Smorgasbord", smorgasbord_instance, :typed, mc, true),
    ("Loop univariate 1k", loop_univariate1k, :typed, mc, true),
    ("Multivariate 1k", multivariate1k, :typed, mc, true),
    ("Loop univariate 10k", loop_univariate10k, :typed, mc, true),
    ("Multivariate 10k", multivariate10k, :typed, mc, true),
    ("Dynamic", Models.dynamic(), :typed, mc, true),
    ("Submodel", Models.parent(randn(StableRNG(23))), :typed, mc, true),
    ("LDA", lda_instance, :typed, rd, true),
]

# Time running a model-like function that does not use DynamicPPL, as a reference point.
# Eval timings will be relative to this.
reference_time = begin
    obs = randn(rng)
    obs = randn(StableRNG(23))
    median(@benchmark Models.simple_assume_observe_non_model(obs)).time
end

results_table = Tuple{String,Int,String,String,Bool,Float64,Float64}[]

for (model_name, model, varinfo_choice, adbackend, islinked) in chosen_combinations
    @info "Running benchmark for $model_name"
    suite = make_suite(model, varinfo_choice, adbackend, islinked)
    @info "Running benchmark for $model_name / $varinfo_choice / $(get_adtype_shortname(adbackend))"
    suite = make_benchmark_suite(StableRNG(23), model, varinfo_choice, adbackend, islinked)
    results = run(suite)
    eval_time = median(results["evaluation"]).time
    relative_eval_time = eval_time / reference_time
@@ -76,7 +111,7 @@ for (model_name, model, varinfo_choice, adbackend, islinked) in chosen_combinations
        (
            model_name,
            model_dimension(model, islinked),
            string(adbackend),
            get_adtype_shortname(adbackend),
            string(varinfo_choice),
            islinked,
            relative_eval_time,
106 changes: 0 additions & 106 deletions benchmarks/src/DynamicPPLBenchmarks.jl

This file was deleted.

3 changes: 3 additions & 0 deletions docs/Project.toml
@@ -1,9 +1,11 @@
[deps]
Accessors = "7d9f7c33-5ae7-4f3b-8dc6-eff91059b697"
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
DocumenterMermaid = "a078cd44-4d9c-4618-b545-3ab9d77f9177"
DynamicPPL = "366bfd00-2699-11ea-058f-f148b4cae6d8"
FillArrays = "1a297f60-69ca-5386-bcde-b61e274b549b"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
JET = "c3a54625-cd67-489e-a8e7-0a5a0ff4e31b"
@@ -13,6 +15,7 @@ StableRNGs = "860ef19b-820b-49d6-a774-d7a799459cd3"

[compat]
Accessors = "0.1"
BenchmarkTools = "1"
DataStructures = "0.18"
Distributions = "0.25"
Documenter = "1"
10 changes: 8 additions & 2 deletions docs/make.jl
@@ -9,8 +9,10 @@ using DynamicPPL: AbstractPPL
# consistent with that.
using Distributions
using DocumenterMermaid
# load MCMCChains package extension to make `predict` available

# To get docstrings from package extensions
using MCMCChains
using BenchmarkTools

# Doctest setup
DocMeta.setdocmeta!(
@@ -22,7 +24,11 @@ makedocs(;
    # The API index.html page is fairly large, and violates the default HTML page size
    # threshold of 200KiB, so we double that.
    format=Documenter.HTML(; size_threshold=2^10 * 400),
    modules=[DynamicPPL, Base.get_extension(DynamicPPL, :DynamicPPLMCMCChainsExt)],
    modules=[
        DynamicPPL,
        Base.get_extension(DynamicPPL, :DynamicPPLMCMCChainsExt),
        Base.get_extension(DynamicPPL, :DynamicPPLBenchmarkToolsExt),
    ],
    pages=[
        "Home" => "index.md", "API" => "api.md", "Internals" => ["internals/varinfo.md"]
    ],
18 changes: 18 additions & 0 deletions docs/src/api.md
@@ -245,6 +245,24 @@ DynamicPPL.TestUtils.update_values!!
DynamicPPL.TestUtils.test_values
```

## Benchmarking Utilities

If you have `BenchmarkTools` loaded, this function will be available:

```@docs
DynamicPPL.make_benchmark_suite
```

For more fine-grained control over this, you can construct a [`LogDensityFunction`](@ref) yourself and run something along the lines of:

```julia
using BenchmarkTools, LogDensityProblems

# set up your model, varinfo, and adtype here
ldf = LogDensityFunction(model, varinfo; adtype=adtype)
params = varinfo[:]
@benchmark LogDensityProblems.logdensity($ldf, $params)
@benchmark LogDensityProblems.logdensity_and_gradient($ldf, $params)
```
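
The `$` interpolation matters when benchmarking with globals: without it, `@benchmark` and `@btime` treat `ldf` and `params` as untyped global variables, and the timing includes dynamic-dispatch overhead. A minimal sketch of the difference:

```julia
using BenchmarkTools

v = rand(1000)
@btime sum(v)    # `v` is an untyped global: timing includes dispatch overhead
@btime sum($v)   # interpolated: measures `sum` on a concretely typed vector
```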

## Debugging Utilities

DynamicPPL provides a few methods for checking the validity of a model definition.