# BenchmarkTools

BenchmarkTools makes **performance tracking of Julia code easy** by supplying a framework for **writing and running groups of benchmarks** as well as **comparing benchmark results**.

This package is used to write and run the benchmarks found in [BaseBenchmarks.jl](https://github.com/JuliaCI/BaseBenchmarks.jl).

The CI infrastructure for automated performance testing of the Julia language is not in this package, but can be found in [Nanosoldier.jl](https://github.com/JuliaCI/Nanosoldier.jl).

## Quick Start

The primary macro provided by BenchmarkTools is `@benchmark`:

```julia
julia> using BenchmarkTools

# The `setup` expression is run once per sample, and is not included in the
# timing results. Note that each sample can require multiple evaluations of
# the benchmark kernel. See the BenchmarkTools manual for details.
julia> @benchmark sin(x) setup=(x=rand())
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     4.248 ns (0.00% GC)
  median time:      4.631 ns (0.00% GC)
  mean time:        5.502 ns (0.00% GC)
  maximum time:     60.995 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000
```
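
The `Trial` object returned by `@benchmark` can also be inspected programmatically. As a minimal sketch (assuming BenchmarkTools is installed), the reductions `minimum`, `median`, and `mean` each collapse a trial to a single `TrialEstimate`, and the raw per-sample timings are stored in nanoseconds in the `times` field:

```julia
julia> t = @benchmark sin(x) setup=(x=rand());

julia> minimum(t)   # TrialEstimate for the fastest sample

julia> median(t)    # TrialEstimate for the median sample

julia> length(t.times)  # raw per-sample timings, in nanoseconds
10000
```

The minimum is often the most robust estimator, since noise from the OS and other processes can only make a benchmark slower, not faster.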

For quick sanity checks, one can use the [`@btime` macro](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#benchmarking-basics), which is a convenience wrapper around `@benchmark` whose output is analogous to Julia's built-in [`@time` macro](https://docs.julialang.org/en/v1/base/base/#Base.@time):

```julia
julia> @btime sin(x) setup=(x=rand())
  4.361 ns (0 allocations: 0 bytes)
0.49587200950472454
```

If the expression you want to benchmark depends on external variables, you should use [`$` to "interpolate"](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#interpolating-values-into-benchmark-expressions) them into the benchmark expression to
[avoid the problems of benchmarking with globals](https://docs.julialang.org/en/v1/manual/performance-tips/#Avoid-global-variables).
Essentially, any interpolated variable `$x` or expression `$(...)` is "pre-computed" before benchmarking begins:

```julia
julia> A = rand(3,3);

julia> @btime inv($A);  # we interpolate the global variable A with $A
  1.191 μs (10 allocations: 2.31 KiB)

julia> @btime inv($(rand(3,3)));  # interpolation: the rand(3,3) call occurs before benchmarking
  1.192 μs (10 allocations: 2.31 KiB)

julia> @btime inv(rand(3,3));  # the rand(3,3) call is included in the benchmark time
  1.295 μs (11 allocations: 2.47 KiB)
```

Sometimes, interpolating variables into very simple expressions can give the compiler more information than you intended, causing it to "cheat" the benchmark by hoisting the calculation out of the benchmark code:

```julia
julia> a = 1; b = 2
2

julia> @btime $a + $b
  0.024 ns (0 allocations: 0 bytes)
3
```
As a rule of thumb, if a benchmark reports that it took less than a nanosecond to perform, this hoisting probably occurred. You can avoid this by referencing and dereferencing the interpolated variables:

```julia
julia> @btime $(Ref(a))[] + $(Ref(b))[]
  1.277 ns (0 allocations: 0 bytes)
3
```

As described in the [Manual](@ref), the BenchmarkTools package supports many other features, both for additional output and for more fine-grained control over the benchmarking process.
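
For instance, benchmarks can be organized into a `BenchmarkGroup` and two results can be compared with `judge`. A minimal sketch (the `"sin"` key here is illustrative; in practice you would compare results from two different versions of your code):

```julia
julia> suite = BenchmarkGroup();

julia> suite["sin"] = @benchmarkable sin(x) setup=(x=rand());

julia> results = run(suite);

julia> # Comparing a result to a baseline estimate classifies the change as
julia> # an improvement, a regression, or invariant:
julia> judge(median(results["sin"]), median(results["sin"]))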