This repository was archived by the owner on Jul 4, 2023. It is now read-only.

Commit c04f0c6

Allow changing the number of bins (#3)
* wip
* fix
* allow changing the number of bins
* tweak wording
1 parent: 8ed825d

File tree: 4 files changed, +105 -21 lines changed

README.md

Lines changed: 67 additions & 17 deletions
@@ -11,6 +11,8 @@ However, BenchmarkPlots re-exports all the export of BenchmarkTools, so you can
 
 Providing this functionality in BenchmarkTools itself was discussed in <https://github.com/JuliaCI/BenchmarkTools.jl/pull/180>.
 
+Use the setting `BenchmarkPlots.NBINS[] = 10` to change the number of histogram bins used.
+
 ## Example
 
 One just uses `BenchmarkPlots` instead of `BenchmarkTools`, e.g.
@@ -22,15 +24,23 @@ using BenchmarkPlots
 ```
 
 ```
-samples: 10000; evals/sample: 999; memory estimate: 0 bytes; allocs estimate: 0
+samples: 10000; evals/sample: 1000; memory estimate: 0 bytes; allocs estimate: 0
 ┌ ┐
-   [ 5.0, 10.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 7857
-ns [10.0, 15.0) ┤▇▇▇▇▇▇▇▇▇ 2134
-   [15.0, 20.0) ┤ 8
-   [20.0, 25.0) ┤ 1
+   [ 4.0, 6.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 7823
+   [ 6.0, 8.0) ┤▇▇▇▇▇▇▇ 1643
+   [ 8.0, 10.0) ┤▇▇ 529
+   [10.0, 12.0) ┤ 2
+   [12.0, 14.0) ┤ 2
+ns [14.0, 16.0) ┤ 0
+   [16.0, 18.0) ┤ 0
+   [18.0, 20.0) ┤ 0
+   [20.0, 22.0) ┤ 0
+   [22.0, 24.0) ┤ 0
+   [24.0, 26.0) ┤ 0
+   [26.0, 28.0) ┤ 1
 └ ┘
 Counts
-min: 8.967 ns (0.00% GC); mean: 9.564 ns (0.00% GC); median: 9.092 ns (0.00% GC); max: 20.145 ns (0.00% GC).
+min: 4.916 ns (0.00% GC); mean: 5.724 ns (0.00% GC); median: 5.208 ns (0.00% GC); max: 27.458 ns (0.00% GC).
 ```
 
 That benchmark does not have a very interesting distribution, but it's not hard to find more interesting cases.
@@ -40,15 +50,18 @@ That benchmark does not have a very interesting distribution, but it's not hard
 ```
 
 ```
-samples: 3094; evals/sample: 1000; memory estimate: 0 bytes; allocs estimate: 0
+samples: 3192; evals/sample: 1000; memory estimate: 0 bytes; allocs estimate: 0
 ┌ ┐
-   [   0.0, 1000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 1936
-ns [1000.0, 2000.0) ┤ 0
-   [2000.0, 3000.0) ┤ 0
-   [3000.0, 4000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 1158
+   [   0.0,  500.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 2036
+   [ 500.0, 1000.0) ┤ 0
+   [1000.0, 1500.0) ┤ 0
+ns [1500.0, 2000.0) ┤ 0
+   [2000.0, 2500.0) ┤ 0
+   [2500.0, 3000.0) ┤ 0
+   [3000.0, 3500.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 1156
 └ ┘
 Counts
-min: 4.333 ns (0.00% GC); mean: 1.188 μs (0.00% GC); median: 7.208 ns (0.00% GC); max: 3.711 μs (0.00% GC).
+min: 1.875 ns (0.00% GC); mean: 1.141 μs (0.00% GC); median: 4.521 ns (0.00% GC); max: 3.315 μs (0.00% GC).
 ```
 
 Here, we see a bimodal distribution; in the case `5` is indeed in the vector, we find it very quickly, in the 0-1000 ns range (thanks to `sort` which places it at the front). In the case 5 is not present, we need to check every entry to be sure, and we end up in the 3000-4000 ns range.
@@ -60,14 +73,51 @@ Without the `sort`, we end up with more of a uniform distribution:
 ```
 
 ```
-samples: 2394; evals/sample: 1000; memory estimate: 0 bytes; allocs estimate: 0
+samples: 2461; evals/sample: 999; memory estimate: 0 bytes; allocs estimate: 0
 ┌ ┐
-   [   0.0, 2000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 1113
-ns [2000.0, 4000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 1273
-   [4000.0, 6000.0) ┤ 8
+   [   0.0,  500.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 364
+   [ 500.0, 1000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇ 327
+   [1000.0, 1500.0) ┤▇▇▇▇▇▇▇▇▇▇ 266
+ns [1500.0, 2000.0) ┤▇▇▇▇▇▇▇▇ 214
+   [2000.0, 2500.0) ┤▇▇▇▇▇▇▇▇ 213
+   [2500.0, 3000.0) ┤▇▇▇▇▇ 146
+   [3000.0, 3500.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 931
 └ ┘
 Counts
-min: 2.000 ns (0.00% GC); mean: 2.035 μs (0.00% GC); median: 2.215 μs (0.00% GC); max: 5.932 μs (0.00% GC).
+min: 8.842 ns (0.00% GC); mean: 1.972 μs (0.00% GC); median: 2.154 μs (0.00% GC); max: 3.364 μs (0.00% GC).
+```
+
+This function gives a somewhat more Gaussian distribution of times, kindly supplied by Mason Protter:
+
+```julia
+f() = sum((sin(i) for i in 1:round(Int, 1000 + 100*randn())))
+
+@benchmark f()
+```
+
+```
+samples: 10000; evals/sample: 3; memory estimate: 0 bytes; allocs estimate: 0
+┌ ┐
+   [     0.0,  20000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 9978
+   [ 20000.0,  40000.0) ┤ 16
+   [ 40000.0,  60000.0) ┤ 3
+   [ 60000.0,  80000.0) ┤ 0
+   [ 80000.0, 100000.0) ┤ 1
+   [100000.0, 120000.0) ┤ 1
+   [120000.0, 140000.0) ┤ 0
+   [140000.0, 160000.0) ┤ 0
+ns [160000.0, 180000.0) ┤ 0
+   [180000.0, 200000.0) ┤ 0
+   [200000.0, 220000.0) ┤ 0
+   [220000.0, 240000.0) ┤ 0
+   [240000.0, 260000.0) ┤ 0
+   [260000.0, 280000.0) ┤ 0
+   [280000.0, 300000.0) ┤ 0
+   [300000.0, 320000.0) ┤ 0
+   [320000.0, 340000.0) ┤ 1
+└ ┘
+Counts
+min: 6.889 μs (0.00% GC); mean: 9.161 μs (0.00% GC); median: 9.014 μs (0.00% GC); max: 327.208 μs (0.00% GC).
 ```
 
 See also <https://tratt.net/laurie/blog/entries/minimum_times_tend_to_mislead_when_benchmarking.html> for another example of where looking at the whole histogram can be useful in benchmarking.
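As a quick illustration of the setting this commit adds to the README, here is a minimal sketch of how a user would change the bin count; the benchmarked expression is an arbitrary stand-in, not taken from the diff.

```julia
using BenchmarkPlots   # re-exports BenchmarkTools and provides its own @benchmark

# Ask for ten histogram bins in subsequent benchmark displays
# (the default of 0 lets UnicodePlots choose the bin count automatically).
BenchmarkPlots.NBINS[] = 10

@benchmark sum(abs2, x) setup=(x = rand(1000))   # arbitrary example workload

# Restore automatic bin selection.
BenchmarkPlots.NBINS[] = 0
```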

generate_readme/README.jl

Lines changed: 9 additions & 0 deletions
@@ -11,6 +11,8 @@
 
 # Providing this functionality in BenchmarkTools itself was discussed in <https://github.com/JuliaCI/BenchmarkTools.jl/pull/180>.
 
+# Use the setting `BenchmarkPlots.NBINS[] = 10` to change the number of histogram bins used.
+
 # ## Example
 
 # One just uses `BenchmarkPlots` instead of `BenchmarkTools`, e.g.
@@ -29,4 +31,11 @@ using BenchmarkPlots
 
 @benchmark 5 ∈ v setup=(v = rand(1:10000, 10000))
 
+# This function gives a somewhat more Gaussian distribution of times, kindly supplied by Mason Protter:
+
+f() = sum((sin(i) for i in 1:round(Int, 1000 + 100*randn())))
+
+@benchmark f()
+
+
 # See also <https://tratt.net/laurie/blog/entries/minimum_times_tend_to_mislead_when_benchmarking.html> for another example of where looking at the whole histogram can be useful in benchmarking.
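The unchanged membership benchmark in this hunk is the unsorted, roughly uniform case from the README; the sorted, bimodal variant is not visible in the diff, so the second call below is a guess reconstructed from the README prose (which credits `sort` with moving a present `5` to the front), not code from the commit.

```julia
using BenchmarkPlots

# Unsorted variant, as in generate_readme/README.jl: a present 5 can sit anywhere,
# so lookup times spread fairly evenly over the scan of the vector.
@benchmark 5 ∈ v setup=(v = rand(1:10000, 10000))

# Sorted variant (reconstructed guess): a present 5 lands near the front and is found
# almost immediately, while an absent 5 still forces a scan of the whole vector,
# giving the bimodal histogram shown in the README.
@benchmark 5 ∈ v setup=(v = sort(rand(1:10000, 10000)))
```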

src/BenchmarkPlots.jl

Lines changed: 10 additions & 1 deletion
@@ -16,6 +16,13 @@ end
 # Export our own `@benchmark`
 export @benchmark
 
+"""
+    const NBINS = Ref(0)
+
+Controls the number of histogram bins used.
+When `NBINS[] <= 0`, the number is chosen automatically by UnicodePlots.
+"""
+const NBINS = Ref(0)
 
 struct BenchmarkPlot
     trial::BenchmarkTools.Trial
@@ -44,7 +51,9 @@ function Base.show(io::IO, ::MIME"text/plain", bp::BenchmarkPlot)
         meanstr = "N/A"
     end
     println(io, "samples: ", length(t), "; evals/sample: ", t.params.evals, "; memory estimate: ", memorystr, "; allocs estimate: ", allocsstr)
-    show(io, histogram(t.times, ylabel="ns", xlabel="Counts", nbins=5))
+
+    bin_arg = NBINS[] <= 0 ? NamedTuple() : (; nbins=NBINS[])
+    show(io, histogram(t.times; ylabel="ns", xlabel="Counts", bin_arg...))
     println(io)
     print(io, "min: ", minstr, "; mean: ", meanstr, "; median: ", medstr, "; max: ", maxstr, ".")
 end
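The new `bin_arg` line works by keyword splatting: splatting an empty `NamedTuple` passes no extra keyword at all, so `UnicodePlots.histogram` falls back to automatic binning, while `(; nbins=N)` forwards an explicit bin count. A standalone sketch of that pattern, using made-up sample data in place of a real `Trial`:

```julia
using UnicodePlots

times = 100 .+ 20 .* rand(10_000)   # stand-in for trial.times (made-up data)

for nbins_setting in (0, 10)
    # Non-positive setting: splat an empty NamedTuple so no nbins keyword is passed
    # and histogram picks the bin count itself; otherwise forward nbins explicitly.
    bin_arg = nbins_setting <= 0 ? NamedTuple() : (; nbins=nbins_setting)
    show(stdout, histogram(times; ylabel="ns", xlabel="Counts", bin_arg...))
    println()
end
```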

test/runtests.jl

Lines changed: 19 additions & 3 deletions
@@ -1,10 +1,8 @@
 using BenchmarkPlots
 using Test
 
-
-@testset "BenchmarkPlots.jl" begin
+function tests()
     bp = @benchmark 1+1
-
     output = sprint(show, MIME"text/plain"(), bp)
 
     # Don't want to test the exact string since the stats will
@@ -25,4 +23,22 @@ using Test
     @test n_matches(r"% GC") == 4
     # Corners of the plot
     @test n_matches(r"┌") == n_matches(r"┐") == n_matches(r"└") == n_matches(r"┘") == 1
+    return nothing
+end
+
+function tests(nbins)
+    pre = BenchmarkPlots.NBINS[]
+    BenchmarkPlots.NBINS[] = nbins
+    try
+        tests()
+    finally
+        BenchmarkPlots.NBINS[] = pre
+    end
+    return nothing
+end
+
+@testset "BenchmarkPlots.jl" begin
+    tests()
+    tests(10)
+    tests(-1)
 end
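The save/override/restore pattern in `tests(nbins)` also works as a user-level idiom for changing the bin count around a single benchmark without leaving `NBINS` modified; the `with_nbins` helper below is hypothetical, not part of the package.

```julia
using BenchmarkPlots

# Hypothetical convenience wrapper mirroring the test helper: temporarily override
# NBINS, run the supplied function, and always restore the previous value.
function with_nbins(f, nbins)
    previous = BenchmarkPlots.NBINS[]
    BenchmarkPlots.NBINS[] = nbins
    try
        return f()
    finally
        BenchmarkPlots.NBINS[] = previous
    end
end

with_nbins(20) do
    @benchmark 1 + 1
end
```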
