This repository was archived by the owner on Jul 4, 2023. It is now read-only.

Commit 3afedc2

initial functionality (#1)

* initial functionality
* fix test, add badges
* test on more OS's
* exclude 32-bit macOS
* add note
* update workflows

1 parent 0bc6d38 commit 3afedc2

File tree

7 files changed: +249 −6 lines changed

.github/workflows/CI.yml

Lines changed: 59 additions & 0 deletions (new file)

```yaml
name: CI
on:
  push:
    branches:
      - master
      - main
      - /^release-.*$/
    paths:
      - '.github/workflows/CI.yml'
      - 'test/**'
      - 'src/**'
      - 'Project.toml'
  pull_request:
    types: [opened, synchronize, reopened]
    paths:
      - '.github/workflows/CI.yml'
      - 'test/**'
      - 'src/**'
      - 'Project.toml'
  release:
  workflow_dispatch:
jobs:
  test:
    name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ github.event_name }}
    runs-on: ${{ matrix.os }}
    timeout-minutes: 30
    strategy:
      fail-fast: false
      matrix:
        version: ['1.0', '1']
        arch: [x64, x86]
        os: [ubuntu-latest, windows-latest, macOS-latest]
        # 32-bit Julia binaries are not available on macOS
        exclude:
          - os: macOS-latest
            arch: x86
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@v1
        with:
          version: ${{ matrix.version }}
          arch: ${{ matrix.arch }}
      - uses: actions/cache@v1
        env:
          cache-name: cache-artifacts
        with:
          path: ~/.julia/artifacts
          key: ${{ runner.os }}-test-${{ env.cache-name }}-${{ hashFiles('**/Project.toml') }}
          restore-keys: |
            ${{ runner.os }}-test-${{ env.cache-name }}-
            ${{ runner.os }}-test-
            ${{ runner.os }}-
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1
      - uses: julia-actions/julia-processcoverage@v1
      - uses: codecov/codecov-action@v1
        with:
          file: lcov.info
```

.github/workflows/CompatHelper.yml

Lines changed: 13 additions & 4 deletions

```diff
@@ -7,10 +7,19 @@ jobs:
   CompatHelper:
     runs-on: ubuntu-latest
     steps:
-      - name: Pkg.add("CompatHelper")
-        run: julia -e 'using Pkg; Pkg.add("CompatHelper")'
-      - name: CompatHelper.main()
+      - name: "Install CompatHelper"
+        run: |
+          import Pkg
+          name = "CompatHelper"
+          uuid = "aa819f21-2bde-4658-8897-bab36330d9b7"
+          version = "2"
+          Pkg.add(; name, uuid, version)
+        shell: julia --color=yes {0}
+      - name: "Run CompatHelper"
+        run: |
+          import CompatHelper
+          CompatHelper.main()
+        shell: julia --color=yes {0}
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           COMPATHELPER_PRIV: ${{ secrets.DOCUMENTER_KEY }}
-        run: julia -e 'using CompatHelper; CompatHelper.main()'
```

Project.toml

Lines changed: 8 additions & 0 deletions

```diff
@@ -3,7 +3,15 @@ uuid = "a80a1652-aad8-438d-b80b-ecb1a674e33b"
 authors = ["Eric Hanson <[email protected]> and contributors"]
 version = "0.1.0"
 
+[deps]
+BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
+Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
+Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
+UnicodePlots = "b8865327-cd53-5732-bb35-84acbb429228"
+
 [compat]
+BenchmarkTools = "0.7"
+UnicodePlots = "1.3"
 julia = "1"
 
 [extras]
```

README.md

Lines changed: 59 additions & 0 deletions

[![CI](https://github.com/ericphanson/BenchmarkPlots.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/ericphanson/BenchmarkPlots.jl/actions/workflows/CI.yml)
[![codecov](https://codecov.io/gh/ericphanson/BenchmarkPlots.jl/branch/main/graph/badge.svg?token=v0aca89xRi)](https://codecov.io/gh/ericphanson/BenchmarkPlots.jl)

# BenchmarkPlots

Wraps [BenchmarkTools.jl](https://github.com/JuliaCI/BenchmarkTools.jl/) to provide a UnicodePlots.jl-powered `show` method for `@benchmark`. This is accomplished by a custom `@benchmark` macro which wraps the output in a `BenchmarkPlot` struct with a custom `show` method.

This means one should not call `using` on both BenchmarkPlots and BenchmarkTools in the same namespace, or else the two `@benchmark` macros will conflict ("WARNING: using `BenchmarkTools.@benchmark` in module Main conflicts with an existing identifier.").

However, BenchmarkPlots re-exports all the exports of BenchmarkTools, so you can simply call `using BenchmarkPlots`.

Based on <https://github.com/JuliaCI/BenchmarkTools.jl/pull/180>.

## Example

One just uses `BenchmarkPlots` instead of `BenchmarkTools`, e.g.

```julia
julia> using BenchmarkPlots

julia> @benchmark sin(x) setup=(x=rand())
samples: 10000; evals/sample: 1000; memory estimate: 0 bytes; allocs estimate: 0
                ┌                                        ┐
   [ 0.0,  5.0) ┤ 131
   [ 5.0, 10.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 9848
ns [10.0, 15.0) ┤ 18
   [15.0, 20.0) ┤ 2
   [20.0, 25.0) ┤ 1
                └                                        ┘
                                Counts
min: 4.917 ns (0.00% GC); mean: 5.578 ns (0.00% GC); median: 5.042 ns (0.00% GC); max: 22.375 ns (0.00% GC).
```

That benchmark does not have a very interesting distribution, but it's not hard to find more interesting cases.

```julia
julia> @benchmark 5 ∈ v setup=(v = sort(rand(1:10000, 10000)))
samples: 3169; evals/sample: 1000; memory estimate: 0 bytes; allocs estimate: 0
                     ┌                                        ┐
   [   0.0, 1000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 2020
ns [1000.0, 2000.0) ┤ 0
   [2000.0, 3000.0) ┤ 0
   [3000.0, 4000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 1149
                     └                                        ┘
                                     Counts
min: 1.875 ns (0.00% GC); mean: 1.152 μs (0.00% GC); median: 4.708 ns (0.00% GC); max: 3.588 μs (0.00% GC).
```

Here, we see a bimodal distribution. When `5` is indeed in the vector, we find it very quickly, in the 0-1000 ns range (thanks to `sort`, which places it near the front). When `5` is not present, we need to check every entry to be sure, and we end up in the 3000-4000 ns range.

Without the `sort`, we end up with more of a uniform distribution:

```julia
julia> @benchmark 5 ∈ v setup=(v = rand(1:10000, 10000))
samples: 2379; evals/sample: 1000; memory estimate: 0 bytes; allocs estimate: 0
                     ┌                                        ┐
   [   0.0, 1000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 619
ns [1000.0, 2000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 458
   [2000.0, 3000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇ 356
   [3000.0, 4000.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 946
                     └                                        ┘
                                     Counts
min: 1.917 ns (0.00% GC); mean: 2.040 μs (0.00% GC); median: 2.257 μs (0.00% GC); max: 3.552 μs (0.00% GC).
```

src/BenchmarkPlots.jl

Lines changed: 58 additions & 1 deletion

```diff
@@ -1,5 +1,62 @@
 module BenchmarkPlots
 
-# Write your package code here.
+using UnicodePlots
+using Statistics
+using Printf
+using BenchmarkTools: BenchmarkTools
+
+# Reexport everything *except* `@benchmark`
+for T in setdiff(names(BenchmarkTools), tuple(Symbol("@benchmark")))
+    @eval begin
+        using BenchmarkTools: $T
+        export $T
+    end
+end
+
+# Export our own `@benchmark`
+export @benchmark
+
+
+struct BenchmarkPlot
+    trial::BenchmarkTools.Trial
+end
+
+# borrowed some from the `show` implementation for `BenchmarkTools.Trial`
+function Base.show(io::IO, ::MIME"text/plain", bp::BenchmarkPlot)
+    t = bp.trial
+    if length(t) > 0
+        min = minimum(t)
+        max = maximum(t)
+        med = median(t)
+        avg = mean(t)
+        memorystr = string(prettymemory(memory(min)))
+        allocsstr = string(allocs(min))
+        minstr = string(prettytime(time(min)), " (", prettypercent(gcratio(min)), " GC)")
+        maxstr = string(prettytime(time(max)), " (", prettypercent(gcratio(max)), " GC)")
+        medstr = string(prettytime(time(med)), " (", prettypercent(gcratio(med)), " GC)")
+        meanstr = string(prettytime(time(avg)), " (", prettypercent(gcratio(avg)), " GC)")
+    else
+        memorystr = "N/A"
+        allocsstr = "N/A"
+        minstr = "N/A"
+        maxstr = "N/A"
+        medstr = "N/A"
+        meanstr = "N/A"
+    end
+    println(io, "samples: ", length(t), "; evals/sample: ", t.params.evals, "; memory estimate: ", memorystr, "; allocs estimate: ", allocsstr)
+    show(io, histogram(t.times, ylabel="ns", xlabel="Counts", nbins=5))
+    println(io)
+    print(io, "min: ", minstr, "; mean: ", meanstr, "; median: ", medstr, "; max: ", maxstr, ".")
+end
+
+macro benchmark(exprs...)
+    return quote
+        BenchmarkPlot(BenchmarkTools.@benchmark($(exprs...)))
+    end
+end
+
+# We vendor some pretty-printing methods from BenchmarkTools
+# so that we don't have to rely on internals.
+include("vendor.jl")
 
 end
```
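The effect of the wrapper macro can be checked at the REPL. This is a sketch under the definitions above; note that `BenchmarkPlot` itself is not exported, so it is referenced through the module (as is the `BenchmarkTools` binding the module holds).

```julia
julia> using BenchmarkPlots

julia> bp = @benchmark 1 + 1;   # semicolon suppresses the plot display

julia> bp isa BenchmarkPlots.BenchmarkPlot   # the result is the wrapper, not a Trial
true

julia> bp.trial isa BenchmarkPlots.BenchmarkTools.Trial   # underlying Trial still accessible
true
```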

src/vendor.jl

Lines changed: 29 additions & 0 deletions (new file)

```julia
gcratio(t) = ratio(gctime(t), time(t))

prettypercent(p) = string(@sprintf("%.2f", p * 100), "%")

function prettytime(t)
    if t < 1e3
        value, units = t, "ns"
    elseif t < 1e6
        value, units = t / 1e3, "μs"
    elseif t < 1e9
        value, units = t / 1e6, "ms"
    else
        value, units = t / 1e9, "s"
    end
    return string(@sprintf("%.3f", value), " ", units)
end

function prettymemory(b)
    if b < 1024
        return string(b, " bytes")
    elseif b < 1024^2
        value, units = b / 1024, "KiB"
    elseif b < 1024^3
        value, units = b / 1024^2, "MiB"
    else
        value, units = b / 1024^3, "GiB"
    end
    return string(@sprintf("%.2f", value), " ", units)
end
```
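As a quick illustration of the thresholds above (these helpers are internal to the package; the calls are shown here only for exposition):

```julia
julia> prettytime(950.0)     # t < 1e3, so stays in nanoseconds
"950.000 ns"

julia> prettytime(1234.0)    # 1e3 <= t < 1e6, so shown in microseconds
"1.234 μs"

julia> prettymemory(2048)    # 1024 <= b < 1024^2, so shown in KiB
"2.00 KiB"
```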

test/runtests.jl

Lines changed: 23 additions & 1 deletion

```diff
@@ -1,6 +1,28 @@
 using BenchmarkPlots
 using Test
 
+
 @testset "BenchmarkPlots.jl" begin
-    # Write your tests here.
+    bp = @benchmark 1+1
+
+    output = sprint(show, MIME"text/plain"(), bp)
+
+    # Don't want to test the exact string since the stats will
+    # fluctuate. So let's just test that it contains the right
+    # number of the right things, and assume they're arranged properly.
+    n_matches = r -> length(collect(eachmatch(r, output)))
+
+    # Top row: timing stats
+    @test n_matches(r"samples:") == 1
+    @test n_matches(r"evals/sample:") == 1
+    @test n_matches(r"memory estimate:") == 1
+    @test n_matches(r"allocs estimate:") == 1
+    # y-axis label + at most four summary stats
+    @test 1 <= n_matches(r"ns") <= 5
+    @test n_matches(r"Counts") == 1
+    # Summary stats
+    @test n_matches(r"min") == n_matches(r"mean") == n_matches(r"median") == n_matches(r"max") == 1
+    @test n_matches(r"% GC") == 4
+    # Corners of the plot
+    @test n_matches(r"┌") == n_matches(r"┐") == n_matches(r"└") == n_matches(r"┘") == 1
 end
```
