
Commit 863c514

set up Documenter (#209)

1 parent 9a90d07

File tree: 9 files changed, 1287 additions and 1 deletion.

.github/workflows/Documentation.yml

Lines changed: 23 additions & 0 deletions

```yaml
name: Documentation

on:
  push:
    branches:
      - master
    tags: '*'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@latest
        with:
          version: 1
      - name: Install dependencies
        run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
      - name: Build and deploy
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token
          DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # For authentication with SSH deploy key
        run: julia --project=docs/ docs/make.jl
```
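The `DOCUMENTER_KEY` secret referenced in the workflow is an SSH deploy key, which this commit does not create. As a sketch of one common way to generate it, assuming the separate DocumenterTools package (not a dependency added by this commit):

```julia
# One-off local setup, not part of this commit: generate an SSH deploy key
# pair for Documenter using DocumenterTools (assumed to be installed).
using DocumenterTools

# Prints two things: a public key to add under the repository's deploy keys,
# and a base64-encoded private key to store as the DOCUMENTER_KEY secret.
DocumenterTools.genkeys(user="JuliaCI", repo="BenchmarkTools.jl")
```

With the secret in place, `deploydocs` (called in docs/make.jl) can push the built site to the `gh-pages` branch from CI.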
.gitignore

Lines changed: 5 additions & 1 deletion

```diff
@@ -2,4 +2,8 @@
 *.jl.*.cov
 *.jl.mem
 benchmark/params.jld
-test/x.json
+test/x.json
+docs/Manifest.toml
+docs/build
+docs/src/assets/indigo.css
+Manifest.toml
```

docs/Project.toml

Lines changed: 4 additions & 0 deletions

```toml
[deps]
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
DocThemeIndigo = "8bac0ac5-51bf-41f9-885e-2bf1ac2bec5f"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
```

docs/make.jl

Lines changed: 25 additions & 0 deletions

```julia
using BenchmarkTools
using Documenter
using DocThemeIndigo
indigo = DocThemeIndigo.install(BenchmarkTools)

makedocs(;
    modules=[BenchmarkTools],
    repo="https://github.com/JuliaCI/BenchmarkTools.jl/blob/{commit}{path}#{line}",
    sitename="BenchmarkTools.jl",
    format=Documenter.HTML(;
        prettyurls=get(ENV, "CI", "false") == "true",
        canonical="https://JuliaCI.github.io/BenchmarkTools.jl",
        assets=String[indigo],
    ),
    pages=[
        "Home" => "index.md",
        "Manual" => "manual.md",
        "Linux-based environments" => "linuxtips.md",
        "Reference" => "reference.md",
    ],
)

deploydocs(;
    repo="github.com/JuliaCI/BenchmarkTools.jl",
)
```
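To reproduce the CI build locally, the same two commands the workflow runs can be executed from the repository root (a sketch; it assumes a working Julia installation):

```shell
# Develop the package into the docs environment and install the doc dependencies.
julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'

# Build the documentation; the rendered site lands in docs/build/,
# which the updated .gitignore excludes from version control.
julia --project=docs/ docs/make.jl
```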

docs/src/index.md

Lines changed: 74 additions & 0 deletions

# BenchmarkTools

BenchmarkTools makes **performance tracking of Julia code easy** by supplying a framework for **writing and running groups of benchmarks** as well as **comparing benchmark results**.

This package is used to write and run the benchmarks found in [BaseBenchmarks.jl](https://github.com/JuliaCI/BaseBenchmarks.jl).

The CI infrastructure for automated performance testing of the Julia language is not in this package, but can be found in [Nanosoldier.jl](https://github.com/JuliaCI/Nanosoldier.jl).

## Quick Start

The primary macro provided by BenchmarkTools is `@benchmark`:

```julia
julia> using BenchmarkTools

# The `setup` expression is run once per sample, and is not included in the
# timing results. Note that each sample can require multiple benchmark
# kernel evaluations. See the BenchmarkTools manual for details.
julia> @benchmark sin(x) setup=(x=rand())
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     4.248 ns (0.00% GC)
  median time:      4.631 ns (0.00% GC)
  mean time:        5.502 ns (0.00% GC)
  maximum time:     60.995 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000
```

For quick sanity checks, one can use the [`@btime` macro](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#benchmarking-basics), which is a convenience wrapper around `@benchmark` whose output is analogous to Julia's built-in [`@time` macro](https://docs.julialang.org/en/v1/base/base/#Base.@time):

```julia
julia> @btime sin(x) setup=(x=rand())
  4.361 ns (0 allocations: 0 bytes)
0.49587200950472454
```

If the expression you want to benchmark depends on external variables, you should use [`$` to "interpolate"](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#interpolating-values-into-benchmark-expressions) them into the benchmark expression to
[avoid the problems of benchmarking with globals](https://docs.julialang.org/en/v1/manual/performance-tips/#Avoid-global-variables).
Essentially, any interpolated variable `$x` or expression `$(...)` is "pre-computed" before benchmarking begins:

```julia
julia> A = rand(3,3);

julia> @btime inv($A);            # we interpolate the global variable A with $A
  1.191 μs (10 allocations: 2.31 KiB)

julia> @btime inv($(rand(3,3)));  # interpolation: the rand(3,3) call occurs before benchmarking
  1.192 μs (10 allocations: 2.31 KiB)

julia> @btime inv(rand(3,3));     # the rand(3,3) call is included in the benchmark time
  1.295 μs (11 allocations: 2.47 KiB)
```

Sometimes, interpolating variables into very simple expressions can give the compiler more information than you intended, causing it to "cheat" the benchmark by hoisting the calculation out of the benchmark code:

```julia
julia> a = 1; b = 2
2

julia> @btime $a + $b
  0.024 ns (0 allocations: 0 bytes)
3
```

As a rule of thumb, if a benchmark reports that it took less than a nanosecond to perform, this hoisting probably occurred. You can avoid this by referencing and dereferencing the interpolated variables:

```julia
julia> @btime $(Ref(a))[] + $(Ref(b))[]
  1.277 ns (0 allocations: 0 bytes)
3
```

As described in the [Manual](@ref), the BenchmarkTools package supports many other features, both for additional output and for more fine-grained control over the benchmarking process.
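One example of that fine-grained control is the set of tunable execution parameters; a minimal sketch, assuming the standard `samples`, `evals`, and `seconds` keyword arguments to `@benchmark`:

```julia
using BenchmarkTools

# Override the default budget (10000 samples within roughly 5 seconds) with
# at most 100 samples of 10 evaluations each, under a 1-second time limit.
t = @benchmark sin(x) setup=(x=rand()) samples=100 evals=10 seconds=1

# A Trial records one timing per sample, in nanoseconds.
@show length(t.times) minimum(t.times)
```

Capping the budget like this trades statistical robustness for speed, which can be useful when iterating on a benchmark suite.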
