docs/src/manual.md (70 additions, 1 deletion)
@@ -86,7 +86,18 @@ You can pass the following keyword arguments to `@benchmark`, `@benchmarkable`,

- `time_tolerance`: The noise tolerance for the benchmark's time estimate, as a percentage. This is utilized after benchmark execution, when analyzing results. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.time_tolerance = 0.05`.
- `memory_tolerance`: The noise tolerance for the benchmark's memory estimate, as a percentage. This is utilized after benchmark execution, when analyzing results. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.memory_tolerance = 0.01`.

The following keyword arguments relate to [Running custom benchmarks]; they are experimental and subject to change. See [Running custom benchmarks] for further details:

- `run_customizable_func_only`: If `true`, only the customizable benchmark is run. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.run_customizable_func_only = false`.
- `enable_customizable_func`: If `:ALL`, the customizable benchmark runs on every sample; if `:LAST`, it runs only on the last sample; if `:FALSE`, it is never run. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.enable_customizable_func = :FALSE`.
- `customizable_gcsample`: If `true`, runs `gc()` before each sample of the customizable benchmark. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.customizable_gcsample = false`.
- `setup_prehook`: Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.setup_prehook = _nothing_func`, which returns nothing.
- `teardown_posthook`: Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.teardown_posthook = _nothing_func`, which returns nothing.
- `sample_result`: Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.sample_result = _nothing_func`, which returns nothing.
- `prehook`: Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.prehook = _nothing_func`, which returns nothing.
- `posthook`: Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.posthook = _nothing_func`, which returns nothing.

To change the default values of the above fields, one can mutate the fields of `BenchmarkTools.DEFAULT_PARAMETERS` (this is not supported for `prehook` and `posthook`), for example:
```julia
# change default for `seconds` to 2.5
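# A minimal sketch of the mutation the comment above describes (assuming the
# default-parameter fields behave as documented; any of the fields listed
# earlier can be set the same way, except `prehook` and `posthook`):
BenchmarkTools.DEFAULT_PARAMETERS.seconds = 2.5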
@@ -347,10 +358,20 @@ BenchmarkTools.Trial

  gcsample: Bool false
  time_tolerance: Float64 0.05
  memory_tolerance: Float64 0.01
  run_customizable_func_only: Bool false
  enable_customizable_func: Symbol FALSE
  customizable_gcsample: Bool false
  setup_prehook: _nothing_func (function of type typeof(BenchmarkTools._nothing_func))
  teardown_posthook: _nothing_func (function of type typeof(BenchmarkTools._nothing_func))
  sample_result: _nothing_func (function of type typeof(BenchmarkTools._nothing_func))
  prehook: _nothing_func (function of type typeof(BenchmarkTools._nothing_func))
  posthook: _nothing_func (function of type typeof(BenchmarkTools._nothing_func))
As you can see from the above, a couple of different timing estimates are pretty-printed with the `Trial`. You can calculate these estimates yourself using the `minimum`, `maximum`, `median`, `mean`, and `std` functions (Note that `median`, `mean`, and `std` are reexported in `BenchmarkTools` from `Statistics`):
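For instance, assuming BenchmarkTools is loaded, the estimates for a collected `Trial` can be computed like so (a sketch; the exact numbers will vary by machine):

```julia
using BenchmarkTools
using Statistics  # median, mean, and std are reexported by BenchmarkTools

t = @benchmark sum(rand(1000))

minimum(t)  # best-case sample, usually the most noise-robust estimate
maximum(t)  # worst-case sample
median(t)   # robust central estimate
mean(t)     # average, sensitive to outliers
std(t)      # spread across samples
```

Each of these returns a `TrialEstimate` that pretty-prints its time, GC time, memory, and allocation figures.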
@@ -1008,3 +1029,51 @@ This will display each `Trial` as a violin plot.

- BenchmarkTools attempts to be robust against machine noise occurring between *samples*, but BenchmarkTools can't do very much about machine noise occurring between *trials*. To cut down on the latter kind of noise, it is advised that you dedicate CPUs and memory to the benchmarking Julia process by using a shielding tool such as [cset](http://manpages.ubuntu.com/manpages/precise/man1/cset.1.html).
- On some machines, for some versions of BLAS and Julia, the number of BLAS worker threads can exceed the number of available cores. This can occasionally result in scheduling issues and inconsistent performance for BLAS-heavy benchmarks. To fix this issue, you can use `BLAS.set_num_threads(i::Int)` in the Julia REPL to ensure that the number of BLAS threads is equal to or less than the number of available cores.
- `@benchmark` is evaluated in global scope, even if called from local scope.
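The global-scope point above is why benchmark inputs are typically interpolated with `$`; a brief sketch:

```julia
using BenchmarkTools

x = rand(1000)

# `x` is treated as an untyped global inside the benchmark expression:
@benchmark sum(x)

# Interpolating with `$` embeds the value into the benchmark,
# avoiding global-variable overhead:
@benchmark sum($x)
```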
## Experimental - Running custom benchmarks

If you want to run code during a benchmark, e.g. to collect different metrics, say using perf, you can configure a custom benchmark.

A custom benchmark runs in the following way, where `benchmark_function` is the function we are benchmarking:

The result from `sample_result` is collected and can be accessed from the `customizable_result` field of `Trial`, the type of a benchmark result.
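As a hypothetical sketch (assuming the experimental keyword arguments described earlier), the collected results could be read back like this:

```julia
using BenchmarkTools

# Run only the customizable benchmark, collecting on every sample.
t = @benchmark sin(1) run_customizable_func_only=true enable_customizable_func=:ALL

t.customizable_result  # whatever `sample_result` returned for each sample
```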
Note that `prehook` and `posthook` should be as simple and fast as possible, moving any heavy lifting to `setup_prehook`, `sample_result`, and `teardown_posthook`.

As an example, these are the hooks to replicate the normal benchmarking functionality:

```julia
setup_prehook(_) = nothing

samplefunc_prehook() = (Base.gc_num(), time_ns())

samplefunc_posthook = samplefunc_prehook

function samplefunc_sample_result(params, _, prehook_result, posthook_result)