Benchmarking is used to estimate and compare the execution speed of
numerical algorithms and programs.
The package [`benchmark_runner`][benchmark_runner] includes helper
functions for writing *inline* micro-benchmarks with the option of
printing a score **histogram** and reporting the score **mean** ±
**standard deviation**, and the score **median** ± **interquartile range**.

Write inline benchmarks using the functions [`benchmark`][benchmark] and
[`asyncBenchmark`][asyncBenchmark]; the latter is used to define
asynchronous benchmarks.

```Dart
import 'package:benchmark_runner/benchmark_runner.dart';

/// Returns the value [t] after waiting for [duration].
Future<T> later<T>(T t, Duration duration) => Future.delayed(duration, () => t);

void main(List<String> args) async {
  await group('Wait for duration', () async {
    await asyncBenchmark('5ms', () async {
      await later<int>(27, Duration(milliseconds: 5));
    }, scoreEmitter: MeanEmitter());
  });

  group('Set', () async {
    // ... further benchmarks ...
  });
}
```

To export benchmark scores use the sub-command `export`:

```
$ dart run benchmark_runner export --outputDir=scores --extension=csv searchDirectory
```

In the example above, `searchDirectory` is scanned for `*_benchmark.dart`
files. For each benchmark file, a corresponding file `*_benchmark.csv` is
written to the directory `scores`.

Note: The directory must exist and the user must have write access.
When exporting benchmark scores to a file and the emitter output is colorized,
it is recommended to use the option `--isMonochrome` to
avoid spurious characters due to the use of Ansi modifiers.

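For example, the export command shown earlier could be combined with the
monochrome option as follows (a sketch; the exact placement of the flag may
differ):

```
$ dart run benchmark_runner export --isMonochrome --outputDir=scores --extension=csv searchDirectory
```
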
The functions [`benchmark`][benchmark] and
[`asyncBenchmark`][asyncBenchmark] accept the optional parameter `scoreEmitter`.
The parameter expects an object of type `ScoreEmitter` and
can be used to customize the score report, e.g.
to make the score format more suitable for writing to a file:

```Dart
import 'package:benchmark_runner/benchmark_runner.dart';

class CustomEmitter implements ScoreEmitter {
  @override
  void emit({required String description, required Score score}) {
    print('# Mean Standard Deviation');
    print('${score.stats.mean} ${score.stats.stdDev}');
  }
}

void main() {
  benchmark(
    'construct list',
    () {
      var list = <int>[for (var i = 0; i < 1000; ++i) i];
    },
    scoreEmitter: CustomEmitter(),
  );
}
```

- Benchmark reports may include the benchmark runtime as reported by
[`benchmark_harness`][benchmark_harness] and the score statistics.

- By default, [`benchmark`][benchmark] and
[`asyncBenchmark`][asyncBenchmark] report score statistics. To print a report
similar to that produced by [`benchmark_harness`][benchmark_harness], use the
optional argument `scoreEmitter: MeanEmitter()`.

- Color output can be switched off with the option `--isMonochrome` or `-m`
when calling the benchmark runner. When executing a single benchmark file, the
corresponding option is `--define=isMonochrome=true`.

- The default colors used to style benchmark reports are best suited
for a dark terminal background.
They can, however, be altered by setting the *static* variables defined by
the class [`ColorProfile`][ColorProfile]. In the example below, the styling of
error messages and the mean value is altered.
  ```Dart
  import 'package:ansi_modifier/ansi_modifier.dart';
  import 'package:benchmark_runner/benchmark_runner.dart';

  void adjustColorProfile() {
    ColorProfile.error = Ansi.red + Ansi.bold;
    ColorProfile.mean = Ansi.green + Ansi.italic;
  }

  void main(List<String> args) {
    // Call the function to apply the new custom color profile.
    adjustColorProfile();
  }
  ```

- When running **asynchronous** benchmarks, the scores are printed in order of
completion. To print the scores in sequential order (as they are listed in the
benchmark executable), it is required to *await* the completion of the async
benchmark functions and the enclosing group, as illustrated in the sketch
following this list.

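Below is a minimal sketch of awaiting async benchmarks and their enclosing
groups so that scores are printed in the order of declaration; the group
labels and delays are arbitrary:

```Dart
import 'package:benchmark_runner/benchmark_runner.dart';

void main(List<String> args) async {
  // Awaiting each benchmark and its enclosing group preserves the
  // order in which the scores are printed.
  await group('First group', () async {
    await asyncBenchmark('benchmark A', () async {
      await Future<void>.delayed(Duration(milliseconds: 2));
    });
  });

  await group('Second group', () async {
    await asyncBenchmark('benchmark B', () async {
      await Future<void>.delayed(Duration(milliseconds: 1));
    });
  });
}
```
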
## Score Sampling

In order to calculate benchmark score *statistics*, a sample of scores is
required. The question is how to generate the score sample while minimizing
systematic errors (like overheads) and keeping the total benchmark run time
within acceptable limits.

<details><summary>Click to show details.</summary>

In a first step, benchmark scores are estimated using the
functions [`warmup`][warmup] or [`warmupAsync`][warmupAsync].
The function [`BenchmarkHelper.sampleSize`][sampleSize]
uses the score estimate to determine the sampling procedure.

### 1. Default Sampling Method
The graph below shows the sample size (orange curve) as calculated by the
function [`BenchmarkHelper.sampleSize`][sampleSize], together with the number
of runs each sample score is preliminarily averaged over (cyan curve).
For example:

* ticks > 1e5 => No preliminary averaging of sample scores.

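To inspect the sampling parameters selected for a given score estimate, the
function can be called directly. The sketch below assumes the
`(outer: ..., inner: ...)` record shape shown in the custom sampling method
that follows; the tick values and the interpretation of the record fields are
inferred, not taken from the package documentation:

```Dart
import 'package:benchmark_runner/benchmark_runner.dart';

void main() {
  // Clock ticks measured for a single run of the benchmarked function.
  for (final ticks in [100, 10000, 1000000]) {
    final size = BenchmarkHelper.sampleSize(ticks);
    // `outer`: number of sample scores; `inner`: preliminary averaging runs.
    print('ticks: $ticks -> outer: ${size.outer}, inner: ${size.inner}');
  }
}
```
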
### 2. Custom Sampling Method
To customize the score sampling process, the static function
[`BenchmarkHelper.sampleSize`][sampleSize] can be replaced with a custom function:
```Dart
// Generates a sample containing 100 benchmark scores.
BenchmarkHelper.sampleSize = (int clockTicks) {
  return (outer: 100, inner: 1);
};
```

The command above launches a process and runs a [`gnuplot`][gnuplot] script.
For this reason, the program [`gnuplot`][gnuplot] must be installed (with
the `qt` terminal enabled).

</details>

## Contributions
