`benchmarks/scripts/criterion-drop-in-replacement/README.md` (2 additions, 2 deletions)
```diff
@@ -3,7 +3,7 @@
 This directory contains a Python re-implementation of the Haskell Criterion methodology to run executables (instead of Haskell functions, like Criterion normally does).
 One could call it "benchrunner-runner" because the purpose is to run `benchrunner` many times and calculate the appropriate run time statistics.
 
-We take as input some program `prog` with the following interface:
+We take as input a path to some program `prog` (meant to be the `benchrunner`) with the following interface:
 
 -`prog` takes `iters` as a command-line argument,
 -`prog` measures run time of a function of interest in a tight loop that repeats `iters` many times, and finally
```
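For illustration, here is a minimal sketch of what driving that interface from Python could look like. The function name `run_prog` and the assumption that `prog` reports its measured time on stdout are mine, not something the README specifies:

```python
import subprocess

def run_prog(prog_path, iters, *bench_args):
    """Run `prog` once with `iters` as its first command-line argument
    and return the time it reports. That `prog` prints the loop's total
    run time in seconds on stdout is an assumption for this sketch."""
    result = subprocess.run(
        [prog_path, str(iters), *map(str, bench_args)],
        capture_output=True, text=True, check=True,
    )
    return float(result.stdout.strip())

# Matching the example in the next hunk,
# run_prog("./benchrunner", 100, "Quicksort", "Seq", 2000)
# would call `benchrunner 100 Quicksort Seq 2000`.
```

The runner can then call this for various `iters` values and compute its statistics from the resulting measurements.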
```diff
@@ -29,7 +29,7 @@ will call `benchrunner iters Quicksort Seq 2000` for various `iters`.
 
 `sweep_seq` performs a logarithmic sweep over different array sizes, invoking `criterionmethdology.py` at each point.
 
-## Arightmetic vs geometric mean
+## Arithmetic vs geometric mean
 
 Since performance data is non-negative and judged multiplicatively (twice as good means numbers are half, twice as bad means numbers are doubled; these are all *factors*), the geomean and geo-standard-deviation may make more sense theoretically.
 However, from some testing, the geomean seems to vary wildly for programs with fleeting execution times, even between repeated runs with the same parameters.
```
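To make the multiplicative-judgment point concrete, here is a small worked example (the numbers are invented): if one run changes by a factor of 0.5 (twice as fast) and another by a factor of 2.0 (twice as slow), the geomean reports no net change, while the arithmetic mean reports a 25% slowdown.

```python
import math
import statistics

# Invented speedup factors: one run twice as fast, one twice as slow.
factors = [0.5, 2.0]

print(statistics.mean(factors))            # 1.25 -- arithmetic mean: "25% slower"
print(statistics.geometric_mean(factors))  # 1.0  -- geomean: "no net change"

# Geometric standard deviation: exponentiate the stddev of the logs.
geo_sd = math.exp(statistics.stdev([math.log(f) for f in factors]))
print(geo_sd)                              # ~2.67
```

One plausible reading of the instability the README observes: the geomean is the exponentiated mean of log-times, and for fleeting execution times the *relative* noise is large, so the logs, and with them the geomean, swing hard between runs.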
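Finally, a sketch of the shape of the `sweep_seq` loop described in the second hunk. The size range and the command-line interface of `criterionmethdology.py` are assumptions, not taken from the repo:

```python
import subprocess

# Powers of two give logarithmically spaced array sizes; the range is invented.
for size in (2 ** k for k in range(4, 16)):
    # One methodology run per array size; the script's actual CLI may differ.
    subprocess.run(
        ["python3", "criterionmethdology.py", "./benchrunner",
         "Quicksort", "Seq", str(size)],
        check=True,
    )
```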