add an option for limiting markdown content size
calculate relative performance with different baselines
calculate relative performance using only already saved data
group results according to suite names and explicit groups
add multiple data columns if multiple --compare specified
scripts/benchmarks/README.md
9 additions & 0 deletions
@@ -37,11 +37,20 @@ By default, the benchmark results are not stored. To store them, use the option
To compare a benchmark run with a previously stored result, use the option `--compare <name>`. You can compare with more than one result.
+ In a markdown output file (see below), listing more than two `--compare` options results in displaying the performance times. If only one `--compare` option is specified, the relative performance of the provided results is calculated against the previously saved `baseline`. You can compare your data against results other than `baseline` by using:
If no `--compare` option is specified, the benchmark run is compared against a previously stored `baseline`.
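For illustration only, here is a minimal sketch of what a relative-performance calculation against a saved baseline can look like. The function name and the direction of the ratio are assumptions, not code taken from the benchmark scripts:

```python
# Illustrative sketch: assumes relative performance is the ratio of the saved
# baseline's median time to the current run's median time, so values above 1.0
# mean the current run is faster. The names below are hypothetical.

def relative_performance(current_median: float, baseline_median: float) -> float:
    """Return how the current result compares to the baseline (1.0 == equal)."""
    if current_median <= 0:
        raise ValueError("median time must be positive")
    return baseline_median / current_median

# Example: baseline took 12.0 ms, the current run took 10.0 ms -> 1.2x faster.
print(f"{relative_performance(10.0, 12.0):.2f}x")  # 1.20x
```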
Baseline, as well as baseline-v2 (for the level-zero adapter v2), is updated automatically during a nightly job. The results
are stored [here](https://oneapi-src.github.io/unified-runtime/benchmark_results.html).
+
+ ## Output formats
+ You can display the results in the form of an HTML file by using `--output-html` and a markdown file by using `--output-markdown`. Due to character limits for posting PR comments, the final content of the markdown file might be reduced. To obtain the full markdown output, use `--output-markdown full`.
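The reduction mentioned above is driven by the character limit on PR comments. Below is a hypothetical sketch of how markdown output could be trimmed to such a limit; the limit constant and the function name are assumptions, not taken from the scripts:

```python
# Hypothetical sketch of trimming markdown output to a comment size limit.
# GitHub PR comments are capped at 65536 characters; whether the benchmark
# scripts use exactly this value is an assumption here.
COMMENT_CHAR_LIMIT = 65536

def fit_markdown(full_markdown: str, limit: int = COMMENT_CHAR_LIMIT) -> str:
    """Return the markdown unchanged if it fits, otherwise truncate it and
    append a note pointing at the full output."""
    if len(full_markdown) <= limit:
        return full_markdown
    note = "\n\n_Output truncated; rerun with `--output-markdown full` for the complete report._"
    return full_markdown[: limit - len(note)] + note
```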
parser.add_argument("--no-rebuild", help='Do not rebuild the benchmarks from scratch.', action="store_true")
parser.add_argument("--env", type=str, help='Use env variable for a benchmark run.', action="append", default=[])
parser.add_argument("--save", type=str, help='Save the results for comparison under a specified name.')
- parser.add_argument("--compare", type=str, help='Compare results against previously saved data.', action="append", default=["baseline"])
+ parser.add_argument("--compare", type=str, help='Compare results against previously saved data.', action="append", default=[options.default_baseline])
parser.add_argument("--iterations", type=int, help='Number of times to run each benchmark to select a median value.', default=options.iterations)
parser.add_argument("--stddev-threshold", type=float, help='If stddev pct is above this threshold, rerun all iterations', default=options.stddev_threshold)
parser.add_argument("--timeout", type=int, help='Timeout for individual benchmarks in seconds.', default=options.timeout)
parser.add_argument("--exit-on-failure", help='Exit on first failure.', action="store_true")
parser.add_argument("--compare-type", type=str, choices=[e.valueforeinCompare], help='Compare results against previously saved data.', default=Compare.LATEST.value)
parser.add_argument("--compare-max", type=int, help='How many results to read for comparisions', default=options.compare_max)
272
+
parser.add_argument("--output-markdown", nargs='?', const=options.output_markdown, help='Specify whether markdown output should fit the content size limit for request validation')
parser.add_argument("--output-html", help='Create HTML output', action="store_true", default=False)
parser.add_argument("--dry-run", help='Do not run any actual benchmarks', action="store_true", default=False)
parser.add_argument("--compute-runtime", nargs='?', const=options.compute_runtime_tag, help="Fetch and build compute runtime")
parser.add_argument("--iterations-stddev", type=int, help="Max number of iterations of the loop calculating stddev after completed benchmark runs", default=options.iterations_stddev)
parser.add_argument("--build-igc", help="Build IGC from source instead of using the OS-installed version", action="store_true", default=options.build_igc)
+ parser.add_argument("--relative-perf", type=str, help="The name of the results which should be used as a baseline for metrics calculation", default=options.current_run_name)
+ parser.add_argument("--new-base-name", help="New name of the default baseline to compare", type=str, default='')