52 changes: 0 additions & 52 deletions .github/workflows/sycl-benchmark-aggregate.yml

This file was deleted.

19 changes: 10 additions & 9 deletions .github/workflows/sycl-ur-perf-benchmarking.yml
@@ -15,8 +15,8 @@ on:
PR no. to build SYCL from if specified: SYCL will be built from HEAD
of incoming branch used by the specified PR no.

If both pr_no and commit_hash are empty, the latest SYCL nightly build
will be used.
If both pr_no and commit_hash are empty, the latest commit in
deployment branch will be used.
required: false
default: ''
commit_hash:
@@ -64,33 +64,34 @@ on:
pr_no:
type: string
description: |
PR no. to build SYCL from:
SYCL will be built from HEAD of incoming branch.
PR no. to build SYCL from - it will be built from HEAD of incoming branch.

Leave both pr_no and commit_hash empty to use the latest commit from branch/tag this workflow started from.
required: false
default: ''
commit_hash:
type: string
description: |
Commit hash (within intel/llvm) to build SYCL from:
Commit hash (within intel/llvm) to build SYCL from.

Leave both pr_no and commit_hash empty to use latest commit.
Leave both pr_no and commit_hash empty to use the latest commit from branch/tag this workflow started from.
required: false
default: ''
save_name:
type: string
description: |
Name to use for the benchmark result:
Name to use for the benchmark result
required: false
default: ''
upload_results:
description: 'Save and upload results (to https://intel.github.io/llvm/benchmarks)'
description: Save and upload results (to https://intel.github.io/llvm/benchmarks)
type: choice
options:
- false
- true
default: true
runner:
description: Self-hosted runner to use for the benchmarks
type: choice
options:
- '["PVC_PERF"]'
4 changes: 2 additions & 2 deletions devops/scripts/benchmarks/CONTRIB.md
@@ -2,7 +2,7 @@

## Architecture

The suite is structured around three main components: Suites, Benchmarks, and Results.
The suite is structured around four main components: Suites, Benchmarks, Results, and BenchmarkMetadata.

1. **Suites:**
* Collections of related benchmarks (e.g., `ComputeBench`, `LlamaCppBench`).
@@ -170,7 +170,7 @@ The benchmark suite generates an interactive HTML dashboard that visualizes `Res
* If adding to an existing category, modify the corresponding `Suite` class (e.g., `benches/compute.py`) to instantiate and return your new benchmark in its `benchmarks()` method.
* If creating a new category, create a new `Suite` class inheriting from `benches.base.Suite`. Implement `name()` and `benchmarks()`. Add necessary `setup()` if the suite requires shared setup. Add group metadata via `additional_metadata()` if needed.
3. **Register Suite:** Import and add your new `Suite` instance to the `suites` list in `main.py`.
4. **Add to Presets:** If adding a new suite, add its `name()` to the relevant lists in `presets.py` (e.g., "Full", "Normal") so it runs with those presets.
4. **Add to Presets:** If adding a new suite, add its `name()` to the relevant lists in `presets.py` (e.g., "Full", "Normal") so it runs with those presets. Update `README.md` to include the new suite in presets' description.
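As a minimal sketch of steps 2 and 3 (the `MySuite` and `MyBenchmark` names are hypothetical, and the exact base-class signatures should be checked against `benches/base.py`):

```python
# Hypothetical new suite; only the name()/benchmarks() interface described
# above is assumed.
from benches.base import Suite


class MySuite(Suite):
    def name(self) -> str:
        return "MySuite"

    def benchmarks(self) -> list:
        # Instantiate and return this suite's Benchmark objects;
        # MyBenchmark stands in for a benchmark class created in step 1.
        return [MyBenchmark()]


# Step 3: register the suite by appending an instance to the existing
# `suites` list in main.py, e.g. suites = [..., MySuite()].
```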

## Recommendations

28 changes: 15 additions & 13 deletions devops/scripts/benchmarks/README.md
@@ -1,6 +1,6 @@
# Unified Runtime Benchmark Runner
# SYCL and Unified Runtime Benchmark Runner

Scripts for running performance tests on SYCL and Unified Runtime.
Scripts for running benchmarks on SYCL and Unified Runtime.

## Benchmarks

@@ -31,19 +31,21 @@ $ pip install -r requirements.txt
$ ./main.py ~/benchmarks_workdir/ --sycl ~/llvm/build/ --ur ~/ur_install --adapter adapter_name
```

This last command will **download and build** everything in `~/benchmarks_workdir/`
using the built compiler located in `~/llvm/build/`,
UR **install directory** from `~/ur`,
This last command will **download and build** everything in `~/benchmarks_workdir/`
using the built compiler located in `~/llvm/build/` and
installed Unified Runtime in directory `~/ur_install`,
and then **run** the benchmarks for `adapter_name` adapter.

>NOTE: By default `level_zero` adapter is used.

>NOTE: Pay attention to the `--ur` parameter. It points directly to the directory where UR is installed.
To install Unified Runtime in the predefined location, use the `-DCMAKE_INSTALL_PREFIX`.

UR build example:
UR build and install example:
```
$ cmake -DCMAKE_BUILD_TYPE=Release -S~/llvm/unified-runtime -B~/ur_build -DCMAKE_INSTALL_PREFIX=~/ur_install -DUR_BUILD_ADAPTER_L0=ON -DUR_BUILD_ADAPTER_L0_V2=ON
$ cmake --build ~/ur_build -j $(nproc)
$ cmake --install ~/ur_build
```

### Rebuild
@@ -95,11 +97,12 @@ In addition to the above parameters, there are also additional options that help
`--preset <option>` - limits the types of benchmarks that are run.

The available benchmarks options are:
* `Full` (Compute, Gromacs, llama, SYCL, Velocity and UMF benchmarks)
* `Full` (BenchDNN, Compute, Gromacs, llama, SYCL, Velocity and UMF benchmarks)
* `SYCL` (Compute, llama, SYCL, Velocity)
* `Minimal` (Compute)
* `Normal` (Compute, Gromacs, llama, Velocity)
* `Normal` (BenchDNN, Compute, Gromacs, llama, Velocity)
* `Gromacs` (Gromacs)
* `OneDNN` (BenchDNN)
* `Test` (Test Suite)
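
For example, a run limited to the `Minimal` preset could look like this (paths as in the setup examples above):
```
$ ./main.py ~/benchmarks_workdir/ --sycl ~/llvm/build/ --preset Minimal
```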

`--filter <regex>` - allows to set the regex pattern to filter benchmarks by name.
@@ -108,18 +111,17 @@ For example `--filter "graph_api_*"`

## Running in CI

The benchmarks scripts are used in a GitHub Actions worflow, and can be automatically executed on a preconfigured system against any Pull Request.
The benchmarks scripts are used in a GitHub Actions workflow, and can be automatically executed on a preconfigured system against any Pull Request.

![compute benchmarks](workflow.png "Compute Benchmarks CI job")

To execute the benchmarks in CI, navigate to the `Actions` tab and then go to the `Compute Benchmarks` action. Here, you will find a list of previous runs and a "Run workflow" button. Upon clicking the button, you will be prompted to fill in a form to customize your benchmark run. The only mandatory field is the `PR number`, which is the identifier for the Pull Request against which you want the benchmarks to run.
To execute the benchmarks in CI, navigate to the `Actions` tab and then go to the `Run Benchmarks` workflow. Here, you will find a list of previous runs and a "Run workflow" button. Upon clicking the button, you will be prompted to fill in a form to customize your benchmark run. Important field is the `PR number`, which is the identifier for the Pull Request against which you want the benchmarks to run. Instead, you can specify `Commit hash` from within intel/llvm repository, or leave both empty to run benchmarks against the branch/tag the workflow started from (the value from dropdown list at the top).

You can also include additional benchmark parameters, such as environment variables or filters. For a complete list of options, refer to `$ ./main.py --help`.

Once all the required information is entered, click the "Run workflow" button to initiate a new workflow run. This will execute the benchmarks and then post the results as a comment on the specified Pull Request.
Once all the information is entered, click the "Run workflow" button to initiate a new workflow run. This will execute the benchmarks and then post the results as a comment on the specified Pull Request.
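
The same dispatch can also be triggered from the GitHub CLI, assuming `sycl-ur-perf-benchmarking.yml` is the workflow behind the `Run Benchmarks` entry; the values below are placeholders, and other inputs (e.g. `runner`) may also need to be set:
```
$ gh workflow run sycl-ur-perf-benchmarking.yml --repo intel/llvm -f pr_no=<PR number> -f upload_results=true
```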

>NOTE: You must be a member of the `oneapi-src` organization to access these features.

## Requirements
### System

Sobel Filter benchmark:
2 changes: 1 addition & 1 deletion devops/scripts/benchmarks/benches/compute.py
@@ -414,7 +414,7 @@ def run(
ret = []
for label, median, stddev, unit in parsed_results:
extra_label = " CPU count" if parse_unit_type(unit) == "instr" else ""
# Note: SYCL CI currently parses for on this "CPU count" value.
# Note: SYCL CI currently relies on this "CPU count" value.
# Please update /devops/scripts/benchmarks/compare.py if this value
# is changed. See compare.py usage (w.r.t. --regression-filter) in
# /devops/actions/run-tests/benchmarks/action.yml.
2 changes: 1 addition & 1 deletion devops/scripts/benchmarks/compare.py
@@ -321,7 +321,7 @@ def to_hist(
"--compare-file",
type=str,
required=True,
help="Result file to compare against te historic average",
help="Result file to compare against the historic average",
)
parser_avg.add_argument(
"--results-dir", type=str, required=True, help="Directory storing results"
2 changes: 1 addition & 1 deletion devops/scripts/benchmarks/main.py
@@ -507,7 +507,7 @@ def validate_and_parse_env_args(env_args):
parser.add_argument(
"--compare-max",
type=int,
help="How many results to read for comparisions",
help="How many results to read for comparisons",
default=options.compare_max,
)
parser.add_argument(
5 changes: 5 additions & 0 deletions devops/scripts/benchmarks/options.py
@@ -1,3 +1,8 @@
# Copyright (C) 2025 Intel Corporation
# Part of the Unified-Runtime Project, under the Apache License v2.0 with LLVM Exceptions.
# See LICENSE.TXT
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

from dataclasses import dataclass, field
from enum import Enum
import multiprocessing