63 commits
e6ca992
Move UR devops scripts to devops folder
ianayl Feb 27, 2025
3d42db2
Restrict number of cores used
ianayl Feb 28, 2025
fc70520
Merge branch 'sycl' of https://github.com/intel/llvm into unify-bench…
ianayl Mar 4, 2025
4f08dd6
Restore ur-benchmark*.yml
ianayl Mar 4, 2025
497dcce
[benchmarks] improve HTML and Markdown output
pbalcer Mar 5, 2025
3cbed5e
Test UR benchmarking suite
ianayl Mar 5, 2025
1936207
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 5, 2025
f79bbbf
Bump tolerance to 7%
ianayl Mar 5, 2025
ffc8139
Revert "Bump tolerance to 7%"
ianayl Mar 5, 2025
0a34e0d
[benchmarks] fix failing benchmarks, improve html output
pbalcer Mar 6, 2025
3f42420
[benchmarks] fix python formatting with black
pbalcer Mar 6, 2025
1c7b189
update driver version
pbalcer Mar 6, 2025
ad13e93
simplify preset implementation and fix normal preset
pbalcer Mar 6, 2025
68ed0c4
Add PVC and BMG as runners
ianayl Mar 6, 2025
18fff93
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 6, 2025
3a65b98
Install dependencies before running UR script
ianayl Mar 6, 2025
220121a
Use venv for python packages
ianayl Mar 6, 2025
37d361c
Install venv before using venv
ianayl Mar 6, 2025
07f1e10
[benchmarks] allow specifying custom results directories
pbalcer Mar 7, 2025
64cf79c
[benchmarks] sort runs by date for html output
pbalcer Mar 7, 2025
6c28d33
simplify presets, remove suites if all set
pbalcer Mar 10, 2025
e15b94f
[benchmarks] use python venv for scripts
pbalcer Mar 10, 2025
78fd037
Run apt with sudo
ianayl Mar 10, 2025
0ed1599
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 10, 2025
82b6e55
Ignore "missing" apt packages in workflow
ianayl Mar 10, 2025
162cba0
Change pip to install to user
ianayl Mar 10, 2025
848f741
Ignore system controlled python env
ianayl Mar 10, 2025
918604e
[CI] use realpaths when referring to SYCL
ianayl Mar 10, 2025
72d8730
[CI] use minimal preset when running benchmarks
ianayl Mar 10, 2025
066f5a6
[CI] Allow 2 bench scripts locations (#17394)
lukaszstolarczuk Mar 12, 2025
18e5291
add ulls compute benchmarks
pbalcer Mar 12, 2025
237750e
[CI][Benchmark] Decouple results from existing file structure, fetch …
ianayl Mar 11, 2025
ba1297f
[benchmark] Disabling UR test suites
ianayl Mar 12, 2025
cd6097f
update compute benchmarks and fix requirements
pbalcer Mar 13, 2025
c4e92c6
fix url updates
pbalcer Mar 13, 2025
ed8eecc
use timestamps in result file names
pbalcer Mar 13, 2025
130212d
add hostname to benchmark run
pbalcer Mar 13, 2025
a884df8
Merge branch 'sycl' of https://github.com/intel/llvm into unify-bench…
ianayl Mar 13, 2025
5323386
add SubmitGraph benchmark
pbalcer Mar 13, 2025
5bd1d56
Restore sycl-linux-run-tests benchmarking action
ianayl Mar 13, 2025
e9b1375
Restore old SYCL benchmarking CI
ianayl Mar 13, 2025
a3edf7a
Add benchmarking results to sycl-docs.yml
ianayl Mar 13, 2025
6620e4a
[CI] Bump compute bench (#17431)
lukaszstolarczuk Mar 13, 2025
f4a2e39
Initial implementation of unified benchmark workflow
ianayl Mar 13, 2025
5d3b0d9
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 13, 2025
38394bb
[CI] Use commit hash instead, fix issues with run
ianayl Mar 13, 2025
f232b93
add benchmark metadata
pbalcer Mar 14, 2025
30cd308
apply formatting
pbalcer Mar 14, 2025
5e0539a
fix multiple descriptions/notes
pbalcer Mar 14, 2025
137407a
fix benchmark descriptions
pbalcer Mar 14, 2025
e0f5ca6
fix remote html output
pbalcer Mar 14, 2025
1041db6
fix metadata collection with dry run
pbalcer Mar 14, 2025
fae04f4
cleanup compute bench, fix readme, use newer sycl-bench
pbalcer Mar 14, 2025
b698e9e
Disable ur-benchmarks-* again
ianayl Mar 14, 2025
21a0599
[Test] temporarily hijack sycl-benchmark-aggregate to test benchmark.yml
ianayl Mar 14, 2025
cfa4a9c
[CI] configure upload results
ianayl Mar 14, 2025
12c67cc
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 14, 2025
c4c4a16
Change config to update during workflow run instead
ianayl Mar 14, 2025
46aaf82
Change save name depending on build
ianayl Mar 14, 2025
6b97436
bump to 2024-2025
ianayl Mar 14, 2025
c58407b
Enforce commit hash to be string regardless
ianayl Mar 14, 2025
a3f2b4d
Update benchmark script readme to reflect current state
ianayl Mar 14, 2025
e3e7ec5
Revert "[Test] temporarily hijack sycl-benchmark-aggregate to test be…
ianayl Mar 14, 2025
129 changes: 129 additions & 0 deletions .github/workflows/benchmark.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,129 @@
name: Run Benchmarks

on:
schedule:
- cron: '0 1 * * *' # 2 hrs earlier than sycl-nightly.yml
workflow_call:
inputs:
commit_hash:
type: string
required: false
default: ''
upload_results:
type: string # true/false: workflow_dispatch does not support booleans
required: true
runner:
type: string
required: true
backend:
type: string
required: true
reset_intel_gpu:
type: string # true/false: workflow_dispatch does not support booleans
required: true
default: true

workflow_dispatch:
inputs:
commit_hash:
description: Commit hash to build intel/llvm from
type: string
required: false
default: ''
upload_results:
description: 'Save and upload results'
type: choice
options:
- false
- true
default: true
runner:
type: choice
options:
- '["PVC_PERF"]'
backend:
description: Backend to use
type: choice
options:
- 'level_zero:gpu'
# TODO L0 V2 support
reset_intel_gpu:
description: Reset Intel GPUs
type: choice
options:
- false
- true
default: true

permissions: read-all

jobs:
build_sycl:
name: Build SYCL from PR
if: inputs.commit_hash != ''
Contributor
@lukaszstolarczuk lukaszstolarczuk Mar 14, 2025

technically, passing an empty commit_hash is allowed (required == false), so maybe it's not the best way of checking whether this is a PR run or a nightly one...?

// FYI, the trigger type can be used to establish whether or not a run is nightly

Contributor Author
@ianayl ianayl Mar 14, 2025

Good catch! I think I resolved this via the latest commit: I've set default values on commit_hash to always be a string.

// The trigger type can be used to determine if a job was scheduled, but I've chosen to use commit_hash here in case the user themselves wants to use the nightly build of SYCL manually

Contributor

hmm, tbh I'm not sure if setting default: '' will solve the != '' check, but if this turns out to be an issue it can be updated later on

Contributor

maybe I'm missing something, but can we easily read the commit hash from a PR?

I know we used pr_no and a tricky fetch to get the proper commit from a PR (https://github.com/intel/llvm/blob/unify-benchmark-ci/.github/workflows/ur-benchmarks-reusable.yml#L79-L81)

Contributor Author
@ianayl ianayl Mar 14, 2025

The sycl-linux-build.yml job in intel/llvm doesn't support compiling commits from other branches, so we cannot access commits from PR branches as of now. I have chosen not to stir the nest here, but this will probably be a future change that involves modifying sycl-linux-build.yml.

Contributor

hmm... so what should I pass here to check my PR? or is it intended as a post-merge check?

Contributor Author

Admittedly I don't think that'd be possible right now 😅

We will figure something out though; given that this PR is no longer urgent, I'll make a PR to change sycl-linux-build.yml so that we can actually accomplish this.
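A hypothetical sketch of the alternative raised in this thread — gating the nightly job on the trigger type rather than on commit_hash (this is not part of the PR, just an illustration):

```yaml
# Hypothetical alternative: use the trigger type rather than commit_hash.
# github.event_name is 'schedule' for cron-triggered (nightly) runs.
run_benchmarks_nightly:
  name: Run Benchmarks (on Nightly Build)
  if: github.event_name == 'schedule'
```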

uses: ./.github/workflows/sycl-linux-build.yml
with:
build_ref: ${{ inputs.commit_hash }}
build_cache_root: "/__w/"
build_artifact_suffix: "default"
build_cache_suffix: "default"
# Docker image has last nightly pre-installed and added to the PATH
build_image: "ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest"
cc: clang
cxx: clang++
changes: '[]'

run_benchmarks_build:
name: Run Benchmarks (on PR Build)
needs: [ build_sycl ]
if: inputs.commit_hash != ''
strategy:
matrix:
# Set default values if not specified:
include:
- runner: ${{ inputs.runner || '["PVC_PERF"]' }}
backend: ${{ inputs.backend || 'level_zero:gpu' }}
reset_intel_gpu: ${{ inputs.reset_intel_gpu || 'true' }}
ref: ${{ inputs.commit_hash }}
uses: ./.github/workflows/sycl-linux-run-tests.yml
secrets: inherit
with:
# TODO support other benchmarks
name: Run compute-benchmarks (${{ matrix.runner }}, ${{ matrix.backend }})
runner: ${{ matrix.runner }}
image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
target_devices: ${{ matrix.backend }}
reset_intel_gpu: ${{ matrix.reset_intel_gpu }}
tests_selector: benchmark_v2
benchmark_upload_results: ${{ inputs.upload_results }}
benchmark_build_hash: ${{ inputs.commit_hash }}
repo_ref: ${{ matrix.ref }}
devops_ref: ${{ github.ref }}
sycl_toolchain_artifact: sycl_linux_default
sycl_toolchain_archive: ${{ needs.build_sycl.outputs.artifact_archive_name }}
sycl_toolchain_decompress_command: ${{ needs.build_sycl.outputs.artifact_decompress_command }}

run_benchmarks_nightly:
name: Run Benchmarks (on Nightly Build)
if: inputs.commit_hash == ''
strategy:
matrix:
# Set default values if not specified:
include:
- runner: ${{ inputs.runner || '["PVC_PERF"]' }}
backend: ${{ inputs.backend || 'level_zero:gpu' }}
reset_intel_gpu: ${{ inputs.reset_intel_gpu || 'true' }}
uses: ./.github/workflows/sycl-linux-run-tests.yml
secrets: inherit
with:
# TODO support other benchmarks
name: Run compute-benchmarks (${{ matrix.runner }}, ${{ matrix.backend }})
runner: ${{ matrix.runner }}
image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
target_devices: ${{ matrix.backend }}
reset_intel_gpu: ${{ matrix.reset_intel_gpu }}
tests_selector: benchmark_v2
benchmark_upload_results: ${{ inputs.upload_results }}
repo_ref: ${{ github.ref }}
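The workflow_dispatch trigger defined above can also be driven from the command line; a sketch using the GitHub CLI, where the repository and the input values are assumptions mirroring the workflow's dispatch choices:

```shell
# Manually dispatch the benchmark workflow with explicit inputs
# (values mirror the workflow_dispatch choices defined above).
gh workflow run benchmark.yml \
  --repo intel/llvm \
  -f upload_results=false \
  -f runner='["PVC_PERF"]' \
  -f backend='level_zero:gpu' \
  -f reset_intel_gpu=true
```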
6 changes: 6 additions & 0 deletions .github/workflows/sycl-docs.yml
@@ -49,7 +49,13 @@ jobs:
mkdir clang
mv $GITHUB_WORKSPACE/build/tools/sycl/doc/html/* .
mv $GITHUB_WORKSPACE/build/tools/clang/docs/html/* clang/
cp -r $GITHUB_WORKSPACE/repo/devops/scripts/benchmarks/html benchmarks
touch .nojekyll
# Update benchmarking dashboard configuration
cat << 'EOF' > benchmarks/config.js
remoteDataUrl = 'https://raw.githubusercontent.com/intel/llvm-ci-perf-results/refs/heads/unify-ci/UR_DNP_INTEL_06_03/data.json';
defaultCompareNames = ["Baseline_PVC_L0"];
EOF
# Upload the generated docs as an artifact and deploy to GitHub Pages.
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
23 changes: 23 additions & 0 deletions .github/workflows/sycl-linux-run-tests.yml
@@ -114,6 +114,15 @@ on:
default: ''
required: False

benchmark_upload_results:
type: string
default: 'false'
required: False
benchmark_build_hash:
type: string
default: ''
required: False

workflow_dispatch:
inputs:
runner:
@@ -126,6 +135,7 @@ on:
- '["cts-cpu"]'
- '["Linux", "build"]'
- '["cuda"]'
- '["Linux", "bmg"]'
- '["PVC_PERF"]'
image:
type: choice
@@ -154,6 +164,7 @@ on:
- e2e
- cts
- compute-benchmarks
- benchmark_v2

env:
description: |
@@ -329,3 +340,15 @@ jobs:
env:
RUNNER_TAG: ${{ inputs.runner }}
GITHUB_TOKEN: ${{ secrets.LLVM_SYCL_BENCHMARK_TOKEN }}

- name: Run benchmarks
if: inputs.tests_selector == 'benchmark_v2'
uses: ./devops/actions/run-tests/benchmark_v2
with:
target_devices: ${{ inputs.target_devices }}
upload_results: ${{ inputs.benchmark_upload_results }}
build_hash: ${{ inputs.benchmark_build_hash }}
env:
RUNNER_TAG: ${{ inputs.runner }}
GITHUB_TOKEN: ${{ secrets.LLVM_SYCL_BENCHMARK_TOKEN }}

2 changes: 1 addition & 1 deletion .github/workflows/ur-build-hw.yml
@@ -151,4 +151,4 @@ jobs:

- name: Get information about platform
if: ${{ always() }}
run: ${{github.workspace}}/unified-runtime/.github/scripts/get_system_info.sh
run: ${{github.workspace}}/devops/scripts/get_system_info.sh
1 change: 0 additions & 1 deletion devops/actions/run-tests/benchmark/action.yml
@@ -95,7 +95,6 @@ runs:
if: always()
shell: bash
run: |
# TODO -- waiting on security clearance
# Load configuration values
$(python ./devops/scripts/benchmarking/load_config.py ./devops constants)

135 changes: 135 additions & 0 deletions devops/actions/run-tests/benchmark_v2/action.yml
@@ -0,0 +1,135 @@
name: 'Run benchmarks'

# This action assumes the following prerequisites:
#
# - SYCL is placed in ./toolchain -- TODO change this
# - /devops has been checked out in ./devops.
# - env.GITHUB_TOKEN was properly set; according to GitHub, that is the
#   recommended way to pass a secret into a GitHub action:

# https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#accessing-your-secrets
#
# - env.RUNNER_TAG set to the runner tag used to run this workflow: Currently,
# only specific runners are fully supported.

inputs:
target_devices:
type: string
required: True
upload_results:
type: string
required: True
build_hash:
type: string
required: False
default: ''

runs:
using: "composite"
steps:
- name: Check specified runner type / target backend
shell: bash
env:
TARGET_DEVICE: ${{ inputs.target_devices }}
RUNNER_NAME: ${{ runner.name }}
run: |
case "$RUNNER_TAG" in
'["PVC_PERF"]' ) ;;
*)
echo "#"
echo "# WARNING: Only specific tuned runners are fully supported."
echo "# This workflow is not guaranteed to work with other runners."
echo "#" ;;
esac
# Ensure runner name has nothing injected
# TODO: in terms of security, is this overkill?
if [ -z "$(printf '%s' "$RUNNER_NAME" | grep -oE '^[a-zA-Z0-9_-]+$')" ]; then
echo "Bad runner name, please ensure runner name is [a-zA-Z0-9_-]."
exit 1
fi
echo "RUNNER_NAME=$RUNNER_NAME" >> $GITHUB_ENV
# inputs.target_devices is not used directly, as interpolating it here would allow code injection
case "$TARGET_DEVICE" in
level_zero:*) ;;
*)
echo "#"
echo "# WARNING: Only level_zero backend is fully supported."
echo "# This workflow is not guaranteed to work with other backends."
echo "#" ;;
esac
echo "ONEAPI_DEVICE_SELECTOR=$TARGET_DEVICE" >> $GITHUB_ENV
- name: Compute CPU core range to run benchmarks on
shell: bash
run: |
# Compute the core range for the first NUMA node; second node is used by
# UMF. Skip the first 4 cores as the kernel is likely to schedule more
# work on these.
CORES="$(lscpu | awk '
/NUMA node0 CPU|On-line CPU/ {line=$0}
END {
split(line, a, " ")
split(a[4], b, ",")
sub(/^0/, "4", b[1])
print b[1]
}')"
echo "CPU core range to use: $CORES"
echo "CORES=$CORES" >> $GITHUB_ENV
ZE_AFFINITY_MASK=0
echo "ZE_AFFINITY_MASK=$ZE_AFFINITY_MASK" >> $GITHUB_ENV
- name: Checkout results repo
shell: bash
run: |
git clone -b unify-ci https://github.com/intel/llvm-ci-perf-results
- name: Run compute-benchmarks
env:
BUILD_HASH: ${{ inputs.build_hash }}
shell: bash
run: |
# TODO generate summary + display helpful message here
export CMPLR_ROOT=./toolchain
echo "-----"
sycl-ls
echo "-----"
pip install --user --break-system-packages -r ./devops/scripts/benchmarks/requirements.txt
echo "-----"
mkdir -p "./llvm-ci-perf-results/$RUNNER_NAME"
# TODO accommodate different GPUs and backends
SAVE_NAME="Baseline_PVC_L0"
if [ -n "$BUILD_HASH" ]; then
SAVE_NAME="Commit_PVC_$BUILD_HASH"
fi
taskset -c "$CORES" ./devops/scripts/benchmarks/main.py \
"$(realpath ./llvm_test_workdir)" \
--sycl "$(realpath ./toolchain)" \
--save "$SAVE_NAME" \
--output-html remote \
--results-dir "./llvm-ci-perf-results/$RUNNER_NAME" \
--output-dir "./llvm-ci-perf-results/$RUNNER_NAME" \
--preset Minimal
echo "-----"
- name: Push compute-benchmarks results
if: inputs.upload_results == 'true' && always()
shell: bash
run: |
# TODO redo configuration
# $(python ./devops/scripts/benchmarking/load_config.py ./devops constants)
cd "./llvm-ci-perf-results"
git config user.name "SYCL Benchmarking Bot"
git config user.email "[email protected]"
git pull
git add .
# Only commit and push if there are actual changes
if git diff --quiet && git diff --cached --quiet; then
echo "No new results added, skipping push."
else
git commit -m "[GHA] Upload compute-benchmarks results from https://github.com/intel/llvm/actions/runs/${{ github.run_id }}"
git push "https://[email protected]/intel/llvm-ci-perf-results.git" unify-ci
fi
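The core-range computation in the action above can be exercised in isolation. Given a sample lscpu-style line (the sample CPU ranges are an assumption), the awk program keeps the first NUMA-node range and rewrites its leading 0 to 4, skipping the first four cores:

```shell
# Feed a sample lscpu-style line through the same awk program as above.
printf 'NUMA node0 CPU(s): 0-55,112-167\n' | awk '
/NUMA node0 CPU|On-line CPU/ {line=$0}
END {
  split(line, a, " ")
  split(a[4], b, ",")
  sub(/^0/, "4", b[1])
  print b[1]
}'
# prints: 4-55
```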
@@ -6,6 +6,8 @@ Scripts for running performance tests on SYCL and Unified Runtime.

- [Velocity Bench](https://github.com/oneapi-src/Velocity-Bench)
- [Compute Benchmarks](https://github.com/intel/compute-benchmarks/)
- [LlamaCpp Benchmarks](https://github.com/ggerganov/llama.cpp)
- [SYCL-Bench](https://github.com/unisa-hpc/sycl-bench)

## Running

@@ -27,8 +29,6 @@ You can also include additional benchmark parameters, such as environment variables.

Once all the required information is entered, click the "Run workflow" button to initiate a new workflow run. This will execute the benchmarks and then post the results as a comment on the specified Pull Request.

By default, all benchmark runs are compared against `baseline`, which is a well-established set of the latest data.

You must be a member of the `oneapi-src` organization to access these features.

## Comparing results
@@ -37,8 +37,8 @@ By default, the benchmark results are not stored. To store them, use the option

You can compare benchmark results using the `--compare` option. The comparison is presented in a markdown output file (see below). To calculate the relative performance of new results against previously saved data, use `--compare <previously_saved_data>` (e.g. `--compare baseline`). To compare only stored data without generating new results, use `--dry-run --compare <name1> --compare <name2> --relative-perf <name1>`, where `name1` indicates the baseline for the relative performance calculation and `--dry-run` prevents the script from running benchmarks. Listing more than two `--compare` options results in displaying only execution times, without statistical analysis.
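As a concrete sketch of the comparison flow described above (the work directory, toolchain path, and save names are illustrative assumptions):

```shell
# Run benchmarks once and save the results under a name.
./devops/scripts/benchmarks/main.py ~/benchmark_workdir \
  --sycl ~/llvm/build \
  --save my_branch

# Later: compare stored result sets without re-running benchmarks.
./devops/scripts/benchmarks/main.py ~/benchmark_workdir \
  --dry-run \
  --compare baseline \
  --compare my_branch \
  --relative-perf baseline \
  --output-markdown
```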

Baseline, as well as baseline-v2 (for the level-zero adapter v2) is updated automatically during a nightly job. The results
are stored [here](https://oneapi-src.github.io/unified-runtime/benchmark_results.html).
Baseline_PVC_L0 is updated automatically during a nightly job. The results
are stored [here](https://intel.github.io/llvm/benchmarks/).

## Output formats
You can display the results as an HTML file by using `--output-html` and as a markdown file by using `--output-markdown`. Due to character limits for posting PR comments, the final content of the markdown file might be reduced. To obtain the full markdown output, use `--output-markdown full`.