43 commits
e6ca992
Move UR devops scripts to devops folder
ianayl Feb 27, 2025
3d42db2
Restrict number of cores used
ianayl Feb 28, 2025
fc70520
Merge branch 'sycl' of https://github.com/intel/llvm into unify-bench…
ianayl Mar 4, 2025
4f08dd6
Restore ur-benchmark*.yml
ianayl Mar 4, 2025
497dcce
[benchmarks] improve HTML and Markdown output
pbalcer Mar 5, 2025
3cbed5e
Test UR benchmarking suite
ianayl Mar 5, 2025
1936207
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 5, 2025
f79bbbf
Bump tolerance to 7%
ianayl Mar 5, 2025
ffc8139
Revert "Bump tolerance to 7%"
ianayl Mar 5, 2025
0a34e0d
[benchmarks] fix failing benchmarks, improve html output
pbalcer Mar 6, 2025
3f42420
[benchmarks] fix python formatting with black
pbalcer Mar 6, 2025
1c7b189
update driver version
pbalcer Mar 6, 2025
ad13e93
simplify preset implementation and fix normal preset
pbalcer Mar 6, 2025
68ed0c4
Add PVC and BMG as runners
ianayl Mar 6, 2025
18fff93
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 6, 2025
3a65b98
Install dependencies before running UR script
ianayl Mar 6, 2025
220121a
Use venv for python packages
ianayl Mar 6, 2025
37d361c
Install venv before using venv
ianayl Mar 6, 2025
07f1e10
[benchmarks] allow specifying custom results directories
pbalcer Mar 7, 2025
64cf79c
[benchmarks] sort runs by date for html output
pbalcer Mar 7, 2025
6c28d33
simplify presets, remove suites if all set
pbalcer Mar 10, 2025
e15b94f
[benchmarks] use python venv for scripts
pbalcer Mar 10, 2025
78fd037
Run apt with sudo
ianayl Mar 10, 2025
0ed1599
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 10, 2025
82b6e55
Ignore "missing" apt packages in workflow
ianayl Mar 10, 2025
162cba0
Change pip to install to user
ianayl Mar 10, 2025
848f741
Ignore system controlled python env
ianayl Mar 10, 2025
918604e
[CI] use realpaths when referring to SYCL
ianayl Mar 10, 2025
72d8730
[CI] use minimal preset when running benchmarks
ianayl Mar 10, 2025
066f5a6
[CI] Allow 2 bench scripts locations (#17394)
lukaszstolarczuk Mar 12, 2025
18e5291
add ulls compute benchmarks
pbalcer Mar 12, 2025
237750e
[CI][Benchmark] Decouple results from existing file structure, fetch …
ianayl Mar 11, 2025
ba1297f
[benchmark] Disabling UR test suites
ianayl Mar 12, 2025
cd6097f
update compute benchmarks and fix requirements
pbalcer Mar 13, 2025
c4e92c6
fix url updates
pbalcer Mar 13, 2025
ed8eecc
use timestamps in result file names
pbalcer Mar 13, 2025
130212d
add hostname to benchmark run
pbalcer Mar 13, 2025
70c393f
[CI] Split UR benchmarks from original benchmarks
ianayl Mar 13, 2025
f49e3b1
Add benchmarking dashboard to sycl-docs workflow
ianayl Mar 13, 2025
a884df8
Merge branch 'sycl' of https://github.com/intel/llvm into unify-bench…
ianayl Mar 13, 2025
35f8a51
Merge branch 'unify-benchmark-ci' of https://github.com/intel/llvm in…
ianayl Mar 13, 2025
ef7960a
Revert "[benchmark] Disabling UR test suites"
ianayl Mar 13, 2025
b23a596
Revert "Restore ur-benchmark*.yml"
ianayl Mar 13, 2025
2 changes: 2 additions & 0 deletions .github/workflows/sycl-docs.yml
@@ -47,8 +47,10 @@ jobs:
mkdir $GITHUB_WORKSPACE/install_docs
cd $GITHUB_WORKSPACE/install_docs
mkdir clang
mkdir benchmarks
mv $GITHUB_WORKSPACE/build/tools/sycl/doc/html/* .
mv $GITHUB_WORKSPACE/build/tools/clang/docs/html/* clang/
cp -r $GITHUB_WORKSPACE/devops/scripts/benchmarks/html benchmarks/
touch .nojekyll
# Upload the generated docs as an artifact and deploy to GitHub Pages.
- name: Upload artifact
11 changes: 11 additions & 0 deletions .github/workflows/sycl-linux-run-tests.yml
@@ -126,6 +126,7 @@ on:
- '["cts-cpu"]'
- '["Linux", "build"]'
- '["cuda"]'
- '["Linux", "bmg"]'
- '["PVC_PERF"]'
image:
type: choice
@@ -154,6 +155,7 @@ on:
- e2e
- cts
- compute-benchmarks
- benchmark-v2

env:
description: |
@@ -329,3 +331,12 @@ jobs:
env:
RUNNER_TAG: ${{ inputs.runner }}
GITHUB_TOKEN: ${{ secrets.LLVM_SYCL_BENCHMARK_TOKEN }}

- name: Run Benchmarks
if: inputs.tests_selector == 'benchmark-v2'
uses: ./devops/actions/run-tests/benchmark_v2
with:
target_devices: ${{ inputs.target_devices }}
env:
RUNNER_TAG: ${{ inputs.runner }}
GITHUB_TOKEN: ${{ secrets.LLVM_SYCL_BENCHMARK_TOKEN }}
2 changes: 1 addition & 1 deletion .github/workflows/ur-build-hw.yml
@@ -151,4 +151,4 @@ jobs:

- name: Get information about platform
if: ${{ always() }}
run: ${{github.workspace}}/unified-runtime/.github/scripts/get_system_info.sh
run: ${{github.workspace}}/devops/scripts/get_system_info.sh
23 changes: 1 addition & 22 deletions devops/actions/run-tests/benchmark/action.yml
@@ -46,27 +46,6 @@ runs:
echo "# This workflow is not guaranteed to work with other backends."
echo "#" ;;
esac
- name: Compute CPU core range to run benchmarks on
shell: bash
run: |
# Taken from ur-benchmark-reusable.yml:

# Compute the core range for the first NUMA node; second node is used by
# UMF. Skip the first 4 cores as the kernel is likely to schedule more
# work on these.
CORES="$(lscpu | awk '
/NUMA node0 CPU|On-line CPU/ {line=$0}
END {
split(line, a, " ")
split(a[4], b, ",")
sub(/^0/, "4", b[1])
print b[1]
}')"
echo "CPU core range to use: $CORES"
echo "CORES=$CORES" >> $GITHUB_ENV

ZE_AFFINITY_MASK=0
echo "ZE_AFFINITY_MASK=$ZE_AFFINITY_MASK" >> $GITHUB_ENV
- name: Run compute-benchmarks
shell: bash
run: |
@@ -90,7 +69,7 @@ runs:
echo "-----"
sycl-ls
echo "-----"
taskset -c "$CORES" ./devops/scripts/benchmarking/benchmark.sh -n '${{ runner.name }}' -s || exit 1
./devops/scripts/benchmarking/benchmark.sh -n '${{ runner.name }}' -s || exit 1
- name: Push compute-benchmarks results
if: always()
shell: bash
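The hunk above removes the `lscpu`-based CPU core-range computation from this action (it reappears in the new `benchmark_v2` action below). Since the awk one-liner is dense, here is a standalone Python sketch (not part of the PR, function name `first_numa_core_range` is my own) that replicates its parsing and is handy for sanity-checking it against real `lscpu` output:

```python
import re

def first_numa_core_range(lscpu_output: str) -> str:
    """Replicate the awk script: keep the last line matching
    'NUMA node0 CPU' or 'On-line CPU', take its 4th whitespace-separated
    field, keep only the first comma-separated range, and rewrite a
    leading '0' to '4' so the first 4 cores are skipped."""
    line = ""
    for candidate in lscpu_output.splitlines():
        if "NUMA node0 CPU" in candidate or "On-line CPU" in candidate:
            line = candidate  # awk's END block sees the last match
    fields = line.split()
    first_range = fields[3].split(",")[0]   # e.g. "0-27" from "0-27,56-83"
    return re.sub(r"^0", "4", first_range)  # skip cores 0-3

print(first_numa_core_range("NUMA node0 CPU(s):   0-27,56-83"))  # 4-27
```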
134 changes: 134 additions & 0 deletions devops/actions/run-tests/benchmark_v2/action.yml
@@ -0,0 +1,134 @@
name: 'Run Benchmarks'

# This action assumes the following prerequisites:
#
# - SYCL is placed in ./toolchain -- TODO change this
# - /devops has been checked out in ./devops.
# - env.GITHUB_TOKEN was properly set, because according to Github, that's
# apparently the recommended way to pass a secret into a github action:

# https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#accessing-your-secrets
#
# - env.RUNNER_TAG set to the runner tag used to run this workflow: Currently,
# only specific runners are fully supported.

inputs:
target_devices:
type: string
required: True

runs:
using: "composite"
steps:
- name: Check specified runner type / target backend
shell: bash
env:
TARGET_DEVICE: ${{ inputs.target_devices }}
RUNNER_NAME: ${{ runner.name }}
run: |
case "$RUNNER_TAG" in
'["PVC_PERF"]' ) ;;
*)
echo "#"
echo "# WARNING: Only specific tuned runners are fully supported."
echo "# This workflow is not guaranteed to work with other runners."
echo "#" ;;
esac

# Ensure runner name has nothing injected
# TODO: in terms of security, is this overkill?
if [ -z "$(printf '%s' "$RUNNER_NAME" | grep -oE '^[a-zA-Z0-9_-]+$')" ]; then
echo "Bad runner name, please ensure runner name is [a-zA-Z0-9_-]."
exit 1
fi
echo "RUNNER_NAME=$RUNNER_NAME" >> $GITHUB_ENV

# input.target_devices is not directly used, as this allows code injection
case "$TARGET_DEVICE" in
level_zero:*) ;;
*)
echo "#"
echo "# WARNING: Only level_zero backend is fully supported."
echo "# This workflow is not guaranteed to work with other backends."
echo "#" ;;
esac
echo "ONEAPI_DEVICE_SELECTOR=$TARGET_DEVICE" >> $GITHUB_ENV

- name: Compute CPU core range to run benchmarks on
shell: bash
run: |
# Compute the core range for the first NUMA node; second node is used by
# UMF. Skip the first 4 cores as the kernel is likely to schedule more
# work on these.
CORES="$(lscpu | awk '
/NUMA node0 CPU|On-line CPU/ {line=$0}
END {
split(line, a, " ")
split(a[4], b, ",")
sub(/^0/, "4", b[1])
print b[1]
}')"
echo "CPU core range to use: $CORES"
echo "CORES=$CORES" >> $GITHUB_ENV

ZE_AFFINITY_MASK=0
echo "ZE_AFFINITY_MASK=$ZE_AFFINITY_MASK" >> $GITHUB_ENV
- name: Checkout results repo
shell: bash
run: |
git clone -b unify-ci https://github.com/intel/llvm-ci-perf-results
- name: Run compute-benchmarks
shell: bash
run: |
# TODO generate summary + display helpful message here
export CMPLR_ROOT=./toolchain
echo "-----"
sycl-ls
echo "-----"
pip install --user --break-system-packages -r ./devops/scripts/benchmarks/requirements.txt
echo "-----"
mkdir -p "./llvm-ci-perf-results/$RUNNER_NAME"
taskset -c "$CORES" ./devops/scripts/benchmarks/main.py \
"$(realpath ./llvm_test_workdir)" \
--sycl "$(realpath ./toolchain)" \
--save baseline \
--output-html remote \
--results-dir "./llvm-ci-perf-results/$RUNNER_NAME" \
--output-dir "./llvm-ci-perf-results/$RUNNER_NAME" \
--preset Minimal
echo "-----"
ls
- name: Push compute-benchmarks results
if: always()
shell: bash
run: |
# TODO redo configuration
# $(python ./devops/scripts/benchmarking/load_config.py ./devops constants)

cd "./llvm-ci-perf-results"
git config user.name "SYCL Benchmarking Bot"
git config user.email "[email protected]"
git pull
git add .
# Make sure changes have been made
if git diff --quiet && git diff --cached --quiet; then
echo "No new results added, skipping push."
else
git commit -m "[GHA] Upload compute-benchmarks results from https://github.com/intel/llvm/actions/runs/${{ github.run_id }}"
git push "https://[email protected]/intel/llvm-ci-perf-results.git" unify-ci
fi
# - name: Find benchmark result artifact here
# if: always()
# shell: bash
# run: |
# cat << EOF
# #
# # Artifact link for benchmark results here:
# #
# EOF
# - name: Archive compute-benchmark results
# if: always()
# uses: actions/upload-artifact@v4
# with:
# name: Compute-benchmark run ${{ github.run_id }} (${{ runner.name }})
# path: ./artifact
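The "Check specified runner type" step above validates `$RUNNER_NAME` with `grep -oE '^[a-zA-Z0-9_-]+$'` before exporting it, since the name is later interpolated into result-directory paths. A minimal Python sketch of the same validation rule (the function name `is_safe_runner_name` is mine, not from the PR):

```python
import re

def is_safe_runner_name(name: str) -> bool:
    """Mirror the action's grep check: accept only names made entirely
    of alphanumerics, underscores, and hyphens, so nothing can be
    injected into paths or shell commands built from the name."""
    return re.fullmatch(r"[A-Za-z0-9_-]+", name) is not None

print(is_safe_runner_name("PVC_PERF-01"))    # True
print(is_safe_runner_name("bad; rm -rf /"))  # False
```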
@@ -6,7 +6,7 @@
import os
import shutil
from pathlib import Path
from .result import Result
from utils.result import Result
from options import options
from utils.utils import download, run
import urllib.request
@@ -55,16 +55,25 @@ def create_data_path(self, name, skip_data_dir=False):
data_path = os.path.join(self.directory, name)
else:
data_path = os.path.join(self.directory, "data", name)
if options.rebuild and Path(data_path).exists():
if options.redownload and Path(data_path).exists():
shutil.rmtree(data_path)

Path(data_path).mkdir(parents=True, exist_ok=True)

return data_path

def download(self, name, url, file, untar=False, unzip=False, skip_data_dir=False):
def download(
self,
name,
url,
file,
untar=False,
unzip=False,
skip_data_dir=False,
checksum="",
):
self.data_path = self.create_data_path(name, skip_data_dir)
return download(self.data_path, url, file, untar, unzip)
return download(self.data_path, url, file, untar, unzip, checksum)

def name(self):
raise NotImplementedError()
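The hunk above threads a new `checksum=""` argument from `Benchmark.download` into `utils.download`, whose body is not shown in this diff. As an illustration only, here is a hypothetical sketch of what a checksum-aware download helper could look like, assuming SHA-256 and omitting the untar/unzip handling the real helper has:

```python
import hashlib
import os
import urllib.request

def download(data_path, url, file, checksum=""):
    """Hypothetical sketch: fetch `url` into `data_path/file` if not
    already present, then verify an optional SHA-256 checksum.
    The real utils.download also supports untar/unzip."""
    dest = os.path.join(data_path, file)
    if not os.path.isfile(dest):
        urllib.request.urlretrieve(url, dest)
    if checksum:
        digest = hashlib.sha256()
        with open(dest, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        if digest.hexdigest() != checksum:
            raise ValueError(f"checksum mismatch for {dest}")
    return dest
```

Verifying after the existence check means a previously cached file is still validated, which pairs naturally with the `redownload` option introduced earlier in the hunk.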