Merged
Changes from 5 commits
75 changes: 75 additions & 0 deletions .github/workflows/longitudinal-benchmark.yml
@@ -0,0 +1,75 @@
# Longitudinal Benchmarking
#
# This workflow will run the benchmarks defined in the environment variable BENCHMARKS.
# It will collect and aggregate the benchmark output, format it and feed it to github-action-benchmark.
#
# The benchmark charts are live at https://input-output-hk.github.io/plutus/dev/bench
# The benchmark data is available at https://input-output-hk.github.io/plutus/dev/bench/data.js
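#
# The aggregation step writes output.json in the format expected by the
# "customSmallerIsBetter" tool: a JSON array of entries such as
#   [{ "name": "validation-someBenchmark", "unit": "ms", "value": 1.234 }]
# (entry shown for illustration only).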

name: Longitudinal Benchmarking

on:
  push:
    branches:
      - master

permissions:
  # Deployments permission to deploy GitHub pages website
  deployments: write
  # Contents permission to update benchmark contents in gh-pages branch
  contents: write

jobs:
  new-benchmark:
    name: Performance regression check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/[email protected]

      - name: Run benchmarks
        env:
          BENCHMARKS: "validation validation-decode"
        run: |
          for bench in $BENCHMARKS; do
            2>&1 cabal run "$bench" | tee "$bench-output.txt"
          done
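          # Each "$bench-output.txt" now holds criterion's human-readable report.
          # The aggregation script below keys on its "benchmarking <name>" and
          # "mean <value> <unit> ..." lines (line shapes shown for illustration;
          # exact spacing varies).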

          read -r -d '' PYTHON_SCRIPT <<- END_SCRIPT
Review comment (Contributor): Can we put this in its own file? Makes it a lot easier to test...

Reply (Author): Sure
          import json
          result = []
          for benchmark in "$BENCHMARKS".split():
              with open(f"{benchmark}-output.txt", "r") as file:
Review comment (Contributor): You decided not to use criterion's JSON output? Seems like it might have been a bit easier, but not a big deal I guess.

Reply (Author): I tried that but it turned out to be more complicated to use: the JSON output is somewhat "lower level" (no unit) and it doesn't actually include the same figures that you get in the standard output. 🤷
name = ""
for line in file.readlines():
if line.startswith("benchmarking"):
name = line.split()[1]
elif line.startswith("mean"):
parts = line.split()
mean = parts[1]
unit = parts[2]
result.append({
"name": f"{benchmark}-{name}",
"unit": unit,
"value": float(mean)
})
with open("output.json", "w") as file:
json.dump(result, file)
END_SCRIPT

      - name: Store benchmark result
        uses: benchmark-action/[email protected]
        with:
          name: My Project Go Benchmark
          tool: 'customSmallerIsBetter'
          output-file-path: output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Push and deploy GitHub pages branch automatically
          auto-push: true
          # Enable alert commit comment
          comment-on-alert: true
          # Mention @input-output-hk/plutus-core in the commit comment
          alert-comment-cc-users: '@input-output-hk/plutus-core'
          # Percentage value like "110%".
          # It is the ratio of the current benchmark result to the previous one.
          # For example, if we now get 110 ns/iter and previously got 100 ns/iter,
          # the ratio is 110%, which exceeds the 105% threshold and raises an alert.
          alert-threshold: '105%'
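
Following up on the reviewer's suggestion to move the script into its own file, below is a minimal sketch of what a standalone, testable version of the aggregation script could look like. The file name, the command-line handling, and the docstring are illustrative assumptions; this PR keeps the script inline in the workflow.

# aggregate_benchmark_output.py -- hypothetical standalone version of the
# inline script above, so the parsing logic can be unit-tested on its own.
import json
import sys


def parse_criterion_output(benchmark: str, text: str) -> list:
    """Extract {name, unit, value} entries from criterion's plain-text report."""
    result = []
    name = ""
    for line in text.splitlines():
        if line.startswith("benchmarking"):
            name = line.split()[1]
        elif line.startswith("mean"):
            parts = line.split()
            result.append({
                "name": f"{benchmark}-{name}",
                "unit": parts[2],
                "value": float(parts[1]),
            })
    return result


if __name__ == "__main__":
    # Usage: python3 aggregate_benchmark_output.py validation validation-decode
    entries = []
    for benchmark in sys.argv[1:]:
        with open(f"{benchmark}-output.txt") as file:
            entries.extend(parse_criterion_output(benchmark, file.read()))
    with open("output.json", "w") as out:
        json.dump(entries, out)

A unit test could then feed parse_criterion_output a short excerpt of criterion output (a "benchmarking ..." line followed by a "mean ..." line) and assert on the returned entries, without running the whole workflow.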