
Setup Continuous Performance #1614

Merged
alganet merged 1 commit into Respect:main from alganet:ci-perf
Jan 21, 2026

Conversation

@alganet
Member

@alganet alganet commented Jan 20, 2026

A new workflow, `continuous-integration-perf.yml`, was introduced. It:

  • Checks out the `benchmarks` branch locally.
  • Runs the benchmarks, accounting for non-existent baselines and the target (main/PR).
  • Stores the `.phpbench` storage folder and a human-readable report in the `benchmarks` branch.
  • Does not make a PR fail, and never reports a failure when merging to main.
  • Allows `workflow_dispatch` for quick re-runs, with an option to reset the baseline in case something changes (the GitHub runner setup gets faster/slower, major refactors, etc.).

Thus, it keeps a historical record of all benchmark results.
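The overall shape of such a workflow might look like the sketch below. This is an illustration only, not the merged file: the job name, step names, and phpbench invocation are assumptions.

```yaml
# Hypothetical sketch of continuous-integration-perf.yml (not the actual file).
name: Continuous Performance

on:
  push:
    branches: [main]
  pull_request:
  workflow_dispatch:
    inputs:
      reset-baseline:
        description: "Discard the stored baseline and start fresh"
        type: boolean
        default: false

jobs:
  benchmark:
    runs-on: ubuntu-latest
    continue-on-error: true   # never fail the PR because of a slow run
    steps:
      - uses: actions/checkout@v4
      # Check out the historical results alongside the code under test.
      - uses: actions/checkout@v4
        with:
          ref: benchmarks
          path: benchmarks
      - name: Run benchmarks against the stored baseline (if any)
        run: vendor/bin/phpbench run --report=default || true
      - name: Store results back into the benchmarks branch
        run: |
          cp -r .phpbench benchmarks/
          # commit and push .phpbench plus the regenerated latest.md here
```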

These results can be viewed through the GitHub web interface by browsing the changes to `latest.md` (the human-readable file committed to the branch).

Additionally, one can clone the benchmarks branch and run phpbench log to explore the history in more detail.
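Concretely, that exploration might look like the following commands (a sketch: it assumes phpbench is installed and configured to read the committed `.phpbench` storage folder):

```shell
# Clone only the benchmarks branch, which holds the result history.
git clone --branch benchmarks https://github.com/Respect/Validation.git validation-benchmarks
cd validation-benchmarks

# List the stored benchmark runs in detail.
phpbench log
```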

Some adjustments to previously added benchmarks were made:

  • Assertions were included in order to track time and memory thresholds.
  • The benchmarks are now more surgical, addressing the concrete validators instead of the whole chain's validate() call.

These changes were made to make benchmarks more isolated, with the intention of adding chain-related benchmarks separately in the future.
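A rough illustration of what such a surgical benchmark with assertions can look like in phpbench (a sketch, not the PR's actual code: the rule class name and the 10% tolerance expressions are assumptions):

```php
<?php

use PhpBench\Attributes as Bench;
use Respect\Validation\Rules\IntType; // assumed concrete rule class

final class IntTypeBench
{
    private IntType $rule;

    public function setUp(): void
    {
        // Direct instantiation: no fluent chain, so the benchmark
        // measures only this concrete validator.
        $this->rule = new IntType();
    }

    #[Bench\BeforeMethods('setUp')]
    #[Bench\Assert('mode(variant.time.avg) <= mode(baseline.time.avg) +/- 10%')]
    #[Bench\Assert('mode(variant.mem.peak) <= mode(baseline.mem.peak) +/- 10%')]
    public function benchValidate(): void
    {
        $this->rule->validate(42);
    }
}
```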


A branch named `benchmarks` was created in the official repo. If this PR gets approved, we'll need to set up branch protection to ensure the historical benchmarks are not deleted. This is a chore I'll perform before merging.

  • Set up branch protection for the `benchmarks` branch.

A preview of how that branch looks after a few runs is available in my personal fork:

https://github.com/alganet/Validation/tree/benchmarks

Human-readable report:

(screenshot)

Viewing a commit link leads to a quick visual comparison between runs:
(screenshot)


Detailed view of the parametrized workflow_dispatch:

(screenshot)

@alganet alganet marked this pull request as ready for review January 20, 2026 18:28
@alganet alganet requested a review from Copilot January 20, 2026 18:28
@codecov

codecov bot commented Jan 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 97.81%. Comparing base (3270c1f) to head (0333424).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main    #1614      +/-   ##
============================================
+ Coverage     97.77%   97.81%   +0.04%     
  Complexity     1007     1007              
============================================
  Files           212      212              
  Lines          2339     2339              
============================================
+ Hits           2287     2288       +1     
+ Misses           52       51       -1     

☔ View full report in Codecov by Sentry.

Copilot AI left a comment


Pull request overview

This PR introduces a continuous performance monitoring workflow that automatically tracks and stores benchmark results for the Respect\Validation library. The workflow runs benchmarks on every push and pull request, comparing results against a historical baseline stored in a dedicated benchmarks branch.

Changes:

  • Added a new GitHub Actions workflow for continuous performance monitoring with baseline comparison and historical tracking
  • Updated benchmark configuration to include performance assertions with 10% tolerance thresholds for time and memory metrics
  • Refactored test providers and benchmarks to use direct validator instantiation instead of the fluent builder API

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.

File Description
.github/workflows/continuous-integration-perf.yml New workflow that runs benchmarks, compares against baseline, and stores results in the benchmarks branch
tests/benchmark/ValidatorBench.php Added performance assertions and updated to use the new evaluate() method with proper setup hooks
tests/library/SmokeTestProvider.php Converted from fluent API (v::validator()) to direct instantiation (new vs\Validator()) for more precise benchmarking
tests/feature/SerializableTest.php Updated to use the new evaluate() API method instead of validate()
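A hedged sketch of the API difference the table describes, using the method names it mentions (the rule class and the shape of the result object returned by evaluate() are assumptions, not confirmed by this PR):

```php
<?php

use Respect\Validation\Validator as v;
use Respect\Validation\Rules\IntType; // assumed rule class

// Before: fluent builder API, which exercises the whole chain.
$valid = v::intType()->validate(42);  // bool

// After: direct instantiation plus evaluate(), which returns a
// result object rather than a bare boolean (assumed shape).
$rule = new IntType();
$result = $rule->evaluate(42);
$valid = $result->isValid;
```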


@alganet alganet merged commit 9862963 into Respect:main Jan 21, 2026
7 checks passed
