Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##               main    #1614   +/-   ##
============================================
+ Coverage     97.77%   97.81%   +0.04%
  Complexity     1007     1007
============================================
  Files           212      212
  Lines          2339     2339
============================================
+ Hits           2287     2288       +1
+ Misses           52       51       -1
```

☔ View full report in Codecov by Sentry.
Pull request overview
This PR introduces a continuous performance monitoring workflow that automatically tracks and stores benchmark results for the Respect\Validation library. The workflow runs benchmarks on every push and pull request, comparing results against a historical baseline stored in a dedicated benchmarks branch.
Changes:
- Added a new GitHub Actions workflow for continuous performance monitoring with baseline comparison and historical tracking
- Updated benchmark configuration to include performance assertions with 10% tolerance thresholds for time and memory metrics
- Refactored test providers and benchmarks to use direct validator instantiation instead of the fluent builder API
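For readers unfamiliar with phpbench assertions, a benchmark with a 10% tolerance against a stored baseline might look roughly like the sketch below. This is not the PR's actual ValidatorBench.php; the rule, method names, and input are illustrative, and only the assertion expressions follow phpbench's documented syntax:

```php
<?php

use PhpBench\Attributes as Bench;
use Respect\Validation\Rules;

class ValidatorBenchSketch
{
    private Rules\IntVal $rule;

    public function setUp(): void
    {
        // Direct instantiation, matching the refactor in this PR.
        $this->rule = new Rules\IntVal();
    }

    #[Bench\BeforeMethods('setUp')]
    #[Bench\Revs(1000)]
    #[Bench\Iterations(5)]
    // Flag (softly, since this workflow never fails the PR) any run whose
    // time or peak memory drifts more than 10% from the stored baseline.
    #[Bench\Assert('mode(variant.time.avg) < mode(baseline.time.avg) +/- 10%')]
    #[Bench\Assert('mode(variant.mem.peak) < mode(baseline.mem.peak) +/- 10%')]
    public function benchIntVal(): void
    {
        $this->rule->evaluate('42');
    }
}
```

The `baseline.*` terms only resolve when phpbench is invoked with a `--ref` pointing at a previously tagged run; otherwise the assertions are skipped.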
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| .github/workflows/continuous-integration-perf.yml | New workflow that runs benchmarks, compares against baseline, and stores results in the `benchmarks` branch |
| tests/benchmark/ValidatorBench.php | Added performance assertions and updated to use the new evaluate() method with proper setup hooks |
| tests/library/SmokeTestProvider.php | Converted from fluent API (v::validator()) to direct instantiation (new vs\Validator()) for more precise benchmarking |
| tests/feature/SerializableTest.php | Updated to use the new evaluate() API method instead of validate() |
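To make the "fluent vs direct" distinction concrete, the two styles differ roughly as follows (rule name and input are illustrative, not the PR's actual provider code):

```php
<?php

use Respect\Validation\Validator as v;
use Respect\Validation\Rules;

// Fluent builder API: builds a validator chain, so a benchmark of this
// call also measures builder and chain overhead.
v::intVal()->isValid('42');

// Direct instantiation: exercises one concrete rule in isolation,
// which is what the refactored benchmarks target.
$rule = new Rules\IntVal();
$rule->evaluate('42');
```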
A new workflow, continuous-integration-perf.yml, was introduced. It:

- Checks out the `benchmarks` branch locally.
- Runs the benchmarks, accounting for non-existent baselines and targets (main/PR).
- Stores the .phpbench storage folder and a human-readable report in the `benchmarks` branch.
- Does not make a PR fail, and never reports a failure when merging to main.
- Allows workflow_dispatch for quick re-runs, and has an option to reset the baseline in case something changes (the GitHub runner setup gets faster/slower, major refactors, etc.).

Thus, it keeps a historical record of all benchmark results. These results can be viewed by exploring GitHub via the web interface and looking at the changes in `latest.md` (the human-readable file committed to the branch). Additionally, one can clone the `benchmarks` branch and run `phpbench log` to explore the history in more detail.

Some adjustments to previously added benchmarks were made:

- Assertions were included in order to track time and memory thresholds.
- The benchmarks are now more surgical, addressing the concrete validators instead of the whole chain validate.

These changes were made to make the benchmarks more isolated, with the intention of adding chain-related benchmarks separately in the future.
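The steps above could be wired together roughly as in the sketch below. This is not the PR's actual workflow file; job names, the baseline tag, and the phpbench invocations are assumptions based on the description, using only documented phpbench flags (`--tag`, `--ref`, `--report`):

```yaml
name: continuous-integration-perf

on:
  push:
    branches: [main]
  pull_request:
  workflow_dispatch:
    inputs:
      reset-baseline:
        description: 'Discard the stored baseline and start fresh'
        type: boolean
        default: false

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Fetch historical results; tolerate a non-existent branch.
      - name: Check out benchmarks branch
        run: git fetch origin benchmarks || true

      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.3'

      - run: composer install --no-progress

      # Compare against the stored baseline if one exists; if not,
      # create it. Never fail the job on an assertion breach.
      - name: Run benchmarks
        continue-on-error: true
        run: |
          vendor/bin/phpbench run --report=aggregate --ref=baseline \
            || vendor/bin/phpbench run --report=aggregate --tag=baseline

      # Committing the .phpbench storage folder and latest.md back to
      # the benchmarks branch would happen here (main builds only).
```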
A branch `benchmarks` was created in the official repo. If this PR gets approved, we'll need to set up branch protection on the `benchmarks` branch to ensure the historical benchmarks are not deleted. This is a chore I'll perform before merging.

A preview of how that branch looks after a few runs is available in my personal fork:
https://github.com/alganet/Validation/tree/benchmarks
Human-readable report:
Viewing a commit link leads to a quick visual comparison between runs:

Detailed view of the parametrized workflow_dispatch: