feat(benches): add benchmarks for historic and latest events scanning #254
0xNeshi merged 41 commits into OpenZeppelin:main from …
Conversation
Hey @0xNeshi …

All good 👍
GM @0xNeshi,

Workflow Architecture: three GitHub Actions workflows handle different scenarios: …

Required Setup: …

How It Works: when code is merged to …
0xNeshi left a comment:
Excellent work, let's polish now.
@0xNeshi what do you think about creating multiple dump files and starting anvil from that state, instead of having to recreate it every time we start a bench?
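As a rough illustration of that idea (not this PR's actual setup), a bench could start anvil from a previously dumped state file instead of redeploying everything on each run. This sketch assumes Foundry's `anvil` is on `PATH` and uses its `--load-state`/`--dump-state` flags; the state-file path and port are hypothetical:

```rust
use std::process::{Child, Command};

/// Spawn a local anvil node preloaded from a previously dumped state file,
/// so the bench skips re-deploying contracts and re-emitting events.
/// The state file is assumed to have been produced earlier with
/// `anvil --dump-state <path>`.
fn spawn_anvil_from_dump(state_path: &str, port: u16) -> std::io::Result<Child> {
    Command::new("anvil")
        .arg("--load-state")
        .arg(state_path)
        .arg("--port")
        .arg(port.to_string())
        .spawn()
}

fn main() -> std::io::Result<()> {
    // Hypothetical layout: one dump file per benchmark scenario.
    let mut anvil = spawn_anvil_from_dump("benches/state/historic_events.json", 8545)?;
    // ... run the benchmark against http://localhost:8545 ...
    anvil.kill()?;
    Ok(())
}
```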
Hey @0xNeshi …
Hey @0xNeshi, I think this setup makes sense, since this … What do you think? I'll be happy to implement a different approach if you prefer.
0xNeshi left a comment:
@yug49 re: #254 (comment)
We actually need to make a distinction between:
- running benches locally - means the benches and/or the code being benched are being debugged, so it makes sense to enable the flag to address any issues
- manually triggering CI - means we're running benches outside the normal schedule

Let's actually remove the `--err` flag completely.

Yes, that sounds good.
Related to #229
Hey!
This PR introduces a benchmarking system for Event Scanner using Criterion.rs to measure the performance impact of changes to the scanner. Currently a draft; Bencher CI integration is coming in a follow-up.
What's Included
New `benches` Crate Structure
Benchmarks Implemented
Example of Historic: [screenshot omitted]
How Regression Testing Works
Criterion stores baseline results in `target/criterion/<benchmark>/base/`. On subsequent runs, it compares new measurements against this baseline and reports the change. If a change introduces a regression, you'll see something like: …

This makes it easy to catch slowdowns before merging.
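As a hedged sketch of how strict that comparison is, Criterion's builder exposes knobs such as `noise_threshold`, `significance_level`, and `sample_size`; the values and the benchmark below are illustrative only, not this PR's configuration:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_latest(c: &mut Criterion) {
    // Placeholder workload standing in for a latest-events scan.
    c.bench_function("latest_events_scan", |b| b.iter(|| (0..1_000u64).sum::<u64>()));
}

/// Report a regression only for changes above 5% noise at a 5% significance level.
fn configured() -> Criterion {
    Criterion::default()
        .noise_threshold(0.05)
        .significance_level(0.05)
        .sample_size(50)
}

criterion_group! {
    name = benches;
    config = configured();
    targets = bench_latest
}
criterion_main!(benches);
```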
Running Benchmarks
Next Steps