add benchmarks using pytest-benchmark and codspeed #3562
base: main
Conversation
@zarr-developers/steering-council I don't have permission to register this repo with CodSpeed. I submitted a request to register it; could someone approve it?

done
Does anyone have opinions about benchmarks? Feel free to suggest something concrete. Otherwise, I think we should take this as-is and handle later benchmarks (like partial shard reads / writes) in a subsequent PR.
```yaml
uses: CodSpeedHQ/action@v4
with:
  mode: instrumentation
  run: hatch run test.py3.11-1.26-minimal:run-benchmark
```
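For context, a workflow fragment like the one above would typically sit inside a full GitHub Actions job. The sketch below is an illustrative assumption about what that job might look like, extrapolated from the snippet; the workflow name, trigger events, and setup steps are not taken from the PR, and `CODSPEED_TOKEN` is the secret name the CodSpeed action conventionally expects:

```yaml
# Hypothetical surrounding workflow; only the CodSpeed step is from the PR.
name: benchmarks
on: [push, pull_request]
jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: hatch run test.py3.11-1.26-minimal:run-benchmark
```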
Can we test the latest versions instead? That seems more appropriate.
Indexing, please; that'll exercise the codec pipeline too. A peak-memory metric would be good to track as well, if possible.
Since #3554 was an unpopular direction, I'm going instead with CodSpeed + pytest-benchmark. Opening as a draft because I haven't looked into how CodSpeed works at all, but I'd like people to weigh in on whether these initial benchmarks make sense. Naturally we can add more specific ones later, but I figured some bulk array read / write workloads would be a good start.