add benchmarks using pytest-benchmark and codspeed #3562
     Open
      
      
d-v-b wants to merge 10 commits into zarr-developers:main from d-v-b:chore/benchmarks
+127 −1
    
  
  
Changes shown are from 8 of the 10 commits.

Commits (all by d-v-b):
- ef5db51 add benchmarks
- 4c2a935 remove failing zipstore
- 3e5c6cb don't do benchmarking in default pytest runs
- fc3388d changelog
- da64194 codspeed workflow
- ba2c4cf lint
- 009f739 remove pedantic mode
- 7b26982 only run benchmarks in one environment
- c0342ee use better string id for test params, make test data 1MB, and simplif…
- 800b64c move layout to an external file
  
    
New file: GitHub Actions workflow for CodSpeed (+30 lines):

```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
        with:
          fetch-depth: 0 # grab all branches and tags
      - name: Set up Python
        uses: actions/setup-python@v6
      - name: Install Hatch
        run: |
          python -m pip install --upgrade pip
          pip install hatch
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: hatch run test.py3.11-1.26-minimal:run-benchmark
```
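The `run:` line targets a hatch matrix environment by name. For orientation, here is a minimal, hypothetical sketch of the kind of `pyproject.toml` configuration that would produce an environment named `test.py3.11-1.26-minimal` exposing a `run-benchmark` script; the actual definitions live in zarr-python's `pyproject.toml`, and the script body and test path below are assumptions:

```toml
# Hypothetical sketch; zarr-python's real pyproject.toml may differ.
# Hatch appends each matrix variable's value to the environment name,
# so this matrix yields names like "test.py3.11-1.26-minimal".
[[tool.hatch.envs.test.matrix]]
python = ["3.11"]
numpy = ["1.26"]
deps = ["minimal"]

[tool.hatch.envs.test.scripts]
# Benchmarks are excluded from default pytest runs (per the commit
# "don't do benchmarking in default pytest runs"), so a dedicated
# script opts back in. The path is illustrative.
run-benchmark = "pytest tests/test_benchmark.py"
```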
  
    
New changelog entry (+1 line):

```
Add continuous performance benchmarking infrastructure.
```
  
    
New file (empty).
  
    
New file: pytest-benchmark test module (+90 lines):

```python
"""
Test the basic end-to-end read/write performance of Zarr
"""

from __future__ import annotations

from dataclasses import dataclass
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from pytest_benchmark.fixture import BenchmarkFixture

    from zarr.abc.store import Store
    from zarr.core.common import NamedConfig

from operator import getitem, setitem
from typing import Any, Literal

import pytest

from zarr import create_array

CompressorName = Literal["gzip"] | None

compressors: dict[CompressorName, NamedConfig[Any, Any] | None] = {
    None: None,
    "gzip": {"name": "gzip", "configuration": {"level": 1}},
}


@dataclass(kw_only=True, frozen=True)
class Layout:
    shape: tuple[int, ...]
    chunks: tuple[int, ...]
    shards: tuple[int, ...] | None


layouts: tuple[Layout, ...] = (
    Layout(shape=(16,), chunks=(1,), shards=None),
    Layout(shape=(16,), chunks=(16,), shards=None),
    Layout(shape=(16,), chunks=(1,), shards=(1,)),
    Layout(shape=(16,), chunks=(1,), shards=(16,)),
    Layout(shape=(16,) * 2, chunks=(1,) * 2, shards=None),
    Layout(shape=(16,) * 2, chunks=(16,) * 2, shards=None),
    Layout(shape=(16,) * 2, chunks=(1,) * 2, shards=(1,) * 2),
    Layout(shape=(16,) * 2, chunks=(1,) * 2, shards=(16,) * 2),
)


@pytest.mark.parametrize("compression_name", [None, "gzip"])
@pytest.mark.parametrize("layout", layouts)
@pytest.mark.parametrize("store", ["memory", "local", "zip"], indirect=["store"])
def test_write_array(
    store: Store, layout: Layout, compression_name: CompressorName, benchmark: BenchmarkFixture
) -> None:
    """
    Test the time required to fill an array with a single value
    """
    arr = create_array(
        store,
        dtype="uint8",
        shape=layout.shape,
        chunks=layout.chunks,
        shards=layout.shards,
        compressors=compressors[compression_name],  # type: ignore[arg-type]
        fill_value=0,
    )

    benchmark(setitem, arr, Ellipsis, 1)


@pytest.mark.parametrize("compression_name", [None, "gzip"])
@pytest.mark.parametrize("layout", layouts)
@pytest.mark.parametrize("store", ["memory", "local"], indirect=["store"])
def test_read_array(
    store: Store, layout: Layout, compression_name: CompressorName, benchmark: BenchmarkFixture
) -> None:
    """
    Test the time required to read an array filled with a single value
    """
    arr = create_array(
        store,
        dtype="uint8",
        shape=layout.shape,
        chunks=layout.chunks,
        shards=layout.shards,
        compressors=compressors[compression_name],  # type: ignore[arg-type]
        fill_value=0,
    )
    arr[:] = 1
    benchmark(getitem, arr, Ellipsis)
```
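The `store` parameter is resolved via `indirect=["store"]`, so these tests rely on a `store` fixture defined elsewhere in the test suite and not shown in this diff. A minimal sketch of what such a fixture plausibly looks like, assuming zarr's public `MemoryStore`, `LocalStore`, and `ZipStore` classes; the real fixture may differ:

```python
# Hypothetical conftest.py sketch -- the actual fixture lives elsewhere
# in the zarr-python test suite.
from __future__ import annotations

from pathlib import Path

import pytest

from zarr.abc.store import Store
from zarr.storage import LocalStore, MemoryStore, ZipStore


@pytest.fixture
def store(request: pytest.FixtureRequest, tmp_path: Path) -> Store:
    # With indirect=["store"], request.param carries the parametrized
    # string: "memory", "local", or "zip".
    if request.param == "memory":
        return MemoryStore()
    if request.param == "local":
        return LocalStore(tmp_path)
    if request.param == "zip":
        return ZipStore(tmp_path / "store.zip", mode="w")
    raise ValueError(f"unknown store type: {request.param!r}")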
      
    
Review discussion:

Reviewer: can we test the latest instead? seems more appropriate...

d-v-b: The latest version of Python? What's the reasoning? I'd rather update this file when we drop a supported version than when a new version of Python comes out.

Reviewer: Because we'd want to catch a perf regression from upstream changes too? I'm suggesting the latest released versions of the libraries: py=3.13, np=2.2.

d-v-b: We don't have an upper bound on numpy versions, so I don't think this particular workflow will help us catch regressions from upstream changes -- we would need to update this workflow every time a new version of numpy is released. IMO that's something we should do in a separate benchmark workflow. This workflow here will run on every PR, and in that case the oldest version of numpy we support seems better.

We also don't have to use a pre-baked hatch environment here -- we could define a dependency set specific to benchmarking. But my feeling is that benchmarking against older versions of stuff gives us a better measure of what users will actually experience.
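For concreteness, the "dependency set specific to benchmarking" alternative might look like the following hypothetical sketch; no such environment exists in this PR, and the names, pins, and paths are assumptions:

```toml
# Hypothetical dedicated benchmark environment -- not part of this PR.
[tool.hatch.envs.benchmark]
dependencies = [
    "pytest",
    "pytest-benchmark",
    # Pin the oldest supported numpy, per the reasoning above.
    "numpy==1.26.*",
]

[tool.hatch.envs.benchmark.scripts]
run-benchmark = "pytest tests/test_benchmark.py"  # path is illustrative
```

The workflow's `run:` line would then become `hatch run benchmark:run-benchmark` instead of targeting a pre-baked test matrix environment.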