
Commit 215ff96
Merge branch 'main' of github.com:zarr-developers/zarr-python into feat/read-funcs
2 parents: 7a5cbe7 + a7714c7


48 files changed: +554 −290 lines

Note: many of the "-"/"+" line pairs below look identical because they differ
only in trailing whitespace. This merge adds a trailing-whitespace pre-commit
hook (see .pre-commit-config.yaml below), and the touched files were
reformatted accordingly.

.github/workflows/gpu_test.yml
Lines changed: 1 addition & 1 deletion

@@ -55,7 +55,7 @@ jobs:
           cache: 'pip'
       - name: Install Hatch and CuPy
         run: |
-          python -m pip install --upgrade pip
+          python -m pip install --upgrade pip
           pip install hatch
       - name: Set Up Hatch Env
         run: |
.github/workflows/releases.yml
Lines changed: 2 additions & 2 deletions

@@ -23,7 +23,7 @@ jobs:

       - name: Install PyBuild
         run: |
-          python -m pip install --upgrade pip
+          python -m pip install --upgrade pip
           pip install hatch
       - name: Build wheel and sdist
         run: hatch build
@@ -55,7 +55,7 @@ jobs:
         with:
           name: releases
           path: dist
-      - uses: pypa/[email protected].2
+      - uses: pypa/[email protected].3
         with:
           user: __token__
           password: ${{ secrets.pypi_password }}

.github/workflows/test.yml
Lines changed: 2 additions & 2 deletions

@@ -52,7 +52,7 @@ jobs:
           cache: 'pip'
       - name: Install Hatch
         run: |
-          python -m pip install --upgrade pip
+          python -m pip install --upgrade pip
           pip install hatch
       - name: Set Up Hatch Env
         run: |
@@ -84,7 +84,7 @@ jobs:
           cache: 'pip'
       - name: Install Hatch
         run: |
-          python -m pip install --upgrade pip
+          python -m pip install --upgrade pip
           pip install hatch
       - name: Set Up Hatch Env
         run: |

.pre-commit-config.yaml
Lines changed: 3 additions & 3 deletions

@@ -1,10 +1,9 @@
 ci:
   autoupdate_commit_msg: "chore: update pre-commit hooks"
+  autoupdate_schedule: "monthly"
   autofix_commit_msg: "style: pre-commit fixes"
   autofix_prs: false
 default_stages: [pre-commit, pre-push]
-default_language_version:
-  python: python3
 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
     rev: v0.8.2
@@ -16,11 +15,12 @@ repos:
     rev: v2.3.0
     hooks:
       - id: codespell
-        args: ["-L", "ba,ihs,kake,nd,noe,nwo,te,fo,zar", "-S", "fixture"]
+        args: ["-L", "fo,ihs,kake,te", "-S", "fixture"]
   - repo: https://github.com/pre-commit/pre-commit-hooks
     rev: v5.0.0
     hooks:
       - id: check-yaml
+      - id: trailing-whitespace
   - repo: https://github.com/pre-commit/mirrors-mypy
     rev: v1.13.0
     hooks:
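The newly added trailing-whitespace hook explains why so many otherwise
identical-looking lines change in this commit. As a minimal Python sketch of
the transformation the hook applies (illustrative only; the real hook lives in
the pre-commit-hooks repository):

```python
def strip_trailing_whitespace(text: str) -> str:
    """Remove trailing spaces/tabs from every line, keeping a final newline if present."""
    stripped = "\n".join(line.rstrip() for line in text.splitlines())
    return stripped + "\n" if text.endswith("\n") else stripped

# A line with trailing spaces and a whitespace-only line both get cleaned,
# without changing any visible content:
cleaned = strip_trailing_whitespace("x = 1   \n\t\n")
```

Running this over a file produces exactly the kind of diff seen throughout
this merge: removed and added lines that render identically.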

README-v3.md
Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ hatch env create test
 ## Run the Tests

 ```
-hatch run test:run
+hatch run test:run
 ```

 or

bench/compress_normal.txt
Lines changed: 20 additions & 20 deletions
(all 20 changed lines are whitespace-only: trailing spaces stripped from
blank lines of the profiler listing)

@@ -19,7 +19,7 @@ Line #  Hits  Time  Per Hit  % Time  Line Contents
 ==============================================================
    137                                     def compress(source, char* cname, int clevel, int shuffle):
    138                                         """Compress data in a numpy array.
-   139
+   139
    140                                         Parameters
    141                                         ----------
    142                                         source : array-like
@@ -30,33 +30,33 @@ Line #  Hits  Time  Per Hit  % Time  Line Contents
    147                                             Compression level.
    148                                         shuffle : int
    149                                             Shuffle filter.
-   150
+   150
    151                                         Returns
    152                                         -------
    153                                         dest : bytes-like
    154                                             Compressed data.
-   155
+   155
    156                                         """
-   157
+   157
    158                                         cdef:
    159                                             char *source_ptr
    160                                             char *dest_ptr
    161                                             Py_buffer source_buffer
    162                                             size_t nbytes, cbytes, itemsize
    163   200    506   2.5   0.2                   array.array char_array_template = array.array('b', [])
    164                                             array.array dest
-   165
+   165
    166                                         # setup source buffer
    167   200    458   2.3   0.2               PyObject_GetBuffer(source, &source_buffer, PyBUF_ANY_CONTIGUOUS)
    168   200    119   0.6   0.0               source_ptr = <char *> source_buffer.buf
-   169
+   169
    170                                         # setup destination
    171   200    239   1.2   0.1               nbytes = source_buffer.len
    172   200    103   0.5   0.0               itemsize = source_buffer.itemsize
    173   200   2286  11.4   0.8               dest = array.clone(char_array_template, nbytes + BLOSC_MAX_OVERHEAD,
    174                                                            zero=False)
    175   200    129   0.6   0.0               dest_ptr = <char *> dest.data.as_voidptr
-   176
+   176
    177                                         # perform compression
    178   200   1734   8.7   0.6               if _get_use_threads():
    179                                             # allow blosc to use threads internally
@@ -67,24 +67,24 @@ Line #  Hits  Time  Per Hit  % Time  Line Contents
    184                                                 cbytes = blosc_compress(clevel, shuffle, itemsize, nbytes,
    185                                                                         source_ptr, dest_ptr,
    186                                                                         nbytes + BLOSC_MAX_OVERHEAD)
-   187
+   187
    188                                         else:
    189                                             with nogil:
    190                                                 cbytes = blosc_compress_ctx(clevel, shuffle, itemsize, nbytes,
    191                                                                             source_ptr, dest_ptr,
    192                                                                             nbytes + BLOSC_MAX_OVERHEAD, cname,
    193                                                                             0, 1)
-   194
+   194
    195                                         # release source buffer
    196   200    616   3.1   0.2               PyBuffer_Release(&source_buffer)
-   197
+   197
    198                                         # check compression was successful
    199   200    120   0.6   0.0               if cbytes <= 0:
    200                                             raise RuntimeError('error during blosc compression: %d' % cbytes)
-   201
+   201
    202                                         # resize after compression
    203   200   1896   9.5   0.6               array.resize(dest, cbytes)
-   204
+   204
    205   200    186   0.9   0.1               return dest

 *******************************************************************************
@@ -100,19 +100,19 @@ Line #  Hits  Time  Per Hit  % Time  Line Contents
 ==============================================================
     75                                     def decompress(source, dest):
     76                                         """Decompress data.
-    77
+    77
     78                                         Parameters
     79                                         ----------
     80                                         source : bytes-like
     81                                             Compressed data, including blosc header.
     82                                         dest : array-like
     83                                             Object to decompress into.
-    84
+    84
     85                                         Notes
     86                                         -----
     87                                         Assumes that the size of the destination buffer is correct for the size of
     88                                         the uncompressed data.
-    89
+    89
     90                                         """
     91                                         cdef:
     92                                             int ret
@@ -122,7 +122,7 @@ Line #  Hits  Time  Per Hit  % Time  Line Contents
     96                                             array.array source_array
     97                                             Py_buffer dest_buffer
     98                                             size_t nbytes
-    99
+    99
    100                                         # setup source buffer
    101   200    573   2.9   0.2               if PY2 and isinstance(source, array.array):
    102                                             # workaround fact that array.array does not support new-style buffer
@@ -134,13 +134,13 @@ Line #  Hits  Time  Per Hit  % Time  Line Contents
    108   200    112   0.6   0.0                   release_source_buffer = True
    109   200    144   0.7   0.1                   PyObject_GetBuffer(source, &source_buffer, PyBUF_ANY_CONTIGUOUS)
    110   200     98   0.5   0.0               source_ptr = <char *> source_buffer.buf
-   111
+   111
    112                                         # setup destination buffer
    113   200    552   2.8   0.2               PyObject_GetBuffer(dest, &dest_buffer,
    114                                                            PyBUF_ANY_CONTIGUOUS | PyBUF_WRITEABLE)
    115   200    100   0.5   0.0               dest_ptr = <char *> dest_buffer.buf
    116   200     84   0.4   0.0               nbytes = dest_buffer.len
-   117
+   117
    118                                         # perform decompression
    119   200   1856   9.3   0.8               if _get_use_threads():
    120                                             # allow blosc to use threads internally
@@ -149,12 +149,12 @@ Line #  Hits  Time  Per Hit  % Time  Line Contents
    123                                         else:
    124                                             with nogil:
    125                                                 ret = blosc_decompress_ctx(source_ptr, dest_ptr, nbytes, 1)
-   126
+   126
    127                                         # release buffers
    128   200    754   3.8   0.3               if release_source_buffer:
    129   200    326   1.6   0.1                   PyBuffer_Release(&source_buffer)
    130   200    165   0.8   0.1               PyBuffer_Release(&dest_buffer)
-   131
+   131
    132                                         # handle errors
    133   200    128   0.6   0.1               if ret <= 0:
    134                                             raise RuntimeError('error during blosc decompression: %d' % ret)
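The profiled Cython routines above follow a common buffer-management pattern:
compress into a destination sized nbytes + BLOSC_MAX_OVERHEAD, check the
return code, then resize to the actual compressed size; decompress assumes
the destination size is already known. A minimal stdlib sketch of the same
pattern, using zlib in place of blosc (illustration only, not the zarr code
path):

```python
import zlib

def compress(source: bytes, clevel: int = 5) -> bytes:
    """Compress data; raise on failure, like the profiled compress()."""
    dest = zlib.compress(source, clevel)
    if not dest:
        raise RuntimeError("error during compression")
    return dest

def decompress(source: bytes, nbytes: int) -> bytes:
    """Decompress; assumes nbytes, the destination buffer size, is correct."""
    dest = zlib.decompress(source)
    if len(dest) != nbytes:
        raise RuntimeError("error during decompression: size mismatch")
    return dest

# Round-trip check over a highly compressible buffer:
data = b"\x00" * 1000
restored = decompress(compress(data), 1000)
```

zlib manages the output buffer internally, so the explicit resize step from
the blosc listing has no direct analogue here; the size check on decompression
mirrors the listing's "size of the destination buffer is correct" assumption.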

docs/conf.py
Lines changed: 8 additions & 0 deletions

@@ -47,6 +47,7 @@
     "sphinx_issues",
     "sphinx_copybutton",
     "sphinx_design",
+    'sphinx_reredirects',
 ]

 issues_github_path = "zarr-developers/zarr-python"
@@ -81,6 +82,13 @@
 version = get_version("zarr")
 release = get_version("zarr")

+redirects = {
+    "spec": "https://zarr-specs.readthedocs.io",
+    "spec/v1": 'https://zarr-specs.readthedocs.io/en/latest/v1/v1.0.html',
+    "spec/v2": "https://zarr-specs.readthedocs.io/en/latest/v2/v2.0.html",
+    "spec/v3": "https://zarr-specs.readthedocs.io/en/latest/v3/core/v3.0.html",
+}
+
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
 #
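The new redirects mapping is consumed by the sphinx_reredirects extension
added to the extensions list, which emits a small meta-refresh stub page for
each redirected docname so old spec URLs keep working. A rough sketch of the
effect (hypothetical helper; the real extension hooks into the Sphinx build):

```python
# Template resembling the stub pages a redirect extension generates
# (assumed form; the extension's actual markup may differ).
REDIRECT_TEMPLATE = (
    '<html><head><meta http-equiv="refresh" content="0; url={url}"></head></html>'
)

def render_redirect_pages(redirects: dict) -> dict:
    """Return {docname + '.html': stub page} for each entry in `redirects`."""
    return {
        f"{doc}.html": REDIRECT_TEMPLATE.format(url=url)
        for doc, url in redirects.items()
    }

pages = render_redirect_pages({"spec": "https://zarr-specs.readthedocs.io"})
```

This matches the intent visible in the contributing.rst change below: the spec
pages move out of this repository to zarr-specs.readthedocs.io.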

docs/contributing.rst
Lines changed: 4 additions & 3 deletions

@@ -1,5 +1,5 @@
-Contributing to Zarr
-====================
+Contributing
+============

 Zarr is a community maintained project. We welcome contributions in the form of bug
 reports, bug fixes, documentation, enhancement proposals and more. This page provides
@@ -307,7 +307,8 @@ Data format compatibility
 The data format used by Zarr is defined by a specification document, which should be
 platform-independent and contain sufficient detail to construct an interoperable
 software library to read and/or write Zarr data using any programming language. The
-latest version of the specification document is available from the :ref:`spec` page.
+latest version of the specification document is available on the
+`Zarr specifications website <https://zarr-specs.readthedocs.io>`_.

 Here, **data format compatibility** means that all software libraries that implement a
 particular version of the Zarr storage specification are interoperable, in the sense

docs/guide/index.rst
Lines changed: 1 addition & 0 deletions

@@ -4,5 +4,6 @@ Guide
 .. toctree::
    :maxdepth: 1

+   whatsnew_v3
    storage
    consolidated_metadata

docs/guide/storage.rst
Lines changed: 5 additions & 5 deletions

@@ -4,7 +4,7 @@ Storage
 Zarr-Python supports multiple storage backends, including: local file systems,
 Zip files, remote stores via ``fsspec`` (S3, HTTP, etc.), and in-memory stores. In
 Zarr-Python 3, stores must implement the abstract store API from
-:class:`zarr.abc.store.Store`.
+:class:`zarr.abc.store.Store`.

 .. note::
    Unlike Zarr-Python 2 where the store interface was built around a generic ``MutableMapping``
@@ -50,8 +50,8 @@ filesystem.
 Zip Store
 ~~~~~~~~~

-The :class:`zarr.storage.ZipStore` stores the contents of a Zarr hierarchy in a single
-Zip file. The `Zip Store specification_` is currently in draft form.
+The :class:`zarr.storage.ZipStore` stores the contents of a Zarr hierarchy in a single
+Zip file. The `Zip Store specification_` is currently in draft form.

 .. code-block:: python

@@ -65,7 +65,7 @@ Remote Store

 The :class:`zarr.storage.RemoteStore` stores the contents of a Zarr hierarchy in following the same
 logical layout as the ``LocalStore``, except the store is assumed to be on a remote storage system
-such as cloud object storage (e.g. AWS S3, Google Cloud Storage, Azure Blob Store). The
+such as cloud object storage (e.g. AWS S3, Google Cloud Storage, Azure Blob Store). The
 :class:`zarr.storage.RemoteStore` is backed by `Fsspec_` and can support any Fsspec backend
 that implements the `AbstractFileSystem` API,

@@ -80,7 +80,7 @@ Memory Store
 ~~~~~~~~~~~~

 The :class:`zarr.storage.RemoteStore` a in-memory store that allows for serialization of
-Zarr data (metadata and chunks) to a dictionary.
+Zarr data (metadata and chunks) to a dictionary.

 .. code-block:: python
