@@ -159,18 +159,18 @@ If you're working with really big arrays, try the 'lazy' option:

     nbytes: 3.6P; cbytes: 0; initialized: 0/1000000000
     mode: w; path: big.zarr

-See the [persistence documentation](PERSISTENCE.rst) for more details of the
-file format.
+See the `persistence documentation <PERSISTENCE.rst>`_ for more
+details of the file format.

 Tuning
 ------

-``zarr`` is optimised for accessing and storing data in contiguous slices,
-of the same size or larger than chunks. It is not and will never be
-optimised for single item access.
+``zarr`` is optimised for accessing and storing data in contiguous
+slices, of the same size or larger than chunks. It is not and probably
+never will be optimised for single item access.

-Chunks sizes >= 1M are generally good. Optimal chunk shape will depend on
-the correlation structure in your data.
+Chunk sizes >= 1M are generally good. Optimal chunk shape will depend
+on the correlation structure in your data.

 ``zarr`` is designed for use in parallel computations working
 chunk-wise over data. Try it with `dask.array
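The tuning advice in this hunk (contiguous slices good, single-item access bad) follows from how chunked storage works: every access decompresses at least one whole chunk. A toy stdlib sketch of that cost model — `chunks_touched` is a made-up helper for illustration, not zarr API:

```python
def chunks_touched(indices, chunk_len):
    """Number of distinct chunks a set of 1-D indices falls into.

    Each distinct chunk must be decompressed in full, so this is a
    rough proxy for read cost in a chunked, compressed array.
    """
    return len({i // chunk_len for i in indices})

scattered = range(0, 10_000, 100)   # 100 single items spread across the array
contiguous = range(5_000, 5_100)    # 100 adjacent items

print(chunks_touched(scattered, chunk_len=1_000))   # 10 chunks decompressed
print(chunks_touched(contiguous, chunk_len=1_000))  # 1 chunk decompressed
```

Same number of items read, an order of magnitude difference in chunks decompressed — which is why slices of the same size or larger than a chunk are the intended access pattern.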
@@ -179,12 +179,6 @@ multi-threaded, set zarr to use blosc in contextual mode::

     >>> zarr.set_blosc_options(use_context=True)

-If using zarr in a single-threaded context, set zarr to use blosc in
-non-contextual mode, which allows blosc to use multiple threads
-internally::
-
-    >>> zarr.set_blosc_options(use_context=False, nthreads=4)
-
 Acknowledgments
 ---------------
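The chunk-wise parallel pattern this diff alludes to (contextual blosc mode, `dask.array` over a zarr array) can be sketched with stdlib tools alone. This is a toy stand-in — plain Python lists, not zarr's API — just to show the shape of the computation: each worker handles one whole chunk, matching the storage layout:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy chunk-wise reduction: split the data into chunk-sized pieces and
# let each worker reduce one whole chunk, then combine the partials.
data = list(range(1_000_000))
chunk_len = 100_000
chunks = [data[i:i + chunk_len] for i in range(0, len(data), chunk_len)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))

print(sum(partial_sums))  # 499999500000
```

With real zarr arrays, each worker would read and decompress exactly the chunks it owns, which is why per-thread ("contextual") compression state is the right setting for this pattern.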