@@ -64,22 +64,24 @@ compression_opts
compression library.
fill_value
A scalar value providing the default value to use for uninitialized
- portions of the array.
+ portions of the array, or ``null`` if no fill_value is to be used.
order
Either "C" or "F", defining the layout of bytes within each chunk of the
array. "C" means row-major order, i.e., the last dimension varies fastest;
"F" means column-major order, i.e., the first dimension varies fastest.
filters
- TODO
+ A list of JSON objects providing filter configurations, or ``null`` if no
+ filters are to be applied. Each filter configuration object MUST contain a
+ ``"name"`` key identifying the filter to be used.

Other keys MUST NOT be present within the metadata object.

For example, the JSON object below defines a 2-dimensional array of 64-bit
little-endian floating point numbers with 10000 rows and 10000 columns, divided
into chunks of 1000 rows and 1000 columns (so there will be 100 chunks in total
arranged in a 10 by 10 grid). Within each chunk the data are laid out in C
- contiguous order, and each chunk is compressed using the Blosc compression
- library::
+ contiguous order. Each chunk is encoded using a delta filter and compressed
+ using the Blosc compression library prior to storage::

{
"chunks": [
@@ -93,9 +95,9 @@ library::
"shuffle": 1
},
"dtype": "<f8",
- "fill_value": null,
+ "fill_value": "NaN",
"filters": [
- {"name": "delta", "enc_dtype": "<f4", "dec_dtype": "<f8"}
+ {"name": "delta", "dtype": "<f8", "astype": "<f4"}
],
"order": "C",
"shape": [
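The hunk above renames the delta filter parameters to ``"dtype"`` and ``"astype"``. As a rough illustration of the delta transform itself (the helper names here are hypothetical, and the ``astype`` narrowing to 4-byte floats before storage is omitted for clarity):

```python
# Hypothetical sketch of the transform behind
# {"name": "delta", "dtype": "<f8", "astype": "<f4"}:
# keep the first element, then store successive differences.

def delta_encode(values):
    # first element unchanged, remainder as differences from the predecessor
    return values[:1] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(encoded):
    out = encoded[:1]
    for diff in encoded[1:]:
        out.append(out[-1] + diff)
    return out

data = [1.0, 2.5, 2.5, 4.0]
encoded = delta_encode(data)   # [1.0, 1.5, 0.0, 1.5]
assert delta_decode(encoded) == data
```

Smooth data yields many small or repeated differences, which is why a delta filter can improve the compression ratio achieved by the primary compressor.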
@@ -147,7 +149,6 @@ Positive Infinity ``"Infinity"``
Negative Infinity ``"-Infinity"``
================= ===============

-
Chunks
~~~~~~
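The table above maps non-finite floats to JSON strings, since JSON itself cannot represent them. A minimal decoding sketch (the helper name is hypothetical, not part of the spec):

```python
import math

# Hypothetical helper mapping a JSON-decoded fill_value back to a float,
# per the special-value table above; any other value passes through unchanged.
_SPECIAL = {"NaN": math.nan, "Infinity": math.inf, "-Infinity": -math.inf}

def decode_fill_value(value):
    if isinstance(value, str) and value in _SPECIAL:
        return _SPECIAL[value]
    return value

assert math.isnan(decode_fill_value("NaN"))
assert decode_fill_value("Infinity") == math.inf
assert decode_fill_value(0.0) == 0.0
```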
@@ -184,7 +185,12 @@ contents of any chunk region falling outside the array are undefined.
Filters
~~~~~~~

- TODO
+ Optionally a sequence of one or more filters can be used to transform chunk
+ data prior to compression. When storing data, filters are applied in the order
+ specified in array metadata to encode data, then the encoded data are passed to
+ the primary compressor. When retrieving data, stored chunk data are
+ decompressed by the primary compressor then decoded using filters in the
+ reverse order.

Hierarchies
-----------
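The encode/decode ordering introduced in the hunk above can be sketched as follows. Here a toy XOR filter and zlib stand in for real filters and the primary compressor; none of these names come from the spec:

```python
import zlib

def encode_chunk(raw, filters):
    # apply filters in the order given in array metadata...
    for f in filters:
        raw = f.encode(raw)
    # ...then hand the filtered bytes to the primary compressor
    return zlib.compress(raw)

def decode_chunk(stored, filters):
    # decompress first, then undo the filters in reverse order
    raw = zlib.decompress(stored)
    for f in reversed(filters):
        raw = f.decode(raw)
    return raw

class XorFilter:
    # toy reversible filter for illustration only, not a spec-defined filter
    def encode(self, data):
        return bytes(b ^ 0xFF for b in data)
    decode = encode  # XOR is its own inverse

chunk = b"example chunk data"
stored = encode_chunk(chunk, [XorFilter()])
assert decode_chunk(stored, [XorFilter()]) == chunk
```

Because decoding exactly mirrors encoding, the round trip is lossless regardless of how many filters are configured, provided each filter's decode inverts its encode.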
@@ -463,7 +469,8 @@ Changes in version 2
* Added support for storing multiple arrays in the same store and organising
arrays into hierarchies using groups.
* Array metadata is now stored under the ".zarray" key instead of the "meta"
- key
+ key.
* Custom attributes are now stored under the ".zattrs" key instead of the
- "attrs" key
- * TODO filters
+ "attrs" key.
+ * Added support for filters.
+ * Changed encoding of "fill_value" field within array metadata.