@@ -863,13 +863,13 @@ Consolidating metadata
 
 Since there is a significant overhead for every connection to a cloud object
 store such as S3, the pattern described in the previous section may incur
-significant latency while scanning the metadata of the dataset hierarchy, even
+significant latency while scanning the metadata of the array hierarchy, even
 though each individual metadata object is small. For cases such as these, once
 the data are static and can be regarded as read-only, at least for the
-metadata/structure of the dataset hierarchy, the many metadata objects can be
+metadata/structure of the array hierarchy, the many metadata objects can be
 consolidated into a single one via
 :func:`zarr.convenience.consolidate_metadata`. Doing this can greatly increase
-the speed of reading the dataset metadata, e.g.::
+the speed of reading the array metadata, e.g.::
 
     >>> zarr.consolidate_metadata(store)  # doctest: +SKIP
 
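The consolidation step the hunk above describes can be pictured with a small stdlib-only sketch. This is schematic, not zarr's implementation (the real work is done by `zarr.convenience.consolidate_metadata`); it assumes the zarr v2 layout, where every group and array has its own small JSON metadata object and consolidation writes a single `.zmetadata` object:

```python
import json

# Toy key-value "store": each group/array carries its own small metadata
# object, mimicking the many-small-objects layout that is slow on S3.
store = {
    '.zgroup': '{"zarr_format": 2}',
    'foo/.zgroup': '{"zarr_format": 2}',
    'foo/bar/.zarray': '{"zarr_format": 2, "shape": [100], "dtype": "<i4"}',
}

def consolidate_metadata(store):
    # One pass over the store, then a single consolidated object:
    # afterwards a reader needs one GET instead of one per metadata key.
    meta = {key: json.loads(value) for key, value in store.items()
            if key.rsplit('/', 1)[-1] in ('.zgroup', '.zarray', '.zattrs')}
    store['.zmetadata'] = json.dumps(
        {'zarr_consolidated_format': 1, 'metadata': meta})

consolidate_metadata(store)
assert '.zmetadata' in store
assert 'foo/bar/.zarray' in json.loads(store['.zmetadata'])['metadata']
```

The point of the single object is also why it can go stale, as the next hunk notes: writers that bypass it keep updating the per-key metadata, not the consolidated copy.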
@@ -886,7 +886,7 @@ backend storage.
 
 Note that, the hierarchy could still be opened in the normal way and altered,
 causing the consolidated metadata to become out of sync with the real state of
-the dataset hierarchy. In this case,
+the array hierarchy. In this case,
 :func:`zarr.convenience.consolidate_metadata` would need to be called again.
 
 To protect against consolidated metadata accidentally getting out of sync, the
@@ -930,8 +930,8 @@ copying a group named 'foo' from an HDF5 file to a Zarr group::
     └── baz (100,) int64
     >>> source.close()
 
-If rather than copying a single group or dataset you would like to copy all
-groups and datasets, use :func:`zarr.convenience.copy_all`, e.g.::
+If rather than copying a single group or array you would like to copy all
+groups and arrays, use :func:`zarr.convenience.copy_all`, e.g.::
 
     >>> source = h5py.File('data/example.h5', mode='r')
     >>> dest = zarr.open_group('data/example2.zarr', mode='w')
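The copy-everything behavior referred to above is just a recursive walk of the hierarchy. The following stdlib-only sketch shows the idea without requiring h5py or zarr; nested dicts stand in for groups and lists stand in for array data, and the hypothetical `copy_all` helper here is an illustration, not the library's `zarr.convenience.copy_all`:

```python
# Schematic sketch of a copy_all-style helper: walk the source hierarchy
# and recreate every group and array under the destination.
def copy_all(source, dest):
    copied_arrays = copied_groups = 0
    for name, node in source.items():
        if isinstance(node, dict):          # a sub-group: recurse into it
            dest[name] = {}
            a, g = copy_all(node, dest[name])
            copied_arrays += a
            copied_groups += g + 1
        else:                               # an "array": copy the data
            dest[name] = list(node)
            copied_arrays += 1
    return copied_arrays, copied_groups

source = {'foo': {'bar': {'baz': list(range(100))}}}
dest = {}
n_arrays, n_groups = copy_all(source, dest)
assert dest == source
assert (n_arrays, n_groups) == (1, 2)
```

The real function additionally carries over attributes and chunked data between different storage backends, but the traversal shape is the same.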
@@ -1004,7 +1004,7 @@ String arrays
 There are several options for storing arrays of strings.
 
 If your strings are all ASCII strings, and you know the maximum length of the string in
-your dataset, then you can use an array with a fixed-length bytes dtype. E.g.::
+your array, then you can use an array with a fixed-length bytes dtype. E.g.::
 
     >>> z = zarr.zeros(10, dtype='S6')
     >>> z
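Since zarr arrays use NumPy dtypes, the fixed-length bytes behavior in the hunk above can be demonstrated with NumPy alone; a minimal sketch:

```python
import numpy as np

# Fixed-length bytes dtype: each element holds at most 6 ASCII bytes,
# stored inline in the array buffer (no variable-length objects needed).
z = np.zeros(10, dtype='S6')
z[0] = b'Hello'
z[1] = b'world!'
z[2] = b'truncated'   # longer input is silently cut down to 6 bytes

assert z.dtype == np.dtype('S6')
assert z[0] == b'Hello'
assert z[2] == b'trunca'
```

The silent truncation is the main caveat of this option, which is why the surrounding section goes on to discuss alternatives for strings whose maximum length is unknown.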