diff --git a/README.md b/README.md
index ccb1f82..70c2f3e 100644
--- a/README.md
+++ b/README.md
@@ -13,6 +13,10 @@
 of options for reading and writing 3D imaging data. The final write-up can be
 found at [https://heftieproject.github.io/zarr-benchmarks/](https://heftieproject.github.io/zarr-benchmarks/).
 
+This write-up is based on the results in the `/example_results` directory. To
+re-create all plots locally (inside `data/plots`), follow the installation
+instructions below, then run: `python src/zarr_benchmarks/create_plots.py --example_results`.
+
 ## Other related work
 
 - [`zarr-developers/zarr-benchmark`](https://github.com/zarr-developers/zarr-benchmark)
diff --git a/docs/index.md b/docs/index.md
index 02a40bb..21f5afc 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -30,7 +30,7 @@ These benchmarks are part of the wider
 increase compression ratio:
 
 - image data, setting it to `"shuffle"`
 - sparse labels, setting it to `"bitshuffle"`
-- dense labels not setting `shuffle` at all
+- dense labels, not setting `shuffle` at all
 
 ## Configuration
@@ -142,9 +142,9 @@ tensorstore library).
 
 ![Shuffle vs write time with a shorter write time for shuffle than for no shuffle](assets/shuffle_write.png)
 
 Setting the _shuffle_ configuration to "shuffle" increases the compression ratio
-for imagaing data from ~1.5 to ~1.9, and does not substatially change the read
-or write times. We found that different shuffle options have different outcomes
-for different types of data however.
+for imaging data from ~1.5 to ~1.9, and does not substantially change the read
+or write times. We found, however, that different shuffle options have
+different outcomes for different types of data.
 
 ### Chunk size