While experimenting with #18 I found the following while testing an unpack of a pure random file.
1. baseline: 558.67M/s
2. slab verification removed while reading each slab (blake 64): 941.72M/s
3. compression of the data slab and slab verification both removed: 1.42G/s
I think we can approach the performance of option 2 while maintaining data integrity by using a faster checksum, such as a CRC. As for the compressed data slab, maybe we could apply compression on a slab-by-slab basis within the slab file, either by keeping some kind of running entropy calculation or by compressing each slab and discarding the compressed copy when it offers very little space savings. In my case the data wasn't compressible, so this was the worst-case scenario.
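The entropy idea could look something like the sketch below: estimate the Shannon entropy of each slab and skip compression when it sits near the 8 bits/byte that random data approaches. All names (`shannon_entropy`, `should_compress`) and the 7.5 threshold are illustrative assumptions, not anything from the codebase.

```rust
// Sketch of an entropy heuristic for slab-by-slab compression decisions.
// Function names and the threshold are hypothetical, not from the codebase.

/// Shannon entropy of the slab in bits per byte (range 0.0..=8.0).
fn shannon_entropy(data: &[u8]) -> f64 {
    let mut counts = [0u64; 256];
    for &b in data {
        counts[b as usize] += 1;
    }
    let len = data.len() as f64;
    counts
        .iter()
        .filter(|&&c| c > 0)
        .map(|&c| {
            let p = c as f64 / len;
            -p * p.log2()
        })
        .sum()
}

/// Only compress slabs whose entropy leaves room for savings; random
/// data sits near 8 bits/byte and is not worth compressing.
fn should_compress(slab: &[u8]) -> bool {
    shannon_entropy(slab) < 7.5 // hypothetical cutoff
}

fn main() {
    let text_like = vec![b'a'; 4096];
    let random_like: Vec<u8> = (0..=255u8).cycle().take(4096).collect();
    println!("text-like: {}", should_compress(&text_like));     // true
    println!("random-like: {}", should_compress(&random_like)); // false
}
```

The histogram pass is a single cheap scan per slab, so the heuristic costs far less than an attempted compression that gets thrown away.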
The other possibility would be to use a decompression thread pool for reads, like the compression pool we already use for writes.
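A read-side pool could be sketched with nothing but std primitives: workers pull slab jobs from a shared queue and send results back over a channel, with the actual decompression stubbed out here by a trivial transform. This is only a shape sketch under those assumptions, not the project's actual read path.

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical worker pool processing slab reads in parallel, mirroring
// the write-side pool. The XOR transform stands in for per-slab
// decompression; results are reordered by slab index before returning.
fn process_slabs(slabs: Vec<Vec<u8>>, workers: usize) -> Vec<Vec<u8>> {
    let jobs: Vec<(usize, Vec<u8>)> = slabs.into_iter().enumerate().collect();
    let jobs = Arc::new(Mutex::new(jobs));
    let (tx, rx) = mpsc::channel();

    let mut handles = Vec::new();
    for _ in 0..workers {
        let jobs = Arc::clone(&jobs);
        let tx = tx.clone();
        handles.push(thread::spawn(move || loop {
            // Pop one job; the mutex guard drops at the end of this statement.
            let job = jobs.lock().unwrap().pop();
            match job {
                Some((idx, slab)) => {
                    // Stand-in for real per-slab decompression work.
                    let out: Vec<u8> = slab.iter().map(|b| b ^ 0xff).collect();
                    tx.send((idx, out)).unwrap();
                }
                None => break, // queue drained, worker exits
            }
        }));
    }
    drop(tx); // close the channel once only worker clones remain

    let mut results: Vec<(usize, Vec<u8>)> = rx.into_iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    // Workers finish out of order; restore the original slab order.
    results.sort_by_key(|(i, _)| *i);
    results.into_iter().map(|(_, v)| v).collect()
}

fn main() {
    let out = process_slabs(vec![vec![0u8, 1], vec![255u8]], 2);
    println!("{:?}", out); // [[255, 254], [0]]
}
```

In a real read path the reorder step matters: slabs finish decompressing out of order, so the consumer either buffers until the next expected index arrives or writes at slab-sized offsets directly.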