- Useful table: https://docs.google.com/spreadsheets/d/1tf4qx1aMJp8Lo_R6gpT689wTjHv6CGVElrPqTA0w_ZY/edit?pli=1#gid=2126998674
- I've created an issue to document this better: openzfs/openzfs-docs#263
- I have an 8-disk raidz2 pool with `ashift=13`. I created a bunch of datasets with different `recordsize` values and wrote 64 MiB of data split into files of various sizes. I checked the `used` property of each file system and captured the values in the table below. For every combination of `recordsize` and the pattern of `number_of_files` and `file_size` the test was the same (a sketch of the kind of loop is shown after this post). I am having trouble making sense of the values. Why is there such a high overhead for writing (small) files, in particular for small `recordsize`?
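As a back-of-the-envelope illustration of where such overhead can come from (my own reasoning under stated assumptions, not an answer confirmed in this thread): with `ashift=13` every allocation happens in 8 KiB sectors, raidz2 stores two parity sectors per logical block, and raidz additionally pads each allocation to a multiple of parity + 1 = 3 sectors. For a single 4 KiB file, that gives

$$
\underbrace{1}_{\text{data (one 8 KiB sector)}} + \underbrace{2}_{\text{parity}} = 3\ \text{sectors} = 24\ \text{KiB allocated}
\quad\Rightarrow\quad
\frac{24\ \text{KiB}}{4\ \text{KiB}} = 6\times\ \text{overhead}.
$$

This is presumably the same effect that the spreadsheet linked in the first comment tabulates per `recordsize`/`ashift` combination, and it is most pronounced for small files and small `recordsize`, where parity and padding dominate the payload.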