What is a good value for zfs_arc_max? #14064
Answered by IvanVolosyuk
victornoel asked this question in Q&A
-
You can probably estimate your working set and set zfs_arc_max accordingly.
I'm using 64G of RAM: 16G for VMs (explicit hugepages), a 64G zram device (with my own scripts
to spill compressed pages beyond 4G to disk), and ARC at 16G min / 32G max.
My working set is usually ~20G, but when I compile stuff it can go above 40G.
I set zfs_arc_min to 16G to force usage of zram and not break the in-memory L2ARC
headers + MFU.
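For reference, a minimal sketch of the equivalent runtime settings, assuming the standard OpenZFS module parameters under /sys/module/zfs/parameters (values are in bytes; changes do not persist across reboots):
```
# 16 GiB minimum, 32 GiB maximum ARC size
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_min
echo 34359738432 > /sys/module/zfs/parameters/zfs_arc_max
```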
ZFS will actually shrink the ARC when the system needs more memory, so I also use:
```
echo 50 > /sys/module/zfs/parameters/zfs_arc_pc_percent
```
in order to apply some memory pressure to the rest of the system and make more active use of zram.
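To make these settings survive a reboot, a sketch using a modprobe options file (the filename is only an example; any .conf under /etc/modprobe.d/ works, and the initramfs may need regenerating if the zfs module is loaded from it):
```
# /etc/modprobe.d/zfs.conf -- example location; values are in bytes
options zfs zfs_arc_min=17179869184 zfs_arc_max=34359738432 zfs_arc_pc_percent=50
```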
Answer selected by victornoel
-
I'm using a computer with 32G of memory, and I think the default value for zfs_arc_max (half of physical memory, if I understood correctly) is a bit over the top for that much memory, especially because I run processes that can potentially need a lot of memory.
Basically, my hypothesis is that the best value for zfs_arc_max should depend on my ZFS installation (number of pools, datasets, size of disks, etc.) rather than be a function of the size of my memory.
I couldn't find any recommended "good" values for a functioning system anywhere. Ideally, I would like to know a minimum good value, perhaps depending on the kind of ZFS layout I have.
In my case, for example, I have one pool with one ~1TB disk and 2 datasets, one for root and one for home, with encryption, for typical dev desktop usage.
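For context, this is roughly how the current ARC size can be compared against its limits (a sketch, assuming the standard /proc/spl/kstat/zfs/arcstats interface and the arc_summary tool shipped with OpenZFS):
```
# Print the current ARC size and its configured min/max, in bytes
awk '$1 == "size" || $1 == "c_min" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Or a human-readable summary, if the ZFS utilities are installed
arc_summary | head -n 40
```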