Replies: 2 comments 1 reply
I used to have a similar setup. The issue is that the txg is written to all
disks: even if your data datasets are completely idle and all the activity
happens on datasets that live fully on SSDs, you will still have metadata
updated on your HDDs. That can happen every 5 seconds with the default txg
timeout, and even more often during SSD activity. I ended up with a dedicated
SSD pool and replicating the data to a dedicated HDD pool with a bit of L2ARC.
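For reference, that 5-second interval comes from the `zfs_txg_timeout` module parameter on Linux. A minimal sketch of inspecting and raising it (note this only spaces the periodic metadata writes out; it does not eliminate them):

```shell
# Show the current txg timeout, in seconds (default: 5)
cat /sys/module/zfs/parameters/zfs_txg_timeout

# Temporarily raise it to 60s; writes to every top-level vdev
# become less frequent, but still happen whenever a txg syncs
echo 60 > /sys/module/zfs/parameters/zfs_txg_timeout

# To persist across reboots, add to /etc/modprobe.d/zfs.conf:
#   options zfs zfs_txg_timeout=60
```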
…On Sat, Feb 28, 2026, 09:43 Pesc0 ***@***.***> wrote:
Hi all, I am trying to set up my Proxmox system with a hybrid NVMe/HDD
pool to achieve what is, in my opinion, the best configuration for the
hardware I have. All vdevs are mirror pairs: 2 NVMes as the special vdev,
the rest HDDs.
The idea is to set special_small_blocks to a large value on the datasets
that I want fully on fast storage, and to leave it unset or tuned down for
other datasets, such as data archives, so they make use of the HDDs.
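As a concrete sketch of that approach (the dataset names mirror the output below; with a 1M setting, every block up to the recordsize is routed to the special vdev):

```shell
# Route all blocks (up to recordsize) of a "fast" dataset to the special vdev
zfs set special_small_blocks=1M rpool/data

# Leave archive datasets at the default (0) so their data stays on the HDDs
zfs inherit special_small_blocks rpool/hdd-archive
```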
In this test setup I am almost there:
***@***.***:~# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 2.04T 193G 1.85T - - 24% 9% 1.00x ONLINE -
ata-WDC_WD2500AAJS-00VTA0_WD-WMART1987430 233G 179G 52.9G - - 24% 77.2% - ONLINE
special - - - - - - - - - -
nvme-eui.0025385931b08034-part3 1.82T 13.7G 1.80T - - 0% 0.73% - ONLINE
***@***.***:~# zfs get special_small_blocks
NAME PROPERTY VALUE SOURCE
rpool special_small_blocks 0 default
rpool/ROOT special_small_blocks 1M received
rpool/ROOT/pve-1 special_small_blocks 1M inherited from rpool/ROOT
rpool/data special_small_blocks 1M received
rpool/data/vm-101-disk-0 special_small_blocks 1M inherited from rpool/data
rpool/data/vm-101-disk-1 special_small_blocks 1M inherited from rpool/data
rpool/hdd-archive special_small_blocks 0 default
rpool/hdd-archive/... special_small_blocks 0 default
rpool/var-lib-vz special_small_blocks 1M local
And thanks to ZFS 2.4.0, the VM zvols can be sent to the special vdev as
well. I have already made sure to send | recv the affected datasets to
transfer their blocks to the special device.
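For anyone following along, that migration step looks roughly like this (snapshot and target names here are hypothetical; special_small_blocks only applies to newly written blocks, so existing data has to be rewritten):

```shell
# Existing blocks stay where they are; a send | recv rewrites them
# under the special_small_blocks setting of the receiving dataset
zfs snapshot rpool/data/vm-101-disk-0@migrate
zfs send rpool/data/vm-101-disk-0@migrate | \
    zfs recv rpool/data/vm-101-disk-0-new
```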
Everything seems fine, from performance to disk activity: no errors in the
logs, and scrub is OK.
Now the issue: hdd-archive is accessed infrequently, so I would like the
disks to spin down. Yes, I am aware of the downsides, but I think it makes
sense in my situation. It might not in the future if requirements change,
and I can always disable it then, but that's not the point.
The disks refuse to sleep, because transactions every 5 s (the default) keep them awake.
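(For context, the spin-down itself is typically configured with something like the following; the timer value is illustrative, and any periodic write, including the txg sync, resets it:)

```shell
# Ask the drive to spin down after ~10 minutes of idle time
# (-S takes units of 5 seconds for values 1-240, so 120 = 600s)
hdparm -S 120 /dev/sdb
```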
I captured a log (not sure how useful it is); sdb is the hard drive:
***@***.***:~# blktrace -d /dev/sdb -o - | blkparse -i -
8,16 0 1 0.000000000 454 Q FW [txg_sync]
8,16 0 2 0.000001410 454 G FW [txg_sync]
8,16 0 3 0.000007600 175 D FN [kworker/0:1H]
8,16 1 1 1266874889.708846957 383 A W 2512 + 8 <- (8,17) 464
8,16 1 2 1266874889.708847267 383 Q W 2512 + 8 [z_null_iss]
8,16 1 3 1266874889.708851127 383 G W 2512 + 8 [z_null_iss]
8,16 1 4 1266874889.708852417 383 P N [z_null_iss]
8,16 1 5 1266874889.708852897 383 U N [z_null_iss] 1
8,16 1 6 1266874889.708853637 383 I W 2512 + 8 [z_null_iss]
8,16 1 7 1266874889.708859447 383 D W 2512 + 8 [z_null_iss]
8,16 1 8 1266874889.708872187 383 A W 3024 + 8 <- (8,17) 976
8,16 1 9 1266874889.708872387 383 Q W 3024 + 8 [z_null_iss]
8,16 1 10 1266874889.708873307 383 G W 3024 + 8 [z_null_iss]
8,16 1 11 1266874889.708873577 383 P N [z_null_iss]
8,16 1 12 1266874889.708873837 383 U N [z_null_iss] 1
8,16 1 13 1266874889.708874147 383 I W 3024 + 8 [z_null_iss]
8,16 1 14 1266874889.708875707 383 D W 3024 + 8 [z_null_iss]
8,16 1 15 1266874889.708884527 383 A W 488379856 + 8 <- (8,17) 488377808
8,16 1 16 1266874889.708884727 383 Q W 488379856 + 8 [z_null_iss]
8,16 1 17 1266874889.708885587 383 G W 488379856 + 8 [z_null_iss]
8,16 1 18 1266874889.708885827 383 P N [z_null_iss]
8,16 1 19 1266874889.708886067 383 U N [z_null_iss] 1
8,16 1 20 1266874889.708886357 383 I W 488379856 + 8 [z_null_iss]
8,16 1 21 1266874889.708887717 383 D W 488379856 + 8 [z_null_iss]
8,16 1 22 1266874889.708896457 383 A W 488380368 + 8 <- (8,17) 488378320
8,16 1 23 1266874889.708896657 383 Q W 488380368 + 8 [z_null_iss]
8,16 1 24 1266874889.708897467 383 G W 488380368 + 8 [z_null_iss]
8,16 1 25 1266874889.708897697 383 P N [z_null_iss]
8,16 1 26 1266874889.708897937 383 U N [z_null_iss] 1
8,16 1 27 1266874889.708898217 383 I W 488380368 + 8 [z_null_iss]
8,16 1 28 1266874889.708899487 383 D W 488380368 + 8 [z_null_iss]
8,16 2 1 1266874889.709101470 0 C W 2512 + 8 [0]
8,16 2 2 1266874889.709247122 0 C W 3024 + 8 [0]
8,16 2 3 1266874889.709367443 0 C W 488379856 + 8 [0]
8,16 2 4 1266874889.709520255 0 C W 488380368 + 8 [0]
8,16 2 5 0.054509352 0 C FN 0 [0]
8,16 2 6 0.054517462 0 C WS 0 [0]
8,16 2 7 5.120694575 0 C W 2520 + 8 [0]
8,16 2 8 5.120840817 0 C W 3032 + 8 [0]
8,16 2 9 5.120991129 0 C W 488379864 + 8 [0]
8,16 2 10 5.121112901 0 C W 488380376 + 8 [0]
8,16 2 11 5.165692154 0 C FN 0 [0]
8,16 2 12 5.165693324 0 C WS 0 [0]
8,16 1 29 5.120457542 383 A W 2520 + 8 <- (8,17) 472
8,16 1 30 5.120457902 383 Q W 2520 + 8 [z_null_iss]
8,16 1 31 5.120461662 383 G W 2520 + 8 [z_null_iss]
8,16 1 32 5.120462942 383 P N [z_null_iss]
8,16 1 33 5.120463392 383 U N [z_null_iss] 1
8,16 1 34 5.120464052 383 I W 2520 + 8 [z_null_iss]
8,16 1 35 5.120469742 383 D W 2520 + 8 [z_null_iss]
8,16 1 36 5.120482323 383 A W 3032 + 8 <- (8,17) 984
8,16 1 37 5.120482513 383 Q W 3032 + 8 [z_null_iss]
8,16 1 38 5.120483423 383 G W 3032 + 8 [z_null_iss]
8,16 1 39 5.120483703 383 P N [z_null_iss]
8,16 1 40 5.120483953 383 U N [z_null_iss] 1
8,16 1 41 5.120484263 383 I W 3032 + 8 [z_null_iss]
8,16 1 42 5.120485673 383 D W 3032 + 8 [z_null_iss]
8,16 1 43 5.120494723 383 A W 488379864 + 8 <- (8,17) 488377816
8,16 1 44 5.120494913 383 Q W 488379864 + 8 [z_null_iss]
8,16 1 45 5.120495803 383 G W 488379864 + 8 [z_null_iss]
8,16 1 46 5.120496053 383 P N [z_null_iss]
8,16 1 47 5.120496283 383 U N [z_null_iss] 1
8,16 1 48 5.120496593 383 I W 488379864 + 8 [z_null_iss]
8,16 1 49 5.120497803 383 D W 488379864 + 8 [z_null_iss]
8,16 1 50 5.120506323 383 A W 488380376 + 8 <- (8,17) 488378328
8,16 1 51 5.120506523 383 Q W 488380376 + 8 [z_null_iss]
8,16 1 52 5.120507593 383 G W 488380376 + 8 [z_null_iss]
8,16 1 53 5.120507823 383 P N [z_null_iss]
8,16 1 54 5.120508063 383 U N [z_null_iss] 1
8,16 1 55 5.120508363 383 I W 488380376 + 8 [z_null_iss]
8,16 1 56 5.120509533 383 D W 488380376 + 8 [z_null_iss]
8,16 0 4 5.121143631 454 Q FW [txg_sync]
8,16 0 5 5.121144941 454 G FW [txg_sync]
8,16 0 6 5.121150871 175 D FN [kworker/0:1H]
8,16 2 13 10.239666310 0 C W 2528 + 8 [0]
8,16 2 14 10.239782482 0 C W 3040 + 8 [0]
8,16 2 15 10.239902813 0 C W 488379872 + 8 [0]
8,16 2 16 10.240023875 0 C W 488380384 + 8 [0]
8,16 0 7 10.240055275 454 Q FW [txg_sync]
8,16 0 8 10.240056605 454 G FW [txg_sync]
8,16 0 9 10.240062645 175 D FN [kworker/0:1H]
8,16 1 57 10.239433957 383 A W 2528 + 8 <- (8,17) 480
8,16 1 58 10.239434587 383 Q W 2528 + 8 [z_null_iss]
8,16 1 59 10.239438397 383 G W 2528 + 8 [z_null_iss]
8,16 1 60 10.239439547 383 P N [z_null_iss]
8,16 1 61 10.239440007 383 U N [z_null_iss] 1
8,16 1 62 10.239440637 383 I W 2528 + 8 [z_null_iss]
8,16 1 63 10.239446617 383 D W 2528 + 8 [z_null_iss]
8,16 1 64 10.239459088 383 A W 3040 + 8 <- (8,17) 992
8,16 1 65 10.239459268 383 Q W 3040 + 8 [z_null_iss]
8,16 1 66 10.239460218 383 G W 3040 + 8 [z_null_iss]
8,16 1 67 10.239460508 383 P N [z_null_iss]
8,16 1 68 10.239460768 383 U N [z_null_iss] 1
8,16 1 69 10.239461078 383 I W 3040 + 8 [z_null_iss]
8,16 1 70 10.239462578 383 D W 3040 + 8 [z_null_iss]
8,16 1 71 10.239471188 383 A W 488379872 + 8 <- (8,17) 488377824
8,16 1 72 10.239471378 383 Q W 488379872 + 8 [z_null_iss]
8,16 1 73 10.239472198 383 G W 488379872 + 8 [z_null_iss]
8,16 1 74 10.239472438 383 P N [z_null_iss]
8,16 1 75 10.239472678 383 U N [z_null_iss] 1
8,16 1 76 10.239472968 383 I W 488379872 + 8 [z_null_iss]
8,16 1 77 10.239474178 383 D W 488379872 + 8 [z_null_iss]
8,16 1 78 10.239482768 383 A W 488380384 + 8 <- (8,17) 488378336
8,16 1 79 10.239483008 383 Q W 488380384 + 8 [z_null_iss]
8,16 1 80 10.239483848 383 G W 488380384 + 8 [z_null_iss]
8,16 1 81 10.239484078 383 P N [z_null_iss]
8,16 1 82 10.239484318 383 U N [z_null_iss] 1
8,16 1 83 10.239484608 383 I W 488380384 + 8 [z_null_iss]
8,16 1 84 10.239485798 383 D W 488380384 + 8 [z_null_iss]
8,16 2 17 10.287764359 0 C FN 0 [0]
8,16 2 18 10.287766329 0 C WS 0 [0]
8,16 2 19 15.351393070 0 C W 2536 + 8 [0]
8,16 2 20 15.351538742 0 C W 3048 + 8 [0]
8,16 2 21 15.351659153 0 C W 488379880 + 8 [0]
8,16 2 22 15.351811355 0 C W 488380392 + 8 [0]
8,16 0 10 15.351826355 454 Q FW [txg_sync]
8,16 0 11 15.351828115 454 G FW [txg_sync]
8,16 0 12 15.351835255 175 D FN [kworker/0:1H]
8,16 1 85 15.351149407 383 A W 2536 + 8 <- (8,17) 488
8,16 1 86 15.351149837 383 Q W 2536 + 8 [z_null_iss]
8,16 1 87 15.351154307 383 G W 2536 + 8 [z_null_iss]
8,16 1 88 15.351155507 383 P N [z_null_iss]
8,16 1 89 15.351156027 383 U N [z_null_iss] 1
8,16 1 90 15.351156667 383 I W 2536 + 8 [z_null_iss]
8,16 1 91 15.351162847 383 D W 2536 + 8 [z_null_iss]
8,16 1 92 15.351176087 383 A W 3048 + 8 <- (8,17) 1000
8,16 1 93 15.351176287 383 Q W 3048 + 8 [z_null_iss]
8,16 1 94 15.351177227 383 G W 3048 + 8 [z_null_iss]
8,16 1 95 <https://www.google.com/maps/search/16+++1+++++++95?entry=gmail&source=g> 15.351177497 383 P N [z_null_iss]
8,16 1 96 15.351177757 383 U N [z_null_iss] 1
8,16 1 97 15.351178067 383 I W 3048 + 8 [z_null_iss]
8,16 1 98 15.351179567 383 D W 3048 + 8 [z_null_iss]
8,16 1 99 15.351188567 383 A W 488379880 + 8 <- (8,17) 488377832
8,16 1 100 15.351188757 383 Q W 488379880 + 8 [z_null_iss]
8,16 1 101 15.351189587 383 G W 488379880 + 8 [z_null_iss]
8,16 1 102 15.351189827 383 P N [z_null_iss]
8,16 1 103 15.351190067 383 U N [z_null_iss] 1
8,16 1 104 15.351190357 383 I W 488379880 + 8 [z_null_iss]
8,16 1 105 15.351191707 383 D W 488379880 + 8 [z_null_iss]
8,16 1 106 15.351200367 383 A W 488380392 + 8 <- (8,17) 488378344
8,16 1 107 15.351200547 383 Q W 488380392 + 8 [z_null_iss]
8,16 1 108 15.351201687 383 G W 488380392 + 8 [z_null_iss]
8,16 1 109 15.351201927 383 P N [z_null_iss]
8,16 1 110 15.351202167 383 U N [z_null_iss] 1
8,16 1 111 15.351202447 383 I W 488380392 + 8 [z_null_iss]
8,16 1 112 15.351203747 383 D W 488380392 + 8 [z_null_iss]
8,16 2 23 15.398942892 0 C FN 0 [0]
8,16 2 24 15.398944982 0 C WS 0 [0]
8,16 0 13 20.469450211 454 Q FW [txg_sync]
8,16 0 14 20.469451661 454 G FW [txg_sync]
8,16 0 15 20.469457351 175 D FN [kworker/0:1H]
8,16 2 25 20.469005256 0 C W 2544 + 8 [0]
8,16 2 26 20.469150777 0 C W 3056 + 8 [0]
8,16 2 27 20.469271649 0 C W 488379888 + 8 [0]
8,16 2 28 20.469423131 0 C W 488380400 + 8 [0]
8,16 1 113 20.468753102 383 A W 2544 + 8 <- (8,17) 496
8,16 1 114 20.468753472 383 Q W 2544 + 8 [z_null_iss]
8,16 1 115 20.468757342 383 G W 2544 + 8 [z_null_iss]
8,16 1 116 20.468758532 383 P N [z_null_iss]
8,16 1 117 20.468758992 383 U N [z_null_iss] 1
8,16 1 118 20.468759932 383 I W 2544 + 8 [z_null_iss]
8,16 1 119 20.468765562 383 D W 2544 + 8 [z_null_iss]
8,16 1 120 20.468778003 383 A W 3056 + 8 <- (8,17) 1008
8,16 1 121 20.468778193 383 Q W 3056 + 8 [z_null_iss]
8,16 1 122 20.468779093 383 G W 3056 + 8 [z_null_iss]
8,16 1 123 20.468779353 383 P N [z_null_iss]
8,16 1 124 20.468779603 383 U N [z_null_iss] 1
8,16 1 125 20.468779913 383 I W 3056 + 8 [z_null_iss]
8,16 1 126 20.468781353 383 D W 3056 + 8 [z_null_iss]
8,16 1 127 20.468790053 383 A W 488379888 + 8 <- (8,17) 488377840
8,16 1 128 20.468790233 383 Q W 488379888 + 8 [z_null_iss]
8,16 1 129 20.468791063 383 G W 488379888 + 8 [z_null_iss]
8,16 1 130 20.468791293 383 P N [z_null_iss]
8,16 1 131 20.468791563 383 U N [z_null_iss] 1
8,16 1 132 20.468791853 383 I W 488379888 + 8 [z_null_iss]
8,16 1 133 20.468793043 383 D W 488379888 + 8 [z_null_iss]
8,16 1 134 20.468801773 383 A W 488380400 + 8 <- (8,17) 488378352
8,16 1 135 20.468801953 383 Q W 488380400 + 8 [z_null_iss]
8,16 1 136 20.468802773 383 G W 488380400 + 8 [z_null_iss]
8,16 1 137 20.468803013 383 P N [z_null_iss]
8,16 1 138 20.468803243 383 U N [z_null_iss] 1
8,16 1 139 20.468803533 383 I W 488380400 + 8 [z_null_iss]
8,16 1 140 20.468804853 383 D W 488380400 + 8 [z_null_iss]
8,16 2 29 20.515588725 0 C FN 0 [0]
8,16 2 30 20.515590645 0 C WS 0 [0]
8,16 2 31 25.592446096 0 C W 2552 + 8 [0]
8,16 2 32 25.592592018 0 C W 3064 + 8 [0]
8,16 2 33 25.592712650 0 C W 488379896 + 8 [0]
8,16 2 34 25.592865232 0 C W 488380408 + 8 [0]
8,16 0 16 25.592880472 454 Q FW [txg_sync]
8,16 0 17 25.592882342 454 G FW [txg_sync]
8,16 0 18 25.592889602 175 D FN [kworker/0:1H]
8,16 1 141 25.592196333 383 A W 2552 + 8 <- (8,17) 504
8,16 1 142 25.592196703 383 Q W 2552 + 8 [z_null_iss]
8,16 1 143 25.592200793 383 G W 2552 + 8 [z_null_iss]
8,16 1 144 25.592202173 383 P N [z_null_iss]
8,16 1 145 25.592202653 383 U N [z_null_iss] 1
8,16 1 146 25.592203293 383 I W 2552 + 8 [z_null_iss]
8,16 1 147 25.592208833 383 D W 2552 + 8 [z_null_iss]
8,16 1 148 25.592221773 383 A W 3064 + 8 <- (8,17) 1016
8,16 1 149 25.592221973 383 Q W 3064 + 8 [z_null_iss]
8,16 1 150 25.592222923 383 G W 3064 + 8 [z_null_iss]
8,16 1 151 25.592223223 383 P N [z_null_iss]
8,16 1 152 25.592223483 383 U N [z_null_iss] 1
8,16 1 153 25.592223773 383 I W 3064 + 8 [z_null_iss]
8,16 1 154 25.592225354 383 D W 3064 + 8 [z_null_iss]
8,16 1 155 25.592234424 383 A W 488379896 + 8 <- (8,17) 488377848
8,16 1 156 25.592234624 383 Q W 488379896 + 8 [z_null_iss]
8,16 1 157 25.592235544 383 G W 488379896 + 8 [z_null_iss]
8,16 1 158 25.592235774 383 P N [z_null_iss]
8,16 1 159 25.592236014 383 U N [z_null_iss] 1
8,16 1 160 25.592236314 383 I W 488379896 + 8 [z_null_iss]
8,16 1 161 25.592237654 383 D W 488379896 + 8 [z_null_iss]
8,16 1 162 25.592246394 383 A W 488380408 + 8 <- (8,17) 488378360
8,16 1 163 25.592246584 383 Q W 488380408 + 8 [z_null_iss]
8,16 1 164 25.592247404 383 G W 488380408 + 8 [z_null_iss]
8,16 1 165 25.592247634 383 P N [z_null_iss]
8,16 1 166 25.592247874 383 U N [z_null_iss] 1
8,16 1 167 25.592248164 383 I W 488380408 + 8 [z_null_iss]
8,16 1 168 25.592249334 383 D W 488380408 + 8 [z_null_iss]
8,16 2 35 25.640543376 0 C FN 0 [0]
8,16 2 36 25.640547956 0 C WS 0 [0]
^CCPU0 (8,16):
Reads Queued: 0, 0KiB Writes Queued: 6, 0KiB
Read Dispatches: 6, 0KiB Write Dispatches: 0, 0KiB
Reads Requeued: 0 Writes Requeued: 0
Reads Completed: 0, 0KiB Writes Completed: 0, 0KiB
Read Merges: 0, 0KiB Write Merges: 0, 0KiB
Read depth: 2 Write depth: 4
IO unplugs: 0 Timer unplugs: 0
CPU1 (8,16):
Reads Queued: 0, 0KiB Writes Queued: 24, 96KiB
Read Dispatches: 0, 0KiB Write Dispatches: 24, 96KiB
Reads Requeued: 0 Writes Requeued: 0
Reads Completed: 0, 0KiB Writes Completed: 0, 0KiB
Read Merges: 0, 0KiB Write Merges: 0, 0KiB
Read depth: 2 Write depth: 4
IO unplugs: 24 Timer unplugs: 0
CPU2 (8,16):
Reads Queued: 0, 0KiB Writes Queued: 0, 0KiB
Read Dispatches: 0, 0KiB Write Dispatches: 0, 0KiB
Reads Requeued: 0 Writes Requeued: 0
Reads Completed: 6, 0KiB Writes Completed: 30, 96KiB
Read Merges: 0, 0KiB Write Merges: 0, 0KiB
Read depth: 2 Write depth: 4
IO unplugs: 0 Timer unplugs: 0
Total (8,16):
Reads Queued: 0, 0KiB Writes Queued: 30, 96KiB
Read Dispatches: 6, 0KiB Write Dispatches: 24, 96KiB
Reads Requeued: 0 Writes Requeued: 0
Reads Completed: 6, 0KiB Writes Completed: 30, 96KiB
Read Merges: 0, 0KiB Write Merges: 0, 0KiB
IO unplugs: 24 Timer unplugs: 0
Throughput (R/W): 0KiB/s / 3KiB/s
Events (8,16): 222 entries
Skips: 0 forward (0 - 0.0%)
Trace started at Fri Feb 27 22:51:25 2026
Disk activity is very low, maybe 4k or 8k every 5 s, but it is not zero
(according to zpool iostat). What's the issue here? Is this fixable, or is
that just how ZFS works?
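A quick way to confirm where those periodic writes land, assuming the pool layout above, is per-vdev iostat:

```shell
# Per-vdev I/O statistics, refreshed every 5 seconds; watch the HDD
# row for the small periodic writes that keep the drive awake
zpool iostat -v rpool 5
```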
I thought that with this setup the HDDs could remain idle until data on
them needs to be accessed, but I cannot figure out how to make it work.
There should be no writes to these disks, right?
I am wondering whether this is caused by the fact that the pool was created
without the special dev; it was added later and the datasets were rewritten.
Would a fresh pool fix the issue?
(Small rant: it would be nice if zfs rewrite could act on datasets/zvols
and not only files. Rewriting root and the VM disks did not work for this
reason, and I had to resort to send | recv.)
Thanks to anyone who can help with this :)
Thank you so much for the quick reply. I took that as a challenge and gave it a shot here: #18269 :)