Incremental size way bigger than expected after fstrim #139
-
Version used
Describe the bug
-rw-r--r-- 1 root root 13G Oct 3 01:12 sda.full.data
Expected behavior
Hypervisor information:
Logfiles:
Workaround:
-
The backup operates on block level, and I think fstrim results in changed blocks. QEMU marks these blocks as dirty, so they end up in the backup (they are part of the qcow2 bitmap, i.e. marked "dirty"). I don't know if there is a way to tell QEMU to discard changes done by fstrim, and I don't think I can change anything in virtnbdbackup to behave differently. Skipping blocks marked as dirty is not an option; that results in unusable disks after restore. Maybe the libvirt/QEMU projects have some documentation on that.
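To see which persistent bitmaps QEMU keeps in the qcow2 file on your side, a minimal sketch along these lines can help; it assumes qemu-img is available on the host, the image path is a placeholder, and the bitmaps field only appears on reasonably recent QEMU for images that actually carry persistent bitmaps:

```python
# Sketch: list persistent dirty bitmaps stored in a qcow2 image.
# The image path is a placeholder; -U/--force-share avoids the lock error
# when the image is attached to a running VM.
import json
import subprocess

def list_bitmaps(image_path):
    out = subprocess.run(
        ["qemu-img", "info", "-U", "--output=json", image_path],
        check=True, capture_output=True, text=True,
    ).stdout
    info = json.loads(out)
    # For qcow2, persistent bitmaps (if present) are reported under
    # the format-specific data.
    return info.get("format-specific", {}).get("data", {}).get("bitmaps", [])

for bm in list_bitmaps("/var/lib/libvirt/images/vm1.qcow2"):
    print(bm.get("name"), bm.get("granularity"), bm.get("flags"))
```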
-
Fair enough. I thought that might be the case here; it was just a slight hope that maybe you had encountered this and found a fix for such cases.
-
I don't have a solution (other than timing fstrim to coincide with full backups). Or check the libvirt/QEMU docs for options that might help.
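Timing the two could look roughly like the sketch below: trigger fstrim in the guest through the guest agent right before the full backup, so the blocks dirtied by trimming land in the full rather than inflating the next incremental. The domain name and output path are placeholders, and virsh domfstrim needs the qemu-guest-agent running inside the VM:

```python
# Sketch: run fstrim inside the guest right before a full backup, so the
# blocks dirtied by trimming go into the full backup rather than the next
# incremental. "vm1" and /backup/vm1 are placeholders.
import subprocess

DOMAIN = "vm1"
TARGET = "/backup/vm1"

# Asks the qemu-guest-agent in the VM to trim all mounted filesystems.
subprocess.run(["virsh", "domfstrim", DOMAIN], check=True)

# The full backup starts a fresh bitmap, so the trim-induced dirty blocks
# are not carried over into later incremental backups.
subprocess.run(
    ["virtnbdbackup", "-d", DOMAIN, "-l", "full", "-o", TARGET],
    check=True,
)
```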
-
Other solutions based on dirty bitmaps have the same "issue": https://forum.proxmox.com/threads/huge-dirty-bitmap-after-sunday.110233/ So I'd say it works as designed. Maybe running fstrim on the host makes more sense.
-
This is definitely a bug, and the problem is in the way the software obtains the changed-block bitmap from QEMU. The software MUST query two metacontexts from NBD:

* the dirty bitmap (qemu:dirty-bitmap:<name>)
* base:allocation

In this case each request for block status provides two arrays of extents, one for the CBT and one for the actual block status. The software should request a data block for reading only if it is marked as both non-zero and changed; zeroed changed blocks should not be requested. Voilà. Changed zeroed blocks are legal, and backup software should handle this correctly.
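A minimal sketch of that idea with the libnbd Python bindings, assuming placeholder values for the NBD socket, export and bitmap name; a real tool additionally has to deal with servers returning short extent lists, overlapping chunks and proper error handling:

```python
# Sketch: query both metacontexts and only read ranges that are dirty
# according to the bitmap AND not reported as zero by base:allocation.
# Socket path, export and bitmap name are placeholders.
import nbd

BITMAP = "qemu:dirty-bitmap:backup-sda"            # placeholder bitmap name
URI = "nbd+unix:///sda?socket=/var/tmp/backup.sock"

h = nbd.NBD()
h.add_meta_context("base:allocation")
h.add_meta_context(BITMAP)
h.connect_uri(URI)

extents = {"base:allocation": [], BITMAP: []}

def collect(metacontext, offset, entries, err):
    # entries is a flat list: length0, flags0, length1, flags1, ...
    pos = offset
    for length, flags in zip(entries[0::2], entries[1::2]):
        extents[metacontext].append((pos, pos + length, flags))
        pos += length

# block_status lengths are 32-bit, so query large disks in chunks.
size, offset, chunk = h.get_size(), 0, 1 << 30
while offset < size:
    h.block_status(min(chunk, size - offset), offset, collect)
    offset += chunk

def flagged(ranges, bit):
    """(start, end) of every extent whose flags contain the given bit."""
    return [(s, e) for s, e, flags in ranges if flags & bit]

dirty = flagged(extents[BITMAP], 1)                        # bit 0 = dirty
zero = flagged(extents["base:allocation"], nbd.STATE_ZERO)

def subtract(keep, drop):
    """Remove the 'drop' ranges from the 'keep' ranges (both sorted)."""
    result = []
    for ks, ke in keep:
        cur = ks
        for ds, de in drop:
            if de <= cur or ds >= ke:
                continue
            if ds > cur:
                result.append((cur, ds))
            cur = max(cur, de)
        if cur < ke:
            result.append((cur, ke))
    return result

# Only these ranges need an actual NBD read; dirty-but-zero ranges can be
# recorded as zero regions in the backup instead of being transferred.
to_read = subtract(dirty, zero)
print(to_read)
```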
-
See issue #250
-
On 3/27/25 17:37, Samuel wrote:

> @dlunev
>
>> This is definitely a bug, and the problem is in the way the software
>> obtains the changed-block bitmap from QEMU. It MUST query two
>> metacontexts from NBD:
>> * bitmap
>> * base allocation
>
> Do you use this virtnbdbackup version with the new zeros detection on
> big .qcow2 machines? We are running into a time issue; on some machines
> this takes a very long time: #260 (comment)
> I would be interested to know if you have this too?

That looks strange. The real question is where the time is spent. CPU time should not be a problem. Any verbose logs from the tool?
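If the verbose logs don't make it obvious, profiling the run usually shows quickly whether the time goes into zero detection, NBD round trips or plain I/O. A generic sketch with Python's built-in cProfile; the domain, backup level and paths are placeholders:

```python
# Sketch: run virtnbdbackup under cProfile to see where the time is spent.
# Domain name, backup level and paths are placeholders.
import shutil
import subprocess

tool = shutil.which("virtnbdbackup")

subprocess.run(
    ["python3", "-m", "cProfile", "-o", "/tmp/virtnbdbackup.prof",
     tool, "-d", "vm1", "-l", "inc", "-o", "/backup/vm1"],
    check=True,
)

# Inspect the hottest call paths afterwards, e.g.:
#   python3 -c "import pstats; pstats.Stats('/tmp/virtnbdbackup.prof').sort_stats('cumulative').print_stats(25)"
```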
-
I'm getting such huge incremental backups of a ZFS OS daily, and to save space (after an NVMe broke from too many writes) I'm now running full backups. I checked fstrim, iostat and snapshot increments on the host, and I just don't see where it is coming from.
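One way to narrow that down is to watch the dirty byte count QEMU reports for each bitmap between two backups; whatever keeps writing shows up there long before the next incremental runs. A sketch via virsh qemu-monitor-command with a placeholder domain name; depending on the QEMU version the bitmaps appear either on the device entry itself or under its "inserted" node:

```python
# Sketch: read the per-bitmap dirty byte count from QEMU to watch how fast
# the bitmap grows between backups. "vm1" is a placeholder domain name.
import json
import subprocess

def dirty_bitmaps(domain):
    out = subprocess.run(
        ["virsh", "qemu-monitor-command", domain, '{"execute":"query-block"}'],
        check=True, capture_output=True, text=True,
    ).stdout
    result = []
    for dev in json.loads(out)["return"]:
        # Older QEMU reports bitmaps on the device, newer under "inserted".
        bitmaps = dev.get("dirty-bitmaps") or dev.get("inserted", {}).get("dirty-bitmaps") or []
        for bm in bitmaps:
            result.append((dev.get("device") or dev.get("qdev"), bm["name"], bm["count"]))
    return result

for device, name, count in dirty_bitmaps("vm1"):
    print(f"{device or '?'} {name}: {count / 1024**2:.1f} MiB dirty")
```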