Conversation

alek-p (Contributor) commented Aug 13, 2025

Motivation and Context

We've seen a handful of rare corruption cases where the checksum for a block is correct, but the block fails to decompress.
Since a scrub does not decompress the blocks it reads, a scrub currently will not catch this type of corruption.

To address this, we've implemented a module parameter (zfs_scrub_decompress) that, when set to a non-zero value, causes all scrubbed blocks to be decompressed; if decompression fails, the failure is reported in zpool status -v output as permanent data error(s).
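
For illustration only, a minimal sketch of how such a tunable is typically declared in OpenZFS; the exact name, placement, and wording in this PR's diff may differ:

/*
 * Sketch only -- the PR's actual declaration may differ.
 * ZFS_MODULE_PARAM is the standard OpenZFS macro for exposing a
 * tunable; on Linux this one would appear as
 * /sys/module/zfs/parameters/zfs_scrub_decompress.
 */
static int zfs_scrub_decompress = 0;	/* 0 = existing scrub behavior */

ZFS_MODULE_PARAM(zfs, zfs_, scrub_decompress, INT, ZMOD_RW,
	"Decompress scrubbed blocks to further verify their integrity");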

Description

To implement this, the dsl_scan_scrub_done() callback was modified to optionally decompress scrubbed blocks.
The existing scrub behavior is preserved because the tunable defaults to 0. A minimal sketch of the idea is shown below.
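
A sketch of the check itself, assuming current OpenZFS interfaces (the real diff lives in dsl_scan.c and may differ; in particular, zio_decompress_data()'s exact signature varies across releases):

/*
 * Sketch, not the PR's diff: attempt to decompress a scrubbed block
 * to further verify its integrity. Assumes an ABD-based
 * zio_decompress_data(); older releases take a void *dst instead.
 */
static int
scrub_verify_decompress(const blkptr_t *bp, abd_t *data)
{
	enum zio_compress c = BP_GET_COMPRESS(bp);

	if (c == ZIO_COMPRESS_OFF)
		return (0);		/* nothing to decompress */

	uint64_t lsize = BP_GET_LSIZE(bp);
	uint64_t psize = BP_GET_PSIZE(bp);
	abd_t *dst = abd_alloc_linear(lsize, B_FALSE);

	/* a non-zero return means the payload cannot be decompressed */
	int err = zio_decompress_data(c, data, dst, psize, lsize, NULL);

	abd_free(dst);
	return (err);
}

Note that such a check is only meaningful on plaintext (or already-decrypted) data, which is exactly what the review thread below turns on.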

How Has This Been Tested?

Ran the zpool scrub-related tests in the ZFS test suite, which passed, along with some basic performance tests. The performance tests suggest roughly 10-15% higher peak CPU usage with decompressive scrub.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Quality assurance (non-breaking change which makes the code more robust against bugs)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

@alek-p added the Status: Code Review Needed label (ready for review and testing) Aug 13, 2025
/*
 * If the block was read without error, is compressed, and we're doing
 * a decompression scrub, we will now attempt to decompress it to
 * further verify its integrity.
 */
Contributor commented:
For encrypted datasets, unless the encryption keys are loaded, I'd expect the decompression to fail. It'd be nice to optionally do this check as part of the normal pipeline.

Member commented:

Existing scrub code always uses the ZIO_FLAG_RAW flag to disable both decryption and decompression. Unless that is what you mean, Brian, I wonder whether, instead of the additional code here, we could just set the flags properly to make the ZIO pipeline also verify encryption MACs and only then decompress.

I also wonder whether it is realistic (I really doubt it happens now) to try to recover from these errors by reading from other data copies. I don't know the use case of this patch; it could be either memory corruption (in which case all copies will be broken) or some on-disk corruption that managed to slip past the checksums (which is unlikely, but in which case there might be hope).
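
A sketch of that flag-based alternative, assuming OpenZFS's existing flags (in current code ZIO_FLAG_RAW is the union of ZIO_FLAG_RAW_COMPRESS and ZIO_FLAG_RAW_ENCRYPT); everything else here is illustrative:

/*
 * Illustrative only: pick scrub read flags so that the normal ZIO
 * pipeline decompresses (and thus verifies) the payload, rather
 * than bolting a decompression step onto the done callback.
 */
int zio_flags = ZIO_FLAG_SCRUB | ZIO_FLAG_CANFAIL;

if (zfs_scrub_decompress) {
	/*
	 * Skip decryption only; for unencrypted blocks the pipeline
	 * still runs the decompress stage and will surface failures.
	 */
	zio_flags |= ZIO_FLAG_RAW_ENCRYPT;
} else {
	/* classic scrub: skip both decryption and decompression */
	zio_flags |= ZIO_FLAG_RAW;
}

As the author notes further down, substituting ZIO_FLAG_RAW_ENCRYPT alone did not visibly surface decompression failures, so this should be read as the intent of the suggestion, not a confirmed fix.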

Contributor commented:

Unless that is what you mean... we could just set the flags properly...

Yes, this is what I was suggesting. Although, I'm not convinced we can set the flags properly, since the encryption keys could be unloaded at any time while we're scrubbing. A new ZIO_FLAG_ might be needed as a hint to decrypt and decompress the block only if possible.
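
Purely as illustration of that idea, a hypothetical flag; the name, bit, and semantics are invented here and do not exist in OpenZFS:

/*
 * Hypothetical, per the comment above -- neither the flag name nor
 * the bit exists in OpenZFS today. Intended semantics: if the
 * dataset key is loaded, decrypt and decompress so failures surface;
 * if the key is (or becomes) unloaded, fall back to a raw read
 * instead of failing the scrub I/O.
 */
#define	ZIO_FLAG_TRANSFORM_IF_POSSIBLE	(1ULL << 62)

static inline int
scrub_read_flags(void)
{
	/* hypothetical use when issuing the scrub read */
	return (ZIO_FLAG_SCRUB | ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_TRANSFORM_IF_POSSIBLE);
}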

alek-p (Author) commented:

I tried to go this route before by setting ZIO_FLAG_RAW_ENCRYPT instead of ZIO_FLAG_RAW, hoping to see a decompression failure during scrub, but I couldn't find any evidence of the failure. I'll look again.

The use case is that we have a handful of cases of on-disk corruption where the checksum for the block is correct, but the block fails to decompress. We would like to identify all such cases using a scrub. Incidentally, demand-reading such a block returns EIO and doesn't add anything to the errlog; we have a patch that changes this as well.

The recovery question is an interesting one. It might be possible to recover in some cases, but in the case I've dug into it is not, since we never had a good copy of that block. The bad block was received already containing the corruption (in a compressed send stream), and the source did not have redundancy configured.

@alek-p force-pushed the decompressive_scrub branch from 3b714d2 to d828236 on August 15, 2025 at 17:23