added module param to enable decompression of scrubbed blocks #17630
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open
alek-p wants to merge 1 commit into openzfs:master from alek-p:decompressive_scrub
tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_decompress.ksh
63 changes: 63 additions & 0 deletions
#!/bin/ksh -p
# SPDX-License-Identifier: CDDL-1.0
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#

#
# Copyright (c) 2025 ConnectWise Inc. All rights reserved.
#

. $STF_SUITE/include/libtest.shlib
. $STF_SUITE/tests/functional/cli_root/zpool_scrub/zpool_scrub.cfg

#
# DESCRIPTION:
#	'zpool scrub' with zfs_scrub_decompress set works
#
# STRATEGY:
#	1. Set the zfs_scrub_decompress tunable.
#	2. Start a scrub and wait for it to finish.
#	3. Repeat with zfs_scrub_decompress not set.
#

function cleanup
{
	set_tunable32 SCRUB_DECOMPRESS 0
	zfs destroy $TESTPOOL/newfs
}

verify_runnable "global"

log_onexit cleanup

log_assert "Scrub with decompression tunable set - works."

# Create our testing dataset
log_must zfs create $TESTPOOL/newfs
# Make sure compression is on
log_must zfs set compression=on $TESTPOOL/newfs
typeset file="/$TESTPOOL/newfs/$TESTFILE0"
# Create some data in our dataset
log_must dd if=/dev/urandom of=$file bs=1024 count=1024 oflag=sync
# Make sure the file also contains compressible data
log_must eval "echo 'aaaaaaaa' >> "$file

# Enable decompression of blocks read by scrub
log_must set_tunable32 SCRUB_DECOMPRESS 1
# Run a scrub and wait for it to finish
log_must zpool scrub -w $TESTPOOL

# Disable decompression of blocks read by scrub
log_must set_tunable32 SCRUB_DECOMPRESS 0
# Run a scrub and wait for it to finish
log_must zpool scrub -w $TESTPOOL

log_pass "Scrub with decompression tunable set - works."
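Outside the test suite's `set_tunable32` helper, a tunable like this would normally be toggled on Linux through the module-parameter interface. A sketch of manual use, where the `/sys/module/zfs/parameters/zfs_scrub_decompress` path is an assumption inferred from the `SCRUB_DECOMPRESS` tunable name and the usual OpenZFS parameter layout, not confirmed by this PR:

```shell
# Assumed sysfs path, following the usual OpenZFS module-parameter
# convention; the parameter name is inferred from the test above.
echo 1 > /sys/module/zfs/parameters/zfs_scrub_decompress

# Scrub with decompression enabled and wait for completion, then
# inspect the pool for any errors the scrub surfaced.
zpool scrub -w tank
zpool status -v tank

# Restore the default (decompression disabled).
echo 0 > /sys/module/zfs/parameters/zfs_scrub_decompress
```
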
For encrypted datasets, unless the encryption keys are loaded I'd expect the decompression to fail. It'd be nice to optionally do this check as part of the normal pipeline.
The existing scrub code always uses the ZIO_FLAG_RAW flag to disable both decryption and decompression. Unless that is what you mean, Brian, I wonder if, instead of the additional code here, we could just set the flags properly to make the ZIO pipeline also verify encryption MACs and only then decompress. I also wonder whether it is realistic (I really doubt it happens now) to try to recover from those errors by reading from other data copies. I don't know what the use case of this patch is: it could be either memory corruption (in which case all copies will be broken) or some on-disk corruption that managed to slip through the checksums (which is unlikely, but in that case there might be hope).
Yes, this is what I was suggesting. Although, I'm not convinced we can set the flags properly, since the encryption keys could be unloaded at any time while we're scrubbing. A new ZIO_FLAG_ might be needed as a hint to decrypt and decompress the block only if possible.
I tried to go this route before by setting ZIO_FLAG_RAW_ENCRYPT instead of ZIO_FLAG_RAW, hoping to see a decompression failure during scrub, but I couldn't find any evidence of the failure. I'll look again.

The use case is that we have a handful of cases of on-disk corruption where the checksum for the block is correct, but the block fails to decompress. We would like to identify all such cases using a scrub. By the way, demand-reading such a block returns EIO and doesn't add anything to the errlog; we have a patch that changes this as well.

The recovery question is an interesting one. It might be possible to recover in some cases, but in the case I've dug into it is not possible, since we never had a good copy of that block: the bad block was received already containing the corruption (in a compressed send stream), and the source did not have redundancy configured.
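The failure mode described here, where a block passes its checksum yet will not decompress, can be mimicked with ordinary userland tools: corrupt the compressed bytes before the checksum is taken, and the stored checksum still verifies while decompression fails. A toy sketch, using gzip and sha256sum as stand-ins for ZFS's compressor and block checksum (file names are illustrative):

```shell
# Build a highly compressible "block" and compress it.
head -c 4096 /dev/zero | tr '\0' 'A' > data
gzip -c data > block.gz

# Corrupt bytes inside the compressed stream *before* checksumming,
# modeling damage that arrived already baked in (e.g. via a send stream).
printf '\377\377\377\377' | dd of=block.gz bs=1 seek=15 count=4 conv=notrunc 2>/dev/null

# The checksum is taken over the (already corrupted) stored bytes,
# so a checksum-only scrub sees nothing wrong...
sha256sum block.gz > block.sha
sha256sum -c block.sha

# ...but actually decompressing the block surfaces the damage.
gzip -t block.gz || echo "checksum OK, decompression failed"
```

This is why a checksum-verifying scrub is silent about such blocks, and why the tunable in this PR has to decompress the data to find them.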