
Commit 08904a0

Merge pull request ceph#54238 from zdover23/wip-doc-2023-10-30-rados-config-osd-config-ref-scrubbing

doc/rados: improve "scrubbing" explanation

Reviewed-by: Ronen Friedman <rfriedma@redhat.com>

2 parents d5792f0 + 19b1399

File tree

1 file changed: +14 additions, -11 deletions

doc/rados/configuration/osd-config-ref.rst

@@ -145,17 +145,20 @@ See `Pool & PG Config Reference`_ for details.
 Scrubbing
 =========
 
-In addition to making multiple copies of objects, Ceph ensures data integrity by
-scrubbing placement groups. Ceph scrubbing is analogous to ``fsck`` on the
-object storage layer. For each placement group, Ceph generates a catalog of all
-objects and compares each primary object and its replicas to ensure that no
-objects are missing or mismatched. Light scrubbing (daily) checks the object
-size and attributes. Deep scrubbing (weekly) reads the data and uses checksums
-to ensure data integrity.
-
-Scrubbing is important for maintaining data integrity, but it can reduce
-performance. You can adjust the following settings to increase or decrease
-scrubbing operations.
+One way that Ceph ensures data integrity is by "scrubbing" placement groups.
+Ceph scrubbing is analogous to ``fsck`` on the object storage layer. Ceph
+generates a catalog of all objects in each placement group and compares each
+primary object to its replicas, ensuring that no objects are missing or
+mismatched. Light scrubbing checks the object size and attributes, and is
+usually done daily. Deep scrubbing reads the data and uses checksums to ensure
+data integrity, and is usually done weekly. The frequencies of both light
+scrubbing and deep scrubbing are determined by the cluster's configuration,
+which is fully under your control and subject to the settings explained below
+in this section.
+
+Although scrubbing is important for maintaining data integrity, it can reduce
+the performance of the Ceph cluster. You can adjust the following settings to
+increase or decrease the frequency and depth of scrubbing operations.
 
 
 .. confval:: osd_max_scrubs
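
For reference (not part of this commit): the scrub cadence described in the
rewritten paragraph is governed by OSD options such as
``osd_scrub_min_interval``, ``osd_deep_scrub_interval``, and
``osd_max_scrubs``. A minimal sketch of tuning them at runtime with the
``ceph config`` CLI follows; the interval values (in seconds) are
illustrative, not recommendations from this commit:

    # light scrubbing: no more often than once a day (86400 seconds)
    ceph config set osd osd_scrub_min_interval 86400
    # deep scrubbing: roughly weekly (604800 seconds)
    ceph config set osd osd_deep_scrub_interval 604800
    # limit concurrent scrub operations per OSD
    ceph config set osd osd_max_scrubs 1

Current values can be inspected with, for example,
``ceph config get osd osd_deep_scrub_interval``.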
