Commit 1a8a7e4

Merge pull request ceph#59856 from zdover23/wip-doc-2024-09-18-rados-ops-health-checks
doc/rados: add confval directives to health-checks

Reviewed-by: Anthony D'Atri <[email protected]>

2 parents: 827eafb + a159821

1 file changed: 10 additions, 9 deletions

doc/rados/operations/health-checks.rst

Lines changed: 10 additions & 9 deletions
@@ -1499,18 +1499,19 @@ ____________________
 One or more Placement Groups (PGs) have not been deep scrubbed recently. PGs
 are normally scrubbed every :confval:`osd_deep_scrub_interval` seconds at most.
 This health check is raised if a certain percentage (determined by
-``mon_warn_pg_not_deep_scrubbed_ratio``) of the interval has elapsed after the
-time the scrub was scheduled and no scrub has been performed.
+:confval:`mon_warn_pg_not_deep_scrubbed_ratio`) of the interval has elapsed
+after the time the scrub was scheduled and no scrub has been performed.
 
 PGs will receive a deep scrub only if they are flagged as *clean* (which means
 that they are to be cleaned, and not that they have been examined and found to
 be clean). Misplaced or degraded PGs might not be flagged as ``clean`` (see
 *PG_AVAILABILITY* and *PG_DEGRADED* above).
 
 This document offers two methods of setting the value of
-``osd_deep_scrub_interval``. The first method listed here changes the value of
-``osd_deep_scrub_interval`` globally. The second method listed here changes the
-value of ``osd_deep scrub interval`` for OSDs and for the Manager daemon.
+:confval:`osd_deep_scrub_interval`. The first method listed here changes the
+value of :confval:`osd_deep_scrub_interval` globally. The second method listed
+here changes the value of :confval:`osd_deep scrub interval` for OSDs and for
+the Manager daemon.
 
 First Method
 ~~~~~~~~~~~~
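
The two options that this hunk now wraps in ``:confval:`` directives are ordinary cluster configuration options, so their current values can be read back from a running cluster. A minimal sketch for inspecting them, assuming access to the ``ceph`` CLI (an illustration, not part of the commit):

    # Interval, in seconds, at which each PG is expected to be deep scrubbed
    ceph config get osd osd_deep_scrub_interval

    # Fraction of that interval that may elapse past the scheduled scrub time
    # before the PG_NOT_DEEP_SCRUBBED warning is raised
    ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio
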
@@ -1521,10 +1522,10 @@ To manually initiate a deep scrub of a clean PG, run the following command:
 
    ceph pg deep-scrub <pgid>
 
-Under certain conditions, the warning ``X PGs not deep-scrubbed in time``
+Under certain conditions, the warning ``PGs not deep-scrubbed in time``
 appears. This might be because the cluster contains many large PGs, which take
 longer to deep-scrub. To remedy this situation, you must change the value of
-``osd_deep_scrub_interval`` globally.
+:confval:`osd_deep_scrub_interval` globally.
 
 #. Confirm that ``ceph health detail`` returns a ``pgs not deep-scrubbed in
    time`` warning::
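
The ``First Method`` procedure that this hunk touches changes the interval globally; only its first step is visible in the diff context. A minimal sketch of such a global change, using 1209600 seconds (two weeks) purely as a placeholder value rather than a recommendation from this commit:

    # Set the deep-scrub interval for the whole cluster
    ceph config set global osd_deep_scrub_interval 1209600

    # Confirm the value that OSDs now see
    ceph config get osd osd_deep_scrub_interval
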
@@ -1555,10 +1556,10 @@ To manually initiate a deep scrub of a clean PG, run the following command:
 
    ceph pg deep-scrub <pgid>
 
-Under certain conditions, the warning ``X PGs not deep-scrubbed in time``
+Under certain conditions, the warning ``PGs not deep-scrubbed in time``
 appears. This might be because the cluster contains many large PGs, which take
 longer to deep-scrub. To remedy this situation, you must change the value of
-``osd_deep_scrub_interval`` for OSDs and for the Manager daemon.
+:confval:`osd_deep_scrub_interval` for OSDs and for the Manager daemon.
 
 #. Confirm that ``ceph health detail`` returns a ``pgs not deep-scrubbed in
    time`` warning::
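
The ``Second Method`` procedure scopes the same change to the OSDs and to the Manager daemon rather than setting it globally, as the amended introduction above states. A minimal sketch, again with 1209600 seconds as a placeholder value:

    # Set the deep-scrub interval for all OSD daemons
    ceph config set osd osd_deep_scrub_interval 1209600

    # Set the same interval for the Manager daemon, per the section's second method
    ceph config set mgr osd_deep_scrub_interval 1209600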
