Commit 8b0f3a5

Merge pull request ceph#59635 from zdover23/wip-doc-2024-09-06-rados-ops-health-checks
doc/rados: add "pgs not deep scrubbed in time" info

Reviewed-by: Anthony D'Atri <[email protected]>
2 parents: fc70b44 + d620a51

File tree

1 file changed: +26 -0 lines


doc/rados/operations/health-checks.rst

@@ -1513,6 +1513,32 @@ To manually initiate a deep scrub of a clean PG, run the following command:

      ceph pg deep-scrub <pgid>

Under certain conditions, the warning ``X PGs not deep-scrubbed in time``
appears. This might be because the cluster contains many large PGs, which take
longer to deep-scrub. To remedy this situation, increase the value of
``osd_deep_scrub_interval`` either globally or for the Manager daemon.
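
Before changing anything, it can help to confirm the interval currently in
effect. The following check is an illustrative sketch rather than part of the
original procedure; the ``osd`` target is an assumption, and the value is
reported in seconds:

.. prompt:: bash #

   ceph config get osd osd_deep_scrub_interval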

#. Confirm that ``ceph health detail`` returns a ``pgs not deep-scrubbed in
   time`` warning::

      # ceph health detail
      HEALTH_WARN 1161 pgs not deep-scrubbed in time
      [WRN] PG_NOT_DEEP_SCRUBBED: 1161 pgs not deep-scrubbed in time
          pg 86.fff not deep-scrubbed since 2024-08-21T02:35:25.733187+0000

#. Change ``osd_deep_scrub_interval`` globally:

   .. prompt:: bash #

      ceph config set global osd_deep_scrub_interval 1209600
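
The value ``1209600`` is in seconds (two weeks). To scope the change to the
Manager daemon mentioned above rather than the whole cluster, a variant of the
same command can be used. This is an illustrative sketch; the interval shown
is an example, not a recommendation:

.. prompt:: bash #

   ceph config set mgr osd_deep_scrub_interval 1209600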

The above procedure was developed by Eugen Block in September of 2024.

See `Eugen Block's blog post <https://heiterbiswolkig.blogs.nde.ag/2024/09/06/pgs-not-deep-scrubbed-in-time/>`_ for much more detail.

See `Redmine tracker issue #44959 <https://tracker.ceph.com/issues/44959>`_.

PG_SLOW_SNAP_TRIMMING
_____________________
