Commit ee0ef76

doc/rados: fix sentences in health-checks (2 of x)
Make sentences agree at the head of each section in
doc/rados/operations/health-checks.rst. The sentences were sometimes in the
imperative mood and sometimes in the declarative mood. This commit edits the
second third of doc/rados/operations/health-checks.rst.

Zac: cf. 000228

Signed-off-by: Zac Dover <[email protected]>
1 parent baa6b7b commit ee0ef76

1 file changed: +31 −34 lines

doc/rados/operations/health-checks.rst

@@ -641,9 +641,10 @@ command:
 BLUESTORE_FRAGMENTATION
 _______________________
 
-As BlueStore operates, the free space on the underlying storage will become
-fragmented. This is normal and unavoidable, but excessive fragmentation causes
-slowdown. To inspect BlueStore fragmentation, run the following command:
+``BLUESTORE_FRAGMENTATION`` indicates that the free space that underlies
+BlueStore has become fragmented. This is normal and unavoidable, but excessive
+fragmentation causes slowdown. To inspect BlueStore fragmentation, run the
+following command:
 
 .. prompt:: bash $
 
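For context, the inspection command that the reworded paragraph points at can be sketched as follows. This is not part of the diff; the OSD id ``0`` is a placeholder, and the ``bluestore allocator score`` admin-socket command reports a fragmentation score between 0 (no fragmentation) and 1 (severe fragmentation):

```
# Report a fragmentation score for the 'block' device of osd.0
# (0 = no fragmentation, 1 = maximum fragmentation).
ceph daemon osd.0 bluestore allocator score block
```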
@@ -682,11 +683,9 @@ One or more OSDs have BlueStore volumes that were created prior to the
 Nautilus release. (In Nautilus, BlueStore tracks its internal usage
 statistics on a granular, per-pool basis.)
 
-If *all* OSDs
-are older than Nautilus, this means that the per-pool metrics are
-simply unavailable. But if there is a mixture of pre-Nautilus and
-post-Nautilus OSDs, the cluster usage statistics reported by ``ceph
-df`` will be inaccurate.
+If *all* OSDs are older than Nautilus, this means that the per-pool metrics are
+simply unavailable. But if there is a mixture of pre-Nautilus and post-Nautilus
+OSDs, the cluster usage statistics reported by ``ceph df`` will be inaccurate.
 
 The old OSDs can be updated to use the new usage-tracking scheme by stopping
 each OSD, running a repair operation, and then restarting the OSD. For example,
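The stop/repair/restart cycle referred to above can be sketched as follows (not part of the diff; assumes a systemd-managed OSD, with ``$N`` as a placeholder OSD id):

```
N=0  # placeholder OSD id
systemctl stop ceph-osd@$N
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-$N
systemctl start ceph-osd@$N
```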
@@ -798,7 +797,7 @@ about the source of the problem.
 BLUESTORE_SPURIOUS_READ_ERRORS
 ______________________________
 
-One or more BlueStore OSDs detect read errors on the main device.
+One (or more) BlueStore OSDs detects read errors on the main device.
 BlueStore has recovered from these errors by retrying disk reads. This alert
 might indicate issues with underlying hardware, issues with the I/O subsystem,
 or something similar. Such issues can cause permanent data
@@ -826,7 +825,7 @@ _______________________________
 
 There are BlueStore log messages that reveal storage drive issues
 that can cause performance degradation and potentially data unavailability or
-loss. These may indicate a storage drive that is failing and should be
+loss. These may indicate a storage drive that is failing and should be
 evaluated and possibly removed and replaced.
 
 ``read stalled read 0x29f40370000~100000 (buffered) since 63410177.290546s, timeout is 5.000000s``
@@ -853,7 +852,7 @@ To change this, run the following command:
 ceph config set global bdev_stalled_read_warn_lifetime 10
 ceph config set global bdev_stalled_read_warn_threshold 5
 
-this may be done for specific OSDs or a given mask. For example,
+This may be done for specific OSDs or a given mask. For example,
 to apply only to SSD OSDs:
 
 .. prompt:: bash $
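The mask-based form the paragraph mentions ("to apply only to SSD OSDs") can be sketched with a ``ceph config set`` device-class mask; the values shown are illustrative:

```
# Apply the stalled-read warning settings only to OSDs backed by SSDs,
# using a device-class mask instead of the global section.
ceph config set osd/class:ssd bdev_stalled_read_warn_lifetime 10
ceph config set osd/class:ssd bdev_stalled_read_warn_threshold 5
```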
@@ -866,30 +865,28 @@ to apply only to SSD OSDs:
 WAL_DEVICE_STALLED_READ_ALERT
 _____________________________
 
-The warning state ``WAL_DEVICE_STALLED_READ_ALERT`` is raised to
-indicate ``stalled read`` instances on a given BlueStore OSD's ``WAL_DEVICE``.
-This warning can be configured via the :confval:`bdev_stalled_read_warn_lifetime` and
-:confval:`bdev_stalled_read_warn_threshold` options with commands similar to those
-described in the
-``BLOCK_DEVICE_STALLED_READ_ALERT`` warning section.
+The warning state ``WAL_DEVICE_STALLED_READ_ALERT`` is raised to indicate
+``stalled read`` instances on a given BlueStore OSD's ``WAL_DEVICE``. This
+warning can be configured via the :confval:`bdev_stalled_read_warn_lifetime`
+and :confval:`bdev_stalled_read_warn_threshold` options with commands similar
+to those described in the ``BLOCK_DEVICE_STALLED_READ_ALERT`` warning section.
 
 DB_DEVICE_STALLED_READ_ALERT
 ____________________________
 
-The warning state ``DB_DEVICE_STALLED_READ_ALERT`` is raised to
-indicate ``stalled read`` instances on a given BlueStore OSD's ``DB_DEVICE``.
-This warning can be configured via the :confval:`bdev_stalled_read_warn_lifetime` and
-:confval:`bdev_stalled_read_warn_threshold` options with commands similar to those
-described in the
-``BLOCK_DEVICE_STALLED_READ_ALERT`` warning section.
+The warning state ``DB_DEVICE_STALLED_READ_ALERT`` is raised to indicate
+``stalled read`` instances on a given BlueStore OSD's ``DB_DEVICE``. This
+warning can be configured via the :confval:`bdev_stalled_read_warn_lifetime`
+and :confval:`bdev_stalled_read_warn_threshold` options with commands similar
+to those described in the ``BLOCK_DEVICE_STALLED_READ_ALERT`` warning section.
 
 BLUESTORE_SLOW_OP_ALERT
 _______________________
 
-There are BlueStore log messages that reveal storage drive issues
-that can lead to performance degradation and data unavailability or loss.
-These indicate that the storage drive may be failing and should be investigated
-and potentially replaced.
+There are BlueStore log messages that reveal storage drive issues that can lead
+to performance degradation and data unavailability or loss. These indicate
+that the storage drive may be failing and should be investigated and
+potentially replaced.
 
 ``log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.028621219s, txc = 0x55a107c30f00``
 ``log_latency_fn slow operation observed for upper_bound, latency = 6.25955s``
@@ -1121,8 +1118,8 @@ LARGE_OMAP_OBJECTS
 __________________
 
 One or more pools contain large omap objects, as determined by
-``osd_deep_scrub_large_omap_object_key_threshold`` (threshold for the number of
-keys to determine what is considered a large omap object) or
+``osd_deep_scrub_large_omap_object_key_threshold`` (the threshold for the
+number of keys to determine what is considered a large omap object) or
 ``osd_deep_scrub_large_omap_object_value_sum_threshold`` (the threshold for the
 summed size in bytes of all key values to determine what is considered a large
 omap object) or both. To find more information on object name, key count, and
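The two thresholds discussed in this hunk can be tuned with ``ceph config set``; a sketch, with illustrative values (the stock key-count default is 200000 keys):

```
# Raise the key-count and summed-value-size thresholds that trigger
# LARGE_OMAP_OBJECTS; the values below are illustrative, not recommendations.
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 400000
ceph config set osd osd_deep_scrub_large_omap_object_value_sum_threshold 2G
```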
@@ -1142,7 +1139,7 @@ CACHE_POOL_NEAR_FULL
 ____________________
 
 A cache-tier pool is nearly full, as determined by the ``target_max_bytes`` and
-``target_max_objects`` properties of the cache pool. Once the pool reaches the
+``target_max_objects`` properties of the cache pool. When the pool reaches the
 target threshold, write requests to the pool might block while data is flushed
 and evicted from the cache. This state normally leads to very high latencies
 and poor performance.
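The two properties this hunk names are set per cache pool with ``ceph osd pool set``; a sketch, where ``hot-storage`` is a hypothetical cache-tier pool name and the sizes are illustrative:

```
# 'hot-storage' is a hypothetical cache-tier pool name.
ceph osd pool set hot-storage target_max_bytes 1099511627776   # 1 TiB
ceph osd pool set hot-storage target_max_objects 1000000
```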
@@ -1288,10 +1285,10 @@ For more information, see :ref:`choosing-number-of-placement-groups` and
 POOL_TARGET_SIZE_BYTES_OVERCOMMITTED
 ____________________________________
 
-One or more pools have a ``target_size_bytes`` property that is set in order to
-estimate the expected size of the pool, but the value(s) of this property are
-greater than the total available storage (either by themselves or in
-combination with other pools).
+One or more pools does have a ``target_size_bytes`` property that is set in
+order to estimate the expected size of the pool, but the value or values of
+this property are greater than the total available storage (either by
+themselves or in combination with other pools).
 
 This alert is usually an indication that the ``target_size_bytes`` value for
 the pool is too large and should be reduced or set to zero. To reduce the
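Reducing or zeroing the property, as the hunk's closing sentence advises, can be sketched as (``mypool`` is a placeholder pool name):

```
# 'mypool' is a placeholder; setting target_size_bytes to zero
# removes the overcommitted size estimate for that pool.
ceph osd pool set mypool target_size_bytes 0
```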
