Commit d6afce7

Merge pull request ceph#60930 from zdover23/wip-doc-2024-12-03-rados-ops-health-checks-3

doc/rados: fix sentences in health-checks (3 of x)

Reviewed-by: Cole Mitchell <[email protected]>

2 parents c609ce5 + 97df447

File tree

1 file changed: +13 −10 lines changed
doc/rados/operations/health-checks.rst

Lines changed: 13 additions & 10 deletions
@@ -1425,8 +1425,8 @@ resolution, see :ref:`storage-capacity` and :ref:`no-free-drive-space`.
 OBJECT_MISPLACED
 ________________
 
-One or more objects in the cluster are not stored on the node that CRUSH would
-prefer that they be stored on. This alert is an indication that data migration
+One or more objects in the cluster are not stored on the node that CRUSH
+prefers that they be stored on. This alert is an indication that data migration
 due to a recent cluster change has not yet completed.
 
 Misplaced data is not a dangerous condition in and of itself; data consistency
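The OBJECT_MISPLACED check described above is surfaced in the machine-readable health report. A minimal sketch of filtering that report for this check, assuming the JSON shape emitted by ``ceph health detail --format json`` (the sample report below is illustrative, not real cluster output):

```python
import json

# Illustrative sample of the JSON shape emitted by
# `ceph health detail --format json`; a real report comes from a live cluster.
report_json = """
{
  "status": "HEALTH_WARN",
  "checks": {
    "OBJECT_MISPLACED": {
      "severity": "HEALTH_WARN",
      "summary": {"message": "3/300 objects misplaced (1.000%)"}
    }
  }
}
"""

def misplaced_message(report):
    """Return the OBJECT_MISPLACED summary message, or None if absent."""
    check = report.get("checks", {}).get("OBJECT_MISPLACED")
    return check["summary"]["message"] if check else None

report = json.loads(report_json)
print(misplaced_message(report))  # -> 3/300 objects misplaced (1.000%)
```

Because misplaced data is expected during rebalancing, a monitoring script would typically log this message rather than page an operator.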
@@ -1625,9 +1625,10 @@ Stretch Mode
 INCORRECT_NUM_BUCKETS_STRETCH_MODE
 __________________________________
 
-Stretch mode currently only support 2 dividing buckets with OSDs, this warning suggests
-that the number of dividing buckets is not equal to 2 after stretch mode is enabled.
-You can expect unpredictable failures and MON assertions until the condition is fixed.
+Stretch mode currently only support 2 dividing buckets with OSDs, this warning
+suggests that the number of dividing buckets is not equal to 2 after stretch
+mode is enabled. You can expect unpredictable failures and MON assertions
+until the condition is fixed.
 
 We encourage you to fix this by removing additional dividing buckets or bump the
 number of dividing buckets to 2.
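The two-dividing-bucket requirement above can be checked from the CRUSH hierarchy. A minimal sketch, assuming the JSON shape emitted by ``ceph osd tree --format json`` and assuming ``datacenter``-type buckets are the dividing buckets (the sample tree is illustrative):

```python
import json

# Illustrative sample of the JSON shape emitted by `ceph osd tree --format json`.
tree_json = """
{"nodes": [
  {"id": -1, "name": "default", "type": "root"},
  {"id": -2, "name": "dc1", "type": "datacenter"},
  {"id": -3, "name": "dc2", "type": "datacenter"},
  {"id": 0,  "name": "osd.0", "type": "osd"}
]}
"""

def dividing_bucket_count(tree, bucket_type="datacenter"):
    """Count CRUSH buckets of the type used to divide the stretch cluster."""
    return sum(1 for node in tree["nodes"] if node["type"] == bucket_type)

count = dividing_bucket_count(json.loads(tree_json))
print("OK" if count == 2 else f"WARN: {count} dividing buckets, expected 2")
```

On a real cluster, a count other than 2 corresponds to the INCORRECT_NUM_BUCKETS_STRETCH_MODE warning.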
@@ -1650,17 +1651,19 @@ NVMeoF Gateway
 NVMEOF_SINGLE_GATEWAY
 _____________________
 
-One of the gateway group has only one gateway. This is not ideal because it makes
-high availability (HA) impossible with a single gatway in a group. This can lead to
-problems with failover and failback operations for the NVMeoF gateway.
+One of the gateway group has only one gateway. This is not ideal because it
+makes high availability (HA) impossible with a single gatway in a group. This
+can lead to problems with failover and failback operations for the NVMeoF
+gateway.
 
 It's recommended to have multiple NVMeoF gateways in a group.
 
 NVMEOF_GATEWAY_DOWN
 ___________________
 
-Some of the gateways are in the GW_UNAVAILABLE state. If a NVMeoF daemon has crashed,
-the daemon log file (found at ``/var/log/ceph/``) may contain troubleshooting information.
+Some of the gateways are in the GW_UNAVAILABLE state. If a NVMeoF daemon has
+crashed, the daemon log file (found at ``/var/log/ceph/``) may contain
+troubleshooting information.
 
 
 Miscellaneous
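The NVMEOF_SINGLE_GATEWAY condition above amounts to "some gateway group has exactly one member". A minimal sketch of that check; the group-to-gateway mapping and daemon names below are hypothetical, and a real deployment would derive them from the gateway configuration:

```python
# Hypothetical mapping of NVMeoF gateway groups to their member gateways.
gateway_groups = {
    "group1": ["client.nvmeof.gw0"],                       # single gateway: no HA
    "group2": ["client.nvmeof.gw1", "client.nvmeof.gw2"],  # failover possible
}

# Flag every group in which failover/failback is impossible.
single = [group for group, gws in gateway_groups.items() if len(gws) == 1]
for group in single:
    print(f"NVMEOF_SINGLE_GATEWAY: group {group} has only one gateway")
```

Adding a second gateway to each flagged group clears the warning, which is the remediation the documentation recommends.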
