@@ -262,14 +262,14 @@ with the CRUSH topology.
    ceph mon enable_stretch_mode e stretch_rule datacenter

 When stretch mode is enabled, PGs will become active only when they peer
-across CRUSH ``datacenter``s (or across whichever CRUSH bucket type was specified),
+across CRUSH ``datacenter`` buckets (or across whichever CRUSH bucket type was specified),
 assuming both are available. Pools will increase in size from the default ``3`` to
 ``4``, and two replicas will be placed at each site. OSDs will be allowed to
 connect to Monitors only if they are in the same data center as the Monitors.
 New Monitors will not be allowed to join the cluster if they do not specify a
 CRUSH location.

-If all OSDs and Monitors in one of the ``datacenter``s become inaccessible at once,
+If all OSDs and Monitors in one of the ``datacenter`` buckets become inaccessible at once,
 the cluster in the surviving ``datacenter`` enters *degraded stretch mode*.
 A health state warning will be
 raised, pools' ``min_size`` will be reduced to ``1``, and the cluster will be
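For context, the ``enable_stretch_mode`` call above is normally the last step of a longer sequence. A minimal sketch of that sequence follows; the monitor names (``a`` through ``e``), the site names, and the ``stretch_rule`` name are placeholders, and all of these commands require a running Ceph cluster:

```shell
# Place each monitor in the CRUSH hierarchy: two per data center,
# plus a tiebreaker monitor in a third location.
ceph mon set_location a datacenter=site1
ceph mon set_location b datacenter=site1
ceph mon set_location c datacenter=site2
ceph mon set_location d datacenter=site2
ceph mon set_location e datacenter=site3

# Stretch mode requires the connectivity election strategy.
ceph mon set election_strategy connectivity

# Enable stretch mode: tiebreaker monitor, CRUSH rule to switch
# pools to, and the dividing CRUSH bucket type.
ceph mon enable_stretch_mode e stretch_rule datacenter
```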
@@ -337,8 +337,8 @@ each data center. If pools exist in the cluster that do not have the default
 ``size`` or ``min_size``, Ceph will not enter stretch mode. An example of such
 a CRUSH rule is given above.

-Because stretch mode runs with poos' ``min_size`` set to ``1``
-, we recommend enabling stretch mode only when using OSDs on
+Because stretch mode runs with pools' ``min_size`` set to ``1``,
+we recommend enabling stretch mode only when using OSDs on
 SSDs. Hybrid HDD+SSD or HDD-only OSDs are not recommended
 due to the long time it takes for them to recover after connectivity between
 data centers has been restored. This reduces the potential for data loss.
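Since non-default ``size`` or ``min_size`` values block stretch mode, it can help to audit pool replication settings beforehand. A sketch against a live cluster (the pool name ``rbd`` is a placeholder; the effective defaults for a replicated pool are ``size 3`` / ``min_size 2``):

```shell
# Show every pool with its replication settings; pools whose size or
# min_size differ from the defaults will prevent enabling stretch mode.
ceph osd pool ls detail

# Inspect a single pool's settings (replace "rbd" with your pool name).
ceph osd pool get rbd size
ceph osd pool get rbd min_size
```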