Commit e8b8c95

Merge pull request ceph#63836 from zdover23/wip-doc-2025-06-10-63824-followup-test
doc/rados/operations: Address suggestions for stretch-mode.rst

Reviewed-by: Anthony D'Atri <[email protected]>
2 parents 8216229 + 660d163 commit e8b8c95

File tree: 1 file changed, +15 -15 lines changed

doc/rados/operations/stretch-mode.rst

Lines changed: 15 additions & 15 deletions
@@ -18,8 +18,9 @@ one-third to one-half of the total cluster).
 
 Ceph is designed with the expectation that all parts of its network and cluster
 will be reliable and that failures will be distributed randomly across the
-CRUSH topology. If a host or network switch goes down and causes the loss of many OSDs, Ceph is
-designed so that the remaining OSDs and monitors will route around such a loss.
+CRUSH topology. When a host or network switch goes down, many OSDs will
+become unavailable. Ceph is designed so that the remaining OSDs and
+Monitors will maintain access to data.
 
 Sometimes this cannot be relied upon. If you have a "stretched-cluster"
 deployment in which much of your cluster is behind a single network component,
@@ -30,12 +31,12 @@ data centers (or, in clouds, two availability zones), and a configuration with
 three data centers.
 
 In the two-site configuration, Ceph arranges for each site to hold a copy of
-the data, with a third site that has a tiebreaker (arbiter, witness)
-monitor. This tiebreaker monitor picks a winner when a network connection
+the data. A third site houses a tiebreaker (arbiter, witness)
+Monitor. This tiebreaker Monitor picks a winner when a network connection
 between sites fails and both data centers remain alive.
 
 The tiebreaker monitor can be a VM. It can also have higher network latency
-to the two main sites.
+to the OSD site(s) than OSD site(s) can have to each other.
 
 The standard Ceph configuration is able to survive many network failures or
 data-center failures without compromising data availability. When enough
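
The tiebreaker Monitor described above gets its placement from its CRUSH location. A minimal sketch of declaring that location, assuming a Monitor named ``e`` and a third-site bucket named ``site3`` (both placeholder names, not taken from this change):

    # tell the cluster where the tiebreaker lives; done before enabling stretch mode
    $ ceph mon set_location e datacenter=site3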
@@ -57,7 +58,7 @@ without human intervention.
 
 Ceph does not permit the compromise of data integrity or data consistency, but
 there are situations in which *data availability* is compromised. These
-situations can occur even though there are sufficient replias of data available to satisfy
+situations can occur even though there are sufficient replicas of data available to satisfy
 consistency and sizing constraints. In some situations, you might
 discover that your cluster does not satisfy those constraints.
 
@@ -87,8 +88,7 @@ Individual Stretch Pools
 ========================
 Setting individual ``stretch pool`` attributes allows for
 specific pools to be distributed across two or more data centers.
-This is done by executing the ``ceph osd pool stretch set`` command on each desired pool,
-contrasted with a cluster-wide strategy with *stretch mode*.
+This is done by executing the ``ceph osd pool stretch set`` command on each desired pool.
 See :ref:`setting_values_for_a_stretch_pool`
 
 Use stretch mode when you have exactly two data centers and require a uniform
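
A hedged sketch of the per-pool command referenced in this hunk. The pool name ``mypool`` and the values are illustrative, and the argument order assumed here (bucket count, bucket target, bucket barrier, CRUSH rule, size, min_size) follows :ref:`setting_values_for_a_stretch_pool`; verify it against your release:

    # distribute a single pool across two datacenters (placeholder values)
    $ ceph osd pool stretch set mypool 2 2 datacenter stretch_rule 4 2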
@@ -185,8 +185,8 @@ with the CRUSH topology.
 step emit
 }
 
-.. warning:: If a CRUSH rule is defined in stretch mode cluster and the
-rule has multiple ``take`` steps, then ``MAX AVAIL`` for the pools
+.. warning:: When a CRUSH rule is defined in a stretch mode cluster and the
+rule has multiple ``take`` steps, ``MAX AVAIL`` for the pools
 associated with the CRUSH rule will report that the available size is all
 of the available space from the datacenter, not the available space for
 the pools associated with the CRUSH rule.
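
The ``MAX AVAIL`` caveat in this hunk applies to rules of roughly the following shape, in which each site is selected by its own ``take`` step. This is a generic sketch with placeholder bucket names ``site1`` and ``site2``, not a rule from this commit:

    rule stretch_rule {
        id 1
        type replicated
        step take site1
        step chooseleaf firstn 2 type host
        step emit
        step take site2
        step chooseleaf firstn 2 type host
        step emit
    }

With two ``take`` steps like this, ``MAX AVAIL`` for the associated pools reflects a datacenter's free space rather than the space actually available to those pools, as the warning states.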
@@ -264,7 +264,7 @@ with the CRUSH topology.
 When stretch mode is enabled, PGs will become active only when they peer
 across CRUSH ``datacenter``s (or across whichever CRUSH bucket type was specified),
 assuming both are available. Pools will increase in size from the default ``3`` to
-``4``, and two replicas will be place at each site. OSDs will be allowed to
+``4``, and two replicas will be placed at each site. OSDs will be allowed to
 connect to Monitors only if they are in the same data center as the Monitors.
 New Monitors will not be allowed to join the cluster if they do not specify a
 CRUSH location.
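
The behavior in this hunk takes effect once stretch mode has been enabled. A minimal sketch of the enable step, assuming a tiebreaker Monitor named ``e``, a stretch CRUSH rule named ``stretch_rule``, and ``datacenter`` as the dividing bucket type:

    # stretch mode requires the connectivity election strategy
    $ ceph mon set election_strategy connectivity
    # arguments: tiebreaker monitor, stretch CRUSH rule, dividing bucket type
    $ ceph mon enable_stretch_mode e stretch_rule datacenter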
@@ -302,20 +302,20 @@ To exit stretch mode, run the following command:
 
 .. describe:: {crush_rule}
 
-The CRUSH rule to now use for all pools. If this
+The non-stretch CRUSH rule to use for all pools. If this
 is not specified, the pools will move to the default CRUSH rule.
 
 :Type: String
 :Required: No.
 
-This command will move the cluster back to normal mode;
+This command moves the cluster back to normal mode;
 the cluster will no longer be in stretch mode.
 All pools will be set with their prior ``size`` and ``min_size``
 values. At this point the user is responsible for scaling down the cluster
 to the desired number of OSDs if they choose to operate with fewer OSDs.
 
-Please note that the command will not execute when the cluster is in
-recovery stretch mode. The command will only execute when the cluster
+Note that the command will not execute when the cluster is in
+recovery stretch mode. The command executes only when the cluster
 is in degraded stretch mode or healthy stretch mode.
 
 Limitations of Stretch Mode
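
The exit command that takes the ``{crush_rule}`` argument documented in this hunk is not itself shown in the diff; in recent releases it is spelled ``ceph mon disable_stretch_mode``. A sketch under that assumption, using the stock ``replicated_rule`` as the non-stretch rule:

    # revert all pools to a non-stretch rule; command and flag per recent upstream docs
    $ ceph mon disable_stretch_mode replicated_rule --yes-i-really-mean-it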
