@@ -94,6 +94,29 @@ configuration across the entire cluster. Conversely, opt for a ``stretch pool``
 when you need a particular pool to be replicated across ``more than two data centers``,
 providing a more granular level of control and a larger cluster size.
 
+Limitations
+-----------
+
+Individual stretch pools do not support I/O operations during a netsplit
+between two or more zones. While the cluster remains accessible for basic
+Ceph commands, I/O remains unavailable until the netsplit is resolved. This
+differs from ``stretch mode``, in which the tiebreaker monitor can isolate one
+zone of the cluster and continue I/O operations in degraded mode during a
+netsplit. See :ref:`stretch_mode1`.
+
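+The following is a minimal sketch of how to observe the cluster while a
+netsplit is in effect; it assumes a working admin keyring and uses only
+status commands, which remain available even though client I/O does not:
+
+.. prompt:: bash $
+
+   ceph -s             # overall cluster health, degraded and inactive PG counts
+   ceph health detail  # per-check detail, including which PGs are affected
+   ceph osd tree       # confirm which zone's OSDs are unreachable
+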
+Ceph is designed to tolerate multiple host failures. However, if more than 25%
+of the OSDs in the cluster go down, Ceph may stop marking OSDs as ``out``,
+which prevents rebalancing and can leave some PGs inactive. This behavior is
+controlled by the ``mon_osd_min_in_ratio`` parameter. By default,
+``mon_osd_min_in_ratio`` is set to 0.75, meaning that at least 75% of the OSDs
+in the cluster must remain ``in`` before any additional OSDs can be marked
+``out``. This setting prevents too many OSDs from being marked ``out``, because
+doing so might lead to significant data movement. The data movement can cause
+high client I/O impact and long recovery times when the OSDs are returned to
+service. If Ceph stops marking OSDs as ``out``, some PGs may fail to rebalance
+to surviving OSDs, potentially leaving them ``inactive``.
+See https://tracker.ceph.com/issues/68338 for more information.
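+
+As a minimal sketch, the current ratio can be inspected, and adjusted if
+necessary, through the central configuration database (the value ``0.5`` below
+is only an example; lowering the ratio lets more OSDs be marked ``out`` at the
+cost of more data movement):
+
+.. prompt:: bash $
+
+   ceph config get mon mon_osd_min_in_ratio
+   ceph config set mon mon_osd_min_in_ratio 0.5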
+
+.. _stretch_mode1:
+
 Stretch Mode
 ============
 Stretch mode is designed to handle deployments in which you cannot guarantee the