@@ -53,7 +53,7 @@ GET _cluster/allocation/explain
[[fix-watermark-errors-temporary]]
==== Temporary Relief

- To immediately restore write operations, you can temporarily increase the
+ To immediately restore write operations, you may temporarily increase the
<<disk-based-shard-allocation,disk watermarks>> and remove the
<<index-block-settings,write block>>.

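For reference, the temporary relief described above might look roughly like the
following console requests, which raise the disk watermarks and then clear the
write block; the watermark percentages and the `*` index pattern are placeholder
values to adapt to your cluster:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}

PUT */_settings?expand_wildcards=all
{
  "index.blocks.read_only_allow_delete": null
}
----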
@@ -102,19 +102,34 @@ PUT _cluster/settings
[[fix-watermark-errors-resolve]]
==== Resolve

- As a long-term solution, we recommend you do one of the following best suited
+ To resolve this issue, we recommend you do one of the following best suited
to your use case:

- * add nodes to the affected <<data-tiers,data tiers>>
- +
- TIP: You should enable <<xpack-autoscaling,autoscaling>> for clusters deployed using our {ess}, {ece}, and {eck} platforms.
+ * horizontally scale the affected <<data-tiers,data tiers>> by adding nodes

- * upgrade existing nodes to increase disk space
- +
- TIP: On {ess}, https://support.elastic.co[Elastic Support] intervention may
- become necessary if <<cluster-health,cluster health>> reaches `status:red`.
+ * vertically scale existing nodes to increase disk space

- * delete unneeded indices using the <<indices-delete-index,delete index API>>
+ * delete indices using the <<indices-delete-index,delete index API>>, either
+ permanently if they are unneeded, or temporarily so you can
+ <<snapshots-restore-snapshot,restore them from a snapshot>> later

* update related <<index-lifecycle-management,ILM policy>> to push indices
through to later <<data-tiers,data tiers>>
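As a rough illustration of the ILM option above, a policy that pushes indices
to the warm and cold tiers sooner might look like the following sketch; the
policy name, ages, and rollover thresholds are placeholder values, and tier
migration happens through ILM's implicit migrate action in the warm and cold
phases:

[source,console]
----
PUT _ilm/policy/my-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "7d" }
        }
      },
      "warm": {
        "min_age": "1d",
        "actions": {}
      },
      "cold": {
        "min_age": "7d",
        "actions": {}
      }
    }
  }
}
----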
+
+ TIP: On {ess} and {ece}, you may need to temporarily delete indices via
+ the {cloud}/ec-api-console.html[Elasticsearch API Console] and later
+ <<snapshots-restore-snapshot,restore them from a snapshot>> in order to
+ resolve <<cluster-health,cluster health>> `status:red`, which blocks
+ {cloud}/ec-activity-page.html[attempted changes]. If you experience issues
+ with this resolution flow on {ess}, reach out to
+ https://support.elastic.co[Elastic Support] for assistance.
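A sketch of that delete-then-restore flow, assuming a hypothetical index
`my-index-000001` that is already captured in the snapshot `my_snapshot` of a
registered repository `my_repository`:

[source,console]
----
DELETE /my-index-000001

POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": "my-index-000001"
}
----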
+
+ To avoid this situation, we recommend you do one of the following best
+ suited to your use case:
+
+ * enable <<xpack-autoscaling,autoscaling>> for clusters deployed using our
+ {ess}, {ece}, and {eck} platforms
+
+ * set up {kibana-ref}/kibana-alerts.html[Stack Monitoring Alerts] on top of
+ <<monitor-elasticsearch-cluster,{es} Monitoring>> to be notified before
+ the flood-stage watermark is reached