From 10fb2b1c6d51523b879792cc4f23f8629717c5a4 Mon Sep 17 00:00:00 2001
From: Stef Nestor <26751266+stefnestor@users.noreply.github.com>
Date: Sat, 16 Nov 2024 13:25:20 -0700
Subject: [PATCH 1/2] (Doc+) Expand watermark resolution
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

👋 howdy, team!

Related to https://github.com/elastic/cloud/pull/134247, this expands the
resolution section to better enable users to self-service on ESS rather
than thinking they need to contact Support, per @kunisen's feedback
(cheers!)
---
 .../disk-usage-exceeded.asciidoc              | 35 +++++++++++++------
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/docs/reference/troubleshooting/common-issues/disk-usage-exceeded.asciidoc b/docs/reference/troubleshooting/common-issues/disk-usage-exceeded.asciidoc
index 2222d09c26db2..2e12b0158a253 100644
--- a/docs/reference/troubleshooting/common-issues/disk-usage-exceeded.asciidoc
+++ b/docs/reference/troubleshooting/common-issues/disk-usage-exceeded.asciidoc
@@ -53,7 +53,7 @@ GET _cluster/allocation/explain
 [[fix-watermark-errors-temporary]]
 ==== Temporary Relief
 
-To immediately restore write operations, you can temporarily increase the
+To immediately restore write operations, you may consider temporarily increasing the
 <> and remove the
 <>.
@@ -102,19 +102,34 @@ PUT _cluster/settings
 [[fix-watermark-errors-resolve]]
 ==== Resolve
 
-As a long-term solution, we recommend you do one of the following best suited
+To resolve, we recommend you do one of the following best suited
 to your use case:
 
-* add nodes to the affected <>
-+
-TIP: You should enable <> for clusters deployed using our {ess}, {ece}, and {eck} platforms.
+* horizontally scale nodes of the affected <>
 
-* upgrade existing nodes to increase disk space
-+
-TIP: On {ess}, https://support.elastic.co[Elastic Support] intervention may
-become necessary if <> reaches `status:red`.
+* vertically scale existing nodes to increase disk space
 
-* delete unneeded indices using the <>
+* delete indices using the <>, either
+permanently if unneeded or temporarily to later
+<>
 
 * update related <> to push indices
 through to later <>
+
+TIP: On {ess} and {ece}, indices may need to be temporarily deleted via
+its {cloud}/ec-api-console.html[Elasticsearch API Console] to later
+<> in order to resolve
+<> `status:red` which will block
+{cloud}/ec-activity-page.html[attempted changes]. If you experience issues
+with this resolution flow on {ess}, kindly reach out to
+https://support.elastic.co[Elastic Support] for assistance.
+
+To avoid this situation, we recommend you do one of the following best
+suited to your use case:
+
+* enable <> for clusters deployed using our
+{ess}, {ece}, and {eck} platforms
+
+* setup {kibana-ref}/kibana-alerts.html[Stack Monitoring Alerts] on top of
+<> to be notified before
+flood-stage watermark is reached

From 2eea9803a0631bf0ccc0a84d1298fbf9649130dc Mon Sep 17 00:00:00 2001
From: Kuniyasu Sen
Date: Mon, 18 Nov 2024 23:12:57 +0900
Subject: [PATCH 2/2] Added some changes to make it more recognizable
 between ESS and self-managed
---
 .../troubleshooting-watermark-widget.asciidoc | 40 +++++++++++++++++++
 .../troubleshooting-watermark.asciidoc        | 25 ++++++++++++
 .../disk-usage-exceeded.asciidoc              | 32 +--------
 3 files changed, 66 insertions(+), 31 deletions(-)
 create mode 100644 docs/reference/tab-widgets/troubleshooting/watermark/troubleshooting-watermark-widget.asciidoc
 create mode 100644 docs/reference/tab-widgets/troubleshooting/watermark/troubleshooting-watermark.asciidoc

diff --git a/docs/reference/tab-widgets/troubleshooting/watermark/troubleshooting-watermark-widget.asciidoc b/docs/reference/tab-widgets/troubleshooting/watermark/troubleshooting-watermark-widget.asciidoc
new file mode 100644
index 0000000000000..43c888d3e352b
--- /dev/null
+++ b/docs/reference/tab-widgets/troubleshooting/watermark/troubleshooting-watermark-widget.asciidoc
@@ -0,0 +1,40 @@
+++++
+<div class="tabs" data-tab-group="host">
+  <div role="tablist" aria-label="Disk usage exceeded">
+    <button role="tab"
+            aria-selected="true"
+            aria-controls="cloud-tab-watermark"
+            id="cloud-watermark">
+      Elasticsearch Service
+    </button>
+    <button role="tab"
+            aria-selected="false"
+            aria-controls="self-managed-tab-watermark"
+            id="self-managed-watermark"
+            tabindex="-1">
+      Self-managed
+    </button>
+  </div>
+  <div tabindex="0"
+       role="tabpanel"
+       id="cloud-tab-watermark"
+       aria-labelledby="cloud-watermark">
+++++
+
+include::troubleshooting-watermark.asciidoc[tag=cloud]
+
+++++
+  </div>
+  <div tabindex="0"
+       role="tabpanel"
+       id="self-managed-tab-watermark"
+       aria-labelledby="self-managed-watermark"
+       hidden="">
+++++
+
+include::troubleshooting-watermark.asciidoc[tag=self-managed]
+
+++++
+  </div>
+</div>
+++++

diff --git a/docs/reference/tab-widgets/troubleshooting/watermark/troubleshooting-watermark.asciidoc b/docs/reference/tab-widgets/troubleshooting/watermark/troubleshooting-watermark.asciidoc
new file mode 100644
index 0000000000000..322bd864c4981
--- /dev/null
+++ b/docs/reference/tab-widgets/troubleshooting/watermark/troubleshooting-watermark.asciidoc
@@ -0,0 +1,25 @@
+// tag::cloud[]
+
+* {ref}/docs-delete.html[Delete unused data].
+* {cloud}/ec-customize-deployment-components.html[Increase the disk size on your Hot data and Content tier (scale up)]
+
+NOTE: If your {es} cluster is unhealthy and reports a status of red, then increasing the disk size of your Hot data and Content tier may fail. You might need to delete some data so the configuration can be edited. If you want to increase your disk size without deleting data, then {cloud}/ec-get-help.html[reach out to Elastic support] and we will assist you with scaling up.
+
+For future prevention, you can do the following:
+
+* {cloud}/ec-customize-deployment-components.html[Increase the disk size on your Hot data and Content tier (scale up)]
+* Enable <> for clusters deployed using our {ess}, {ece}, and {eck} platforms.
+* Update related <> to push indices through to later <>
+
+// end::cloud[]
+
+// tag::self-managed[]
+As a long-term solution, we recommend you do one of the following best suited
+to your use case:
+
+* Add nodes to the affected <>
+* Upgrade existing nodes to increase disk space
+* Delete unneeded indices using the <>
+* Update related <> to push indices through to later <>
+
+// end::self-managed[]

diff --git a/docs/reference/troubleshooting/common-issues/disk-usage-exceeded.asciidoc b/docs/reference/troubleshooting/common-issues/disk-usage-exceeded.asciidoc
index 2e12b0158a253..3e7c170b4707e 100644
--- a/docs/reference/troubleshooting/common-issues/disk-usage-exceeded.asciidoc
+++ b/docs/reference/troubleshooting/common-issues/disk-usage-exceeded.asciidoc
@@ -102,34 +102,4 @@ PUT _cluster/settings
 [[fix-watermark-errors-resolve]]
 ==== Resolve
 
-To resolve, we recommend you do one of the following best suited
-to your use case:
-
-* horizontally scale nodes of the affected <>
-
-* vertically scale existing nodes to increase disk space
-
-* delete indices using the <>, either
-permanently if unneeded or temporarily to later
-<>
-
-* update related <> to push indices
-through to later <>
-
-TIP: On {ess} and {ece}, indices may need to be temporarily deleted via
-its {cloud}/ec-api-console.html[Elasticsearch API Console] to later
-<> in order to resolve
-<> `status:red` which will block
-{cloud}/ec-activity-page.html[attempted changes]. If you experience issues
-with this resolution flow on {ess}, kindly reach out to
-https://support.elastic.co[Elastic Support] for assistance.
-
-To avoid this situation, we recommend you do one of the following best
-suited to your use case:
-
-* enable <> for clusters deployed using our
-{ess}, {ece}, and {eck} platforms
-
-* setup {kibana-ref}/kibana-alerts.html[Stack Monitoring Alerts] on top of
-<> to be notified before
-flood-stage watermark is reached
+include::{es-ref-dir}/tab-widgets/troubleshooting/watermark/troubleshooting-watermark-widget.asciidoc[]
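For reviewers, a note on the stages these patches keep referring to: by default {es} uses a low watermark at 85% disk usage (no new shards allocated to the node), a high watermark at 90% (shards relocated away), and a flood-stage watermark at 95% (indices on the node get the `index.blocks.read_only_allow_delete` block that this troubleshooting page resolves). A minimal offline sketch of how a node's usage maps onto those defaults — the `watermark_stage` helper is purely illustrative, not an {es} API:

```python
def watermark_stage(used_bytes, total_bytes,
                    low=0.85, high=0.90, flood=0.95):
    """Classify disk usage against the default Elasticsearch disk
    allocation watermarks (illustrative helper; the real settings are
    cluster.routing.allocation.disk.watermark.{low,high,flood_stage})."""
    usage = used_bytes / total_bytes
    if usage >= flood:
        return "flood_stage"  # writes blocked via read_only_allow_delete
    if usage >= high:
        return "high"         # shards relocated off the node
    if usage >= low:
        return "low"          # no new shards allocated to the node
    return "ok"

# 1 TB node at various fill levels (GB used out of 1000 GB):
for used in (800, 880, 920, 960):
    print(used, watermark_stage(used, 1000))
```

Running the loop prints `ok`, `low`, `high`, and `flood_stage` in turn, which matches the escalation path the Resolve section walks back down: once usage drops under the high watermark again, the block is lifted (or can be removed manually as shown in the Temporary Relief section).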