diff --git a/troubleshoot/deployments/cloud-enterprise/troubleshooting-container-engines.md b/troubleshoot/deployments/cloud-enterprise/troubleshooting-container-engines.md
index c1fe3cd87f..249aa069a2 100644
--- a/troubleshoot/deployments/cloud-enterprise/troubleshooting-container-engines.md
+++ b/troubleshoot/deployments/cloud-enterprise/troubleshooting-container-engines.md
@@ -38,7 +38,7 @@ This should indicate an issue with the {{es}} configuration rather than any Dock
 
 While troubleshooting `unhealthy` {{ece}} system containers (name prefix `frc-`), *some* may be restarted while others should not.
 
-{{ece}}'s [runners](https://www.elastic.co/guide/en/cloud-enterprise/current/get-runners.html) will automatically create or restart missing system containers. If you’re attempting to permanently remove a system container by removing its role from the host, you’d instead [update runner roles](https://www.elastic.co/guide/en/cloud-enterprise/current/set-runner-roles.html). If eligible system containers return to an `unhealthy` status after restart, we recommend reviewing their start-up Docker [`logs`](https://docs.docker.com/reference/cli/docker/container/logs/).
+{{ece}}'s [runners](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-get-runners) will automatically create or restart missing system containers. If you’re attempting to permanently remove a system container by removing its role from the host, you’d instead [update runner roles](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-set-runner-roles). If eligible system containers return to an `unhealthy` status after restart, we recommend reviewing their start-up Docker [`logs`](https://docs.docker.com/reference/cli/docker/container/logs/).
 
 It is safe to restart the following via Docker [`stop`](https://docs.docker.com/reference/cli/docker/container/stop/) followed by Docker [`rm`](https://docs.docker.com/reference/cli/docker/container/rm/) on:
 
diff --git a/troubleshoot/kibana/migration-failures.md b/troubleshoot/kibana/migration-failures.md
index 0d6c3c49a4..652f6fdc0f 100644
--- a/troubleshoot/kibana/migration-failures.md
+++ b/troubleshoot/kibana/migration-failures.md
@@ -156,7 +156,7 @@ If the cluster exceeded the low watermark for disk usage, the output should cont
 "The node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [11.692661332965082%]"
 ```
 
-Refer to the {{es}} guide for how to [fix common cluster issues](https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-usage-exceeded.html).
+Refer to the {{es}} guide for how to [fix common cluster issues](https://www.elastic.co/guide/en/elasticsearch/reference/current/fix-watermark-errors.html).
 
 If routing allocation is the issue, the `_cluster/allocation/explain` API will return an entry similar to this:
 
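The `_cluster/allocation/explain` API referenced in the hunk above can be called directly to produce that output. A minimal sketch, assuming `$ES_URL`, `$ES_USER`, and `$ES_PASSWORD` are placeholders for the affected cluster's endpoint and credentials:

```shell
# With no request body, Elasticsearch explains the allocation of the
# first unassigned shard it finds; pass an index/shard/primary body
# to target a specific shard instead.
curl -s -u "$ES_USER":"$ES_PASSWORD" "$ES_URL/_cluster/allocation/explain?pretty"
```
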
diff --git a/troubleshoot/observability/apm/known-issues.md b/troubleshoot/observability/apm/known-issues.md
index 28bd49d5e5..c9ed73fe92 100644
--- a/troubleshoot/observability/apm/known-issues.md
+++ b/troubleshoot/observability/apm/known-issues.md
@@ -111,7 +111,7 @@ There are three ways to fix this error:
 1. Find broken rules
 
    :::::{admonition}
-   To identify rules in this exact state, you can use the [find rules endpoint](https://www.elastic.co/guide/en/kibana/current/find-rules-api.html) and search for the APM anomaly rule type as well as this exact error message indicating that the rule is in the broken state. We will also use the `fields` parameter to specify only the fields required when making the update request later.
+   To identify rules in this exact state, you can use the [find rules endpoint](https://www.elastic.co/docs/api/doc/kibana/v8/group/endpoint-alerting) and search for the APM anomaly rule type as well as this exact error message indicating that the rule is in the broken state. We will also use the `fields` parameter to specify only the fields required when making the update request later.
 
    * `search_fields=alertTypeId`
    * `search=apm.anomaly`
@@ -188,7 +188,7 @@ There are three ways to fix this error:
 3. Update each rule using the `PUT /api/alerting/rule/{{id}}` API
 
    ::::{admonition}
-   For each rule, submit a PUT request to the [update rule endpoint](https://www.elastic.co/guide/en/kibana/current/update-rule-api.html) using that rule’s ID and its stored update document from the previous step. For example, assuming the first broken rule’s ID is `046c0d4f`:
+   For each rule, submit a PUT request to the [update rule endpoint](https://www.elastic.co/docs/api/doc/kibana/v8/group/endpoint-alerting) using that rule’s ID and its stored update document from the previous step. For example, assuming the first broken rule’s ID is `046c0d4f`:
 
    ```shell
    curl -u "$KIBANA_USER":"$KIBANA_PASSWORD" -XPUT "$KIBANA_URL/api/alerting/rule/046c0d4f" -H 'Content-Type: application/json' -H 'kbn-xsrf: rule-update' -d @046c0d4f.json
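For the find-rules step in the first hunk above, the listed query parameters can be combined into a single request. A minimal sketch, assuming the `GET /api/alerting/rules/_find` endpoint and illustrative `fields` values (the exact field list depends on what the later update request needs):

```shell
# Find APM anomaly rules by rule type; repeat `fields` once per
# attribute to limit the response to what the update step requires.
# The two `fields` values below are placeholders, not the full set.
curl -u "$KIBANA_USER":"$KIBANA_PASSWORD" \
  "$KIBANA_URL/api/alerting/rules/_find?search_fields=alertTypeId&search=apm.anomaly&fields=name&fields=params"
```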