diff --git a/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md b/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md
index 9436a0364d..71c922c2aa 100644
--- a/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md
+++ b/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md
@@ -24,7 +24,7 @@ The **Deployments** page in the Cloud UI provides several ways to find deploymen
 * Search for partial or whole deployment names or IDs in the search text box
 * Add filters to the **Deployments** view to filter for specific conditions:
 
-   :::{image} ../../../images/cloud-enterprise-deployment-filter.png
+   :::{image} /troubleshoot/images/cloud-enterprise-deployment-filter.png
    :alt: Add a filter
    :::
 
diff --git a/troubleshoot/deployments/cloud-enterprise/common-issues.md b/troubleshoot/deployments/cloud-enterprise/common-issues.md
index c8a03107c7..76b143a9b9 100644
--- a/troubleshoot/deployments/cloud-enterprise/common-issues.md
+++ b/troubleshoot/deployments/cloud-enterprise/common-issues.md
@@ -102,7 +102,7 @@ To check if you are affected and to resolve this issue:
 
    A quick check in the AWS EC2 Dashboard confirms that `192.168.40.73` is a private IP address, which is not accessible externally:
 
-   :::{image} ../../../images/cloud-enterprise-ece-aws-private-ip.png
+   :::{image} /troubleshoot/images/cloud-enterprise-ece-aws-private-ip.png
    :alt: Private IP address information in AWS EC2 Dashboard
    :::
@@ -111,7 +111,7 @@ To check if you are affected and to resolve this issue:
    * For the Cloud UI, use the public host name or public IP. In this example, the Cloud UI is accessible externally at `ec2-54-162-168-86.compute-1.amazonaws.com:12400`.
    * For Elasticsearch and Kibana, [update your cluster endpoint](../../../deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md) to use the public IP address. In this example, you can use `54.162.168.86`:
 
-   :::{image} ../../../images/cloud-enterprise-ece-aws-public-ip.png
+   :::{image} /troubleshoot/images/cloud-enterprise-ece-aws-public-ip.png
    :alt: Public IP address is used for cluster endpoints in the Cloud UI
    :::
 
diff --git a/troubleshoot/elasticsearch/add-tier.md b/troubleshoot/elasticsearch/add-tier.md
index ef518f8fef..a36916afcb 100644
--- a/troubleshoot/elasticsearch/add-tier.md
+++ b/troubleshoot/elasticsearch/add-tier.md
@@ -30,7 +30,7 @@ In order to get the shards assigned we need enable a new tier in the deployment.
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/allow-all-cluster-allocation.md b/troubleshoot/elasticsearch/allow-all-cluster-allocation.md
index 9756748a1a..ffbfd3953c 100644
--- a/troubleshoot/elasticsearch/allow-all-cluster-allocation.md
+++ b/troubleshoot/elasticsearch/allow-all-cluster-allocation.md
@@ -30,7 +30,7 @@ We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.e
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/allow-all-index-allocation.md b/troubleshoot/elasticsearch/allow-all-index-allocation.md
index adcf4a8f5b..f5346b9e18 100644
--- a/troubleshoot/elasticsearch/allow-all-index-allocation.md
+++ b/troubleshoot/elasticsearch/allow-all-index-allocation.md
@@ -31,7 +31,7 @@ In order to get the shards assigned we’ll need to change the value of the [con
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md b/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md
index 25382bf1c5..01bccbdea1 100644
--- a/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md
+++ b/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md
@@ -34,7 +34,7 @@ Reducing the replicas of an index can potentially reduce search throughput and d
 5. For each index you chose, click on its name, then on the panel that appears click `Edit settings`, reduce the value of the `index.number_of_replicas` to the desired value and then click `Save`.
 
-   :::{image} ../../images/elasticsearch-reference-reduce_replicas.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-reduce_replicas.png
    :alt: Reducing replicas
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/diagnose-unassigned-shards.md b/troubleshoot/elasticsearch/diagnose-unassigned-shards.md
index 7622b52d3c..18731a468b 100644
--- a/troubleshoot/elasticsearch/diagnose-unassigned-shards.md
+++ b/troubleshoot/elasticsearch/diagnose-unassigned-shards.md
@@ -28,7 +28,7 @@ In order to diagnose the unassigned shards, follow the next steps:
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md b/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md
index bc2bf3e0d9..6d54151da1 100644
--- a/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md
+++ b/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md
@@ -26,7 +26,7 @@ First mark the repository as read-only on the secondary deployments:
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Stack Management > Snapshot and Restore > Repositories**.
 
-   :::{image} ../../images/elasticsearch-reference-repositories.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-repositories.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
@@ -39,7 +39,7 @@ Note that we’re now configuring the primary (current) deployment.
 1. Open the primary deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Stack Management > Snapshot and Restore > Repositories**.
 
-   :::{image} ../../images/elasticsearch-reference-repositories.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-repositories.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/elasticsearch-client-net-api/logging-with-fiddler.md b/troubleshoot/elasticsearch/elasticsearch-client-net-api/logging-with-fiddler.md
index d44cbb9c77..24336e4753 100644
--- a/troubleshoot/elasticsearch/elasticsearch-client-net-api/logging-with-fiddler.md
+++ b/troubleshoot/elasticsearch/elasticsearch-client-net-api/logging-with-fiddler.md
@@ -11,7 +11,7 @@ A web debugging proxy such as [Fiddler](http://www.telerik.com/fiddler) is a use
 To capture traffic against a remote cluster is as simple as launching Fiddler! You may want to also filter traffic to only show requests to the remote cluster by using the filters tab
 
-:::{image} ../../../images/elasticsearch-client-net-api-capture-requests-remotehost.png
+:::{image} /troubleshoot/images/elasticsearch-client-net-api-capture-requests-remotehost.png
 :alt: Capturing requests to a remote host
 :::
 
@@ -37,13 +37,13 @@ var client = new ElasticClient(connectionSettings);
 
 With Fiddler running, the requests and responses will now be captured and can be inspected in the Inspectors tab
 
-:::{image} ../../../images/elasticsearch-client-net-api-inspect-requests.png
+:::{image} /troubleshoot/images/elasticsearch-client-net-api-inspect-requests.png
 :alt: Inspecting requests and responses
 :::
 
 As before, you may also want to filter traffic to only show requests to `ipv4.fiddler` on the port on which you are running Elasticsearch.
 
-:::{image} ../../../images/elasticsearch-client-net-api-capture-requests-localhost.png
+:::{image} /troubleshoot/images/elasticsearch-client-net-api-capture-requests-localhost.png
 :alt: Capturing requests to localhost
 :::
 
diff --git a/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md b/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md
index c3e620c9fa..d97ba2d915 100644
--- a/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md
+++ b/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md
@@ -17,7 +17,7 @@ mapped_pages:
 2. On the **Hosted deployments** panel, click the gear under the `Manage deployment` column that corresponds to the name of your deployment.
 3. Go to `Actions > Edit deployment` and then go to the `Master instances` section:
 
-   :::{image} ../../images/elasticsearch-reference-increase-disk-capacity-master-node.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-increase-disk-capacity-master-node.png
    :alt: Increase disk capacity of master nodes
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md b/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md
index 8cfd8d38b4..4a04d05f27 100644
--- a/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md
+++ b/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md
@@ -17,7 +17,7 @@ mapped_pages:
 2. On the **Hosted deployments** panel, click the gear under the `Manage deployment` column that corresponds to the name of your deployment.
 3. Go to `Actions > Edit deployment` and then go to the `Coordinating instances` or the `Machine Learning instances` section depending on the roles listed in the diagnosis:
 
-   :::{image} ../../images/elasticsearch-reference-increase-disk-capacity-other-node.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-increase-disk-capacity-other-node.png
    :alt: Increase disk capacity of other nodes
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/increase-capacity-data-node.md b/troubleshoot/elasticsearch/increase-capacity-data-node.md
index c7b24c34bb..e46cbf92e6 100644
--- a/troubleshoot/elasticsearch/increase-capacity-data-node.md
+++ b/troubleshoot/elasticsearch/increase-capacity-data-node.md
@@ -15,28 +15,28 @@ In order to increase the disk capacity of the data nodes in your cluster:
 2. On the **Hosted deployments** panel, click the gear under the `Manage deployment` column that corresponds to the name of your deployment.
 3. If autoscaling is available but not enabled, please enable it. You can do this by clicking the button `Enable autoscaling` on a banner like the one below:
 
-   :::{image} ../../images/elasticsearch-reference-autoscaling_banner.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_banner.png
    :alt: Autoscaling banner
    :screenshot:
    :::
 
    Or you can go to `Actions > Edit deployment`, check the checkbox `Autoscale` and click `save` at the bottom of the page.
 
-   :::{image} ../../images/elasticsearch-reference-enable_autoscaling.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-enable_autoscaling.png
    :alt: Enabling autoscaling
    :screenshot:
    :::
 
 4. If autoscaling has succeeded the cluster should return to `healthy` status. If the cluster is still out of disk, please check if autoscaling has reached its limits. You will be notified about this by the following banner:
 
-   :::{image} ../../images/elasticsearch-reference-autoscaling_limits_banner.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_limits_banner.png
    :alt: Autoscaling banner
    :screenshot:
    :::
 
    or you can go to `Actions > Edit deployment` and look for the label `LIMIT REACHED` as shown below:
 
-   :::{image} ../../images/elasticsearch-reference-reached_autoscaling_limits.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png
    :alt: Autoscaling limits reached
    :screenshot:
    :::
diff --git a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md
index 5eee8caf06..4655919eb0 100644
--- a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md
+++ b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md
@@ -30,7 +30,7 @@ In order to get the shards assigned we’ll need to increase the number of shard
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/increase-shard-limit.md b/troubleshoot/elasticsearch/increase-shard-limit.md
index 20e8f68c55..c470e4f8bd 100644
--- a/troubleshoot/elasticsearch/increase-shard-limit.md
+++ b/troubleshoot/elasticsearch/increase-shard-limit.md
@@ -30,7 +30,7 @@ In order to get the shards assigned we’ll need to increase the number of shard
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/increase-tier-capacity.md b/troubleshoot/elasticsearch/increase-tier-capacity.md
index 501d13d2dc..da59b5ec79 100644
--- a/troubleshoot/elasticsearch/increase-tier-capacity.md
+++ b/troubleshoot/elasticsearch/increase-tier-capacity.md
@@ -28,7 +28,7 @@ One way to get the replica shards assigned is to add an availability zone. This
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
@@ -57,7 +57,7 @@ The response will look like this:
 
 Now that you know the tier, you want to increase the number of nodes in that tier so that the replicas can be allocated. To do this you can either increase the size per zone to increase the number of nodes in the availability zone(s) you were already using, or increase the number of availability zones. Go back to the deployment’s landing page by clicking on the three horizontal bars on the top left of the screen and choosing **Manage this deployment**. On that page click the **Manage** button, and choose **Edit deployment**. Note that you must be logged in to [https://cloud.elastic.co/](https://cloud.elastic.co/) in order to do this. In the {{es}} section, find the tier where the replica shards could not be assigned.
 
-:::{image} ../../images/elasticsearch-reference-ess-advanced-config-data-tiers.png
+:::{image} /troubleshoot/images/elasticsearch-reference-ess-advanced-config-data-tiers.png
 :alt: {{kib}} Console
 :screenshot:
 :::
diff --git a/troubleshoot/elasticsearch/repeated-snapshot-failures.md b/troubleshoot/elasticsearch/repeated-snapshot-failures.md
index e7226e6c65..3a35176a1b 100644
--- a/troubleshoot/elasticsearch/repeated-snapshot-failures.md
+++ b/troubleshoot/elasticsearch/repeated-snapshot-failures.md
@@ -28,7 +28,7 @@ In order to check the status of failing {{slm}} policies we need to go to Kibana
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/restore-from-snapshot.md b/troubleshoot/elasticsearch/restore-from-snapshot.md
index 5b211ea1b4..94f2fef18b 100644
--- a/troubleshoot/elasticsearch/restore-from-snapshot.md
+++ b/troubleshoot/elasticsearch/restore-from-snapshot.md
@@ -28,7 +28,7 @@ In order to restore the indices and data streams that are missing data:
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/start-ilm.md b/troubleshoot/elasticsearch/start-ilm.md
index f288ac5007..50530da522 100644
--- a/troubleshoot/elasticsearch/start-ilm.md
+++ b/troubleshoot/elasticsearch/start-ilm.md
@@ -39,7 +39,7 @@ In order to start {{ilm}} we need to go to Kibana and execute the [start command
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
@@ -127,7 +127,7 @@ In order to start {{slm}} we need to go to Kibana and execute the [start command
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md b/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md
index be66f2fa6f..678032e346 100644
--- a/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md
+++ b/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md
@@ -30,7 +30,7 @@ In order to get the shards assigned we need to call the [migrate to data tiers r
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md b/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md
index a085b1a5e0..a50ead528c 100644
--- a/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md
+++ b/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md
@@ -31,7 +31,7 @@ If you’re confident your changes won’t destabilize the cluster, you can temp
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/elasticsearch-reference-kibana-console.png
+   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
@@ -241,7 +241,7 @@ If you’re confident your changes won’t destabilize the cluster, you can temp
 3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
 
-   :::{image} ../../images/kibana-console.png
+   :::{image} /troubleshoot/images/kibana-console.png
    :alt: {{kib}} Console
    :screenshot:
    :::
 
diff --git a/troubleshoot/images/apm-agent-dotnet-classic-pipeline.png b/troubleshoot/images/apm-agent-dotnet-classic-pipeline.png
new file mode 100644
index 0000000000..3b18fa0fc3
Binary files /dev/null and b/troubleshoot/images/apm-agent-dotnet-classic-pipeline.png differ
diff --git a/troubleshoot/images/apm-agent-dotnet-integrated-pipeline.png b/troubleshoot/images/apm-agent-dotnet-integrated-pipeline.png
new file mode 100644
index 0000000000..22b280e922
Binary files /dev/null and b/troubleshoot/images/apm-agent-dotnet-integrated-pipeline.png differ
diff --git a/troubleshoot/images/cloud-ec-ce-configuration-change-failure.png b/troubleshoot/images/cloud-ec-ce-configuration-change-failure.png
new file mode 100644
index 0000000000..0e9c1c29bc
Binary files /dev/null and b/troubleshoot/images/cloud-ec-ce-configuration-change-failure.png differ
diff --git a/troubleshoot/images/cloud-ec-ce-deployment-health-warning.png b/troubleshoot/images/cloud-ec-ce-deployment-health-warning.png
new file mode 100644
index 0000000000..319c4bcd2a
Binary files /dev/null and b/troubleshoot/images/cloud-ec-ce-deployment-health-warning.png differ
diff --git a/troubleshoot/images/cloud-ec-deployment-instances-config.png b/troubleshoot/images/cloud-ec-deployment-instances-config.png
new file mode 100644
index 0000000000..a00b187e98
Binary files /dev/null and b/troubleshoot/images/cloud-ec-deployment-instances-config.png differ
diff --git a/troubleshoot/images/cloud-ec-full-disk-multiple-nodes.png b/troubleshoot/images/cloud-ec-full-disk-multiple-nodes.png
new file mode 100644
index 0000000000..3c638a83e4
Binary files /dev/null and b/troubleshoot/images/cloud-ec-full-disk-multiple-nodes.png differ
diff --git a/troubleshoot/images/cloud-ec-full-disk-single-node.png b/troubleshoot/images/cloud-ec-full-disk-single-node.png
new file mode 100644
index 0000000000..99178869af
Binary files /dev/null and b/troubleshoot/images/cloud-ec-full-disk-single-node.png differ
diff --git a/troubleshoot/images/cloud-ec-increase-size-per-zone.png b/troubleshoot/images/cloud-ec-increase-size-per-zone.png
new file mode 100644
index 0000000000..6416fd23fc
Binary files /dev/null and b/troubleshoot/images/cloud-ec-increase-size-per-zone.png differ
diff --git a/troubleshoot/images/cloud-ec-instances-filtered-by-master-id.png b/troubleshoot/images/cloud-ec-instances-filtered-by-master-id.png
new file mode 100644
index 0000000000..dfd707ef74
Binary files /dev/null and b/troubleshoot/images/cloud-ec-instances-filtered-by-master-id.png differ
diff --git a/troubleshoot/images/cloud-ec-logs-metrics-page.png b/troubleshoot/images/cloud-ec-logs-metrics-page.png
new file mode 100644
index 0000000000..f96b1a6a22
Binary files /dev/null and b/troubleshoot/images/cloud-ec-logs-metrics-page.png differ
diff --git a/troubleshoot/images/cloud-ec-node-attribute.png b/troubleshoot/images/cloud-ec-node-attribute.png
new file mode 100644
index 0000000000..ad4614634b
Binary files /dev/null and b/troubleshoot/images/cloud-ec-node-attribute.png differ
diff --git a/troubleshoot/images/cloud-ec-node-types-data-tiers.png b/troubleshoot/images/cloud-ec-node-types-data-tiers.png
new file mode 100644
index 0000000000..8c3100d8ad
Binary files /dev/null and b/troubleshoot/images/cloud-ec-node-types-data-tiers.png differ
diff --git a/troubleshoot/images/cloud-ec-quick-link-to-deployment.png b/troubleshoot/images/cloud-ec-quick-link-to-deployment.png
new file mode 100644
index 0000000000..6dae4dbc31
Binary files /dev/null and b/troubleshoot/images/cloud-ec-quick-link-to-deployment.png differ
diff --git a/troubleshoot/images/cloud-ec-red-yellow-indices.png b/troubleshoot/images/cloud-ec-red-yellow-indices.png
new file mode 100644
index 0000000000..d59b85d841
Binary files /dev/null and b/troubleshoot/images/cloud-ec-red-yellow-indices.png differ
diff --git a/troubleshoot/images/cloud-ec-unhealthy-deployment.png b/troubleshoot/images/cloud-ec-unhealthy-deployment.png
new file mode 100644
index 0000000000..5e40e109f9
Binary files /dev/null and b/troubleshoot/images/cloud-ec-unhealthy-deployment.png differ
diff --git a/troubleshoot/images/cloud-enterprise-deployment-filter.png b/troubleshoot/images/cloud-enterprise-deployment-filter.png
new file mode 100644
index 0000000000..56aaf7fbeb
Binary files /dev/null and b/troubleshoot/images/cloud-enterprise-deployment-filter.png differ
diff --git a/troubleshoot/images/cloud-enterprise-ece-aws-private-ip.png b/troubleshoot/images/cloud-enterprise-ece-aws-private-ip.png
new file mode 100644
index 0000000000..1623a83098
Binary files /dev/null and b/troubleshoot/images/cloud-enterprise-ece-aws-private-ip.png differ
diff --git a/troubleshoot/images/cloud-enterprise-ece-aws-public-ip.png b/troubleshoot/images/cloud-enterprise-ece-aws-public-ip.png
new file mode 100644
index 0000000000..b3010ab2b4
Binary files /dev/null and b/troubleshoot/images/cloud-enterprise-ece-aws-public-ip.png differ
diff --git a/troubleshoot/images/cloud-metrics-cpu.png b/troubleshoot/images/cloud-metrics-cpu.png
new file mode 100644
index 0000000000..0ff171e7f5
Binary files /dev/null and b/troubleshoot/images/cloud-metrics-cpu.png differ
diff --git a/troubleshoot/images/cloud-metrics-credits.png b/troubleshoot/images/cloud-metrics-credits.png
new file mode 100644
index 0000000000..7c77e5857d
Binary files /dev/null and b/troubleshoot/images/cloud-metrics-credits.png differ
diff --git a/troubleshoot/images/cloud-metrics-high-memory-pressure.png b/troubleshoot/images/cloud-metrics-high-memory-pressure.png
new file mode 100644
index 0000000000..2c5b7fc8d0
Binary files /dev/null and b/troubleshoot/images/cloud-metrics-high-memory-pressure.png differ
diff --git a/troubleshoot/images/cloud-metrics-high-response-times.png b/troubleshoot/images/cloud-metrics-high-response-times.png
new file mode 100644
index 0000000000..1427637f76
Binary files /dev/null and b/troubleshoot/images/cloud-metrics-high-response-times.png differ
diff --git a/troubleshoot/images/cloud-metrics-memory-pressure-sawtooth.png b/troubleshoot/images/cloud-metrics-memory-pressure-sawtooth.png
new file mode 100644
index 0000000000..d11b24aa3f
Binary files /dev/null and b/troubleshoot/images/cloud-metrics-memory-pressure-sawtooth.png differ
diff --git a/troubleshoot/images/cloud-metrics-response-times.png b/troubleshoot/images/cloud-metrics-response-times.png
new file mode 100644
index 0000000000..021e6c4de0
Binary files /dev/null and b/troubleshoot/images/cloud-metrics-response-times.png differ
diff --git a/troubleshoot/images/elasticsearch-client-net-api-capture-requests-localhost.png b/troubleshoot/images/elasticsearch-client-net-api-capture-requests-localhost.png
new file mode 100644
index 0000000000..3730681e5e
Binary files /dev/null and b/troubleshoot/images/elasticsearch-client-net-api-capture-requests-localhost.png differ
diff --git a/troubleshoot/images/elasticsearch-client-net-api-capture-requests-remotehost.png b/troubleshoot/images/elasticsearch-client-net-api-capture-requests-remotehost.png
new file mode 100644
index 0000000000..26a913d2cb
Binary files /dev/null and b/troubleshoot/images/elasticsearch-client-net-api-capture-requests-remotehost.png differ
diff --git a/troubleshoot/images/elasticsearch-client-net-api-inspect-requests.png b/troubleshoot/images/elasticsearch-client-net-api-inspect-requests.png
new file mode 100644
index 0000000000..8b4a79b409
Binary files /dev/null and b/troubleshoot/images/elasticsearch-client-net-api-inspect-requests.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-autoscaling_banner.png b/troubleshoot/images/elasticsearch-reference-autoscaling_banner.png
new file mode 100644
index 0000000000..cffe323851
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-autoscaling_banner.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-autoscaling_limits_banner.png b/troubleshoot/images/elasticsearch-reference-autoscaling_limits_banner.png
new file mode 100644
index 0000000000..6eabffe81a
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-autoscaling_limits_banner.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-enable_autoscaling.png b/troubleshoot/images/elasticsearch-reference-enable_autoscaling.png
new file mode 100644
index 0000000000..2a31cfda4b
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-enable_autoscaling.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-ess-advanced-config-data-tiers.png b/troubleshoot/images/elasticsearch-reference-ess-advanced-config-data-tiers.png
new file mode 100644
index 0000000000..03333bfc89
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-ess-advanced-config-data-tiers.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-increase-disk-capacity-master-node.png b/troubleshoot/images/elasticsearch-reference-increase-disk-capacity-master-node.png
new file mode 100644
index 0000000000..5ebbd4e93e
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-increase-disk-capacity-master-node.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-increase-disk-capacity-other-node.png b/troubleshoot/images/elasticsearch-reference-increase-disk-capacity-other-node.png
new file mode 100644
index 0000000000..e6e98bee45
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-increase-disk-capacity-other-node.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-kibana-console.png b/troubleshoot/images/elasticsearch-reference-kibana-console.png
new file mode 100644
index 0000000000..ac0c39049a
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-kibana-console.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png b/troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png
new file mode 100644
index 0000000000..28dde50d0a
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-reduce_replicas.png b/troubleshoot/images/elasticsearch-reference-reduce_replicas.png
new file mode 100644
index 0000000000..ae3d490244
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-reduce_replicas.png differ
diff --git a/troubleshoot/images/elasticsearch-reference-repositories.png b/troubleshoot/images/elasticsearch-reference-repositories.png
new file mode 100644
index 0000000000..70b1613525
Binary files /dev/null and b/troubleshoot/images/elasticsearch-reference-repositories.png differ
diff --git a/troubleshoot/images/fleet-collect-agent-diagnostics1.png b/troubleshoot/images/fleet-collect-agent-diagnostics1.png
new file mode 100644
index 0000000000..b0c39fadc5
Binary files /dev/null and b/troubleshoot/images/fleet-collect-agent-diagnostics1.png differ
diff --git a/troubleshoot/images/fleet-collect-agent-diagnostics2.png b/troubleshoot/images/fleet-collect-agent-diagnostics2.png
new file mode 100644
index 0000000000..b49d0734ab
Binary files /dev/null and b/troubleshoot/images/fleet-collect-agent-diagnostics2.png differ
diff --git a/troubleshoot/images/kibana-console.png b/troubleshoot/images/kibana-console.png
new file mode 100644
index 0000000000..84e4c4f3a7
Binary files /dev/null and b/troubleshoot/images/kibana-console.png differ
diff --git a/troubleshoot/images/kibana-index-threshold-chart.png b/troubleshoot/images/kibana-index-threshold-chart.png
new file mode 100644
index 0000000000..f83fa9476e
Binary files /dev/null and b/troubleshoot/images/kibana-index-threshold-chart.png differ
diff --git a/troubleshoot/images/kibana-kibana-status-page-7_14_0.png b/troubleshoot/images/kibana-kibana-status-page-7_14_0.png
new file mode 100644
index 0000000000..db172d87ee
Binary files /dev/null and b/troubleshoot/images/kibana-kibana-status-page-7_14_0.png differ
diff --git a/troubleshoot/images/kibana-requests_inspector.png b/troubleshoot/images/kibana-requests_inspector.png
new file mode 100644
index 0000000000..14bde6ac7c
Binary files /dev/null and b/troubleshoot/images/kibana-requests_inspector.png differ
diff --git a/troubleshoot/images/kibana-rule-details-alerts-inactive.png b/troubleshoot/images/kibana-rule-details-alerts-inactive.png
new file mode 100644
index 0000000000..fc82cf465e
Binary files /dev/null and b/troubleshoot/images/kibana-rule-details-alerts-inactive.png differ
diff --git a/troubleshoot/images/kibana-rules-management-health.png b/troubleshoot/images/kibana-rules-management-health.png
new file mode 100644
index 0000000000..54029d7d33
Binary files /dev/null and b/troubleshoot/images/kibana-rules-management-health.png differ
diff --git a/troubleshoot/images/kibana-vector_tile_inspector.png b/troubleshoot/images/kibana-vector_tile_inspector.png
new file mode 100644
index 0000000000..94914802d5
Binary files /dev/null and b/troubleshoot/images/kibana-vector_tile_inspector.png differ
diff --git a/troubleshoot/images/observability-exploratory-view-uptime.png b/troubleshoot/images/observability-exploratory-view-uptime.png
new file mode 100644
index 0000000000..60d7039d67
Binary files /dev/null and b/troubleshoot/images/observability-exploratory-view-uptime.png differ
diff --git a/troubleshoot/images/observability-exploratory-view-ux-page-load-time.png b/troubleshoot/images/observability-exploratory-view-ux-page-load-time.png
new file mode 100644
index 0000000000..c1e7e62c53
Binary files /dev/null and b/troubleshoot/images/observability-exploratory-view-ux-page-load-time.png differ
diff --git a/troubleshoot/images/observability-exploratory-view.png b/troubleshoot/images/observability-exploratory-view.png
new file mode 100644
index 0000000000..75f6db4895
Binary files /dev/null and b/troubleshoot/images/observability-exploratory-view.png differ
diff --git a/troubleshoot/images/observability-help-icon.png b/troubleshoot/images/observability-help-icon.png
new file mode 100644
index 0000000000..49eefac61a
Binary files /dev/null and b/troubleshoot/images/observability-help-icon.png differ
diff --git a/troubleshoot/images/observability-inspect-enable.png b/troubleshoot/images/observability-inspect-enable.png
new file mode 100644
index 0000000000..c85c0a9b62
Binary files /dev/null and b/troubleshoot/images/observability-inspect-enable.png differ
diff --git a/troubleshoot/images/observability-inspect-flyout-dropdown.png b/troubleshoot/images/observability-inspect-flyout-dropdown.png
new file mode 100644
index 0000000000..28f0b01b73
Binary files /dev/null and b/troubleshoot/images/observability-inspect-flyout-dropdown.png differ
diff --git a/troubleshoot/images/observability-inspect-flyout-request.png b/troubleshoot/images/observability-inspect-flyout-request.png
new file mode 100644
index 0000000000..52d6fac844
Binary files /dev/null and b/troubleshoot/images/observability-inspect-flyout-request.png differ
diff --git a/troubleshoot/images/observability-inspect-flyout-response.png b/troubleshoot/images/observability-inspect-flyout-response.png
new file mode 100644
index 0000000000..62bba8804c
Binary files /dev/null and b/troubleshoot/images/observability-inspect-flyout-response.png differ
diff --git a/troubleshoot/images/observability-inspect-flyout-statistics.png b/troubleshoot/images/observability-inspect-flyout-statistics.png
new file mode 100644
index 0000000000..58ba8e4808
Binary files /dev/null and b/troubleshoot/images/observability-inspect-flyout-statistics.png differ
diff --git a/troubleshoot/images/observability-inspect-flyout.png b/troubleshoot/images/observability-inspect-flyout.png
new file mode 100644
index 0000000000..badb67807f
Binary files /dev/null and b/troubleshoot/images/observability-inspect-flyout.png differ
diff --git a/troubleshoot/images/observability-turn-on-system-metrics.png b/troubleshoot/images/observability-turn-on-system-metrics.png
new file mode 100644
index 0000000000..8393970b36
Binary files /dev/null and b/troubleshoot/images/observability-turn-on-system-metrics.png differ
diff --git a/troubleshoot/images/security-endpoints-transform-failed.png b/troubleshoot/images/security-endpoints-transform-failed.png
new file mode 100644
index 0000000000..1b46dd539f
Binary files /dev/null and b/troubleshoot/images/security-endpoints-transform-failed.png differ
diff --git a/troubleshoot/images/security-field-warning-icon.png b/troubleshoot/images/security-field-warning-icon.png
new file mode 100644
index 0000000000..b77960d73e
Binary files /dev/null and b/troubleshoot/images/security-field-warning-icon.png differ
diff --git a/troubleshoot/images/security-rules-ts-ml-job-stopped.png b/troubleshoot/images/security-rules-ts-ml-job-stopped.png
new file mode 100644
index 0000000000..7fba6ed8f9
Binary files /dev/null and b/troubleshoot/images/security-rules-ts-ml-job-stopped.png differ
diff --git a/troubleshoot/images/security-timestamp-override.png b/troubleshoot/images/security-timestamp-override.png
new file mode 100644
index 0000000000..615a7db008
Binary files /dev/null and b/troubleshoot/images/security-timestamp-override.png differ
diff --git 
a/troubleshoot/images/security-transforms-start.png b/troubleshoot/images/security-transforms-start.png new file mode 100644 index 0000000000..1dcc9735df Binary files /dev/null and b/troubleshoot/images/security-transforms-start.png differ diff --git a/troubleshoot/images/security-unhealthy-agent-fleet.png b/troubleshoot/images/security-unhealthy-agent-fleet.png new file mode 100644 index 0000000000..ea140f2993 Binary files /dev/null and b/troubleshoot/images/security-unhealthy-agent-fleet.png differ diff --git a/troubleshoot/images/security-warning-icon-message.png b/troubleshoot/images/security-warning-icon-message.png new file mode 100644 index 0000000000..07e6fded6a Binary files /dev/null and b/troubleshoot/images/security-warning-icon-message.png differ diff --git a/troubleshoot/images/security-warning-type-conflicts.png b/troubleshoot/images/security-warning-type-conflicts.png new file mode 100644 index 0000000000..45058f5e54 Binary files /dev/null and b/troubleshoot/images/security-warning-type-conflicts.png differ diff --git a/troubleshoot/images/security-warning-unmapped-fields.png b/troubleshoot/images/security-warning-unmapped-fields.png new file mode 100644 index 0000000000..2b167c0029 Binary files /dev/null and b/troubleshoot/images/security-warning-unmapped-fields.png differ diff --git a/troubleshoot/ingest/fleet/common-problems.md b/troubleshoot/ingest/fleet/common-problems.md index f5cbfe8afb..ca1080f0b6 100644 --- a/troubleshoot/ingest/fleet/common-problems.md +++ b/troubleshoot/ingest/fleet/common-problems.md @@ -371,14 +371,14 @@ If you want to omit the raw events from the diagnostic, add the flag `--exclude- 2. In the **Host** column, click the agent’s name. 3. Select the **Diagnostics** tab and click the **Request diagnostics .zip** button. 
- :::{image} ../../../images/fleet-collect-agent-diagnostics1.png + :::{image} /troubleshoot/images/fleet-collect-agent-diagnostics1.png :alt: Collect agent diagnostics under agent details :screenshot: ::: 4. In the **Request Diagnostics** pop-up, select **Collect additional CPU metrics** if you’d like detailed CPU data. - :::{image} ../../../images/fleet-collect-agent-diagnostics2.png + :::{image} /troubleshoot/images/fleet-collect-agent-diagnostics2.png :alt: Collect agent diagnostics confirmation pop-up :screenshot: ::: diff --git a/troubleshoot/kibana/access.md b/troubleshoot/kibana/access.md index f9c70faace..34573b2065 100644 --- a/troubleshoot/kibana/access.md +++ b/troubleshoot/kibana/access.md @@ -43,7 +43,7 @@ The status page displays information about the server resource usage and install To view the {{kib}} status page, use the status endpoint. For example, `localhost:5601/status`. -:::{image} ../../images/kibana-kibana-status-page-7_14_0.png +:::{image} /troubleshoot/images/kibana-kibana-status-page-7_14_0.png :alt: Kibana server status page :screenshot: ::: diff --git a/troubleshoot/kibana/alerts.md b/troubleshoot/kibana/alerts.md index 25ed4bb59e..4dd304ab79 100644 --- a/troubleshoot/kibana/alerts.md +++ b/troubleshoot/kibana/alerts.md @@ -32,7 +32,7 @@ The following debugging tools are available: **{{rules-ui}}** in **{{stack-manage-app}}** lists the rules available in the space you’re currently in. When you click a rule name, you are navigated to the [details page](../../explore-analyze/alerts-cases/alerts/create-manage-rules.md#rule-details) for the rule, where you can see currently active alerts. The start date on this page indicates when a rule is triggered, and for what alerts. In addition, the duration of the condition indicates how long the instance is active. 
-:::{image} ../../images/kibana-rule-details-alerts-inactive.png +:::{image} /troubleshoot/images/kibana-rule-details-alerts-inactive.png :alt: Alerting management details :screenshot: ::: @@ -42,7 +42,7 @@ The following debugging tools are available: When creating or editing an index threshold rule, you see a graph of the data the rule will operate against, from some date in the past until now, updated every 5 seconds. -:::{image} ../../images/kibana-index-threshold-chart.png +:::{image} /troubleshoot/images/kibana-index-threshold-chart.png :alt: Index Threshold chart :screenshot: ::: @@ -81,7 +81,7 @@ The result of this HTTP request (and printed to stdout by [kbn-action](https://g The **{{stack-manage-app}}** > **{{rules-ui}}** page contains an error banner that helps to identify the errors for the rules: -:::{image} ../../images/kibana-rules-management-health.png +:::{image} /troubleshoot/images/kibana-rules-management-health.png :alt: Rule management page with the errors banner :screenshot: ::: diff --git a/troubleshoot/kibana/maps.md b/troubleshoot/kibana/maps.md index 75eda3184e..364ec6ecc8 100644 --- a/troubleshoot/kibana/maps.md +++ b/troubleshoot/kibana/maps.md @@ -16,12 +16,12 @@ Use the information in this section to inspect Elasticsearch requests and find s Maps uses the [{{es}} vector tile search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search-mvt) and the [{{es}} search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) to get documents and aggregation results from {{es}}. Use **Vector tiles** inspector to view {{es}} vector tile search API requests. Use **Requests** inspector to view {{es}} search API requests. 
-:::{image} ../../images/kibana-vector_tile_inspector.png +:::{image} /troubleshoot/images/kibana-vector_tile_inspector.png :alt: vector tile inspector :screenshot: ::: -:::{image} ../../images/kibana-requests_inspector.png +:::{image} /troubleshoot/images/kibana-requests_inspector.png :alt: requests inspector :screenshot: ::: diff --git a/troubleshoot/monitoring/cluster-response-time.md b/troubleshoot/monitoring/cluster-response-time.md index a0ed0fdb56..955bdedae7 100644 --- a/troubleshoot/monitoring/cluster-response-time.md +++ b/troubleshoot/monitoring/cluster-response-time.md @@ -11,7 +11,7 @@ Your {{es}} cluster is humming along nicely with good performance until you sudd Initially, you think that perhaps memory pressure is to blame, because you already know that [high memory pressure can cause performance issues](/troubleshoot/monitoring/high-memory-pressure.md). You look at the **Cluster Performance Metrics** section of the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) and, after some zooming in to the right time frame, you get these metrics: -:::{image} /images/cloud-metrics-response-times.png +:::{image} /troubleshoot/images/cloud-metrics-response-times.png :alt: Cluster performance metrics ::: diff --git a/troubleshoot/monitoring/deployment-health-warnings.md b/troubleshoot/monitoring/deployment-health-warnings.md index 69315d135e..d4f646dbe1 100644 --- a/troubleshoot/monitoring/deployment-health-warnings.md +++ b/troubleshoot/monitoring/deployment-health-warnings.md @@ -10,7 +10,7 @@ mapped_pages: The {{ecloud}} [Deployments](https://cloud.elastic.co/deployments) page shows the current status of your active deployments. 
From time to time you may get one or more health warnings, such as the following: -:::{image} /images/cloud-ec-ce-deployment-health-warning.png +:::{image} /troubleshoot/images/cloud-ec-ce-deployment-health-warning.png :alt: A screen capture of the deployment page showing a typical warning: Deployment health warning: Latest change to {{es}} configuration failed. ::: diff --git a/troubleshoot/monitoring/high-availability.md b/troubleshoot/monitoring/high-availability.md index 463b6306df..eeea64512d 100644 --- a/troubleshoot/monitoring/high-availability.md +++ b/troubleshoot/monitoring/high-availability.md @@ -13,7 +13,7 @@ You created a new {{ech}} deployment that uses three availability zones and inde To answer this question, let’s take a look at CPU usage in the **Cluster Performance Metrics** section in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body): -:::{image} /images/cloud-metrics-cpu.png +:::{image} /troubleshoot/images/cloud-metrics-cpu.png :alt: CPU usage over time ::: diff --git a/troubleshoot/monitoring/high-memory-pressure.md b/troubleshoot/monitoring/high-memory-pressure.md index 10b832e6bf..6ecf176c87 100644 --- a/troubleshoot/monitoring/high-memory-pressure.md +++ b/troubleshoot/monitoring/high-memory-pressure.md @@ -9,19 +9,19 @@ mapped_pages: When you load up an {{es}} cluster with an indexing and search workload that matches the size of the cluster well, you typically get the classic JVM heap sawtooth pattern as memory gets used and then gets freed up again by the garbage collector. Memory usage increases until it reaches 75% and then drops again as memory is freed up: -:::{image} /images/cloud-metrics-memory-pressure-sawtooth.png +:::{image} /troubleshoot/images/cloud-metrics-memory-pressure-sawtooth.png :alt: The classic JVM sawtooth pattern that shows memory usage ::: Now let’s suppose you have a cluster with three nodes and much higher memory pressure overall. 
In this example, two of the three nodes are maxing out very regularly for extended periods and one node is consistently hovering around the 75% mark. -:::{image} /images/cloud-metrics-high-memory-pressure.png +:::{image} /troubleshoot/images/cloud-metrics-high-memory-pressure.png :alt: High memory pressure ::: High memory pressure works against cluster performance in two ways: As memory pressure rises to 75% and above, less memory remains available, but your cluster now also needs to spend some CPU resources to reclaim memory through garbage collection. These CPU resources are not available to handle user requests while garbage collection is going on. As a result, response times for user requests increases as the system becomes more and more resource constrained. If memory pressure continues to rise and reaches near 100%, a much more aggressive form of garbage collection is used, which will in turn affect cluster response times dramatically. -:::{image} /images/cloud-metrics-high-response-times.png +:::{image} /troubleshoot/images/cloud-metrics-high-response-times.png :alt: High response times ::: diff --git a/troubleshoot/monitoring/node-bootlooping.md b/troubleshoot/monitoring/node-bootlooping.md index aa27cf87f7..952ac7a46c 100644 --- a/troubleshoot/monitoring/node-bootlooping.md +++ b/troubleshoot/monitoring/node-bootlooping.md @@ -10,7 +10,7 @@ mapped_pages: When you attempt to apply a configuration change to a deployment, the attempt may fail with an error indicating that the change could not be applied, and deployment resources may be unable to restart. In some cases, bootlooping may result, where the deployment resources cycle through a continual reboot process. -:::{image} /images/cloud-ec-ce-configuration-change-failure.png +:::{image} /troubleshoot/images/cloud-ec-ce-configuration-change-failure.png :alt: A screen capture of the deployment page showing an error: Latest change to {{es}} configuration failed. 
::: diff --git a/troubleshoot/monitoring/performance.md b/troubleshoot/monitoring/performance.md index 779d858005..93341acdc5 100644 --- a/troubleshoot/monitoring/performance.md +++ b/troubleshoot/monitoring/performance.md @@ -11,7 +11,7 @@ You have a smaller {{es}} cluster and you’ve noticed that performance seems to When you look in the **Cluster Performance Metrics** section of the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), you get the following metrics: -:::{image} /images/cloud-metrics-credits.png +:::{image} /troubleshoot/images/cloud-metrics-credits.png :alt: CPU usage versus CPU credits over time ::: diff --git a/troubleshoot/monitoring/unavailable-nodes.md b/troubleshoot/monitoring/unavailable-nodes.md index 89e15245da..aa72aafda4 100644 --- a/troubleshoot/monitoring/unavailable-nodes.md +++ b/troubleshoot/monitoring/unavailable-nodes.md @@ -1,6 +1,6 @@ --- navigation_title: "Unavailable nodes" -applies: +applies: hosted: mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-scenario_why_is_my_node_unavailable.html @@ -53,7 +53,7 @@ If your issue is not addressed here, then [contact Elastic support for help](/tr 3. On your deployment page, scroll down to **Instances** and check if the disk allocation for your {{es}} instance is over 90%. - :::{image} /images/cloud-ec-full-disk-single-node.png + :::{image} /troubleshoot/images/cloud-ec-full-disk-single-node.png :alt: Full disk on single-node deployment ::: @@ -78,7 +78,7 @@ If your {{es}} cluster is unhealthy and reports a status of red, then increasing From your deployment menu, go to the **Edit** page and increase the **Size per zone** for your Hot data and Content tiers. - :::{image} /images/cloud-ec-increase-size-per-zone.png + :::{image} /troubleshoot/images/cloud-ec-increase-size-per-zone.png :alt: Increase size per zone ::: @@ -94,13 +94,13 @@ If your {{es}} cluster is unhealthy and reports a status of red, then increasing 1. 
Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. From the {{es}} Service panel, click the **Quick link** icon corresponding to the deployment that you want to manage. - :::{image} /images/cloud-ec-quick-link-to-deployment.png + :::{image} /troubleshoot/images/cloud-ec-quick-link-to-deployment.png :alt: Quick link to the deployment page ::: 3. On your deployment page, scroll down to **Instances** and check if the disk allocation for any of your {{es}} instances is over 90%. - :::{image} /images/cloud-ec-full-disk-multiple-nodes.png + :::{image} /troubleshoot/images/cloud-ec-full-disk-multiple-nodes.png :alt: Full disk on multiple-nodes deployment ::: @@ -125,13 +125,13 @@ If your {{es}} cluster is unhealthy and reports a status of red, the scale up co 1. On your deployment page, scroll down to **Instances** and identify the node attribute of the instances that are running out of disk space. - :::{image} /images/cloud-ec-node-attribute.png + :::{image} /troubleshoot/images/cloud-ec-node-attribute.png :alt: Instance node attribute ::: 2. Use the node types identified at step 1 to find out the corresponding data tier. - :::{image} /images/cloud-ec-node-types-data-tiers.png + :::{image} /troubleshoot/images/cloud-ec-node-types-data-tiers.png :alt: Node type and corresponding attribute ::: @@ -159,7 +159,7 @@ If your {{es}} cluster is unhealthy and reports a status of red, the scale up co 3. On your deployment page, scroll down to **Instances** and check if the JVM memory pressure for your {{es}} instances is high. - :::{image} /images/cloud-ec-deployment-instances-config.png + :::{image} /troubleshoot/images/cloud-ec-deployment-instances-config.png :alt: Deployment instances configuration ::: @@ -225,7 +225,7 @@ By default, the allowed CPU usage threshold is set at 85%. 3. Identify the IDs of your master nodes. On your deployment page, scroll down to **Instances** and filter your instance configuration by master. 
The IDs of your master nodes are in the title. In this example, the IDs are 21, 26 and 27: - :::{image} /images/cloud-ec-instances-filtered-by-master-id.png + :::{image} /troubleshoot/images/cloud-ec-instances-filtered-by-master-id.png :alt: Instances configuration filtered by master nodes ID ::: diff --git a/troubleshoot/monitoring/unavailable-shards.md b/troubleshoot/monitoring/unavailable-shards.md index ab3049bbc0..79140904da 100644 --- a/troubleshoot/monitoring/unavailable-shards.md +++ b/troubleshoot/monitoring/unavailable-shards.md @@ -28,7 +28,7 @@ A formerly-healthy cluster might have unassigned shards because nodes have dropp If a cluster has unassigned shards, you might see an error message such as this on the Elastic Cloud console: -:::{image} /images/cloud-ec-unhealthy-deployment.png +:::{image} /troubleshoot/images/cloud-ec-unhealthy-deployment.png :alt: Unhealthy deployment error message ::: @@ -177,13 +177,13 @@ If you are shipping logs and metrics to a monitoring deployment, go through the 4. Click **Save**. It might take a few minutes to apply the configuration changes. 5. Click **View** to open the Kibana UI and get more details on metrics and logs. -:::{image} /images/cloud-ec-logs-metrics-page.png +:::{image} /troubleshoot/images/cloud-ec-logs-metrics-page.png :alt: Log and metrics page ::: The unhealthy indices appear with a red or yellow status. 
-:::{image} /images/cloud-ec-red-yellow-indices.png +:::{image} /troubleshoot/images/cloud-ec-red-yellow-indices.png :alt: Unhealthy indices in red or yellow status ::: diff --git a/troubleshoot/observability/apm-agent-dotnet/apm-net-agent.md b/troubleshoot/observability/apm-agent-dotnet/apm-net-agent.md index 99d7c3de2b..d82ff142c3 100644 --- a/troubleshoot/observability/apm-agent-dotnet/apm-net-agent.md +++ b/troubleshoot/observability/apm-agent-dotnet/apm-net-agent.md @@ -235,13 +235,13 @@ When debugging applications using Visual Studio and IISExpress, the same require For example: -:::{image} /images/apm-agent-dotnet-classic-pipeline.png +:::{image} /troubleshoot/images/apm-agent-dotnet-classic-pipeline.png :alt: Classic Managed Pipeline Mode in Properties ::: Should be changed to: -:::{image} /images/apm-agent-dotnet-integrated-pipeline.png +:::{image} /troubleshoot/images/apm-agent-dotnet-integrated-pipeline.png :alt: Integrated Managed Pipeline Mode in Properties ::: diff --git a/troubleshoot/observability/explore-data.md b/troubleshoot/observability/explore-data.md index 67ce0cf775..dd9e5f13d2 100644 --- a/troubleshoot/observability/explore-data.md +++ b/troubleshoot/observability/explore-data.md @@ -13,7 +13,7 @@ The **Explore data** view in {{kib}} enables you to select and filter result dat Based on your synthetic monitoring, user experience, and mobile experience data, you can create multi-series visualizations of performance and device distributions, key performance indicators (KPI) over time, and core web vitals of your web applications. 
-:::{image} ../../images/observability-exploratory-view.png +:::{image} /troubleshoot/images/observability-exploratory-view.png :alt: Explore {{data-source}} for Monitor duration :screenshot: ::: @@ -73,7 +73,7 @@ To create a multi-series visualization: Based on the Uptime data you are sending to your deployment, you can create various visualizations relating to monitor durations, pings over time, or any of the [available data types](#data-types). -:::{image} ../../images/observability-exploratory-view-uptime.png +:::{image} /troubleshoot/images/observability-exploratory-view-uptime.png :alt: Explore data for Uptime :screenshot: ::: @@ -88,7 +88,7 @@ Based on the Uptime data you are sending to your deployment, you can create vari Based on the {{user-experience}} data from your instrumented applications, you can create detailed visualizations for performance distributions, key performance indicators (KPI) over time, and for core web vitals of your web applications. -:::{image} ../../images/observability-exploratory-view-ux-page-load-time.png +:::{image} /troubleshoot/images/observability-exploratory-view-ux-page-load-time.png :alt: Explore data for {{user-experience}} (page load time) :screenshot: ::: diff --git a/troubleshoot/observability/inspect.md b/troubleshoot/observability/inspect.md index 62498c045f..5dab62d6a7 100644 --- a/troubleshoot/observability/inspect.md +++ b/troubleshoot/observability/inspect.md @@ -11,7 +11,7 @@ applies_to: The **Inspect** view in {{kib}} allows you to view information about all requests that were made to collect the data displayed on the current page. -:::{image} ../../images/observability-inspect-flyout.png +:::{image} /troubleshoot/images/observability-inspect-flyout.png :alt: Inspector flyout in the {{uptime-app}} :screenshot: ::: @@ -34,7 +34,7 @@ To enable inspect across apps: 3. Turn on the **Inspect ES queries** option. 4. Click **Save changes**. 
-:::{image} ../../images/observability-inspect-enable.png +:::{image} /troubleshoot/images/observability-inspect-enable.png :alt: {{kib}} Advanced Settings {{observability}} section with Inspect ES queries enabled :screenshot: ::: @@ -46,7 +46,7 @@ Open the inspect flyout by clicking **Inspect** in the top bar. Click the **Request** dropdown to see all the requests used to make the current page work. Select one to see information about the request below. -:::{image} ../../images/observability-inspect-flyout-dropdown.png +:::{image} /troubleshoot/images/observability-inspect-flyout-dropdown.png :alt: Inspector flyout dropdown for selecting a request to inspect :screenshot: ::: @@ -76,21 +76,21 @@ Query time Request timestamp : Time when the start of the request has been logged. -:::{image} ../../images/observability-inspect-flyout-statistics.png +:::{image} /troubleshoot/images/observability-inspect-flyout-statistics.png :alt: Inspector flyout Statistics tab :screenshot: ::: The **Request** tab shows the exact syntax used in the request. You can click **Copy to clipboard** to copy the request or **Open in Console** to open it in the [{{kib}} console](../../explore-analyze/query-filter/tools/console.md). -:::{image} ../../images/observability-inspect-flyout-request.png +:::{image} /troubleshoot/images/observability-inspect-flyout-request.png :alt: Inspector flyout Request tab with exact syntax :screenshot: ::: The **Response** tab shows the exact response used in the visualizations on the page. You can click **Copy to clipboard** to copy the response. 
-:::{image} ../../images/observability-inspect-flyout-response.png +:::{image} /troubleshoot/images/observability-inspect-flyout-response.png :alt: Inspector flyout Response tab with exact response :screenshot: ::: diff --git a/troubleshoot/observability/troubleshoot-logs.md b/troubleshoot/observability/troubleshoot-logs.md index 05d5ea27aa..f8e6ff2aea 100644 --- a/troubleshoot/observability/troubleshoot-logs.md +++ b/troubleshoot/observability/troubleshoot-logs.md @@ -96,7 +96,7 @@ Failed to connect to {host} port {port} after 0 ms: Connection refused #### Solution [logs-troubleshooting-kib-not-accessible-solution] -The host needs access to {{kib}} or your project. Port `443` must be open and the deployment’s {{es}} endpoint must be reachable. Locate your project’s endpoint from **Help menu (![help icon](../../images/observability-help-icon.png "")) → Connection details**. +The host needs access to {{kib}} or your project. Port `443` must be open and the deployment’s {{es}} endpoint must be reachable. Locate your project’s endpoint from **Help menu (![help icon](/troubleshoot/images/observability-help-icon.png "")) → Connection details**. Run the following command, replacing the URL with your endpoint, and you should get an authentication error with more details on resolving your issue: diff --git a/troubleshoot/observability/troubleshooting-infrastructure-monitoring/understanding-no-results-found-message.md b/troubleshoot/observability/troubleshooting-infrastructure-monitoring/understanding-no-results-found-message.md index 967bfa6463..f2c48393f2 100644 --- a/troubleshoot/observability/troubleshooting-infrastructure-monitoring/understanding-no-results-found-message.md +++ b/troubleshoot/observability/troubleshooting-infrastructure-monitoring/understanding-no-results-found-message.md @@ -36,7 +36,7 @@ This could be for any of these reasons: * You have explicitly chosen not to send these metrics. 
You may choose to limit the metrics sent to Elastic to save on space and improve cluster performance. For example, the System integration has options to choose which metrics you want to send. You can [edit the integration policy](/reference/ingestion-tools/fleet/edit-delete-integration-policy.md) to begin collecting the missing metrics. For example: - :::{image} ../../../images/observability-turn-on-system-metrics.png + :::{image} /troubleshoot/images/observability-turn-on-system-metrics.png :alt: Screenshot showing system cpu and diskio metrics selected for collection :screenshot: ::: diff --git a/troubleshoot/security/detection-rules.md b/troubleshoot/security/detection-rules.md index 5096316c8b..174dbed557 100644 --- a/troubleshoot/security/detection-rules.md +++ b/troubleshoot/security/detection-rules.md @@ -26,7 +26,7 @@ If a {{ml}} rule is failing, check to make sure the required {{ml}} jobs are run 1. Go to **Rules** → **Detection rules (SIEM)**, then select the {{ml}} rule. The required {{ml}} jobs and their statuses are listed in the **Definition** section. - :::{image} ../../images/security-rules-ts-ml-job-stopped.png + :::{image} /troubleshoot/images/security-rules-ts-ml-job-stopped.png :alt: Rule details page with ML job stopped :screenshot: ::: @@ -87,14 +87,14 @@ Turning off `autocomplete:useTimeRange` could cause performance issues if the da :::::{dropdown} Warning about type conflicts and unmapped fields :name: rule-exceptions-field-conflicts -A warning icon (![Field conflict warning icon](../../images/security-field-warning-icon.png "title =20x20")) and message appear for fields with [type conflicts](#fields-with-conflicting-types) across multiple indices or fields that are [unmapped](#unmapped-field-conflict). You can learn more about the conflict by clicking the warning message. 
+A warning icon (![Field conflict warning icon](/troubleshoot/images/security-field-warning-icon.png "title =20x20")) and message appear for fields with [type conflicts](#fields-with-conflicting-types) across multiple indices or fields that are [unmapped](#unmapped-field-conflict). You can learn more about the conflict by clicking the warning message. ::::{note} A field can have type conflicts *and* be unmapped in specified indices. :::: -:::{image} ../../images/security-warning-icon-message.png +:::{image} /troubleshoot/images/security-warning-icon-message.png :alt: Shows the warning icon and message :screenshot: ::: @@ -106,7 +106,7 @@ Type conflicts occur when a field is mapped to different types across multiple i In the following example, the selected field has been defined as different types across five indices. -:::{image} ../../images/security-warning-type-conflicts.png +:::{image} /troubleshoot/images/security-warning-type-conflicts.png :alt: Warning for fields with type conflicts :screenshot: ::: @@ -118,7 +118,7 @@ Unmapped fields are undefined within an index’s mapping definition. Using unma In the following example, the selected field is unmapped across two indices. -:::{image} ../../images/security-warning-unmapped-fields.png +:::{image} /troubleshoot/images/security-warning-unmapped-fields.png :alt: Warning for unmapped fields :screenshot: ::: @@ -179,7 +179,7 @@ You can reduce the number of missed alerts due to ingestion pipeline delay by sp For example, say an event occurred at 10:00 but wasn’t ingested into {{es}} until 10:10 due to an ingestion pipeline delay. If you created a rule to detect that event with an interval + additional look-back time of 6 minutes, and the rule executes at 10:12, it would still detect the event because the `event.ingested` timestamp was from 10:10, only 2 minutes before the rule executed and well within the rule’s 6-minute interval + additional look-back time. 
-:::{image} ../../images/security-timestamp-override.png +:::{image} /troubleshoot/images/security-timestamp-override.png :alt: timestamp override :screenshot: ::: diff --git a/troubleshoot/security/elastic-defend.md b/troubleshoot/security/elastic-defend.md index 419f88a915..d5315d27c4 100644 --- a/troubleshoot/security/elastic-defend.md +++ b/troubleshoot/security/elastic-defend.md @@ -27,7 +27,7 @@ Integration policy response information is also available from the **Endpoints** :::: -:::{image} ../../images/security-unhealthy-agent-fleet.png +:::{image} /troubleshoot/images/security-unhealthy-agent-fleet.png :alt: Agent details page in {{fleet}} with Unhealthy status and integration failures :screenshot: ::: @@ -78,7 +78,7 @@ To resolve the potential system deadlock error: If you encounter a `“Required transform failed”` notice on the Endpoints page, you can usually resolve the issue by restarting the transform. Refer to [Transforming data](../../explore-analyze/transforms.md) for more information about transforms. -:::{image} ../../images/security-endpoints-transform-failed.png +:::{image} /troubleshoot/images/security-endpoints-transform-failed.png :alt: Endpoints page with Required transform failed notice :screenshot: ::: @@ -92,7 +92,7 @@ To restart a transform that’s not running: * `stopped`: Select **Start** to restart the transform. * `failed`: Select **Stop** to first stop the transform, and then select **Start** to restart it. - :::{image} ../../images/security-transforms-start.png + :::{image} /troubleshoot/images/security-transforms-start.png :alt: Transforms page with Start option selected :screenshot: :::
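Every hunk in this patch performs the same mechanical rewrite: relative image paths (`../../images/…`, `../../../images/…`) and root-relative `/images/…` paths become `/troubleshoot/images/…`. A bulk change like this could be scripted rather than edited by hand; the sketch below shows one possible approach (the regex and function name are illustrative, not part of the actual PR tooling). Note the negative lookbehind, which keeps already-migrated `/troubleshoot/images/` paths from being rewritten twice.

```python
import re

# Matches the old-style paths seen in this patch: any run of "../" segments
# followed by "images/", or a bare "/images/" not already preceded by a
# word character or slash (so "/troubleshoot/images/" is left alone).
OLD_IMG_PATH = re.compile(r"(?:\.\./)+images/|(?<![\w/])/images/")

def rewrite(line: str) -> str:
    """Rewrite an old-style image path to the new absolute location."""
    return OLD_IMG_PATH.sub("/troubleshoot/images/", line)

print(rewrite(":::{image} ../../../images/fleet-collect-agent-diagnostics1.png"))
# :::{image} /troubleshoot/images/fleet-collect-agent-diagnostics1.png
print(rewrite(":::{image} /images/cloud-metrics-cpu.png"))
# :::{image} /troubleshoot/images/cloud-metrics-cpu.png
```

Because the substitution is idempotent, the script can safely be re-run over a tree where some files were already migrated.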