diff --git a/blog-service/2021/12-31.md b/blog-service/2021/12-31.md
index 09568a971b..83612a2e3c 100644
--- a/blog-service/2021/12-31.md
+++ b/blog-service/2021/12-31.md
@@ -649,7 +649,7 @@ Update - We have updated our [Enterprise Audit - Security Management App](/docs
---
## March 4, 2021 (Observability)
-Update - We're delighted to announce several enhancements to [Root Cause Explorer](/docs/observability/root-cause-explorer "Root Cause Explorer"). Root Cause Explorer now supports two additional AWS namespaces, as well as Events of Interest detection on Kubernetes and Trace metrics. Cause-impact analysis is now informed by Sumo Logic Tracing's Service Map, AWS X-ray, Kubernetes entities, and AWS inventory relationships. You'll also notice new filters and search builders at the top of the page to correlate Events of Interests at the service, orchestrator, AWS infrastructure, and host levels to speed up the identification of root causes. You can use the Infrastructure tab for an Event of Interest to pivot to dashboards, logs, metrics and, trace searches to take the next steps in root cause analysis.
+Update - We're delighted to announce several enhancements to Root Cause Explorer. Root Cause Explorer now supports two additional AWS namespaces, as well as Events of Interest detection on Kubernetes and Trace metrics. Cause-impact analysis is now informed by Sumo Logic Tracing's Service Map, AWS X-Ray, Kubernetes entities, and AWS inventory relationships. You'll also notice new filters and search builders at the top of the page to correlate Events of Interest at the service, orchestrator, AWS infrastructure, and host levels to speed up the identification of root causes. You can use the Infrastructure tab for an Event of Interest to pivot to dashboards, logs, metrics, and trace searches to take the next steps in root cause analysis.
---
## March 1, 2021 (Metrics)
diff --git a/blog-service/2024/12-31.md b/blog-service/2024/12-31.md
index 0c4c5aa4e8..27dd4dc0d4 100644
--- a/blog-service/2024/12-31.md
+++ b/blog-service/2024/12-31.md
@@ -294,7 +294,7 @@ We're excited to announce the general availability of AI-driven alerts for metri
#### Deprecation Notice - Root Cause Explorer
-As part of our ongoing evaluation of the Sumo Logic service, our product team is deprecating [Root Cause Explorer](/docs/observability/root-cause-explorer), and it will no longer be available as of 30 April 2025.
+As part of our ongoing evaluation of the Sumo Logic service, our product team is deprecating Root Cause Explorer, and it will no longer be available as of 3 June 2025.
Learn more [here](/docs/observability/root-cause-explorer-deprecation).
diff --git a/blog-service/2025-06-03-observability.md b/blog-service/2025-06-03-observability.md
new file mode 100644
index 0000000000..e51a407703
--- /dev/null
+++ b/blog-service/2025-06-03-observability.md
@@ -0,0 +1,14 @@
+---
+title: End-of-Life Notice - Root Cause Explorer (Observability)
+image: https://help.sumologic.com/img/sumo-square.png
+keywords:
+ - apps
+ - sumo-collection
+hide_table_of_contents: true
+---
+
+import useBaseUrl from '@docusaurus/useBaseUrl';
+
+Previously, we announced that Root Cause Explorer [was deprecated](/release-notes-service/2024/12/31/#november-01-2024-observability). As of 3 June 2025, Root Cause Explorer has reached its end of life and is no longer available.
+
+Learn more [here](/docs/observability/root-cause-explorer-deprecation/).
\ No newline at end of file
diff --git a/cid-redirects.json b/cid-redirects.json
index 9e77ad0308..f2bf6eca0f 100644
--- a/cid-redirects.json
+++ b/cid-redirects.json
@@ -1642,7 +1642,7 @@
"/cid/6029": "/docs/integrations/saas-cloud/kaltura",
"/cid/6030": "/docs/send-data/hosted-collectors/cloud-to-cloud-integration-framework/snowflake-logs-source",
"/cid/10112": "/docs/integrations/app-development/jfrog-xray",
- "/cid/10113": "/docs/observability/root-cause-explorer",
+ "/cid/10113": "/docs/observability/root-cause-explorer-deprecation",
"/cid/10116": "/docs/manage/fields",
"/cid/10117": "/docs/metrics/metrics-transformation-rules",
"/cid/10118": "/docs/metrics/metric-rules-editor",
@@ -3535,7 +3535,7 @@
"/Observability_Solution/Reliability_Management/Creating_SLOs_and_Monitors": "/docs/observability/reliability-management-slo",
"/Observability_Solution/Reliability_Management/SLO_Dashboards": "/docs/observability/reliability-management-slo",
"/docs/observability/reliability-management-slo/use-cases": "/docs/observability/reliability-management-slo",
- "/Observability_Solution/Root_Cause_Explorer": "/docs/observability/root-cause-explorer",
+ "/Observability_Solution/Root_Cause_Explorer": "/docs/observability/root-cause-explorer-deprecation",
"/Other_Solutions": "/docs/observability",
"/Other_Solutions/Software_Development_Optimization_Solution/01_About_the_Software_Development_Optimization_Solution": "/docs/observability/sdo/about-sdo",
"/Other_Solutions/Software_Development_Optimization_Solution/02_Supported_Tools_and_Schema": "/docs/observability/sdo/supported-tools-schema",
@@ -3920,8 +3920,8 @@
"/Metrics/Metrics-Sources/03Graphite-Source-for-Metrics": "/docs/send-data/installed-collectors/sources/host-metrics-source",
"/Metrics/Working-with-Metrics/03-Create-a-Metrics-Visualization": "/docs/metrics/metrics-queries/metrics-explorer",
"/Observability_Solution/AWS_Observability_Solution/01_Deploy_and_Use_AWS_Observability/11Configure_Alerts": "/docs/observability/aws/deploy-use-aws-observability",
- "/Observability_Solution/AWS_Observability_Solution/01_Deploy_and_Use_AWS_Observability/12Root_Cause_Explorer": "/docs/observability/root-cause-explorer",
- "/Observability_Solution/AWS_Observability_Solution/01_Deploy_and_Use_AWS_Observability/Root_Cause_Explorer": "/docs/observability/root-cause-explorer",
+ "/Observability_Solution/AWS_Observability_Solution/01_Deploy_and_Use_AWS_Observability/12Root_Cause_Explorer": "/docs/observability/root-cause-explorer-deprecation",
+ "/Observability_Solution/AWS_Observability_Solution/01_Deploy_and_Use_AWS_Observability/Root_Cause_Explorer": "/docs/observability/root-cause-explorer-deprecation",
"/Observability_Solution/Kubernetes_Solution/01Set_up_collection_for_Kubernetes": "/docs/observability/kubernetes/collection-setup",
"/Observability_Solution/Kubernetes_Solution/Global_Intelligence_for_Kubernetes_DevOps_App": "/docs/integrations/global-intelligence/kubernetes-devops",
"/Observability_Solution/Kubernetes_Solution/Navigate_your_Kubernetes_environment": "/docs/observability/kubernetes",
@@ -4140,7 +4140,7 @@
"/Solutions/AWS_Observability_Solution/05_Monitor_Control_Tower-Managed_Accounts": "/docs/observability/aws/other-configurations-tools/integrate-control-tower-accounts",
"/Solutions/AWS_Observability_Solution/AWS_Observability_Application_Load_Balancer": "/docs/observability/aws/integrations/aws-application-load-balancer",
"/Solutions/AWS_Observability_Solution/View_AWS_Observability_Solution_Dashboards": "/docs/observability/aws/deploy-use-aws-observability/view-dashboards",
- "/Solutions/AWS_Observability_Solution/Root_Cause_Explorer": "/docs/observability/root-cause-explorer",
+ "/Solutions/AWS_Observability_Solution/Root_Cause_Explorer": "/docs/observability/root-cause-explorer-deprecation",
"/Solutions/AWS_Observability_Solution/03_Set_Up_the_AWS_Observability_Solution": "/docs/observability/aws/about",
"/Solutions/AWS_Observability_Solution/About_the_AWS_Observability_Solution": "/docs/observability/aws/about",
"/Solutions/AWS_Observability_Solution/Set_Up_the_AWS_Observability_Solution": "/docs/observability/aws",
diff --git a/docs/alerts/monitors/alert-response-faq.md b/docs/alerts/monitors/alert-response-faq.md
index 1b2ff343fa..6e8826bf7a 100644
--- a/docs/alerts/monitors/alert-response-faq.md
+++ b/docs/alerts/monitors/alert-response-faq.md
@@ -107,7 +107,7 @@ Anomaly cards only work if we are able to infer an entity from the alerting quer
## Where are Anomaly cards for metrics-based alerts?
-Alert response anomaly detection only detects anomalies for metrics data coming from Kubernetes or specific sources within AWS ([learn more](../../observability/root-cause-explorer.md)). If you are setting up alerts on metrics that don’t belong to either one of these categories, anomalies will not be detected.
+Alert response anomaly detection detects anomalies only for metrics data coming from Kubernetes or specific sources within AWS. If you are setting up alerts on metrics that don’t belong to either one of these categories, anomalies will not be detected.
Use the [Sumo Logic Kubernetes collection](https://github.com/SumoLogic/sumologic-kubernetes-collection#sumologic-kubernetes-collection) or the [Sumo Logic AWS observability collection](/docs/observability/aws) for this to work properly.
diff --git a/docs/alerts/monitors/alert-response.md b/docs/alerts/monitors/alert-response.md
index 3e74e4f8dc..357cfe0fe7 100644
--- a/docs/alerts/monitors/alert-response.md
+++ b/docs/alerts/monitors/alert-response.md
@@ -202,14 +202,14 @@ The **Log Fluctuations** context card, available for logs monitors, detects diff
### Anomalies
-This card detects time series anomalies for entities related to the alert. These insights are powered by the [Root Cause Explorer](../../observability/root-cause-explorer.md).
+This card detects time series anomalies for entities related to the alert.
Anomalies are grouped into [golden signals](https://sre.google/sre-book/monitoring-distributed-systems/). Anomalies are also presented on a timeline; the length of the anomaly represents its duration.

* **A**. Name and description of the context card.
* **B**. Count of anomalies belonging to each golden signal type.
* **C**. A timeline view of anomalies with their start time and duration, the domain (e.g. AWS, Kubernetes), and the entity on which it was detected. Anomalies may be grouped based on connections between entities and similarity of metrics. For example, anomalies on EC2 instances that are members of an AutoScaling group may be grouped together. The count shown in each anomaly refers to the number of grouped anomalies.
-* **D**. A link to view the anomalies in the **Root Cause Explorer**.
+* **D**. A link to view the anomalies.
:::note
Only anomalies with a start time around 30 minutes before or after the alert was created show up in the card.
diff --git a/docs/apm/index.md b/docs/apm/index.md
index 5c1e29e81d..432cd0d64a 100644
--- a/docs/apm/index.md
+++ b/docs/apm/index.md
@@ -50,6 +50,3 @@ Monitor user activity, span analytics, service maps, and transaction traces betw
-:::tip
-Use our [Root Cause Explorer](/docs/observability/root-cause-explorer) to investigate usage and issues.
-:::
diff --git a/docs/cse/administration/inventory-sources-and-data.md b/docs/cse/administration/inventory-sources-and-data.md
index 2dad3b91a9..46a94744ca 100644
--- a/docs/cse/administration/inventory-sources-and-data.md
+++ b/docs/cse/administration/inventory-sources-and-data.md
@@ -34,7 +34,7 @@ Some of the inventory sources are strictly for collecting inventory data—such
Some inventory sources provide user inventory information, some provide computer inventory information, and some provide both. The table below lists currently available inventory sources.
:::note
-The AWS Inventory Source collects the inventory of AWS resources in your AWS account, but is usable only by the Root Cause Explorer. See [AWS Inventory Source](/docs/observability/root-cause-explorer/#aws-inventory-source).
+The AWS Inventory Source collects the inventory of AWS resources in your AWS account.
:::
| Inventory source | Type of source | Inventory data collected |
diff --git a/docs/dashboards/drill-down-to-discover-root-causes.md b/docs/dashboards/drill-down-to-discover-root-causes.md
index deda42f7aa..d6d2028432 100644
--- a/docs/dashboards/drill-down-to-discover-root-causes.md
+++ b/docs/dashboards/drill-down-to-discover-root-causes.md
@@ -7,10 +7,6 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
When you see a spike of interest on a dashboard that requires further investigation, you can easily drill into the related content to discover the root cause. This page shows you how you can easily discover related dashboards and corresponding logs searches that pertain to an issue in your environment.
-:::note
-If you're looking for our Root Cause Explorer observability tool, [click here](/docs/observability/root-cause-explorer).
-:::
-
## Drilling into related content
Sumo Logic provides relevant log searches and dashboards to consider investigating, as well as other locations with relevant content. This facilitates quickly discovering the root cause and devising a plan of action.
diff --git a/docs/get-started/ai-machine-learning.md b/docs/get-started/ai-machine-learning.md
index f2304cc2d8..0eebd91b08 100644
--- a/docs/get-started/ai-machine-learning.md
+++ b/docs/get-started/ai-machine-learning.md
@@ -77,12 +77,6 @@ Sumo Logic offers seamless integrations with various AI-driven platforms to enab
* [Criminal IP](/docs/platform-services/automation-service/app-central/integrations/criminal-ip)
* [Arcanna](/docs/platform-services/automation-service/app-central/integrations/arcanna)
-
-
## Security
Our Sumo Logic AI for Security functionality empowers SOC analysts and threat hunters to effectively safeguard their technology stack against evolving threats. By integrating advanced tools for discovery, detection, investigation, response, and protection, we minimize dwell time, reduce false positives, accelerate incident resolution, and proactively prevent future incidents, ensuring robust security and resilience for your cloud, container, and on-prem resources.
diff --git a/docs/get-started/sumo-logic-ui-classic.md b/docs/get-started/sumo-logic-ui-classic.md
index 1e8a0f542d..986e0137fc 100644
--- a/docs/get-started/sumo-logic-ui-classic.md
+++ b/docs/get-started/sumo-logic-ui-classic.md
@@ -133,7 +133,6 @@ To launch a search, metrics visualization, or live tail session, do the followin
* [Live Tail](/docs/search/live-tail). View a real-time live feed of log events associated with a Source or Collector.
* [Explore](/docs/dashboards/explore-view). See an intuitive visual hierarchy of your environment.
* [Dashboard](/docs/dashboards/). Analyze metrics and log data on the same dashboard, in a streamlined user experience.
- * [Root Cause](/docs/observability/root-cause-explorer). Accelerate troubleshooting and isolate root causes for incidents in your apps and microservices.
### View recent dashboards and searches
diff --git a/docs/manage/manage-subscription/fedramp-capabilities.md b/docs/manage/manage-subscription/fedramp-capabilities.md
index 4f5c9d6ed3..ff70e5a489 100644
--- a/docs/manage/manage-subscription/fedramp-capabilities.md
+++ b/docs/manage/manage-subscription/fedramp-capabilities.md
@@ -49,7 +49,6 @@ The following table shows the capabilities included with Sumo Logic’s FedRAMP
| Collection - Amazon Web Services | [AWS Kinesis Firehose for Metrics](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source/) |||
| Collection - Amazon Web Services | [AWS Inventory](/docs/observability/aws/deploy-use-aws-observability/resources/) || |
| Collection - Amazon Web Services | [AWS Metadata](/docs/send-data/hosted-collectors/amazon-aws/aws-metadata-tag-source/) |||
-| Collection - Amazon Web Services | [AWS XRay](/docs/observability/root-cause-explorer/#aws-x-ray-source) |||
| Collection - Amazon Web Services | [CSE AWS EC2 Inventory](/docs/send-data/hosted-collectors/cloud-to-cloud-integration-framework/cse-aws-ec-inventory-source/) || |
| Collection - Archive | [AWS S3 archive](/docs/manage/data-archiving/archive) |||
| Collection - Cloud APIs | [Akamai SIEM API](/docs/send-data/hosted-collectors/cloud-to-cloud-integration-framework/akamai-siem-api-source/) || *Available upon request within 5 business days.* |
diff --git a/docs/manage/manage-subscription/sumo-logic-credits-accounts.md b/docs/manage/manage-subscription/sumo-logic-credits-accounts.md
index c37363d3de..8d34b0e497 100644
--- a/docs/manage/manage-subscription/sumo-logic-credits-accounts.md
+++ b/docs/manage/manage-subscription/sumo-logic-credits-accounts.md
@@ -106,7 +106,6 @@ The following table provides a summary list of key features by Credits package a
| Partitions |  |  |  |  |  |  |
| PCI Compliance App | |  | |  |  |  |
| Real User Monitoring (RUM) | |  |  |  |  |  |
-| Root Cause Explorer | | | |  | |  |
| SAML |  |  |  |  |  |  |
| Scheduled Views |  |  |  |  |  |  |
| Search Job API | |  | |  |  |  |
diff --git a/docs/manage/manage-subscription/sumo-logic-flex-accounts.md b/docs/manage/manage-subscription/sumo-logic-flex-accounts.md
index e3aca6715f..7ff5a8d86b 100644
--- a/docs/manage/manage-subscription/sumo-logic-flex-accounts.md
+++ b/docs/manage/manage-subscription/sumo-logic-flex-accounts.md
@@ -130,7 +130,6 @@ The following table provides a summary list of key features by Flex package acco
| Real User Monitoring (RUM) | |  |  | |
| Reliability Management (SLIs/SLOs) | | | | |
| Risk Assessment | |  | | |
-| Root Cause Explorer | | | |  |
| Scheduled Alert Muting | | | | |
| Scheduled Views |  |  |  |  |
| Service Maps | |  | | |
diff --git a/docs/observability/about.md b/docs/observability/about.md
index 747eb22a7f..dcaf61a810 100644
--- a/docs/observability/about.md
+++ b/docs/observability/about.md
@@ -74,4 +74,4 @@ The solution also offers features and capabilities that support each step of the
* **Monitor** your systems effectively with new and improved alerting and dashboarding capabilities. The Observability Solution includes rich pre-built content that you can leverage to quickly start monitoring specific services.
* **Diagnose** issues quickly using features like the Entity Explorer, trace analytics, and the Metrics Explorer.
-* **Troubleshoot** issues and find root causes through Behavior insights, Root Cause Explorer, and log search.
+* **Troubleshoot** issues and find root causes through behavior insights and log search.
diff --git a/docs/observability/index.md b/docs/observability/index.md
index cb23d6bc0a..0e04741e2a 100644
--- a/docs/observability/index.md
+++ b/docs/observability/index.md
@@ -31,12 +31,6 @@ In this section, we'll introduce the following concepts:
Set alerts that notify you about system state changes.
-Troubleshoot app and microservice incidents and isolate root causes.
-
-
-
-:::warning
-Root Cause Explorer has been [deprecated](/docs/observability/root-cause-explorer-deprecation). It is available only in the [Classic UI](/docs/get-started/sumo-logic-ui-classic/). We recommend using other [Observability](/docs/observability/) tools instead.
-:::
-
-
-**Root Cause Explorer** (RCE) helps on-call staff, DevOps, and infrastructure engineers accelerate troubleshooting and root cause isolation for incidents in their apps and microservices running on AWS, public cloud hosts, and Kubernetes.
-
-Root Cause Explorer helps you correlate unusual spikes, referred to as *Events of Interest (EOIs)*, in AWS CloudWatch metrics, OpenTelemetry trace metrics, host metrics, and Kubernetes metrics using the context associated with the incident. Such incident context includes timeline, stack (for example, AWS, Kubernetes, Application/Services), namespaces, resource identifiers, tags, metric type, metric name and more.
-
-Given an alert, for instance, a microservice in AWS us-west-2 experiencing unusual user response times, an on-call user can use Root Cause Explorer to correlate EOIs on over 500 AWS CloudWatch metrics across 15 AWS service namespaces (such as EC2, RDS, and so on), Kubernetes metrics, and trace data, to isolate the probable cause to a specific set of EC2 instances serving the given microservice in AWS us-west-2 that may be overloaded.
-
-Root Cause Explorer supports the following AWS namespaces by processing CloudWatch metrics data and computing EOIs:
-
-* AWS/EC2
-* AWS/EBS
-* AWS/ELB
-* AWS/ApplicationELB
-* AWS/NetworkELB
-* AWS/Lambda
-* AWS/RDS
-* AWS/Dynamodb
-* AWS/API Gateway
-* AWS/ECS
-* AWS/ElastiCache
-* AWS/SQS
-* AWS/SNS
-* AWS X-ray (for service metrics and service topology)
-* AWS Auto Scaling (for topology data)
-
-Root Cause Explorer can also work with EC2 and EBS metrics collected by Host Metrics Sources configured on installed collectors that run on your hosts. In addition, Root Cause Explorer can leverage AWS X-Ray to correlate spikes in service metrics to AWS infrastructure metric spikes.
-
-Root Cause Explorer also supports:
-
-* Kubernetes metrics (from customer-managed Kubernetes, EKS, GKE, or Azure Kubernetes Engine) and associated Events of Interest. Given the ephemeral nature of container resources, Root Cause Explorer uses proprietary algorithms to aggregate Events of Interest to stable entities in a Kubernetes cluster.
-* Metrics derived from OpenTelemetry data from Sumo Logic Tracing for applications and services
-
-The screenshot below shows the Root Cause Explorer UI.
-
-
-
-## Availability
-
-| Account Type | Account Level |
-|:--------------|:------------------------------------------------|
-| Cloud Flex | Trial, Enterprise |
-| Credits | Trial, Enterprise Operations, Enterprise Suite |
-
-## Troubleshooting concepts
-
-Root Cause Explorer is built to enable six concepts that accelerate troubleshooting and issue resolution. These concepts should be familiar to on-call staff, DevOps, and infrastructure engineers.
-
-### Concept 1: Abnormal spikes are symptoms of an underlying problem
-
-A spike in a metric on a resource is a sign of an underlying problem. Larger spikes compared to the expected baseline and longer-lasting spikes require closer attention than other spikes.
-
-An abnormal spike in a metric is a statistical anomaly. Root Cause Explorer leverages spikes and adds additional context to them to compute Events of Interest (EOIs).
-
-EOIs are constructed based on modeling the periodicity of the underlying AWS CloudWatch, Kubernetes, or Tracing metrics on each resource in your account to create resource-specific baselines. The periodicity of a metric can be daily, weekly, or none.
-
-EOIs also leverage proprietary noise reduction rules curated by subject matter experts. One example of a rule is how long the system watches an anomalous metric before detecting an EOI. Similarly, EOIs on metrics that have an upper bound (for example, CPU utilization cannot exceed 100%) are subject to additional curation rules.
-
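-Sumo Logic's detection logic is proprietary, but the general pattern described above (a per-resource seasonal baseline, plus persistence rules that a spike must satisfy before it becomes an EOI) can be sketched roughly as follows. The period, threshold, and duration values here are illustrative assumptions, not actual product parameters:
-
-```python
-import statistics
-
-def detect_eois(series, period=24, threshold=3.0, min_duration=3):
-    """Flag sustained drifts from a seasonal baseline as candidate EOIs.
-
-    series: hourly samples of one metric on one resource.
-    period: seasonality in samples (24 = daily pattern on hourly data).
-    threshold: standard deviations from baseline that count as anomalous.
-    min_duration: consecutive anomalous samples required (noise reduction).
-    """
-    # Per-slot baseline: compare each sample to the same hour on other days.
-    slots = [[] for _ in range(period)]
-    for i, value in enumerate(series):
-        slots[i % period].append(value)
-
-    flags = []
-    for i, value in enumerate(series):
-        history = slots[i % period]
-        if len(history) < 2:
-            flags.append(False)
-            continue
-        mean = statistics.mean(history)
-        stdev = statistics.stdev(history) or 1e-9
-        flags.append(abs(value - mean) / stdev > threshold)
-
-    # Persistence rule: only runs of min_duration or more become EOIs.
-    eois, start = [], None
-    for i, flag in enumerate(flags + [False]):
-        if flag and start is None:
-            start = i
-        elif not flag and start is not None:
-            if i - start >= min_duration:
-                eois.append((start, i - 1))
-            start = None
-    return eois
-```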
-
-
-### Concept 2: Context and correlation of spikes are essential strategies for root cause exploration
-
-In a complex system, many resources may behave anomalously within the short time range of an incident. In the figure below, an application that is experiencing throttling at the DynamoDB level is likely to exhibit symptoms in the same time range at the EC2, ELB Target Group, and ELB levels. Root Cause Explorer leverages this insight to correlate EOIs based on the following dimensions:
-
-* AWS account (account ID)
-* Time range
-* AWS region
-* AWS Namespace
-* Entity (resource identifier)
- :::note
- If an AWS X-Ray source is configured, services show as entities in the entity dimension.
- :::
-* AWS tags
-* Golden signals: error, latency, throughput, bottleneck
-* Metric name
- :::note
- If an X-Ray source is configured, throughput, load, latency, and error metrics corresponding to service entities are shown in this dimension.
- :::
-* Advanced filters
- * Metric periodicity
- * Metric stability
- * Intensity—the extent of drift from baseline
- * Duration of EOI
- * Positive or negative drift. Negative drifts can lead to an incident. Positive drifts typically relate to metrics that have bounced back from an abnormal level, indicating recovery. However, not all positive drifts are good: for example, a down-spike in CPU utilization on an EC2 instance may be the result of a breakage in connected upstream resources.
-
-In large deployments, thousands of AWS CloudWatch, Tracing, or Kubernetes metrics may be anomalous over the course of an outage or incident, making it impossible for an on-call user to deduce which resource(s) may be related to the root cause. With the ability to correlate EOIs based on context, Root Cause Explorer can significantly accelerate incident triage and root cause isolation.
-
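-Conceptually, this correlation step is a multi-dimensional filter over the set of detected EOIs. The data shape below is an illustrative sketch, not Sumo Logic's internal model:
-
-```python
-from dataclasses import dataclass
-
-@dataclass
-class EOI:
-    account: str
-    region: str
-    namespace: str
-    entity: str
-    signal: str  # error, latency, throughput, or bottleneck
-    start: int   # epoch seconds
-    end: int
-
-def correlate(eois, window_start, window_end, **dims):
-    """Keep EOIs that overlap the incident window and match every dimension."""
-    in_window = [e for e in eois if e.start <= window_end and e.end >= window_start]
-    return [e for e in in_window
-            if all(getattr(e, key) == value for key, value in dims.items())]
-
-# Example: EOIs on RDS in us-west-2 during a 30-minute incident window.
-# correlate(all_eois, t0, t0 + 1800, region="us-west-2", namespace="AWS/RDS")
-```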
-
-
-### Concept 3: Connections between resources and services help pinpoint root cause
-
-In a complex system, knowing the connections between resources and the services they serve can help a user trace problems from top-level symptoms to upstream root causes. Root Cause Explorer uses connectivity data from Sumo Logic Tracing Service Maps, AWS X-ray, AWS Inventory data and Kubernetes parent-child relationships, augmented by subject matter expertise, to construct cause-impact and topology-induced groupings of Events of Interest.
-
-In the figure below, an application that is experiencing throttling at the DynamoDB level will likely exhibit symptoms, in the form of abnormal spikes, at connected EC2 instances, ELB Target Group, and ELB levels. Root Cause Explorer discovers the topology of your AWS infrastructure using its AWS inventory source. This topology helps Root Cause Explorer group anomalous metrics, for example:
-
-* A single abnormal spike on a single resource, like an unusual CPU spike on an EC2 instance.
-* A disparate group of abnormal spikes on a single resource, like an unusual "Network In" spike and an unusual "Network Out" traffic spike on a single EC2 instance.
-* Spikes are also grouped based on statistics for a given metric on a single entity. For example, if there are anomalies for Average, Max and Sum on a certain metric (provided they occur in the same time range) on an EC2 instance, they are grouped together.
-* A group of similar unusual spikes on a collection of resources that are members of an EC2 autoscaling group or ELB target group.
-* Using Tracing’s Service Map and AWS X-ray, Root Cause Explorer groups Events of Interest on services observed by these systems because a spike in metrics of one service will likely result in a spike in a connected service.
-
-For resources like API Gateway and Application Load Balancers, special notation and logic are used to drive grouping of EOIs, given that these are parent entities that enclose other layers in an AWS stack. For API Gateway, Events of Interest are computed for the following combinations:
-
-* `API Name`
-* `API Name` and `Stage`
-* `API Name`, `Stage`, `Resource`, and `API Method`
-
-So, an EOI grouped on an API Gateway entity might consist of EOIs on entities derived from any of the following entity hierarchies:
-
-* `API Name` only, for example `OrderManagement`
-* `API Name::stage`, for example `OrderManagement::Prod`
-* `API Name::stage::resource::method`, for example, `OrderManagement::Prod::/::POST`
-
-In such a case, the three EOIs would be grouped together, in conjunction with the entity/topology derived grouping.
-
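-One way to picture the hierarchy-based grouping above: treat each entity identifier as a `::`-delimited path and roll EOIs up to their root API name. A hypothetical sketch, not the product's actual grouping logic:
-
-```python
-from collections import defaultdict
-
-def group_by_api(entity_ids):
-    """Group API Gateway entity identifiers under their root API name."""
-    groups = defaultdict(list)
-    for entity in entity_ids:
-        groups[entity.split("::", 1)[0]].append(entity)
-    return dict(groups)
-
-entities = [
-    "OrderManagement",
-    "OrderManagement::Prod",
-    "OrderManagement::Prod::/::POST",
-]
-# All three identifiers land in a single group keyed "OrderManagement".
-print(group_by_api(entities))
-```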
-
-
-
-### Concept 4: Earlier spikes are closer to root cause
-
-In a complex system, resources or services that break at the early stages of an incident are closer to the probable cause than resources that experience spikes later. Root Cause Explorer exploits this insight to display spikes on a timeline.
-
-
-
-### Concept 5: Real root cause requires exploration of log and trace data
-
-Root Cause Explorer helps triage the first level of root cause which can then drive quick recovery. However, it is also important to understand what caused the system to get into the state that caused an incident. This often requires exploring logs associated with an application or microservice. In the example in the figure below, the real root cause for DynamoDB throttling spikes is a change in the Provisioned IOPS setting on a table. Lowering this setting, while lowering AWS costs, can also lead to throttling. Such a configuration change might be evident in AWS CloudTrail logs associated with DynamoDB. Likewise, services experiencing spikes may require the user to explore Traces associated with them to diagnose and troubleshoot further.
-
-
-
-### Concept 6: Golden signals help organize root cause exploration
-
-If you've read the [Google SRE handbook](https://landing.google.com/sre/sre-book/chapters/preface/), you'll be familiar with the golden signals of load, latency, bottleneck and errors. In a nutshell, errors and latency are signals that most affect users because your service is either inaccessible or slow. Bottleneck and load signals are likely early symptoms (and probable root causes) that may lead to latency and errors. Root Cause Explorer classifies each AWS CloudWatch and Kubernetes metric into one of the golden signals to help users navigate spikes using golden signals and arrive at the root cause.
-
-## Set up Root Cause Explorer
-
-Before you begin, ensure that your organization is entitled to the appropriate features. The account types and levels that support Root Cause Explorer are listed in [Availability](#availability), above. The [AWS Observability Solution](/docs/observability/aws/) is a prerequisite for AWS customers. If you have Kubernetes and tracing metrics, collection should be configured. For information about collecting Kubernetes and Tracing metrics, see [Set up collection for Kubernetes](kubernetes/collection-setup.md) and Getting Started with Transaction Tracing.
-
-You set up Root Cause Explorer using an [AWS CloudFormation template](https://sumologic-appdev-aws-sam-apps.s3.amazonaws.com/sumologic_observability.master.template.yaml). The template installs the AWS Inventory Source and, optionally, the AWS X-Ray source in your Sumo Logic account. The AWS Inventory Source collects metadata and topology relationships for resources belonging to the namespaces listed below:
-
-* AWS/EC2
-* AWS/EBS
-* AWS/ELB
-* AWS/ApplicationELB
-* AWS/NetworkELB
-* AWS/Lambda
-* AWS/RDS
-* AWS/Dynamodb
-* AWS/API Gateway
-* AWS/ECS
-* AWS/ElastiCache
-* AWS/Autoscaling. Auto Scaling data is used only for topology inference. CloudWatch metrics related to Auto Scaling groups are not supported at this time.
-
-If you don’t already have the Sumo Logic CloudWatch Source for Metrics configured, the template will install the source to collect AWS CloudWatch metrics from the account permissioned by the credential provided in the template. The CloudFormation template gives you the option to configure an AWS X-Ray source, if required.
-
-The CloudFormation template relies on the IAM role policies listed in the [Appendix](#appendix) below.
-
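-If you script your AWS setup rather than using the console, the stack can also be launched from the hosted template with the AWS SDK. The stack name, region, and capability flags below are illustrative assumptions; supply whatever parameters the template actually prompts for:
-
-```python
-import boto3
-
-cloudformation = boto3.client("cloudformation", region_name="us-west-2")
-
-# Launch the hosted Sumo Logic observability template. IAM capabilities are
-# acknowledged because the template creates roles for the sources it installs.
-cloudformation.create_stack(
-    StackName="sumo-root-cause-explorer-sources",  # illustrative name
-    TemplateURL=(
-        "https://sumologic-appdev-aws-sam-apps.s3.amazonaws.com/"
-        "sumologic_observability.master.template.yaml"
-    ),
-    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
-)
-```
-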
-## Root Cause Explorer features
-
-Root Cause Explorer provides filters that you can use to narrow your EOIs. For more information, see [Search Filters for AWS and hosts](#search-filters-for-aws-and-hosts).
-
-You can also adjust the timeline to match the context—for example, if you know that an incident happened in the last 60 minutes, pick that duration in the duration picker. If you are concerned about errors, pick the Error legend in the EOI panel to filter EOIs by metric type. Click an error EOI to view its details.
-
-In the screenshot below, an EOI on a DynamoDB table is shown. Click an EOI bubble to view its details and the details of the underlying time series in the right-hand panel. Next, click the namespace filter and view the list of impacted namespaces with their count of EOIs. Pick the top namespaces based on EOI count—these represent the prime suspects with respect to the incident.
-
-
-
-Among the search filters in Root Cause Explorer, the Advanced Filters provide five dimensions you can use to narrow down EOIs, as shown below. Each dimension indicates the number of associated EOIs. The dimensions are:
-
-* Impact. The EOI is positive (for example, a decrease in latency, errors, bottleneck metrics) or negative (for example, an increase in latency, errors, bottleneck metrics). Note that a positive impact is not necessarily a good thing: a CPU metric that has dropped significantly may imply problems in microservices that are upstream of the node experiencing the drop in CPU utilization.
-* Intensity. The extent of drift from the expected value of a metric—classified as High, Medium or Low. Other things being equal, high intensity EOIs require more attention than others.
-* Duration. How long a metric has been anomalous.
-* Seasonality. Seasonality of the metric, on a 0 (low) to 100 (high) scale. This adds context and eliminates false positives in time series data that may otherwise look anomalous due to the presence of periodicity.
-* Stability. Stability of the metric, based on a proprietary algorithm, on a 0 (low) to 100 (high) scale. EOIs on metrics that are usually stable are probably more indicative of a root cause than other metrics.
-
-## About the Root Cause Explorer UI
-
-This section describes the Root Cause Explorer UI and how to interact with it.
-
-### Accessing Root Cause Explorer
-
-To open the Root Cause Explorer:
-
-1. Go to the **Home** screen and select **Root Cause**.