diff --git a/docs/en/serverless/ai-assistant/ai-assistant.mdx b/docs/en/serverless/ai-assistant/ai-assistant.mdx
deleted file mode 100644
index 5350e44906..0000000000
--- a/docs/en/serverless/ai-assistant/ai-assistant.mdx
+++ /dev/null
@@ -1,322 +0,0 @@
----
-slug: /serverless/observability/ai-assistant
-title: AI Assistant
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-
-
-The AI Assistant uses generative AI to provide:
-
-* **Chat**: Have conversations with the AI Assistant. Chat uses function calling to request, analyze, and visualize your data.
-* **Contextual insights**: Open prompts throughout ((observability)) that explain errors and messages and suggest remediation.
-
-
-
-The AI Assistant integrates with your large language model (LLM) provider through our supported Elastic connectors:
-
-* [OpenAI connector](((kibana-ref))/openai-action-type.html) for OpenAI or Azure OpenAI Service.
-* [Amazon Bedrock connector](((kibana-ref))/bedrock-action-type.html) for Amazon Bedrock, specifically for the Claude models.
-* [Google Gemini connector](((kibana-ref))/gemini-action-type.html) for Google Gemini.
-
-
-The AI Assistant is powered by an integration with your large language model (LLM) provider.
-LLMs are known to sometimes present incorrect information as if it's correct.
-Elastic supports configuration and connection to the LLM provider and your knowledge base,
-but is not responsible for the LLM's responses.
-
-
-
-Also, the data you provide to the Observability AI assistant is _not_ anonymized, and is stored and processed by the third-party AI provider. This includes any data used in conversations for analysis or context, such as alert or event data, detection rule configurations, and queries. Therefore, be careful about sharing any confidential or sensitive details while using this feature.
-
-
-## Requirements
-
-The AI assistant requires the following:
-
-* An account with a third-party generative AI provider that preferably supports function calling.
-If your AI provider does not support function calling, you can configure AI Assistant settings under **Project settings** → **Management** → **AI Assistant for Observability Settings** to simulate function calling, but this might affect performance.
-
- Refer to the [connector documentation](((kibana-ref))/action-types.html) for your provider to learn about supported and default models.
-* The knowledge base requires a 4 GB ((ml)) node.
-
-
- The free tier offered by third-party generative AI providers may not be sufficient for the proper functioning of the AI assistant.
- In most cases, a paid subscription to one of the supported providers is required.
-    The Observability AI assistant doesn't support connecting to a private LLM,
-    and Elastic doesn't recommend using private LLMs with it.
-
-
-## Your data and the AI Assistant
-
-Elastic does not use customer data for model training. This includes anything you send the model, such as alert or event data, detection rule configurations, queries, and prompts. However, any data you provide to the AI Assistant will be processed by the third-party provider you chose when setting up the OpenAI connector as part of the assistant setup.
-
-Elastic does not control third-party tools, and assumes no responsibility or liability for their content, operation, or use, nor for any loss or damage that may arise from your using such tools. Please exercise caution when using AI tools with personal, sensitive, or confidential information. Any data you submit may be used by the provider for AI training or other purposes. There is no guarantee that the provider will keep any information you provide secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use.
-
-## Set up the AI Assistant
-
-To set up the AI Assistant:
-
-1. Create an authentication key with your AI provider to authenticate requests from the AI Assistant. You'll use this in the next step. Refer to your provider's documentation for information about creating authentication keys:
- * [OpenAI API keys](https://platform.openai.com/docs/api-reference)
- * [Azure OpenAI Service API keys](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference)
- * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html)
- * [Google Gemini service account keys](https://cloud.google.com/iam/docs/keys-list-get)
-1. From **Project settings** → **Management** → **Connectors**, create a connector for your AI provider:
- * [OpenAI](((kibana-ref))/openai-action-type.html)
- * [Amazon Bedrock](((kibana-ref))/bedrock-action-type.html)
- * [Google Gemini](((kibana-ref))/gemini-action-type.html)
-1. Authenticate communication between ((observability)) and the AI provider by providing the following information:
- 1. In the **URL** field, enter the AI provider's API endpoint URL.
- 1. Under **Authentication**, enter the key or secret you created in the previous step.
-
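-If you prefer to script this step, connectors can also be created through the Kibana connector API in **Developer Tools** → **Console**. The following is only a minimal sketch for an OpenAI connector; the connector name, model, endpoint URL, and key are placeholders you must replace with your own values:
-
-```console
-POST kbn:/api/actions/connector
-{
-  "name": "my-openai-connector",
-  "connector_type_id": ".gen-ai",
-  "config": {
-    "apiProvider": "OpenAI",
-    "apiUrl": "https://api.openai.com/v1/chat/completions",
-    "defaultModel": "gpt-4o"
-  },
-  "secrets": {
-    "apiKey": "<your API key>"
-  }
-}
-```
-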
-## Add data to the AI Assistant knowledge base
-
-
-**If you started using the AI Assistant in technical preview**,
-any knowledge base articles you created using ELSER v1 will need to be reindexed or upgraded before they can be used.
-Going forward, you must create knowledge base articles using ELSER v2.
-You can either:
-
-* Clear all old knowledge base articles manually and reindex them.
-* Upgrade all knowledge base articles indexed with ELSER v1 to ELSER v2 using a [Python script](https://github.com/elastic/elasticsearch-labs/blob/main/notebooks/model-upgrades/upgrading-index-to-use-elser.ipynb).
-
-
-The AI Assistant uses [ELSER](((ml-docs))/ml-nlp-elser.html), Elastic's semantic search model, to recall data from its internal knowledge base index to create retrieval augmented generation (RAG) responses. Adding data such as runbooks, GitHub issues, internal documentation, and Slack messages to the knowledge base gives the AI Assistant context to provide more specific assistance.
-
-
-Your AI provider may collect telemetry when using the AI Assistant. Contact your AI provider for information on how data is collected.
-
-
-You can add information to the knowledge base by asking the AI Assistant to remember something while chatting (for example, "remember this for next time"). The assistant will create a summary of the information and add it to the knowledge base.
-
-You can also add external data to the knowledge base either in the Project Settings UI or using the ((es)) Index API.
-
-### Use the UI
-
-To add external data to the knowledge base in the Project Settings UI:
-
-1. Go to **Project Settings**.
-1. In the _Other_ section, click **AI assistant for Observability settings**.
-1. Select **Elastic AI Assistant for Observability**.
-1. Switch to the **Knowledge base** tab.
-1. Click the **New entry** button, and choose either:
-
- * **Single entry**: Write content for a single entry in the UI.
- * **Bulk import**: Upload a newline delimited JSON (`ndjson`) file containing a list of entries to add to the knowledge base.
- Each object should conform to the following format:
-
- ```json
- {
- "id": "a_unique_human_readable_id",
- "text": "Contents of item",
- }
- ```
-
-### Use the ((es)) Index API
-
-1. Ingest external data (GitHub issues, Markdown files, Jira tickets, text files, etc.) into ((es)) using the ((es)) [Index API](((ref))/docs-index_.html).
-1. Reindex your data into the AI Assistant's knowledge base index by completing the following query in **Developer Tools** → **Console**. Update the following fields before reindexing:
- * `InternalDocsIndex`: Name of the index where your internal documents are stored.
- * `text_field`: Name of the field containing your internal documents' text.
- * `timestamp`: Name of the timestamp field in your internal documents.
- * `public`: If `true`, the document is available to all users with access to your Observability project. If `false`, the document is restricted to the user indicated in the following `user.name` field.
- * `user.name` (optional): If defined, restricts the internal document's availability to a specific user.
- * You can add a query filter to index specific documents.
-
-```console
-POST _reindex
-{
- "source": {
- "index": "",
- "_source": [
- "",
- "",
- "namespace",
- "is_correction",
- "public",
- "confidence"
- ]
- },
- "dest": {
- "index": ".kibana-observability-ai-assistant-kb-000001",
- "pipeline": ".kibana-observability-ai-assistant-kb-ingest-pipeline"
- },
- "script": {
- "inline": "ctx._source.text = ctx._source.remove(\"\");ctx._source.namespace=\"\";ctx._source.is_correction=false;ctx._source.public=;ctx._source.confidence=\"high\";ctx._source['@timestamp'] = ctx._source.remove(\"\");ctx._source['user.name'] = \"\""
- }
-}
-```
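-
-For step 1, ingestion can be as simple as indexing one document per piece of internal content. This is only a sketch; the index (`internal-docs`) and the `content` and `created_at` fields are hypothetical names that would map to `InternalDocsIndex`, `text_field`, and `timestamp` in the reindex request above:
-
-```console
-POST internal-docs/_doc
-{
-  "content": "Runbook: if the checkout service reports 5xx errors, restart the pods in the checkout namespace and check the payment gateway status page.",
-  "created_at": "2024-05-01T09:00:00Z"
-}
-```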
-
-## Interact with the AI Assistant
-
-You can chat with the AI Assistant or interact with contextual insights located throughout ((observability)).
-See the following sections for more on interacting with the AI Assistant.
-
-
-After every answer the LLM provides, let us know if the answer was helpful.
-Your feedback helps us improve the AI Assistant!
-
-
-### Chat with the assistant
-
-Click **AI Assistant** in the upper-right corner where available to start the chat:
-
-
-
-This opens the AI Assistant flyout, where you can ask the assistant questions about your instance:
-
-
-
-
- Asking questions about your data requires function calling, which enables LLMs to reliably interact with third-party generative AI providers to perform searches or run advanced functions using customer data.
-
- When the Observability AI Assistant performs searches in the cluster, the queries are run with the same level of permissions as the user.
-
-
-### Suggest functions
-
-
-
-The AI Assistant uses several functions to include relevant context in the chat conversation through text, data, and visual components. Both you and the AI Assistant can suggest functions. You can also edit the AI Assistant's function suggestions and inspect function responses. For example, you could use the `kibana` function to call a ((kib)) API on your behalf.
-
-You can suggest the following functions:
-
-
-
- `alerts`
- Get alerts for ((observability)).
-
-
- `elasticsearch`
- Call ((es)) APIs on your behalf.
-
-
- `kibana`
- Call ((kib)) APIs on your behalf.
-
-
- `summarize`
- Summarize parts of the conversation.
-
-
- `visualize_query`
- Visualize charts for ES|QL queries.
-
-
-
-Additional functions are available when your cluster has APM data:
-
-
-
- `get_apm_correlations`
- Get field values that are more prominent in the foreground set than the background set. This can be useful in determining which attributes (such as `error.message`, `service.node.name`, or `transaction.name`) are contributing to, for instance, a higher latency. Another option is a time-based comparison, where you compare before and after a change point.
-
-
- `get_apm_downstream_dependencies`
- Get the downstream dependencies (services or uninstrumented backends) for a service. Map the downstream dependency name to a service by returning both `span.destination.service.resource` and `service.name`. Use this to drill down further if needed.
-
-
- `get_apm_error_document`
-    Get a sample error document based on the grouping name. This also includes the stacktrace of the error, which might hint at the cause.
-
-
- `get_apm_service_summary`
- Get a summary of a single service, including the language, service version, deployments, the environments, and the infrastructure that it is running in. For example, the number of pods and a list of their downstream dependencies. It also returns active alerts and anomalies.
-
-
- `get_apm_services_list`
- Get the list of monitored services, their health statuses, and alerts.
-
-
- `get_apm_timeseries`
-    Display different APM metrics (such as throughput, failure rate, or latency) for any service or all services and any or all of their dependencies. Metrics are displayed both as a time series and as a single statistic. Additionally, the function returns any changes, such as spikes, step and trend changes, or dips. You can also use it to compare data by requesting two different time ranges, or, for example, two different service versions.
-
-
-
-### Use contextual prompts
-
-AI Assistant contextual prompts throughout ((observability)) provide the following information:
-
-- **Alerts**: Provides possible causes and remediation suggestions for log rate changes.
-- **Application performance monitoring (APM)**: Explains APM errors and provides remediation suggestions.
-- **Logs**: Explains log messages and generates search patterns to find similar issues.
-
-{/* Not included in initial serverless launch */}
-{/* - **Universal Profiling**: explains the most expensive libraries and functions in your fleet and provides optimization suggestions. */}
-{/* - **Infrastructure Observability**: explains the processes running on a host. */}
-
-For example, in the log details, you'll see prompts for **What's this message?** and **How do I find similar log messages?**:
-
-
-
-Clicking a prompt generates a message specific to that log entry.
-You can continue a conversation from a contextual prompt by clicking **Start chat** to open the AI Assistant chat.
-
-
-
-### Add the AI Assistant connector to alerting workflows
-
-You can use the [Observability AI Assistant connector](((kibana-ref))/obs-ai-assistant-action-type.html) to add AI-generated insights and custom actions to your alerting workflows.
-To do this:
-
-1. Create a rule and specify the conditions that must be met for the alert to fire.
-1. Under **Actions**, select the **Observability AI Assistant** connector type.
-1. In the **Connector** list, select the AI connector you created when you set up the assistant.
-1. In the **Message** field, specify the message to send to the assistant:
-
-
-
-You can ask the assistant to generate a report of the alert that fired,
-recall any information or potential resolutions of past occurrences stored in the knowledge base,
-provide troubleshooting guidance and resolution steps,
-and also include other active alerts that may be related.
-As a last step, you can ask the assistant to trigger an action,
-such as sending the report (or any other message) to a Slack webhook.
-
-
- Currently you can only send messages to Slack, email, Jira, PagerDuty, or a webhook.
- Additional actions will be added in the future.
-
-
-When the alert fires, contextual details about the event—such as when the alert fired,
-the service or host impacted, and the threshold breached—are sent to the AI Assistant,
-along with the message provided during configuration.
-The AI Assistant runs the tasks requested in the message and creates a conversation you can use to chat with the assistant:
-
-
-
-
- Conversations created by the AI Assistant are public and accessible to every user with permissions to use the assistant.
-
-
-It might take a minute or two for the AI Assistant to process the message and create the conversation.
-
-Note that overly broad prompts may result in the request exceeding token limits.
-For more information, refer to the Token limits known issue below.
-Also, attempting to analyze several alerts in a single connector execution may cause you to exceed the function call limit.
-If this happens, modify the message specified in the connector configuration to avoid exceeding limits.
-
-When asked to send a message to another connector, such as Slack,
-the AI Assistant attempts to include a link to the generated conversation.
-
-
-
-The Observability AI Assistant connector is called when the alert fires and when it recovers.
-
-To learn more about alerting, actions, and connectors, refer to the alerting documentation.
-
-## Known issues
-
-
-
-### Token limits
-
-Most LLMs have a set number of tokens they can manage in a single conversation.
-When you reach the token limit, the LLM will throw an error, and Elastic will display a "Token limit reached" error.
-The exact number of tokens that the LLM can support depends on the LLM provider and model you're using.
-If you are using an OpenAI connector, you can monitor token usage in the **OpenAI Token Usage** dashboard.
-For more information, refer to the [OpenAI Connector documentation](((kibana-ref))/openai-action-type.html#openai-connector-token-dashboard).
diff --git a/docs/en/serverless/aiops/aiops-analyze-spikes.mdx b/docs/en/serverless/aiops/aiops-analyze-spikes.mdx
deleted file mode 100644
index 7cd105ec1d..0000000000
--- a/docs/en/serverless/aiops/aiops-analyze-spikes.mdx
+++ /dev/null
@@ -1,75 +0,0 @@
----
-slug: /serverless/observability/aiops-analyze-spikes
-title: Analyze log spikes and drops
-description: Find and investigate the causes of unusual spikes or drops in log rates.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-
-
-{/* */}
-
-((observability)) provides built-in log rate analysis capabilities,
-based on advanced statistical methods,
-to help you find and investigate the causes of unusual spikes or drops in log rates.
-
-To analyze log spikes and drops:
-
-1. In your ((observability)) project, go to **AIOps** → **Log rate analysis**.
-1. Choose a data view or saved search to access the log data you want to analyze.
-1. In the histogram chart, click a spike (or drop) to start the analysis.
-
- 
-
- When the analysis runs, it identifies statistically significant field-value combinations that contribute to the spike or drop,
- and then displays them in a table:
-
- 
-
- Notice that you can optionally turn on **Smart grouping** to summarize the results into groups.
- You can also click **Filter fields** to remove fields that are not relevant.
-
- The table shows an indicator of the level of impact and a sparkline showing the shape of the impact in the chart.
-1. Select a row to display the impact of the field on the histogram chart.
-1. From the **Actions** menu in the table, you can choose to view the field in **Discover**,
-view it in **Log Pattern Analysis**,
-or copy the table row information to the clipboard as a query filter.
-
-To pin a table row, click the row, then move the cursor to the histogram chart.
-It displays a tooltip with exact count values for the pinned field which enables closer investigation.
-
-Brushes in the chart show the baseline time range and the deviation in the analyzed data.
-You can move the brushes to redefine both the baseline and the deviation and rerun the analysis with the modified values.
-
-
-
-
-## Log pattern analysis
-
-{/* */}
-
-Use log pattern analysis to find patterns in unstructured log messages and examine your data.
-When you run a log pattern analysis, it performs categorization analysis on a selected field,
-creates categories based on the data, and then displays them together in a chart.
-The chart shows the distribution of each category and an example document that matches the category.
-Log pattern analysis is useful when you want to examine how often different types of logs appear in your data set.
-It also helps you group logs in ways that go beyond what you can achieve with a terms aggregation.
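-
-If you want to experiment with the same idea directly against an index, a conceptually similar grouping can be produced with the `categorize_text` aggregation in **Developer Tools** → **Console**; the index and field names in this sketch are assumptions:
-
-```console
-GET logs-*/_search
-{
-  "size": 0,
-  "aggs": {
-    "message_patterns": {
-      "categorize_text": {
-        "field": "message"
-      }
-    }
-  }
-}
-```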
-
-To run log pattern analysis:
-
-1. Follow the steps in the previous section to run a log rate analysis.
-1. From the **Actions** menu, choose **View in Log Pattern Analysis**.
-1. Select a category field and optionally apply any filters that you want.
-1. Click **Run pattern analysis**.
-
- The results of the analysis are shown in a table:
-
- 
-
-1. From the **Actions** menu, click the plus (or minus) icon to open **Discover** and show (or filter out) the given category there, which helps you to further examine your log messages.
-
-{/* TODO: Question: Is the log pattern analysis only available through the log rate analysis UI? */}
-
-{/* TODO: Add some good examples to this topic taken from existing docs or recommendations from reviewers. */}
diff --git a/docs/en/serverless/aiops/aiops-detect-anomalies.mdx b/docs/en/serverless/aiops/aiops-detect-anomalies.mdx
deleted file mode 100644
index 99af3b07ea..0000000000
--- a/docs/en/serverless/aiops/aiops-detect-anomalies.mdx
+++ /dev/null
@@ -1,264 +0,0 @@
----
-slug: /serverless/observability/aiops-detect-anomalies
-title: Detect anomalies
-description: Detect anomalies by comparing real-time and historical data from different sources to look for unusual, problematic patterns.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-The anomaly detection feature in ((observability)) automatically models the normal behavior of your time series data — learning trends,
-periodicity, and more — in real time to identify anomalies, streamline root cause analysis, and reduce false positives.
-
-To set up anomaly detection, you create and run anomaly detection jobs.
-Anomaly detection jobs use proprietary ((ml)) algorithms to detect anomalous events or patterns, such as:
-
-* Anomalies related to temporal deviations in values, counts, or frequencies
-* Anomalies related to unusual locations in geographic data
-* Statistical rarity
-* Unusual behaviors for a member of a population
-
-To learn more about anomaly detection algorithms, refer to the [((ml))](((ml-docs))/ml-ad-algorithms.html) documentation.
-Note that the ((ml)) documentation may contain details that are not valid when using a serverless project.
-
-
-
-A _datafeed_ retrieves time series data from ((es)) and provides it to an
-anomaly detection job for analysis.
-
-The job uses _buckets_ to divide the time series into batches for processing.
-For example, a job may use a bucket span of 1 hour.
-
-Each ((anomaly-job)) contains one or more _detectors_, which define the type of
-analysis that occurs (for example, `max`, `average`, or `rare` analytical
-functions) and the fields that are analyzed. Some of the analytical functions
-look for single anomalous data points. For example, `max` identifies the maximum
-value that is seen within a bucket. Others perform some aggregation over the
-length of the bucket. For example, `mean` calculates the mean of all the data
-points seen within the bucket.
-
-To learn more about anomaly detection, refer to the [((ml))](((ml-docs))/ml-ad-overview.html) documentation.
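-
-If you ever need to script job creation, the same concepts map onto the ((ml)) APIs. The following sketch uses hypothetical job, index, and field names: it defines a single detector that models the mean of a CPU metric in 15-minute buckets, plus a datafeed that supplies the job with data from an index.
-
-```console
-PUT _ml/anomaly_detectors/cpu-usage-demo
-{
-  "analysis_config": {
-    "bucket_span": "15m",
-    "detectors": [
-      { "function": "mean", "field_name": "system.cpu.total.norm.pct" }
-    ]
-  },
-  "data_description": {
-    "time_field": "@timestamp"
-  }
-}
-
-PUT _ml/datafeeds/datafeed-cpu-usage-demo
-{
-  "job_id": "cpu-usage-demo",
-  "indices": ["metrics-*"],
-  "query": { "match_all": {} }
-}
-```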
-
-
-
-
-
-## Create and run an anomaly detection job
-
-1. In your ((observability)) project, go to **AIOps** → **Anomaly detection**.
-1. Click **Create anomaly detection job** (or **Create job** if other jobs exist).
-1. Choose a data view or saved search to access the data you want to analyze.
-1. Select the wizard for the type of job you want to create.
-The following wizards are available.
-You might also see specialized wizards based on the type of data you are analyzing.
-
- In general, it is a good idea to start with single metric anomaly detection jobs for your key performance indicators.
- After you examine these simple analysis results, you will have a better idea of what the influencers might be.
- Then you can create multi-metric jobs and split the data or create more complex analysis functions as necessary.
-
-
-
- Single metric
-
- Creates simple jobs that have a single detector. A _detector_ applies an analytical function to specific fields in your data. In addition to limiting the number of detectors, the single metric wizard omits many of the more advanced configuration options.
-
- Multi-metric
-
- Creates jobs that can have more than one detector, which is more efficient than running multiple jobs against the same data.
-
- Population
-
- Creates jobs that detect activity that is unusual compared to the behavior of the population.
-
- Advanced
-
- Creates jobs that can have multiple detectors and enables you to configure all job settings.
-
- Categorization
-
- Creates jobs that group log messages into categories and use `count` or `rare` functions to detect anomalies within them.
-
- Rare
-
- Creates jobs that detect rare occurrences in time series data. Rare jobs use the `rare` or `freq_rare` functions and also detect rare occurrences in populations.
-
- Geo
-
- Creates jobs that detect unusual occurrences in the geographic locations of your data. Your data set must contain geo data.
-
-
-
- For more information about job types, refer to the [((ml))](((ml-docs))/ml-anomaly-detection-job-types.html) documentation.
-
-
-
- Before selecting a wizard, click **Data Visualizer** to explore the fields and metrics in your data.
- To get the best results, you must understand your data, including its data types and the range and distribution of values.
-
- In the **Data Visualizer**, use the time filter to select a time period that you’re interested in exploring,
- or click **Use full data** to view the full time range of data.
- Expand the fields to see details about the range and distribution of values.
- When you're done, go back to the first step and create your job.
-
-5. Step through the instructions in the job creation wizard to configure your job.
-You can accept the default settings for most options now and change them later if needed.
-1. If you want the job to start immediately when the job is created, make sure that option is selected on the summary page.
-1. When you're done, click **Create job**.
-When the job runs, the ((ml)) features analyze the input stream of data, model its behavior, and perform analysis based on the detectors in each job.
-When an event occurs outside of the baselines of normal behavior, that event is identified as an anomaly.
-1. After the job is started, click **View results**.
-
-## View the results
-
-After the anomaly detection job has processed some data,
-you can view the results in ((observability)).
-
-
-Depending on the capacity of your machine,
-you might need to wait a few seconds for the analysis to generate initial results.
-
-
-If you clicked **View results** after creating the job, the results open in either the **Single Metric Viewer** or **Anomaly Explorer**.
-To switch between these tools, click the icons in the upper-left corner of each tool.
-
-Read the following sections to learn more about these tools:
-
-* View single metric job results
-* View advanced or multi-metric job results
-
-
-
-## View single metric job results
-
-The **Single Metric Viewer** contains a chart that represents the actual and expected values over time:
-
-
-
-* The line in the chart represents the actual data values.
-* The shaded area represents the bounds for the expected values.
-* The area between the upper and lower bounds contains the most likely values for the model, using a 95% confidence level.
-That is, there is a 95% chance of the actual value falling within these bounds.
-If a value falls outside this area, it is usually identified as anomalous.
-
-
- Expected values are available only if **Enable model plot** was selected under Job Details
- when you created the job.
-
-
-To explore your data:
-
-1. If the **Single Metric Viewer** is not already open, go to **AIOps** → **Anomaly detection** and click the Single Metric Viewer icon next to the job you created.
-Note that the Single Metric Viewer icon will be grayed out for advanced or multi-metric jobs.
-1. In the time filter, specify a time range that covers the majority of the analyzed data points.
-1. Notice that the model improves as it processes more data.
-At the beginning, the expected range of values is pretty broad, and the model is not capturing the periodicity in the data.
-But it quickly learns and begins to reflect the patterns in your data.
-The duration of the learning process heavily depends on the characteristics and complexity of the input data.
-1. Look for anomaly data points, depicted by colored dots or cross symbols, and hover over a data point to see more details about the anomaly.
-Note that anomalies with medium or high multi-bucket impact are depicted with a cross symbol instead of a dot.
-
- Any data points outside the range that was predicted by the model are marked
- as anomalies. In order to provide a sensible view of the results, an
- _anomaly score_ is calculated for each bucket time interval. The anomaly score
- is a value from 0 to 100, which indicates the significance of the anomaly
- compared to previously seen anomalies. The highly anomalous values are shown in
- red and the low scored values are shown in blue. An interval with a high
- anomaly score is significant and requires investigation.
- For more information about anomaly scores, refer to the [((ml))](((ml-docs))/ml-ad-explain.html) documentation.
-
-1. (Optional) Annotate your job results by drag-selecting a period of time and entering annotation text.
-Annotations are notes that refer to events in a specific time period.
-They can be created by the user or generated automatically by the anomaly detection job to reflect model changes and noteworthy occurrences.
-1. Under **Anomalies**, expand each anomaly to see key details, such as the time, the actual and expected ("typical") values, and their probability.
-The **Anomaly explanation** section gives you further insights about each anomaly, such as its type and impact, to make it easier to interpret the job results:
-
- 
-
- By default, the **Anomalies** table contains all anomalies that have a severity of "warning" or higher in the selected section of the timeline.
- If you are only interested in critical anomalies, for example, you can change the severity threshold for this table.
-
-1. (Optional) From the **Actions** menu in the **Anomalies** table, you can choose to view relevant documents in **Discover** or create a job rule.
-Job rules instruct anomaly detectors to change their behavior based on domain-specific knowledge that you provide.
-To learn more, refer to Create job rules and filters.
-
-After you have identified anomalies, often the next step is to try to determine
-the context of those situations. For example, are there other factors that are
-contributing to the problem? Are the anomalies confined to particular
-applications or servers? You can begin to troubleshoot these situations by
-layering additional jobs or creating multi-metric jobs.
-
-
-
-## View advanced or multi-metric job results
-
-Conceptually, you can think of _multi-metric anomaly detection jobs_ as running multiple independent single metric jobs.
-By bundling them together in a multi-metric job, however,
-you can see an overall score and shared influencers for all the metrics and all the entities in the job.
-Multi-metric jobs therefore scale better than having many independent single metric jobs.
-They also provide better results when you have influencers that are shared across the detectors.
-
-
-When you create an anomaly detection job, you can identify fields as _influencers_.
-These are fields that you think contain information about someone or something that influences or contributes to anomalies.
-As a best practice, do not pick too many influencers.
-For example, you generally do not need more than three.
-If you pick many influencers, the results can be overwhelming, and there is some overhead to the analysis.
-
-To learn more about influencers, refer to the [((ml))](((ml-docs))/ml-ad-run-jobs.html#ml-ad-influencers) documentation.
-
-
-
-You can also configure your anomaly detection jobs to split a single time series into multiple time series based on a categorical field.
-For example, you could create a job for analyzing response code rates that has a single detector that splits the data based on the `response.keyword`,
-and uses the `count` function to determine when the number of events is anomalous.
-You might use a job like this if you want to look at both high and low request rates partitioned by response code.
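-
-In API terms, that kind of split corresponds to a `partition_field_name` on the detector, with influencers listed alongside it. A hypothetical `analysis_config` for the response-code example might look like this (field names are assumptions):
-
-```console
-PUT _ml/anomaly_detectors/response-code-rates
-{
-  "analysis_config": {
-    "bucket_span": "15m",
-    "detectors": [
-      { "function": "count", "partition_field_name": "response.keyword" }
-    ],
-    "influencers": [ "response.keyword", "host.name" ]
-  },
-  "data_description": {
-    "time_field": "@timestamp"
-  }
-}
-```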
-
-To view advanced or multi-metric results in the
-**Anomaly Explorer**:
-
-1. If the **Anomaly Explorer** is not already open, go to **AIOps** → **Anomaly detection** and click the Anomaly Explorer icon next to the job you created.
-1. In the time filter, specify a time range that covers the majority of the analyzed data points.
-1. If you specified influencers during job creation, the view includes a list of the top influencers for all of the detected anomalies in that same time period.
-The list includes maximum anomaly scores, which in this case are aggregated for each influencer, for each bucket, across all detectors.
-There is also a total sum of the anomaly scores for each influencer.
-Use this list to help you narrow down the contributing factors and focus on the most anomalous entities.
-1. Under **Anomaly timeline**, click a section in the swim lanes to obtain more information about the anomalies in that time period.
-
- 
-
- You can see exact times when anomalies occurred.
-    If there are multiple detectors or metrics in the job, you can see which one caught the anomaly.
- You can also switch to viewing this time series in the **Single Metric Viewer** by selecting **View series** in the **Actions** menu.
-1. Under **Anomalies** (in the **Anomaly Explorer**), expand an anomaly to see key details, such as the time,
-the actual and expected ("typical") values, and the influencers that contributed to the anomaly:
-
- 
-
- By default, the **Anomalies** table contains all anomalies that have a severity of "warning" or higher in the selected section of the timeline.
- If you are only interested in critical anomalies, for example, you can change the severity threshold for this table.
-
- If your job has multiple detectors, the table aggregates the anomalies to show the highest severity anomaly per detector and entity,
- which is the field value that is displayed in the **found for** column.
-
- To view all the anomalies without any aggregation, set the **Interval** to **Show all**.
-
-
- The anomaly scores that you see in each section of the **Anomaly Explorer** might differ slightly.
- This disparity occurs because for each job there are bucket results, influencer results, and record results.
- Anomaly scores are generated for each type of result.
- The anomaly timeline uses the bucket-level anomaly scores.
- The list of top influencers uses the influencer-level anomaly scores.
- The list of anomalies uses the record-level anomaly scores.
-
-
-## Next steps
-
-After setting up an anomaly detection job, you may want to:
-
-* Tune your anomaly detection job
-* Forecast future behavior
-* Create an anomaly detection rule to generate alerts when anomalies match specific conditions
diff --git a/docs/en/serverless/aiops/aiops-detect-change-points.mdx b/docs/en/serverless/aiops/aiops-detect-change-points.mdx
deleted file mode 100644
index af5f54923b..0000000000
--- a/docs/en/serverless/aiops/aiops-detect-change-points.mdx
+++ /dev/null
@@ -1,68 +0,0 @@
----
-slug: /serverless/observability/aiops-detect-change-points
-title: Detect change points
-description: Detect distribution changes, trend changes, and other statistically significant change points in a metric of your time series data.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-{/* */}
-
-The change point detection feature in ((observability)) detects distribution changes,
-trend changes, and other statistically significant change points in time series data.
-Unlike anomaly detection, change point detection does not require you to configure a job or generate a model.
-Instead you select a metric and immediately see a visual representation that splits the time series into two parts, before and after the change point.
-
-((observability)) uses a [change point aggregation](((ref))/search-aggregations-change-point-aggregation.html)
-to detect change points. This aggregation can detect change points when:
-
-* a significant dip or spike occurs
-* the overall distribution of values has changed significantly
-* there was a statistically significant step up or down in value distribution
-* an overall trend change occurs
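-
-You can also run the aggregation yourself in **Developer Tools** → **Console** to see the raw result. In this sketch the index, metric field, and interval are assumptions; the `change_point` sibling aggregation inspects the buckets produced by the date histogram:
-
-```console
-GET metrics-*/_search
-{
-  "size": 0,
-  "aggs": {
-    "over_time": {
-      "date_histogram": { "field": "@timestamp", "fixed_interval": "10m" },
-      "aggs": {
-        "avg_cpu": { "avg": { "field": "system.cpu.total.norm.pct" } }
-      }
-    },
-    "cpu_change": {
-      "change_point": { "buckets_path": "over_time>avg_cpu" }
-    }
-  }
-}
-```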
-
-To detect change points:
-
-1. In your ((observability)) project, go to **AIOps** → **Change point detection**.
-1. Choose a data view or saved search to access the data you want to analyze.
-1. Select a function: **avg**, **max**, **min**, or **sum**.
-1. In the time filter, specify a time range over which you want to detect change points.
-1. From the **Metric field** list, select a field you want to check for change points.
-1. (Optional) From the **Split field** list, select a field to split the data by.
-If the cardinality of the split field exceeds 10,000, only the first 10,000 values, sorted by document count, are analyzed.
-Use this option when you want to investigate the change point across multiple instances, pods, clusters, and so on.
-For example, you may want to view CPU utilization split across multiple instances without having to jump across multiple dashboards and visualizations.
-
-
- You can configure a maximum of six combinations of a function applied to a metric field, partitioned by a split field, to identify change points.
-
-
-The change point detection feature automatically dissects the time series into multiple points within the given time window,
-tests whether the behavior is statistically different before and after each point in time, and then detects a change point if one exists:
-
- 
-
-The resulting view includes:
-
-* The timestamp of the change point
-* A preview chart
-* The type of change point and its p-value. The p-value indicates the magnitude of the change; lower values indicate more significant changes.
-* The name and value of the split field, if used.
-
-If the analysis is split by a field, a separate chart is shown for every partition that has a detected change point.
-The chart displays the type of change point, its value, and the timestamp of the bucket where the change point has been detected.
-
-On the **Change point detection** page, you can also:
-
-* Select a subset of charts and click **View selected** to view only the selected charts.
-
- 
-
-* Filter the results by specific types of change points by using the change point type selector:
-
- 
-
-* Attach change points to a chart or dashboard by using the context menu:
-
- 
diff --git a/docs/en/serverless/aiops/aiops-forecast-anomaly.mdx b/docs/en/serverless/aiops/aiops-forecast-anomaly.mdx
deleted file mode 100644
index a07067f542..0000000000
--- a/docs/en/serverless/aiops/aiops-forecast-anomaly.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
----
-slug: /serverless/observability/aiops-forecast-anomalies
-title: Forecast future behavior
-description: Predict future behavior of your data by creating a forecast for an anomaly detection job.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-In addition to detecting anomalous behavior in your data,
-you can use the ((ml)) features to predict future behavior.
-
-You can use a forecast to estimate a time series value at a specific future date.
-For example, you might want to determine how much disk usage to expect
-next Sunday at 09:00.
-
-You can also use a forecast to estimate the probability of a time series value occurring at a future date.
-For example, you might want to determine how likely it is that your disk utilization will reach 100% before the end of next week.
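-
-Forecasts are created per anomaly detection job. Besides the UI steps below, a forecast can also be requested through the ((ml)) forecast API; the job name here is a placeholder:
-
-```console
-POST _ml/anomaly_detectors/disk-usage-demo/_forecast
-{
-  "duration": "1w",
-  "expires_in": "2w"
-}
-```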
-
-To create a forecast:
-
-1. Create an anomaly detection job and view the results in the **Single Metric Viewer**.
-1. Click **Forecast**.
-1. Specify a duration for your forecast.
-This value indicates how far to extrapolate beyond the last record that was processed.
-You must use time units, for example `1w`, `1d`, or `1h`.
-1. Click **Run**.
-1. View the forecast in the **Single Metric Viewer**:
-
- 
-
- * The line in the chart represents the predicted data values.
- * The shaded area represents the bounds for the predicted values, which also gives an indication of the confidence of the predictions.
- * Note that the bounds generally increase with time (that is to say, the confidence levels decrease),
- since you are forecasting further into the future.
- Eventually if the confidence levels are too low, the forecast stops.
-
-1. (Optional) After the job has processed more data, click the **Forecast** button again to compare the forecast to actual data.
-
- The resulting chart will contain the actual data values, the bounds for the expected values, the anomalies, the forecast data values, and the bounds for the forecast.
- This combination of actual and forecast data gives you an indication of how well the ((ml)) features can extrapolate the future behavior of the data.
diff --git a/docs/en/serverless/aiops/aiops-tune-anomaly-detection-job.mdx b/docs/en/serverless/aiops/aiops-tune-anomaly-detection-job.mdx
deleted file mode 100644
index 26b7bc801a..0000000000
--- a/docs/en/serverless/aiops/aiops-tune-anomaly-detection-job.mdx
+++ /dev/null
@@ -1,177 +0,0 @@
----
-slug: /serverless/observability/aiops-tune-anomaly-detection-job
-title: Tune your anomaly detection job
-description: Tune your job by creating calendars, adding job rules, and defining custom URLs.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-After you run an anomaly detection job and view the results,
-you might find that you need to alter the job configuration or settings.
-
-To further tune your job, you can:
-
-* Create calendars that contain a list of scheduled events for which you do not want to generate anomalies, such as planned system outages or public holidays.
-* Create job rules that instruct anomaly detectors to change their behavior based on domain-specific knowledge that you provide.
-Your job rules can use filter lists, which contain values that you can use to include or exclude events from the ((ml)) analysis.
-* Define custom URLs to make dashboards and other resources readily available when viewing job results.
-
-For more information about tuning your job,
-refer to the how-to guides in the [((ml))](((ml-docs))/anomaly-how-tos.html) documentation.
-Note that the ((ml)) documentation may contain details that are not valid when using a fully-managed Elastic project.
-
-
- You can also create calendars and add URLs when configuring settings during job creation,
- but generally it's easier to start with a simple job and add complexity later.
-
-
-
-
-## Create calendars
-
-Sometimes there are periods when you expect unusual activity to take place,
-such as bank holidays, "Black Friday", or planned system outages.
-If you identify these events in advance, no anomalies are generated during that period.
-The ((ml)) model is not adversely affected, and you do not receive spurious results.
-
-To create a calendar and add scheduled events:
-
-1. In your ((observability)) project, go to **AIOps** → **Anomaly detection**.
-1. Click **Settings**.
-1. Under **Calendars**, click **Create**.
-1. Enter an ID and description for the calendar.
-1. Select the jobs you want to apply the calendar to, or turn on **Apply calendar to all jobs**.
-1. Under **Events**, click **New event** or click **Import events** to import events from an iCalendar (ICS) file:
-
- 
-
- A scheduled event must have a start time, end time, and calendar ID.
- In general, scheduled events are short in duration (typically lasting from a few hours to a day) and occur infrequently.
- If you have regularly occurring events, such as weekly maintenance periods,
- you do not need to create scheduled events for these circumstances;
- they are already handled by the ((ml)) analytics.
- If your ICS file contains recurring events, only the first occurrence is imported.
-
-1. When you're done adding events, save your calendar.
-
-You must identify scheduled events *before* your anomaly detection job analyzes the data for that time period.
-((ml-cap)) results are not updated retroactively.
-Bucket results are generated during scheduled events, but they have an anomaly score of zero.
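-
-Calendars and scheduled events can also be managed through the ((ml)) calendar APIs, which is convenient when outages are planned programmatically. The calendar ID, job ID, and event below are examples only:
-
-```console
-PUT _ml/calendars/planned-outages
-
-PUT _ml/calendars/planned-outages/jobs/cpu-usage-demo
-
-POST _ml/calendars/planned-outages/events
-{
-  "events": [
-    {
-      "description": "Scheduled maintenance window",
-      "start_time": "2024-06-01T22:00:00Z",
-      "end_time": "2024-06-02T02:00:00Z"
-    }
-  ]
-}
-```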
-
-
- If you use long or frequent scheduled events,
- it might take longer for the ((ml)) analytics to learn to model your data,
- and some anomalous behavior might be missed.
-
-
-
-
-## Create job rules and filters
-
-By default, anomaly detection is unsupervised,
-and the ((ml)) models have no awareness of the domain of your data.
-As a result, anomaly detection jobs might identify events that are statistically significant but are uninteresting when you know the larger context.
-
-You can customize anomaly detection by creating custom job rules.
-*Job rules* instruct anomaly detectors to change their behavior based on domain-specific knowledge that you provide.
-When you create a rule, you can specify conditions, scope, and actions.
-When the conditions of a rule are satisfied, its actions are triggered.
-
-
-If you have an anomaly detector that is analyzing CPU usage,
-you might decide you are only interested in anomalies where the CPU usage is greater than a certain threshold.
-You can define a rule with conditions and actions that instruct the detector to refrain from generating ((ml)) results when there are anomalous events related to low CPU usage.
-You might also decide to add a scope for the rule so that it applies only to certain machines.
-The scope is defined by using ((ml)) filters.
-
-
-*Filters* contain a list of values that you can use to include or exclude events from the ((ml)) analysis.
-You can use the same filter in multiple anomaly detection jobs.
-
-
-If you are analyzing web traffic, you might create a filter that contains a list of IP addresses.
-The list could contain IP addresses that you trust to upload data to your website or to send large amounts of data from behind your firewall.
-You can define the rule's scope so that the action triggers only when a specific field in your data matches (or doesn't match) a value in the filter.
-This gives you much greater control over which anomalous events affect the ((ml)) model and appear in the ((ml)) results.
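-
-If you script your jobs, the same concepts appear in the APIs: a filter list is created with the ((ml)) filters API, and a job rule is a `custom_rules` entry on a detector that can reference the filter in its scope. The IDs and field names below are placeholders:
-
-```console
-PUT _ml/filters/trusted-ips
-{
-  "description": "IP addresses trusted to upload large amounts of data",
-  "items": [ "192.0.2.10", "198.51.100.7" ]
-}
-
-POST _ml/anomaly_detectors/web-traffic-demo/_update
-{
-  "detectors": [
-    {
-      "detector_index": 0,
-      "custom_rules": [
-        {
-          "actions": [ "skip_result" ],
-          "scope": {
-            "client_ip": { "filter_id": "trusted-ips", "filter_type": "include" }
-          }
-        }
-      ]
-    }
-  ]
-}
-```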
-
-
-To create a job rule, first create any filter lists you want to use in the rule, then configure the rule:
-
-1. In your ((observability)) project, go to **AIOps** → **Anomaly detection**.
-1. (Optional) Create one or more filter lists:
- 1. Click **Settings**.
- 1. Under **Filter lists**, click **Create**.
- 1. Enter the filter list ID. This is the ID you will select when you want to use the filter list in a job rule.
- 1. Click **Add item** and enter one item per line.
- 1. Click **Add** then save the filter list:
-
- 
-
-1. Open the job results in the **Single Metric Viewer** or **Anomaly Explorer**.
-1. From the **Actions** menu in the **Anomalies** table, select **Configure job rules**.
-
- 
-
-1. Choose which actions to take when the job rule matches the anomaly: **Skip result**, **Skip model update**, or both.
-1. Under **Conditions**, add one or more conditions that must be met for the action to be triggered.
-1. Under **Scope** (if available), add one or more filter lists to limit where the job rule applies.
-1. Save the job rule.
-Note that changes to job rules take effect for new results only.
-To apply these changes to existing results, you must clone and rerun the job.
-
-
-
-## Define custom URLs
-
-You can optionally attach one or more custom URLs to your anomaly detection jobs.
-Links for these URLs will appear in the **Actions** menu of the anomalies table when viewing job results in the **Single Metric Viewer** or **Anomaly Explorer**.
-Custom URLs can point to dashboards, the Discover app, or external websites.
-For example, you can define a custom URL that enables users to drill down to the source data from the results set.
-
-To add a custom URL to the **Actions** menu:
-
-1. In your ((observability)) project, go to **AIOps** → **Anomaly detection**.
-1. From the **Actions** menu in the job list, select **Edit job**.
-1. Select the **Custom URLs** tab, then click **Add custom URL**.
-1. Enter the label to use for the link text.
-1. Choose the type of resource you want to link to:
-
-
- ((kib)) dashboard
- Select the dashboard you want to link to.
-
-
- Discover
- Select the data view to use.
-
-
- Other
- Specify the URL for the external website.
-
-
-1. Click **Test** to test your link.
-1. Click **Add**, then save your changes.
-
-Now when you view job results in **Single Metric Viewer** or **Anomaly Explorer**,
-the **Actions** menu includes the custom link:
-
- 
-
-
-It is also possible to use string substitution in custom URLs.
-For example, you might have a **Raw data** URL defined as:
-
-`discover#/?_g=(time:(from:'$earliest$',mode:absolute,to:'$latest$'))&_a=(index:ff959d40-b880-11e8-a6d9-e546fe2bba5f,query:(language:kuery,query:'customer_full_name.keyword:"$customer_full_name.keyword$"'))`.
-
-The value of the `customer_full_name.keyword` field is passed to the target page when the link is clicked.
-
-For more information about using string substitution,
-refer to the [((ml))](((ml-docs))/ml-configuring-url.html#ml-configuring-url-strings) documentation.
-Note that the ((ml)) documentation may contain details that are not valid when using a fully-managed Elastic project.
-
-
diff --git a/docs/en/serverless/aiops/aiops.mdx b/docs/en/serverless/aiops/aiops.mdx
deleted file mode 100644
index dc278a718a..0000000000
--- a/docs/en/serverless/aiops/aiops.mdx
+++ /dev/null
@@ -1,27 +0,0 @@
----
-slug: /serverless/observability/aiops
-title: AIOps
-description: Automate anomaly detection and accelerate root cause analysis with AIOps.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-The AIOps capabilities available in ((observability)) enable you to consume and process large observability data sets at scale, reducing the time and effort required to detect, understand, investigate, and resolve incidents.
-Built on predictive analytics and ((ml)), our AIOps capabilities require no prior experience with ((ml)).
-DevOps engineers, SREs, and security analysts can get started right away using these AIOps features with little or no advanced configuration:
-
-
-
-
- Detect anomalies by comparing real-time and historical data from different sources to look for unusual, problematic patterns.
-
-
-
- Find and investigate the causes of unusual spikes or drops in log rates.
-
-
-
- Detect distribution changes, trend changes, and other statistically significant change points in a metric of your time series data.
-
-
diff --git a/docs/en/serverless/alerting/aggregation-options.mdx b/docs/en/serverless/alerting/aggregation-options.mdx
deleted file mode 100644
index 7b9542df58..0000000000
--- a/docs/en/serverless/alerting/aggregation-options.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
----
-slug: /serverless/observability/aggregationOptions
-title: Aggregation options
-description: Learn about aggregations available in alerting rules.
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-Aggregations summarize your data to make it easier to analyze.
-In some alerting rules, you can specify aggregations to gather data for the rule.
-
-The following aggregations are available in some rules:
-
-
-
- Average
- Average value of a numeric field.
-
-
- Cardinality
- Approximate number of unique values in a field.
-
-
- Document count
- Number of documents in the selected dataset.
-
-
- Max
- Highest value of a numeric field.
-
-
- Min
- Lowest value of a numeric field.
-
-
- Percentile
- Numeric value which represents the point at which n% of all values in the selected dataset are lower (choices are 95th or 99th).
-
-
- Rate
- Rate at which a specific field changes over time. To learn about how the rate is calculated, refer to .
-
-
- Sum
- Total of a numeric field in the selected dataset.
-
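-Most of these options correspond to standard ((es)) aggregations, so you can preview what a rule will measure by running the equivalent aggregation in **Developer Tools** → **Console**. The index and field names in this sketch are assumptions:
-
-```console
-GET metrics-*/_search
-{
-  "size": 0,
-  "aggs": {
-    "avg_cpu": { "avg": { "field": "system.cpu.total.norm.pct" } },
-    "p95_cpu": { "percentiles": { "field": "system.cpu.total.norm.pct", "percents": [ 95 ] } },
-    "unique_hosts": { "cardinality": { "field": "host.name" } }
-  }
-}
-```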
-
diff --git a/docs/en/serverless/alerting/aiops-generate-anomaly-alerts.mdx b/docs/en/serverless/alerting/aiops-generate-anomaly-alerts.mdx
deleted file mode 100644
index 069dc91cc3..0000000000
--- a/docs/en/serverless/alerting/aiops-generate-anomaly-alerts.mdx
+++ /dev/null
@@ -1,216 +0,0 @@
----
-slug: /serverless/observability/aiops-generate-anomaly-alerts
-title: Create an anomaly detection rule
-description: Get alerts when anomalies match specific conditions.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Connectors from './alerting-connectors.mdx'
-
-import Roles from '../partials/roles.mdx'
-
-
-
-import FeatureBeta from '../partials/feature-beta.mdx'
-
-
-
-Create an anomaly detection rule to check for anomalies in one or more anomaly detection jobs.
-If the conditions of the rule are met, an alert is created, and any actions specified in the rule are triggered.
-For example, you can create a rule to check every fifteen minutes for critical anomalies and then alert you by email when they are detected.
-
-To create an anomaly detection rule:
-
-1. In your ((observability)) project, go to **AIOps** → **Anomaly detection**.
-1. In the list of anomaly detection jobs, find the job you want to check for anomalies.
-Haven't created a job yet? Create one first.
-1. From the **Actions** menu next to the job, select **Create alert rule**.
-1. Specify a name and optional tags for the rule. You can use these tags later to filter alerts.
-1. Verify that the correct job is selected and configure the alert details:
-
- 
-
-1. For the result type:
-
-
-
- **Bucket**
- How unusual the anomaly was within the bucket of time
-
-
- **Record**
- What individual anomalies are present in a time range
-
-
- **Influencer**
- The most unusual entities in a time range
-
-
-
-1. Adjust the **Severity** to match the anomaly score that will trigger the action.
-The anomaly score indicates the significance of a given anomaly compared to previous anomalies.
-The default severity threshold is 75, which means every anomaly with an anomaly score of 75 or higher will trigger the associated action.
-
-1. (Optional) Turn on **Include interim results** to include results that are created by the anomaly detection job _before_ a bucket is finalized. These results might disappear after the bucket is fully processed.
-Include interim results if you want to be notified earlier about a potential anomaly even if it might be a false positive.
-
-1. (Optional) Expand and change **Advanced settings**:
-
-
-
- **Lookback interval**
- The interval used to query previous anomalies during each condition check. Setting the lookback interval lower than the default value might result in missed anomalies.
-
-
- **Number of latest buckets**
- The number of buckets to check to obtain the highest anomaly from all the anomalies that are found during the Lookback interval. An alert is created based on the anomaly with the highest anomaly score from the most anomalous bucket.
-
-
-1. (Optional) Under **Check the rule condition with an interval**, specify an interval, then click **Test** to check the rule condition with the interval specified.
-The button is grayed out if the datafeed is not started.
-To test the rule, start the datafeed.
-1. (Optional) If you want to change how often the condition is evaluated, adjust the **Check every** setting.
-1. (Optional) Set up **Actions**.
-1. **Save** your rule.
-
-
- Anomaly detection rules are defined as part of a job.
- Alerts generated by these rules do not appear on the **Alerts** page.
-
-
-## Add actions
-
-You can extend your rules with actions that interact with third-party systems, write to logs or indices, or send user notifications. You can add an action to a rule at any time. You can create rules without adding actions, and you can also define multiple actions for a single rule.
-
-To add actions to rules, you must first create a connector for that service (for example, an email or external incident management system), which you can then use for different rules, each with their own action frequency.
-
-
-Connectors provide a central place to store connection information for services and integrations with third party systems.
-The following connectors are available when defining actions for alerting rules:
-
-
-
-For more information on creating connectors, refer to Connectors.
-
-
-
-
-After you select a connector, you must set the action frequency. You can choose to create a **Summary of alerts** on each check interval or on a custom interval. For example, you can send email notifications that summarize the new, ongoing, and recovered alerts every twelve hours.
-
-Alternatively, you can set the action frequency to **For each alert** and specify the conditions each alert must meet for the action to run. For example, you can send an email only when alert status changes to critical.
-
-
-
-With the **Run when** menu you can choose if an action runs when the anomaly score matches the condition or when the alert recovers. For example, you can add a corresponding action for each state to ensure you are alerted when the anomaly score is matched and also when it recovers.
-
-
-
-
-
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the Add variable icon and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.anomalyExplorerUrl`
-
- URL to open in the Anomaly Explorer.
-
- `context.isInterim`
-
- Indicate if top hits contain interim results.
-
- `context.jobIds`
-
- List of job IDs that triggered the alert.
-
- `context.message`
-
- Alert info message.
-
- `context.score`
-
- Anomaly score at the time of the notification action.
-
- `context.timestamp`
-
- The bucket timestamp of the anomaly.
-
- `context.timestampIso8601`
-
- The bucket timestamp of the anomaly in ISO8601 format.
-
- `context.topInfluencers`
-
- The list of top influencers. Properties include:
-
- `influencer_field_name`
-
- The field name of the influencer.
-
- `influencer_field_value`
-
- The entity that influenced, contributed to, or was to blame for the anomaly.
-
- `score`
-
- The influencer score. A normalized score between 0-100 which shows the influencer’s overall contribution to the anomalies.
-
-
-
- `context.topRecords`
-
- The list of top records. Properties include:
-
- `actual`
-
- The actual value for the bucket.
-
- `by_field_value`
-
- The value of the by field.
-
- `field_name`
-
- Certain functions require a field to operate on, for example, `sum()`. For those functions, this value is the name of the field to be analyzed.
-
- `function`
-
- The function in which the anomaly occurs, as specified in the detector configuration. For example, `max`.
-
- `over_field_name`
-
- The field used to split the data.
-
- `partition_field_value`
-
-  The value of the field used to segment the analysis.
-
- `score`
-
- A normalized score between 0-100, which is based on the probability of the anomalousness of this record.
-
- `typical`
-
- The typical value for the bucket, according to analytical modeling.
-
-
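-
-Because `context.topInfluencers` and `context.topRecords` are lists, you can iterate over them with the Mustache array syntax in your notification message. A hypothetical snippet:
-
-```txt
-{{#context.topRecords}}
-  {{function}}({{field_name}}) was {{actual}} (typical: {{typical}}), score {{score}}
-{{/context.topRecords}}
-```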
-
-
-
-
-
-## Edit an anomaly detection rule
-
-To edit an anomaly detection rule:
-
-1. In your ((observability)) project, go to **AIOps** → **Anomaly detection**.
-1. Expand the job that uses the rule you want to edit.
-1. On the **Job settings** tab, under **Alert rules**, click the rule to edit it.
diff --git a/docs/en/serverless/alerting/alerting-connectors.mdx b/docs/en/serverless/alerting/alerting-connectors.mdx
deleted file mode 100644
index d6f1e4db02..0000000000
--- a/docs/en/serverless/alerting/alerting-connectors.mdx
+++ /dev/null
@@ -1,25 +0,0 @@
-* [Cases](((kibana-ref))/cases-action-type.html)
-* [D3 Security](((kibana-ref))/d3security-action-type.html)
-* [Email](((kibana-ref))/email-action-type.html)
-* [((ibm-r))](((kibana-ref))/resilient-action-type.html)
-* [Index](((kibana-ref))/index-action-type.html)
-* [Jira](((kibana-ref))/jira-action-type.html)
-* [Microsoft Teams](((kibana-ref))/teams-action-type.html)
-* [Observability AI Assistant](((kibana-ref))/obs-ai-assistant-action-type.html)
-* [((opsgenie))](((kibana-ref))/opsgenie-action-type.html)
-* [PagerDuty](((kibana-ref))/pagerduty-action-type.html)
-* [Server log](((kibana-ref))/server-log-action-type.html)
-* [((sn-itom))](((kibana-ref))/servicenow-itom-action-type.html)
-* [((sn-itsm))](((kibana-ref))/servicenow-action-type.html)
-* [((sn-sir))](((kibana-ref))/servicenow-sir-action-type.html)
-* [Slack](((kibana-ref))/slack-action-type.html)
-* [((swimlane))](((kibana-ref))/swimlane-action-type.html)
-* [Torq](((kibana-ref))/torq-action-type.html)
-* [((webhook))](((kibana-ref))/webhook-action-type.html)
-* [xMatters](((kibana-ref))/xmatters-action-type.html)
-
-
- Some connector types are paid commercial features, while others are free.
- For a comparison of the Elastic subscription levels, go to
- [the subscription page](https://www.elastic.co/subscriptions).
-
diff --git a/docs/en/serverless/alerting/alerting.mdx b/docs/en/serverless/alerting/alerting.mdx
deleted file mode 100644
index e867edffe0..0000000000
--- a/docs/en/serverless/alerting/alerting.mdx
+++ /dev/null
@@ -1,31 +0,0 @@
----
-slug: /serverless/observability/alerting
-title: Alerting
-description: Get alerts based on rules you define for detecting complex conditions in your applications and services.
-tags: [ 'serverless', 'observability', 'overview', 'alerting' ]
----
-
-
-
-Alerting enables you to define _rules_, which detect complex conditions within different apps and trigger actions when those conditions are met. Alerting provides a set of built-in connectors and rules for you to use. This page describes all of these elements and how they operate together.
-
-## Important concepts
-
-Alerting works by running checks on a schedule to detect conditions defined by a rule. You can define rules at different levels (service, environment, transaction) or use custom KQL queries. When a condition is met, the rule tracks it as an _alert_ and responds by triggering one or more _actions_.
-
-Actions typically involve interaction with Elastic services or third-party integrations. Connectors enable actions to talk to these services and integrations.
-
-Once you've defined your rules, you can monitor any alerts triggered by these rules in real time, with detailed dashboards that help you quickly identify and troubleshoot any issues that may arise. You can also extend your alerts with notifications via services or third-party incident management systems.
-
-## Alerts page
-
-On the **Alerts** page, the Alerts table provides a snapshot of alerts occurring within the specified time frame. The table includes the alert status, when it was last updated, the reason for the alert, and more.
-
-
-
-You can filter this table by alert status or time period, customize the visible columns, and search for specific alerts (for example, alerts related to a specific service or environment) using KQL. Select **View alert detail** from the **More actions** menu, or click the Reason link for any alert to view it in detail, and you can then either **View in app** or **View rule details**.
-
-## Next steps
-
-*
-*
diff --git a/docs/en/serverless/alerting/create-anomaly-alert-rule.mdx b/docs/en/serverless/alerting/create-anomaly-alert-rule.mdx
deleted file mode 100644
index f55fa7de7c..0000000000
--- a/docs/en/serverless/alerting/create-anomaly-alert-rule.mdx
+++ /dev/null
@@ -1,110 +0,0 @@
----
-slug: /serverless/observability/create-anomaly-alert-rule
-title: Create an APM anomaly rule
-description: Get alerts when either the latency, throughput, or failed transaction rate of a service is abnormal.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-
-
-import Connectors from './alerting-connectors.mdx'
-
-import Roles from '../partials/roles.mdx'
-
-
-
-You can create an anomaly rule to alert you when either the latency, throughput, or failed transaction rate of a service is abnormal. Anomaly rules can be set at different levels: environment, service, and/or transaction type. Add actions to raise alerts via services or third-party integrations (for example, send an email or create a Jira issue).
-
-
-
-
-These steps show how to use the **Alerts** UI.
-You can also create an anomaly rule directly from any page within **Applications**. Click the **Alerts and rules** button, and select **Create anomaly rule**. When you create a rule this way, the **Name** and **Tags** fields will be prepopulated but you can still change these.
-
-
-To create your anomaly rule:
-
-1. In your ((observability)) project, go to **Alerts**.
-1. Select **Manage Rules** from the **Alerts** page, and select **Create rule**.
-1. Enter a **Name** for your rule, and any optional **Tags** for more granular reporting (leave blank if unsure).
-1. Select the **APM Anomaly** rule type.
-1. Select the appropriate **Service**, **Type**, and **Environment** (or leave **ALL** to include all options).
-1. Select the desired severity (critical, major, minor, warning) from **Has anomaly with severity**.
-1. Define the interval to check the rule (for example, check every 1 minute).
-1. (Optional) Set up **Actions**.
-1. **Save** your rule.
-
-## Add actions
-
-You can extend your rules with actions that interact with third-party systems, write to logs or indices, or send user notifications. You can add an action to a rule at any time. You can create rules without adding actions, and you can also define multiple actions for a single rule.
-
-To add actions to rules, you must first create a connector for that service (for example, an email or external incident management system), which you can then use for different rules, each with their own action frequency.
-
-
-Connectors provide a central place to store connection information for services and integrations with third party systems.
-The following connectors are available when defining actions for alerting rules:
-
-
-
-For more information on creating connectors, refer to Connectors.
-
-
-
-
-After you select a connector, you must set the action frequency. You can choose to create a **Summary of alerts** on each check interval or on a custom interval. For example, you can send email notifications that summarize the new, ongoing, and recovered alerts every twelve hours.
-
-Alternatively, you can set the action frequency to **For each alert** and specify the conditions each alert must meet for the action to run. For example, you can send an email only when the alert status changes to critical.
-
-
-
-With the **Run when** menu you can choose if an action runs when the threshold for an alert is reached, or when the alert is recovered. For example, you can add a corresponding action for each state to ensure you are alerted when the rule is triggered and also when it recovers.
-
-
-
-
-
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the Add variable icon and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.alertDetailsUrl`
-
- Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
-
- `context.environment`
-
-  The environment the alert is created for.
-
- `context.reason`
-
- A concise description of the reason for the alert.
-
- `context.serviceName`
-
- The service the alert is created for.
-
- `context.threshold`
-
- Any trigger value above this value will cause the alert to fire.
-
- `context.transactionType`
-
- The transaction type the alert is created for.
-
- `context.triggerValue`
-
- The value that breached the threshold and triggered the alert.
-
- `context.viewInAppUrl`
-
- Link to the alert source.
-
-
-
-
diff --git a/docs/en/serverless/alerting/create-custom-threshold-alert-rule.mdx b/docs/en/serverless/alerting/create-custom-threshold-alert-rule.mdx
deleted file mode 100644
index fdd3d70a2b..0000000000
--- a/docs/en/serverless/alerting/create-custom-threshold-alert-rule.mdx
+++ /dev/null
@@ -1,235 +0,0 @@
----
-slug: /serverless/observability/create-custom-threshold-alert-rule
-title: Create a custom threshold rule
-description: Get alerts when an Observability data type reaches a given value.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-
-
-import Connectors from './alerting-connectors.mdx'
-
-import Roles from '../partials/roles.mdx'
-
-
-
-Create a custom threshold rule to trigger an alert when an ((observability)) data type reaches or exceeds a given value.
-
-1. To access this page, from your project go to **Alerts**.
-1. Click **Manage Rules** -> **Create rule**.
-1. Under **Select rule type**, select **Custom threshold**.
-
-
-
-
-
-## Define rule data
-
-Specify the following settings to define the data the rule applies to:
-
-* **Select a data view:** Click the data view field to search for and select a data view that points to the indices or data streams that you're creating a rule for. You can also create a _new_ data view by clicking **Create a data view**. Refer to [Create a data view](((kibana-ref))/data-views.html) for more on creating data views.
-* **Define query filter (optional):** Use a query filter to narrow down the data that the rule applies to. For example, set a query filter to a specific host name using the query filter `host.name:host-1` to only apply the rule to that host.
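-
-For example, a filter that narrows the rule to a single host in a particular environment might look like the following. The field names are illustrative and depend on the data view you selected:
-
-```txt
-host.name: "host-1" and service.environment: "production"
-```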
-
-
-
-## Set rule conditions
-
-Set the conditions for the rule to detect using aggregations, an equation, and a threshold.
-
-
-
-### Set aggregations
-
-Aggregations summarize your data to make it easier to analyze.
-Set any of the following aggregation types to gather data to create your rule:
-`Average`, `Max`, `Min`, `Cardinality`, `Count`, `Sum`, `Percentile`, or `Rate`.
-For more information about these options, refer to .
-
-For example, to gather the total number of log documents with a log level of `warn`:
-
-1. Set the **Aggregation** to **Count**, and set the **KQL Filter** to `log.level: "warn"`.
-1. Set the threshold to `IS ABOVE 100` to trigger an alert when the number of log documents with a log level of `warn` reaches 100.
-
-
-
-### Set the equation and threshold
-
-Set an equation using your aggregations. Based on the results of your equation, set a threshold to define when to trigger an alert. The equations use basic math or boolean logic. Refer to the following examples for possible use cases.
-
-
-
-### Basic math equation
-
-Add, subtract, multiply, or divide your aggregations to define conditions for alerting.
-
-**Example:**
-Set an equation and threshold to trigger an alert when a metric is above a threshold. For this example, we'll use average CPU usage—the percentage of CPU time spent in states other than `idle` or `IOWait` normalized by the number of CPU cores—and trigger an alert when CPU usage is above a specific percentage. To do this, set the following aggregations, equation, and threshold:
-
-1. Set the following aggregations:
- * **Aggregation A:** Average `system.cpu.user.pct`
- * **Aggregation B:** Average `system.cpu.system.pct`
- * **Aggregation C:** Max `system.cpu.cores`.
-1. Set the equation to `(A + B) / C * 100`
-1. Set the threshold to `IS ABOVE 95` to alert when CPU usage is above 95%.
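-
-To make the arithmetic concrete, here is one hypothetical evaluation (the metric values are made up for illustration and assume a two-core host):
-
-```txt
-A = average(system.cpu.user.pct)   = 1.50   (summed across cores)
-B = average(system.cpu.system.pct) = 0.45
-C = max(system.cpu.cores)          = 2
-
-(A + B) / C * 100 = (1.50 + 0.45) / 2 * 100 = 97.5
-97.5 IS ABOVE 95, so the alert triggers.
-```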
-
-
-
-### Boolean logic
-
-Use conditional operators and comparison operators with your aggregations to define conditions for alerting.
-
-**Example:**
-Set an equation and threshold to trigger an alert when the number of stateful pods differs from the number of desired pods. For this example, we'll use `kubernetes.statefulset.ready` and `kubernetes.statefulset.desired`, and trigger an alert when their values differ. To do this, set the following aggregations, equation, and threshold:
-
-1. Set the following aggregations:
- * **Aggregation A:** Sum `kubernetes.statefulset.ready`
- * **Aggregation B:** Sum `kubernetes.statefulset.desired`
-1. Set the equation to `A == B ? 1 : 0`. If A and B are equal, the result is `1`. If they're not equal, the result is `0`.
-1. Set the threshold to `IS BELOW 1` to trigger an alert when the result is `0` and the field values do not match.
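-
-For example, with hypothetical values the evaluation might look like this:
-
-```txt
-A = sum(kubernetes.statefulset.ready)   = 3
-B = sum(kubernetes.statefulset.desired) = 4
-
-A == B ? 1 : 0  ->  3 == 4 is false  ->  0
-0 IS BELOW 1, so the alert triggers.
-```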
-
-
-
-## Preview chart
-
-The preview chart provides a visualization of how many entries match your configuration.
-The shaded area shows the threshold you've set.
-
-
-
-## Group alerts by (optional)
-
-Set one or more **group alerts by** fields for custom threshold rules to perform a composite aggregation against the selected fields.
-When any of these groups match the selected rule conditions, an alert is triggered _per group_.
-
-When you select multiple groupings, the group name is separated by commas.
-
-For example, if you group alerts by the `host.name` and `host.architecture` fields, and there are two hosts (`Host A` and `Host B`) and two architectures (`Architecture A` and `Architecture B`), the composite aggregation forms multiple groups.
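-
-Concretely, that grouping produces one evaluation per combination of values:
-
-```txt
-Host A, Architecture A
-Host A, Architecture B
-Host B, Architecture A
-Host B, Architecture B
-```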
-
-If the `Host A, Architecture A` group matches the rule conditions, but the `Host B, Architecture B` group doesn't, one alert is triggered for `Host A, Architecture A`.
-
-If you select one field—for example, `host.name`—and `Host A` matches the conditions but `Host B` doesn't, one alert is triggered for `Host A`.
-If both groups match the conditions, alerts are triggered for both groups.
-
-## Trigger "no data" alerts (optional)
-
-Optionally configure the rule to trigger an alert when:
-
-* there is no data, or
-* a group that was previously detected stops reporting data.
-
-To do this, select **Alert me if there's no data**.
-
-The behavior of the alert depends on whether any **group alerts by** fields are specified:
-
-* **No "group alerts by" fields**: (Default) A "no data" alert is triggered if the condition fails to report data over the expected time period, or the rule fails to query ((es)). This alert means that something is wrong and there is not enough data to evaluate the related threshold.
-
-* **Has "group alerts by" fields**: If a previously detected group stops reporting data, a "no data" alert is triggered for the missing group.
-
- For example, consider a scenario where `host.name` is the **group alerts by** field for CPU usage above 80%. The first time the rule runs, two hosts report data: `host-1` and `host-2`. The second time the rule runs, `host-1` does not report any data, so a "no data" alert is triggered for `host-1`. When the rule runs again, if `host-1` starts reporting data again, there are a couple possible scenarios:
-
- * If `host-1` reports data for CPU usage and it is above the threshold of 80%, no new alert is triggered.
- Instead the existing alert changes from "no data" to a triggered alert that breaches the threshold.
- Keep in mind that no notifications are sent in this case because there is still an ongoing issue.
- * If `host-1` reports CPU usage below the threshold of 80%, the alert status is changed to recovered.
-
-
- If a host (for example, `host-1`) is decommissioned, you probably no longer want to see "no data" alerts about it.
- To mark an alert as untracked:
- Go to the Alerts table, click the icon to expand the "More actions" menu, and click *Mark as untracked*.
-
-
-## Add actions
-
-You can extend your rules with actions that interact with third-party systems, write to logs or indices, or send user notifications. You can add an action to a rule at any time. You can create rules without adding actions, and you can also define multiple actions for a single rule.
-
-To add actions to rules, you must first create a connector for that service (for example, an email or external incident management system), which you can then use for different rules, each with their own action frequency.
-
-
-Connectors provide a central place to store connection information for services and integrations with third party systems.
-The following connectors are available when defining actions for alerting rules:
-
-
-
-For more information on creating connectors, refer to Connectors.
-
-
-
-
-After you select a connector, you must set the action frequency.
-You can choose to create a summary of alerts on each check interval or on a custom interval.
-Alternatively, you can set the action frequency such that you choose how often the action runs (for example,
-at each check interval, only when the alert status changes, or at a custom action interval).
-In this case, you must also select the specific threshold condition that affects when actions run: `Alert`, `No Data`, or `Recovered`.
-
-
-
-You can also further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame:
-
-- **If alert matches query**: Enter a KQL query that defines field-value pairs or query conditions that must be met for notifications to send. The query only searches alert documents in the indices specified for the rule.
-- **If alert is generated during timeframe**: Set timeframe details. Notifications are only sent if alerts are generated within the timeframe you define.
-
-
-
-
-
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the Add variable icon and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.alertDetailsUrl`
-
- Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
-
- `context.cloud`
-
- The cloud object defined by ECS if available in the source.
-
- `context.container`
-
- The container object defined by ECS if available in the source.
-
- `context.group`
-
- The object containing groups that are reporting data.
-
- `context.host`
-
- The host object defined by ECS if available in the source.
-
- `context.labels`
-
- List of labels associated with the entity where this alert triggered.
-
- `context.orchestrator`
-
- The orchestrator object defined by ECS if available in the source.
-
- `context.reason`
-
- A concise description of the reason for the alert.
-
- `context.tags`
-
- List of tags associated with the entity where this alert triggered.
-
- `context.timestamp`
-
- A timestamp of when the alert was detected.
-
- `context.value`
-
- List of the condition values.
-
- `context.viewInAppUrl`
-
- Link to the alert source.
-
-
-
-
diff --git a/docs/en/serverless/alerting/create-elasticsearch-query-alert-rule.mdx b/docs/en/serverless/alerting/create-elasticsearch-query-alert-rule.mdx
deleted file mode 100644
index fd7f84b5d5..0000000000
--- a/docs/en/serverless/alerting/create-elasticsearch-query-alert-rule.mdx
+++ /dev/null
@@ -1,265 +0,0 @@
----
-slug: /serverless/observability/create-elasticsearch-query-rule
-title: Create an Elasticsearch query rule
-description: Get alerts when matches are found during the latest query run.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-
-
-import Connectors from './alerting-connectors.mdx'
-
-import Roles from '../partials/roles.mdx'
-
-
-
-
-
-The ((es)) query rule type runs a user-configured query, compares the number of
-matches to a configured threshold, and schedules actions to run when the
-threshold condition is met.
-
-1. To access this page, from your project go to **Alerts**.
-1. Click **Manage Rules** → **Create rule**.
-1. Under **Select rule type**, select **((es)) query**.
-
-An ((es)) query rule can be defined using ((es)) Query Domain Specific Language (DSL), ((es)) Query Language (ES|QL), ((kib)) Query Language (KQL), or Lucene.
-
-## Define the conditions
-
-When you create an ((es)) query rule, your choice of query type affects the information you must provide.
-For example:
-
-
-{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-1. Define your query
-
- If you use [query DSL](((ref))/query-dsl.html), you must select an index and time field then provide your query.
- Only the `query`, `fields`, `_source` and `runtime_mappings` fields are used, other DSL fields are not considered.
- For example:
-
- ```sh
- {
- "query":{
- "match_all" : {}
- }
- }
- ```
-
- If you use [KQL](((kibana-ref))/kuery-query.html) or [Lucene](((kibana-ref))/lucene-query.html), you must specify a data view then define a text-based query.
- For example, `http.request.referrer: "https://example.com"`.
-
- If you use [ES|QL](((ref))/esql.html), you must provide a source command followed by an optional series of processing commands, separated by pipe characters (|).
- For example:
-
- ```sh
- FROM kibana_sample_data_logs
- | STATS total_bytes = SUM(bytes) BY host
- | WHERE total_bytes > 200000
- | SORT total_bytes DESC
- | LIMIT 10
- ```
-
-1. If you use query DSL, KQL, or Lucene, set the group and threshold (a summary example follows these steps).
-
- When
- : Specify how to calculate the value that is compared to the threshold. The value is calculated by aggregating a numeric field within the time window. The aggregation options are: `count`, `average`, `sum`, `min`, and `max`. When using `count` the document count is used and an aggregation field is not necessary.
-
- Over or Grouped Over
- : Specify whether the aggregation is applied over all documents or split into groups using up to four grouping fields.
- If you choose to use grouping, it's a [terms](((ref))/search-aggregations-bucket-terms-aggregation.html) or [multi terms aggregation](((ref))/search-aggregations-bucket-multi-terms-aggregation.html); an alert will be created for each unique set of values when it meets the condition.
- To limit the number of alerts on high cardinality fields, you must specify the number of groups to check against the threshold.
- Only the top groups are checked.
-
- Threshold
- : Defines a threshold value and a comparison operator (`is above`,
- `is above or equals`, `is below`, `is below or equals`, or `is between`). The value
- calculated by the aggregation is compared to this threshold.
-
-1. Set the time window, which defines how far back to search for documents.
-
-1. If you use query DSL, KQL, or Lucene, set the number of documents to send to the configured actions when the threshold condition is met.
-
-1. If you use query DSL, KQL, or Lucene, choose whether to avoid alert duplication by excluding matches from the previous run.
- This option is not available when you use a grouping field.
-
-1. Set the check interval, which defines how often to evaluate the rule conditions.
- Generally this value should be set to a value that is smaller than the time window, to avoid gaps in
- detection.
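-
-For example, one possible set of group and threshold settings for a KQL-based rule could be summarized as follows. The field names and values are illustrative:
-
-```txt
-WHEN:          AVERAGE of system.memory.actual.free
-GROUPED OVER:  top 3 values of host.name
-THRESHOLD:     IS BELOW 1073741824   (1 GiB)
-TIME WINDOW:   last 5 minutes
-CHECK EVERY:   1 minute
-```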
-
-## Test your query
-
-Use the **Test query** feature to verify that your query is valid.
-
-If you use query DSL, KQL, or Lucene, the query runs against the selected indices using the configured time window.
-The number of documents that match the query is displayed.
-For example:
-
-
-{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
- If you use an ES|QL query, a table is displayed. For example:
-
-
-
-If the query is not valid, an error occurs.
-
-## Add actions
-
-{/* TODO: Decide whether to use boiler plate text, or the text from the source docs for this rule. */}
-
-You can optionally send notifications when the rule conditions are met and when they are no longer met.
-In particular, this rule type supports:
-
-* alert summaries
-* actions that run when the query is matched
-* recovery actions that run when the rule conditions are no longer met
-
-For each action, you must choose a connector, which provides connection information for a service or third party integration.
-
-
-Connectors provide a central place to store connection information for services and integrations with third party systems.
-The following connectors are available when defining actions for alerting rules:
-
-
-
-For more information on creating connectors, refer to Connectors.
-
-
-
-
-After you select a connector, you must set the action frequency. You can choose to create a **Summary of alerts** on each check interval or on a custom interval. For example, you can send email notifications that summarize the new, ongoing, and recovered alerts at a custom interval:
-
-
-{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-Alternatively, you can set the action frequency to **For each alert** and specify the conditions each alert must meet for the action to run.
-
-With the **Run when** menu you can choose how often the action runs (at each check interval, only when the alert status changes, or at a custom action interval).
-You must also choose an action group, which indicates whether the action runs when the query is matched or when the alert is recovered.
-Each connector supports a specific set of actions for each action group.
-For example:
-
-
-{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-You can further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame.
-
-
-
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the Add variable icon and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.conditions`
-
- A string that describes the threshold condition. Example:
- `count greater than 4`.
-
- `context.date`
-
- The date, in ISO format, that the rule met the condition.
- Example: `2022-02-03T20:29:27.732Z`.
-
- `context.hits`
-
- The most recent documents that matched the query. Using the
- [Mustache](https://mustache.github.io/) template array syntax, you can iterate
- over these hits to get values from the ((es)) documents into your actions.
-
- For example, the message in an email connector action might contain:
-
- ```txt
- Elasticsearch query rule '{{rule.name}}' is active:
-
- {{#context.hits}}
- Document with {{_id}} and hostname {{_source.host.name}} has
- {{_source.system.memory.actual.free}} bytes of memory free
- {{/context.hits}}
- ```
-
- The documents returned by `context.hits` include the [`_source`](((ref))/mapping-source-field.html) field.
- If the ((es)) query search API's [`fields`](((ref))/search-fields.html#search-fields-param) parameter is used, documents will also return the `fields` field,
- which can be used to access any runtime fields defined by the [`runtime_mappings`](((ref))/runtime-search-request.html) parameter.
- For example:
-
- {/* NOTCONSOLE */}
- ```txt
- {{#context.hits}}
- timestamp: {{_source.@timestamp}}
- day of the week: {{fields.day_of_week}} [^1]
- {{/context.hits}}
- ```
- [^1]: The `fields` parameter here is used to access the `day_of_week` runtime field.
-
- As the [`fields`](((ref))/search-fields.html#search-fields-response) response always returns an array of values for each field,
- the [Mustache](https://mustache.github.io/) template array syntax is used to iterate over these values in your actions.
- For example:
-
- ```txt
- {{#context.hits}}
- Labels:
- {{#fields.labels}}
- - {{.}}
- {{/fields.labels}}
- {{/context.hits}}
- ```
- {/* NOTCONSOLE */}
-
- `context.link`
-
- Link to Discover and show the records that triggered the alert.
-
- `context.message`
-
- A message for the alert. Example:
- `rule 'my es-query' is active:`
- `- Value: 2`
- `- Conditions Met: Number of matching documents is greater than 1 over 5m`
- `- Timestamp: 2022-02-03T20:29:27.732Z`
-
- `context.title`
-
- A title for the alert. Example:
- `rule term match alert query matched`.
-
- `context.value`
-
- The value that met the threshold condition.
-
-
-
-
-
-
-
-## Handling multiple matches of the same document
-
-By default, **Exclude matches from previous run** is turned on and the rule checks
-for duplication of document matches across multiple runs. If you configure the
-rule with a schedule interval smaller than the time window and a document
-matches a query in multiple runs, it is alerted on only once.
-
-The rule uses the timestamp of the matches to avoid alerting on the same match
-multiple times. The timestamp of the latest match is used for evaluating the
-rule conditions when the rule runs. Only matches between the latest timestamp
-from the previous run and the current run are considered.
-
-Suppose you have a rule configured to run every minute. The rule uses a time
-window of 1 hour and checks if there are more than 99 matches for the query. The
-((es)) query rule type does the following:
-
-{/* [cols="3*<"] */}
-| | | |
-|---|---|---|
-| `Run 1 (0:00)` | Rule finds 113 matches in the last hour: `113 > 99` | Rule is active and user is alerted. |
-| `Run 2 (0:01)` | Rule finds 127 matches in the last hour. 105 of the matches are duplicates that were already alerted on previously, so you actually have 22 matches: `22 !> 99` | No alert. |
-| `Run 3 (0:02)` | Rule finds 159 matches in the last hour. 88 of the matches are duplicates that were already alerted on previously, so you actually have 71 matches: `71 !> 99` | No alert. |
-| `Run 4 (0:03)` | Rule finds 190 matches in the last hour. 71 of them are duplicates that were already alerted on previously, so you actually have 119 matches: `119 > 99` | Rule is active and user is alerted. |
diff --git a/docs/en/serverless/alerting/create-error-count-threshold-alert-rule.mdx b/docs/en/serverless/alerting/create-error-count-threshold-alert-rule.mdx
deleted file mode 100644
index 4d95d27d9b..0000000000
--- a/docs/en/serverless/alerting/create-error-count-threshold-alert-rule.mdx
+++ /dev/null
@@ -1,161 +0,0 @@
----
-slug: /serverless/observability/create-error-count-threshold-alert-rule
-title: Create an error count threshold rule
-description: Get alerts when the number of errors in a service exceeds a defined threshold.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-
-
-import Connectors from './alerting-connectors.mdx'
-
-import Roles from '../partials/roles.mdx'
-
-
-
-Create an error count threshold rule to alert you when the number of errors in a service exceeds a defined threshold. Threshold rules can be set at different levels: environment, service, transaction type, and/or transaction name.
-
-
-
-
-These steps show how to use the **Alerts** UI.
-You can also create an error count threshold rule directly from any page within **Applications**. Click the **Alerts and rules** button, and select **Create error count rule**. When you create a rule this way, the **Name** and **Tags** fields will be prepopulated but you can still change these.
-
-
-To create your error count threshold rule:
-
-1. In your ((observability)) project, go to **Alerts**.
-1. Select **Manage Rules** from the **Alerts** page, and select **Create rule**.
-1. Enter a **Name** for your rule, and any optional **Tags** for more granular reporting (leave blank if unsure).
-1. Select the **Error count threshold** rule type from the APM use case.
-1. Select the appropriate **Service**, **Environment**, and **Error Grouping Key** (or leave **ALL** to include all options). Alternatively, you can select **Use KQL Filter** and enter a KQL expression to limit the scope of your rule.
-1. Enter the error threshold in **Is Above** (defaults to 25 errors).
-1. Define the period to be assessed in **For the last** (defaults to last 5 minutes).
-1. Choose how to **Group alerts by**. Every unique value will create an alert.
-1. Define the interval to check the rule (for example, check every 1 minute).
-1. (Optional) Set up **Actions**.
-1. **Save** your rule.
-
-## Add actions
-
-You can extend your rules with actions that interact with third-party systems, write to logs or indices, or send user notifications. You can add an action to a rule at any time. You can create rules without adding actions, and you can also define multiple actions for a single rule.
-
-To add actions to rules, you must first create a connector for that service (for example, an email or external incident management system), which you can then use for different rules, each with their own action frequency.
-
-
-Connectors provide a central place to store connection information for services and integrations with third party systems.
-The following connectors are available when defining actions for alerting rules:
-
-
-
-For more information on creating connectors, refer to Connectors.
-
-
-
-
-After you select a connector, you must set the action frequency. You can choose to create a **Summary of alerts** on each check interval or on a custom interval. For example, you can send email notifications that summarize the new, ongoing, and recovered alerts every twelve hours.
-
-Alternatively, you can set the action frequency to **For each alert** and specify the conditions each alert must meet for the action to run. For example, you can send an email only when the alert status changes to critical.
-
-
-
-With the **Run when** menu you can choose if an action runs when the threshold for an alert is reached, or when the alert is recovered. For example, you can add a corresponding action for each state to ensure you are alerted when the rule is triggered and also when it recovers.
-
-
-
-
-
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the Add variable icon and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.alertDetailsUrl`
-
- Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
-
- `context.environment`
-
-  The environment the alert is created for.
-
- `context.errorGroupingKey`
-
- The error grouping key the alert is created for.
-
- `context.errorGroupingName`
-
- The error grouping name the alert is created for.
-
- `context.interval`
-
- The length and unit of time period where the alert conditions were met.
-
- `context.reason`
-
- A concise description of the reason for the alert.
-
- `context.serviceName`
-
- The service the alert is created for.
-
- `context.threshold`
-
- Any trigger value above this value will cause the alert to fire.
-
- `context.transactionName`
-
- The transaction name the alert is created for.
-
- `context.triggerValue`
-
- The value that breached the threshold and triggered the alert.
-
- `context.viewInAppUrl`
-
- Link to the alert source.
-
-
-
-
-
-
-## Example
-
-The error count threshold alert triggers when the number of errors in a service exceeds a defined threshold. Because some errors are more important than others, this guide will focus on a specific error group ID.
-
-Before continuing, identify the service name, environment name, and error group ID that you’d like to create an error count threshold rule for.
-{/* The easiest way to find an error group ID is to select the service that you’re interested in and navigating to the Errors tab. // is there a Serverless equivalent? */}
-
-This guide will create an alert for an error group ID based on the following criteria:
-
-* Service: `{your_service.name}`
-* Environment: `{your_service.environment}`
-* Error Grouping Key: `{your_error.ID}`
-* Error count is above 25 errors for the last five minutes
-* Group alerts by `service.name` and `service.environment`
-* Check every 1 minute
-* Send the alert via email to the site reliability team
-
-From any page in **Applications**, select **Alerts and rules** → **Create threshold rule** → **Error count rule**. Change the name of the alert (if you wish), but do not edit the tags.
-
-Based on the criteria above, define the following rule details:
-
-* **Service**: `{your_service.name}`
-* **Environment**: `{your_service.environment}`
-* **Error Grouping Key**: `{your_error.ID}`
-* **Is above:** `25 errors`
-* **For the last:** `5 minutes`
-* **Group alerts by:** `service.name` `service.environment`
-* **Check every:** `1 minute`
-
-Next, select the **Email** connector and click **Create a connector**. Fill out the required details: sender, host, port, etc., and select **Save**.
-
-A default message is provided as a starting point for your alert. You can use the Mustache template syntax (`{{variable}}`) to pass additional alert values at the time a condition is detected to an action. A list of available variables can be accessed by clicking the Add variable icon .
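-
-For example, an email body for this rule might look like the following sketch, which uses only variables documented earlier on this page:
-
-```txt
-Error count threshold breached for {{context.serviceName}} ({{context.environment}}).
-
-{{context.reason}}
-Errors over the last {{context.interval}}: {{context.triggerValue}} (threshold: {{context.threshold}})
-
-Alert details: {{context.alertDetailsUrl}}
-```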
-
-Select **Save**. The alert has been created and is now active!
-
diff --git a/docs/en/serverless/alerting/create-failed-transaction-rate-threshold-alert-rule.mdx b/docs/en/serverless/alerting/create-failed-transaction-rate-threshold-alert-rule.mdx
deleted file mode 100644
index a4f8c2a405..0000000000
--- a/docs/en/serverless/alerting/create-failed-transaction-rate-threshold-alert-rule.mdx
+++ /dev/null
@@ -1,156 +0,0 @@
----
-slug: /serverless/observability/create-failed-transaction-rate-threshold-alert-rule
-title: Create a failed transaction rate threshold rule
-description: Get alerts when the rate of transaction errors in a service exceeds a defined threshold.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-
-
-import Connectors from './alerting-connectors.mdx'
-
-import Roles from '../partials/roles.mdx'
-
-
-
-You can create a failed transaction rate threshold rule to alert you when the rate of transaction errors in a service exceeds a defined threshold. Threshold rules can be set at different levels: environment, service, transaction type, and/or transaction name. Add actions to raise alerts via services or third-party integrations, for example, email, Slack, or Jira.
-
-
-
-
-These steps show how to use the **Alerts** UI.
-You can also create a failed transaction rate threshold rule directly from any page within **Applications**. Click the **Alerts and rules** button, and select **Create threshold rule** and then **Failed transaction rate**. When you create a rule this way, the **Name** and **Tags** fields will be prepopulated but you can still change these.
-
-
-To create your failed transaction rate threshold rule:
-
-1. In your ((observability)) project, go to **Alerts**.
-1. Select **Manage Rules** from the **Alerts** page, and select **Create rule**.
-1. Enter a **Name** for your rule, and any optional **Tags** for more granular reporting (leave blank if unsure).
-1. Select the **Failed transaction rate threshold** rule type from the APM use case.
-1. Select the appropriate **Service**, **Type**, **Environment** and **Name** (or leave **ALL** to include all options). Alternatively, you can select **Use KQL Filter** and enter a KQL expression to limit the scope of your rule.
-1. Enter a failure rate in **Is Above** (defaults to 30%).
-1. Define the period to be assessed in **For the last** (defaults to last 5 minutes).
-1. Choose how to **Group alerts by**. Every unique value will create an alert.
-1. Define the interval to check the rule (for example, check every 1 minute).
-1. (Optional) Set up **Actions**.
-1. **Save** your rule.
-
-## Add actions
-
-You can extend your rules with actions that interact with third-party systems, write to logs or indices, or send user notifications. You can add an action to a rule at any time. You can create rules without adding actions, and you can also define multiple actions for a single rule.
-
-To add actions to rules, you must first create a connector for that service (for example, an email or external incident management system), which you can then use for different rules, each with their own action frequency.
-
-
-Connectors provide a central place to store connection information for services and integrations with third party systems.
-The following connectors are available when defining actions for alerting rules:
-
-
-
-For more information on creating connectors, refer to Connectors.
-
-
-
-
-After you select a connector, you must set the action frequency. You can choose to create a **Summary of alerts** on each check interval or on a custom interval. For example, you can send email notifications that summarize the new, ongoing, and recovered alerts every twelve hours.
-
-Alternatively, you can set the action frequency to **For each alert** and specify the conditions each alert must meet for the action to run. For example, you can send an email only when the alert status changes to critical.
-
-
-
-With the **Run when** menu you can choose if an action runs when the threshold for an alert is reached, or when the alert is recovered. For example, you can add a corresponding action for each state to ensure you are alerted when the rule is triggered and also when it recovers.
-
-
-
-
-
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the Add variable icon and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.alertDetailsUrl`
-
- Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
-
- `context.environment`
-
-  The environment the alert is created for.
-
- `context.interval`
-
- The length and unit of time period where the alert conditions were met.
-
- `context.reason`
-
- A concise description of the reason for the alert.
-
- `context.serviceName`
-
- The service the alert is created for.
-
- `context.threshold`
-
- Any trigger value above this value will cause the alert to fire.
-
- `context.transactionName`
-
- The transaction name the alert is created for.
-
- `context.transactionType`
-
- The transaction type the alert is created for.
-
- `context.triggerValue`
-
- The value that breached the threshold and triggered the alert.
-
- `context.viewInAppUrl`
-
- Link to the alert source.
-
-
-
-
-
-## Example
-
-The failed transaction rate threshold alert triggers when the rate of transaction errors in a service exceeds a defined threshold.
-
-Before continuing, identify the service name, environment name, and transaction type that you’d like to create a failed transaction rate threshold rule for.
-
-This guide will create an alert for a failed transaction rate based on the following criteria:
-
-* Service: `{your_service.name}`
-* Transaction: `{your_transaction.name}`
-* Environment: `{your_service.environment}`
-* Error rate is above 30% for the last five minutes
-* Group alerts by `service.name` and `service.environment`
-* Check every 1 minute
-* Send the alert via email to the site reliability team
-
-From any page in **Applications**, select **Alerts and rules** → **Create threshold rule** → **Failed transaction rate**. Change the name of the alert (if you wish), but do not edit the tags.
-
-Based on the criteria above, define the following rule details:
-
-* **Service**: `{your_service.name}`
-* **Type**: `{your_transaction.name}`
-* **Environment**: `{your_service.environment}`
-* **Is above:** `30%`
-* **For the last:** `5 minutes`
-* **Group alerts by:** `service.name` `service.environment`
-* **Check every:** `1 minute`
-
-Next, select the **Email** connector and click **Create a connector**. Fill out the required details: sender, host, port, etc., and select **Save**.
-
-A default message is provided as a starting point for your alert. You can use the Mustache template syntax (`{{variable}}`) to pass additional alert values at the time a condition is detected to an action. A list of available variables can be accessed by clicking the Add variable icon .
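-
-If you prefer a compact message, for example for a chat channel, a single-line template along these lines could also work (again, only a sketch):
-
-```txt
-[{{rule.name}}] {{context.serviceName}} ({{context.environment}}): failed transaction rate {{context.triggerValue}} breached threshold {{context.threshold}} over the last {{context.interval}}. Details: {{context.alertDetailsUrl}}
-```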
-
-Select **Save**. The alert has been created and is now active!
-
-
diff --git a/docs/en/serverless/alerting/create-inventory-threshold-alert-rule.mdx b/docs/en/serverless/alerting/create-inventory-threshold-alert-rule.mdx
deleted file mode 100644
index 7d057f4014..0000000000
--- a/docs/en/serverless/alerting/create-inventory-threshold-alert-rule.mdx
+++ /dev/null
@@ -1,186 +0,0 @@
----
-slug: /serverless/observability/create-inventory-threshold-alert-rule
-title: Create an inventory rule
-description: Get alerts when the infrastructure inventory exceeds a defined threshold.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-
-
-import Connectors from './alerting-connectors.mdx'
-
-import Roles from '../partials/roles.mdx'
-
-
-
-
-
-Based on the resources listed on the **Inventory** page within the ((infrastructure-app)),
-you can create a threshold rule to notify you when a metric has reached or exceeded a value for a specific
-resource or a group of resources within your infrastructure.
-
-Additionally, each rule can be defined using multiple
-conditions that combine metrics and thresholds to create precise notifications and reduce false positives.
-
-1. To access this page, go to **((observability))** -> **Infrastructure**.
-1. On the **Inventory** page or the **Metrics Explorer** page, click **Alerts and rules** -> **Infrastructure**.
-1. Select **Create inventory rule**.
-
-
-
-When you select **Create inventory rule**, the parameters you configured on the **Inventory** page will automatically
-populate the rule. You can use the Inventory first to view which nodes in your infrastructure you'd
-like to be notified about and then quickly create a rule in just a few clicks.
-
-
-
-
-
-## Inventory conditions
-
-Conditions for each rule can be applied to specific metrics relating to the inventory type you select.
-You can choose the aggregation type and the metric, and by including a warning threshold value, you can be
-alerted at multiple threshold values based on severity. When creating the rule, you can also choose to be
-notified if no data is returned for the specified metric or if the rule fails to query ((es)).
-
-In this example, Kubernetes Pods is the selected inventory type. The conditions state that you will receive
-a critical alert for any pods within the `ingress-nginx` namespace with a memory usage of 95% or above
-and a warning alert if memory usage is 90% or above.
-The chart shows the results of applying the rule to the last 20 minutes of data.
-Note that the chart time range is 20 times the value of the look-back window specified in the `FOR THE LAST` field.
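-
-Written out, that example condition amounts to something like the following. The filter field name is illustrative:
-
-```txt
-Inventory type:  Kubernetes Pods
-Filter:          kubernetes.namespace: "ingress-nginx"
-Metric:          Memory usage
-Warning when:    memory usage >= 90%
-Alert when:      memory usage >= 95%
-```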
-
-
-
-
-
-## Add actions
-
-You can extend your rules with actions that interact with third-party systems, write to logs or indices, or send user notifications. You can add an action to a rule at any time. You can create rules without adding actions, and you can also define multiple actions for a single rule.
-
-To add actions to rules, you must first create a connector for that service (for example, an email or external incident management system), which you can then use for different rules, each with their own action frequency.
-
-
-Connectors provide a central place to store connection information for services and integrations with third party systems.
-The following connectors are available when defining actions for alerting rules:
-
-
-
-For more information on creating connectors, refer to Connectors.
-
-
-
-
-After you select a connector, you must set the action frequency. You can choose to create a summary of alerts on each check interval or on a custom interval. For example, send email notifications that summarize the new, ongoing, and recovered alerts each hour:
-
-
-{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-Alternatively, you can set the action frequency such that you choose how often the action runs (for example, at each check interval, only when the alert status changes, or at a custom action interval). In this case, you define precisely when the alert is triggered by selecting a specific
-threshold condition: `Alert`, `Warning`, or `Recovered` (a value that was once above a threshold has now dropped below it).
-
-
-{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-You can also further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame:
-
-- **If alert matches query**: Enter a KQL query that defines field-value pairs or query conditions that must be met for notifications to send. The query only searches alert documents in the indices specified for the rule.
-- **If alert is generated during timeframe**: Set timeframe details. Notifications are only sent if alerts are generated within the timeframe you define.
-
-
-
-
-
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the Add variable icon and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.alertDetailsUrl`
-
- Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
-
- `context.alertState`
-
- Current state of the alert.
-
- `context.cloud`
-
- The cloud object defined by ECS if available in the source.
-
- `context.container`
-
- The container object defined by ECS if available in the source.
-
- `context.group`
-
- Name of the group reporting data.
-
- `context.host`
-
- The host object defined by ECS if available in the source.
-
- `context.labels`
-
- List of labels associated with the entity where this alert triggered.
-
- `context.metric`
-
- The metric name in the specified condition. Usage: (`ctx.metric.condition0`, `ctx.metric.condition1`, and so on).
-
- `context.orchestrator`
-
- The orchestrator object defined by ECS if available in the source.
-
- `context.originalAlertState`
-
- The state of the alert before it recovered. This is only available in the recovery context.
-
- `context.originalAlertStateWasALERT`
-
- Boolean value of the state of the alert before it recovered. This can be used for template conditions. This is only available in the recovery context.
-
- `context.originalAlertStateWasWARNING`
-
- Boolean value of the state of the alert before it recovered. This can be used for template conditions. This is only available in the recovery context.
-
- `context.reason`
-
- A concise description of the reason for the alert.
-
- `context.tags`
-
- List of tags associated with the entity where this alert triggered.
-
- `context.threshold`
-
- The threshold value of the metric for the specified condition. Usage: (`ctx.threshold.condition0`, `ctx.threshold.condition1`, and so on)
-
- `context.timestamp`
-
- A timestamp of when the alert was detected.
-
- `context.value`
-
- The value of the metric in the specified condition. Usage: (`ctx.value.condition0`, `ctx.value.condition1`, and so on).
-
- `context.viewInAppUrl`
-
- Link to the alert source.
-
-
-
-
-
-
-
-## Settings
-
-With infrastructure threshold rules, it's not possible to set an explicit index pattern as part of the configuration. The index pattern
-is instead inferred from **Metrics indices** on the Settings page of the ((infrastructure-app)).
-
-With each execution of the rule check, the **Metrics indices** setting is checked, but it is not stored when the rule is created.
diff --git a/docs/en/serverless/alerting/create-latency-threshold-alert-rule.mdx b/docs/en/serverless/alerting/create-latency-threshold-alert-rule.mdx
deleted file mode 100644
index 87cfcb3a29..0000000000
--- a/docs/en/serverless/alerting/create-latency-threshold-alert-rule.mdx
+++ /dev/null
@@ -1,159 +0,0 @@
----
-slug: /serverless/observability/create-latency-threshold-alert-rule
-title: Create a latency threshold rule
-description: Get alerts when the latency of a specific transaction type in a service exceeds a defined threshold.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-
-
-import Connectors from './alerting-connectors.mdx'
-
-import Roles from '../partials/roles.mdx'
-
-
-
-You can create a latency threshold rule to alert you when the latency of a specific transaction type in a service exceeds a defined threshold. Threshold rules can be set at different levels: environment, service, transaction type, and/or transaction name. Add actions to raise alerts via services or third-party integrations, for example, email, Slack, or Jira.
-
-
-
-
-These steps show how to use the **Alerts** UI.
-You can also create a latency threshold rule directly from any page within **Applications**. Click the **Alerts and rules** button, and select **Create threshold rule** and then **Latency**. When you create a rule this way, the **Name** and **Tags** fields will be prepopulated but you can still change these.
-
-
-To create your latency threshold rule:
-
-1. In your ((observability)) project, go to **Alerts**.
-1. Select **Manage Rules** from the **Alerts** page, and select **Create rule**.
-1. Enter a **Name** for your rule, and any optional **Tags** for more granular reporting (leave blank if unsure).
-1. Select the **Latency threshold** rule type from the APM use case.
-1. Select the appropriate **Service**, **Type**, **Environment** and **Name** (or leave **ALL** to include all options). Alternatively, you can select **Use KQL Filter** and enter a KQL expression to limit the scope of your rule.
-1. Define the threshold and period:
- * **When**: Choose between `Average`, `95th percentile`, or `99th percentile`.
- * **Is Above**: Enter a time in milliseconds (defaults to 1500ms).
-  * **For the last**: Define the period to be assessed (defaults to last 5 minutes).
-1. Choose how to **Group alerts by**. Every unique value will create an alert.
-1. Define the interval to check the rule (for example, check every 1 minute).
-1. (Optional) Set up **Actions**.
-1. **Save** your rule.
-
-## Add actions
-
-You can extend your rules with actions that interact with third-party systems, write to logs or indices, or send user notifications. You can add an action to a rule at any time. You can create rules without adding actions, and you can also define multiple actions for a single rule.
-
-To add actions to rules, you must first create a connector for that service (for example, an email or external incident management system), which you can then use for different rules, each with their own action frequency.
-
-
-Connectors provide a central place to store connection information for services and integrations with third party systems.
-The following connectors are available when defining actions for alerting rules:
-
-
-
-For more information on creating connectors, refer to Connectors.
-
-
-
-
-After you select a connector, you must set the action frequency. You can choose to create a **Summary of alerts** on each check interval or on a custom interval. For example, you can send email notifications that summarize the new, ongoing, and recovered alerts every twelve hours.
-
-Alternatively, you can set the action frequency to **For each alert** and specify the conditions each alert must meet for the action to run. For example, you can send an email only when the alert status changes to critical.
-
-
-
-With the **Run when** menu you can choose if an action runs when the threshold for an alert is reached, or when the alert is recovered. For example, you can add a corresponding action for each state to ensure you are alerted when the rule is triggered and also when it recovers.
-
-
-
-
-
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the Add variable icon and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.alertDetailsUrl`
-
- Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
-
- `context.environment`
-
- The environment the alert is created for.
-
- `context.interval`
-
- The length and unit of the time period in which the alert conditions were met.
-
- `context.reason`
-
- A concise description of the reason for the alert.
-
- `context.serviceName`
-
- The service the alert is created for.
-
- `context.threshold`
-
- Any trigger value above this value will cause the alert to fire.
-
- `context.transactionName`
-
- The transaction name the alert is created for.
-
- `context.transactionType`
-
- The transaction type the alert is created for.
-
- `context.triggerValue`
-
- The value that breached the threshold and triggered the alert.
-
- `context.viewInAppUrl`
-
- Link to the alert source.
-
-
-
-
-
-
-## Example
-
-The latency threshold alert triggers when the latency of a specific transaction type in a service exceeds a defined threshold.
-
-Before continuing, identify the service name, environment name, and transaction type that you’d like to create a latency threshold rule for.
-
-This guide creates a latency threshold rule based on the following criteria:
-
-* Service: `{your_service.name}`
-* Transaction: `{your_transaction.name}`
-* Environment: `{your_service.environment}`
-* Average latency is above 1500ms for last 5 minutes
-* Group alerts by `service.name` and `service.environment`
-* Check every 1 minute
-* Send the alert via email to the site reliability team
-
-From any page in **Applications**, select **Alerts and rules** → **Create threshold rule** → **Latency threshold**. Change the name of the alert (if you wish), but do not edit the tags.
-
-Based on the criteria above, define the following rule details:
-
-* **Service**: `{your_service.name}`
-* **Type**: `{your_transaction.name}`
-* **Environment**: `{your_service.environment}`
-* **When:** `Average`
-* **Is above:** `1500ms`
-* **For the last:** `5 minutes`
-* **Group alerts by:** `service.name` `service.environment`
-* **Check every:** `1 minute`
-
-Next, select the **Email** connector and click **Create a connector**. Fill out the required details: sender, host, port, etc., and select **Save**.
-
-A default message is provided as a starting point for your alert. You can use the Mustache template syntax (`{{variable}}`) to pass additional alert values to an action at the time a condition is detected. A list of available variables can be accessed by clicking the **Add variable** icon.
-
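-For example, a notification message that uses some of the variables listed above might look like the following. This is an illustrative message written with Mustache syntax, not the default template:
-
-```
-Latency threshold breached for {{context.serviceName}} ({{context.environment}})
-
-{{context.reason}}
-
-Transaction: {{context.transactionName}} ({{context.transactionType}})
-Observed latency: {{context.triggerValue}} (threshold: {{context.threshold}})
-View alert details: {{context.alertDetailsUrl}}
-```
-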
-Select **Save**. The alert has been created and is now active!
-
diff --git a/docs/en/serverless/alerting/create-manage-rules.mdx b/docs/en/serverless/alerting/create-manage-rules.mdx
deleted file mode 100644
index 25930510bc..0000000000
--- a/docs/en/serverless/alerting/create-manage-rules.mdx
+++ /dev/null
@@ -1,141 +0,0 @@
----
-slug: /serverless/observability/create-manage-rules
-title: Create and manage rules
-description: Create and manage rules for alerting when conditions are met.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-Alerting enables you to define _rules_, which detect complex conditions within different apps and trigger actions when those conditions are met. Alerting provides a set of built-in connectors and rules for you to use.
-
-## Observability rules
-
-Learn more about Observability rules and how to create them:
-
-
-
-
- AIOps
- Anomaly detection
- Anomalies match specific conditions.
-
-
- APM
- APM anomaly
- The latency, throughput, or failed transaction rate of a service is abnormal.
-
-
- Observability
- Custom threshold
- An Observability data type reaches or exceeds a given value.
-
-
- Stack
- ((es)) query
- Matches are found during the latest query run.
-
-
- APM
- Error count threshold
- The number of errors in a service exceeds a defined threshold.
-
-
- APM
- Failed transaction rate threshold
- The rate of transaction errors in a service exceeds a defined threshold.
-
-
- Metrics
- Inventory
- The infrastructure inventory exceeds a defined threshold.
-
-
- APM
- Latency threshold
- The latency of a specific transaction type in a service exceeds a defined threshold.
-
-
- SLO
- SLO burn rate rule
- The burn rate is above a defined threshold.
-
-
-
-## Creating rules and alerts
-
-You start by defining the rule and how often it should be evaluated. You can extend these rules by adding an appropriate action (for example, send an email or create an issue) to be triggered when the rule conditions are met. These actions are defined within each rule and implemented by the appropriate connector for that action (for example, Slack or Jira). You can create any rule from scratch using the **Manage Rules** page, or you can create specific rule types from their respective UIs and benefit from some of the details being pre-filled (for example, Name and Tags).
-
-* For APM alert types, you can select **Alerts and rules** and create rules directly from the **Services**, **Traces**, and **Dependencies** UIs.
-
-* For SLO alert types, from the **SLOs** page open the **More actions** menu for an SLO and select **Create new alert rule**. Alternatively, when you create a new SLO, the **Create new SLO burn rate alert rule** checkbox is enabled by default and prompts you to create an SLO burn rate rule when you save the SLO.
-
-{/*
-Clarify available Logs rule
-*/}
-
-After a rule is created, you can open the **More actions** menu and select **Edit rule** to check or change the definition, and/or add or modify actions.
-
-
-
-From the action menu you can also:
-
-* Disable or delete rule
-* Clone rule
-* Snooze rule notifications
-* Run rule (without waiting for next scheduled check)
-* Update API keys
-
-## View rule details
-
-Click on an individual rule on the **((rules-app))** page to view details including the rule name, status, definition, execution history, related alerts, and more.
-
-
-
-A rule can have one of the following responses:
-
-`failed`
- : The rule ran with errors.
-
-`succeeded`
- : The rule ran without errors.
-
-`warning`
- : The rule ran with some non-critical errors.
-
-## Snooze and disable rules
-
-The rule listing enables you to quickly snooze, disable, enable, or delete individual rules.
-
-{/*  */}
-
-When you snooze a rule, the rule checks continue to run on a schedule but the
-alert will not trigger any actions. You can snooze for a specified period of
-time, indefinitely, or schedule single or recurring downtimes.
-
-{/*  */}
-
-When a rule is in a snoozed state, you can cancel or change the duration of
-this state.
-
- To temporarily suppress notifications for _all_ rules, create a .
-
-{/* Remove tech preview? */}
-
-## Import and export rules
-
-To import and export rules, use ((saved-objects-app)).
-
-Rules are disabled on export.
-You are prompted to re-enable the rule on successful import.
-
-{/* Can you import / export rules? */}
diff --git a/docs/en/serverless/alerting/create-slo-burn-rate-alert-rule.mdx b/docs/en/serverless/alerting/create-slo-burn-rate-alert-rule.mdx
deleted file mode 100644
index 54ceb955a7..0000000000
--- a/docs/en/serverless/alerting/create-slo-burn-rate-alert-rule.mdx
+++ /dev/null
@@ -1,133 +0,0 @@
----
-slug: /serverless/observability/create-slo-burn-rate-alert-rule
-title: Create an SLO burn rate rule
-description: Get alerts when the SLO failure rate is too high over a defined period of time.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-
-
-import Connectors from './alerting-connectors.mdx'
-
-import Roles from '../partials/roles.mdx'
-
-
-
-Create an SLO burn rate rule to get alerts when the burn rate exceeds a defined threshold for two different lookback periods: a long period and a short period that is 1/12th of the long period. For example, if your long lookback period is one hour, your short lookback period is five minutes.
-
-Choose which SLO to monitor and then define multiple burn rate windows with appropriate severity. For each period, the burn rate is computed as the error rate divided by the error budget. When the burn rates for both periods surpass the threshold, an alert is triggered. Add actions to raise alerts via services or third-party integrations (for example, email, Slack, or Jira).
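-
-To make that relationship concrete, here is a minimal Python sketch of the logic described above. The event counts, the 99% SLO (1% error budget), and the threshold are hypothetical values used only for illustration; this is not how the rule is implemented internally:
-
-```python
-# Illustrative only: an alert fires when the burn rate exceeds the threshold
-# for BOTH the long lookback window and the short window (1/12th of the long window).
-
-def burn_rate(bad_events, total_events, error_budget):
-    """Burn rate = error rate divided by the error budget."""
-    error_rate = bad_events / total_events if total_events else 0.0
-    return error_rate / error_budget
-
-# Hypothetical counts for a 99% SLO (1% error budget), with a 1-hour long window
-# and a 5-minute short window.
-error_budget = 0.01
-long_window_rate = burn_rate(bad_events=120, total_events=6000, error_budget=error_budget)   # roughly 2.0
-short_window_rate = burn_rate(bad_events=15, total_events=500, error_budget=error_budget)    # roughly 3.0
-
-threshold = 1.0
-print(long_window_rate > threshold and short_window_rate > threshold)  # True: both windows breach the threshold
-```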
-
-
-
-
-These steps show how to use the **Alerts** UI. You can also create an SLO burn rate rule directly from **Observability** → **SLOs**.
-Click the more options icon () to the right of the SLO you want to add a burn rate rule for, and select **Create new alert rule** from the menu.
-
-When you use the UI to create an SLO, a default SLO burn rate alert rule is created automatically.
-The burn rate rule will use the default configuration and no connector.
-You must configure a connector if you want to receive alerts for SLO breaches.
-
-
-To create an SLO burn rate rule:
-
-1. In your ((observability)) project, go to **Alerts**.
-1. Select **Manage Rules** from the **Alerts** page, and select **Create rule**.
-1. Enter a **Name** for your rule, and any optional **Tags** for more granular reporting (leave blank if unsure).
-1. Select **SLO burn rate** from the **Select rule type** list.
-1. Select the **SLO** you want to monitor.
-1. Define multiple burn rate windows for each **Action Group** (defaults to 4 windows, but you can edit this):
- * **Lookback (hours)**: Enter the lookback period for this window. A shorter period equal to 1/12th of this period will be used for faster recovery.
- * **Burn rate threshold**: Enter a burn rate for this window.
- * **Action Group**: Select a severity for this window.
-1. Define the interval to check the rule (for example, check every 1 minute).
-1. (Optional) Set up **Actions**.
-1. **Save** your rule.
-
-## Add actions
-
-You can extend your rules with actions that interact with third-party systems, write to logs or indices, or send user notifications. You can add an action to a rule at any time. You can create rules without adding actions, and you can also define multiple actions for a single rule.
-
-To add actions to rules, you must first create a connector for that service (for example, an email or external incident management system), which you can then use for different rules, each with their own action frequency.
-
-
-Connectors provide a central place to store connection information for services and integrations with third party systems.
-The following connectors are available when defining actions for alerting rules:
-
-
-
-For more information on creating connectors, refer to Connectors.
-
-
-
-
-After you select a connector, you must set the action frequency. You can choose to create a **Summary of alerts** on each check interval or on a custom interval. For example, you can send email notifications that summarize the new, ongoing, and recovered alerts every twelve hours.
-
-Alternatively, you can set the action frequency to **For each alert** and specify the conditions each alert must meet for the action to run. For example, you can send an email only when the alert status changes to critical.
-
-
-
-With the **Run when** menu you can choose if an action runs for a specific severity (critical, high, medium, low), or when the alert is recovered. For example, you can add a corresponding action for each severity you want an alert for, and also for when the alert recovers.
-
-
-
-
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the Add variable icon and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.alertDetailsUrl`
-
- Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
-
- `context.burnRateThreshold`
-
- The burn rate threshold value.
-
- `context.longWindow`
-
- The window duration with the associated burn rate value.
-
- `context.reason`
-
- A concise description of the reason for the alert.
-
- `context.shortWindow`
-
- The window duration with the associated burn rate value.
-
- `context.sloId`
-
- The SLO unique identifier.
-
- `context.sloInstanceId`
-
- The SLO instance ID.
-
- `context.sloName`
-
- The SLO name.
-
- `context.timestamp`
-
- A timestamp of when the alert was detected.
-
- `context.viewInAppUrl`
-
- The URL to the SLO details page to help with further investigation.
-
-
-
-
-
-## Next steps
-
-Learn how to view alerts and triage SLO burn rate breaches:
-
-*
-*
diff --git a/docs/en/serverless/alerting/rate-aggregation.mdx b/docs/en/serverless/alerting/rate-aggregation.mdx
deleted file mode 100644
index 650cfc0304..0000000000
--- a/docs/en/serverless/alerting/rate-aggregation.mdx
+++ /dev/null
@@ -1,53 +0,0 @@
----
-slug: /serverless/observability/rateAggregation
-title: Rate aggregation
-description: Analyze the rate at which a specific field changes over time.
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-You can use a rate aggregation to analyze the rate at which a specific field changes over time.
-This type of aggregation is useful when you want to analyze fields like counters.
-
-For example, imagine you have a counter field called `restarts` that increments each time a service restarts.
-You can use a rate aggregation to get an alert if the service restarts more than X times within a specific time window (for example, per day).
-
-## How rates are calculated
-
-Rates used in alerting rules are calculated by comparing the maximum value of the field in the previous bucket to the maximum value of the field in the current bucket and then dividing the result by the number of seconds in the selected interval.
-For example, if the value of the `restarts` field increases, the rate would be calculated as:
-
-`(max_value_in_current_bucket - max_value_in_previous_bucket)/interval_in_seconds`
-
-In this example, let’s assume you have one document per bucket with the following data:
-
-
-
-```json
-{
-  "timestamp": 0,
-  "restarts": 0
-}
-
-{
-  "timestamp": 60000,
-  "restarts": 1
-}
-```
-
-Let’s assume the timestamp is a UNIX timestamp in milliseconds,
-and we started counting on Thursday, January 1, 1970 12:00:00 AM.
-In that case, the rate will be calculated as follows:
-
-`(max_value_in_current_bucket - max_value_in_previous_bucket)/interval_in_seconds`, where:
-
-* `max_value_in_current_bucket` [now-1m → now]: 1
-* `max_value_in_previous_bucket` [now-2m → now-1m]: 0
-* `interval_in_seconds`: 60
-
-The rate calculation would be: `(1 - 0) / 60 = 0.0166666666667`
-
-If you want to alert when the rate of restarts is above 1 within a 1-minute window, you would set the threshold to `0.0166666666667` (the rule alerts when the computed rate exceeds that value).
-
-The calculation you need to use depends on the interval that's selected.
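-
-As a quick check of that arithmetic, here is a minimal Python sketch using the example values above (illustrative only):
-
-```python
-# Illustrative only: the rate formula used in the worked example above.
-def rate(max_value_in_current_bucket, max_value_in_previous_bucket, interval_in_seconds):
-    return (max_value_in_current_bucket - max_value_in_previous_bucket) / interval_in_seconds
-
-# Restarts went from 0 to 1 across two 1-minute (60-second) buckets.
-restart_rate = rate(max_value_in_current_bucket=1,
-                    max_value_in_previous_bucket=0,
-                    interval_in_seconds=60)
-print(restart_rate)  # 0.016666666666666666
-
-# A threshold of 1/60 corresponds to "more than one restart per minute".
-print(restart_rate > 1 / 60)  # False: exactly one restart per minute does not exceed it
-```
-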
diff --git a/docs/en/serverless/alerting/synthetic-monitor-status-alert.mdx b/docs/en/serverless/alerting/synthetic-monitor-status-alert.mdx
deleted file mode 100644
index 1dc8829e2d..0000000000
--- a/docs/en/serverless/alerting/synthetic-monitor-status-alert.mdx
+++ /dev/null
@@ -1,152 +0,0 @@
----
-slug: /serverless/observability/monitor-status-alert
-title: Create a synthetic monitor status rule
-description: Get alerts based on the status of synthetic monitors.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-import Connectors from './alerting-connectors.mdx'
-
-Within the Synthetics UI, create a **Monitor Status** rule to receive notifications
-based on errors and outages.
-
-1. To access this page, go to **Synthetics** → **Overview**.
-1. At the top of the page, click **Alerts and rules** → **Monitor status rule** → **Create status rule**.
-
-## Filters
-
-The **Filter by** section controls the scope of the rule.
-The rule will only check monitors that match the filters defined in this section.
-In this example, the rule will only alert on `browser` monitors located in `Asia/Pacific - Japan`.
-
-
-
-## Conditions
-
-Conditions for each rule will be applied to all monitors that match the filters in the [**Filter by** section](#filters).
-You can choose the number of times the monitor has to be down relative to either a number of checks run
-or a time range in which checks were run, and the minimum number of locations the monitor must be down in.
-
-
- Retests are included in the number of checks.
-
-
-The **Rule schedule** defines how often to evaluate the condition. Note that checks are queued, and they run as close
-to the defined value as capacity allows. For example, if a check is scheduled to run every 2 minutes, but the check
-takes longer than 2 minutes to run, a check will not run until the previous check has finished.
-
-You can also set **Advanced options** such as the number of consecutive runs that must meet the rule conditions before
-an alert occurs.
-
-In this example, the conditions will be met any time a `browser` monitor is down `3` of the last `5` times
-the monitor ran across any locations that match the filter. These conditions will be evaluated every minute,
-and you will only receive an alert when the conditions are met three times consecutively.
-
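-A minimal sketch of how a condition like this could be evaluated, assuming a hypothetical list of recent check results (illustrative only, not how Synthetics evaluates rules internally):
-
-```python
-# Illustrative only: "the monitor is down N of the last M checks".
-def monitor_is_down(recent_checks, down_count=3, window=5):
-    """recent_checks: newest-first list of booleans, True when the check was up."""
-    last = recent_checks[:window]
-    return sum(1 for up in last if not up) >= down_count
-
-checks = [False, True, False, False, True]  # newest first; 3 of the last 5 checks are down
-print(monitor_is_down(checks))  # True
-```
-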
-
-
-## Action types
-
-Extend your rules by connecting them to actions that use the following supported built-in integrations.
-
-
-
-After you select a connector, you must set the action frequency.
-You can choose to create a summary of alerts on each check interval or on a custom interval.
-For example, send email notifications that summarize the new, ongoing, and recovered alerts each hour:
-
-
-
-Alternatively, you can set the action frequency such that you choose how often the action runs
-(for example, at each check interval, only when the alert status changes, or at a custom action interval).
-In this case, you must also select the specific threshold condition that affects when actions run:
-the _Synthetics monitor status_ changes or when it is _Recovered_ (went from down to up).
-
-
-
-You can also further refine the conditions under which actions run by specifying that actions only run
-when they match a KQL query or when an alert occurs within a specific time frame:
-
-* **If alert matches query**: Enter a KQL query that defines field-value pairs or query conditions that must
- be met for notifications to send. The query only searches alert documents in the indices specified for the rule.
-* **If alert is generated during timeframe**: Set timeframe details. Notifications are only sent if alerts are
- generated within the timeframe you define.
-
-
-
-### Action variables
-
-Use the default notification message or customize it.
-You can add more context to the message by clicking the icon above the message text box
-and selecting from a list of available variables.
-
-
-
-The following variables are specific to this rule type.
-You can also specify [variables common to all rules](((kibana-ref))/rule-action-variables.html).
-
-
- `context.checkedAt`
-
- Timestamp of the monitor run.
-
- `context.hostName`
-
- Hostname of the location from which the check is performed.
-
- `context.lastErrorMessage`
-
- Monitor last error message.
-
- `context.locationId`
-
- Location ID from which the check is performed.
-
- `context.locationName`
-
- Location name from which the check is performed.
-
- `context.locationNames`
-
- Location names from which the checks are performed.
-
- `context.message`
-
- A generated message summarizing the status of monitors currently down.
-
- `context.monitorId`
-
- ID of the monitor.
-
- `context.monitorName`
-
- Name of the monitor.
-
- `context.monitorTags`
-
- Tags associated with the monitor.
-
- `context.monitorType`
-
- Type (for example, HTTP/TCP) of the monitor.
-
- `context.monitorUrl`
-
- URL of the monitor.
-
- `context.reason`
-
- A concise description of the reason for the alert.
-
- `context.recoveryReason`
-
- A concise description of the reason for the recovery.
-
- `context.status`
-
- Monitor status (for example, "down").
-
- `context.viewInAppUrl`
-
- Open alert details and context in Synthetics app.
-
-
\ No newline at end of file
diff --git a/docs/en/serverless/alerting/triage-slo-burn-rate-breaches.mdx b/docs/en/serverless/alerting/triage-slo-burn-rate-breaches.mdx
deleted file mode 100644
index 33e420bd4d..0000000000
--- a/docs/en/serverless/alerting/triage-slo-burn-rate-breaches.mdx
+++ /dev/null
@@ -1,53 +0,0 @@
----
-slug: /serverless/observability/triage-slo-burn-rate-breaches
-title: Triage SLO burn rate breaches
-description: Triage SLO burn rate breaches to avoid exhausting your error budget and violating your SLO.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-
-
-SLO burn rate breaches occur when the percentage of bad events over a specified time period exceeds the threshold set in your .
-When this happens, you are at risk of exhausting your error budget and violating your SLO.
-
-To triage issues quickly, go to the alert details page:
-
-1. In your Observability project, go to **Alerts** (or open the SLO and click **Alerts**.)
-2. From the Alerts table, click the
-icon next to the alert and select **View alert details**.
-
-The alert details page shows information about the alert, including when the alert was triggered,
-the duration of the alert, the source SLO, and the rule that triggered the alert.
-You can follow the links to navigate to the source SLO or rule definition.
-
-Explore charts on the page to learn more about the SLO breach:
-
-* **Burn rate chart**. The first chart shows the burn rate during the time range when the alert was active.
- The line indicates how close the SLO came to breaching the threshold.
-
- 
-
-
- The timeline is annotated to show when the threshold was breached.
- You can hover over an alert icon to see the timestamp of the alert.
-
-
-* **Alerts history chart**. The next chart provides information about alerts for the same rule and group over the last 30 days.
- It shows the number of those alerts that were triggered per day, the total number of alerts triggered throughout the 30 days,
- and the average time it took to recover after a breach.
-
- 
-
-The number, duration, and frequency of these breaches over time gives you an indication of how severely the service is degrading so that you can focus on high severity issues first.
-
-
- The contents of the alert details page may vary depending on the type of SLI that's defined in the SLO.
-
-
-After investigating the alert, you may want to:
-
-* Click **Snooze the rule** to snooze notifications for a specific time period or indefinitely.
-* Click the icon and select **Add to case** to add the alert to a new or existing case. To learn more, refer to .
-* Click the icon and select **Mark as untracked**.
-When an alert is marked as untracked, actions are no longer generated.
-You can choose to move active alerts to this state when you disable or delete rules.
diff --git a/docs/en/serverless/alerting/triage-threshold-breaches.mdx b/docs/en/serverless/alerting/triage-threshold-breaches.mdx
deleted file mode 100644
index 91af20b189..0000000000
--- a/docs/en/serverless/alerting/triage-threshold-breaches.mdx
+++ /dev/null
@@ -1,59 +0,0 @@
----
-slug: /serverless/observability/triage-threshold-breaches
-title: Triage threshold breaches
-description: Triage threshold breaches on the alert details page.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting' ]
----
-
-Threshold breaches occur when an ((observability)) data type reaches or exceeds the threshold set in your .
-For example, you might have a custom threshold rule that triggers an alert when the total number of log documents with a log level of `error` reaches 100.
-
-To triage issues quickly, go to the alert details page:
-
-1. In your Observability project, go to **Alerts**.
-2. From the Alerts table, click the
-icon next to the alert and select **View alert details**.
-
-The alert details page shows information about the alert, including when the alert was triggered,
-the duration of the alert, and the last status update.
-If there is a "group by" field specified in the rule, the page also includes the source.
-You can follow the links to navigate to the rule definition.
-
-Explore charts on the page to learn more about the threshold breach:
-
-* **Charts for each condition**. The page includes a chart for each condition specified in the rule.
- These charts help you understand when the breach occurred and its severity.
-
- 
-
-
- The timeline is annotated to show when the threshold was breached.
- You can hover over an alert icon to see the timestamp of the alert.
-
-
-* **Log rate analysis chart**. If your rule is intended to detect log threshold breaches
- (that is, it has a single condition that uses a count aggregation),
- you can run a log rate analysis, assuming you have the required license.
- Running a log rate analysis is useful for detecting significant dips or spikes in the number of logs.
- Notice that you can adjust the baseline and deviation, and then run the analysis again.
- For more information about using the log rate analysis feature,
- refer to the [AIOps Labs](((kibana-ref))/xpack-ml-aiops.html#log-rate-analysis) documentation.
-
- 
-
-* **Alerts history chart**. The next chart provides information about alerts for the same rule and group over the last 30 days.
- It shows the number of those alerts that were triggered per day, the total number of alerts triggered throughout the 30 days,
- and the average time it took to recover after a breach.
-
- 
-
-Analyze these charts to better understand when the breach started, its current
-state, and how the issue is trending.
-
-After investigating the alert, you may want to:
-
-* Click **Snooze the rule** to snooze notifications for a specific time period or indefinitely.
-* Click the icon and select **Add to case** to add the alert to a new or existing case. To learn more, refer to .
-* Click the icon and select **Mark as untracked**.
-When an alert is marked as untracked, actions are no longer generated.
-You can choose to move active alerts to this state when you disable or delete rules.
diff --git a/docs/en/serverless/alerting/view-alerts.mdx b/docs/en/serverless/alerting/view-alerts.mdx
deleted file mode 100644
index 1a10210a19..0000000000
--- a/docs/en/serverless/alerting/view-alerts.mdx
+++ /dev/null
@@ -1,120 +0,0 @@
----
-slug: /serverless/observability/view-alerts
-title: View alerts
-description: Track and manage alerts for your services and applications.
-tags: [ 'serverless', 'observability', 'how-to', 'alerting']
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-You can track and manage alerts for your applications and SLOs from the **Alerts** page. You can filter this view by alert status or time period, or search for specific alerts using KQL. Manage your alerts by adding them to cases or viewing them within the respective UIs.
-
-{/* Is this a page or dashboard? */}
-
-
-
-## Filter alerts
-
-To help you get started with your analysis faster, use the KQL bar to create structured queries using
-[((kib)) Query Language](((kibana-ref))/kuery-query.html).
-{/* TO-DO: Fix example
-For example, `kibana.alert.rule.name : <>`.
-*/}
-
-You can use the time filter to define a specific date and time range.
-By default, this filter is set to search for the last 15 minutes.
-
-You can also filter by alert status using the buttons below the KQL bar.
-By default, this filter is set to **Show all** alerts, but you can filter to show only active, recovered or untracked alerts.
-
-## View alert details
-
-There are a few ways to inspect the details for a specific alert.
-
-From the **Alerts** table, you can click on a specific alert to open the alert detail flyout to view a summary of the alert without leaving the page.
-There you'll see the current status of the alert, its duration, and when it was last updated.
-To help you determine what caused the alert, you can view the expected and actual threshold values, and the rule that produced the alert.
-
-
-
-There are three common alert statuses:
-
-`active`
- : The conditions for the rule are met and actions should be generated according to the notification settings.
-
-`flapping`
- : The alert is switching repeatedly between active and recovered states.
-
-`recovered`
- : The conditions for the rule are no longer met and recovery actions should be generated.
-
-`untracked`
- : The corresponding rule is disabled or you've marked the alert as untracked. To mark the alert as untracked, go to the **Alerts** table, click the icon to expand the _More actions_ menu, and click **Mark as untracked**.
- When an alert is marked as untracked, actions are no longer generated.
- You can choose to move active alerts to this state when you disable or delete rules.
-
-
-The flapping state is possible only if you have enabled alert flapping detection.
-Go to the **Alerts** page and click **Manage Rules** to navigate to the ((observability)) **((rules-app))** page.
-Click **Settings** then set the look back window and threshold that are used to determine whether alerts are flapping.
-For example, you can specify that the alert must change status at least 6 times in the last 10 runs.
-If the rule has actions that run when the alert status changes, those actions are suppressed while the alert is flapping.
-
-
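-A minimal sketch of the flapping check described above, assuming a hypothetical sequence of alert statuses (illustrative only, not how the detection is implemented):
-
-```python
-# Illustrative only: an alert is "flapping" if its status changed at least
-# `min_changes` times over the last `lookback` runs (for example, 6 changes in 10 runs).
-def is_flapping(statuses, lookback=10, min_changes=6):
-    recent = statuses[-lookback:]
-    changes = sum(1 for a, b in zip(recent, recent[1:]) if a != b)
-    return changes >= min_changes
-
-history = ["active", "recovered", "active", "recovered", "active",
-           "recovered", "active", "active", "recovered", "active"]
-print(is_flapping(history))  # True: the status changed 8 times in the last 10 runs
-```
-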
-{/*  */}
-
-To further inspect the rule:
-
-* From the alert detail flyout, click **View rule details**.
-* From the **Alerts** table, click the icon and select **View rule details**.
-
-To view the alert in the app that triggered it:
-
-* From the alert detail flyout, click **View in app**.
-* From the **Alerts** table, click the icon.
-
-## Customize the alerts table
-
-Use the toolbar buttons in the upper-left of the alerts table to customize the columns you want displayed:
-
-* **Columns**: Reorder the columns.
-* **_x_ fields sorted**: Sort the table by one or more columns.
-* **Fields**: Select the fields to display in the table.
-
-For example, click **Fields** and choose the `Maintenance Windows` field.
-If an alert was affected by a maintenance window, its identifier appears in the new column.
-For more information about their impact on alert notifications, refer to .
-
-{/*  */}
-
-You can also use the toolbar buttons in the upper-right to customize the display options or view the table in full-screen mode.
-
-## Add alerts to cases
-
-From the **Alerts** table, you can add one or more alerts to a case.
-Click the icon to add the alert to a new or existing case.
-You can add an unlimited number of alerts from any rule type.
-
-
-Each case can have a maximum of 1,000 alerts.
-
-
-### Add an alert to a new case
-
-To add an alert to a new case:
-
-1. Select **Add to new case**.
-1. Enter a case name, add relevant tags, and include a case description.
-1. Under **External incident management system**, select a connector. If you've previously added one, that connector displays as the default selection. Otherwise, the default setting is `No connector selected`.
-1. After you've completed all of the required fields, click **Create case**. A notification message confirms you successfully created the case. To view the case details, click the notification link or go to the Cases page.
-
-### Add an alert to an existing case
-
-To add an alert to an existing case:
-
-1. Select **Add to existing case**.
-1. Select the case where you will attach the alert. A confirmation message displays.
diff --git a/docs/en/serverless/apm-agents/apm-agents-aws-lambda-functions.mdx b/docs/en/serverless/apm-agents/apm-agents-aws-lambda-functions.mdx
deleted file mode 100644
index 43b8860e37..0000000000
--- a/docs/en/serverless/apm-agents/apm-agents-aws-lambda-functions.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
----
-slug: /serverless/observability/apm-agents-aws-lambda-functions
-title: AWS Lambda functions
-description: Use Elastic APM to monitor your AWS Lambda functions.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-
-Elastic APM lets you monitor your AWS Lambda functions.
-The natural integration of distributed tracing into your AWS Lambda functions provides insights into each function's execution and runtime behavior as well as its relationships and dependencies to other services.
-
-
-
-## AWS Lambda architecture
-
-{/* comes from sandbox.elastic.dev/test-books/apm/lambda/aws-lambda-arch.mdx */}
-AWS Lambda uses a special execution model to provide a scalable, on-demand compute service for code execution. In particular, AWS freezes the execution environment of a lambda function when no active requests are being processed. This execution model poses additional requirements on APM in the context of AWS Lambda functions:
-
-1. To avoid data loss, APM data collected by APM agents needs to be flushed before the execution environment of a lambda function is frozen.
-1. Flushing APM data must be fast so as not to impact the response times of lambda function requests.
-
-To accomplish the above, Elastic APM agents instrument AWS Lambda functions and dispatch APM data via an [AWS Lambda extension](https://docs.aws.amazon.com/lambda/latest/dg/using-extensions.html).
-
-Normally, during the execution of a Lambda function, there's only a single language process running in the AWS Lambda execution environment. With an AWS Lambda extension, Lambda users run a _second_ process alongside their main service/application process.
-
-
-
-By using an AWS Lambda extension, Elastic APM agents can send data to a local Lambda extension process, and that process will forward data on to the managed intake service asynchronously. The Lambda extension ensures that any potential latency between the Lambda function and the managed intake service instance will not cause latency in the request flow of the Lambda function itself.
-
-## Setup
-
-To get started with monitoring AWS Lambda functions, refer to the APM agent documentation:
-
-* [Monitor AWS Lambda Node.js functions](((apm-node-ref))/lambda.html)
-* [Monitor AWS Lambda Python functions](((apm-py-ref))/lambda-support.html)
-* [Monitor AWS Lambda Java functions](((apm-java-ref))/aws-lambda.html)
-
-
- The APM agent documentation states that you can use either an APM secret token or API key to authorize requests to the managed intake service. **However, when sending data to a project, you _must_ use an API key**.
-
- Read more about API keys in .
-
-
diff --git a/docs/en/serverless/apm-agents/apm-agents-elastic-apm-agents.mdx b/docs/en/serverless/apm-agents/apm-agents-elastic-apm-agents.mdx
deleted file mode 100644
index 31031526d3..0000000000
--- a/docs/en/serverless/apm-agents/apm-agents-elastic-apm-agents.mdx
+++ /dev/null
@@ -1,55 +0,0 @@
----
-slug: /serverless/observability/apm-agents-elastic-apm-agents
-title: Elastic APM agents
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-import Go from '../transclusion/apm/guide/about/go.mdx'
-import Java from '../transclusion/apm/guide/about/java.mdx'
-import Net from '../transclusion/apm/guide/about/net.mdx'
-import Node from '../transclusion/apm/guide/about/node.mdx'
-import Php from '../transclusion/apm/guide/about/php.mdx'
-import Python from '../transclusion/apm/guide/about/python.mdx'
-import Ruby from '../transclusion/apm/guide/about/ruby.mdx'
-
-Elastic APM agents automatically measure application performance and track errors.
-They offer built-in support for popular frameworks and technologies, and provide easy-to-use APIs that allow you to instrument any application.
-
-Elastic APM agents are built and maintained by Elastic. While they are similar, different programming languages have different nuances and requirements. Select your preferred language below to learn more about how each agent works.
-
-
-
-
-
-
-
-
-
-
-
-## Minimum supported versions
-
-The following versions of Elastic APM agents are supported:
-
-| Agent name | Agent version |
-|---|---|
-| **APM AWS Lambda extension** | ≥`1.x` |
-| **Go agent** | ≥`1.x` |
-| **Java agent** | ≥`1.x` |
-| **.NET agent** | ≥`1.x` |
-| **Node.js agent** | ≥`4.x` |
-| **PHP agent** | ≥`1.x` |
-| **Python agent** | ≥`6.x` |
-| **Ruby agent** | ≥`3.x` |
-
-
-Some recently added features may require newer agent versions than those listed above.
-In these instances, the required APM agent versions will be documented with the feature.
-
diff --git a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-collect-metrics.mdx b/docs/en/serverless/apm-agents/apm-agents-opentelemetry-collect-metrics.mdx
deleted file mode 100644
index ba3e6d2359..0000000000
--- a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-collect-metrics.mdx
+++ /dev/null
@@ -1,61 +0,0 @@
----
-slug: /serverless/observability/apm-agents-opentelemetry-collect-metrics
-title: Collect metrics
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-
-
-When collecting metrics, please note that the [`DoubleValueRecorder`](https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/DoubleValueRecorder.html)
-and [`LongValueRecorder`](https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/LongValueObserver.html) metrics are not yet supported.
-
-
-Here's an example of how to capture business metrics from a Java application.
-
-```java
-// initialize metric
-Meter meter = GlobalMetricsProvider.getMeter("my-frontend");
-DoubleCounter orderValueCounter = meter.doubleCounterBuilder("order_value").build();
-
-public void createOrder(HttpServletRequest request) {
-
- // create order in the database
- ...
- // increment business metrics for monitoring
- orderValueCounter.add(orderPrice);
-}
-```
-
-See the [Open Telemetry Metrics API](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md)
-for more information.
-
-
-
-## Verify OpenTelemetry metrics data
-
-Use **Discover** to validate that metrics are successfully reported to your project.
-
-1. Open your Observability project.
-1. In your ((observability)) project, go to **Discover**, and select the **Logs Explorer** tab.
-1. Click **All logs** → **Data Views** then select **APM**.
-1. Filter the data to only show documents with metrics: `processor.name :"metric"`
-1. Narrow your search with a known OpenTelemetry field. For example, if you have an `order_value` field, add `order_value: *` to your search to return
- only OpenTelemetry metrics documents.
-
-
-
-## Visualize
-
-Use **Lens** to create visualizations for OpenTelemetry metrics. Lens enables you to build visualizations by dragging and dropping data fields. It makes smart visualization suggestions for your data, allowing you to switch between visualization types.
-
-To get started with a new Lens visualization:
-
-1. In your ((observability)) project, go to **Visualizations**.
-1. Click **Create new visualization**.
-1. Select **Lens**.
-
-For more information on using Lens, refer to the [Lens documentation](((kibana-ref))/lens.html).
-
diff --git a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-limitations.mdx b/docs/en/serverless/apm-agents/apm-agents-opentelemetry-limitations.mdx
deleted file mode 100644
index 9391f10de1..0000000000
--- a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-limitations.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-slug: /serverless/observability/apm-agents-opentelemetry-limitations
-title: Limitations
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-## OpenTelemetry traces
-
-* Traces of applications using `messaging` semantics might be wrongly displayed as `transactions` in the Applications UI, while they should be considered `spans` (see issue [#7001](https://github.com/elastic/apm-server/issues/7001)).
-* Stack traces are not displayed for spans.
-* The "Time Spent by Span Type" chart is not available in APM views (see issue [#5747](https://github.com/elastic/apm-server/issues/5747)).
-
-
-
-## OpenTelemetry logs
-
-* The OpenTelemetry logs intake via Elastic is in technical preview.
-* The application logs data stream (`app_logs`) has dynamic mapping disabled. This means the automatic detection and mapping of new fields is disabled (see issue [#9093](https://github.com/elastic/apm-server/issues/9093)).
-
-
-
-## OpenTelemetry Line Protocol (OTLP)
-
-Elastic supports both the
-[(OTLP/gRPC)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlpgrpc) and
-[(OTLP/HTTP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlphttp) protocols
-with a ProtoBuf payload. Elastic does not yet support JSON encoding for OTLP/HTTP.
-
-
-
-## OpenTelemetry Collector exporter for Elastic
-
-The [OpenTelemetry Collector exporter for Elastic](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter#legacy-opentelemetry-collector-exporter-for-elastic)
-has been deprecated and replaced by the native support of the OpenTelemetry Line Protocol in Elastic Observability (OTLP). To learn more, see [migration](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter#migration).
-
-The [OpenTelemetry Collector exporter for Elastic](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter)
-(which is different from the legacy exporter mentioned above) is not intended to be used with Elastic APM and Elastic Observability. Use instead.
diff --git a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-opentelemetry-native-support.mdx b/docs/en/serverless/apm-agents/apm-agents-opentelemetry-opentelemetry-native-support.mdx
deleted file mode 100644
index ab639a4e1b..0000000000
--- a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-opentelemetry-native-support.mdx
+++ /dev/null
@@ -1,177 +0,0 @@
----
-slug: /serverless/observability/apm-agents-opentelemetry-opentelemetry-native-support
-title: Upstream OpenTelemetry Collectors and language SDKs
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-
-This is one of several approaches you can use to integrate Elastic with OpenTelemetry.
-**To compare approaches and choose the best approach for your use case, refer to .**
-
-
-Elastic natively supports the OpenTelemetry protocol (OTLP).
-This means trace data and metrics collected from your applications and infrastructure can
-be sent directly to Elastic.
-
-* Send data to Elastic from an upstream [OpenTelemetry Collector](#send-data-from-an-upstream-opentelemetry-collector)
-* Send data to Elastic from an upstream [OpenTelemetry language SDK](#send-data-from-an-upstream-opentelemetry-sdk)
-
-## Send data from an upstream OpenTelemetry Collector
-
-Connect your OpenTelemetry Collector instances to ((observability)) using the OTLP exporter:
-
-```yaml
-receivers: [^1]
- # ...
- otlp:
-
-processors: [^2]
- # ...
- memory_limiter:
- check_interval: 1s
- limit_mib: 2000
- batch:
-
-exporters:
- logging:
- loglevel: warn [^3]
- otlp/elastic: [^4]
- # Elastic https endpoint without the "https://" prefix
-    endpoint: "${ELASTIC_APM_SERVER_ENDPOINT}" [^5] [^7]
-    headers:
-      # Elastic API key
-      Authorization: "ApiKey ${ELASTIC_APM_API_KEY}" [^6] [^7]
-
-service:
- pipelines:
- traces:
- receivers: [otlp]
- processors: [..., memory_limiter, batch]
- exporters: [logging, otlp/elastic]
- metrics:
- receivers: [otlp]
- processors: [..., memory_limiter, batch]
- exporters: [logging, otlp/elastic]
- logs: [^8]
- receivers: [otlp]
- processors: [..., memory_limiter, batch]
- exporters: [logging, otlp/elastic]
-```
-[^1]: The receivers, like the
-[OTLP receiver](https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver), that forward data emitted by APM agents, or the [host metrics receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver).
-[^2]: We recommend using the [Batch processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md) and the [memory limiter processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md). For more information, see [recommended processors](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/README.md#recommended-processors).
-[^3]: The [logging exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/loggingexporter) is helpful for troubleshooting and supports various logging levels, like `debug`, `info`, `warn`, and `error`.
-[^4]: ((observability)) endpoint configuration.
-Elastic supports a ProtoBuf payload via both the OTLP protocol over gRPC transport [(OTLP/gRPC)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlpgrpc)
-and the OTLP protocol over HTTP transport [(OTLP/HTTP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlphttp).
-To learn more about these exporters, see the OpenTelemetry Collector documentation:
-[OTLP/HTTP Exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter) or
-[OTLP/gRPC exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlpexporter).
-[^5]: Hostname and port of the Elastic endpoint. For example, `elastic-apm-server:8200`.
-[^6]: Credential for Elastic APM API key authorization (`Authorization: "ApiKey an_api_key"`).
-[^7]: Environment-specific configuration parameters can be conveniently passed in as environment variables documented [here](https://opentelemetry.io/docs/collector/configuration/#configuration-environment-variables) (e.g. `ELASTIC_APM_SERVER_ENDPOINT` and `ELASTIC_APM_API_KEY`).
-[^8]: To send OpenTelemetry logs to your project, declare a `logs` pipeline.
-
-You're now ready to export traces and metrics from your services and applications.
-
-
-When using the OpenTelemetry Collector, you should always prefer sending data via the [`OTLP` exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter).
-Using other methods, like the [`elasticsearch` exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter), will bypass all of the validation and data processing that Elastic performs.
-In addition, your data will not be viewable in your Observability project if you use the `elasticsearch` exporter.
-
-
-## Send data from an upstream OpenTelemetry SDK
-
-
-This document outlines how to send data directly from an upstream OpenTelemetry SDK to APM Server, which is appropriate when getting started. However, in many cases you should use the OpenTelemetry SDK to send data to an OpenTelemetry Collector that processes and exports data to APM Server. Read more about when and how to use a collector in the [OpenTelemetry documentation](https://opentelemetry.io/docs/collector/#when-to-use-a-collector).
-
-
-To export traces and metrics to Elastic, instrument your services and applications
-with the OpenTelemetry API, SDK, or both. For example, if you are a Java developer, you need to instrument your Java app with the
-[OpenTelemetry agent for Java](https://github.com/open-telemetry/opentelemetry-java-instrumentation).
-See the [OpenTelemetry Instrumentation guides](https://opentelemetry.io/docs/instrumentation/) to download the
-OpenTelemetry agent or SDK for your language.
-
-Define environment variables to configure the OpenTelemetry agent or SDK and enable communication with Elastic APM.
-For example, if you are instrumenting a Java app, define the following environment variables:
-
-```bash
-export OTEL_RESOURCE_ATTRIBUTES=service.name=checkoutService,service.version=1.1,deployment.environment=production
-export OTEL_EXPORTER_OTLP_ENDPOINT=https://apm_server_url:8200
-export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey an_apm_api_key"
-export OTEL_METRICS_EXPORTER="otlp"
-export OTEL_LOGS_EXPORTER="otlp" [^1]
-java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
- -classpath lib/*:classes/ \
- com.mycompany.checkout.CheckoutServiceServer
-```
-[^1]: The OpenTelemetry logs intake via Elastic is currently in technical preview.
-
-
-
-
- `OTEL_RESOURCE_ATTRIBUTES`
- Fields that describe the service and the environment that the service runs in. See resource attributes for more information.
-
-
- `OTEL_EXPORTER_OTLP_ENDPOINT`
- Elastic URL. The host and port that Elastic listens for APM events on.
-
-
- `OTEL_EXPORTER_OTLP_HEADERS`
-
- Authorization header that includes the Elastic APM API key: `"Authorization=ApiKey an_api_key"`.
- Note the required space between `ApiKey` and `an_api_key`.
-
- For information on how to format an API key, refer to Secure communication with APM agents.
-
-
- If you are using a version of the Python OpenTelemetry agent _before_ 1.27.0, the content of the header _must_ be URL-encoded. You can use the Python standard library's `urllib.parse.quote` function to encode the content of the header, as shown in the sketch after this list.
-
-
-
-
- `OTEL_METRICS_EXPORTER`
- Metrics exporter to use. See [exporter selection](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#exporter-selection) for more information.
-
-
- `OTEL_LOGS_EXPORTER`
- Logs exporter to use. See [exporter selection](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#exporter-selection) for more information.
-
-
-
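-For the URL-encoding note above (Python OpenTelemetry agents before 1.27.0), here is a minimal sketch using the standard library. The API key is a placeholder:
-
-```python
-from urllib.parse import quote
-
-api_key = "an_api_key"  # placeholder; substitute your own API key
-
-# Older Python OpenTelemetry agents (before 1.27.0) require the header value to be
-# URL-encoded, so the space after "ApiKey" becomes %20.
-encoded_value = quote(f"ApiKey {api_key}")
-otlp_headers = f"Authorization={encoded_value}"
-print(otlp_headers)  # Authorization=ApiKey%20an_api_key
-```
-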
-You are now ready to collect traces and metrics before verifying metrics
-and visualizing metrics.
-
-## Proxy requests to Elastic
-
-Elastic supports both the [(OTLP/gRPC)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlpgrpc) and [(OTLP/HTTP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlphttp) protocol on the same port as Elastic APM agent requests. For ease of setup, we recommend using OTLP/HTTP when proxying or load balancing requests to Elastic.
-
-If you use the OTLP/gRPC protocol, requests to Elastic must use either HTTP/2 over TLS or HTTP/2 Cleartext (H2C). No matter which protocol is used, OTLP/gRPC requests will have the header: `"Content-Type: application/grpc"`.
-
-When using a layer 7 (L7) proxy like AWS ALB, requests must be proxied in a way that ensures requests to Elastic follow the rules outlined above. For example, with ALB you can create rules to select an alternative backend protocol based on the headers of requests coming into ALB. In this example, you'd select the gRPC protocol when the `"Content-Type: application/grpc"` header exists on a request.
-
-For more information on how to configure an AWS ALB to support gRPC, see this AWS blog post:
-[Application Load Balancer Support for End-to-End HTTP/2 and gRPC](https://aws.amazon.com/blogs/aws/new-application-load-balancer-support-for-end-to-end-http-2-and-grpc/).
-
-For more information on how Elastic services gRPC requests, see
-[Muxing gRPC and HTTP/1.1](https://github.com/elastic/apm-server/blob/main/dev_docs/otel.md#muxing-grpc-and-http11).
-
-## Next steps
-
-* Collect metrics
-* Add Resource attributes
-* Learn about the limitations of this integration
-
diff --git a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-resource-attributes.mdx b/docs/en/serverless/apm-agents/apm-agents-opentelemetry-resource-attributes.mdx
deleted file mode 100644
index 392689a395..0000000000
--- a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-resource-attributes.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
----
-slug: /serverless/observability/apm-agents-opentelemetry-resource-attributes
-title: Resource attributes
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-A resource attribute is a key/value pair containing information about the entity producing telemetry.
-Resource attributes are mapped to Elastic Common Schema (ECS) fields like `service.*`, `cloud.*`, `process.*`, etc.
-These fields describe the service and the environment that the service runs in.
-
-The examples shown here set the Elastic (ECS) `service.environment` field for the resource (that is, the service) that is producing trace events.
-Note that Elastic maps the OpenTelemetry `deployment.environment` field to
-the ECS `service.environment` field on ingestion.
-
-**OpenTelemetry agent**
-
-Use the `OTEL_RESOURCE_ATTRIBUTES` environment variable to pass resource attributes at process invocation.
-
-```bash
-export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=production
-```
-
-**OpenTelemetry collector**
-
-Use the [resource processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourceprocessor) to set or apply changes to resource attributes.
-
-```yaml
-...
-processors:
- resource:
- attributes:
- - key: deployment.environment
- action: insert
- value: production
-...
-```
-
-
-
-Need to add event attributes instead?
-Use attributes—not to be confused with resource attributes—to add data to span, log, or metric events.
-Attributes can be added as a part of the OpenTelemetry instrumentation process or with the [attributes processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/attributesprocessor).
-
-
-
diff --git a/docs/en/serverless/apm-agents/apm-agents-opentelemetry.mdx b/docs/en/serverless/apm-agents/apm-agents-opentelemetry.mdx
deleted file mode 100644
index ed5b4c6670..0000000000
--- a/docs/en/serverless/apm-agents/apm-agents-opentelemetry.mdx
+++ /dev/null
@@ -1,109 +0,0 @@
----
-slug: /serverless/observability/apm-agents-opentelemetry
-title: Use OpenTelemetry with APM
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-
- For a complete overview of using OpenTelemetry with Elastic, explore [Elastic Distributions of OpenTelemetry](https://github.com/elastic/opentelemetry).
-
-
-[OpenTelemetry](https://opentelemetry.io/docs/concepts/what-is-opentelemetry/) is a set of APIs, SDKs, tooling, and integrations that enable the capture and management of telemetry data from your services and applications.
-
-Elastic integrates with OpenTelemetry, allowing you to reuse your existing instrumentation to easily send observability data to the Elastic Stack. There are several ways to integrate OpenTelemetry with the Elastic Stack:
-
-* [Elastic Distributions of OpenTelemetry language SDKs](#elastic-distributions-of-opentelemetry-language-sdks)
-* [Upstream OpenTelemetry API/SDK + Elastic APM agent](#upstream-opentelemetry-apisdk--elastic-apm-agent)
-* [Upstream OpenTelemetry Collector and language SDKs](#upstream-opentelemetry-collector-and-language-sdks)
-* [AWS Lambda collector exporter](#aws-lambda-collector-exporter)
-
-## Elastic Distributions of OpenTelemetry language SDKs
-
-
-
-Elastic offers several distributions of OpenTelemetry language SDKs. A _distribution_ is a customized version of an upstream OpenTelemetry repository. Each Elastic Distribution of OpenTelemetry is a customized version of an [OpenTelemetry language SDK](https://opentelemetry.io/docs/languages/).
-
-
-
-With an Elastic Distribution of OpenTelemetry language SDK you have access to all the features of the OpenTelemetry SDK that it customizes, plus:
-
-* You may get access to SDK improvements and bug fixes contributed by the Elastic team _before_ the changes are available upstream in the OpenTelemetry repositories.
-* The distribution preconfigures the collection of tracing and metrics signals, applying some opinionated defaults, such as which sources are collected by default.
-
-{/* Why you wouldn't choose this method */}
-{/* Just that it's still in tech preview? */}
-
-{/* Where to go next */}
-Get started with an Elastic Distribution of OpenTelemetry language SDK:
-
-* [**Elastic Distribution of OpenTelemetry Java →**](https://github.com/elastic/elastic-otel-java)
-* [**Elastic Distribution of OpenTelemetry .NET →**](https://github.com/elastic/elastic-otel-dotnet)
-* [**Elastic Distribution of OpenTelemetry Node.js →**](https://github.com/elastic/elastic-otel-node)
-* [**Elastic Distribution of OpenTelemetry Python →**](https://github.com/elastic/elastic-otel-python)
-* [**Elastic Distribution of OpenTelemetry PHP →**](https://github.com/elastic/elastic-otel-php)
-
-
- For more details about OpenTelemetry distributions in general, visit the [OpenTelemetry documentation](https://opentelemetry.io/docs/concepts/distributions).
-
-
-## Upstream OpenTelemetry API/SDK + Elastic APM agent
-
-Use the OpenTelemetry API/SDKs with Elastic APM agents to translate OpenTelemetry API calls to Elastic APM API calls.
-
-
-
-{/* Why you _would_ choose this method */}
-This allows you to reuse your existing OpenTelemetry instrumentation to create Elastic APM transactions and spans — avoiding vendor lock-in and having to redo manual instrumentation.
-
-{/* Why you would _not_ choose this method */}
-However, not all features of the OpenTelemetry API are supported when using this approach, and not all Elastic APM agents support this approach.
-
-{/* Where to go next */}
-Find more details about how to use an OpenTelemetry API or SDK with an Elastic APM agent and which OpenTelemetry API features are supported in the APM agent documentation:
-
-* [**APM Java agent →**](https://www.elastic.co/guide/en/apm/agent/java/current/opentelemetry-bridge.html)
-* [**APM .NET agent →**](https://www.elastic.co/guide/en/apm/agent/dotnet/current/opentelemetry-bridge.html)
-* [**APM Node.js agent →**](https://www.elastic.co/guide/en/apm/agent/nodejs/current/opentelemetry-bridge.html)
-* [**APM Python agent →**](https://www.elastic.co/guide/en/apm/agent/python/current/opentelemetry-bridge.html)
-
-## Upstream OpenTelemetry Collector and language SDKs
-
-The Elastic Stack natively supports the OpenTelemetry protocol (OTLP). This means trace data and metrics collected from your applications and infrastructure by an OpenTelemetry Collector or OpenTelemetry language SDK can be sent to the Elastic Stack.
-
-You can set up an [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/), instrument your application with an [OpenTelemetry language SDK](https://opentelemetry.io/docs/languages/) that sends data to the collector, and use the collector to process and export the data to APM Server.
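-
-For illustration, here is a minimal collector configuration sketch that receives OTLP data and forwards it to Elastic. The endpoint and API key are placeholders for your project's values:
-
-```yaml
-receivers:
-  otlp:
-    protocols:
-      grpc:
-      http:
-
-processors:
-  batch:
-
-exporters:
-  otlp/elastic:
-    # Placeholder values: use the APM endpoint and API key for your project
-    endpoint: "https://<your-apm-endpoint>:443"
-    headers:
-      Authorization: "ApiKey <your-api-key>"
-
-service:
-  pipelines:
-    traces:
-      receivers: [otlp]
-      processors: [batch]
-      exporters: [otlp/elastic]
-    metrics:
-      receivers: [otlp]
-      processors: [batch]
-      exporters: [otlp/elastic]
-```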
-
-
-
-
- It's also possible to send data directly to APM Server from an upstream OpenTelemetry SDK. You might do this during development or if you're monitoring a small-scale application. Read more about when to use a collector in the [OpenTelemetry documentation](https://opentelemetry.io/docs/collector/#when-to-use-a-collector).
-
-
-{/* Why you _would_ choose this approach */}
-This approach works well when you need to instrument a technology that Elastic doesn't provide a solution for. For example, if you want to instrument C or C((plus))((plus)) you could use the [OpenTelemetry C((plus))((plus)) client](https://github.com/open-telemetry/opentelemetry-cpp).
-{/* Other languages include erlang, lua, perl. */}
-
-{/* Why you would _not_ choose this approach */}
-However, there are some limitations when using collectors and language SDKs built and maintained by OpenTelemetry, including:
-
-* Elastic can't provide implementation support on how to use upstream OpenTelemetry tools.
-* You won't have access to Elastic enterprise APM features.
-* You may experience problems with performance efficiency.
-
-For more on the limitations associated with using upstream OpenTelemetry tools, refer to .
-
-{/* Where to go next */}
-**Get started with upstream OpenTelemetry Collectors and language SDKs →**
-
-## AWS Lambda collector exporter
-
-AWS Lambda functions can be instrumented with OpenTelemetry and monitored with Elastic Observability.
-
-{/* Do we want to say anything about why you would/wouldn't choose this method to send data to Elastic? */}
-
-{/* Where to go next */}
-To get started, follow the official AWS Distro for OpenTelemetry Lambda documentation, and configure the OpenTelemetry Collector to output traces and metrics to your Elastic cluster:
-
-**Get started with the AWS Distro for OpenTelemetry Lambda**
diff --git a/docs/en/serverless/apm/apm-compress-spans.mdx b/docs/en/serverless/apm/apm-compress-spans.mdx
deleted file mode 100644
index 5e02366cc8..0000000000
--- a/docs/en/serverless/apm/apm-compress-spans.mdx
+++ /dev/null
@@ -1,71 +0,0 @@
----
-slug: /serverless/observability/apm-compress-spans
-title: Compress spans
-description: Compress similar or identical spans to reduce storage overhead, processing power needed, and clutter in the Applications UI.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-In some cases, APM agents may collect large numbers of very similar or identical spans in a transaction.
-For example, this can happen if spans are captured inside a loop, or when unoptimized SQL issues many small
-queries instead of a single join to fetch related data.
-
-In such cases, the upper limit of spans per transaction (by default, 500 spans) can be reached quickly, causing the agent to stop capturing potentially more relevant spans for a given transaction.
-
-Capturing similar or identical spans often isn't helpful, especially if they are of very short duration.
-They can also clutter the UI, and cause processing and storage overhead.
-
-To address this problem, APM agents can compress similar spans into a single span.
-The compressed span retains most of the original span information, including the overall duration and number of spans it represents.
-
-Regardless of the compression strategy, a span is eligible for compression if:
-
-- It has not propagated its trace context.
-- It is an _exit_ span (such as database query spans).
-- Its outcome is not `"failure"`.
-
-## Compression strategies
-
-The ((apm-agent)) selects between two strategies to decide if adjacent spans can be compressed.
-In both strategies, only one previous span needs to be kept in memory.
-This ensures that the agent doesn't require large amounts of memory to enable span compression.
-
-### Same-Kind strategy
-
-The agent uses the same-kind strategy if two adjacent spans have the same:
-
- * span type
- * span subtype
- * `destination.service.resource` (e.g. database name)
-
-### Exact-Match strategy
-
-The agent uses the exact-match strategy if two adjacent spans have the same:
-
- * span name
- * span type
- * span subtype
- * `destination.service.resource` (e.g. database name)
-
-## Settings
-
-You can specify the maximum span duration in the agent's configuration settings.
-Spans with a duration longer than the specified value will not be compressed.
-
-For the "Same-Kind" strategy, the default maximum span duration is 0 milliseconds, which means that
-the "Same-Kind" strategy is disabled by default.
-For the "Exact-Match" strategy, the default limit is 50 milliseconds.
-
-### Agent support
-
-Support for span compression is available in the following agents and can be configured
-using the options listed below:
-
-| Agent | Same-kind config | Exact-match config |
-|---|---|---|
-| **Go agent** | [`ELASTIC_APM_SPAN_COMPRESSION_SAME_KIND_MAX_DURATION`](((apm-go-ref-v))/configuration.html#config-span-compression-same-kind-duration) | [`ELASTIC_APM_SPAN_COMPRESSION_EXACT_MATCH_MAX_DURATION`](((apm-go-ref-v))/configuration.html#config-span-compression-exact-match-duration) |
-| **Java agent** | [`span_compression_same_kind_max_duration`](((apm-java-ref-v))/config-huge-traces.html#config-span-compression-same-kind-max-duration) | [`span_compression_exact_match_max_duration`](((apm-java-ref-v))/config-huge-traces.html#config-span-compression-exact-match-max-duration) |
-| **.NET agent** | [`SpanCompressionSameKindMaxDuration`](((apm-dotnet-ref-v))/config-core.html#config-span-compression-same-kind-max-duration) | [`SpanCompressionExactMatchMaxDuration`](((apm-dotnet-ref-v))/config-core.html#config-span-compression-exact-match-max-duration) |
-| **Node.js agent** | [`spanCompressionSameKindMaxDuration`](((apm-node-ref-v))/configuration.html#span-compression-same-kind-max-duration) | [`spanCompressionExactMatchMaxDuration`](((apm-node-ref-v))/configuration.html#span-compression-exact-match-max-duration) |
-| **Python agent** | [`span_compression_same_kind_max_duration`](((apm-py-ref-v))/configuration.html#config-span-compression-same-kind-max-duration) | [`span_compression_exact_match_max_duration`](((apm-py-ref-v))/configuration.html#config-span-compression-exact-match-max_duration) |
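-
-For example, here is a sketch using the Go agent's environment variables from the table above; the duration values are illustrative only:
-
-```bash
-# Compress adjacent same-kind spans shorter than 10ms and
-# exact-match spans shorter than 100ms
-export ELASTIC_APM_SPAN_COMPRESSION_SAME_KIND_MAX_DURATION=10ms
-export ELASTIC_APM_SPAN_COMPRESSION_EXACT_MATCH_MAX_DURATION=100ms
-```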
diff --git a/docs/en/serverless/apm/apm-create-custom-links.mdx b/docs/en/serverless/apm/apm-create-custom-links.mdx
deleted file mode 100644
index a500ba94e0..0000000000
--- a/docs/en/serverless/apm/apm-create-custom-links.mdx
+++ /dev/null
@@ -1,204 +0,0 @@
----
-slug: /serverless/observability/apm-create-custom-links
-title: Create custom links
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-Elastic's custom link feature allows you to easily create up to 500 dynamic links
-based on your specific APM data.
-Custom links can be filtered to only appear for relevant services,
-environments, transaction types, or transaction names.
-
-Ready to dive in? Jump straight to the examples.
-
-## Create a link
-
-Each custom link consists of a label, URL, and optional filter.
-The easiest way to create a custom link is from within the actions dropdown in the transaction detail page.
-This method will automatically apply filters, scoping the link to that specific service,
-environment, transaction type, and transaction name.
-
-Alternatively, you can create a custom link by navigating to any page within **Applications** and selecting **Settings** → **Custom Links** → **Create custom link**.
-
-### Label
-
-The name of your custom link.
-The actions context menu displays this text, so keep it as short as possible.
-
-
-Custom links are displayed alphabetically in the actions menu.
-
-
-### URL
-
-The URL your link points to.
-URLs support dynamic field name variables, encapsulated in double curly brackets: `{{field.name}}`.
-These variables will be replaced with transaction metadata when the link is clicked.
-
-Because everyone's data is different,
-you'll need to examine your traces to see what metadata is available for use.
-To do this, select a trace and click **Metadata** in the **Trace Sample** table.
-
-
-
-### Filters
-
-Filter each link to only appear for specific services or transactions.
-You can filter on the following fields:
-
-* `service.name`
-* `service.environment`
-* `transaction.type`
-* `transaction.name`
-
-Multiple values are allowed when comma-separated.
-
-## Custom link examples
-
-Not sure where to start with custom links?
-Take a look at the examples below and customize them to your liking!
-
-### Email
-
-Email the owner of a service.
-
-{/* TODO: If we change these to Docsmobile tables they might look better */}
-
-| | |
-|---|---|
-| Label | `Email engineer` |
-| Link | `mailto:@.com` |
-| Filters | `service.name:` |
-
-**Example**
-
-This link opens an email addressed to the team or owner of `python-backend`.
-It will only appear on services with the name `python-backend`.
-
-| | |
-|---|---|
-| Label | `Email python-backend engineers` |
-| Link | `mailto:python_team@elastic.co` |
-| Filters | `service.name:python-backend` |
-
-### GitHub issue
-
-Open a GitHub issue with prepopulated metadata from the selected trace sample.
-
-| | |
-|---|---|
-| Label | `Open an issue in ` |
-| Link | `https://github.com///issues/new?title=&body=` |
-| Filters | `service.name:client` |
-
-**Example**
-
-This link opens a new GitHub issue in the `apm-agent-rum-js` repository.
-It populates the issue body with relevant metadata from the currently active trace.
-Clicking this link results in the following issue being created:
-
-
-
-| | |
-|---|---|
-| Label | `Open an issue in apm-rum-js` |
-| Link | `https://github.com/elastic/apm-agent-rum-js/issues/new?title=Investigate+APM+trace&body=Investigate+the+following+APM+trace%3A%0D%0A%0D%0Aservice.name%3A+{{service.name}}%0D%0Atransaction.id%3A+{{transaction.id}}%0D%0Acontainer.id%3A+{{container.id}}%0D%0Aurl.full%3A+{{url.full}}` |
-| Filters | `service.name:client` |
-
-See the [GitHub automation documentation](https://help.github.com/en/github/managing-your-work-on-github/about-automation-for-issues-and-pull-requests-with-query-parameters) for a full list of supported query parameters.
-
-
-
-### Jira task
-
-Create a Jira task with prepopulated metadata from the selected trace sample.
-
-| | |
-|---|---|
-| Label | `Open an issue in Jira` |
-| Link | `https:///secure/CreateIssueDetails!init.jspa?` |
-
-**Example**
-
-This link creates a new task on the Engineering board in Jira.
-It populates the issue body with relevant metadata from the currently active trace.
-Clicking this link results in the following task being created in Jira:
-
-
-
-| | |
-|---|---|
-| Label | `Open a task in Jira` |
-| Link | `https://test-site-33.atlassian.net/secure/CreateIssueDetails!init.jspa?pid=10000&issuetype=10001&summary=Created+via+APM&description=Investigate+the+following+APM+trace%3A%0D%0A%0D%0Aservice.name%3A+{{service.name}}%0D%0Atransaction.id%3A+{{transaction.id}}%0D%0Acontainer.id%3A+{{container.id}}%0D%0Aurl.full%3A+{{url.full}}` |
-
-See the [Jira application administration knowledge base](https://confluence.atlassian.com/jirakb/how-to-create-issues-using-direct-html-links-in-jira-server-159474.html)
-for a full list of supported query parameters.
-
-### Dashboards
-
-Link to a custom dashboard.
-
-| | |
-|---|---|
-| Label | `Open transaction in custom visualization` |
-| Link | `https://kibana-instance/app/kibana#/dashboard?_g=query:(language:kuery,query:'transaction.id:{{transaction.id}}'...` |
-
-**Example**
-
-This link opens the current `transaction.id` in a custom dashboard.
-There are no filters set.
-
-| | |
-|---|---|
-| Label | `Open transaction in Python drilldown viz` |
-| URL | `https://kibana-instance/app/kibana#/dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-24h,to:now))&_a=(description:'',filters:!(),fullScreenMode:!f,options:(hidePanelTitles:!f,useMargins:!t),panels:!((embeddableConfig:(),gridData:(h:15,i:cb79c1c0-1af8-472c-aaf7-d158a76946fb,w:24,x:0,y:0),id:c8c74b20-6a30-11ea-92ab-b5d3feff11df,panelIndex:cb79c1c0-1af8-472c-aaf7-d158a76946fb,type:visualization,version:'7.7')),query:(language:kuery,query:'transaction.id:{{transaction.id}}'),timeRestore:!f,title:'',viewMode:edit)` |
-
-### Slack channel
-
-Open a specified Slack channel.
-
-| | |
-|---|---|
-| Label | `Open SLACK_CHANNEL` |
-| Link | `https://COMPANY_SLACK.slack.com/archives/SLACK_CHANNEL` |
-| Filters | `service.name` : `SERVICE_NAME` |
-
-**Example**
-
-This link opens a company Slack channel, #apm-user-support.
-It only appears when `transaction.name` is `GET user/login`.
-
-| | |
-|---|---|
-| Label | `Open #apm-user-support` |
-| Link | `https://COMPANY_SLACK.slack.com/archives/efk52kt23k` |
-| Filters | `transaction.name:GET user/login` |
-
-### Website
-
-Open an internal or external website.
-
-| | |
-|---|---|
-| Label | `Open ` |
-| Link | `https://.slack.com/archives/` |
-| Filters | `service.name:` |
-
-**Example**
-
-This link opens more data on a specific `user.email`.
-It only appears on front-end transactions.
-
-| | |
-|---|---|
-| Label | `View user internally` |
-| Link | `https://internal-site.company.com/user/{{user.email}}` |
-| Filters | `service.name:client` |
-
diff --git a/docs/en/serverless/apm/apm-data-types.mdx b/docs/en/serverless/apm/apm-data-types.mdx
deleted file mode 100644
index 0082fd1bc6..0000000000
--- a/docs/en/serverless/apm/apm-data-types.mdx
+++ /dev/null
@@ -1,21 +0,0 @@
----
-slug: /serverless/observability/apm-data-types
-title: APM data types
-description: Learn about the various APM data types.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-Elastic APM agents capture different types of information from within their instrumented applications.
-These are known as events, and can be spans, transactions, errors, or metrics:
-
-* **Spans** contain information about the execution of a specific code path.
-They measure from the start to the end of an activity, and they can have a parent/child
-relationship with other spans.
-* **Transactions** are a special kind of _span_ that have additional attributes associated with them.
-They describe an event captured by an Elastic ((apm-agent)) instrumenting a service.
-You can think of transactions as the highest level of work you’re measuring within a service.
-* **Errors** contain at least information about the original `exception` that occurred or about
-a `log` created when the exception occurred. For simplicity, errors are represented by a unique ID.
-* **Metrics** measure the state of a system by gathering information on a regular interval.
diff --git a/docs/en/serverless/apm/apm-distributed-tracing.mdx b/docs/en/serverless/apm/apm-distributed-tracing.mdx
deleted file mode 100644
index 6b72c89c2e..0000000000
--- a/docs/en/serverless/apm/apm-distributed-tracing.mdx
+++ /dev/null
@@ -1,106 +0,0 @@
----
-slug: /serverless/observability/apm-distributed-tracing
-title: Distributed tracing
-description: Understand how a single request that travels through multiple services impacts your application.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import TabWidgetsDistributedTraceSendWidget from '../transclusion/apm/guide/tab-widgets/distributed-trace-send-widget.mdx'
-import TabWidgetsDistributedTraceReceiveWidget from '../transclusion/apm/guide/tab-widgets/distributed-trace-receive-widget.mdx'
-
-A `trace` is a group of transactions and spans with a common root.
-Each `trace` tracks the entirety of a single request.
-When a `trace` travels through multiple services, as is common in a microservice architecture,
-it is known as a distributed trace.
-
-## Why is distributed tracing important?
-
-Distributed tracing enables you to analyze performance throughout your microservice architecture
-by tracing the entirety of a request — from the initial web request on your front-end service
-all the way to database queries made on your back-end services.
-
-Tracking requests as they propagate through your services provides an end-to-end picture of
-where your application is spending time, where errors are occurring, and where bottlenecks are forming.
-Distributed tracing eliminates individual services' data silos and reveals what's happening outside of
-service borders.
-
-For supported technologies, distributed tracing works out-of-the-box, with no additional configuration required.
-
-## How distributed tracing works
-
-Distributed tracing works by injecting a custom `traceparent` HTTP header into outgoing requests.
-This header includes information such as the `trace-id`, which identifies the current trace,
-and the `parent-id`, which identifies the parent of the current span on incoming requests
-(or the current span on an outgoing request).
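-
-For example, a `traceparent` header following the W3C format looks like this (values taken from the W3C Trace Context specification's example):
-
-```txt
-traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
-```
-
-The four dash-separated sections are the version, the trace ID, the parent (span) ID, and the trace flags.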
-
-When a service is working on a request, it checks for the existence of this HTTP header.
-If it's missing, the service starts a new trace.
-If it exists, the service ensures the current action is added as a child of the existing trace,
-and continues to propagate the trace.
-
-### Trace propagation examples
-
-In this example, Elastic's Ruby agent communicates with Elastic's Java agent.
-Both support the `traceparent` header, and trace data is successfully propagated.
-
-
-
-In this example, Elastic's Ruby agent communicates with OpenTelemetry's Java agent.
-Both support the `traceparent` header, and trace data is successfully propagated.
-
-
-
-In this example, the trace meets a piece of middleware that doesn't propagate the `traceparent` header.
-The distributed trace ends and any further communication will result in a new trace.
-
-
-
-### W3C Trace Context specification
-
-All Elastic agents now support the official W3C Trace Context specification and `traceparent` header.
-See the table below for the minimum required agent version:
-
-| Agent name | Agent Version |
-|---|---|
-| **Go Agent** | ≥`1.6` |
-| **Java Agent** | ≥`1.14` |
-| **.NET Agent** | ≥`1.3` |
-| **Node.js Agent** | ≥`3.4` |
-| **PHP Agent** | ≥`1.0` |
-| **Python Agent** | ≥`5.4` |
-| **Ruby Agent** | ≥`3.5` |
-
-
-Older Elastic agents use a unique `elastic-apm-traceparent` header.
-For backward-compatibility purposes, new versions of Elastic agents still support this header.
-
-
-## Visualize distributed tracing
-
-APM's timeline visualization provides a visual deep-dive into each of your application's traces:
-
-
-
-## Manual distributed tracing
-
-Elastic agents automatically propagate distributed tracing context for supported technologies.
-If your service communicates over a different, unsupported protocol,
-you can manually propagate distributed tracing context from a sending service to a receiving service
-with each agent's API.
-
-### Add the `traceparent` header to outgoing requests
-
-Sending services must add the `traceparent` header to outgoing requests.
-
-
-
-
-
-### Parse the `traceparent` header on incoming requests
-
-Receiving services must parse the incoming `traceparent` header,
-and start a new transaction or span as a child of the received context.
-
-
diff --git a/docs/en/serverless/apm/apm-filter-your-data.mdx b/docs/en/serverless/apm/apm-filter-your-data.mdx
deleted file mode 100644
index ab25033721..0000000000
--- a/docs/en/serverless/apm/apm-filter-your-data.mdx
+++ /dev/null
@@ -1,46 +0,0 @@
----
-slug: /serverless/observability/apm-filter-your-data
-title: Filter your data
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-Global filters let you filter your APM data by a specific
-time range or environment. When viewing a specific service, the filter persists
-as you move between tabs.
-
-
-
-
-
-If you prefer to use advanced queries on your data to filter on specific pieces
-of information, see Query your data.
-
-
-
-## Global time range
-
-The global time range filter restricts APM data to a specific time period.
-
-## Service environment filter
-
-The environment selector is a global filter for `service.environment`.
-It allows you to view only relevant data and is especially useful for separating development from production environments.
-By default, all environments are displayed. If there are no environment options, you'll see "not defined".
-
-Service environments are defined when configuring your APM agents.
-It's vital to be consistent when naming environments in your APM agents.
-To learn how to configure service environments, see the specific APM agent documentation:
-
-* **Go:** [`ELASTIC_APM_ENVIRONMENT`](((apm-go-ref))/configuration.html#config-environment)
-* **Java:** [`environment`](((apm-java-ref))/config-core.html#config-environment)
-* **.NET:** [`Environment`](((apm-dotnet-ref))/config-core.html#config-environment)
-* **Node.js:** [`environment`](((apm-node-ref))/configuration.html#environment)
-* **PHP:** [`environment`](((apm-php-ref))/configuration-reference.html#config-environment)
-* **Python:** [`environment`](((apm-py-ref))/configuration.html#config-environment)
-* **Ruby:** [`environment`](((apm-ruby-ref))/configuration.html#config-environment)
-{/* * **iOS agent:** _Not yet supported_ */}
-{/* * **Real User Monitoring:** [`environment`](((apm-rum-ref))/configuration.html#environment) */}
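-
-For example, with the Go agent you can set the environment through the `ELASTIC_APM_ENVIRONMENT` variable listed above (a minimal sketch):
-
-```bash
-# Tag all data from this service with the "production" environment
-export ELASTIC_APM_ENVIRONMENT=production
-```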
-
diff --git a/docs/en/serverless/apm/apm-find-transaction-latency-and-failure-correlations.mdx b/docs/en/serverless/apm/apm-find-transaction-latency-and-failure-correlations.mdx
deleted file mode 100644
index d84b32ad51..0000000000
--- a/docs/en/serverless/apm/apm-find-transaction-latency-and-failure-correlations.mdx
+++ /dev/null
@@ -1,98 +0,0 @@
----
-slug: /serverless/observability/apm-find-transaction-latency-and-failure-correlations
-title: Find transaction latency and failure correlations
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-Correlations surface attributes of your data that are potentially correlated
-with high-latency or erroneous transactions. For example, if you are a site
-reliability engineer who is responsible for keeping production systems up and
-running, you want to understand what is causing slow transactions. Identifying
-attributes that are responsible for higher latency transactions can potentially
-point you toward the root cause. You may find a correlation with a particular
-piece of hardware, like a host or pod. Or, perhaps a set of users, based on IP
-address or region, is facing increased latency due to local data center issues.
-
-To find correlations:
-
-1. In your ((observability)) project, go to **Applications** → **Services**.
-1. Select a service.
-1. Select the **Transactions** tab.
-1. Select a transaction group in the **Transactions** table.
-
-
-Active queries _are_ applied to correlations.
-
-
-## Find high transaction latency correlations
-
-The correlations on the **Latency correlations** tab help you discover which
-attributes are contributing to increased transaction latency.
-
-
-
-The progress bar indicates the status of the asynchronous analysis, which
-performs statistical searches across a large number of attributes. For large
-time ranges and services with high transaction throughput, this might take some
-time. To improve performance, reduce the time range.
-
-The latency distribution chart visualizes the overall latency of the
-transactions in the transaction group. If there are attributes that have a
-statistically significant correlation with slow response times, they are listed
-in a table below the chart. The table is sorted by correlation coefficients that
-range from 0 to 1. Attributes with higher correlation values are more likely to
-contribute to high latency transactions. By default, the attribute with the
-highest correlation value is added to the chart. To see the latency distribution
-for other attributes, select their row in the table.
-
-If a correlated attribute seems noteworthy, use the **Filter** quick links:
-
-* `+` creates a new query in the Applications UI for filtering transactions containing
- the selected value.
-
-* `-` creates a new query in the Applications UI to filter out transactions containing
- the selected value.
-
-You can also click the icon beside the field name to view and filter its most
-popular values.
-
-In this example screenshot, there are transactions that are skewed to the right
-with slower response times than the overall latency distribution. If you select
-the `+` filter in the appropriate row of the table, it creates a new query in
-the Applications UI for transactions with this attribute. With the "noise" now
-filtered out, you can begin viewing sample traces to continue your investigation.
-
-
-
-## Find failed transaction correlations
-
-The correlations on the **Failed transaction correlations** tab help you discover
-which attributes are most influential in distinguishing between transaction
-failures and successes. In this context, the success or failure of a transaction
-is determined by its [event.outcome](((ecs-ref))/ecs-event.html#field-event-outcome)
-value. For example, APM agents set the `event.outcome` to `failure` when an HTTP
-transaction returns a `5xx` status code.
-
-The chart highlights the failed transactions in the overall latency distribution
-for the transaction group. If there are attributes that have a statistically
-significant correlation with failed transactions, they are listed in a table.
-The table is sorted by scores, which are mapped to high, medium, or low impact
-levels. Attributes with high impact levels are more likely to contribute to
-failed transactions. By default, the attribute with the highest score is added
-to the chart. To see a different attribute in the chart, select its row in the
-table.
-
-For example, in the screenshot below, there are attributes such as a specific
-node and pod name that have medium impact on the failed transactions.
-
-
-
-Select the `+` filter to create a new query in the Applications UI for transactions
-with one or more of these attributes. If you are unfamiliar with a field, click
-the icon beside its name to view its most popular values and optionally filter
-on those values too. Each time you add another attribute, you filter out
-more noise and move closer to a diagnosis.
-
diff --git a/docs/en/serverless/apm/apm-get-started.mdx b/docs/en/serverless/apm/apm-get-started.mdx
deleted file mode 100644
index 682c566206..0000000000
--- a/docs/en/serverless/apm/apm-get-started.mdx
+++ /dev/null
@@ -1,137 +0,0 @@
----
-slug: /serverless/observability/apm-get-started
-title: Get started with traces and APM
-description: Learn how to collect Application Performance Monitoring (APM) data and visualize it in real time.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-
-
-import Go from '../transclusion/apm/guide/install-agents/go.mdx'
-import Java from '../transclusion/apm/guide/install-agents/java.mdx'
-import Net from '../transclusion/apm/guide/install-agents/net.mdx'
-import Node from '../transclusion/apm/guide/install-agents/node.mdx'
-import Php from '../transclusion/apm/guide/install-agents/php.mdx'
-import Python from '../transclusion/apm/guide/install-agents/python.mdx'
-import Ruby from '../transclusion/apm/guide/install-agents/ruby.mdx'
-import OpenTelemetry from '../transclusion/apm/guide/open-telemetry/otel-get-started.mdx'
-
-In this guide you'll learn how to collect and send Application Performance Monitoring (APM) data
-to Elastic, then explore and visualize the data in real time.
-
-
-
-## Step 1: Add data
-
-You'll use APM agents to send APM data from your application to Elastic. Elastic offers APM agents
-written in several languages and supports OpenTelemetry. Which agent you'll use depends on the language used in your service.
-
-To send APM data to Elastic, you must install an APM agent and configure it to send data to
-your project:
-
-1. Create a new ((observability)) project, or open an existing one.
-1. To install and configure one or more APM agents, do one of the following:
- * In your Observability project, go to **Add data** → **Monitor my application performance** → **Elastic APM** and follow the prompts.
- * Use the following instructions:
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- While there are many configuration options, all APM agents require:
-
-
-
- **Service name**
-
- The APM integration maps an instrumented service's name — defined in
- each ((apm-agent))'s configuration — to the index where its data is stored.
- Service names are case-insensitive and must be unique.
-
- For example, you cannot have a service named `Foo` and another named `foo`.
- Special characters will be removed from service names and replaced with underscores (`_`).
-
-
-
- **Server URL**
-
- The host and port that the managed intake service listens for events on.
-
- To find the URL for your project:
-
- 1. Go to the [Cloud console](https://cloud.elastic.co/).
- 1. Next to your project, select **Manage**.
- 1. Next to _Endpoints_, select **View**.
- 1. Copy the _APM endpoint_.
-
-
-
- **API key**
-
- Authentication method for communication between ((apm-agent)) and the managed intake service.
-
- You can create and delete API keys in Applications Settings:
- 1. Go to any page in the _Applications_ section of the main menu.
- 1. Click **Settings** in the top bar.
- 1. Go to the **Agent keys** tab.
-
-
-
- **Environment**
-
- The name of the environment this service is deployed in, for example "production" or "staging".
-
- Environments allow you to easily filter data on a global level in the UI.
- It's important to be consistent when naming environments across agents.
-
-
-
-
-1. If you're using the step-by-step instructions in the UI, after you've installed and configured an agent,
-you can click **Check Agent Status** to verify that the agent is sending data.
-
-To learn more about APM agents, including how to fine-tune how agents send traces to Elastic,
-refer to .
-
-
-
-## Step 2: View your data
-
-After one or more APM agents are installed and successfully sending data, you can view
-application performance monitoring data in the UI.
-
-In the _Applications_ section of the main menu, select **Services**.
-This will show a high-level overview of the health and general performance of all your services.
-
-Learn more about visualizing APM data in .
-
-{/* TO DO: ADD SCREENSHOT */}
-
-
-Not seeing any data? Find helpful tips in Troubleshooting.
-
-
-## Next steps
-
-Now that data is streaming into your project, take your investigation to a
-deeper level. Learn how to use Elastic's built-in visualizations for APM data,
-alert on APM data,
-or fine-tune how agents send traces to Elastic.
diff --git a/docs/en/serverless/apm/apm-integrate-with-machine-learning.mdx b/docs/en/serverless/apm/apm-integrate-with-machine-learning.mdx
deleted file mode 100644
index 1027edc86d..0000000000
--- a/docs/en/serverless/apm/apm-integrate-with-machine-learning.mdx
+++ /dev/null
@@ -1,69 +0,0 @@
----
-slug: /serverless/observability/apm-integrate-with-machine-learning
-title: Integrate with machine learning
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-The Machine learning integration initiates a new job predefined to calculate anomaly scores on APM transaction durations.
-With this integration, you can quickly pinpoint anomalous transactions and see the health of
-any upstream and downstream services.
-
-Machine learning jobs are created per environment and are based on a service's average response time.
-Because jobs are created at the environment level,
-you can add new services to your existing environments without the need for additional machine learning jobs.
-
-Results from machine learning jobs are shown in multiple places throughout the Applications UI:
-
-* The **Services overview** provides a quick-glance view of the general health of all of your services.
-
- {/* TODO: Take this screenshot (no data in oblt now)
-  */}
-
-* The transaction duration chart will show the expected bounds and add an annotation when the anomaly score is 75 or above.
-
- {/* TODO: Take this screenshot (no data in oblt now)
-  */}
-
-* Service Maps will display a color-coded anomaly indicator based on the detected anomaly score.
-
- 
-
-## Enable anomaly detection
-
-To enable machine learning anomaly detection:
-
-1. In your ((observability)) project, go to any **Applications** page.
-
-1. Click **Anomaly detection**.
-
-1. Click **Create Job**.
-
-1. Machine learning jobs are created at the environment level.
- Select all of the service environments that you want to enable anomaly detection in.
- Anomalies will surface for all services and transaction types within the selected environments.
-
-1. Click **Create Jobs**.
-
-That's it! After a few minutes, the job will begin calculating results;
-it might take additional time for results to appear on your service maps.
-To manage existing jobs, click **Manage jobs** (or go to **AIOps** → **Anomaly detection**).
-
-## Anomaly detection warning
-
-To make machine learning as easy as possible to set up,
-Elastic will warn you when you filter to an environment without a machine learning job.
-
-{/* TODO: Take this screenshot (no data in oblt now)
- */}
-
-## Unknown service health
-
-After enabling anomaly detection, service health may display as "Unknown". Here are some reasons why this can occur:
-
-1. No machine learning job exists. See Enable anomaly detection to enable anomaly detection and create a machine learning job.
-1. There is no machine learning data for the job. If you just created the machine learning job you'll need to wait a few minutes for data to be available. Alternatively, if the service or its environment are new, you'll need to wait for more trace data.
-1. No "request" or "page-load" transaction type exists for this service; service health is only available for these transaction types.
-
diff --git a/docs/en/serverless/apm/apm-keep-data-secure.mdx b/docs/en/serverless/apm/apm-keep-data-secure.mdx
deleted file mode 100644
index a1f6f2b48d..0000000000
--- a/docs/en/serverless/apm/apm-keep-data-secure.mdx
+++ /dev/null
@@ -1,79 +0,0 @@
----
-slug: /serverless/observability/apm-keep-data-secure
-title: Keep APM data secure
-description: Make sure APM data is sent to Elastic securely and sensitive data is protected.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-{/* TODO: Find out whether Editor or Admin is required to create and manage API keys. */}
-
-When setting up Elastic APM, it's essential to ensure that the data collected by
-APM agents is sent to Elastic securely and that sensitive data is protected.
-
-## Secure communication with APM agents
-
-Communication between APM agents and the managed intake service is both encrypted and authenticated.
-Requests without a valid API key will be denied.
-
-### Create a new API key
-
-To create a new API key:
-
-1. In your ((observability)) project, go to any **Applications** page.
-1. Click **Settings**.
-1. Select the **APM agent keys** tab.
-1. Click **Create APM agent key**.
-1. Name the key and assign privileges to it.
-1. Click **Create APM agent key**.
-1. Copy the key now. You will not be able to see it again. API keys do not expire.
-
-### Delete an API key
-
-To delete an API key:
-
-1. From any of the **Application** pages, click **Settings**.
-1. Select the **APM agent keys** tab.
-1. Search for the API key you want to delete.
-1. Click the trash can icon to delete the selected API key.
-
-### View existing API keys
-
-To view all API keys for your project:
-
-1. Expand **Project settings**.
-1. Select **Management**.
-1. Select **API keys**.
-
-## Data security
-
-When setting up Elastic APM, it's essential to review all captured data carefully to ensure it doesn't contain sensitive information like passwords, credit card numbers, or health data.
-
-Some APM agents offer a way to manipulate or drop APM events _before_ they leave your services.
-Refer to the relevant agent's documentation for more information and examples:
-
-### Java
-
-**`include_process_args`**: Remove process arguments from transactions. This option is disabled by default. Read more in the [Java agent configuration docs](((apm-java-ref-v))/config-reporter.html#config-include-process-args).
-
-### .NET
-
-**Filter API**: Drop APM events _before_ they are sent to Elastic. Read more in the [.NET agent Filter API docs](((apm-dotnet-ref-v))/public-api.html#filter-api).
-
-### Node.js
-
-* **`addFilter()`**: Drop APM events _before_ they are sent to Elastic. Read more in the [Node.js agent API docs](((apm-node-ref-v))/agent-api.html#apm-add-filter).
-* **`captureExceptions`**: Remove errors raised by the server-side process by disabling the `captureExceptions` configuration option. Read more in [the Node.js agent configuration docs](((apm-node-ref-v))/configuration.html#capture-exceptions).
-
-### Python
-
-**Custom processors**: Drop APM events _before_ they are sent to Elastic. Read more in the [Python agent Custom processors docs](((apm-py-ref-v))/sanitizing-data.html).
-
-### Ruby
-
-**`add_filter()`**: Drop APM events _before_ they are sent to Elastic. Read more in the [Ruby agent API docs](((apm-ruby-ref-v))/api.html#api-agent-add-filter).
diff --git a/docs/en/serverless/apm/apm-kibana-settings.mdx b/docs/en/serverless/apm/apm-kibana-settings.mdx
deleted file mode 100644
index 5d18522adb..0000000000
--- a/docs/en/serverless/apm/apm-kibana-settings.mdx
+++ /dev/null
@@ -1,91 +0,0 @@
----
-slug: /serverless/observability/apm-kibana-settings
-title: Settings
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-You can adjust Application settings to fine-tune your experience in the Applications UI.
-
-## General settings
-
-To change APM settings, select **Settings** from any **Applications** page.
-The following settings are available.
-
-`observability:apmAgentExplorerView`
-
-: Enables the Agent explorer view.
-
-`observability:apmAWSLambdaPriceFactor`
-
-: Set the price per Gb-second for your AWS Lambda functions.
-
-`observability:apmAWSLambdaRequestCostPerMillion`
-
-: Set the AWS Lambda cost per million requests.
-
-`observability:apmEnableContinuousRollups`
-
-: When continuous rollups are enabled, the UI will select metrics with the appropriate resolution.
-On larger time ranges, lower resolution metrics will be used, which will improve loading times.
-
-`observability:apmEnableServiceMetrics`
-
-: Enables the usage of service transaction metrics, which are low cardinality metrics that can be used by certain views like the service inventory for faster loading times.
-
-`observability:apmLabsButton`
-
-: Enable or disable the APM Labs button — a quick way to enable and disable technical preview features in APM.
-
-{/* [[observability-apm-critical-path]]`observability:apmEnableCriticalPath`
-When enabled, displays the critical path of a trace. */}
-
-{/* [[observability-enable-progressive-loading]]`observability:apmProgressiveLoading`
-preview:[] When enabled, uses progressive loading of some APM views.
-Data may be requested with a lower sampling rate first, with lower accuracy but faster response times,
-while the unsampled data loads in the background. */}
-
-`observability:apmServiceGroupMaxNumberOfServices`
-
-: Limit the number of services in a given service group.
-
-{/* [[observability-apm-optimized-sort]]`observability:apmServiceInventoryOptimizedSorting`
-preview:[] Sorts services without anomaly detection rules on the APM Service inventory page by service name. */}
-
-`observability:apmDefaultServiceEnvironment`
-
-: Set the default environment for APM. When left empty, data from all environments will be displayed by default.
-
-`observability:apmEnableProfilingIntegration`
-
-: Enable the Universal Profiling integration in APM.
-
-{/* [[observability-enable-aws-lambda-metrics]]`observability:enableAwsLambdaMetrics`
-preview:[] Display Amazon Lambda metrics in the service metrics tab. */}
-
-`observability:enableComparisonByDefault`
-
-: Enable the comparison feature by default.
-
-`observability:enableInspectEsQueries`
-
-: When enabled, allows you to inspect Elasticsearch queries in API responses.
-
-{/* [[observability-apm-trace-explorer-tab]]`observability:apmTraceExplorerTab`
-preview:[] Enable the APM Trace Explorer feature, that allows you to search and inspect traces with KQL or EQL. */}
-
-## APM Labs
-
-**APM Labs** allows you to easily try out new features that are in technical preview.
-
-To enable APM labs, go to **Applications** → **Settings** → **General settings** and toggle **Enable labs button in APM**.
-Select **Save changes** and refresh the page.
-
-After enabling **APM Labs** select **Labs** in the toolbar to see the technical preview features available to try out.
-
diff --git a/docs/en/serverless/apm/apm-observe-lambda-functions.mdx b/docs/en/serverless/apm/apm-observe-lambda-functions.mdx
deleted file mode 100644
index 1e8100998f..0000000000
--- a/docs/en/serverless/apm/apm-observe-lambda-functions.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
----
-slug: /serverless/observability/apm-observe-lambda-functions
-title: Observe Lambda functions
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-Elastic APM provides performance and error monitoring for AWS Lambda functions.
-See how your Lambda functions relate to and depend on other services, and
-get insight into function execution and runtime behavior, such as Lambda duration, cold start rate, cold start duration, compute usage, memory usage, and more.
-
-To set up Lambda monitoring, refer to .
-
-
-
-## Cold starts
-
-A cold start occurs when a Lambda function has not been used for a certain period of time: a Lambda worker receives a request to run the function and must first prepare an execution environment.
-
-Cold starts are an unavoidable byproduct of the serverless world, but visibility into how they impact your services can help you make better decisions about factors like how much memory to allocate to a function, whether to enable provisioned concurrency, or if it's time to consider removing a large dependency.
-
-### Cold start rate
-
-The cold start rate (i.e. proportion of requests that experience a cold start) is displayed per service and per transaction.
-
-Cold start is also displayed in the trace waterfall, where you can drill-down into individual traces and see trace metadata like AWS request ID, trigger type, and trigger request ID.
-
-{/* TODO: RETAKE
- */}
-
-### Latency distribution correlation
-
-The latency correlations feature can be used to visualize the impact of Lambda cold starts on latency—just select the `faas.coldstart` field.
-
-{/* TODO: RETAKE
- */}
-
-## AWS Lambda function grouping
-
-The default APM agent configuration results in one APM service per AWS Lambda function,
-where the Lambda function name is the service name.
-
-In some use cases, it makes more sense to logically group multiple Lambda functions under a single
-APM service. You can achieve this by setting the `ELASTIC_APM_SERVICE_NAME` environment variable
-on related Lambda functions to the same value.
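-
-For illustration, here is a sketch that groups two related functions under a single APM service using the AWS CLI. The function and service names are hypothetical, and note that `--environment` replaces a function's existing environment variables, so include any others you need:
-
-```bash
-aws lambda update-function-configuration \
-  --function-name checkout-cart \
-  --environment "Variables={ELASTIC_APM_SERVICE_NAME=checkout}"
-
-aws lambda update-function-configuration \
-  --function-name checkout-payment \
-  --environment "Variables={ELASTIC_APM_SERVICE_NAME=checkout}"
-```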
-
diff --git a/docs/en/serverless/apm/apm-query-your-data.mdx b/docs/en/serverless/apm/apm-query-your-data.mdx
deleted file mode 100644
index 9a3b08caf1..0000000000
--- a/docs/en/serverless/apm/apm-query-your-data.mdx
+++ /dev/null
@@ -1,74 +0,0 @@
----
-slug: /serverless/observability/apm-query-your-data
-title: Query your data
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-Querying your APM data is an essential tool that can make finding bottlenecks in your code even more straightforward.
-
-Using the query bar, a powerful data query feature, you can pass advanced queries on your data
-to filter on specific pieces of information you’re interested in.
-APM queries entered into the query bar are added as parameters to the URL, so it’s easy to share a specific query or view with others.
-
-The query bar comes with a handy autocomplete that helps you find fields and even suggests values they contain.
-You can select the query bar and press the down arrow on your keyboard to begin scanning suggestions.
-
-When you type, you can begin to see some of the fields available for filtering:
-
-
-
-
-
-To learn more about the ((kib)) query language capabilities, see the [Kibana Query Language Enhancements](((kibana-ref))/kuery-query.html) documentation.
-
-
-
-## APM queries
-
-APM queries can be handy for removing noise from your data in the Services, Transactions,
-Errors, Metrics, and Traces views.
-
-For example, in the **Services** view, you can quickly view a list of all the instrumented services running on your production
-environment: `service.environment : production`. Or filter the list by including the APM agent's name and the host it’s running on:
-`service.environment : "production" and agent.name : "java" and host.name : "prod-server1"`.
-
-On the **Traces** view, you might want to view failed transaction results from any of your running containers:
-`transaction.result :"FAILURE" and container.id : *`.
-
-On the **Transactions** view, you may want to list only transactions slower than a specified time threshold: `transaction.duration.us > 2000000`.
-Or filter the list by including the service version and the Kubernetes pod it's running on:
-`transaction.duration.us > 2000000 and service.version : "7.12.0" and kubernetes.pod.name : "pod-5468b47f57-pqk2m"`.
-
-## Querying in Discover
-
-Alternatively, you can query your APM documents in [*Discover*](((kibana-ref))/discover.html).
-Querying documents in **Discover** works the same way as queries in the Applications UI,
-and **Discover** supports all of the example APM queries shown on this page.
-
-### Discover queries
-
-One example where you may want to make use of **Discover**
-is to view _all_ transactions for an endpoint instead of just a sample.
-
-Use the Applications UI to find a transaction name and time bucket that you're interested in learning more about.
-Then, switch to **Discover** and make a search:
-
-```shell
-processor.event: "transaction" and transaction.name: "APIRestController#customers" and transaction.duration.us > 13000 and transaction.duration.us < 14000
-```
-
-In this example, we're interested in viewing all of the `APIRestController#customers` transactions
-that took between 13 and 14 milliseconds. Here's what Discover returns:
-
-
-
-You can now explore the data until you find a specific transaction that you're interested in.
-Copy that transaction's `transaction.id` and paste it into the Applications UI to view the data in context:
-
-
-
-
-
diff --git a/docs/en/serverless/apm/apm-reduce-your-data-usage.mdx b/docs/en/serverless/apm/apm-reduce-your-data-usage.mdx
deleted file mode 100644
index 289e17a6b3..0000000000
--- a/docs/en/serverless/apm/apm-reduce-your-data-usage.mdx
+++ /dev/null
@@ -1,20 +0,0 @@
----
-slug: /serverless/observability/apm-reduce-your-data-usage
-title: Reduce your data usage
-description: Implement strategies for reducing your data usage without compromising the ability to analyze APM data.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-The richness and volume of APM data provide unique insights into your applications, but they can
-also mean higher costs and more noise when analyzing data. There are a couple of strategies you can
-use to reduce your data usage while continuing to get the full value of APM data. Read more about
-these strategies:
-
-* : Reduce data storage, costs, and
-noise by ingesting only a percentage of all traces that you can extrapolate from in your analysis.
-* : Compress similar or identical spans to
-reduce storage overhead, processing power needed, and clutter in the Applications UI.
-* : Reduce the stacktrace information
-collected by your APM agents.
diff --git a/docs/en/serverless/apm/apm-reference.mdx b/docs/en/serverless/apm/apm-reference.mdx
deleted file mode 100644
index d427c25520..0000000000
--- a/docs/en/serverless/apm/apm-reference.mdx
+++ /dev/null
@@ -1,17 +0,0 @@
----
-slug: /serverless/observability/apm-reference
-title: Reference
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-The following reference documentation is available:
-
-*
-* [API reference](https://docs.elastic.co/api-reference/observability/post_api-apm-agent-keys)
-
-In addition to the public API above, the APM managed intake service offers an
-.
-This API is exclusively for APM agent developers. The vast majority of users should have no reason to interact with this API.
diff --git a/docs/en/serverless/apm/apm-send-traces-to-elastic.mdx b/docs/en/serverless/apm/apm-send-traces-to-elastic.mdx
deleted file mode 100644
index 8aa7057e54..0000000000
--- a/docs/en/serverless/apm/apm-send-traces-to-elastic.mdx
+++ /dev/null
@@ -1,25 +0,0 @@
----
-slug: /serverless/observability/apm-send-data-to-elastic
-title: Send APM data to Elastic
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-
- Want to get started quickly? See Get started with traces and APM.
-
-
-Send APM data to Elastic with:
-
-* **:** Elastic APM agents are lightweight libraries you install in your applications and services. They automatically instrument supported technologies, and offer APIs for custom code instrumentation.
-* **:** OpenTelemetry is a set of APIs, SDKs, tooling, and integrations that enable the capture and management of telemetry data from your services and applications.
-
-Elastic also supports instrumentation of .
-
-{/* To do: We should put a diagram here showing how high-level arch */}
diff --git a/docs/en/serverless/apm/apm-server-api.mdx b/docs/en/serverless/apm/apm-server-api.mdx
deleted file mode 100644
index 625a1540ca..0000000000
--- a/docs/en/serverless/apm/apm-server-api.mdx
+++ /dev/null
@@ -1,58 +0,0 @@
----
-slug: /serverless/observability/apm-server-api
-title: Managed intake service event API
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-import Api from './apm-server-api/api.mdx'
-import ApiError from './apm-server-api/api-error.mdx'
-import ApiEvents from './apm-server-api/api-events.mdx'
-import ApiInfo from './apm-server-api/api-info.mdx'
-import ApiMetadata from './apm-server-api/api-metadata.mdx'
-import ApiMetricset from './apm-server-api/api-metricset.mdx'
-import ApiSpan from './apm-server-api/api-span.mdx'
-import ApiTransaction from './apm-server-api/api-transaction.mdx'
-import OtelAPI from './apm-server-api/otel-api.mdx'
-
-
-
- This API is exclusively for APM agent developers. The vast majority of users should have no reason to interact with this API.
-
-
-
-
-## Server information API
-
-
-
-## Events intake API
-
-
-
-### Metadata
-
-
-
-### Transactions
-
-
-
-### Spans
-
-
-
-### Errors
-
-
-
-### Metrics
-
-
-
-## OpenTelemetry API
-
-
-
diff --git a/docs/en/serverless/apm/apm-server-api/api-error.mdx b/docs/en/serverless/apm/apm-server-api/api-error.mdx
deleted file mode 100644
index a0a97086d8..0000000000
--- a/docs/en/serverless/apm/apm-server-api/api-error.mdx
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-import V2Error from '../../transclusion/apm/guide/spec/v2/error.mdx'
-
-
-
-An error or a logged error message that occurred in a monitored service and was captured by an agent.
-
-
-
-#### Error Schema
-
-The managed intake service uses a JSON Schema to validate requests. The specification for errors is defined on
-[GitHub](https://github.com/elastic/apm-server/blob/main/docs/spec/v2/error.json) and included below.
-
-
-
-
diff --git a/docs/en/serverless/apm/apm-server-api/api-events.mdx b/docs/en/serverless/apm/apm-server-api/api-events.mdx
deleted file mode 100644
index 2bc1a53d92..0000000000
--- a/docs/en/serverless/apm/apm-server-api/api-events.mdx
+++ /dev/null
@@ -1,138 +0,0 @@
-
-
-
-
-
-Most users do not need to interact directly with the events intake API.
-
-
-The events intake API is what we call the internal protocol that APM agents use to talk to the managed intake service.
-Agents communicate with the managed intake service by sending events — captured pieces of information — in an HTTP request.
-Events can be:
-
-* Transactions
-* Spans
-* Errors
-* Metrics
-
-Each event is sent as its own line in the HTTP request body.
-This is known as [newline delimited JSON (NDJSON)](https://github.com/ndjson/ndjson-spec).
-
-With NDJSON, agents can open an HTTP POST request and use chunked encoding to stream events to the managed intake service
-as soon as they are recorded in the agent.
-This makes it simple for agents to serialize each event to a stream of newline delimited JSON.
-The managed intake service also treats the HTTP body as a compressed stream and thus reads and handles each event independently.
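-
-For illustration, the body of a request might look something like the following sketch. The first line carries the shared metadata and each subsequent line is a single event; the fields shown here are abbreviated and not schema-complete, so refer to the schemas below for the authoritative field list:
-
-```json
-{"metadata": {"service": {"name": "my-service", "agent": {"name": "java", "version": "1.0.0"}}}}
-{"transaction": {"id": "4340a8e0df1906ecbfa9", "trace_id": "0acf1412d237a2f17d4b1c6253fbe832", "name": "GET /users", "type": "request", "duration": 32.5, "span_count": {"started": 1}}}
-{"span": {"id": "2632e723a5a4f5c1", "transaction_id": "4340a8e0df1906ecbfa9", "trace_id": "0acf1412d237a2f17d4b1c6253fbe832", "parent_id": "4340a8e0df1906ecbfa9", "name": "SELECT FROM users", "type": "db", "start": 2.8, "duration": 3.7}}
-```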
-
-Refer to to learn more about the different types of events.
-
-
-
-### Endpoints
-
-The managed intake service exposes the following endpoints for Elastic APM agent data intake:
-
-| Name | Endpoint |
-|---|---|
-| APM agent event intake | `/intake/v2/events` |
-
-{/* | RUM event intake (v2) | `/intake/v2/rum/events` |
-| RUM event intake (v3) | `/intake/v3/rum/events` | */}
-
-
-
-### Request
-
-Send an `HTTP POST` request to the managed intake service `intake/v2/events` endpoint:
-
-```bash
-https://{hostname}:{port}/intake/v2/events
-```
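-
-For illustration, a request might look like the following sketch, with a placeholder hostname, port, and API key, and a minimal two-line NDJSON body (one `metadata` line followed by one `transaction` line). The exact required fields are defined by the JSON schemas referenced later on this page:
-
-```bash
-# Sketch: send one metadata line and one transaction line as NDJSON.
-# The hostname, port, API key, and all field values are placeholders.
-cat > events.ndjson <<'EOF'
-{"metadata": {"service": {"name": "my-service", "agent": {"name": "my-agent", "version": "1.0.0"}}}}
-{"transaction": {"id": "945254c567a5417e", "trace_id": "0123456789abcdef0123456789abcdef", "type": "request", "name": "GET /api/users", "duration": 32.5, "span_count": {"started": 0}}}
-EOF
-
-curl -X POST "https://{hostname}:{port}/intake/v2/events" \
-  -H "Authorization: ApiKey ${API_KEY}" \
-  -H "Content-Type: application/x-ndjson" \
-  --data-binary @events.ndjson
-```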
-
-The managed intake service supports asynchronous processing of batches.
-To request asynchronous processing, set the `async` query parameter in the POST request
-to the `intake/v2/events` endpoint:
-
-```bash
-https://{hostname}:{port}/intake/v2/events?async=true
-```
-
-
-Since asynchronous processing defers some of the event processing to the
-background and takes place after the client has closed the request, some errors
-can't be communicated back to the client and are logged by the managed intake service.
-Furthermore, asynchronous processing requests will only be scheduled if the managed intake service can
-service the incoming request; requests that cannot be serviced will receive a
-`503` "queue is full" error response.
-
-
-{/* For RUM send an `HTTP POST` request to the managed intake service `intake/v3/rum/events` endpoint instead:
-
-```bash
-http(s)://{hostname}:{port}/intake/v3/rum/events
-``` */}
-
-
-
-### Response
-
-On success, the server will respond with a 202 Accepted status code and no body.
-
-Keep in mind that events can succeed and fail independently of each other. Only if all events succeed does the server respond with a 202.
-
-
-
-### API Errors
-
-There are two types of errors that the managed intake service may return to an agent:
-
-* Event related errors (typically validation errors)
-* Non-event related errors
-
-The managed intake service processes events one after the other.
-If an error is encountered while processing an event,
-the error and the document that caused it are added to an internal array.
-The managed intake service only saves the first five event-related errors;
-any additional event-related errors are not returned to the agent.
-Once all events have been processed,
-the error response is sent.
-
-Some errors, not relating to specific events,
-may terminate the request immediately.
-For example: IP rate limit reached, wrong metadata, etc.
-If at any point one of these errors is encountered,
-it is added to the internal array and immediately returned.
-
-An example error response might look something like this:
-
-```json
-{
- "errors": [
- {
- "message": "", [^1]
- "document": "" [^2]
- },{
- "message": "",
- "document": ""
- },{
- "message": "",
- "document": ""
- },{
- "message": "too many requests" [^3]
- },
- ],
- "accepted": 2320 [^4]
-}
-```
-[^1]: An event related error
-[^2]: The document causing the error
-[^3]: An immediately returning non-event related error
-[^4]: The number of accepted events
-
-If you're developing an agent, these errors can be useful for debugging.
-
-
-
-### Event API Schemas
-
-The managed intake service uses a collection of JSON Schemas for validating requests to the intake API.
diff --git a/docs/en/serverless/apm/apm-server-api/api-info.mdx b/docs/en/serverless/apm/apm-server-api/api-info.mdx
deleted file mode 100644
index 244d44a4bb..0000000000
--- a/docs/en/serverless/apm/apm-server-api/api-info.mdx
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-
-
-The managed intake service exposes an API endpoint to query general server information.
-This lightweight endpoint is useful as a server up/down health check.
-
-
-
-### Server Information endpoint
-
-Send an `HTTP GET` request to the server information endpoint:
-
-```bash
-https://{hostname}:{port}/
-```
-
-This endpoint always returns an HTTP 200.
-
-Requests to this endpoint must be authenticated.
-
-
-
-#### Example
-
-Example managed intake service information request:
-
-```sh
-curl -X GET http://127.0.0.1:8200/ \
- -H "Authorization: ApiKey api_key"
-
-{
- "build_date": "2021-12-18T19:59:06Z",
- "build_sha": "24fe620eeff5a19e2133c940c7e5ce1ceddb1445",
- "publish_ready": true,
- "version": "((version))"
-}
-```
diff --git a/docs/en/serverless/apm/apm-server-api/api-metadata.mdx b/docs/en/serverless/apm/apm-server-api/api-metadata.mdx
deleted file mode 100644
index 1faa07dc72..0000000000
--- a/docs/en/serverless/apm/apm-server-api/api-metadata.mdx
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
-import V2Metadata from '../../transclusion/apm/guide/spec/v2/metadata.mdx'
-
-
-
-Every new connection to the managed intake service starts with a `metadata` stanza.
-This provides general metadata concerning the other objects in the stream.
-
-Rather than send this metadata information from the agent multiple times,
-the managed intake service hangs on to this information and applies it to other objects in the stream as necessary.
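-
-For example, a `metadata` line might look like the following minimal sketch, with placeholder values; the full set of accepted fields is defined by the schema below:
-
-```json
-{"metadata": {"service": {"name": "my-service", "version": "1.2.3", "agent": {"name": "my-agent", "version": "1.0.0"}}, "labels": {"team": "payments"}}}
-```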
-
-
-Metadata is stored under `context` when viewing documents in ((es)).
-
-
-#### Metadata Schema
-
-The managed intake service uses JSON Schema to validate requests. The specification for metadata is defined on
-[GitHub](https://github.com/elastic/apm-server/blob/main/docs/spec/v2/metadata.json) and included below.
-
-
-
-
-
-#### Kubernetes data
-
-APM agents automatically read Kubernetes data and send it to the managed intake service.
-In most instances, agents are able to read this data from inside the container.
-If this is not the case, or if you wish to override this data, you can set environment variables for the agents to read.
-These environment variables are set via the Kubernetes [Downward API](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables).
-Here's how you would add the environment variables to your Kubernetes pod spec:
-
-```yaml
- - name: KUBERNETES_NODE_NAME
- valueFrom:
- fieldRef:
- fieldPath: spec.nodeName
- - name: KUBERNETES_POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: KUBERNETES_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- - name: KUBERNETES_POD_UID
- valueFrom:
- fieldRef:
- fieldPath: metadata.uid
-```
-
-The table below maps these environment variables to the APM metadata event field:
-
-| Environment variable | Metadata field name |
-|---|---|
-| `KUBERNETES_NODE_NAME` | `system.kubernetes.node.name` |
-| `KUBERNETES_POD_NAME` | `system.kubernetes.pod.name` |
-| `KUBERNETES_NAMESPACE` | `system.kubernetes.namespace` |
-| `KUBERNETES_POD_UID` | `system.kubernetes.pod.uid` |
diff --git a/docs/en/serverless/apm/apm-server-api/api-metricset.mdx b/docs/en/serverless/apm/apm-server-api/api-metricset.mdx
deleted file mode 100644
index ea9031f136..0000000000
--- a/docs/en/serverless/apm/apm-server-api/api-metricset.mdx
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-import V2Metricset from '../../transclusion/apm/guide/spec/v2/metricset.mdx'
-
-
-
-Metrics contain application metric data captured by an ((apm-agent)).
-
-
-
-#### Metric Schema
-
-The managed intake service uses JSON Schema to validate requests. The specification for metrics is defined on
-[GitHub](https://github.com/elastic/apm-server/blob/main/docs/spec/v2/metricset.json) and included below.
-
-
-
-
diff --git a/docs/en/serverless/apm/apm-server-api/api-span.mdx b/docs/en/serverless/apm/apm-server-api/api-span.mdx
deleted file mode 100644
index eac1803c2b..0000000000
--- a/docs/en/serverless/apm/apm-server-api/api-span.mdx
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-import V2Span from '../../transclusion/apm/guide/spec/v2/span.mdx'
-
-
-
-Spans are events that occur in a monitored service and are captured by an agent.
-
-
-
-#### Span Schema
-
-The managed intake service uses JSON Schema to validate requests. The specification for spans is defined on
-[GitHub](https://github.com/elastic/apm-server/blob/main/docs/spec/v2/span.json) and included below.
-
-
-
-
diff --git a/docs/en/serverless/apm/apm-server-api/api-transaction.mdx b/docs/en/serverless/apm/apm-server-api/api-transaction.mdx
deleted file mode 100644
index 943c30623c..0000000000
--- a/docs/en/serverless/apm/apm-server-api/api-transaction.mdx
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-import V2Transaction from '../../transclusion/apm/guide/spec/v2/transaction.mdx'
-
-
-
-Transactions are events corresponding to an incoming request or similar task occurring in a monitored service.
-
-
-
-#### Transaction Schema
-
-The managed intake service uses JSON Schema to validate requests. The specification for transactions is defined on
-[GitHub](https://github.com/elastic/apm-server/blob/main/docs/spec/v2/transaction.json) and included below.
-
-
-
-
\ No newline at end of file
diff --git a/docs/en/serverless/apm/apm-server-api/api.mdx b/docs/en/serverless/apm/apm-server-api/api.mdx
deleted file mode 100644
index fe08e4786f..0000000000
--- a/docs/en/serverless/apm/apm-server-api/api.mdx
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
-
-
-The managed intake service exposes endpoints for:
-
-* The server information API
-* The Elastic APM events intake API
-* The OpenTelemetry intake API
diff --git a/docs/en/serverless/apm/apm-server-api/otel-api.mdx b/docs/en/serverless/apm/apm-server-api/otel-api.mdx
deleted file mode 100644
index 8241f6b747..0000000000
--- a/docs/en/serverless/apm/apm-server-api/otel-api.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
-
-Elastic supports receiving traces, metrics, and logs over the
-[OpenTelemetry Protocol (OTLP)](https://opentelemetry.io/docs/specs/otlp/).
-OTLP is the default transfer protocol for OpenTelemetry and is supported natively by the managed intake service.
-
-The managed intake service supports two OTLP communication protocols on the same port:
-
-* OTLP/HTTP (protobuf)
-* OTLP/gRPC
-
-### OTLP/gRPC paths
-
-| Name | Endpoint |
-|---|---|
-|OTLP metrics intake |`/opentelemetry.proto.collector.metrics.v1.MetricsService/Export`
-|OTLP trace intake |`/opentelemetry.proto.collector.trace.v1.TraceService/Export`
-|OTLP logs intake |`/opentelemetry.proto.collector.logs.v1.LogsService/Export`
-
-### OTLP/HTTP paths
-
-| Name | Endpoint |
-|---|---|
-|OTLP metrics intake |`/v1/metrics`
-|OTLP trace intake |`/v1/traces`
-|OTLP logs intake |`/v1/logs`
-
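-For example, an OpenTelemetry SDK or Collector exporter can typically be pointed at these paths with the standard OTLP exporter environment variables. This is a sketch only, with a placeholder endpoint and API key; check your SDK or Collector documentation for the exact settings it supports:
-
-```bash
-# Sketch: standard OTLP exporter settings (placeholder endpoint and API key).
-export OTEL_EXPORTER_OTLP_ENDPOINT="https://{hostname}:{port}"
-export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey ${API_KEY}"
-export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
-```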
-
- See our to learn how to send data to the managed intake service from an OpenTelemetry agent or an OpenTelemetry collector.
-
\ No newline at end of file
diff --git a/docs/en/serverless/apm/apm-stacktrace-collection.mdx b/docs/en/serverless/apm/apm-stacktrace-collection.mdx
deleted file mode 100644
index 10e020742f..0000000000
--- a/docs/en/serverless/apm/apm-stacktrace-collection.mdx
+++ /dev/null
@@ -1,14 +0,0 @@
----
-slug: /serverless/observability/apm-stacktrace-collection
-title: Stacktrace collection
-description: Reduce data storage and costs by reducing stacktrace collection
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-Elastic APM agents collect `stacktrace` information under certain circumstances. This can be very helpful in identifying issues in your code, but it also comes with an overhead at collection time and increases your storage usage.
-
-Stack trace collection settings are managed in each APM agent. You can enable and disable this feature, or set specific configuration limits, like the maximum number of stack trace frames to collect, or the minimum duration a span must have for its stack trace to be collected.
-
-See the relevant [((apm-agent)) documentation](((apm-agents-ref))/index.html) to learn how to customize stacktrace collection.
diff --git a/docs/en/serverless/apm/apm-track-deployments-with-annotations.mdx b/docs/en/serverless/apm/apm-track-deployments-with-annotations.mdx
deleted file mode 100644
index c82980d484..0000000000
--- a/docs/en/serverless/apm/apm-track-deployments-with-annotations.mdx
+++ /dev/null
@@ -1,55 +0,0 @@
----
-slug: /serverless/observability/apm-track-deployments-with-annotations
-title: Track deployments with annotations
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-
-
-For enhanced visibility into your deployments, we offer deployment annotations on all transaction charts.
-This feature enables you to easily determine if your deployment has increased response times for an end-user,
-or if the memory/CPU footprint of your application has changed.
-Being able to quickly identify bad deployments enables you to roll back and fix issues without causing costly outages.
-
-By default, automatic deployment annotations are enabled.
-This means APM will create an annotation on your data when the `service.version` of your application changes.
-
-Alternatively, you can explicitly create deployment annotations with our annotation API.
-The API can integrate into your CI/CD pipeline,
-so that each time you deploy, a POST request is sent to the annotation API endpoint:
-
-{/* TODO: This is commented out for now, but it might be nice to add a working example? */}
-{/* ```shell
-curl -X POST \
- http://localhost:5601/api/apm/services/${SERVICE_NAME}/annotation \ [^1]
--H 'Content-Type: application/json' \
--H 'kbn-xsrf: true' \
--H 'Authorization: Basic ${API_KEY}' \ [^2]
--d '{
- "@timestamp": "${DEPLOY_TIME}", [^3]
- "service": {
- "version": "${SERVICE_VERSION}" [^4]
- },
- "message": "${MESSAGE}" [^5]
- }'
-```
-[^1]: The `service.name` of your application
-[^2]: An APM API key with sufficient privileges
-[^3]: The time of the deployment
-[^4]: The `service.version` to be displayed in the annotation
-[^5]: A custom message to be displayed in the annotation */}
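-
-For illustration, such a CI/CD step might look like the following sketch; the base URL, endpoint path, and authentication scheme shown here are placeholders to confirm against the Annotation API reference:
-
-```bash
-# Sketch only: all values, the endpoint path, and the auth scheme are placeholders.
-curl -X POST "${KIBANA_URL}/api/apm/services/${SERVICE_NAME}/annotation" \
-  -H 'Content-Type: application/json' \
-  -H 'kbn-xsrf: true' \
-  -H "Authorization: ApiKey ${API_KEY}" \
-  -d "{
-        \"@timestamp\": \"${DEPLOY_TIME}\",
-        \"service\": { \"version\": \"${SERVICE_VERSION}\" },
-        \"message\": \"${MESSAGE}\"
-      }"
-```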
-
-{/* Todo: Link to API docs */}
-See the Annotation API reference for more information.
-
-
-If custom annotations have been created for the selected time period, any derived annotations, i.e., those created automatically when `service.version` changes, will not be shown.
-
-
diff --git a/docs/en/serverless/apm/apm-transaction-sampling.mdx b/docs/en/serverless/apm/apm-transaction-sampling.mdx
deleted file mode 100644
index d4f981f67a..0000000000
--- a/docs/en/serverless/apm/apm-transaction-sampling.mdx
+++ /dev/null
@@ -1,137 +0,0 @@
----
-slug: /serverless/observability/apm-transaction-sampling
-title: Transaction sampling
-description: Reduce data storage, costs, and noise by ingesting only a percentage of all traces that you can extrapolate from in your analysis.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import ConfigureHeadBasedSampling from './apm-transaction-sampling/configure-head-based-sampling.mdx'
-
-Distributed tracing can
-generate a substantial amount of data. More data can mean higher costs and more noise.
-Sampling aims to lower the amount of data ingested and the effort required to analyze that data —
-all while still making it easy to find anomalous patterns in your applications, detect outages, track errors,
-and lower mean time to recovery (MTTR).
-
-## Head-based sampling
-
-In head-based sampling, the sampling decision for each trace is made when the trace is initiated.
-Each trace has a defined and equal probability of being sampled.
-
-For example, a sampling value of `.2` indicates a transaction sample rate of `20%`.
-This means that only `20%` of traces will send and retain all of their associated information.
-The remaining traces will drop contextual information to reduce the transfer and storage size of the trace.
-
-Head-based sampling is quick and easy to set up.
-Its downside is that it's entirely random — interesting
-data might be discarded purely due to chance.
-
-### Distributed tracing
-
-In a _distributed_ trace, the sampling decision is still made when the trace is initiated.
-Each subsequent service respects the initial service's sampling decision, regardless of its configured sample rate;
-the result is a sampling percentage that matches the initiating service.
-
-In the example in _Figure 1_, `Service A` initiates four transactions and has a sample rate of `.5` (`50%`).
-The upstream sampling decision is respected, so even if `Service B` and `Service C` define a different
-sample rate, the effective sample rate will be `.5` (`50%`) for all services.
-
-**Figure 1. Upstream sampling decision is respected**
-
-
-
-In the example in _Figure 2_, `Service A` initiates four transactions and has a sample rate of `1` (`100%`).
-Again, the upstream sampling decision is respected, so the sample rate for all services will
-be `1` (`100%`).
-
-**Figure 2. Upstream sampling decision is respected**
-
-
-
-### Trace continuation strategies with distributed tracing
-
-In addition to setting the sample rate, you can also specify which _trace continuation strategy_ to use.
-There are three trace continuation strategies: `continue`, `restart`, and `restart_external`.
-
-The **`continue`** trace continuation strategy is the default and behaves similarly to the examples in
-the [Distributed tracing section](#distributed-tracing).
-
-Use the **`restart_external`** trace continuation strategy on an Elastic-monitored service to start
-a new trace if the previous service did not have a `traceparent` header with `es` vendor data.
-This can be helpful if a transaction includes an Elastic-monitored service that is receiving requests
-from an unmonitored service.
-
-In the example in _Figure 3_, `Service A` is an Elastic-monitored service that initiates four transactions
-with a sample rate of `.25` (`25%`). Because `Service B` is unmonitored, the traces started in
-`Service A` will end there. `Service C` is an Elastic-monitored service that initiates four transactions
-that start new traces with a new sample rate of `.5` (`50%`). Because `Service D` is also an
-Elastic-monitored service, the upstream sampling decision defined in `Service C` is respected.
-The end result will be three sampled traces.
-
-**Figure 3. Using the `restart_external` trace continuation strategy**
-
-
-
-Use the **`restart`** trace continuation strategy on an Elastic-monitored service to start
-a new trace regardless of whether the previous service had a `traceparent` header.
-This can be helpful if an Elastic-monitored service is publicly exposed, and you do not
-want tracing data to possibly be spoofed by user requests.
-
-In the example in _Figure 4_, `Service A` and `Service B` are Elastic-monitored services that use the
-default trace continuation strategy. `Service A` has a sample rate of `.25` (`25%`), and that
-sampling decision is respected in `Service B`. `Service C` is an Elastic-monitored service that
-uses the `restart` trace continuation strategy and has a sample rate of `1` (`100%`).
-Because it uses `restart`, the upstream sample rate is _not_ respected in `Service C` and all four
-traces will be sampled as new traces in `Service C`. The end result will be five sampled traces.
-
-**Figure 4. Using the `restart` trace continuation strategy**
-
-
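-The trace continuation strategy is set through APM agent configuration. As a sketch, using the `ELASTIC_APM_*` environment-variable convention (the exact option name and supported values vary by agent, so check your agent's configuration reference):
-
-```bash
-# Sketch: choose a trace continuation strategy via agent configuration.
-# The option name and accepted values are agent-specific.
-export ELASTIC_APM_TRACE_CONTINUATION_STRATEGY=restart_external
-```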
-
-### OpenTelemetry
-
-Head-based sampling is implemented directly in the APM agents and SDKs.
-The sample rate must be propagated between services and the managed intake service in order to produce accurate metrics.
-
-OpenTelemetry offers multiple samplers. However, most samplers do not propagate the sample rate.
-This results in inaccurate span-based metrics, like APM throughput, latency, and error metrics.
-
-For accurate span-based metrics when using head-based sampling with OpenTelemetry, you must use
-a [consistent probability sampler](https://opentelemetry.io/docs/specs/otel/trace/tracestate-probability-sampling/).
-These samplers propagate the sample rate between services and the managed intake service, resulting in accurate metrics.
-
-
- OpenTelemetry does not offer consistent probability samplers in all languages. Refer to the documentation of your favorite OpenTelemetry agent or SDK for more information.
-
-
-## Sampled data and visualizations
-
-A sampled trace retains all data associated with it.
-A non-sampled trace drops all span and transaction data.
-Regardless of the sampling decision, all traces retain error data.
-
-Some visualizations in the Applications UI, like latency, are powered by aggregated transaction and span metrics.
-Metrics are based on sampled traces and weighted by the inverse sampling rate.
-For example, if you sample at 5%, each trace is counted as 20.
-As a result, as the variance of latency increases, or the sampling rate decreases, your level of error will increase.
-
-## Sample rates
-
-What's the best sampling rate? Unfortunately, there isn't one.
-Sampling is dependent on your data, the throughput of your application, data retention policies, and other factors.
-Sampling rates anywhere from `.1%` to `100%` are considered normal.
-You'll likely decide on a unique sample rate for different scenarios.
-Here are some examples:
-
-* Services with considerably more traffic than others might be safe to sample at lower rates
-* Routes that are more important than others might be sampled at higher rates
-* A production service environment might warrant a higher sampling rate than a development environment
-* Failed trace outcomes might be more interesting than successful traces — thus requiring a higher sample rate
-
-Regardless of the above, cost-conscious customers are likely to be fine with a lower sample rate.
-
-## Configure head-based sampling
-
-
diff --git a/docs/en/serverless/apm/apm-transaction-sampling/configure-head-based-sampling.mdx b/docs/en/serverless/apm/apm-transaction-sampling/configure-head-based-sampling.mdx
deleted file mode 100644
index 313a8f73b7..0000000000
--- a/docs/en/serverless/apm/apm-transaction-sampling/configure-head-based-sampling.mdx
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
-{/* There are three ways to adjust the head-based sampling rate of your APM agents:
-
-### Dynamic configuration
-
-The transaction sample rate can be changed dynamically (no redeployment necessary) on a per-service and per-environment
-basis with [((apm-agent)) Configuration](((kibana-ref))/agent-configuration.html) in ((kib)). */}
-
-{/* ### ((kib)) API configuration
-
-((apm-agent)) configuration exposes an API that can be used to programmatically change
-your agents' sampling rate.
-An example is provided in the [Agent configuration API reference](((kibana-ref))/agent-config-api.html). */}
-
-Each APM agent provides a configuration value used to set the transaction sample rate.
-Refer to the relevant agent's documentation for more details:
-
-* Go: [`ELASTIC_APM_TRANSACTION_SAMPLE_RATE`](((apm-go-ref-v))/configuration.html#config-transaction-sample-rate)
-* Java: [`transaction_sample_rate`](((apm-java-ref-v))/config-core.html#config-transaction-sample-rate)
-* .NET: [`TransactionSampleRate`](((apm-dotnet-ref-v))/config-core.html#config-transaction-sample-rate)
-* Node.js: [`transactionSampleRate`](((apm-node-ref-v))/configuration.html#transaction-sample-rate)
-* PHP: [`transaction_sample_rate`](((apm-php-ref))/configuration-reference.html#config-transaction-sample-rate)
-* Python: [`transaction_sample_rate`](((apm-py-ref-v))/configuration.html#config-transaction-sample-rate)
-* Ruby: [`transaction_sample_rate`](((apm-ruby-ref-v))/configuration.html#config-transaction-sample-rate)
-
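-For example, to sample roughly 20% of transactions you could set the sample rate to `0.2`. As a sketch, in the environment-variable form used by several agents (the exact setting name is listed per agent above):
-
-```bash
-# Sketch: sample ~20% of transactions (environment-variable form; see the
-# per-agent configuration option names listed above).
-export ELASTIC_APM_TRANSACTION_SAMPLE_RATE=0.2
-```
-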
diff --git a/docs/en/serverless/apm/apm-troubleshooting.mdx b/docs/en/serverless/apm/apm-troubleshooting.mdx
deleted file mode 100644
index b2e40e9c4c..0000000000
--- a/docs/en/serverless/apm/apm-troubleshooting.mdx
+++ /dev/null
@@ -1,49 +0,0 @@
----
-slug: /serverless/observability/apm-troubleshooting
-title: Troubleshooting
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-import CommonProblems from './apm-troubleshooting/common-problems.mdx'
-import CommonResponseCodes from './apm-troubleshooting/common-response-codes.mdx'
-
-
-
-This section provides solutions to common questions and problems,
-along with processing and performance guidance.
-
-## Common problems
-
-
-
-## Common response codes
-
-
-
-## Related troubleshooting resources
-
-For additional help with other APM components, see the links below.
-((agent)) and each ((apm-agent)) have their own troubleshooting guides:
-
-* [((fleet)) and ((agent)) troubleshooting](((fleet-guide))/troubleshooting-intro.html)
-* [.NET agent troubleshooting](((apm-dotnet-ref))/troubleshooting.html)
-* [Go agent troubleshooting](((apm-go-ref))/troubleshooting.html)
-* [Java agent troubleshooting](((apm-java-ref))/trouble-shooting.html)
-* [Node.js agent troubleshooting](((apm-node-ref))/troubleshooting.html)
-* [PHP agent troubleshooting](((apm-php-ref))/troubleshooting.html)
-* [Python agent troubleshooting](((apm-py-ref))/troubleshooting.html)
-* [Ruby agent troubleshooting](((apm-ruby-ref))/debugging.html)
-
-## Elastic Support
-
-We offer a support experience unlike any other.
-Our team of professionals 'speak human and code' and love making your day.
-[Learn more about subscriptions](https://www.elastic.co/subscriptions).
-
-{/* ### Discussion forum
-
-For additional questions and feature requests,
-visit our [discussion forum](https://discuss.elastic.co/c/apm). */}
diff --git a/docs/en/serverless/apm/apm-troubleshooting/common-problems.mdx b/docs/en/serverless/apm/apm-troubleshooting/common-problems.mdx
deleted file mode 100644
index fcf46f1ddf..0000000000
--- a/docs/en/serverless/apm/apm-troubleshooting/common-problems.mdx
+++ /dev/null
@@ -1,69 +0,0 @@
-
-
-import NoDataIndexed from '../../transclusion/apm/guide/tab-widgets/no-data-indexed/fleet-managed.mdx'
-
-This section describes common problems you might encounter.
-
-{/* * No data is indexed
-* APM Server response codes
-* Common SSL-related problems
-* I/O Timeout
-* Field limit exceeded */}
-
-### No data is indexed
-
-If no data shows up, first make sure that your APM components are properly connected.
-
-
-
-
-
-### Data is indexed but doesn't appear in the Applications UI
-
-Elastic APM relies on default index mappings, data streams, and pipelines to query and display data.
-If your APM data isn't showing up in the Applications UI, but is elsewhere in Elastic, like Discover,
-you've likely made a change that overwrote a default.
-If you've manually changed a data stream, index template, or index pipeline,
-please verify you are not interfering with the default APM setup.
-
-{/* ### I/O Timeout
-
-I/O Timeouts can occur when your timeout settings across the stack are not configured correctly,
-especially when using a load balancer.
-
-You may see an error like the one below in the ((apm-agent)) logs, and/or a similar error on the intake side:
-
-```logs
-[ElasticAPM] APM Server responded with an error:
-"read tcp 123.34.22.313:8200->123.34.22.40:41602: i/o timeout"
-```
-
-To fix this error, ensure timeouts are incrementing from the ((apm-agent)),
-through your load balancer, to the Elastic APM intake.
-
-By default, Elastic APM agent timeouts are set at 10 seconds, and the Elastic intake timeout is set at 60 seconds.
-Your load balancer should be set somewhere between these numbers.
-
-For example:
-
-```txt
-APM agent --> Load Balancer --> Elastic APM intake
- 10s 15s 60s
-``` */}
-
-
-
-### Field limit exceeded
-
-When adding too many distinct tag keys on a transaction or span,
-you risk creating a [mapping explosion](((ref))/mapping.html#mapping-limit-settings).
-
-For example, avoid using user-specified data,
-like URL parameters, as a tag key.
-Likewise, using the current timestamp or a user ID as a tag key is not a good idea.
-However, tag **values** with a high cardinality are not a problem.
-Just try to keep the number of distinct tag keys to a minimum.
-
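-As an illustration of this advice, here is a sketch using the Python agent's labeling API (the same principle applies to any agent's label or tag API; `user_id` is a hypothetical value):
-
-```python
-import elasticapm  # requires the elastic-apm package; labels apply to the active transaction
-
-user_id = "42"  # hypothetical, user-controlled value
-
-# Avoid: using a user-controlled value as the label *key*; every distinct
-# key becomes a new field in the index mapping.
-# elasticapm.label(**{f"user_{user_id}": True})
-
-# Prefer: a fixed key with a (possibly high-cardinality) value.
-elasticapm.label(user_id=user_id)
-```
-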
-The symptom of a mapping explosion is that transactions and spans are no longer indexed after a certain time. Usually, on the next day,
-the spans and transactions will be indexed again because a new index is created each day.
-But as soon as the field limit is reached, indexing stops again.
diff --git a/docs/en/serverless/apm/apm-troubleshooting/common-response-codes.mdx b/docs/en/serverless/apm/apm-troubleshooting/common-response-codes.mdx
deleted file mode 100644
index 610ee83c24..0000000000
--- a/docs/en/serverless/apm/apm-troubleshooting/common-response-codes.mdx
+++ /dev/null
@@ -1,19 +0,0 @@
-
-
-
-### HTTP 400: Data decoding error / Data validation error
-
-The most likely cause for this error is using an incompatible version of an ((apm-agent)).
-See minimum supported APM agent versions to verify compatibility.
-
-
-
-### HTTP 400: Event too large
-
-APM agents communicate with the Managed intake service by sending events in an HTTP request. Each event is sent as its own line in the HTTP request body. If events are too large, you can reduce the size of the events that your APM agents send by: or .
-
-
-
-### HTTP 401: Invalid token
-
-The API key is invalid.
diff --git a/docs/en/serverless/apm/apm-ui-dependencies.mdx b/docs/en/serverless/apm/apm-ui-dependencies.mdx
deleted file mode 100644
index 7916976fac..0000000000
--- a/docs/en/serverless/apm/apm-ui-dependencies.mdx
+++ /dev/null
@@ -1,50 +0,0 @@
----
-slug: /serverless/observability/apm-dependencies
-title: Dependencies
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-import FeatureBeta from '../partials/feature-beta.mdx'
-
-APM agents collect details about external calls made from instrumented services.
-Sometimes, these external calls resolve into a downstream service that's instrumented — in these cases,
-you can use distributed tracing to drill down into problematic downstream services.
-Other times, though, it's not possible to instrument a downstream dependency —
-like with a database or third-party service.
-**Dependencies** gives you a window into these uninstrumented, downstream dependencies.
-
-
-
-Many application issues are caused by slow or unresponsive downstream dependencies.
-And because a single, slow dependency can significantly impact the end-user experience,
-it's important to be able to quickly identify these problems and determine the root cause.
-
-Select a dependency to see detailed latency, throughput, and failed transaction rate metrics.
-
-
-
-When viewing a dependency, consider your pattern of usage with that dependency.
-If your usage pattern _hasn't_ increased or decreased,
-but the experience has been negatively affected—either with an increase in latency or errors—there's
-likely a problem with the dependency that needs to be addressed.
-
-If your usage pattern _has_ changed, the dependency view can quickly show you whether
-that pattern change exists in all upstream services, or just a subset of your services.
-You might then start digging into traces coming from
-impacted services to determine why that pattern change has occurred.
-
-## Operations
-
-
-
-**Dependency operations** provides a granular breakdown of the operations/queries a dependency is executing.
-
-
-
-Selecting an operation displays the operation's impact and performance trends over time, via key metrics like latency, throughput, and failed transaction rate. In addition, the **Trace sample timeline** provides a visual drill-down into an end-to-end trace sample.
-
-
-
diff --git a/docs/en/serverless/apm/apm-ui-errors.mdx b/docs/en/serverless/apm/apm-ui-errors.mdx
deleted file mode 100644
index f41f416a72..0000000000
--- a/docs/en/serverless/apm/apm-ui-errors.mdx
+++ /dev/null
@@ -1,39 +0,0 @@
----
-slug: /serverless/observability/apm-errors
-title: Errors
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-_Errors_ are groups of exceptions with a similar exception or log message.
-The **Errors** overview provides a high-level view of the exceptions that APM agents catch,
-or that users manually report with APM agent APIs.
-Like errors are grouped together to make it easy to quickly see which errors are affecting your services,
-and to take actions to rectify them.
-
-A service returning a 5xx code from a request handler, controller, etc., will not create
-an exception that an APM agent can catch, and will therefore not show up in this view.
-
-
-
-Selecting an error group ID or error message brings you to the **Error group**.
-
-
-
-The error group details page visualizes the number of error occurrences over time and compared to a recent time range.
-This allows you to quickly determine if the error rate is changing or remaining constant.
-You'll also see the top 5 affected transactions—enabling you to quickly narrow down which transactions are most impacted
-by the selected error.
-
-Further down, you'll see an Error sample.
-The error shown is always the most recent to occur.
-The sample includes the exception message, culprit, stack trace where the error occurred,
-and additional contextual information to help debug the issue—all of which can be copied with the click of a button.
-
-In some cases, you might also see a Transaction sample ID.
-This feature allows you to make a connection between the errors and transactions,
-by linking you to the specific transaction where the error occurred.
-This allows you to see the whole trace, including which services the request went through.
-
diff --git a/docs/en/serverless/apm/apm-ui-infrastructure.mdx b/docs/en/serverless/apm/apm-ui-infrastructure.mdx
deleted file mode 100644
index 9a4300a9d7..0000000000
--- a/docs/en/serverless/apm/apm-ui-infrastructure.mdx
+++ /dev/null
@@ -1,21 +0,0 @@
----
-slug: /serverless/observability/apm-infrastructure
-title: Infrastructure
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-import FeatureBeta from '../partials/feature-beta.mdx'
-
-
-
-The **Infrastructure** tab provides information about the containers, pods, and hosts
-that the selected service is linked to.
-
-
-
-IT operations teams and site reliability engineers (SREs) can use this tab
-to quickly find a service's underlying infrastructure resources when debugging a problem.
-Knowing what infrastructure is related to a service allows you to remediate issues by restarting, killing hanging instances, changing configuration, rolling back deployments, scaling up, scaling out, and so on.
diff --git a/docs/en/serverless/apm/apm-ui-logs.mdx b/docs/en/serverless/apm/apm-ui-logs.mdx
deleted file mode 100644
index 10b77f0246..0000000000
--- a/docs/en/serverless/apm/apm-ui-logs.mdx
+++ /dev/null
@@ -1,21 +0,0 @@
----
-slug: /serverless/observability/apm-logs
-title: Logs
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-import LogOverview from '../transclusion/kibana/logs/log-overview.mdx'
-
-The **Logs** tab shows contextual logs for the selected service.
-
-
-
-
-
-
-Logs displayed on this page are filtered on `service.name`.
-
-
diff --git a/docs/en/serverless/apm/apm-ui-metrics.mdx b/docs/en/serverless/apm/apm-ui-metrics.mdx
deleted file mode 100644
index d0a9cbcd06..0000000000
--- a/docs/en/serverless/apm/apm-ui-metrics.mdx
+++ /dev/null
@@ -1,27 +0,0 @@
----
-slug: /serverless/observability/apm-metrics
-title: Metrics
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-The **Metrics** overview provides APM agent-specific metrics,
-which lets you perform more in-depth root cause analysis investigations within the Applications UI.
-
-If you're experiencing a problem with your service, you can use this page to attempt to find the underlying cause.
-For example, you might be able to correlate a high number of errors with a long transaction duration, high CPU usage, or a memory leak.
-
-
-
-If you're using the Java APM agent, you can view metrics for each JVM.
-
-
-
-Breaking down metrics by JVM makes it much easier to analyze the provided metrics:
-CPU usage, memory usage, heap or non-heap memory,
-thread count, garbage collection rate, and garbage collection time spent per minute.
-
-
-
diff --git a/docs/en/serverless/apm/apm-ui-overview.mdx b/docs/en/serverless/apm/apm-ui-overview.mdx
deleted file mode 100644
index 43c629944e..0000000000
--- a/docs/en/serverless/apm/apm-ui-overview.mdx
+++ /dev/null
@@ -1,27 +0,0 @@
----
-slug: /serverless/observability/apm-ui-overview
-title: Navigate the Applications UI
-description: Learn how to navigate the Applications UI.
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-For a quick, high-level overview of the health and performance of your application,
-start with:
-
-* Services
-* Traces
-* Dependencies
-* Service Map
-
-Notice something awry? Select a service or trace and dive deeper with:
-
-* Service overview
-* Transactions
-* Trace sample timeline
-* Errors
-* Metrics
-* Infrastructure
-* Logs
-
diff --git a/docs/en/serverless/apm/apm-ui-service-map.mdx b/docs/en/serverless/apm/apm-ui-service-map.mdx
deleted file mode 100644
index 5c75b80ef1..0000000000
--- a/docs/en/serverless/apm/apm-ui-service-map.mdx
+++ /dev/null
@@ -1,113 +0,0 @@
----
-slug: /serverless/observability/apm-service-map
-title: Service map
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-A service map is a real-time visual representation of the instrumented services in your application's architecture.
-It shows you how these services are connected, along with high-level metrics like average transaction duration,
-requests per minute, and errors per minute.
-If enabled, service maps also integrate with machine learning—for real-time health indicators based on anomaly detection scores.
-All of these features can help you quickly and visually assess your services' status and health.
-
-We currently surface two types of service maps:
-
-* **Global**: All services instrumented with APM agents and the connections between them are shown.
-* **Service-specific**: Highlight connections for a selected service.
-
-## How do service maps work?
-
-Service Maps rely on distributed traces to draw connections between services.
-Since [distributed tracing](((apm-guide-ref))/apm-distributed-tracing.html) is enabled out-of-the-box for supported technologies, service maps are as well.
-However, if a service isn't instrumented,
-or a `traceparent` header isn't being propagated to it,
-distributed tracing will not work, and the connection will not be drawn on the map.
-
-## Visualize your architecture
-
-From **Services**, switch to the **Service Map** tab to get started.
-By default, all instrumented services and connections are shown.
-Whether you're onboarding a new engineer, or just trying to grasp the big picture,
-drag things around, zoom in and out, and begin to visualize how your services are connected.
-
-Customize what the service map displays using either the query bar or the environment selector.
-The query bar enables you to use advanced queries to customize the service map based on your needs.
-The environment selector allows you to narrow displayed results to a specific environment.
-This can be useful if you have two or more services, in separate environments, but with the same name.
-Use the environment drop-down to only see the data you're interested in, like `dev` or `production`.
-
-If there's a specific service that interests you, select that service to highlight its connections.
-Click **Focus map** to refocus the map on the selected service and lock the connection highlighting.
-Click the **Transactions** tab to jump to the Transaction overview for the selected service.
-You can also use the tabs at the top of the page to easily jump to the **Errors** or **Metrics** overview.
-
-
-
-## Anomaly detection with machine learning
-
-You can create machine learning jobs to calculate anomaly scores on APM transaction durations within the selected service.
-When these jobs are active, service maps will display a color-coded anomaly indicator based on the detected anomaly score:
-
-* Max anomaly score **≤25**: Service is healthy.
-* Max anomaly score **26-74**: Anomalous activity detected. Service may be degraded.
-* Max anomaly score **≥75**: Anomalous activity detected. Service is unhealthy.
-
-If an anomaly has been detected, click **View anomalies** to view the anomaly detection metric viewer.
-This time series analysis will display additional details on the severity and time of the detected anomalies.
-
-To learn how to create a machine learning job, refer to .
-
-## Legend
-
-Nodes appear on the map in one of two shapes:
-
-* **Circle**: Instrumented services. Interior icons are based on the language of the APM agent used.
-* **Diamond**: Databases, external, and messaging. Interior icons represent the generic type,
- with specific icons for known entities, like Elasticsearch.
-  Type and subtype are based on `span.type` and `span.subtype`.
-
-## Supported APM agents
-
-Service Maps are supported for the following APM agent versions:
-
-| Agent | Version |
-|---|---|
-| Go agent | ≥ v1.7.0 |
-| Java agent | ≥ v1.13.0 |
-| .NET agent | ≥ v1.3.0 |
-| Node.js agent | ≥ v3.6.0 |
-| PHP agent | ≥ v1.2.0 |
-| Python agent | ≥ v5.5.0 |
-| Ruby agent | ≥ v3.6.0 |
-
diff --git a/docs/en/serverless/apm/apm-ui-service-overview.mdx b/docs/en/serverless/apm/apm-ui-service-overview.mdx
deleted file mode 100644
index 8169865b55..0000000000
--- a/docs/en/serverless/apm/apm-ui-service-overview.mdx
+++ /dev/null
@@ -1,134 +0,0 @@
----
-slug: /serverless/observability/apm-service-overview
-title: Service Overview
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-import ThroughputTransactions from '../transclusion/kibana/apm/service-overview/throughput-transactions.mdx'
-import Ftr from '../transclusion/kibana/apm/service-overview/ftr.mdx'
-import Dependencies from '../transclusion/kibana/apm/service-overview/dependencies.mdx'
-
-Selecting a {/* non-mobile */} **service** brings you to the **Service overview**.
-The **Service overview** contains a wide variety of charts and tables that provide
-high-level visibility into how a service is performing across your infrastructure:
-
-* Service details like service version, runtime version, framework, and APM agent name and version
-* Container and orchestration information
-* Cloud provider, machine type, service name, region, and availability zone
-* Serverless function names and event trigger type
-* Latency, throughput, and errors over time
-* Service dependencies
-
-## Time series and expected bounds comparison
-
-For insight into the health of your services, you can compare how a service
-performs relative to a previous time frame or to the expected bounds from the
-corresponding ((anomaly-job)). For example, has latency been slowly increasing
-over time, did the service experience a sudden spike, is the throughput similar
-to what the ((ml)) job expects? Enabling a comparison can provide the answer.
-
-
-
-Select the **Comparison** box to apply a time-based or expected bounds comparison.
-The time-based comparison options are based on the selected time filter range:
-
-| Time filter | Time comparison options |
-|---|---|
-| ≤ 24 hours | One day or one week |
-| \> 24 hours and ≤ 7 days | One week |
-| \> 7 days | An identical amount of time immediately before the selected time range |
-
-The expected bounds comparison is powered by machine learning and requires anomaly detection to be enabled.
-
-## Latency
-
-Response times for the service. You can filter the **Latency** chart to display the average,
-95th, or 99th percentile latency times for the service.
-
-
-
-## Throughput and transactions
-
-
-
-## Failed transaction rate and errors
-
-
-
-The **Errors** table provides a high-level view of each error message when it first and last occurred,
-along with the total number of occurrences. This makes it very easy to quickly see which errors affect
-your services and take actions to rectify them. To do so, click **View errors**.
-
-
-
-## Span types average duration and dependencies
-
-The **Time spent by span type** chart visualizes each span type's average duration and helps you determine
-which spans could be slowing down transactions. The "app" label displayed under the
-chart indicates that something was happening within the application. This could signal that the APM
-agent does not have auto-instrumentation for whatever was happening during that time or that the time was spent in the
-application code and not in database or external requests.
-
-
-
-## Cold start rate
-
-The cold start rate chart is specific to serverless services, and displays the
-percentage of requests that trigger a cold start of a serverless function.
-A cold start occurs when a serverless function has not been used for a certain period of time.
-Analyzing the cold start rate can be useful for deciding how much memory to allocate to a function,
-or when to remove a large dependency.
-
-The cold start rate chart is currently supported for AWS Lambda
-functions and Azure functions.
-
-## Instances
-
-The **Instances** table displays a list of all the available service instances within the selected time range.
-Depending on how the service runs, the instance could be a host or a container. The table displays latency, throughput,
-failed transaction rate, CPU usage, and memory usage for each instance. By default, instances are sorted by _Throughput_.
-
-
-
-## Service metadata
-
-To view metadata relating to the service agent, and if relevant, the container and cloud provider,
-click on each icon located at the top of the page beside the service name.
-
-
-
-**Service information**
-
-* Service version
-* Runtime name and version
-* Framework name
-* APM agent name and version
-
-**Container information**
-
-* Operating system
-* Containerized (yes or no)
-* Total number of instances
-* Orchestration
-
-**Cloud provider information**
-
-* Cloud provider
-* Cloud service name
-* Availability zones
-* Machine types
-* Project ID
-* Region
-
-**Serverless information**
-
-* Function name(s)
-* Event trigger type
-
-**Alerts**
-
-* Recently fired alerts
-
diff --git a/docs/en/serverless/apm/apm-ui-services.mdx b/docs/en/serverless/apm/apm-ui-services.mdx
deleted file mode 100644
index 54a98eb2c4..0000000000
--- a/docs/en/serverless/apm/apm-ui-services.mdx
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /serverless/observability/apm-services
-title: Services
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-import FeatureBeta from '../partials/feature-beta.mdx'
-
-The **Services** inventory provides a quick, high-level overview of the health and general
-performance of all instrumented services.
-
-To help surface potential issues, services are sorted by their health status:
-**critical** → **warning** → **healthy** → **unknown**.
-Health status is powered by machine learning
-and requires anomaly detection to be enabled.
-
-In addition to health status, active alerts for each service are prominently displayed in the service inventory table. Selecting an active alert badge brings you to the **Alerts** tab where you can learn more about the active alert and take action.
-
-
-
-## Service groups
-
-import Roles from '../partials/roles.mdx'
-
-
-
-
-
-Group services together to build meaningful views that remove noise, simplify investigations across services,
-and combine related alerts.
-
-{/* This screenshot is reused in the alerts docs */}
-{/* Ensure it has an active alert showing */}
-
-
-To create a service group:
-
-1. In your ((observability)) project, go to **Applications** → **Services**.
-1. Switch to **Service groups**.
-1. Click **Create group**.
-1. Specify a name, color, and description.
-1. Click **Select services**.
-1. Specify a [Kibana Query Language (KQL)](((kibana-ref))/kuery-query.html) query to filter services
- by one or more of the following dimensions: `agent.name`, `service.name`, `service.language.name`,
- `service.environment`, `labels.`. Services that match the query within the last 24 hours will
- be assigned to the group.
-
-### Examples
-
-Not sure where to get started? Here are some sample queries you can build from:
-
-* **Group services by environment**: To group "production" services, use `service.environment : "production"`.
-* **Group services by name**: To group all services that end in "beat", use `service.name : *beat`. This will match services named "Auditbeat", "Heartbeat", "Filebeat", and so on.
diff --git a/docs/en/serverless/apm/apm-ui-trace-sample-timeline.mdx b/docs/en/serverless/apm/apm-ui-trace-sample-timeline.mdx
deleted file mode 100644
index a7fccd5bc1..0000000000
--- a/docs/en/serverless/apm/apm-ui-trace-sample-timeline.mdx
+++ /dev/null
@@ -1,75 +0,0 @@
----
-slug: /serverless/observability/apm-trace-sample-timeline
-title: Trace sample timeline
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-The trace sample timeline visualization is a high-level view of what your application was doing while it was trying to respond to a request.
-This makes it useful for visualizing where a selected transaction spent most of its time.
-
-
-
-View a span in detail by clicking on it in the timeline waterfall.
-For example, when you click on an SQL Select database query,
-the information displayed includes the actual SQL that was executed, how long it took,
-and the percentage of the trace's total time.
-You also get a stack trace, which shows the SQL query in your code.
-Finally, APM knows which files are your code and which are just modules or libraries that you've installed.
-These library frames will be minimized by default in order to show you the most relevant stack trace.
-
-
-A [span](((apm-guide-ref))/data-model-spans.html) is the duration of a single event.
-Spans are automatically captured by APM agents, and you can also define custom spans.
-Each span has a type and is defined by a different color in the timeline/waterfall visualization.
-
-
-
-
-## Investigate
-
-The trace sample timeline features an **Investigate** button which provides a quick way to jump
-to other areas of the Elastic Observability UI while maintaining the context of the currently selected trace sample.
-For example, quickly view:
-
-* logs and metrics for the selected pod
-* logs and metrics for the selected host
-* trace logs for the selected `trace.id`
-* uptime status of the selected domain
-* the service map filtered by the selected trace
-* the selected transaction in **Discover**
-* your custom links
-
-## Distributed tracing
-
-When a trace travels through multiple services it is known as a _distributed trace_.
-In the Applications UI, the colors in a distributed trace represent different services and
-are listed in the order they occur.
-
-
-
-As application architectures are shifting from monolithic to more distributed, service-based architectures,
-distributed tracing has become a crucial feature of modern application performance monitoring.
-It allows you to trace requests through your service architecture automatically, and visualize those traces in one single view in the Applications UI.
-From initial web requests to your front-end service, to queries made to your back-end services,
-this makes finding possible bottlenecks throughout your application much easier and faster.
-
-
-
-Don't forget: by definition, a distributed trace includes more than one transaction.
-When viewing distributed traces in the timeline waterfall,
-you'll see this icon: ,
-which indicates the next transaction in the trace.
-For easier problem isolation, transactions can be collapsed in the waterfall by clicking
-the icon to the left of the transactions.
-Transactions can also be expanded and viewed in detail by clicking on them.
-
-After exploring these traces,
-you can return to the full trace by clicking **View full trace**.
-
-
-Distributed tracing is supported by all APM agents, and there's no additional configuration needed.
-
-
diff --git a/docs/en/serverless/apm/apm-ui-traces.mdx b/docs/en/serverless/apm/apm-ui-traces.mdx
deleted file mode 100644
index 31e4fc7f19..0000000000
--- a/docs/en/serverless/apm/apm-ui-traces.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-slug: /serverless/observability/apm-traces
-title: Traces
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-
-
-Traces link together related transactions to show the end-to-end performance of how a request was served
-and which services were part of it.
-In addition to the Traces overview, you can view your application traces in the trace sample timeline waterfall.
-
-
-**Traces** displays your application's entry (root) transactions.
-Transactions with the same name are grouped together and only shown once in this table.
-If you're using distributed tracing,
-this view is key to finding the critical paths within your application.
-
-By default, transactions are sorted by _Impact_.
-Impact helps show the most used and slowest endpoints in your service — in other words,
-it's the collective amount of pain a specific endpoint is causing your users.
-If there's a particular endpoint you're worried about, select it to view its
-transaction details.
-
-You can also use queries to filter and search the transactions shown on this page. Note that only properties available on root transactions are searchable. For example, you can't search for `label.tier: 'high'`, as that field is only available on non-root transactions.
-
-
-
-## Trace explorer
-
-{/* */}
-**Trace explorer** is an experimental top-level search tool that allows you to query your traces using [Kibana Query Language (KQL)](((kibana-ref))/kuery-query.html) or [Event Query Language (EQL)](((ref))/eql.html).
-
-Curate your own custom queries, or use the to find and select edges to automatically generate queries based on your selection:
-
-
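-For example, a hypothetical KQL query (the service name and duration threshold are placeholders) that finds traces containing slow transactions from a single service:
-
-```txt
-service.name : "checkout-service" and transaction.duration.us > 1000000
-```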
-
diff --git a/docs/en/serverless/apm/apm-ui-transactions.mdx b/docs/en/serverless/apm/apm-ui-transactions.mdx
deleted file mode 100644
index 1f5092625f..0000000000
--- a/docs/en/serverless/apm/apm-ui-transactions.mdx
+++ /dev/null
@@ -1,178 +0,0 @@
----
-slug: /serverless/observability/apm-transactions
-title: Transactions
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-import LogOverview from '../transclusion/kibana/logs/log-overview.mdx'
-
-A _transaction_ describes an event captured by an Elastic APM agent instrumenting a service.
-APM agents automatically collect performance metrics on HTTP requests, database queries, and much more.
-The **Transactions** tab shows an overview of all transactions.
-
-
-
-The **Latency**, **Throughput**, **Failed transaction rate**, **Time spent by span type**, and **Cold start rate**
-charts display information on all transactions associated with the selected service:
-
-
- **Latency**
-
- Response times for the service. Options include average, 95th, and 99th percentile.
- If there's a weird spike that you'd like to investigate,
- you can simply zoom in on the graph — this will adjust the specific time range,
- and all of the data on the page will update accordingly.
-
- **Throughput**
-
- Visualize response codes: `2xx`, `3xx`, `4xx`, and so on.
- Useful for determining if more responses than usual are being served with a particular response code.
- Like in the latency graph, you can zoom in on anomalies to further investigate them.
-
- **Failed transaction rate**
-
- The failed transaction rate represents the percentage of failed transactions from the perspective of the selected service.
- It's useful for visualizing unexpected increases, decreases, or irregular patterns in a service's transactions.
-
-
-
- HTTP **transactions** from the HTTP server perspective do not consider a `4xx` status code (client error) as a failure
- because the failure was caused by the caller, not the HTTP server. Thus, `event.outcome=success` and there will be no increase in failed transaction rate.
-
- HTTP **spans** from the client perspective however, are considered failures if the HTTP status code is ≥ 400.
- These spans will set `event.outcome=failure` and increase the failed transaction rate.
-
- If there is no HTTP status, both transactions and spans are considered successful unless an error is reported.
-
-
-
- **Time spent by span type**
-
- Visualize where your application is spending most of its time.
- For example, is your app spending time in external calls, database processing, or application code execution?
-
- The time a transaction took to complete is also recorded and displayed on the chart under the "app" label.
- "app" indicates that something was happening within the application, but we're not sure exactly what.
- This could be a sign that the APM agent does not have auto-instrumentation for whatever was happening during that time.
-
- It's important to note that if you have asynchronous spans, the sum of all span times may exceed the duration of the transaction.
-
- **Cold start rate**
-
- Only applicable to serverless transactions, this chart displays the percentage of requests that trigger a cold start of a serverless function.
- See Cold starts for more information.
-
-
-
-## Transactions table
-
-The **Transactions** table displays a list of _transaction groups_ for the selected service.
-In other words, this view groups all transactions of the same name together,
-and only displays one entry for each group.
-
-
-
-By default, transaction groups are sorted by _Impact_.
-Impact helps show the most used and slowest endpoints in your service — in other words,
-it's the collective amount of pain a specific endpoint is causing your users.
-If there's a particular endpoint you're worried about, you can click on it to view the transaction details.
-
-
-
-If you only see one route in the Transactions table, or if you have transactions named "unknown route",
-it could be a symptom that the APM agent either wasn't installed correctly or doesn't support your framework.
-
-For further details, including troubleshooting and custom implementation instructions,
-refer to the documentation for each APM Agent you've implemented.
-
-
-
-
-
-## Transaction details
-
-Selecting a transaction group will bring you to the **transaction** details.
-This page is visually similar to the transaction overview, but it shows data from all transactions within
-the selected transaction group.
-
-
-
-
-
-### Latency distribution
-
-The latency distribution shows a plot of all transaction durations for the given time period.
-The following screenshot shows a typical distribution
-and indicates most of our requests were served quickly — awesome!
-The requests on the right are taking longer than average; we probably need to focus on them.
-
-
-
-Click and drag to select a latency duration _bucket_ to display up to 500 trace samples.
-
-
-
-### Trace samples
-
-Trace samples are based on the _bucket_ selection in the **Latency distribution** chart;
-update the samples by selecting a new _bucket_.
-The number of requests per bucket is displayed when hovering over the graph,
-and the selected bucket is highlighted to stand out.
-
-Each bucket presents up to ten trace samples in a **timeline**, trace sample **metadata**,
-and any related **logs**.
-
-**Trace sample timeline**
-
-Each sample has a trace timeline waterfall that shows how a typical request in that bucket executed.
-This waterfall is useful for understanding the parent/child hierarchy of transactions and spans,
-and ultimately determining _why_ a request was slow.
-For large waterfalls, expand problematic transactions and collapse well-performing ones
-for easier problem isolation and troubleshooting.
-
-
-
-
-More information on timeline waterfalls is available in spans.
-
-
-**Trace sample metadata**
-
-Learn more about a trace sample in the **Metadata** tab:
-
-* Labels: Custom labels added by APM agents
-* HTTP request/response information
-* Host information
-* Container information
-* Service: The service/application runtime, APM agent, name, and so on.
-* Process: The process ID that served the request.
-* APM agent information
-* URL
-* User: Requires additional configuration, but allows you to see which user experienced the current transaction.
-* FaaS information, like cold start, AWS request ID, trigger type, and trigger request ID
-
-
-All of this data is stored in documents in Elasticsearch.
-This means you can select **Actions** → **View transaction in Discover** to see the actual Elasticsearch document in Discover.
-
-
-**Trace sample logs**
-
-The **Logs** tab displays logs related to the sampled trace.
-
-
-
-
-
-
-
-### Correlations
-
-Correlations surface attributes of your data that are potentially correlated with high-latency or erroneous transactions.
-To learn more, see Find transaction latency and failure correlations.
-
-
-
diff --git a/docs/en/serverless/apm/apm-view-and-analyze-traces.mdx b/docs/en/serverless/apm/apm-view-and-analyze-traces.mdx
deleted file mode 100644
index 71ccf9a76c..0000000000
--- a/docs/en/serverless/apm/apm-view-and-analyze-traces.mdx
+++ /dev/null
@@ -1,26 +0,0 @@
----
-slug: /serverless/observability/apm-view-and-analyze-traces
-title: View and analyze traces
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-APM allows you to monitor your software services and applications in real time;
-visualize detailed performance information on your services,
-identify and analyze errors,
-and monitor host-level and APM agent-specific metrics like JVM and Go runtime metrics.
-
-## Visualizing application bottlenecks
-
-Having access to application-level insights with just a few clicks can drastically decrease the time you spend
-debugging errors, slow response times, and crashes.
-
-For example, you can see information about response times, requests per minute, and status codes per endpoint.
-You can even dive into a specific request sample and get a complete waterfall view of what your application is spending its time on.
-You might see that your bottlenecks are in database queries, cache calls, or external requests.
-For each incoming request and each application error,
-you can also see contextual information such as the request header, user information,
-system values, or custom data that you manually attached to the request.
-
diff --git a/docs/en/serverless/apm/apm.mdx b/docs/en/serverless/apm/apm.mdx
deleted file mode 100644
index cb1be97e14..0000000000
--- a/docs/en/serverless/apm/apm.mdx
+++ /dev/null
@@ -1,27 +0,0 @@
----
-slug: /serverless/observability/apm
-title: Application performance monitoring (APM)
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-Elastic APM is an application performance monitoring system.
-It allows you to monitor software services and applications in real time, by
-collecting detailed performance information on response time for incoming requests,
-database queries, calls to caches, external HTTP requests, and more.
-This makes it easy to pinpoint and fix performance problems quickly.
-
-Elastic APM also automatically collects unhandled errors and exceptions.
-Errors are grouped based primarily on the stack trace,
-so you can identify new errors as they appear and keep an eye on how many times specific errors happen.
-
-Metrics are another vital source of information when debugging production systems.
-Elastic APM agents automatically pick up basic host-level metrics and agent-specific metrics,
-like JVM metrics in the Java Agent, and Go runtime metrics in the Go Agent.
-
-## Give Elastic APM a try
-
-Ready to give Elastic APM a try? See Get started with traces and APM.
-
diff --git a/docs/en/serverless/cases/cases.mdx b/docs/en/serverless/cases/cases.mdx
deleted file mode 100644
index 7ef2a25057..0000000000
--- a/docs/en/serverless/cases/cases.mdx
+++ /dev/null
@@ -1,17 +0,0 @@
----
-slug: /serverless/observability/cases
-title: Cases
-description: Use cases to track progress toward solving problems detected in Elastic Observability.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-Collect and share information about observability issues by creating a case.
-Cases allow you to track key investigation details,
-add assignees and tags to your cases, set their severity and status, and add alerts,
-comments, and visualizations. You can also send cases to third-party systems by
-configuring external connectors.
-
-
-{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
\ No newline at end of file
diff --git a/docs/en/serverless/cases/create-manage-cases.mdx b/docs/en/serverless/cases/create-manage-cases.mdx
deleted file mode 100644
index a91ea184f4..0000000000
--- a/docs/en/serverless/cases/create-manage-cases.mdx
+++ /dev/null
@@ -1,117 +0,0 @@
----
-slug: /serverless/observability/create-a-new-case
-title: Create and manage cases
-description: Learn how to create a case, add files, and manage the case over time.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-Open a new case to keep track of issues and share the details with colleagues.
-To create a case in your Observability project:
-
-1. In your ((observability)) project, go to **Cases**.
-1. Click **Create case**.
-1. (Optional) If you defined templates, select one to use its default field values.
-1. Give the case a name, severity, and description.
-
-
- In the `Description` area, you can use
- [Markdown](https://www.markdownguide.org/cheat-sheet) syntax to create formatted text.
-
-
-1. (Optional) Add a category, assignees, and tags.
- {/* To do: Need to verify that a viewer cannot be assigned to a case
- (all I know is that they can _view_ the case) */}
- You can add users who are assigned the Editor user role (or a more permissive role) for the project.
-
-1. If you defined custom fields, they appear in the **Additional fields** section.
-
-1. (Optional) Under **External incident management system**, you can select a connector to send cases to an external system.
- If you've created any connectors previously, they will be listed here.
- If there are no connectors listed, you can create one.
-
-1. After you've completed all of the required fields, click **Create case**.
-
-
-You can also create a case from an alert or add an alert to an existing case. From the **Alerts** page, click the **More options** icon and choose either **Add to existing case** or **Create new case**, and select or complete the details as required.
-
-
-## Add files
-
-After you create a case, you can upload and manage files on the **Files** tab:
-
-
-{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-To download or delete the file or copy the file hash to your clipboard, open the action menu (…).
-The available hash functions are MD5, SHA-1, and SHA-256.
-
-When you upload a file, a comment is added to the case activity log.
-To view an image, click its name in the activity or file list.
-
-
-Uploaded files are also accessible under **Project settings** → **Management** → **Files**.
-When you export cases as [saved objects](((kibana-ref))/managing-saved-objects.html), the case files are not exported.
-
-
-You can add images and text, CSV, JSON, PDF, or ZIP files.
-For the complete list, check [`mime_types.ts`](https://github.com/elastic/kibana/blob/main/x-pack/plugins/cases/common/constants/mime_types.ts).
-
-
-There is a 10 MiB size limit for images. For all other MIME types, the limit is 100 MiB.
-
-
-{/*
-
-NOTE: Email notifications are not available in Observability projects yet.
-
-## Add email notifications
-
-You can configure email notifications that occur when users are assigned to
-cases.
-
-To do this, add the email addresses to the monitoring email allowlist.
-Follow the steps in [Send alerts by email](((cloud))/ec-watcher.html#ec-watcher-allowlist).
-
-You do not need to configure an email connector or update
-user settings, since the preconfigured Elastic-Cloud-SMTP connector is
-used by default.
-
-When you subsequently add assignees to cases, they receive an email.
-
-*/}
-
-## Send cases to external incident management systems
-
-To send a case to an external system, click the button in the *External incident management system* section of the individual case page.
-This information is not sent automatically.
-If you make further changes to the shared case fields, you should push the case again.
-
-For more information about configuring connections to external incident management systems, refer to .
-
-## Manage existing cases
-
-You can search existing cases and filter them by attributes such as assignees,
-categories, severity, status, and tags. You can also select multiple cases and use bulk
-actions to delete cases or change their attributes.
-
-To view a case, click on its name. You can then:
-
-* Add a new comment.
-* Edit existing comments and the description.
-* Add or remove assignees.
-* Add a connector (if you did not select one while creating the case).
-* Send updates to external systems (if external connections are configured).
-* Edit the category and tags.
-* Change the status.
-* Change the severity.
-* Remove an alert.
-* Refresh the case to retrieve the latest updates.
-* Close the case.
-* Reopen a closed case.
-
diff --git a/docs/en/serverless/cases/manage-cases-settings.mdx b/docs/en/serverless/cases/manage-cases-settings.mdx
deleted file mode 100644
index 2696bc38b8..0000000000
--- a/docs/en/serverless/cases/manage-cases-settings.mdx
+++ /dev/null
@@ -1,128 +0,0 @@
----
-slug: /serverless/observability/case-settings
-title: Configure case settings
-description: Change the default behavior of ((observability)) cases by adding connectors, custom fields, templates, and closure options.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-To access case settings in an ((observability)) project, go to **Cases** → **Settings**.
-
-
-{/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-## Case closures
-
-If you close cases in your external incident management system, the cases will remain open in Elastic Observability until you close them manually (the information is only sent in one direction).
-
-To close cases when they are sent to an external system, select **Automatically close cases when pushing new incident to external system**.
-
-## External incident management systems
-
-If you are using an external incident management system, you can integrate Elastic Observability
-cases with this system using connectors. These third-party systems are supported:
-
-* ((ibm-r))
-* ((jira)) (including ((jira)) Service Desk)
-* ((sn-itsm))
-* ((sn-sir))
-* ((swimlane))
-* TheHive
-* ((webhook-cm))
-
-You need to create a connector to send cases, which stores the information required to interact
-with an external system. For each case, you can send the title, description, and comment when
-you choose to push the case — for the **Webhook - Case Management** connector, you can also
-send the status and severity fields.
-
-
-{/* TODO: Verify user roles needed to create connectors... */}
-To add, modify, or delete a connector, you must have the Admin user role for the project
-(or a more permissive role).
-
-
-After creating a connector, you can set your cases to
-automatically close when they are sent to an external system.
-
-### Create a connector
-
-1. From the **Incident management system** list, select **Add new connector**.
-1. Select the system to send cases to: **((sn))**, **((jira))**, **((ibm-r))**,
- **((swimlane))**, **TheHive**, or **((webhook-cm))**.
-
- 
- {/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-1. Enter your required settings. For connector configuration details, refer to:
- - [((ibm-r)) connector](((kibana-ref))/resilient-action-type.html)
- - [((jira)) connector](((kibana-ref))/jira-action-type.html)
- - [((sn-itsm)) connector](((kibana-ref))/servicenow-action-type.html)
- - [((sn-sir)) connector](((kibana-ref))/servicenow-sir-action-type.html)
- - [((swimlane)) connector](((kibana-ref))/swimlane-action-type.html)
- - [TheHive connector](((kibana-ref))/thehive-action-type.html)
- - [((webhook-cm)) connector](((kibana-ref))/cases-webhook-action-type.html)
-
-1. Click **Save**.
-
-### Edit a connector
-
-You can create additional connectors, update existing connectors, and change the connector used to send cases to external systems.
-
-
-You can also configure which connector is used for each case individually. Refer to .
-
-
-To change the default connector used to send cases to external systems:
-
-1. Select the required connector from the **Incident management system** list.
-
-To update an existing connector:
-
-1. Click **Update \<connector name\>**.
-1. Update the connector fields as required.
-
-## Custom fields
-
-You can add optional and required fields for customized case collaboration.
-
-To create a custom field:
-
-1. In the **Custom fields** section, click **Add field**.
-
- 
- {/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-1. You must provide a field label and type (text or toggle).
- You can optionally designate it as a required field and provide a default value.
-
-When you create a custom field, it's added to all new and existing cases.
-In existing cases, new custom text fields initially have null values.
-
-You can subsequently remove or edit custom fields on the **Settings** page.
-
-## Templates
-
-
-
-You can make the case creation process faster and more consistent by adding templates.
-A template defines values for one or all of the case fields (such as severity, tags, description, and title) as well as any custom fields.
-
-To create a template:
-
-1. In the **Templates** section, click **Add template**.
-
- 
- {/* NOTE: This is an autogenerated screenshot. Do not edit it directly. */}
-
-1. You must provide a template name and case severity. You can optionally add template tags and a description, values for each case field, and a case connector.
-
-When users create cases, they can optionally select a template and use its field values or override them.
-
-
-If you update or delete templates, existing cases are unaffected.
-
diff --git a/docs/en/serverless/dashboards/dashboards-and-visualizations.mdx b/docs/en/serverless/dashboards/dashboards-and-visualizations.mdx
deleted file mode 100644
index f042f3602b..0000000000
--- a/docs/en/serverless/dashboards/dashboards-and-visualizations.mdx
+++ /dev/null
@@ -1,44 +0,0 @@
----
-slug: /serverless/observability/dashboards
-title: Dashboards
-description: Visualize your observability data using pre-built dashboards or create your own.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-Elastic provides a wide range of pre-built dashboards for visualizing observability data from a variety of sources.
-These dashboards are loaded automatically when you install [Elastic integrations](https://docs.elastic.co/integrations).
-
-You can also create new dashboards and visualizations based on your data views to get a full picture of your data.
-
-In your Observability project, go to **Dashboards** to see installed dashboards or create your own.
-This example shows dashboards loaded by the System integration:
-
-
-
-Notice you can filter the list of dashboards:
-
-* Use the text search field to filter by name or description.
-* Use the **Tags** menu to filter by tag. To create a new tag or edit existing tags, click **Manage tags**.
-* Click a dashboard's tags to toggle filtering for each tag.
-
-## Create new dashboards
-
-To create a new dashboard, click **Create Dashboard** and begin adding visualizations.
-You can create charts, graphs, maps, tables, and other types of visualizations from your data,
-or you can add visualizations from the library.
-
-You can also add other types of panels — such as filters, links, and text — and add
-controls like time sliders.
-
-For more information about creating dashboards,
-refer to [Create your first dashboard](((kibana-ref))/create-a-dashboard-of-panels-with-web-server-data.html).
-
-
- The tutorial about creating your first dashboard is written for ((kib)) users,
- but the steps for serverless are very similar.
- To load the sample data in serverless, go to **Project Settings** → **Integrations** in the navigation pane,
- then search for "sample data".
-
-
diff --git a/docs/en/serverless/elastic-entity-model.mdx b/docs/en/serverless/elastic-entity-model.mdx
deleted file mode 100644
index 7380a3e448..0000000000
--- a/docs/en/serverless/elastic-entity-model.mdx
+++ /dev/null
@@ -1,46 +0,0 @@
----
-slug: /serverless/observability/elastic-entity-model
-title: Elastic Entity Model
-description: Learn about the model that empowers entity-centric Elastic solution features and workflows.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-import Roles from './partials/roles.mdx'
-
-
-
-The Elastic Entity Model consists of:
-
-- a data model and related entity indices
-- an Entity Discovery Framework, which consists of [transforms](((ref))/transforms.html) and [Ingest pipelines](((ref))/ingest.html) that read from signal indices and write data to entity indices
-- a set of management APIs that empower entity-centric Elastic solution features and workflows
-
-In Elastic Observability,
-an _entity_ is an object of interest that can be associated with produced telemetry and identified as unique.
-Note that this definition intentionally aligns closely with the work of the [OpenTelemetry Entities SIG](https://github.com/open-telemetry/oteps/blob/main/text/entities/0256-entities-data-model.md#data-model).
-Examples of entities include (but are not limited to) services, hosts, and containers.
-
-The concept of an entity is important as a means to unify observability signals based on the underlying entity that the signals describe.
-
-
- - The Elastic Entity Model currently supports only the new inventory experience, which is limited to service, host, and container entities.
- - During Technical Preview, Entity Discovery Framework components are not enabled by default.
-
-
-## Enable the Elastic Entity Model
-
-
-
-You can enable the Elastic Entity Model from the new Inventory. If already enabled, you will not be prompted to enable the Elastic Entity Model.
-
-
-## Disable the Elastic Entity Model
-
-
-
-From the Dev Console, run the command: `DELETE kbn:/internal/entities/managed/enablement`
-
-## Limitations
-
-* [Cross-cluster search (CCS)](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cross-cluster-search.html) is not supported. EEM cannot leverage data stored on a remote cluster.
-* Services are only detected from documents that contain a `service.name` field and are stored in index patterns that match either `logs-*` or `apm-*`.
diff --git a/docs/en/serverless/infra-monitoring/analyze-hosts.mdx b/docs/en/serverless/infra-monitoring/analyze-hosts.mdx
deleted file mode 100644
index c1b37f0b1f..0000000000
--- a/docs/en/serverless/infra-monitoring/analyze-hosts.mdx
+++ /dev/null
@@ -1,303 +0,0 @@
----
-slug: /serverless/observability/analyze-hosts
-title: Analyze and compare hosts
-description: Get a metrics-driven view of your hosts backed by an easy-to-use interface called Lens.
-tags: [ 'serverless', 'observability', 'how to' ]
----
-
-
-import HostDetails from '../transclusion/host-details.mdx'
-
-import ContainerDetails from '../transclusion/container-details.mdx'
-
-
-
-We'd love to get your feedback!
-[Tell us what you think!](https://docs.google.com/forms/d/e/1FAIpQLScRHG8TIVb1Oq8ZhD4aks3P1TmgiM58TY123QpDCcBz83YC6w/viewform)
-
-The **Hosts** page provides a metrics-driven view of your infrastructure backed
-by an easy-to-use interface called Lens. On the **Hosts** page, you can view
-health and performance metrics to help you quickly:
-
-* Analyze and compare hosts without having to build new dashboards.
-* Identify which hosts trigger the most alerts.
-* Troubleshoot and resolve issues quickly.
-* View historical data to rule out false alerts and identify root causes.
-* Filter and search the data to focus on the hosts you care about the most.
-
-To access the **Hosts** page, in your ((observability)) project, go to
-**Infrastructure** → **Hosts**.
-
-
-
-To learn more about the metrics shown on this page, refer to the documentation.
-
-
-
-If you haven't added data yet, click **Add data** to search for and install an Elastic integration.
-
-Need help getting started? Follow the steps in
-Get started with system metrics.
-
-
-
-The **Hosts** page provides several ways to view host metrics:
-
-* Overview tiles show the number of hosts returned by your search plus
- averages of key metrics, including CPU usage, normalized load, and memory usage.
- Max disk usage is also shown.
-
-* The Host limit controls the maximum number of hosts shown on the page. The
- default is 50, which means the page shows data for the top 50 hosts based on the
- most recent timestamps. You can increase the host limit to see data for more
- hosts, but doing so may impact query performance.
-
-* The Hosts table shows a breakdown of metrics for each host along with an alert count
- for any hosts with active alerts. You may need to page through the list
- or change the number of rows displayed on each page to see all of your hosts.
-
-* Each host name is an active link to a page,
- where you can explore enhanced metrics and other observability data related to the selected host.
-
-* Table columns are sortable, but note that the sorting behavior is applied to
- the already returned data set.
-
-* The tabs at the bottom of the page show an overview of the metrics, logs,
- and alerts for all hosts returned by your search.
-
-
- For more information about creating and viewing alerts, refer to .
-
-
-
-
-## Filter the Hosts view
-
-The **Hosts** page provides several mechanisms for filtering the data on the
-page:
-
-* Enter a search query using [((kib)) Query Language](((kibana-ref))/kuery-query.html) to show metrics that match your search criteria. For example,
-  to see metrics for hosts running on Linux, enter `host.os.type : "linux"`.
-  Otherwise, you'll see metrics for all your monitored hosts (up to the number of
-  hosts specified by the host limit). For a query that combines several filters, see the example after this list.
-
-* Select additional criteria to filter the view:
- * In the **Operating System** list, select one or more operating systems
- to include (or exclude) metrics for hosts running the selected operating systems.
-
- * In the **Cloud Provider** list, select one or more cloud providers to
- include (or exclude) metrics for hosts running on the selected cloud providers.
-
- * In the **Service Name** list, select one or more service names to
- include (or exclude) metrics for the hosts running the selected services.
- Services must be instrumented by APM to be filterable.
- This filter is useful for comparing different hosts to determine whether a problem lies
- with a service or the host that it is running on.
-
-
- Filtered results are sorted by _document count_.
- Document count is the number of events received by Elastic for the hosts that match your filter criteria.
-
-
-* Change the date range in the time filter, or click and drag on a
- visualization to change the date range.
-
-* Within a visualization, click a point on a line and apply filters to set other
- visualizations on the page to the same time and/or host.
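-
-For example, the following KQL query (the cloud provider value is only an illustration) combines filters to focus on Linux hosts running on a specific cloud provider:
-
-```
-host.os.type : "linux" and cloud.provider : "gcp"
-```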
-
-
-
-## View metrics
-
-On the **Metrics** tab, view metrics trending over time, including CPU usage,
-normalized load, memory usage, disk usage, and other metrics related to disk IOPs and throughput.
-Place your cursor over a line to view metrics at a specific
-point in time. From within each visualization, you can choose to open the visualization in Lens.
-
-To see metrics for a specific host, refer to .
-
-{/* TODO: Uncomment this section if/when the inspect option feature is added back in.
-
-
-### Inspect and download metrics
-
-You can access a text-based view of the data underlying
-your metrics visualizations and optionally download the data to a
-comma-separated (CSV) file.
-
-Hover your cursor over a visualization, then in the upper-right corner, click
-the ellipsis icon to inspect the data.
-
-
-
-In the flyout, click **Download CSV** to download formatted or raw data to a CSV
-file.
-
-Click **View: Data** and notice that you can change the view to **Requests** to explore the request
-used to fetch the data and the response returned from ((es)). On the **Request** tab, click links
-to further inspect and analyze the request in the Dev Console or Search Profiler. */}
-
-
-
-### Open in Lens
-
-Metrics visualizations are powered by Lens, meaning you can continue your
-analysis in Lens if you require more flexibility. Hover your cursor over a
-visualization, then click the ellipsis icon in the upper-right corner to open
-the visualization in Lens.
-
-
-
-In Lens, you can examine all the fields and formulas used to create the
-visualization, make modifications to the visualization, and save your changes.
-
-For more information about using Lens, refer to the
-[((kib)) documentation about Lens](((kibana-ref))/lens.html).
-
-
-
-## View logs
-
-On the **Logs** tab of the **Hosts** page, view logs for the systems you are monitoring and search
-for specific log entries. This view shows logs for all of the hosts returned by
-the current query.
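-
-For example, a search like the following (the host name is hypothetical) narrows the view to error-level log entries from a single host:
-
-```
-host.name : "web-01" and log.level : "error"
-```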
-
-
-
-To see logs for a specific host, refer to .
-
-
-
-## View alerts
-
-On the **Alerts** tab of the **Hosts** page, view active alerts to pinpoint problems. Use this view
-to figure out which hosts triggered alerts and identify root causes. This view
-shows alerts for all of the hosts returned by the current query.
-
-From the **Actions** menu, you can choose to:
-
-* Add the alert to a new or existing case.
-* View rule details.
-* View alert details.
-
-
-
-To see alerts for a specific host, refer to .
-
-
-
- If your rules are triggering alerts that don't appear on the **Hosts** page,
- edit the rules and make sure they are correctly configured to associate the host name with the alert:
-
- * For Metric threshold or Custom threshold rules, select `host.name` in the **Group alerts by** field.
- * For Inventory rules, select **Host** for the node type under **Conditions**.
-
- To learn more about creating and managing rules, refer to .
-
-
-
-
-## View host details
-
-Without leaving the **Hosts** page, you can view enhanced metrics relating to
-each host running in your infrastructure. In the list of hosts, find the host
-you want to monitor, then click the **Toggle dialog with details**
-icon to display the host details overlay.
-
-
-To expand the overlay and view more detail, click **Open as page** in the upper-right corner.
-
-
-The host details overlay contains the following tabs:
-
-
-
-
-The metrics shown on the **Hosts** page are also available when viewing hosts on the **Inventory** page.
-
-
-
-
-## Why am I seeing dashed lines in charts?
-
-There are a few reasons why you may see dashed lines in your charts.
-
-* The chart interval is too short
-* Data is missing
-* The chart interval is too short and data is missing
-
-
-
-### The chart interval is too short
-
-In this example, the data emission rate is lower than the Lens chart interval.
-A dashed line connects the known data points to make it easier to visualize trends in the data.
-
-
-
-The chart interval is automatically set depending on the selected time duration.
-To fix this problem, change the selected time range at the top of the page.
-
-
-Want to dig in further while maintaining the selected time duration?
-Hover over the chart you're interested in and select **Options** → **Open in Lens**.
-Once in Lens, you can adjust the chart interval temporarily.
-Note that this change is not persisted in the **Hosts** view.
-
-
-
-
-### Data is missing
-
-A solid line indicates that the chart interval is set appropriately for the data transmission rate.
-In this example, a solid line turns into a dashed line—indicating missing data.
-You may want to investigate this time period to determine if there is an outage or issue.
-
-
-
-### The chart interval is too short and data is missing
-
-In the example shown in the screenshot,
-the data emission rate is lower than the Lens chart interval **and** there is missing data.
-
-This missing data can be hard to spot at first glance.
-The green boxes outline regular data emissions, while the missing data is outlined in pink.
-Similar to the above scenario, you may want to investigate the time period with the missing data
-to determine if there is an outage or issue.
-
-
-
-## Troubleshooting
-
-{/*
-Troubleshooting topic template:
-Title: Brief description of what the user sees/experiences
-Content:
-1. What the user sees/experiences (error message, UI, behavior, etc)
-2. Why it happens
-3. How to fix it
-*/}
-
-### What does _this host has been detected by APM_ mean?
-
-{/* What the user sees/experiences (error message, UI, behavior, etc) */}
-In the Hosts view, you might see a question mark icon ()
-before a host name with a tooltip note stating that the host has been detected by APM.
-{/* Why it happens */}
-When a host is detected by APM, but is not collecting full metrics
-(for example, through the system integration),
-it will be listed as a host with the partial metrics collected by APM.
-
-{/* How to fix it */}
-{/* N/A? */}
-
-{/* What the user sees/experiences (error message, UI, behavior, etc) */}
-### I don't recognize a host name and I see a question mark icon next to it
-
-{/* Why it happens */}
-This could mean that the APM agent has not been configured to use the correct host name.
-Instead, the host name might be the container name or the Kubernetes pod name.
-
-{/* How to fix it */}
-To get the correct host name, you need to set some additional configuration options,
-specifically `system.kubernetes.node.name` as described in Kubernetes data.
diff --git a/docs/en/serverless/infra-monitoring/aws-metrics.mdx b/docs/en/serverless/infra-monitoring/aws-metrics.mdx
deleted file mode 100644
index b5f456fdbd..0000000000
--- a/docs/en/serverless/infra-monitoring/aws-metrics.mdx
+++ /dev/null
@@ -1,83 +0,0 @@
----
-slug: /serverless/observability/aws-metrics
-title: AWS metrics
-description: Learn about key metrics used for AWS monitoring.
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-
-
-
-
-Using this module generates additional AWS charges for GetMetricData API requests.
-
-
-
-
-
-## Monitor EC2 instances
-
-To analyze EC2 instance metrics,
-you can select view filters based on the following predefined metrics,
-or you can add custom metrics.
-
-| Metric | Calculation |
-|---|---|
-| **CPU Usage** | Average of `aws.ec2.cpu.total.pct`. |
-| **Inbound Traffic** | Average of `aws.ec2.network.in.bytes_per_sec`. |
-| **Outbound Traffic** | Average of `aws.ec2.network.out.bytes_per_sec`. |
-| **Disk Reads (Bytes)** | Average of `aws.ec2.diskio.read.bytes_per_sec`. |
-| **Disk Writes (Bytes)** | Average of `aws.ec2.diskio.write.bytes_per_sec`. |
-
-
-
-## Monitor S3 buckets
-
-To analyze S3 bucket metrics,
-you can select view filters based on the following predefined metrics,
-or you can add custom metrics.
-
-| Metric | Calculation |
-|---|---|
-| **Bucket Size** | Average of `aws.s3_daily_storage.bucket.size.bytes`. |
-| **Total Requests** | Average of `aws.s3_request.requests.total`. |
-| **Number of Objects** | Average of `aws.s3_daily_storage.number_of_objects`. |
-| **Downloads (Bytes)** | Average of `aws.s3_request.downloaded.bytes`. |
-| **Uploads (Bytes)** | Average of `aws.s3_request.uploaded.bytes`. |
-
-
-
-## Monitor SQS queues
-
-To analyze SQS queue metrics,
-you can select view filters based on the following predefined metrics,
-or you can add custom metrics.
-
-| Metric | Calculation |
-|---|---|
-| **Messages Available** | Max of `aws.sqs.messages.visible`. |
-| **Messages Delayed** | Max of `aws.sqs.messages.delayed`. |
-| **Messages Added** | Max of `aws.sqs.messages.sent`. |
-| **Messages Returned Empty** | Max of `aws.sqs.messages.not_visible`. |
-| **Oldest Message** | Max of `aws.sqs.oldest_message_age.sec`. |
-
-
-
-## Monitor RDS databases
-
-To analyze RDS database metrics,
-you can select view filters based on the following predefined metrics,
-or you can add custom metrics.
-
-| Metric | Calculation |
-|---|---|
-| **CPU Usage** | Average of `aws.rds.cpu.total.pct`. |
-| **Connections** | Average of `aws.rds.database_connections`. |
-| **Queries Executed** | Average of `aws.rds.queries`. |
-| **Active Transactions** | Average of `aws.rds.transactions.active`. |
-| **Latency** | Average of `aws.rds.latency.dml`. |
-
-For information about the fields used by the Infrastructure UI to display AWS services metrics, see the
-.
\ No newline at end of file
diff --git a/docs/en/serverless/infra-monitoring/configure-infra-settings.mdx b/docs/en/serverless/infra-monitoring/configure-infra-settings.mdx
deleted file mode 100644
index de4d710949..0000000000
--- a/docs/en/serverless/infra-monitoring/configure-infra-settings.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-slug: /serverless/observability/configure-intra-settings
-title: Configure settings
-description: Learn how to configure infrastructure UI settings.
-tags: [ 'serverless', 'observability', 'how to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-
-
-From the main ((observability)) menu, go to **Infrastructure** → **Inventory** or **Hosts**,
-and click the **Settings** link at the top of the page.
-The following settings are available:
-
-
-
- **Name**
- Name of the source configuration.
-
-
- **Indices**
- ((ipm-cap)) or patterns used to match ((es)) indices that contain metrics. The default patterns are `metrics-*,metricbeat-*`.
-
-
- **Machine Learning**
- The minimum severity score required to display anomalies in the Infrastructure UI. The default is 50.
-
-
- **Features**
- Turn new features on and off.
-
-
-Click **Apply** to save your changes.
-
-If the fields are grayed out and cannot be edited, you may not have sufficient privileges to change the source configuration.
diff --git a/docs/en/serverless/infra-monitoring/container-metrics.mdx b/docs/en/serverless/infra-monitoring/container-metrics.mdx
deleted file mode 100644
index ad6bedc4ae..0000000000
--- a/docs/en/serverless/infra-monitoring/container-metrics.mdx
+++ /dev/null
@@ -1,186 +0,0 @@
----
-slug: /serverless/observability/container-metrics
-title: Container metrics
-description: Learn about key container metrics used for container monitoring.
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-
-
-Learn about key container metrics displayed in the Infrastructure UI:
-
-* Docker
-* Kubernetes
-
-
-
-
-## Docker container metrics
-
-These are the key metrics displayed for Docker containers.
-
-
-
-### CPU usage metrics
-
-
-
- **CPU Usage (%)**
-
- Average CPU for the container.
-
- **Field Calculation:** `average(docker.cpu.total.pct)`
-
-
-
-
-
-
-### Memory metrics
-
-
-
- **Memory Usage (%)**
-
- Average memory usage for the container.
-
- **Field Calculation:** `average(docker.memory.usage.pct)`
-
-
-
-
-
-
-### Network metrics
-
-
-
- **Inbound Traffic (RX)**
-
- Derivative of the maximum of `docker.network.in.bytes` scaled to a 1 second rate.
-
- **Field Calculation:** `average(docker.network.inbound.bytes) * 8 / (max(metricset.period, kql='docker.network.inbound.bytes: *') / 1000)`
-
-
-
- **Outbound Traffic (TX)**
-
- Derivative of the maximum of `docker.network.out.bytes` scaled to a 1 second rate.
-
- **Field Calculation:** `average(docker.network.outbound.bytes) * 8 / (max(metricset.period, kql='docker.network.outbound.bytes: *') / 1000)`
-
-
-
-
-### Disk metrics
-
-
-
- **Disk Read IOPS**
-
- Average count of read operations from the device per second.
-
- **Field Calculation:** `counter_rate(max(docker.diskio.read.ops), kql='docker.diskio.read.ops: *')`
-
-
-
- **Disk Write IOPS**
-
- Average count of write operations from the device per second.
-
- **Field Calculation:** `counter_rate(max(docker.diskio.write.ops), kql='docker.diskio.write.ops: *')`
-
-
-
-
-
-
-## Kubernetes container metrics
-
-These are the key metrics displayed for Kubernetes (containerd) containers.
-
-
-
-### CPU usage metrics
-
-
-
- **CPU Usage (%)**
-
- Average CPU for the container.
-
- **Field Calculation:** `average(kubernetes.container.cpu.usage.limit.pct)`
-
-
-
-
-
-
-### Memory metrics
-
-
-
- **Memory Usage (%)**
-
- Average memory usage for the container.
-
- **Field Calculation:** `average(kubernetes.container.memory.usage.limit.pct)`
-
-
-
\ No newline at end of file
diff --git a/docs/en/serverless/infra-monitoring/detect-metric-anomalies.mdx b/docs/en/serverless/infra-monitoring/detect-metric-anomalies.mdx
deleted file mode 100644
index e7a0d6456d..0000000000
--- a/docs/en/serverless/infra-monitoring/detect-metric-anomalies.mdx
+++ /dev/null
@@ -1,77 +0,0 @@
----
-slug: /serverless/observability/detect-metric-anomalies
-title: Detect metric anomalies
-description: Detect and inspect memory usage and network traffic anomalies for hosts and Kubernetes pods.
-tags: [ 'serverless', 'observability', 'how to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-
-
-You can create ((ml)) jobs to detect and inspect memory usage and network traffic anomalies for hosts and Kubernetes pods.
-
-You can model system memory usage, along with inbound and outbound network traffic across hosts or pods.
-You can detect unusual increases in memory usage and unusually high inbound or outbound traffic across hosts or pods.
-
-
-
-## Enable ((ml)) jobs for hosts or Kubernetes pods
-
-Create a ((ml)) job to detect anomalous memory usage and network traffic automatically.
-
-After creating ((ml)) jobs, you cannot change their settings.
-You can recreate these jobs later.
-However, doing so removes any previously detected anomalies.
-
-{/* lint ignore anomaly-detection observability */}
-1. In your ((observability)) project, go to **Infrastructure** → **Inventory**
-and click the **Anomaly detection** link at the top of the page.
-1. Under **Hosts** or **Kubernetes Pods**, click **Enable** to create a ((ml)) job.
-1. Choose a start date for the ((ml)) analysis. ((ml-cap)) jobs analyze the last four weeks of data and continue to run indefinitely.
-1. Select a partition field.
- Partitions allow you to create independent models for different groups of data that share similar behavior.
- For example, you may want to build separate models for machine type or cloud availability zone so that anomalies are not weighted equally across groups.
-1. By default, ((ml)) jobs analyze all of your metric data.
- You can filter this list to view only the jobs or metrics that you are interested in.
- For example, you can filter by job name and node name to view specific ((anomaly-detect)) jobs for that host.
-1. Click **Enable jobs**.
-1. You're now ready to explore your metric anomalies. Click **Anomalies**.
-
-
-
-The **Anomalies** table displays a list of each single metric ((anomaly-detect)) job for the specific host or Kubernetes pod.
-By default, anomaly jobs are sorted by time to show the most recent job.
-
-Along with each anomaly job and the node name,
-detected anomalies with a severity score equal to 50 or higher are listed.
-These scores represent a severity of "warning" or higher in the selected time period.
-The **summary** value represents the increase between the actual value and the expected ("typical") value of the metric in the anomaly record result.
-
-To drill down and analyze the metric anomaly,
-select **Actions → Open in Anomaly Explorer** to view the Anomaly Explorer.
-You can also select **Actions** → **Show in Inventory** to view the host or Kubernetes pod on the **Inventory** page,
-filtered by the specific metric.
-
-
-
-These predefined ((anomaly-jobs)) use [custom rules](((ml-docs))/ml-rules.html).
-To update the rules in the Anomaly Explorer, select **Actions** → **Configure rules**.
-The changes only take effect for new results.
-If you want to apply the changes to existing results, clone and rerun the job.
-
-
-
-
-
-## History chart
-
-On the **Inventory** page, click **Show history** to view the metric values within the selected time frame.
-Detected anomalies with an anomaly score equal to 50 or higher are highlighted in red.
-To examine the detected anomalies, use the Anomaly Explorer.
-
-
diff --git a/docs/en/serverless/infra-monitoring/get-started-with-metrics.mdx b/docs/en/serverless/infra-monitoring/get-started-with-metrics.mdx
deleted file mode 100644
index aefb9438ab..0000000000
--- a/docs/en/serverless/infra-monitoring/get-started-with-metrics.mdx
+++ /dev/null
@@ -1,57 +0,0 @@
----
-slug: /serverless/observability/get-started-with-metrics
-title: Get started with system metrics
-description: Learn how to onboard your system metrics data quickly.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-In this guide you'll learn how to onboard system metrics data from a machine or server,
-then observe the data in ((observability)).
-
-To onboard system metrics data:
-
-1. Create a new ((observability)) project, or open an existing one.
-1. In your ((observability)) project, go to **Project Settings** → **Integrations**.
-1. Type **System** in the search bar, then select the integration to see more details about it.
-1. Click **Add System**.
-1. Follow the in-product steps to install the System integration and deploy an ((agent)).
-The sequence of steps varies depending on whether you have already installed an integration.
-
- * When configuring the System integration, make sure that **Collect metrics from System instances** is turned on.
- * Expand each configuration section to verify that the settings are correct for your host.
- For example, you may want to turn on **System core metrics** to get a complete view of your infrastructure.
-
-Notice that you can also configure the integration to collect logs.
-
-
- Do not try to deploy a second ((agent)) to the same system.
-  You have a couple of options:
-
- * **Use the System integration to collect system logs and metrics.** To do this,
- uninstall the standalone agent you deployed previously,
- then follow the in-product steps to install the System integration and deploy an ((agent)).
- * **Configure your existing standalone agent to collect metrics.** To do this,
- edit the deployed ((agent))'s YAML file and add metric inputs to the configuration manually.
- Manual configuration is a time-consuming process.
- To save time, you can follow the in-product steps that describe how to deploy a standalone ((agent)),
-    and use the generated configuration as a source for the input configurations that you need to add to your standalone config file.
-
-
-After the agent is installed and successfully streaming metrics data,
-go to **Infrastructure** → **Inventory** or **Hosts** to see a metrics-driven view of your infrastructure.
-To learn more, refer to or .
-
-## Next steps
-
-Now that you've added metrics and explored your data,
-learn how to onboard other types of data:
-
-*
-*
-*
diff --git a/docs/en/serverless/infra-monitoring/handle-no-results-found-message.mdx b/docs/en/serverless/infra-monitoring/handle-no-results-found-message.mdx
deleted file mode 100644
index 157d14bcb3..0000000000
--- a/docs/en/serverless/infra-monitoring/handle-no-results-found-message.mdx
+++ /dev/null
@@ -1,47 +0,0 @@
----
-slug: /serverless/observability/handle-no-results-found-message
-title: Understanding "no results found" message
-description: Learn about the reasons for "no results found" messages and how to fix them.
-tags: [ 'serverless', 'observability', 'how to' ]
----
-
-
-
-To correctly render visualizations in the ((observability)) UI,
-all metrics used by the UI must be present in the collected data.
-For a description of these metrics,
-refer to .
-
-There are several reasons why metrics might be missing from the collected data:
-
-**The visualization requires a metric that's not relevant to your monitored hosts**
-
-For example, if you're only observing Windows hosts, the 'load' metric is not collected because 'load' is not a Windows concept.
-In this situation, you can ignore the "no results found" message.
-
-**You may not be collecting all the required metrics**
-
-This could be for any of these reasons:
-
-* The integration that collects the missing metrics is not installed.
-For example, to collect metrics from your host system, you can use the [System integration](((integrations-docs))/system).
-To fix the problem, install the integration and configure it to send the missing metrics.
-
-
- Follow one of our quickstarts under **Observability** → **Add data** → **Collect and analyze logs** to make sure the correct integrations are installed and all required metrics are collected.
-
-
-* You are not using the Elastic Distribution of the OpenTelemetry Collector, which automatically maps data to the Elastic Common Schema (ECS) fields expected by the visualization.
-
-
- Follow our OpenTelemetry quickstart under **Observability** → **Add data** → **Monitor infrastructure** to make sure OpenTelemetry data is correctly mapped to ECS-compliant fields.
-
-
-{/* TODO: Make quickstart an active link after the docs are merged. */}
-
-* You have explicitly chosen not to send these metrics.
-You may choose to limit the metrics sent to Elastic to save on space and improve cluster performance.
-For example, the System integration has options to choose which metrics you want to send.
-You can [edit the integration policy](((fleet-guide))/edit-or-delete-integration-policy.html) to begin collecting the missing metrics. For example:
-
- 
diff --git a/docs/en/serverless/infra-monitoring/host-metrics.mdx b/docs/en/serverless/infra-monitoring/host-metrics.mdx
deleted file mode 100644
index db97c8bbd3..0000000000
--- a/docs/en/serverless/infra-monitoring/host-metrics.mdx
+++ /dev/null
@@ -1,416 +0,0 @@
----
-slug: /serverless/observability/host-metrics
-title: Host metrics
-description: Learn about key host metrics used for host monitoring.
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-
-
-Learn about key host metrics displayed in the Infrastructure UI:
-
-* Hosts
-* CPU usage
-* Memory
-* Log
-* Network
-* Disk
-* Legacy
-
-
-
-## Hosts metrics
-
-
-
- **Hosts**
-
- Number of hosts returned by your search criteria.
-
- **Field Calculation**: `count(system.cpu.cores)`
-
-
-
-
-
-
-## CPU usage metrics
-
-
-
- **CPU Usage (%)**
-
-  Average percentage of CPU time spent in states other than Idle and IOWait, normalized by the number of CPU cores. Includes both time spent in user space and kernel space. 100% means all CPUs of the host are busy.
-
- **Field Calculation**: `average(system.cpu.total.norm.pct)`
-
- For legacy metric calculations, refer to Legacy metrics.
-
-
-
- **CPU Usage - iowait (%)**
-
- The percentage of CPU time spent in wait (on disk).
-
- **Field Calculation**: `average(system.cpu.iowait.pct) / max(system.cpu.cores)`
-
-
-
- **CPU Usage - irq (%)**
-
- The percentage of CPU time spent servicing and handling hardware interrupts.
-
- **Field Calculation**: `average(system.cpu.irq.pct) / max(system.cpu.cores)`
-
-
-
- **CPU Usage - nice (%)**
-
- The percentage of CPU time spent on low-priority processes.
-
- **Field Calculation**: `average(system.cpu.nice.pct) / max(system.cpu.cores)`
-
-
-
- **CPU Usage - softirq (%)**
-
- The percentage of CPU time spent servicing and handling software interrupts.
-
- **Field Calculation**: `average(system.cpu.softirq.pct) / max(system.cpu.cores)`
-
-
-
- **CPU Usage - steal (%)**
-
- The percentage of CPU time spent in involuntary wait by the virtual CPU while the hypervisor was servicing another processor. Available only on Unix.
-
- **Field Calculation**: `average(system.cpu.steal.pct) / max(system.cpu.cores)`
-
-
-
- **CPU Usage - system (%)**
-
- The percentage of CPU time spent in kernel space.
-
- **Field Calculation**: `average(system.cpu.system.pct) / max(system.cpu.cores)`
-
-
-
- **CPU Usage - user (%)**
-
-  The percentage of CPU time spent in user space. On multi-core systems, you can have percentages that are greater than 100%. For example, if 3 cores are at 60% use, then `system.cpu.user.pct` will be 180%.
-
- **Field Calculation**: `average(system.cpu.user.pct) / max(system.cpu.cores)`
-
-
-
- **Load (1m)**
-
- 1 minute load average.
-
- Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).
-
- **Field Calculation**: `average(system.load.1)`
-
-
-
- **Load (5m)**
-
- 5 minute load average.
-
- Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).
-
- **Field Calculation**: `average(system.load.5)`
-
-
-
- **Load (15m)**
-
- 15 minute load average.
-
- Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).
-
- **Field Calculation**: `average(system.load.15)`
-
-
-
- **Normalized Load**
-
- 1 minute load average normalized by the number of CPU cores.
-
- Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).
-
- 100% means the 1 minute load average is equal to the number of CPU cores of the host.
-
-  For example, on a host with 32 CPU cores, if the 1 minute load average is 32, the value reported here is 100%. If the 1 minute load average is 48, the value reported here is 150%.
-
- **Field Calculation**: `average(system.load.1) / max(system.load.cores)`
-
-
-
-
-
-
-## Memory metrics
-
-
-
- **Memory Cache**
-
- Memory (page) cache.
-
- **Field Calculation**: `average(system.memory.used.bytes ) - average(system.memory.actual.used.bytes)`
-
-
-
- **Memory Free**
-
- Total available memory.
-
- **Field Calculation**: `max(system.memory.total) - average(system.memory.actual.used.bytes)`
-
-
-
- **Memory Free (excluding cache)**
-
- Total available memory excluding the page cache.
-
- **Field Calculation**: `system.memory.free`
-
-
-
- **Memory Total**
-
- Total memory capacity.
-
- **Field Calculation**: `avg(system.memory.total)`
-
-
-
- **Memory Usage (%)**
-
- Percentage of main memory usage excluding page cache.
-
- This includes resident memory for all processes plus memory used by the kernel structures and code apart from the page cache.
-
- A high level indicates a situation of memory saturation for the host. For example, 100% means the main memory is entirely filled with memory that can't be reclaimed, except by swapping out.
-
- **Field Calculation**: `average(system.memory.actual.used.pct)`
-
-
-
- **Memory Used**
-
- Main memory usage excluding page cache.
-
- **Field Calculation**: `average(system.memory.actual.used.bytes)`
-
-
-
-
-
-
-## Log metrics
-
-
-
- **Log Rate**
-
- Derivative of the cumulative sum of the document count scaled to a 1 second rate. This metric relies on the same indices as the logs.
-
- **Field Calculation**: `cumulative_sum(doc_count)`
-
-
-
-
-
-
-## Network metrics
-
-
-
- **Network Inbound (RX)**
-
- Number of bytes that have been received per second on the public interfaces of the hosts.
-
- **Field Calculation**: `sum(host.network.ingress.bytes) * 8 / 1000`
-
- For legacy metric calculations, refer to Legacy metrics.
-
-
-
- **Network Outbound (TX)**
-
- Number of bytes that have been sent per second on the public interfaces of the hosts.
-
- **Field Calculation**: `sum(host.network.egress.bytes) * 8 / 1000`
-
- For legacy metric calculations, refer to Legacy metrics.
-
-
-
-
-## Disk metrics
-
-
-
- **Disk Latency**
-
- Time spent to service disk requests.
-
- **Field Calculation**: `average(system.diskio.read.time + system.diskio.write.time) / (system.diskio.read.count + system.diskio.write.count)`
-
-
-
- **Disk Read IOPS**
-
- Average count of read operations from the device per second.
-
- **Field Calculation**: `counter_rate(max(system.diskio.read.count), kql='system.diskio.read.count: *')`
-
-
-
- **Disk Read Throughput**
-
- Average number of bytes read from the device per second.
-
- **Field Calculation**: `counter_rate(max(system.diskio.read.bytes), kql='system.diskio.read.bytes: *')`
-
-
-
- **Disk Usage - Available (%)**
-
- Percentage of disk space available.
-
- **Field Calculation**: `1-average(system.filesystem.used.pct)`
-
-
-
- **Disk Usage - Max (%)**
-
- Percentage of disk space used. A high percentage indicates that a partition on a disk is running out of space.
-
- **Field Calculation**: `max(system.filesystem.used.pct)`
-
-
-
- **Disk Write IOPS**
-
- Average count of write operations from the device per second.
-
- **Field Calculation**: `counter_rate(max(system.diskio.write.count), kql='system.diskio.write.count: *')`
-
-
-
- **Disk Write Throughput**
-
- Average number of bytes written from the device per second.
-
- **Field Calculation**: `counter_rate(max(system.diskio.write.bytes), kql='system.diskio.write.bytes: *')`
-
-
-
-
-
-
-## Legacy metrics
-
-Over time, we may change the formula used to calculate a specific metric.
-To avoid affecting your existing rules, instead of changing the actual metric definition,
-we create a new metric and refer to the old one as "legacy."
-
-The UI and any new rules you create will use the new metric definition.
-However, any alerts that use the old definition will refer to the metric as "legacy."
-
-
-
- **CPU Usage (legacy)**
-
- Percentage of CPU time spent in states other than Idle and IOWait, normalized by the number of CPU cores. This includes both time spent on user space and kernel space.
- 100% means all CPUs of the host are busy.
-
- **Field Calculation**: `(average(system.cpu.user.pct) + average(system.cpu.system.pct)) / max(system.cpu.cores)`
-
-
-
- **Network Inbound (RX) (legacy)**
-
- Number of bytes that have been received per second on the public interfaces of the hosts.
-
- **Field Calculation**: `average(host.network.ingress.bytes) * 8 / (max(metricset.period, kql='host.network.ingress.bytes: *') / 1000)`
-
-
-
- **Network Outbound (TX) (legacy)**
-
- Number of bytes that have been sent per second on the public interfaces of the hosts.
-
- **Field Calculation**: `average(host.network.egress.bytes) * 8 / (max(metricset.period, kql='host.network.egress.bytes: *') / 1000)`
-
-
-
diff --git a/docs/en/serverless/infra-monitoring/infra-monitoring.mdx b/docs/en/serverless/infra-monitoring/infra-monitoring.mdx
deleted file mode 100644
index 18ae85e5d1..0000000000
--- a/docs/en/serverless/infra-monitoring/infra-monitoring.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
----
-slug: /serverless/observability/infrastructure-monitoring
-title: Infrastructure monitoring
-description: Monitor metrics from your servers, Docker, Kubernetes, Prometheus, and other services and applications.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-
-
-((observability)) allows you to visualize infrastructure metrics to help diagnose problematic spikes,
-identify high resource utilization, automatically discover and track pods,
-and unify your metrics with logs and APM data.
-
-Using ((agent)) integrations, you can ingest and analyze metrics from servers,
-Docker containers, Kubernetes orchestrations, explore and analyze application
-telemetry, and more.
-
-For more information, refer to the following links:
-
-* :
-Learn how to onboard your system metrics data quickly.
-* :
-Use the **Inventory page** to get a metrics-driven view of your infrastructure grouped by resource type.
-* :
-Use the **Hosts** page to get a metrics-driven view of your infrastructure backed by an easy-to-use interface called Lens.
-* : Detect and inspect memory usage and network traffic anomalies for hosts and Kubernetes pods.
-* : Learn how to configure infrastructure UI settings.
-* : Learn about key metrics used for infrastructure monitoring.
-* : Learn about the fields required to display data in the Infrastructure UI.
-
-By default, the Infrastructure UI displays metrics from ((es)) indices that
-match the `metrics-*` and `metricbeat-*` index patterns. To learn how to change
-this behavior, refer to Configure settings.
diff --git a/docs/en/serverless/infra-monitoring/kubernetes-pod-metrics.mdx b/docs/en/serverless/infra-monitoring/kubernetes-pod-metrics.mdx
deleted file mode 100644
index a8328cc7ec..0000000000
--- a/docs/en/serverless/infra-monitoring/kubernetes-pod-metrics.mdx
+++ /dev/null
@@ -1,25 +0,0 @@
----
-slug: /serverless/observability/kubernetes-pod-metrics
-title: Kubernetes pod metrics
-description: Learn about key metrics used for Kubernetes monitoring.
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-
-
-To analyze Kubernetes pod metrics,
-you can select view filters based on the following predefined metrics,
-or you can add custom metrics.
-
-
-| Metric | Calculation |
-|---|---|
-| **CPU Usage** | Average of `kubernetes.pod.cpu.usage.node.pct`. |
-| **Memory Usage** | Average of `kubernetes.pod.memory.usage.node.pct`. |
-| **Inbound Traffic** | Derivative of the maximum of `kubernetes.pod.network.rx.bytes` scaled to a 1 second rate. |
-| **Outbound Traffic** | Derivative of the maximum of `kubernetes.pod.network.tx.bytes` scaled to a 1 second rate. |
-
-For information about the fields used by the Infrastructure UI to display Kubernetes pod metrics, see the
-.
diff --git a/docs/en/serverless/infra-monitoring/metrics-app-fields.mdx b/docs/en/serverless/infra-monitoring/metrics-app-fields.mdx
deleted file mode 100644
index d3592dc50f..0000000000
--- a/docs/en/serverless/infra-monitoring/metrics-app-fields.mdx
+++ /dev/null
@@ -1,451 +0,0 @@
----
-slug: /serverless/observability/infrastructure-monitoring-required-fields
-title: Required fields
-description: Learn about the fields required to display data in the Infrastructure UI.
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-This section lists the fields the Infrastructure UI uses to display data.
-Please note that some of the fields listed here are not [ECS fields](((ecs-ref))/ecs-reference.html#_what_is_ecs).
-
-## Additional field details
-
-The `event.dataset` field is required to display data properly in some views. This field
-is a combination of `metricset.module`, which is the ((metricbeat)) module name, and `metricset.name`,
-which is the metricset name. For example, metrics collected by the `system` module's `cpu` metricset
-have an `event.dataset` value of `system.cpu`.
-
-To determine each metric's optimal time interval, all charts use `metricset.period`.
-If `metricset.period` is not available, charts fall back to 1-minute intervals.
-
-
-
-## Base fields
-
-The `base` field set contains all fields which are on the top level. These fields are common across all types of events.
-
-
-
-
- `@timestamp`
-
- Date/time when the event originated.
-
- This is the date/time extracted from the event, typically representing when the source generated the event.
- If the event source has no original timestamp, this value is typically populated by the first time the pipeline received the event.
- Required field for all events.
-
- Example: `May 27, 2020 @ 15:22:27.982`
-
- date
-
-
-
- `message`
-
- For log events the message field contains the log message, optimized for viewing in a log viewer.
-
- For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event.
-
- If multiple messages exist, they can be combined into one message.
-
- Example: `Hello World`
-
- text
-
-
-
-
-
-## Hosts fields
-
-These fields must be mapped to display host data in the ((infrastructure-app)).
-
-
-
-
- `host.name`
-
- Name of the host.
-
- It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use.
-
- Example: `MacBook-Elastic.local`
-
- keyword
-
-
- `host.ip`
-
- IP of the host that records the event.
-
- ip
-
-
-
-
-
-## Docker container fields
-
-These fields must be mapped to display Docker container data in the ((infrastructure-app)).
-
-
-
- `container.id`
-
- Unique container id.
-
- Example: `data`
-
- keyword
-
-
-
- `container.name`
-
- Container name.
-
- keyword
-
-
-
- `container.ip_address`
-
- IP of the container.
-
- *Not an ECS field*
-
- ip
-
-
-
-
-
-
-## Kubernetes pod fields
-
-These fields must be mapped to display Kubernetes pod data in the ((infrastructure-app)).
-
-
-
-
- `kubernetes.pod.uid`
-
- Kubernetes Pod UID.
-
- Example: `8454328b-673d-11ea-7d80-21010a840123`
-
- *Not an ECS field*
-
- keyword
-
-
-
- `kubernetes.pod.name`
-
- Kubernetes pod name.
-
- Example: `nginx-demo`
-
- *Not an ECS field*
-
- keyword
-
-
-
- `kubernetes.pod.ip`
-
- IP of the Kubernetes pod.
-
- *Not an ECS field*
-
- keyword
-
-
-
-
-
-## AWS EC2 instance fields
-
-These fields must be mapped to display EC2 instance data in the ((infrastructure-app)).
-
-
-
-
- `cloud.instance.id`
-
- Instance ID of the host machine.
-
- Example: `i-1234567890abcdef0`
-
- keyword
-
-
-
- `cloud.instance.name`
-
- Instance name of the host machine.
-
- keyword
-
-
-
- `aws.ec2.instance.public.ip`
-
- Instance public IP of the host machine.
-
- *Not an ECS field*
-
- keyword
-
-
-
-
-
-## AWS S3 bucket fields
-
-These fields must be mapped to display S3 bucket data in the ((infrastructure-app)).
-
-
-
- `aws.s3.bucket.name`
-
- The name or ID of the AWS S3 bucket.
-
- *Not an ECS field*
-
- keyword
-
-
-
-
-
-## AWS SQS queue fields
-
-These fields must be mapped to display SQS queue data in the ((infrastructure-app)).
-
-
-
-
- `aws.sqs.queue.name`
-
- The name or ID of the AWS SQS queue.
-
- *Not an ECS field*
-
- keyword
-
-
-
-
-
-## AWS RDS database fields
-
-These fields must be mapped to display RDS database data in the ((infrastructure-app)).
-
-
-
-
- `aws.rds.db_instance.arn`
-
- Amazon Resource Name (ARN) for each RDS.
-
- *Not an ECS field*
-
- keyword
-
-
-
- `aws.rds.db_instance.identifier`
-
- Contains a user-supplied database identifier. This identifier is the unique key that identifies a DB instance.
-
- *Not an ECS field*
-
- keyword
-
-
-
-
-
-## Additional grouping fields
-
-Depending on which entity you select in the **Inventory** view, these additional fields can be mapped to group entities by.
-
-
-
-
- `cloud.availability_zone`
-
- Availability zone in which this host is running.
-
- Example: `us-east-1c`
-
- keyword
-
-
-
- `cloud.machine.type`
-
- Machine type of the host machine.
-
- Example: `t2.medium`
-
- keyword
-
-
-
- `cloud.region`
-
- Region in which this host is running.
-
- Example: `us-east-1`
-
- keyword
-
-
-
- `cloud.instance.id`
-
- Instance ID of the host machine.
-
- Example: `i-1234567890abcdef0`
-
- keyword
-
-
-
- `cloud.provider`
-
- Name of the cloud provider. Example values are `aws`, `azure`, `gcp`, or `digitalocean`.
-
- Example: `aws`
-
- keyword
-
-
-
- `cloud.instance.name`
-
- Instance name of the host machine.
-
- keyword
-
-
-
- `cloud.project.id`
-
- Name of the project in Google Cloud.
-
- *Not an ECS field*
-
- keyword
-
-
-
- `service.type`
-
-  The type of service the data is collected from.
-
- The type can be used to group and correlate logs and metrics from one service type.
-
- For example, the service type for metrics collected from ((es)) is `elasticsearch`.
-
- Example: `elasticsearch`
-
- *Not an ECS field*
-
- keyword
-
-
-
- `host.hostname`
-
-  Name of the host. This field is required if you want to use ((ml-features)).
-
- It normally contains what the `hostname` command returns on the host machine.
-
- Example: `Elastic.local`
-
- keyword
-
-
-
- `host.os.name`
-
- Operating system name, without the version.
-
- Multi-fields:
-
- os.name.text (type: text)
-
- Example: `Mac OS X`
-
- keyword
-
-
-
- `host.os.kernel`
-
- Operating system kernel version as a raw string.
-
- Example: `4.4.0-112-generic`
-
- keyword
-
-
diff --git a/docs/en/serverless/infra-monitoring/metrics-reference.mdx b/docs/en/serverless/infra-monitoring/metrics-reference.mdx
deleted file mode 100644
index 9000ccde73..0000000000
--- a/docs/en/serverless/infra-monitoring/metrics-reference.mdx
+++ /dev/null
@@ -1,18 +0,0 @@
----
-slug: /serverless/observability/metrics-reference
-title: Metrics reference
-description: Learn about key metrics used for infrastructure monitoring.
-tags: [ 'serverless', 'observability', 'reference' ]
----
-
-
-
-
-
-Learn about the key metrics displayed in the Infrastructure UI and how they
-are calculated.
-
-* Host metrics
-* Kubernetes pod metrics
-* Container metrics
-* AWS metrics
\ No newline at end of file
diff --git a/docs/en/serverless/infra-monitoring/troubleshooting-infra.mdx b/docs/en/serverless/infra-monitoring/troubleshooting-infra.mdx
deleted file mode 100644
index ac34ceba55..0000000000
--- a/docs/en/serverless/infra-monitoring/troubleshooting-infra.mdx
+++ /dev/null
@@ -1,23 +0,0 @@
----
-slug: /serverless/observability/troubleshooting-infrastructure-monitoring
-title: Troubleshooting
-description: Learn how to troubleshoot issues with infrastructure monitoring.
-tags: [ 'serverless', 'observability', 'how to' ]
----
-
-
-
-Learn how to troubleshoot common issues on your own or ask for help.
-
-*
-
-## Elastic Support
-
-We offer a support experience unlike any other.
-Our team of professionals 'speak human and code' and love making your day.
-[Learn more about subscriptions](https://www.elastic.co/subscriptions).
-
-## Discussion forum
-
-For other questions and feature requests,
-visit our [discussion forum](https://discuss.elastic.co/c/observability).
diff --git a/docs/en/serverless/infra-monitoring/view-infrastructure-metrics.mdx b/docs/en/serverless/infra-monitoring/view-infrastructure-metrics.mdx
deleted file mode 100644
index 13a7a34ec1..0000000000
--- a/docs/en/serverless/infra-monitoring/view-infrastructure-metrics.mdx
+++ /dev/null
@@ -1,148 +0,0 @@
----
-slug: /serverless/observability/view-infrastructure-metrics
-title: View infrastructure metrics by resource type
-description: Get a metrics-driven view of your infrastructure grouped by resource type.
-tags: [ 'serverless', 'observability', 'how to' ]
----
-
-
-
-import HostDetails from '../transclusion/host-details.mdx'
-
-import ContainerDetails from '../transclusion/container-details.mdx'
-
-
-
-The **Infrastructure Inventory** page provides a metrics-driven view of your entire infrastructure grouped by
-the resources you are monitoring. All monitored resources emitting
-a core set of infrastructure metrics are displayed to give you a quick view of the overall health
-of your infrastructure.
-
-To access the **Infrastructure Inventory** page, in your ((observability)) project,
-go to **Infrastructure inventory**.
-
-
-
-To learn more about the metrics shown on this page, refer to the .
-
-
-
-If you haven't added data yet, click **Add data** to search for and install an Elastic integration.
-
-Need help getting started? Follow the steps in
-Get started with system metrics.
-
-
-
-
-
-## Filter the Inventory view
-
-To get started with your analysis, select the type of resources you want to show
-in the high-level view. From the **Show** menu, select one of the following:
-
-* **Hosts** — the default
-* **Kubernetes Pods**
-* **Docker Containers** — shows _all_ containers, not just Docker
-* **AWS** — includes EC2 instances, S3 buckets, RDS databases, and SQS queues
-
-When you hover over each resource in the waffle map, the metrics specific to
-that resource are displayed.
-
-You can sort by resource, group the resource by specific fields related to it, and sort by
-either name or metric value. For example, you can filter the view to display the memory usage
-of your Kubernetes pods, grouped by namespace, and sorted by the memory usage value.
-
-
-
-You can also use the search bar to create structured queries using [((kib)) Query Language](((kibana-ref))/kuery-query.html).
-For example, enter `host.hostname : "host1"` to view only the information for `host1`.
-
-To examine the metrics for a specific time, use the time filter to select the date and time.
-
-
-
-## View host metrics
-
-By default the **Infrastructure Inventory** page displays a waffle map that shows the hosts you
-are monitoring and the current CPU usage for each host.
-Alternatively, you can click the **Table view** icon
-to switch to a table view.
-
-Without leaving the **Infrastructure Inventory** page, you can view enhanced metrics relating to each host
-running in your infrastructure. On the waffle map, select a host to display the host details
-overlay.
-
-
-To expand the overlay and view more detail, click **Open as page** in the upper-right corner.
-
-
-The host details overlay contains the following tabs:
-
-
-
-
-These metrics are also available when viewing hosts on the **Hosts**
-page.
-
-
-
-
-## View container metrics
-
-When you select **Docker containers**, the **Inventory** page displays a waffle map that shows the containers you
-are monitoring and the current CPU usage for each container.
-Alternatively, you can click the **Table view** icon
-to switch to a table view.
-
-Without leaving the **Inventory** page, you can view enhanced metrics relating to each container
-running in your infrastructure.
-
-
- The waffle map shows _all_ monitored containers, including containerd,
- provided that the data collected from the container has the `container.id` field.
- However, the waffle map currently only displays metrics for Docker fields.
- This display problem will be resolved in a future release.
-
-
-On the waffle map, select a container to display the container details
-overlay.
-
-
- To expand the overlay and view more detail, click **Open as page** in the upper-right corner.
-
-
-The container details overlay contains the following tabs:
-
-
-
-
-
-## View metrics for other resources
-
-When you have searched and filtered for a specific resource, you can drill down to analyze the
-metrics relating to it. For example, when viewing Kubernetes Pods in the high-level view,
-click the Pod you want to analyze and select **Kubernetes Pod metrics** to see detailed metrics:
-
-
-
-
-
-## Add custom metrics
-
-If the predefined metrics displayed on the Inventory page for each resource are not
-sufficient for your specific use case, you can add and define custom metrics.
-
-Select your resource, and from the **Metric** filter menu, click **Add metric**.
-
-
-
-
-
-## Integrate with Logs and APM
-
-Depending on the features you have installed and configured, you can view logs or traces relating to a specific resource.
-For example, in the high-level view, when you click a Kubernetes Pod resource, you can choose:
-
-* **Kubernetes Pod logs** to view the corresponding logs in the ((logs-app)).
-* **Kubernetes Pod APM traces** to view the corresponding traces in the ((apm-app)).
diff --git a/docs/en/serverless/inventory.mdx b/docs/en/serverless/inventory.mdx
deleted file mode 100644
index a4859a4a44..0000000000
--- a/docs/en/serverless/inventory.mdx
+++ /dev/null
@@ -1,100 +0,0 @@
----
-slug: /serverless/observability/inventory
-title: Inventory
-description: Learn about the new Inventory experience that enables you to monitor all your entities from one single place.
-tags: [ 'serverless', 'observability', 'inventory' ]
----
-
-import Roles from './partials/roles.mdx'
-
-
-
-Inventory provides a single place to observe the status of your entire ecosystem of hosts, containers, and services at a glance, even just from logs. From there, you can monitor and understand the health of your entities, check what needs attention, and start your investigations.
-
-
- The new Inventory requires the Elastic Entity Model (EEM). To learn more, refer to .
-
-
-
-
-Inventory is currently available for hosts, containers, and services, but it will scale to support all of your entities.
-
-The EEM currently supports hosts, services, and containers (identified by `host.name`, `service.name`, and `container.id`) in data matching the following index patterns:
-
-**Hosts**
-
-Where `host.name` is set in `metrics-*`, `logs-*`, `filebeat-*`, and `metricbeat-*`
-
-**Services**
-
-Where `service.name` is set in `filebeat*`, `logs-*`, `metrics-apm.service_transaction.1m*`, and `metrics-apm.service_summary.1m*`
-
-**Containers**
-
-Where `container.id` is set in `metrics-*`, `logs-*`, `filebeat-*`, and `metricbeat-*`
-
-Inventory allows you to:
-
-- Filter your entities to get a high-level view of what you have, leveraging your own tags and labels.
-- Drill down into any host, container, or service to understand its performance.
-- Debug resource bottlenecks in your services caused by the containers and hosts they run on.
-- Easily discover all entities related to the host, container, or service you are viewing, leveraging your tags and labels.
-
-## Explore your entities
-
-1. In your ((observability)) project, go to **Inventory** to view all of your entities.
-
- When you open the Inventory for the first time, you'll be asked to enable the EEM. Once enabled, the Inventory will be accessible to anyone with the appropriate privileges.
-
-
- The Inventory feature can be completely disabled using the `observability:entityCentricExperience` flag in **Stack Management**.
-
-
-
-1. In the search bar, search for your entities by name or type, for example `entity.type:service`.
-
-For each entity, you can click the entity name and get a detailed view. For example, for an entity of type `service`, you get the following details:
-
-- Overview
-- Transactions
-- Dependencies
-- Errors
-- Metrics
-- Infrastructure
-- Service Map
-- Logs
-- Alerts
-- Dashboards
-
-
-
-If you open an entity of type `host` or `container` that does not have infrastructure data, some of the visualizations will be blank and some features on the page will not be fully populated.
-
-## Add entities to the Inventory
-
-Entities are added to the Inventory through one of the following approaches: **Add data** or **Associate existing service logs**.
-
-### Add data
-To add entities, select **Add data** from the left-hand navigation and choose one of the following onboarding journeys:
-
-
-Auto-detect logs and metrics
-
- Detects hosts (with metrics and logs)
-
-
-
-Kubernetes
-
- Detects hosts, containers, and services
-
-
-Elastic APM / OpenTelemetry / Synthetic Monitor
-
- Detects services
-
-
-
-### Associate existing service logs
-
-To learn how, refer to .
\ No newline at end of file
diff --git a/docs/en/serverless/logging/add-logs-service-name.mdx b/docs/en/serverless/logging/add-logs-service-name.mdx
deleted file mode 100644
index 1c1cd49c73..0000000000
--- a/docs/en/serverless/logging/add-logs-service-name.mdx
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /serverless/observability/add-logs-service-name
-title: Add a service name to logs
-description: Learn how to add a service name field to your logs.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-Adding the `service.name` field to your logs associates them with the services that generate them.
-You can use this field to view and manage logs for distributed services located on multiple hosts.
-
-To add a service name to your logs, either:
-
-- Use the `add_fields` processor through an integration, ((agent)) configuration, or ((filebeat)) configuration.
-- Map an existing field from your data stream to the `service.name` field.
-
-## Use the add fields processor to add a service name
-For log data without a service name, use the [`add_fields` processor](((fleet-guide))/add_fields-processor.html) to add the `service.name` field.
-You can add the processor in an integration's settings or in the ((agent)) or ((filebeat)) configuration.
-
-For example, adding the `add_fields` processor to the inputs section of a standalone ((agent)) or ((filebeat)) configuration would add `your_service_name` as the `service.name` field:
-
-```console
-processors:
- - add_fields:
- target: service
- fields:
- name: your_service_name
-```
-
-Adding the `add_fields` processor to an integration's settings would add `your_service_name` as the `service.name` field:
-
-
-
-For more on defining processors, refer to [define processors](((fleet-guide))/elastic-agent-processor-configuration.html).
-
-## Map an existing field to the service name field
-
-For logs with an existing field that represents the service name, map that field to the `service.name` field using the [alias field type](((ref))/field-alias.html).
-Follow these steps to update your mapping:
-
-1. Go to **Management** → **Index Management** → **Index Templates**.
-1. Search for the index template you want to update.
-1. From the **Actions** menu for that template, select **edit**.
-1. Go to **Mappings**, and select **Add field**.
-1. Under **Field type**, select **Alias** and add `service.name` to the **Field name**.
-1. Under **Field path**, select the existing field you want to map to the service name.
-1. Select **Add field**.
-
-For more ways to add a field to your mapping, refer to [add a field to an existing mapping](((ref))/explicit-mapping.html#add-field-mapping).
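-
-If you prefer to manage mappings with the API instead of the UI, the alias can also be added through a custom component template. The following is only a sketch: it assumes your logs index template composes a `logs@custom` component template and that the existing field holding the service name is called `app_name`; substitute your own template and field names.
-
-```console
-PUT _component_template/logs@custom
-{
-  "template": {
-    "mappings": {
-      "properties": {
-        "service.name": {
-          "type": "alias",
-          "path": "app_name"
-        }
-      }
-    }
-  }
-}
-```
-
-The alias takes effect for backing indices created after the template change, for example after the data stream rolls over.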
-
-## Additional ways to process data
-
-The ((stack)) provides additional ways to process your data:
-
-- **[Ingest pipelines](((ref))/ingest.html):** convert data to ECS, normalize field data, or enrich incoming data.
-- **[Logstash](((logstash-ref))/introduction.html):** enrich your data using input, output, and filter plugins.
\ No newline at end of file
diff --git a/docs/en/serverless/logging/correlate-application-logs.mdx b/docs/en/serverless/logging/correlate-application-logs.mdx
deleted file mode 100644
index d83bb7ddd6..0000000000
--- a/docs/en/serverless/logging/correlate-application-logs.mdx
+++ /dev/null
@@ -1,89 +0,0 @@
----
-slug: /serverless/observability/correlate-application-logs
-title: Stream application logs
-description: Learn about application logs and options for ingesting them.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-import CorrelateLogs from '../transclusion/observability/application-logs/correlate-logs.mdx'
-
-Application logs provide valuable insight into events that have occurred within your services and applications.
-
-The format of your logs (structured or plaintext) influences your log ingestion strategy.
-
-## Plaintext logs vs. structured Elastic Common Schema (ECS) logs
-
-Logs are typically produced as either plaintext or structured.
-Plaintext logs contain only text and have no special formatting, for example:
-
-```log
-2019-08-06T12:09:12.375Z INFO:spring-petclinic: Tomcat started on port(s): 8080 (http) with context path, org.springframework.boot.web.embedded.tomcat.TomcatWebServer
-2019-08-06T12:09:12.379Z INFO:spring-petclinic: Started PetClinicApplication in 7.095 seconds (JVM running for 9.082), org.springframework.samples.petclinic.PetClinicApplication
-2019-08-06T14:08:40.199Z DEBUG:spring-petclinic: init find form, org.springframework.samples.petclinic.owner.OwnerController
-```
-
-Structured logs follow a predefined, repeatable pattern or structure.
-This structure is applied at write time — preventing the need for parsing at ingest time.
-The Elastic Common Schema (ECS) defines a common set of fields to use when structuring logs.
-This structure allows logs to be easily ingested,
-and provides the ability to correlate, search, and aggregate on individual fields within your logs.
-
-For example, the previous example logs might look like this when structured with ECS-compatible JSON:
-
-```json
-{"@timestamp":"2019-08-06T12:09:12.375Z", "log.level": "INFO", "message":"Tomcat started on port(s): 8080 (http) with context path ''", "service.name":"spring-petclinic","process.thread.name":"restartedMain","log.logger":"org.springframework.boot.web.embedded.tomcat.TomcatWebServer"}
-{"@timestamp":"2019-08-06T12:09:12.379Z", "log.level": "INFO", "message":"Started PetClinicApplication in 7.095 seconds (JVM running for 9.082)", "service.name":"spring-petclinic","process.thread.name":"restartedMain","log.logger":"org.springframework.samples.petclinic.PetClinicApplication"}
-{"@timestamp":"2019-08-06T14:08:40.199Z", "log.level":"DEBUG", "message":"init find form", "service.name":"spring-petclinic","process.thread.name":"http-nio-8080-exec-8","log.logger":"org.springframework.samples.petclinic.owner.OwnerController","transaction.id":"28b7fb8d5aba51f1","trace.id":"2869b25b5469590610fea49ac04af7da"}
-```
-
-## Ingesting logs
-
-There are several ways to ingest application logs into your project.
-Your specific situation helps determine the method that's right for you.
-
-### Plaintext logs
-
-With ((filebeat)) or ((agent)), you can ingest plaintext logs, including existing logs, from any programming language or framework without modifying your application or its configuration.
-
-For plaintext logs to be useful, you need to use ((filebeat)) or ((agent)) to parse the log data.
-
-** Learn more in Plaintext logs**
-
-### ECS formatted logs
-
-Logs formatted in ECS don't require manual parsing, and the configuration can be reused across applications. They also support log correlation. You can format your logs in ECS by using ECS logging plugins or ((apm-agent)) ECS reformatting.
-
-#### ECS logging plugins
-
-Add ECS logging plugins to your logging libraries to format your logs into ECS-compatible JSON that doesn't require parsing.
-
-To use ECS logging, you need to modify your application and its log configuration.
-
-** Learn more in ECS formatted logs**
-
-#### ((apm-agent)) log reformatting
-
-Some Elastic ((apm-agent))s can automatically reformat application logs to ECS format
-without adding an ECS logger dependency or modifying the application.
-
-This feature is supported for the following ((apm-agent))s:
-
-* [Ruby](((apm-ruby-ref))/configuration.html#config-log-ecs-formatting)
-* [Python](((apm-py-ref))/logs.html#log-reformatting)
-* [Java](((apm-java-ref))/logs.html#log-reformatting)
-
-** Learn more in ECS formatted logs**
-
-### ((apm-agent)) log sending
-
-Automatically capture and send logs directly to the managed intake service using the ((apm-agent)) without using ((filebeat)) or ((agent)).
-
-Log sending is supported in the Java ((apm-agent)).
-
-** Learn more in ((apm-agent)) log sending**
-
-## Log correlation
-
-
\ No newline at end of file
diff --git a/docs/en/serverless/logging/ecs-application-logs.mdx b/docs/en/serverless/logging/ecs-application-logs.mdx
deleted file mode 100644
index 0c7ce55372..0000000000
--- a/docs/en/serverless/logging/ecs-application-logs.mdx
+++ /dev/null
@@ -1,185 +0,0 @@
----
-slug: /serverless/observability/ecs-application-logs
-title: ECS formatted application logs
-description: Use an ECS logger or an ((apm-agent)) to format your logs in ECS format.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import InstallWidget from '../transclusion/observability/tab-widgets/filebeat-install/widget.mdx'
-import SetupWidget from '../transclusion/observability/tab-widgets/filebeat-setup/widget.mdx'
-import StartWidget from '../transclusion/observability/tab-widgets/filebeat-start/widget.mdx'
-import ConfigureFilebeat from '../transclusion/observability/tab-widgets/filebeat-logs/widget.mdx'
-
-
-
-Logs formatted in Elastic Common Schema (ECS) don't require manual parsing, and the configuration can be reused across applications. ECS-formatted logs, when paired with an ((apm-agent)), support log correlation so you can easily view the logs that belong to a particular trace.
-
-You can format your logs in ECS format the following ways:
-* **ECS loggers:** plugins for your logging libraries that reformat your logs into ECS format.
-* **((apm-agent)) ECS reformatting:** Java, Ruby, and Python ((apm-agent))s automatically reformat application logs to ECS format without a logger.
-
-## ECS loggers
-
-ECS loggers reformat your application logs into ECS-compatible JSON, removing the need for manual parsing.
-ECS loggers require ((filebeat)) or ((agent)) configured to monitor and capture application logs.
-In addition, pairing ECS loggers with your framework's ((apm-agent)) enables log correlation, so you can easily view the logs that belong to a particular trace.
-
-### Get started
-
-For more information on adding an ECS logger to your application, refer to the guide for your framework:
-
-* [.NET](((ecs-logging-dotnet-ref))/setup.html)
-* Go: [zap](((ecs-logging-go-zap-ref))/setup.html), [logrus](((ecs-logging-go-logrus-ref))/setup.html)
-* [Java](((ecs-logging-java-ref))/setup.html)
-* Node.js: [morgan](((ecs-logging-nodejs-ref))/morgan.html), [pino](((ecs-logging-nodejs-ref))/pino.html), [winston](((ecs-logging-nodejs-ref))/winston.html)
-* [PHP](((ecs-logging-php-ref))/setup.html)
-* [Python](((ecs-logging-python-ref))/installation.html)
-* [Ruby](((ecs-logging-ruby-ref))/setup.html)
-
-
-
-## APM agent ECS reformatting
-
-Java, Ruby, and Python ((apm-agent))s can automatically reformat application logs to ECS format without an ECS logger or the need to modify your application. The ((apm-agent)) also allows for log correlation so you can easily view logs that belong to a particular trace.
-
-To set up log ECS reformatting:
-
-1. Enable ((apm-agent)) reformatting.
-1. Ingest logs with ((filebeat)) or ((agent)).
-1. View logs in Logs Explorer.
-
-### Enable log ECS reformatting
-
-Log ECS reformatting is controlled by the `log_ecs_reformatting` configuration option, and is disabled by default. Refer to the guide for your framework for information on enabling:
-
-* [Java](((apm-java-ref))/config-logging.html#config-log-ecs-reformatting)
-* [Ruby](((apm-ruby-ref))/configuration.html#config-log-ecs-formatting)
-* [Python](((apm-py-ref))/configuration.html#config-log_ecs_reformatting)
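-
-As an illustration only (exact option names, values, and configuration mechanisms are agent-specific, so check the guide for your agent), enabling reformatting for a Java service might look like setting the option through a JVM system property:
-
-```shell
-# Hypothetical example: start a Java service with ECS log reformatting enabled
-java -javaagent:/path/to/elastic-apm-agent.jar \
-  -Delastic.apm.service_name=my-service \
-  -Delastic.apm.log_ecs_reformatting=OVERRIDE \
-  -jar my-service.jar
-```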
-
-### Ingest logs
-
-After enabling log ECS reformatting, send your application logs to your project using one of the following shipping tools:
-
-* **((filebeat)):** A lightweight data shipper that sends log data to your project.
-* **((agent)):** A single agent for logs, metrics, security data, and threat prevention. With Fleet, you can centrally manage ((agent)) policies and lifecycles directly from your project.
-
-### Ingest logs with ((filebeat))
-
-
-Use ((filebeat)) version 8.11+ for the best experience when ingesting logs.
-
-
-Follow these steps to ingest application logs with ((filebeat)).
-
-#### Step 1: Install ((filebeat))
-
-Install ((filebeat)) on the server you want to monitor by running the commands that align with your system:
-
-
-
-#### Step 2: Connect to your project
-
-Connect to your project using an API key to set up ((filebeat)). Set the following information in the `filebeat.yml` file:
-
-```yaml
-output.elasticsearch:
- hosts: ["your-projects-elasticsearch-endpoint"]
- api_key: "id:api_key"
-```
-
-1. Set the `hosts` to your project's ((es)) endpoint. Locate your project's endpoint by clicking the help icon () and selecting **Endpoints**. Add the **((es)) endpoint** to your configuration.
-1. From **Developer tools**, run the following command to create an API key that grants `manage` permissions for the cluster and the `filebeat-*` indices:
-
- ```shell
- POST /_security/api_key
- {
- "name": "filebeat_host001",
- "role_descriptors": {
- "filebeat_writer": {
- "cluster": ["manage"],
- "index": [
- {
- "names": ["filebeat-*"],
- "privileges": ["manage"]
- }
- ]
- }
- }
- }
- ```
-
- Refer to [Grant access using API keys](((filebeat-ref))/beats-api-keys.html) for more information.
-
-#### Step 3: Configure ((filebeat))
-
-Add the following configuration to your `filebeat.yml` file to start collecting log data.
-
-
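-The exact input configuration depends on your environment and log locations. As a minimal sketch (the paths, input ID, and `service.name` value below are placeholders, not defaults), a `filestream` input that parses ECS JSON log lines might look like this:
-
-```yaml
-filebeat.inputs:
-  - type: filestream
-    id: my-application-logs            # unique ID for this input (placeholder)
-    paths:
-      - /var/log/my-app/*.json         # path to your ECS JSON log files (placeholder)
-    parsers:
-      - ndjson:
-          target: ""                   # place the parsed JSON keys at the root of the event
-          overwrite_keys: true         # decoded JSON values win over default Filebeat fields
-          add_error_key: true          # record JSON decoding problems in error.message
-          expand_keys: true            # expand dotted keys into nested objects
-    fields_under_root: true
-    fields:
-      service.name: your_service_name  # used for log correlation (placeholder)
-```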
-
-#### Step 4: Set up and start ((filebeat))
-
-From the ((filebeat)) installation directory, set the [index template](((ref))/index-templates.html) by running the command that aligns with your system:
-
-
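-For example, on Linux or macOS with the tar.gz install, the setup command is typically:
-
-```shell
-./filebeat setup --index-management
-```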
-
-From the ((filebeat)) installation directory, start ((filebeat)) by running the command that aligns with your system:
-
-
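-For example, on Linux or macOS with the tar.gz install, you would typically run:
-
-```shell
-./filebeat -e
-```
-
-The `-e` flag writes ((filebeat)) logs to stderr instead of the configured log output, which is useful while you verify the setup.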
-
-### Ingest logs with ((agent))
-
-Add the custom logs integration to ingest and centrally manage your logs using ((agent)) and ((fleet)):
-
-#### Step 1: Add the custom logs integration to your project
-
-To add the custom logs integration to your project:
-
-1. In your ((observability)) project, go to **Project Settings** → **Integrations**.
-1. Type `custom` in the search bar and select **Custom Logs**.
-1. Click **Install ((agent))** at the bottom of the page, and follow the instructions for your system to install the ((agent)). If you've already installed an ((agent)), you'll be taken directly to configuring your integration.
-1. After installing the ((agent)), click **Save and continue** to configure the integration from the **Add Custom Logs integration** page.
-1. Give your integration a meaningful name and description.
-1. Add the **Log file path**. For example, `/var/log/your-logs.log`.
-1. Under **Custom log file**, click **Advanced options**.
-
-1. In the **Processors** text box, add the following YAML configuration to add processors that enhance your data. See [processors](((filebeat-ref))/filtering-and-enhancing-data.html) to learn more.
-
- ```yaml
- processors:
- - add_host_metadata: ~
- - add_cloud_metadata: ~
- - add_docker_metadata: ~
- - add_kubernetes_metadata: ~
- ```
-1. Under **Custom configurations**, add the following YAML configuration to collect data.
-
- ```yaml
- json:
- overwrite_keys: true [^1]
- add_error_key: true [^2]
- expand_keys: true [^3]
- keys_under_root: true [^4]
- fields_under_root: true [^5]
- fields:
- service.name: your_service_name [^6]
- service.version: your_service_version [^6]
- service.environment: your_service_environment [^6]
- ```
- [^1]: Values from the decoded JSON object overwrite the fields that ((agent)) normally adds (type, source, offset, etc.) in case of conflicts.
- [^2]: ((agent)) adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- [^3]: ((agent)) will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
- [^4]: By default, the decoded JSON is placed under a "json" key in the output document. When set to `true`, the keys are copied top level in the output document.
- [^5]: When set to `true`, custom fields are stored as top-level fields in the output document instead of being grouped under a fields sub-dictionary.
- [^6]: The `service.name` (required), `service.version` (optional), and `service.environment` (optional) of the service you're collecting logs from, used for Log correlation.
-1. An agent policy is created that defines the data your ((agent)) collects. If you've previously installed an ((agent)) on the host you're collecting logs from, you can select the **Existing hosts** tab and use an existing agent policy.
-1. Click **Save and continue**.
-
-## View logs
-
-Use Logs Explorer to search, filter, and visualize your logs. Refer to the filter and aggregate logs documentation for more information.
\ No newline at end of file
diff --git a/docs/en/serverless/logging/filter-and-aggregate-logs.mdx b/docs/en/serverless/logging/filter-and-aggregate-logs.mdx
deleted file mode 100644
index 87a2804d49..0000000000
--- a/docs/en/serverless/logging/filter-and-aggregate-logs.mdx
+++ /dev/null
@@ -1,341 +0,0 @@
----
-slug: /serverless/observability/filter-and-aggregate-logs
-title: Filter and aggregate logs
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. You can filter and aggregate based on structured fields like timestamps, log levels, and IP addresses that you've extracted from your log data.
-
-This guide shows you how to:
-
-* Filter logs: Narrow down your log data by applying specific criteria.
-* Aggregate logs: Analyze and summarize data to find patterns and gain insight.
-
-
-
-## Before you get started
-
-import Roles from '../partials/roles.mdx'
-
-
-
-The examples on this page use the following ingest pipeline and index template, which you can set in **Developer Tools**. If you haven't used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the Parse and organize logs documentation.
-
-Set the ingest pipeline with the following command:
-
-```console
-PUT _ingest/pipeline/logs-example-default
-{
- "description": "Extracts the timestamp log level and host ip",
- "processors": [
- {
- "dissect": {
- "field": "message",
- "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
- }
- }
- ]
-}
-```
-
-Set the index template with the following command:
-
-```console
-PUT _index_template/logs-example-default-template
-{
- "index_patterns": [ "logs-example-*" ],
- "data_stream": { },
- "priority": 500,
- "template": {
- "settings": {
- "index.default_pipeline":"logs-example-default"
- }
- },
- "composed_of": [
- "logs-mappings",
- "logs-settings",
- "logs@custom",
- "ecs@dynamic_templates"
- ],
- "ignore_missing_component_templates": ["logs@custom"]
-}
-```
-
-
-
-## Filter logs
-
-Filter your data using the fields you've extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways:
-
-- Filter logs in Logs Explorer: Filter and visualize log data in Logs Explorer.
-- Filter logs with Query DSL: Filter log data from Developer Tools using Query DSL.
-
-
-
-### Filter logs in Logs Explorer
-
-Logs Explorer is a tool that automatically provides views of your log data based on integrations and data streams. To open Logs Explorer, go to **Discover** and select the **Logs Explorer** tab.
-
-From Logs Explorer, you can use the [((kib)) Query Language (KQL)](((kibana-ref))/kuery-query.html) in the search bar to narrow down the log data that's displayed.
-For example, you might want to look into an event that occurred within a specific time range.
-
-Add some logs with varying timestamps and log levels to your data stream:
-
-1. In your Observability project, go to **Developer Tools**.
-1. In the **Console** tab, run the following command:
-
-```console
-POST logs-example-default/_bulk
-{ "create": {} }
-{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
-{ "create": {} }
-{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
-{ "create": {} }
-{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
-{ "create": {} }
-{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
-```
-
-For this example, let's look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Logs Explorer:
-
-1. Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`:
-
- ```text
- log.level: ("ERROR" or "WARN")
- ```
-
-1. Click the current time range, select **Absolute**, and set the **Start date** to `Sep 14, 2023 @ 00:00:00.000`.
-
-
-
-1. Click the end of the current time range, select **Absolute**, and set the **End date** to `Sep 15, 2023 @ 23:59:59.999`.
-
-
-
-Under the **Documents** tab, you'll see the filtered log data matching your query.
-
-
-
-For more on using Logs Explorer, refer to the [Discover](((kibana-ref))/discover.html) documentation.
-
-
-
-### Filter logs with Query DSL
-
-[Query DSL](((ref))/query-dsl.html) is a JSON-based language that sends requests and retrieves data from indices and data streams. You can filter your log data using Query DSL from **Developer Tools**.
-
-For example, you might want to troubleshoot an issue that happened on a specific date or at a specific time. To do this, use a boolean query with a [range query](((ref))/query-dsl-range-query.html) to filter for the specific timestamp range and a [term query](((ref))/query-dsl-term-query.html) to filter for `WARN` and `ERROR` log levels.
-
-First, from **Developer Tools**, add some logs with varying timestamps and log levels to your data stream with the following command:
-
-```console
-POST logs-example-default/_bulk
-{ "create": {} }
-{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
-{ "create": {} }
-{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
-{ "create": {} }
-{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
-{ "create": {} }
-{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
-```
-
-Let's say you want to look into an event that occurred between September 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`.
-
-```console
-POST /logs-example-default/_search
-{
- "query": {
- "bool": {
- "filter": [
- {
- "range": {
- "@timestamp": {
- "gte": "2023-09-14T00:00:00",
- "lte": "2023-09-15T23:59:59"
- }
- }
- },
- {
- "terms": {
- "log.level": ["WARN", "ERROR"]
- }
- }
- ]
- }
- }
-}
-```
-
-The filtered results should show `WARN` and `ERROR` logs that occurred within the timestamp range:
-
-```JSON
-{
- ...
- "hits": {
- ...
- "hits": [
- {
- "_index": ".ds-logs-example-default-2023.09.25-000001",
- "_id": "JkwPzooBTddK4OtTQToP",
- "_score": 0,
- "_source": {
- "message": "192.168.1.101 Disk usage exceeds 90%.",
- "log": {
- "level": "WARN"
- },
- "@timestamp": "2023-09-15T08:15:20.234Z"
- }
- },
- {
- "_index": ".ds-logs-example-default-2023.09.25-000001",
- "_id": "A5YSzooBMYFrNGNwH75O",
- "_score": 0,
- "_source": {
- "message": "192.168.1.102 Critical system failure detected.",
- "log": {
- "level": "ERROR"
- },
- "@timestamp": "2023-09-14T10:30:45.789Z"
- }
- }
- ]
- }
-}
-```
-
-
-
-## Aggregate logs
-Use aggregation to analyze and summarize your log data to find patterns and gain insight. [Bucket aggregations](((ref))/search-aggregations-bucket.html) organize log data into meaningful groups, making it easier to identify patterns, trends, and anomalies within your logs.
-
-For example, you might want to understand error distribution by analyzing the count of logs per log level.
-
-First, from **Developer Tools**, add some logs with varying log levels to your data stream using the following command:
-
-```console
-POST logs-example-default/_bulk
-{ "create": {} }
-{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
-{ "create": {} }
-{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
-{ "create": {} }
-{ "message": "2023-09-15T12:45:55.123Z INFO 192.168.1.103 Application successfully started." }
-{ "create": {} }
-{ "message": "2023-09-14T15:20:10.789Z WARN 192.168.1.104 Network latency exceeding threshold." }
-{ "create": {} }
-{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
-{ "create": {} }
-{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
-{ "create": {} }
-{ "message": "2023-09-21T15:20:55.678Z DEBUG 192.168.1.102 Database connection established." }
-```
-
-Next, run this command to aggregate your log data using the `log.level` field:
-
-```console
-POST logs-example-default/_search?size=0&filter_path=aggregations
-{
-"size": 0, [^1]
-"aggs": {
- "log_level_distribution": {
- "terms": {
- "field": "log.level"
- }
- }
- }
-}
-```
-[^1]: Searches with an aggregation return both the query results and the aggregation, so you would see the logs matching the data and the aggregation. Setting `size` to `0` limits the results to aggregations.
-
-The results should show the number of logs in each log level:
-
-```JSON
-{
- "aggregations": {
- "error_distribution": {
- "doc_count_error_upper_bound": 0,
- "sum_other_doc_count": 0,
- "buckets": [
- {
- "key": "ERROR",
- "doc_count": 2
- },
- {
- "key": "INFO",
- "doc_count": 2
- },
- {
- "key": "WARN",
- "doc_count": 2
- },
- {
- "key": "DEBUG",
- "doc_count": 1
- }
- ]
- }
- }
-}
-```
-
-You can also combine aggregations and queries. For example, you might want to limit the scope of the previous aggregation by adding a range query:
-
-```console
-GET /logs-example-default/_search
-{
- "size": 0,
- "query": {
- "range": {
- "@timestamp": {
- "gte": "2023-09-14T00:00:00",
- "lte": "2023-09-15T23:59:59"
- }
- }
- },
- "aggs": {
- "my-agg-name": {
- "terms": {
- "field": "log.level"
- }
- }
- }
-}
-```
-
-The results should show an aggregate of logs that occurred within your timestamp range:
-
-```JSON
-{
- ...
- "hits": {
- ...
- "hits": []
- },
- "aggregations": {
- "my-agg-name": {
- "doc_count_error_upper_bound": 0,
- "sum_other_doc_count": 0,
- "buckets": [
- {
- "key": "WARN",
- "doc_count": 2
- },
- {
- "key": "ERROR",
- "doc_count": 1
- },
- {
- "key": "INFO",
- "doc_count": 1
- }
- ]
- }
- }
-}
-```
-
-For more on aggregation types and available aggregations, refer to the [Aggregations](((ref))/search-aggregations.html) documentation.
diff --git a/docs/en/serverless/logging/get-started-with-logs.mdx b/docs/en/serverless/logging/get-started-with-logs.mdx
deleted file mode 100644
index f9fe23f26a..0000000000
--- a/docs/en/serverless/logging/get-started-with-logs.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
----
-slug: /serverless/observability/get-started-with-logs
-title: Get started with system logs
-description: Learn how to onboard your system log data quickly.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-In this guide you'll learn how to onboard system log data from a machine or server,
-then observe the data in **Logs Explorer**.
-
-To onboard system log data:
-
-1. Create a new ((observability)) project, or open an existing one.
-1. In your ((observability)) project, go to **Add data**.
-1. Under **Collect and analyze logs**, click **Stream host system logs**.
-When the page loads, the system integration is installed automatically, and a new API key is created.
-Make sure you copy the API key and store it in a secure location.
-1. Follow the in-product steps to install and configure the ((agent)).
-Notice that you can choose to download the agent's config automatically to avoid adding it manually.
-
-After the agent is installed and successfully streaming log data, you can view the data in the UI:
-
-1. From the navigation menu, go to **Discover** and select the **Logs Explorer** tab. The view shows all log datasets.
-Notice you can add fields, change the view, expand a document to see details,
-and perform other actions to explore your data.
-1. Click **All log datasets** and select **System** → **syslog** to show syslog logs.
-
-
-
-## Next steps
-
-Now that you've added system logs and explored your data,
-learn how to onboard other types of data:
-
-*
-*
-
-To onboard other types of data, select **Add Data** from the main menu.
-
diff --git a/docs/en/serverless/logging/log-monitoring.mdx b/docs/en/serverless/logging/log-monitoring.mdx
deleted file mode 100644
index b8693a40b8..0000000000
--- a/docs/en/serverless/logging/log-monitoring.mdx
+++ /dev/null
@@ -1,97 +0,0 @@
----
-slug: /serverless/observability/log-monitoring
-title: Log monitoring
-description: Use Elastic to deploy and manage logs at a petabyte scale, and get insights from your logs in minutes.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-Elastic Observability allows you to deploy and manage logs at a petabyte scale, giving you insights into your logs in minutes. You can also search across your logs in one place, troubleshoot in real time, and detect patterns and outliers with categorization and anomaly detection. For more information, refer to the following links:
-
-- Get started with system logs: Onboard system log data from a machine or server.
-- Stream any log file: Send log files to your Observability project using a standalone ((agent)).
-- Parse and route logs: Parse your log data and extract structured fields that you can use to analyze your data.
-- Filter and aggregate logs: Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently.
-- Explore logs: Find information on visualizing and analyzing logs.
-- Run pattern analysis on log data: Find patterns in unstructured log messages, making it easier to examine your data.
-- Troubleshoot logs: Find solutions for errors you might encounter while onboarding your logs.
-
-## Send logs data to your project
-
-You can send logs data to your project in different ways depending on your needs:
-
-- ((agent))
-- ((filebeat))
-
-When choosing between ((agent)) and ((filebeat)), consider the different features and functionalities between the two options.
-See [((beats)) and ((agent)) capabilities](((fleet-guide))/beats-agent-comparison.html) for more information on which option best fits your situation.
-
-### ((agent))
-
-((agent)) uses [integrations](https://www.elastic.co/integrations/data-integrations) to ingest logs from Kubernetes, MySQL, and many more data sources.
-You have the following options when installing and managing an ((agent)):
-
-#### ((fleet))-managed ((agent))
-
-Install an ((agent)) and use ((fleet)) to define, configure, and manage your agents in a central location.
-
-See [install ((fleet))-managed ((agent))](((fleet-guide))/install-fleet-managed-elastic-agent.html).
-
-#### Standalone ((agent))
-
-Install an ((agent)) and manually configure it locally on the system where it’s installed.
-You are responsible for managing and upgrading the agents.
-
-See [install standalone ((agent))](((fleet-guide))/install-standalone-elastic-agent.html).
-
-#### ((agent)) in a containerized environment
-
-Run an ((agent)) inside of a container — either with ((fleet-server)) or standalone.
-
-See [install ((agent)) in containers](((fleet-guide))/install-elastic-agents-in-containers.html).
-
-### ((filebeat))
-
-((filebeat)) is a lightweight shipper for forwarding and centralizing log data.
-Installed as a service on your servers, ((filebeat)) monitors the log files or locations that you specify, collects log events, and forwards them to your Observability project for indexing.
-
-- [((filebeat)) overview](((filebeat-ref))/filebeat-overview.html): General information on ((filebeat)) and how it works.
-- [((filebeat)) quick start](((filebeat-ref))/filebeat-installation-configuration.html): Basic installation instructions to get you started.
-- [Set up and run ((filebeat))](((filebeat-ref))/setting-up-and-running.html): Information on how to install, set up, and run ((filebeat)).
-
-## Configure logs
-
-The following resources provide information on configuring your logs:
-
-- [Data streams](((ref))/data-streams.html): Efficiently store append-only time series data in multiple backing indices partitioned by time and size.
-- [Data views](((kibana-ref))/data-views.html): Query log entries from the data streams of specific datasets or namespaces.
-- [Index lifecycle management](((ref))/example-using-index-lifecycle-policy.html): Configure the built-in logs policy based on your application's performance, resilience, and retention requirements.
-- [Ingest pipeline](((ref))/ingest.html): Parse and transform log entries into a suitable format before indexing.
-- [Mapping](((ref))/mapping.html): Define how data is stored and indexed.
-
-## View and monitor logs
-
-Use **Logs Explorer** to search, filter, and tail all your logs ingested into your project in one place.
-
-The following resources provide information on viewing and monitoring your logs:
-
-- Discover and explore: Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view.
-- Detect log anomalies: Use ((ml)) to detect log anomalies automatically.
-
-## Monitor data sets
-
-The **Data Set Quality** page provides an overview of your data sets and their quality.
-Use this information to get an idea of your overall data set quality, and find data sets that contain incorrectly parsed documents.
-
-Monitor data sets
-
-## Application logs
-
-Application logs provide valuable insight into events that have occurred within your services and applications.
-See Application logs.
-
-{/* ## Create a logs threshold alert
-
-You can create a rule to send an alert when the log aggregation exceeds a threshold.
-See Create a logs threshold rule. */}
diff --git a/docs/en/serverless/logging/parse-log-data.mdx b/docs/en/serverless/logging/parse-log-data.mdx
deleted file mode 100644
index 9e457a60c0..0000000000
--- a/docs/en/serverless/logging/parse-log-data.mdx
+++ /dev/null
@@ -1,844 +0,0 @@
----
-slug: /serverless/observability/parse-log-data
-title: Parse and route logs
-# description: Description to be written
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-If your log data is unstructured or semi-structured, you can parse it and break it into meaningful fields. You can use those fields to explore and analyze your data. For example, you can find logs within a specific timestamp range or filter logs by log level to focus on potential issues.
-
-After parsing, you can use the structured fields to further organize your logs by configuring a reroute processor to send specific logs to different target data streams.
-
-Refer to the following sections for more on parsing and organizing your log data:
-
-* Extract structured fields: Extract structured fields like timestamps, log levels, or IP addresses to make querying and filtering your data easier.
-* Reroute log data to specific data streams: Route data from the generic data stream to a target data stream for more granular control over data retention, permissions, and processing.
-
-## Extract structured fields
-
-Make your logs more useful by extracting structured fields from your unstructured log data. Extracting structured fields makes it easier to search, analyze, and filter your log data.
-
-Follow the steps below to see how the following unstructured log data is indexed by default:
-
-```log
-2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
-```
-
-Start by storing the document in the `logs-example-default` data stream:
-
-1. In your Observability project, go to **Developer Tools**.
-1. In the **Console** tab, add the example log to your project using the following command:
-
- ```console
- POST logs-example-default/_doc
- {
- "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
- }
- ```
-
-1. Then, you can retrieve the document with the following search:
-
- ```console
- GET /logs-example-default/_search
- ```
-
-The results should look like this:
-
-```json
-{
- ...
- "hits": {
- ...
- "hits": [
- {
- "_index": ".ds-logs-example-default-2023.08.09-000001",
- ...
- "_source": {
- "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.",
- "@timestamp": "2023-08-09T17:19:27.73312243Z"
- }
- }
- ]
- }
-}
-```
-
-Your project indexes the `message` field by default and adds a `@timestamp` field. Because the log didn't include a timestamp, `@timestamp` is set to the time the document was indexed (`now`).
-At this point, you can search for phrases in the `message` field like `WARN` or `Disk usage exceeds`.
-For example, run the following command to search for the phrase `WARN` in the log's `message` field:
-
-```console
-GET logs-example-default/_search
-{
- "query": {
- "match": {
- "message": {
- "query": "WARN"
- }
- }
- }
-}
-```
-
-While you can search for phrases in the `message` field, you can't use this field to filter log data. Your message, however, contains all of the following potential fields you can extract and use to filter and aggregate your log data:
-
-- **@timestamp** (`2023-08-08T13:45:12.123Z`): Extracting this field lets you sort logs by date and time. This is helpful when you want to view your logs in the order that they occurred or identify when issues happened.
-- **log.level** (`WARN`): Extracting this field lets you filter logs by severity. This is helpful if you want to focus on high-severity WARN or ERROR-level logs, and reduce noise by filtering out low-severity INFO-level logs.
-- **host.ip** (`192.168.1.101`): Extracting this field lets you filter logs by the host IP addresses. This is helpful if you want to focus on specific hosts that you’re having issues with or if you want to find disparities between hosts.
-- **message** (`Disk usage exceeds 90%.`): You can search for phrases or words in the message field.
-
-
-These fields are part of the [Elastic Common Schema (ECS)](((ecs-ref))/ecs-reference.html). The ECS defines a common set of fields that you can use across your project when storing data, including log and metric data.
-
-
-### Extract the `@timestamp` field
-
-When you added the log to your project in the previous section, the `@timestamp` field showed when the log was added. The timestamp showing when the log actually occurred was in the unstructured `message` field:
-
-```json
- ...
- "_source": {
- "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.", [^1]
- "@timestamp": "2023-08-09T17:19:27.73312243Z" [^2]
- }
- ...
-```
-[^1]: The timestamp in the `message` field shows when the log occurred.
-[^2]: The timestamp in the `@timestamp` field shows when the log was added to your project.
-
-When looking into issues, you want to filter logs by when the issue occurred, not when the log was added to your project.
-To do this, extract the timestamp from the unstructured `message` field to the structured `@timestamp` field by completing the following:
-
-1. Use an ingest pipeline to extract the `@timestamp` field
-1. Test the pipeline with the simulate pipeline API
-1. Configure a data stream with an index template
-1. Create a data stream
-
-#### Use an ingest pipeline to extract the `@timestamp` field
-
-Ingest pipelines consist of a series of processors that perform common transformations on incoming documents before they are indexed.
-To extract the `@timestamp` field from the example log, use an ingest pipeline with a [dissect processor](((ref))/dissect-processor.html).
-The dissect processor extracts structured fields from unstructured log messages based on a pattern you set.
-
-Your project can parse string timestamps that are in `yyyy-MM-dd'T'HH:mm:ss.SSSZ` and `yyyy-MM-dd` formats into date fields.
-Since the log example's timestamp is in one of these formats, you don't need additional processors.
-More complex or nonstandard timestamps require a [date processor](((ref))/date-processor.html) to parse the timestamp into a date field.
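-
-For example, if your application wrote timestamps in a format such as `08/Aug/2023:13:45:12`, you could dissect the raw value into a temporary field and then parse it with a date processor. This is a minimal sketch only; the pipeline name, the `event_time` field, and the format are illustrative:
-
-```console
-PUT _ingest/pipeline/logs-example-nonstandard
-{
-  "description": "Parses a nonstandard timestamp into @timestamp",
-  "processors": [
-    {
-      "dissect": {
-        "field": "message",
-        "pattern": "%{event_time} %{message}"
-      }
-    },
-    {
-      "date": {
-        "field": "event_time",
-        "formats": ["dd/MMM/yyyy:HH:mm:ss"],
-        "target_field": "@timestamp"
-      }
-    },
-    {
-      "remove": {
-        "field": "event_time"
-      }
-    }
-  ]
-}
-```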
-
-Use the following command to extract the timestamp from the `message` field into the `@timestamp` field:
-
-```console
-PUT _ingest/pipeline/logs-example-default
-{
- "description": "Extracts the timestamp",
- "processors": [
- {
- "dissect": {
- "field": "message",
- "pattern": "%{@timestamp} %{message}"
- }
- }
- ]
-}
-```
-
-The previous command sets the following values for your ingest pipeline:
-
-- `_ingest/pipeline/logs-example-default`: The name of the pipeline, `logs-example-default`, needs to match the name of your data stream. You'll set up your data stream in the next section. For more information, refer to the [data stream naming scheme](((fleet-guide))/data-streams.html#data-streams-naming-scheme).
-- `field`: The field you're extracting data from, `message` in this case.
-- `pattern`: The pattern of the elements in your log data. The `%{@timestamp} %{message}` pattern extracts the timestamp, `2023-08-08T13:45:12.123Z`, to the `@timestamp` field, while the rest of the message, `WARN 192.168.1.101 Disk usage exceeds 90%.`, stays in the `message` field. The dissect processor looks for the space as a separator defined by the pattern.
-
-#### Test the pipeline with the simulate pipeline API
-
-The [simulate pipeline API](((ref))/simulate-pipeline-api.html#ingest-verbose-param) runs the ingest pipeline without storing any documents.
-This lets you verify your pipeline works using multiple documents.
-
-Run the following command to test your ingest pipeline with the simulate pipeline API.
-
-```console
-POST _ingest/pipeline/logs-example-default/_simulate
-{
- "docs": [
- {
- "_source": {
- "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
- }
- }
- ]
-}
-```
-
-The results should show the `@timestamp` field extracted from the `message` field:
-
-```json
-{
- "docs": [
- {
- "doc": {
- "_index": "_index",
- "_id": "_id",
- "_version": "-3",
- "_source": {
- "message": "WARN 192.168.1.101 Disk usage exceeds 90%.",
- "@timestamp": "2023-08-08T13:45:12.123Z"
- },
- ...
- }
- }
- ]
-}
-```
-
-
-Make sure you've created the ingest pipeline using the `PUT` command in the previous section before using the simulate pipeline API.
-
-
-#### Configure a data stream with an index template
-
-After creating your ingest pipeline, run the following command to create an index template to configure your data stream's backing indices:
-
-```console
-PUT _index_template/logs-example-default-template
-{
- "index_patterns": [ "logs-example-*" ],
- "data_stream": { },
- "priority": 500,
- "template": {
- "settings": {
- "index.default_pipeline":"logs-example-default"
- }
- },
- "composed_of": [
- "logs@mappings",
- "logs@settings",
- "logs@custom",
- "ecs@mappings"
- ],
- "ignore_missing_component_templates": ["logs@custom"]
-}
-```
-
-The previous command sets the following values for your index template:
-
-- `index_patterns`: Needs to match your log data stream. The naming convention for data streams is `<type>-<dataset>-<namespace>`. In this example, your logs data stream is named `logs-example-*`. Data that matches this pattern will go through your pipeline.
-- `data_stream`: Enables data streams.
-- `priority`: Sets the priority of your index templates. Index templates with a higher priority take precedence. If a data stream matches multiple index templates, your project uses the template with the higher priority. Built-in templates have a priority of `200`, so use a priority higher than `200` for custom templates.
-- `index.default_pipeline`: The name of your ingest pipeline. `logs-example-default` in this case.
-- `composed_of`: Here you can set component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. Elastic has several built-in templates to help when ingesting your log data.
-
-The example index template above sets the following component templates:
-
-- `logs@mappings`: General mappings for log data streams, including disabling automatic date detection from `string` fields and specifying mappings for [`data_stream` ECS fields](((ecs-ref))/ecs-data_stream.html).
-- `logs@settings`: General settings for log data streams, including the following:
-  * The default lifecycle policy that rolls over when the primary shard reaches 50 GB or after 30 days.
-  * The default pipeline uses the ingest timestamp if there is no specified `@timestamp` and places a hook for the `logs@custom` pipeline. If a `logs@custom` pipeline is installed, it's applied to logs ingested into this data stream.
-  * Sets the [`ignore_malformed`](((ref))/ignore-malformed.html) flag to `true`. When ingesting a large batch of log data, a single malformed field like an IP address can cause the entire batch to fail. With this flag set to `true`, malformed fields with a mapping type that supports it are still processed (see the sketch after this list for one way to find documents with ignored fields).
-- `logs@custom`: A predefined component template that is not installed by default. Use this name to install a custom component template to override or extend any of the default mappings or settings.
-- `ecs@mappings`: Dynamic templates that automatically ensure your data stream mappings comply with the [Elastic Common Schema (ECS)](((ecs-ref))/ecs-reference.html).
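-
-If you want to check whether any documents in your data stream had fields ignored at index time, one way is to search on the built-in `_ignored` metadata field. This is a minimal sketch against the example data stream used in this guide:
-
-```console
-GET logs-example-default/_search
-{
-  "query": {
-    "exists": {
-      "field": "_ignored"
-    }
-  }
-}
-```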
-
-#### Create a data stream
-
-Create your data stream using the [data stream naming scheme](((fleet-guide))/data-streams.html#data-streams-naming-scheme). Name your data stream to match the name of your ingest pipeline, `logs-example-default` in this case. Post the example log to your data stream with this command:
-
-```console
-POST logs-example-default/_doc
-{
- "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
-}
-```
-
-View your documents using this command:
-
-```console
-GET /logs-example-default/_search
-```
-
-You should see the pipeline has extracted the `@timestamp` field:
-
-```json
-{
- ...
- {
- ...
- "hits": {
- ...
- "hits": [
- {
- "_index": ".ds-logs-example-default-2023.08.09-000001",
- "_id": "RsWy3IkB8yCtA5VGOKLf",
- "_score": 1,
- "_source": {
- "message": "WARN 192.168.1.101 Disk usage exceeds 90%.",
- "@timestamp": "2023-08-08T13:45:12.123Z" [^1]
- }
- }
- ]
- }
- }
-}
-```
-[^1]: The extracted `@timestamp` field.
-
-You can now use the `@timestamp` field to sort your logs by the date and time they happened.
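-
-For example, a search like the following minimal sketch returns the example logs sorted with the most recent first:
-
-```console
-GET logs-example-default/_search
-{
-  "sort": [
-    { "@timestamp": "desc" }
-  ]
-}
-```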
-
-#### Troubleshoot the `@timestamp` field
-
-Check the following common issues and solutions with timestamps:
-
-- **Timestamp failure:** If your data has inconsistent date formats, set `ignore_failure` to `true` for your date processor. This processes logs with correctly formatted dates and ignores those with issues.
-- **Incorrect timezone:** Set your timezone using the `timezone` option on the [date processor](((ref))/date-processor.html) (see the sketch after this list).
-- **Incorrect timestamp format:** Your timestamp can be a Java time pattern or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N. For more information on timestamp formats, refer to the [mapping date format](((ref))/mapping-date-format.html).
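-
-The following minimal sketch combines the `ignore_failure` and `timezone` options on a date processor. The pipeline name, the `event_time` field, and the timezone are illustrative only:
-
-```console
-PUT _ingest/pipeline/logs-example-timestamp-fixes
-{
-  "description": "Parses timestamps, tolerating failures and a non-UTC timezone",
-  "processors": [
-    {
-      "date": {
-        "field": "event_time",
-        "formats": ["ISO8601"],
-        "timezone": "Europe/Amsterdam",
-        "ignore_failure": true
-      }
-    }
-  ]
-}
-```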
-
-### Extract the `log.level` field
-
-Extracting the `log.level` field lets you filter by severity and focus on critical issues. This section shows you how to extract the `log.level` field from this example log:
-
-```log
-2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
-```
-
-To extract and use the `log.level` field:
-
-1. Add the `log.level` field to the dissect processor pattern in your ingest pipeline.
-1. Test the pipeline with the simulate API.
-1. Query your logs based on the `log.level` field.
-
-#### Add `log.level` to your ingest pipeline
-
-Add the `%{log.level}` option to the dissect processor pattern in the ingest pipeline you created in the Extract the `@timestamp` field section with this command:
-
-```console
-PUT _ingest/pipeline/logs-example-default
-{
- "description": "Extracts the timestamp and log level",
- "processors": [
- {
- "dissect": {
- "field": "message",
- "pattern": "%{@timestamp} %{log.level} %{message}"
- }
- }
- ]
-}
-```
-
-Now your pipeline will extract these fields:
-
-- The `@timestamp` field: `2023-08-08T13:45:12.123Z`
-- The `log.level` field: `WARN`
-- The `message` field: `192.168.1.101 Disk usage exceeds 90%.`
-
-In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the Extract the `@timestamp` field section.
-
-#### Test the pipeline with the simulate API
-
-Test that your ingest pipeline works as expected with the [simulate pipeline API](((ref))/simulate-pipeline-api.html#ingest-verbose-param):
-
-```console
-POST _ingest/pipeline/logs-example-default/_simulate
-{
- "docs": [
- {
- "_source": {
- "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
- }
- }
- ]
-}
-```
-
-The results should show the `@timestamp` and the `log.level` fields extracted from the `message` field:
-
-```json
-{
- "docs": [
- {
- "doc": {
- "_index": "_index",
- "_id": "_id",
- "_version": "-3",
- "_source": {
- "message": "192.168.1.101 Disk usage exceeds 90%.",
- "log": {
- "level": "WARN"
- },
-        "@timestamp": "2023-08-08T13:45:12.123Z"
- },
- ...
- }
- }
- ]
-}
-```
-
-#### Query logs based on `log.level`
-
-Once you've extracted the `log.level` field, you can query for high-severity logs like `WARN` and `ERROR`, which may need immediate attention, and filter out less critical `INFO` and `DEBUG` logs.
-
-Let's say you have the following logs with varying severities:
-
-```log
-2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
-2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
-2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
-2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture.
-```
-
-Add them to your data stream using this command:
-
-```console
-POST logs-example-default/_bulk
-{ "create": {} }
-{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." }
-{ "create": {} }
-{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." }
-{ "create": {} }
-{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." }
-{ "create": {} }
-{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." }
-```
-
-Then, query for documents with a log level of `WARN` or `ERROR` with this command:
-
-```console
-GET logs-example-default/_search
-{
- "query": {
- "terms": {
- "log.level": ["WARN", "ERROR"]
- }
- }
-}
-```
-
-The results should show only the high-severity logs:
-
-```json
-{
-...
- },
- "hits": {
- ...
- "hits": [
- {
- "_index": ".ds-logs-example-default-2023.08.14-000001",
- "_id": "3TcZ-4kB3FafvEVY4yKx",
- "_score": 1,
- "_source": {
- "message": "192.168.1.101 Disk usage exceeds 90%.",
- "log": {
- "level": "WARN"
- },
- "@timestamp": "2023-08-08T13:45:12.123Z"
- }
- },
- {
- "_index": ".ds-logs-example-default-2023.08.14-000001",
- "_id": "3jcZ-4kB3FafvEVY4yKx",
- "_score": 1,
- "_source": {
- "message": "192.168.1.103 Database connection failed.",
- "log": {
- "level": "ERROR"
- },
- "@timestamp": "2023-08-08T13:45:14.003Z"
- }
- }
- ]
- }
-}
-```
-
-### Extract the `host.ip` field
-
-Extracting the `host.ip` field lets you filter logs by host IP address, allowing you to focus on specific hosts that you're having issues with or to find disparities between hosts.
-
-The `host.ip` field is part of the [Elastic Common Schema (ECS)](((ecs-ref))/ecs-reference.html). Through the ECS, the `host.ip` field is mapped as an [`ip` field type](((ref))/ip.html). `ip` field types allow range queries so you can find logs with IP addresses in a specific range. You can also query `ip` field types using Classless Inter-Domain Routing (CIDR) notation to find logs from a particular network or subnet.
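-
-After you've ingested documents with the extracted field (later in this section), you can confirm that `host.ip` is mapped as an `ip` type with a field mapping request like this minimal sketch:
-
-```console
-GET logs-example-default/_mapping/field/host.ip
-```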
-
-This section shows you how to extract the `host.ip` field from the following example logs and query based on the extracted fields:
-
-```log
-2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
-2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
-2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
-2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture.
-```
-
-To extract and use the `host.ip` field:
-
-1. Add the `host.ip` field to your dissect processor in your ingest pipeline.
-1. Test the pipeline with the simulate API.
-1. Query your logs based on the `host.ip` field.
-
-#### Add `host.ip` to your ingest pipeline
-
-Add the `%{host.ip}` option to the dissect processor pattern in the ingest pipeline you created in the Extract the `@timestamp` field section:
-
-```console
-PUT _ingest/pipeline/logs-example-default
-{
-  "description": "Extracts the timestamp, log level, and host IP",
- "processors": [
- {
- "dissect": {
- "field": "message",
- "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
- }
- }
- ]
-}
-```
-
-Your pipeline will extract these fields:
-
-- The `@timestamp` field: `2023-08-08T13:45:12.123Z`
-- The `log.level` field: `WARN`
-- The `host.ip` field: `192.168.1.101`
-- The `message` field: `Disk usage exceeds 90%.`
-
-In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the Extract the `@timestamp` field section.
-
-#### Test the pipeline with the simulate API
-
-Test that your ingest pipeline works as expected with the [simulate pipeline API](((ref))/simulate-pipeline-api.html#ingest-verbose-param):
-
-```console
-POST _ingest/pipeline/logs-example-default/_simulate
-{
- "docs": [
- {
- "_source": {
- "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
- }
- }
- ]
-}
-```
-
-The results should show the `host.ip`, `@timestamp`, and `log.level` fields extracted from the `message` field:
-
-```json
-{
- "docs": [
- {
- "doc": {
- ...
- "_source": {
- "host": {
- "ip": "192.168.1.101"
- },
- "@timestamp": "2023-08-08T13:45:12.123Z",
- "message": "Disk usage exceeds 90%.",
- "log": {
- "level": "WARN"
- }
- },
- ...
- }
- }
- ]
-}
-```
-
-#### Query logs based on `host.ip`
-
-You can query your logs based on the `host.ip` field in different ways, including using CIDR notation and range queries.
-
-Before querying your logs, add them to your data stream using this command:
-
-```console
-POST logs-example-default/_bulk
-{ "create": {} }
-{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." }
-{ "create": {} }
-{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." }
-{ "create": {} }
-{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." }
-{ "create": {} }
-{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." }
-```
-
-##### CIDR notation
-
-You can use [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation) to query your log data using a block of IP addresses that fall within a certain network segment. CIDR notation uses the format `[IP address]/[prefix length]`. The following command queries IP addresses in the `192.168.1.0/24` subnet, meaning IP addresses from `192.168.1.0` to `192.168.1.255`.
-
-```console
-GET logs-example-default/_search
-{
- "query": {
- "term": {
- "host.ip": "192.168.1.0/24"
- }
- }
-}
-```
-
-Because all of the example logs are in this range, you'll get the following results:
-
-```json
-{
- ...
- },
- "hits": {
- ...
- {
- "_index": ".ds-logs-example-default-2023.08.16-000001",
- "_id": "ak4oAIoBl7fe5ItIixuB",
- "_score": 1,
- "_source": {
- "host": {
- "ip": "192.168.1.101"
- },
- "@timestamp": "2023-08-08T13:45:12.123Z",
- "message": "Disk usage exceeds 90%.",
- "log": {
- "level": "WARN"
- }
- }
- },
- {
- "_index": ".ds-logs-example-default-2023.08.16-000001",
- "_id": "a04oAIoBl7fe5ItIixuC",
- "_score": 1,
- "_source": {
- "host": {
- "ip": "192.168.1.103"
- },
- "@timestamp": "2023-08-08T13:45:14.003Z",
- "message": "Database connection failed.",
- "log": {
- "level": "ERROR"
- }
- }
- },
- {
- "_index": ".ds-logs-example-default-2023.08.16-000001",
- "_id": "bE4oAIoBl7fe5ItIixuC",
- "_score": 1,
- "_source": {
- "host": {
- "ip": "192.168.1.104"
- },
- "@timestamp": "2023-08-08T13:45:15.004Z",
- "message": "Debugging connection issue.",
- "log": {
- "level": "DEBUG"
- }
- }
- },
- {
- "_index": ".ds-logs-example-default-2023.08.16-000001",
- "_id": "bU4oAIoBl7fe5ItIixuC",
- "_score": 1,
- "_source": {
- "host": {
- "ip": "192.168.1.102"
- },
- "@timestamp": "2023-08-08T13:45:16.005Z",
- "message": "User changed profile picture.",
- "log": {
- "level": "INFO"
- }
- }
- }
- ]
- }
-}
-```
-
-##### Range queries
-
-Use [range queries](((ref))/query-dsl-range-query.html) to query logs in a specific range.
-
-The following command searches for IP addresses greater than or equal to `192.168.1.100` and less than or equal to `192.168.1.102`.
-
-```console
-GET logs-example-default/_search
-{
- "query": {
- "range": {
- "host.ip": {
- "gte": "192.168.1.100", [^1]
- "lte": "192.168.1.102" [^2]
- }
- }
- }
-}
-```
-[^1]: Greater than or equal to `192.168.1.100`.
-[^2]: Less than or equal to `192.168.1.102`.
-
-You'll get the following results, showing only the logs in the range you've set:
-
-```json
-{
- ...
- },
- "hits": {
- ...
- {
- "_index": ".ds-logs-example-default-2023.08.16-000001",
- "_id": "ak4oAIoBl7fe5ItIixuB",
- "_score": 1,
- "_source": {
- "host": {
- "ip": "192.168.1.101"
- },
- "@timestamp": "2023-08-08T13:45:12.123Z",
- "message": "Disk usage exceeds 90%.",
- "log": {
- "level": "WARN"
- }
- }
- },
- {
- "_index": ".ds-logs-example-default-2023.08.16-000001",
- "_id": "bU4oAIoBl7fe5ItIixuC",
- "_score": 1,
- "_source": {
- "host": {
- "ip": "192.168.1.102"
- },
- "@timestamp": "2023-08-08T13:45:16.005Z",
- "message": "User changed profile picture.",
- "log": {
- "level": "INFO"
- }
- }
- }
- ]
- }
-}
-```
-
-## Reroute log data to specific data streams
-
-By default, an ingest pipeline sends your log data to a single data stream. To simplify log data management, use a [reroute processor](((ref))/reroute-processor.html) to route data from the generic data stream to a target data stream. For example, you might want to send high-severity logs to a specific data stream to help with categorization.
-
-This section shows you how to use a reroute processor to send the high-severity logs (`WARN` or `ERROR`) from the following example logs to a specific data stream and keep the regular logs (`DEBUG` and `INFO`) in the default data stream:
-
-```log
-2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
-2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
-2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
-2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture.
-```
-
-
-When routing data to different data streams, we recommend picking a field with a limited number of distinct values to prevent an excessive increase in the number of data streams. For more details, refer to the [Size your shards](((ref))/size-your-shards.html) documentation.
-
-
-To use a reroute processor:
-
-1. Add a reroute processor to your ingest pipeline.
-1. Add the example logs to your data stream.
-1. Query your logs and verify the high-severity logs were routed to the new data stream.
-
-### Add a reroute processor to the ingest pipeline
-
-Add a reroute processor to your ingest pipeline with the following command:
-
-```console
-PUT _ingest/pipeline/logs-example-default
-{
-  "description": "Extracts fields and reroutes WARN and ERROR logs",
- "processors": [
- {
- "dissect": {
- "field": "message",
- "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
- }
- },
- {
- "reroute": {
- "tag": "high_severity_logs",
- "if" : "ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'",
- "dataset": "critical"
- }
- }
- ]
-}
-```
-
-The previous command sets the following values for your reroute processor:
-
-- `tag`: Identifier for the processor that you can use for debugging and metrics. In the example, the tag is set to `high_severity_logs`.
-- `if`: Conditionally runs the processor. In the example, `"ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'"` means the processor runs when the `log.level` field is `WARN` or `ERROR`.
-- `dataset`: The data stream dataset to route your document to if the previous condition is `true`. In the example, logs with a `log.level` of `WARN` or `ERROR` are routed to the `logs-critical-default` data stream.
-
-In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the Extract the `@timestamp` field section.
-
-### Add logs to a data stream
-
-Add the example logs to your data stream with this command:
-
-```console
-POST logs-example-default/_bulk
-{ "create": {} }
-{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." }
-{ "create": {} }
-{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." }
-{ "create": {} }
-{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." }
-{ "create": {} }
-{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." }
-```
-
-### Verify the reroute processor worked
-
-The reroute processor should route any logs with a `log.level` of `WARN` or `ERROR` to the `logs-critical-default` data stream. Query the data stream using the following command to verify the log data was routed as intended:
-
-```console
-GET logs-critical-default/_search
-```
-
-You should see results similar to the following, showing that the high-severity logs are now in the `critical` dataset:
-
-```json
-{
- ...
- "hits": {
- ...
- "hits": [
-      {
-        ...
-        "_source": {
-          "host": {
-            "ip": "192.168.1.101"
-          },
-          "@timestamp": "2023-08-08T13:45:12.123Z",
-          "message": "Disk usage exceeds 90%.",
-          "log": {
-            "level": "WARN"
-          },
-          "data_stream": {
-            "namespace": "default",
-            "type": "logs",
-            "dataset": "critical"
-          }
-        }
-      },
- {
- ...
- "_source": {
- "host": {
- "ip": "192.168.1.103"
- },
- "@timestamp": "2023-08-08T13:45:14.003Z",
- "message": "Database connection failed.",
- "log": {
- "level": "ERROR"
- },
- "data_stream": {
- "namespace": "default",
- "type": "logs",
- "dataset": "critical"
- }
- }
- }
- ]
- }
-}
-```
diff --git a/docs/en/serverless/logging/plaintext-application-logs.mdx b/docs/en/serverless/logging/plaintext-application-logs.mdx
deleted file mode 100644
index d49066e7f6..0000000000
--- a/docs/en/serverless/logging/plaintext-application-logs.mdx
+++ /dev/null
@@ -1,251 +0,0 @@
----
-slug: /serverless/observability/plaintext-application-logs
-title: Plaintext application logs
-description: Parse and ingest raw, plain-text application logs using a log shipper like Filebeat.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import ApplicationLogsCorrelateLogs from '../transclusion/observability/application-logs/correlate-logs.mdx'
-import InstallWidget from '../transclusion/observability/tab-widgets/filebeat-install/widget.mdx'
-import SetupWidget from '../transclusion/observability/tab-widgets/filebeat-setup/widget.mdx'
-import StartWidget from '../transclusion/observability/tab-widgets/filebeat-start/widget.mdx'
-
-
-
-Ingest and parse plaintext logs, including existing logs, from any programming language or framework without modifying your application or its configuration.
-
-Plaintext logs require some additional setup that structured logs do not require:
-
-* To search, filter, and aggregate effectively, you need to parse plaintext logs using an ingest pipeline to extract structured fields. Parsing is based on log format, so you might have to maintain different settings for different applications.
-* To correlate plaintext logs, you need to inject IDs into log messages and parse them using an ingest pipeline.
-
-To ingest, parse, and correlate plaintext logs:
-
-1. Ingest plaintext logs with ((filebeat)) or ((agent)) and parse them before indexing with an ingest pipeline.
-1. Correlate plaintext logs with an ((apm-agent)).
-1. View logs in Logs Explorer.
-
-## Ingest logs
-
-Send application logs to your project using one of the following shipping tools:
-
-* **((filebeat)):** A lightweight data shipper that sends log data to your project.
-* **((agent)):** A single agent for logs, metrics, security data, and threat prevention. With Fleet, you can centrally manage ((agent)) policies and lifecycles directly from your project.
-
-### Ingest logs with ((filebeat))
-
-
-Use ((filebeat)) version 8.11+ for the best experience when ingesting logs with ((filebeat)).
-
-
-Follow these steps to ingest application logs with ((filebeat)).
-
-#### Step 1: Install ((filebeat))
-
-Install ((filebeat)) on the server you want to monitor by running the commands that align with your system:
-
-
-
-#### Step 2: Connect to your project
-
-Connect to your project using an API key to set up ((filebeat)). Set the following information in the `filebeat.yml` file:
-
-```yaml
-output.elasticsearch:
- hosts: ["your-projects-elasticsearch-endpoint"]
- api_key: "id:api_key"
-```
-
-1. Set the `hosts` to your project's ((es)) endpoint. Locate your project's endpoint by clicking the help icon () and selecting **Endpoints**. Add the **((es)) endpoint** to your configuration.
-1. From **Developer tools**, run the following command to create an API key that grants `manage` permissions for the cluster and the `filebeat-*` indices:
-
- ```shell
- POST /_security/api_key
- {
- "name": "your_api_key",
- "role_descriptors": {
- "filebeat_writer": {
- "cluster": ["manage"],
- "index": [
- {
- "names": ["filebeat-*"],
- "privileges": ["manage", "create_doc"]
- }
- ]
- }
- }
- }
- ```
-
- Refer to [Grant access using API keys](((filebeat-ref))/beats-api-keys.html) for more information.
-
-#### Step 3: Configure ((filebeat))
-
-Add the following configuration to the `filebeat.yml` file to start collecting log data.
-
-```yaml
-filebeat.inputs:
-- type: filestream [^1]
- enabled: true
-  paths:
-    - /path/to/logs.log [^2]
-```
-[^1]: Reads lines from an active log file.
-[^2]: Paths that you want ((filebeat)) to crawl and fetch logs from.
-
-You can add additional settings to the `filebeat.yml` file to meet the needs of your specific setup. For example, the following settings would add a parser to manage messages that span multiple lines and add service fields:
-
-```yaml
- parsers:
- - multiline:
- type: pattern
- pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
- negate: true
- match: after
- fields_under_root: true
- fields:
- service.name: your_service_name
- service.environment: your_service_environment
- event.dataset: your_event_dataset
-```
-
-#### Step 4: Set up and start ((filebeat))
-
-From the ((filebeat)) installation directory, set the [index template](((ref))/index-templates.html) by running the command that aligns with your system:
-
-
-
-From the ((filebeat)) installation directory, start ((filebeat)) by running the command that aligns with your system:
-
-
-
-#### Step 5: Parse logs with an ingest pipeline
-
-Use an ingest pipeline to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](((ecs-ref))/ecs-reference.html)-compatible fields.
-
-Create an ingest pipeline with a [dissect processor](((ref))/dissect-processor.html) to extract structured ECS fields from your log messages. In your project, go to **Developer Tools** and use a command similar to the following example:
-
-```shell
-PUT _ingest/pipeline/filebeat* [^1]
-{
-  "description": "Extracts the timestamp, log level, and host IP",
- "processors": [
- {
- "dissect": { [^2]
- "field": "message", [^3]
- "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" [^4]
- }
- }
- ]
-}
-```
-[^1]: `_ingest/pipeline/filebeat*`: The name of the pipeline. Update the pipeline name to match the name of your data stream. For more information, refer to [Data stream naming scheme](((fleet-guide))/data-streams.html#data-streams-naming-scheme).
-[^2]: `processors.dissect`: Adds a [dissect processor](((ref))/dissect-processor.html) to extract structured fields from your log message.
-[^3]: `field`: The field you're extracting data from, `message` in this case.
-[^4]: `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{message}` are common [ECS](((ecs-ref))/ecs-reference.html) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.`
-
-Refer to Extract structured fields for more on using ingest pipelines to parse your log data.
-
-After creating your pipeline, specify the pipeline for filebeat in the `filebeat.yml` file:
-
-```yaml
-output.elasticsearch:
- hosts: ["your-projects-elasticsearch-endpoint"]
- api_key: "id:api_key"
- pipeline: "your-pipeline" [^1]
-```
-[^1]: Add the pipeline output and the name of your pipeline to the output.
-
-### Ingest logs with ((agent))
-
-Follow these steps to ingest and centrally manage your logs using ((agent)) and ((fleet)).
-
-#### Step 1: Add the custom logs integration to your project
-
-To add the custom logs integration to your project:
-
-1. In your ((observability)) project, go to **Project Settings** → **Integrations**.
-1. Type `custom` in the search bar and select **Custom Logs**.
-1. Click **Add Custom Logs**.
-1. Click **Install ((agent))** at the bottom of the page, and follow the instructions for your system to install the ((agent)).
-1. After installing the ((agent)), configure the integration from the **Add Custom Logs integration** page.
-1. Give your integration a meaningful name and description.
-1. Add the **Log file path**. For example, `/var/log/your-logs.log`.
-1. An agent policy is created that defines the data your ((agent)) collects. If you've previously installed an ((agent)) on the host you're collecting logs from, you can select the **Existing hosts** tab and use an existing agent policy.
-1. Click **Save and continue**.
-
-You can add additional settings to the integration under **Custom log file** by clicking **Advanced options** and adding YAML configurations to the **Custom configurations**. For example, the following settings would add a parser to manage messages that span multiple lines and add service fields. Service fields are used for Log correlation.
-
-```yaml
- parsers:
- - multiline:
- type: pattern
- pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
- negate: true
- match: after
- fields_under_root: true
- fields:
- service.name: your_service_name [^1]
- service.version: your_service_version [^1]
- service.environment: your_service_environment [^1]
-```
-[^1]: For Log correlation, add the `service.name` (required), `service.version` (optional), and `service.environment` (optional) of the service you're collecting logs from.
-
-#### Step 2: Add an ingest pipeline to your integration
-
-To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](((ecs-ref))/ecs-reference.html)-compatible fields.
-
-1. From the custom logs integration, select **Integration policies** tab.
-1. Select the integration policy you created in the previous section.
-1. Click **Change defaults** → **Advanced options**.
-1. Under **Ingest pipelines**, click **Add custom pipeline**.
-1. Create an ingest pipeline with a [dissect processor](((ref))/dissect-processor.html) to extract structured fields from your log messages.
-
- Click **Import processors** and add a similar JSON to the following example:
-
- ```JSON
- {
- "description": "Extracts the timestamp log level and host ip",
- "processors": [
- {
- "dissect": { [^1]
- "field": "message", [^2]
- "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" [^3]
- }
- }
- ]
- }
- ```
- [^1]: `processors.dissect`: Adds a [dissect processor](((ref))/dissect-processor.html) to extract structured fields from your log message.
- [^2]: `field`: The field you're extracting data from, `message` in this case.
- [^3]: `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{message}` are common [ECS](((ecs-ref))/ecs-reference.html) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.`
-1. Click **Create pipeline**.
-1. Save and deploy your integration.
-
-## Correlate logs
-
-Correlate your application logs with trace events to:
-
-* view the context of a log and the parameters provided by a user
-* view all logs belonging to a particular trace
-* easily move between logs and traces when debugging application issues
-
-Log correlation works on two levels:
-
-- At the service level: annotations with `service.name`, `service.version`, and `service.environment` allow you to link logs with APM services.
-- At the trace level: annotations with `trace.id` and `transaction.id` allow you to link logs with traces (see the sketch below).
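-
-For illustration only, a correlated log document might contain fields like the following after ingestion. The values here are made up, and how the fields are populated depends on your APM agent and logging framework (see the agent-specific guides below):
-
-```json
-{
-  "@timestamp": "2023-08-08T13:45:12.123Z",
-  "message": "Disk usage exceeds 90%.",
-  "log": { "level": "WARN" },
-  "service": {
-    "name": "your_service_name",
-    "version": "1.0.0",
-    "environment": "production"
-  },
-  "trace": { "id": "4bf92f3577b34da6a3ce929d0e0e4736" },
-  "transaction": { "id": "00f067aa0ba902b7" }
-}
-```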
-
-Learn about correlating plaintext logs in the agent-specific ingestion guides:
-
-* [Go](((apm-go-ref))/logs.html)
-* [Java](((apm-java-ref))/logs.html#log-correlation-ids)
-* [.NET](((apm-dotnet-ref))/log-correlation.html)
-* [Node.js](((apm-node-ref))/log-correlation.html)
-* [Python](((apm-py-ref))/logs.html#log-correlation-ids)
-* [Ruby](((apm-ruby-ref))/log-correlation.html)
-
-## View logs
-
-To view logs ingested by ((filebeat)), go to **Discover**. Create a data view based on the `filebeat-*` index pattern. Refer to [Create a data view](((kibana-ref))/data-views.html) for more information.
-
-To view logs ingested by ((agent)), go to **Discover** and select the **Logs Explorer** tab. Refer to the Filter and aggregate logs documentation for more on viewing and filtering your log data.
\ No newline at end of file
diff --git a/docs/en/serverless/logging/run-log-pattern-analysis.mdx b/docs/en/serverless/logging/run-log-pattern-analysis.mdx
deleted file mode 100644
index 7f335bac4f..0000000000
--- a/docs/en/serverless/logging/run-log-pattern-analysis.mdx
+++ /dev/null
@@ -1,38 +0,0 @@
----
-slug: /serverless/observability/run-log-pattern-analysis
-title: Run a pattern analysis on log data
-description: Find patterns in unstructured log messages.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-
-
-Log pattern analysis helps you find patterns in unstructured log messages and makes it easier to examine your data.
-When you run a pattern analysis, it performs categorization analysis on a selected field,
-creates categories based on the data, and then displays them together in a chart that shows the distribution of each category and an example document that matches the category.
-Log pattern analysis is useful when you want to examine how often different types of logs appear in your data set.
-It also helps you group logs in ways that go beyond what you can achieve with a terms aggregation.
-
-Log pattern analysis works on every text field.
-
-To run a log pattern analysis:
-
-1. In your ((observability)) project, go to **Discover** and select the **Logs Explorer** tab.
-
-1. Select an integration, for example **Elastic APM error_logs**, and apply any filters that you want.
-
-1. If you don't see any results, expand the time range, for example, to **Last 15 days**.
-
-1. In the **Available fields** list, select the text field you want to analyze, then click **Run pattern analysis**.
-
-
-
- The results of the analysis are shown in a table:
-
- 
-
-1. (Optional) Select one or more patterns, then choose to filter for (or filter out) documents that match the selected patterns.
-**Logs Explorer** only displays documents that match (or don't match) the selected patterns.
-The filter options enable you to remove unimportant messages and focus on the more important, actionable data during troubleshooting.
diff --git a/docs/en/serverless/logging/send-application-logs.mdx b/docs/en/serverless/logging/send-application-logs.mdx
deleted file mode 100644
index 63b43764be..0000000000
--- a/docs/en/serverless/logging/send-application-logs.mdx
+++ /dev/null
@@ -1,18 +0,0 @@
----
-slug: /serverless/observability/send-application-logs
-title: ((apm-agent)) log sending
-description: Use the Java ((apm-agent)) to capture and send logs.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import ApplicationLogsApmAgentLogSending from '../transclusion/observability/application-logs/apm-agent-log-sending.mdx'
-
-
-
-
-
-## Get started
-
-See the [Java agent](((apm-java-ref))/logs.html#log-sending) documentation to get started.
\ No newline at end of file
diff --git a/docs/en/serverless/logging/stream-log-files.mdx b/docs/en/serverless/logging/stream-log-files.mdx
deleted file mode 100644
index d7553b82d4..0000000000
--- a/docs/en/serverless/logging/stream-log-files.mdx
+++ /dev/null
@@ -1,289 +0,0 @@
----
-slug: /serverless/observability/stream-log-files
-title: Stream any log file
-description: Send a log file to your Observability project using the standalone ((agent)).
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import DownloadWidget from '../transclusion/fleet/tab-widgets/download-widget.mdx'
-import RunStandaloneWidget from '../transclusion/fleet/tab-widgets/run-standalone-widget.mdx'
-import AgentLocationWidget from '../transclusion/observability/tab-widgets/logs/agent-location/widget.mdx'
-import StopWidget from '../transclusion/fleet/tab-widgets/stop-widget.mdx'
-import StartWidget from '../transclusion/fleet/tab-widgets/start-widget.mdx'
-import Roles from '../partials/roles.mdx'
-
-
-
-This guide shows you how to send a log file to your Observability project using a standalone ((agent)), configure the ((agent)) and your data streams using the `elastic-agent.yml` file, and query your logs using the data streams you've set up.
-
-The quickest way to get started is to:
-
-1. Open your Observability project. If you don't have one, .
-1. Go to **Add Data**.
-1. Under **Collect and analyze logs**, click **Stream log files**.
-
-This will kick off a set of guided instructions that walk you through configuring the standalone ((agent)) and sending log data to your project.
-
-To install and configure the ((agent)) manually, refer to Manually install and configure the standalone ((agent)).
-
-## Configure inputs and integration
-
-Enter a few configuration details in the guided instructions.
-
-{/* Do we want to include a screenshot or will it be too difficult to maintain? */}
-
-
-**Configure inputs**
-
-* **Log file path**: The path to your log files.
- You can also use a pattern like `/var/log/your-logs.log*`.
- Click **Add row** to add more log file paths.
-
- This will be passed to the `paths` field in the generated `elastic-agent.yml` file in a future step.
-
-
-* **Service name**: Provide a service name to allow distributed services running on
-  multiple hosts to correlate the related instances.
-
-{/* Advanced settings? */}
-
-**Configure integration**
-
-Elastic creates an integration to streamline connecting your log data to Elastic.
-
-* **Integration name**: Give your integration a name.
- This is a unique identifier for your stream of log data that you can later use to filter data in Logs Explorer.
- The value must be unique within your project, all lowercase, and max 100 chars. Special characters will be replaced with `_`.
-
- This will be passed to the `streams.id` field in the generated `elastic-agent.yml` file in a future step.
-
- The integration name will be used in Logs Explorer.
- It will appear in the "All logs" dropdown menu.
-
-
-
-
-* **Dataset name**: Give your integration's dataset a name.
- The name for your dataset data stream. Name this data stream anything that signifies the source of the data.
- The value must be all lowercase and max 100 chars. Special characters will be replaced with `_`.
-
- This will be passed to the `data_stream.dataset` field in the generated `elastic-agent.yml` file in a future step.
-
-## Install the ((agent))
-
-After configuring the inputs and integration, you'll continue in the guided instructions to
-install and configure the standalone ((agent)).
-
-Run the command under **Install the ((agent))** that corresponds with your system to download, extract, and install the ((agent)).
-Turning on **Automatically download the agent's config** includes your updated ((agent)) configuration file in the download.
-
-If you do not want to automatically download the configuration, click **Download config file** to download it manually and
-add it to `/opt/Elastic/Agent/elastic-agent.yml` on the host where you installed the ((agent)).
-The values you provided in Configure inputs and integration will be prepopulated in the generated configuration file.
-
-
-
-## Manually install and configure the standalone ((agent))
-
-If you're not using the guided instructions, follow these steps to manually install and configure the ((agent)).
-
-### Step 1: Download and extract the ((agent)) installation package
-
-On your host, download and extract the installation package that corresponds with your system:
-
-
-
-### Step 2: Install and start the ((agent))
-After downloading and extracting the installation package, you're ready to install the ((agent)).
-From the agent directory, run the install command that corresponds with your system:
-
-
-On macOS, Linux (tar package), and Windows, run the `install` command to
-install and start ((agent)) as a managed service. The DEB and RPM
-packages include a service unit for Linux systems with
-systemd. For these systems, you must enable and start the service.
-
-
-
-
-
-
-During installation, you'll be prompted with some questions:
-
-1. When asked if you want to install the agent as a service, enter `Y`.
-1. When asked if you want to enroll the agent in Fleet, enter `n`.
-
-### Step 3: Configure the ((agent))
-
-After your agent is installed, configure it by updating the `elastic-agent.yml` file.
-
-#### Locate your configuration file
-
-You'll find the `elastic-agent.yml` in one of the following locations according to your system:
-
-
-
-#### Update your configuration file
-
-Update the default configuration in the `elastic-agent.yml` file manually.
-It should look something like this:
-
-```yaml
-outputs:
- default:
- type: elasticsearch
-    hosts: ['your-projects-elasticsearch-endpoint:443']
- api_key: 'your-api-key'
-inputs:
- - id: your-log-id
- type: filestream
- streams:
- - id: your-log-stream-id
- data_stream:
- dataset: example
- paths:
- - /var/log/your-logs.log
-```
-
-You need to set the values for the following fields:
-
-
-
- `hosts`
-
- Copy the ((es)) endpoint from your project's page and add the port (the default port is `443`). For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`.
-
- If you're following the guided instructions in your project,
- the ((es)) endpoint will be prepopulated in the configuration file.
-
-
- If you need to find your project's ((es)) endpoint outside the guided instructions:
-
- 1. Go to the **Projects** page that lists all your projects.
- 1. Click **Manage** next to the project you want to connect to.
- 1. Click **View** next to _Endpoints_.
- 1. Copy the _Elasticsearch endpoint_.
-
-
-
- 
-
-
-
-
- `api-key`
-
- Use an API key to grant the agent access to your project.
-  The API key format should be `id:api_key`.
-
- If you're following the guided instructions in your project, an API key will be autogenerated
- and will be prepopulated in the downloadable configuration file.
-
-
-
- If configuring the ((agent)) manually, create an API key:
-
- 1. Navigate to **Project settings** → **Management** → **API keys** and click **Create API key**.
- 1. Select **Restrict privileges** and add the following JSON to give privileges for ingesting logs.
- ```json
- {
- "standalone_agent": {
- "cluster": [
- "monitor"
- ],
- "indices": [
- {
- "names": [
- "logs-*-*"
- ],
- "privileges": [
- "auto_configure", "create_doc"
- ]
- }
- ]
- }
- }
- ```
- 1. You _must_ set the API key to configure ((beats)).
- Immediately after the API key is generated and while it is still being displayed, click the
- **Encoded** button next to the API key and select **Beats** from the list in the tooltip.
- Base64 encoded API keys are not currently supported in this configuration.
-
- 
-
-
-
- `inputs.id`
-
- A unique identifier for your input.
-
-
-
- `type`
-
- The type of input. For collecting logs, set this to `filestream`.
-
-
-
- `streams.id`
-
- A unique identifier for your stream of log data.
-
- If you're following the guided instructions in your project, this will be prepopulated with
- the value you specified in Configure inputs and integration.
-
-
-
- `data_stream.dataset`
-
- The name for your dataset data stream. Name this data stream anything that signifies the source of the data. In this configuration, the dataset is set to `example`. The default value is `generic`.
-
- If you're following the guided instructions in your project, this will be prepopulated with
- the value you specified in Configure inputs and integration.
-
-
-
- `paths`
-
- The path to your log files. You can also use a pattern like `/var/log/your-logs.log*`.
-
- If you're following the guided instructions in your project, this will be prepopulated with
- the value you specified in Configure inputs and integration.
-
-
-
-
-#### Restart the ((agent))
-
-After updating your configuration file, you need to restart the ((agent)).
-
-First, stop the ((agent)) and its related executables using the command that works with your system:
-
-
-
-
-
-Next, restart the ((agent)) using the command that works with your system:
-
-
-
-## Troubleshoot your ((agent)) configuration
-
-If you're not seeing your log files in your project, verify the following in the `elastic-agent.yml` file (a verification query sketch follows this list):
-
-- The path to your logs file under `paths` is correct.
-- Your API key is in `id:api_key` format. If not, your API key may be in an unsupported format, and you'll need to create an API key in **Beats** format.
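-
-After correcting the configuration, you can confirm whether documents are arriving by searching the data stream that matches your dataset. This minimal sketch assumes the `example` dataset and `default` namespace from the sample configuration earlier in this guide:
-
-```console
-GET logs-example-default/_search
-{
-  "size": 5,
-  "sort": [
-    { "@timestamp": "desc" }
-  ]
-}
-```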
-
-If you're still running into issues, refer to [((agent)) troubleshooting](((fleet-guide))/fleet-troubleshooting.html) and [Configure standalone Elastic Agents](((fleet-guide))/elastic-agent-configuration.html).
-
-## Next steps
-
-After you have your agent configured and are streaming log data to your project:
-
-- Refer to the Parse and organize logs documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data.
-- Refer to the Filter and aggregate logs documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently.
diff --git a/docs/en/serverless/logging/troubleshoot-logs.mdx b/docs/en/serverless/logging/troubleshoot-logs.mdx
deleted file mode 100644
index 5bf09218ee..0000000000
--- a/docs/en/serverless/logging/troubleshoot-logs.mdx
+++ /dev/null
@@ -1,113 +0,0 @@
----
-slug: /serverless/observability/troubleshoot-logs
-title: Troubleshoot logs
-description: Find solutions to errors you might encounter while onboarding your logs.
-tags: [ 'serverless', 'observability', 'troubleshooting' ]
----
-
-
-
-This section provides possible solutions for errors you might encounter while onboarding your logs.
-
-## User does not have permissions to create API key
-
-When adding new data using the guided instructions in your project (**Add data** → **Collect and analyze logs** → **Stream log files**),
-if you don't have the required privileges to create an API key, you'll see the following error message:
-
->You need permission to manage API keys
-
-### Solution
-
-You need to either:
-
-* Ask an administrator to update your user role to at least **Deployment access** → **Admin**. Read more about user roles in . After your user role is updated, restart the onboarding flow.
-* Get an API key from an administrator and manually add the API key to the ((agent)) configuration. See Configure the ((agent)) for more on manually updating the configuration and adding the API key.
-
-{/* Not sure if these are different in serverless... */}
-
-{/* ## Failed to create API key
-
-If you don't have the privileges to create `savedObjects` in a project, you'll see the following error message:
-
-```plaintext
-Failed to create API key
-
-Something went wrong: Unable to create observability-onboarding-state
-```
-
-### Solution
-
-You need an administrator to give you the `Saved Objects Management` ((kib)) privilege to generate the required `observability-onboarding-state` flow state.
-Once you have the necessary privileges, restart the onboarding flow. */}
-
-## Observability project not accessible from host
-
-If your Observability project is not accessible from the host, you'll see the following error message after pasting the **Install the ((agent))** instructions into the host:
-
-```plaintext
-Failed to connect to {host} port {port} after 0 ms: Connection refused
-```
-
-### Solution
-
-The host needs access to your project. Port `443` must be open and the project's ((es)) endpoint must be reachable. You can locate your project's endpoint by clicking the help icon () and selecting **Endpoints**. Run the following command, replacing the URL with your endpoint, and you should get an authentication error with more details on resolving your issue:
-
-```shell
-curl https://your-endpoint.elastic.cloud
-```
-
-## Download ((agent)) failed
-
-If the host was able to download the installation script but cannot connect to the public artifact repository, you'll see the following error message:
-
-```plaintext
-Download Elastic Agent
-
-Failed to download Elastic Agent, see script for error.
-```
-
-### Solutions
-
-* If the combination of the ((agent)) version and operating system architecture is not available, you'll see the following error message:
-
- ```plaintext
- The requested URL returned error: 404
- ```
-
- To fix this, update the ((agent)) version in the installation instructions to a known version of the ((agent)).
-
-* If the ((agent)) was fully downloaded previously, you'll see the following error message:
-
- ```plaintext
- Error: cannot perform installation as Elastic Agent is already running from this directory
- ```
-
- To fix this, delete previous downloads and restart the onboarding.
-
-* You're an Elastic Cloud Enterprise user without access to the Elastic downloads page.
-
-## Install ((agent)) failed
-
-If an ((agent)) already exists on your host, you'll see the following error message:
-
-```plaintext
-Install Elastic Agent
-
-Failed to install Elastic Agent, see script for error.
-```
-
-### Solution
-
-You can uninstall the current ((agent)) using the `elastic-agent uninstall` command, and run the script again.
-
-
-Uninstalling the current ((agent)) removes the entire current setup, including the existing configuration.
-
-
-## Waiting for Logs to be shipped... step never completes
-
-If the **Waiting for Logs to be shipped...** step never completes, logs are not being shipped to your Observability project, and there is most likely an issue with your ((agent)) configuration.
-
-### Solution
-
-Inspect the ((agent)) logs for errors. See the [Debug standalone ((agent))s](((fleet-guide))/debug-standalone-agents.html#inspect-standalone-agent-logs) documentation for more on finding errors in ((agent)) logs.
diff --git a/docs/en/serverless/logging/view-and-monitor-logs.mdx b/docs/en/serverless/logging/view-and-monitor-logs.mdx
deleted file mode 100644
index 1c8f2dc9c9..0000000000
--- a/docs/en/serverless/logging/view-and-monitor-logs.mdx
+++ /dev/null
@@ -1,90 +0,0 @@
----
-slug: /serverless/observability/discover-and-explore-logs
-title: Explore logs
-description: Visualize and analyze logs.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-With **Logs Explorer**, which is based on Discover, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization.
-You can also customize and save your searches and place them on a dashboard.
-Instead of having to log into different servers, change directories, and view individual files, all your logs are available in a single view.
-
-Go to Logs Explorer by opening **Discover** from the navigation menu, and selecting the **Logs Explorer** tab.
-
-
-
-## Required ((kib)) privileges
-
-Viewing data in Logs Explorer requires `read` privileges for **Discover** and **Integrations**.
-For more on assigning Kibana privileges, refer to the [((kib)) privileges](((kibana-ref))/kibana-privileges.html) docs.
-
-## Find your logs
-
-By default, Logs Explorer shows all of your logs according to the index patterns set in the **logs source** advanced setting.
-Update this setting by going to **Management** → **Advanced Settings** and searching for _logs source_.
-
-If you need to focus on logs from a specific integration, select the integration from the logs menu:
-
-
-
-Once you have the logs you want to focus on displayed, you can drill down further to find the information you need.
-For more on filtering your data in Logs Explorer, refer to Filter logs in Logs Explorer.
-
-## Review log data in the documents table
-
-The documents table in Logs Explorer functions similarly to the table in Discover.
-You can add fields, order table columns, sort fields, and update the row height in the same way you would in Discover.
-
-Refer to the [Discover](((kibana-ref))/discover.html) documentation for more information on updating the table.
-
-### Analyze data with smart fields
-
-Smart fields are dynamic fields that provide valuable insight on where your log documents come from, what information they contain, and how you can interact with them.
-The following sections detail the smart fields available in Logs Explorer.
-
-#### Resource smart field
-
-The resource smart field shows where your logs are coming from by displaying fields like `service.name`, `container.name`, `orchestrator.namespace`, `host.name`, and `cloud.instance.id`.
-Use this information to see where issues are coming from and if issues are coming from the same source.
-
-#### Content smart field
-
-The content smart field shows your logs' `log.level` and `message` fields.
-If neither of these fields are available, the content smart field will show the `error.message` or `event.original` field.
-Use this information to see your log content and inspect issues.
-
-#### Actions smart field
-
-The actions smart field provides access to additional information about your logs.
-
-**Expand:** () Open the log details to get an in-depth look at an individual log file.
-
-**Degraded document indicator:** () Shows if any of the document's fields were ignored when it was indexed.
-Ignored fields could indicate malformed fields or other issues with your document. Use this information to investigate and determine why fields are being ignored.
-
-**Stacktrace indicator:** () Shows if the document contains stack traces.
-This indicator makes it easier to navigate through your documents and know if they contain additional information in the form of stack traces.
-
-## View log details
-
-Click the expand icon () in the **Actions** column to get an in-depth look at an individual log file.
-
-These details provide immediate feedback and context for what's happening and where it's happening for each log.
-From here, you can quickly debug errors and investigate the services where errors have occurred.
-
-The following actions help you filter and focus on specific fields in the log details:
-
-* **Filter for value ():** Show logs that contain the specific field value.
-* **Filter out value ():** Show logs that do _not_ contain the specific field value.
-* **Filter for field present ():** Show logs that contain the specific field.
-* **Toggle column in table ():** Add or remove a column for the field to the main Logs Explorer table.
-
-## View log quality issues
-
-From the log details of a document with ignored fields, as shown by the degraded document indicator (()), expand the **Quality issues** section to see the name and value of the fields that were ignored.
-Select **Data set details** to open the **Data Set Quality** page. Here you can monitor your data sets and investigate any issues.
-
-The **Data Set Quality** page is also accessible from **Project settings** → **Management** → **Data Set Quality**.
-Refer to Monitor data sets for more information.
\ No newline at end of file
diff --git a/docs/en/serverless/monitor-datasets.mdx b/docs/en/serverless/monitor-datasets.mdx
deleted file mode 100644
index a6d14454cc..0000000000
--- a/docs/en/serverless/monitor-datasets.mdx
+++ /dev/null
@@ -1,63 +0,0 @@
----
-id: serverlessObservabilityMonitorDatasets
-slug: /serverless/observability/monitor-datasets
-title: Data set quality monitoring
-description: Monitor data sets to find degraded documents.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-The **Data Set Quality** page provides an overview of your log, metric, trace, and synthetic data sets.
-Use this information to get an idea of your overall data set quality and find data sets that contain incorrectly parsed documents.
-
-Access the Data Set Quality page from the main menu at **Project settings** → **Management** → **Data Set Quality**.
-By default, the page only shows log data sets. To see other data set types, select them from the **Type** menu.
-
-
- Users with the `viewer` role can view the Data Set Quality summary. To view the Active Data Sets and Estimated Data summaries, users need the `monitor` [index privilege](((ref))/security-privileges.html#privileges-list-indices) for the `logs-*-*` index.
-
-
-The quality of your data sets is based on the percentage of degraded documents in each data set.
-A degraded document in a data set contains the [`_ignored`](((ref))/mapping-ignored-field.html) property because one or more of its fields were ignored during indexing.
-Fields are ignored for a variety of reasons.
-For example, when the [`ignore_malformed`](((ref))/mapping-ignored-field.html) parameter is set to `true` and a document field contains the wrong data type, the malformed field is ignored and the rest of the document is indexed.
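-
-For instance, the following hedged sketch (the index name, field name, and endpoint are hypothetical, and authentication headers are omitted) creates an index whose numeric field tolerates malformed values and then indexes a document with the wrong type; the document is stored, but the bad field is listed in `_ignored`:
-
-```shell
-# Create an index where a numeric field tolerates malformed values.
-curl -X PUT "https://your-endpoint.elastic.cloud/my-index" \
-  -H "Content-Type: application/json" \
-  -d '{"mappings": {"properties": {"response_time_ms": {"type": "integer", "ignore_malformed": true}}}}'
-
-# This document is accepted, but response_time_ms is ignored and the document is flagged as degraded.
-curl -X POST "https://your-endpoint.elastic.cloud/my-index/_doc" \
-  -H "Content-Type: application/json" \
-  -d '{"@timestamp": "2025-01-01T00:00:00Z", "response_time_ms": "fast"}'
-```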
-
-From the data set table, you'll find information for each data set such as its namespace, when the data set was last active, and the percentage of degraded docs.
-The percentage of degraded documents determines the data set's quality according to the following scale:
-
-* Good (): 0% of the documents in the data set are degraded.
-* Degraded (): Greater than 0% and up to 3% of the documents in the data set are degraded.
-* Poor (): Greater than 3% of the documents in the data set are degraded.
-
-Opening the details of a specific data set shows the degraded documents history, a summary for the data set, and other details that can help you determine if you need to investigate any issues.
-
-## Investigate issues
-The Data Set Quality page has a couple of different ways to help you find ignored fields and investigate issues.
-From the data set table, you can open the data set's details page, and view commonly ignored fields and information about those fields.
-Open a logs data set in Logs Explorer or other data set types in Discover to find ignored fields in individual documents.
-
-### Find ignored fields in data sets
-To open the details page for a data set with poor or degraded quality and view ignored fields:
-
-1. From the data set table, click next to a data set with poor or degraded quality.
-1. From the details, scroll down to **Quality issues**.
-
-The **Quality issues** section shows the fields that have been ignored, the number of documents that contain ignored fields, and the timestamp of the last occurrence of each field being ignored.
-
-### Find ignored fields in individual logs
-To use Logs Explorer or Discover to find ignored fields in individual logs:
-
-1. Find data sets with degraded documents using the **Degraded Docs** column of the data sets table.
-1. Click the percentage in the **Degraded Docs** column to open the data set in Logs Explorer or Discover.
-
-The **Documents** table in Logs Explorer or Discover is automatically filtered to show documents that were not parsed correctly.
-Under the **actions** column, you'll find the degraded document icon ().
-
-Now that you know which documents contain ignored fields, examine them more closely to find the origin of the issue:
-
-1. Under the **actions** column, click to open the document details.
-1. Select the **JSON** tab.
-1. Scroll towards the end of the JSON to find the `ignored_field_values`.
-
-Here, you'll find all of the `_ignored` fields in the document and their values, which should provide some clues as to why the fields were ignored.
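-
-You can also search for these documents directly with an `exists` query on the `_ignored` metadata field. This is a hedged sketch; the data stream pattern and endpoint are placeholders, and authentication headers are omitted:
-
-```shell
-# Return up to 10 documents from your logs data streams that have at least one ignored field.
-curl -X GET "https://your-endpoint.elastic.cloud/logs-*-*/_search" \
-  -H "Content-Type: application/json" \
-  -d '{"query": {"exists": {"field": "_ignored"}}, "size": 10}'
-```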
\ No newline at end of file
diff --git a/docs/en/serverless/observability-overview.mdx b/docs/en/serverless/observability-overview.mdx
deleted file mode 100644
index 49871d18a3..0000000000
--- a/docs/en/serverless/observability-overview.mdx
+++ /dev/null
@@ -1,136 +0,0 @@
----
-slug: /serverless/observability/serverless-observability-overview
-title: Observability overview
-description: Learn how to accelerate problem resolution with open, flexible, and unified observability powered by advanced machine learning and analytics.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-
-
-((observability)) provides granular insights and context into the behavior of applications running in your environments.
-It's an important part of any system that you build and want to monitor.
-Being able to quickly detect and fix the root cause of events within an observable system is a minimum requirement for any analyst.
-
-((observability)) provides a single stack to unify your logs, metrics, and application traces.
-Ingest your data directly to your Observability project, where you can further process and enhance the data,
-before visualizing it and adding alerts.
-
-
-
-
-
-## Log monitoring
-
-Analyze log data from your hosts, services, Kubernetes, Apache, and many more.
-
-In **Logs Explorer** (powered by Discover), you can quickly search and filter your log data,
-get information about the structure of the fields, and display your findings in a visualization.
-
-
-
-Learn more about log monitoring →
-
-
-
-{/* RUM is not supported for this release. */}
-
-{/* Synthetic monitoring is not supported for this release. */}
-
-{/* Universal Profiling is not supported for this release. */}
-
-## Application performance monitoring (APM)
-
-Instrument your code and collect performance data and errors at runtime by installing APM agents for languages like Java, Go, .NET, and many more.
-Then use ((observability)) to monitor your software services and applications in real time:
-
-* Visualize detailed performance information on your services.
-* Identify and analyze errors.
-* Monitor host-level and APM agent-specific metrics like JVM and Go runtime metrics.
-
-The **Service** inventory provides a quick, high-level overview of the health and general performance of all instrumented services.
-
-
-
-Learn more about Application performance monitoring (APM) →
-
-
-
-## Infrastructure monitoring
-
-Monitor system and service metrics from your servers, Docker, Kubernetes, Prometheus, and other services and applications.
-
-The **Infrastructure** UI provides a couple of ways to view and analyze metrics across your infrastructure:
-
-The **Inventory** page provides a view of your infrastructure grouped by resource type.
-
-
-
-The **Hosts** page provides a dashboard-like view of your infrastructure and is backed by an easy-to-use interface called Lens.
-
-
-
-From either page, you can view health and performance metrics to get visibility into the overall health of your infrastructure.
-You can also drill down into details about a specific host, including performance metrics, host metadata, running processes,
-and logs.
-
-Learn more about infrastructure monitoring →
-
-## Synthetic monitoring
-
-Simulate actions and requests that an end user would perform on your site at predefined intervals and in a controlled environment.
-The end result is rich, consistent, and repeatable data that you can trend and alert on.
-
-For more information, see Synthetic monitoring.
-
-## Alerting
-
-Stay aware of potential issues in your environments with ((observability))’s alerting
-and actions feature that integrates with log monitoring and APM.
-It provides a set of built-in actions and specific threshold rules
-and enables central management of all rules.
-
-On the **Alerts** page, the **Alerts** table provides a snapshot of alerts occurring within the specified time frame. The table includes the alert status, when it was last updated, the reason for the alert, and more.
-
-
-
-Learn more about alerting →
-
-## Service-level objectives (SLOs)
-
-Set clear, measurable targets for your service performance,
-based on factors like availability, response times, error rates, and other key metrics.
-Then monitor and track your SLOs in real time,
-using detailed dashboards and alerts that help you quickly identify and troubleshoot issues.
-
-From the SLO overview list, you can see all of your SLOs and a quick summary of what’s happening in each one:
-
-
-
-Learn more about SLOs →
-
-## Cases
-
-Collect and share information about observability issues by creating cases.
-Cases allow you to track key investigation details,
-add assignees and tags to your cases, set their severity and status, and add alerts,
-comments, and visualizations. You can also send cases to third-party systems,
-such as ServiceNow and Jira.
-
-
-
-Learn more about cases →
-
-## AIOps
-
-Reduce the time and effort required to detect, understand, investigate, and resolve incidents at scale
-by leveraging predictive analytics and machine learning:
-
-* Detect anomalies by comparing real-time and historical data from different sources to look for unusual, problematic patterns.
-* Find and investigate the causes of unusual spikes or drops in log rates.
-* Detect distribution changes, trend changes, and other statistically significant change points in a metric of your time series data.
-
-
-
-Learn more about AIOps →
diff --git a/docs/en/serverless/partials/apm-agent-warning.mdx b/docs/en/serverless/partials/apm-agent-warning.mdx
deleted file mode 100644
index 49b4b09d1d..0000000000
--- a/docs/en/serverless/partials/apm-agent-warning.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-
- Not all APM agent configuration options are compatible with Elastic Cloud serverless.
-
\ No newline at end of file
diff --git a/docs/en/serverless/partials/feature-beta.mdx b/docs/en/serverless/partials/feature-beta.mdx
deleted file mode 100644
index 3736786360..0000000000
--- a/docs/en/serverless/partials/feature-beta.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-
- The {props.feature} functionality is in beta and is subject to change. The design and code is less mature than official generally available features and is being provided as-is with no warranties.
-
\ No newline at end of file
diff --git a/docs/en/serverless/partials/roles.mdx b/docs/en/serverless/partials/roles.mdx
deleted file mode 100644
index d7d302aa28..0000000000
--- a/docs/en/serverless/partials/roles.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-
- The **{props.role}** role or higher is required to {props.goal}. To learn more, refer to .
-
\ No newline at end of file
diff --git a/docs/en/serverless/projects/billing.mdx b/docs/en/serverless/projects/billing.mdx
deleted file mode 100644
index f3b6f3f4d8..0000000000
--- a/docs/en/serverless/projects/billing.mdx
+++ /dev/null
@@ -1,24 +0,0 @@
----
-slug: /serverless/observability/observability-billing
-title: Observability billing dimensions
-description: Learn about how Observability usage affects pricing.
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-Elastic Observability serverless projects provide you with all the capabilities of Elastic Observability to monitor critical applications.
-Projects are provided using a Software as a Service (SaaS) model, and pricing is entirely consumption-based.
-
-Your monthly bill is based on the capabilities you use.
-When you use Elastic Observability, your bill is calculated based on data volume, which has these components:
-
-* **Ingest** — Measured by the number of GB of log/event/info data that you send to your Observability project over the course of a month.
-* **Storage/Retention** — Measured by the data you retain in the Search AI Lake.
-* **Synthetic monitor execution** — In addition to the core ingest and retention dimensions, there is an optional charge to execute synthetic monitors on our testing infrastructure.
-Browser (journey) tests are charged on a per-test-run basis,
-and ping (lightweight) tests are charged per location on an all-you-can-use basis.
-
-For more information, refer to .
-
-For detailed Observability serverless project rates, check the [Observability Serverless pricing page](https://www.elastic.co/pricing/serverless-observability).
diff --git a/docs/en/serverless/projects/create-an-observability-project.mdx b/docs/en/serverless/projects/create-an-observability-project.mdx
deleted file mode 100644
index dac7e20c1b..0000000000
--- a/docs/en/serverless/projects/create-an-observability-project.mdx
+++ /dev/null
@@ -1,39 +0,0 @@
----
-slug: /serverless/observability/create-an-observability-project
-title: Create an ((observability)) project
-description: Create a fully-managed ((observability)) project to monitor the health of your applications.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-import Roles from '../partials/roles.mdx'
-
-
-
-
-
-An ((observability)) project allows you to run ((observability)) in an autoscaled and fully-managed environment,
-where you don't have to manage the underlying ((es)) cluster or ((kib)) instances.
-
-1. Navigate to [cloud.elastic.co](https://cloud.elastic.co/) and log in to your account.
-1. Within **Serverless projects**, click **Create project**.
-1. Under **Observability**, click **Next**.
-1. Enter a name for your project.
-1. (Optional) Click **Edit settings** to change your project settings:
- * **Cloud provider**: The cloud platform where you’ll deploy your project. We currently support Amazon Web Services (AWS).
-    * **Region**: The region where your project will live.
-1. Click **Create project**. It takes a few minutes to create your project.
-1. When the project is ready, click **Continue**.
-
-From here, you can start adding logs and other observability data.
-
-
- To return to the onboarding page later, select **Add data** from the main menu.
-
-
-## Next steps
-
-Learn how to add data to your project and start using ((observability)) features:
-
-*
-*
-*
diff --git a/docs/en/serverless/quickstarts/k8s-logs-metrics.mdx b/docs/en/serverless/quickstarts/k8s-logs-metrics.mdx
deleted file mode 100644
index 62ca87950e..0000000000
--- a/docs/en/serverless/quickstarts/k8s-logs-metrics.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
----
-slug: /serverless/observability/quickstarts/k8s-logs-metrics
-title: Monitor your Kubernetes cluster with Elastic Agent
-description: Learn how to monitor your cluster infrastructure running on Kubernetes.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-In this quickstart guide, you'll learn how to create the Kubernetes resources that are required to monitor your cluster infrastructure.
-
-This new approach requires minimal configuration and provides an easy way to set up infrastructure monitoring. You no longer need to download, install, or configure the Elastic Agent yourself; everything happens automatically when you run the kubectl command.
-
-The kubectl command installs the standalone Elastic Agent in your Kubernetes cluster and creates the Kubernetes resources needed to collect metrics from the cluster and send them to Elastic.
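-
-The exact command is generated for you in the UI and includes credentials for your project, so treat the following only as a hedged illustration of its general shape (the manifest file name is hypothetical, and the default manifests deploy into the `kube-system` namespace):
-
-```shell
-# Create the Elastic Agent resources described in the downloaded manifest.
-kubectl apply -f elastic-agent-standalone-kubernetes.yaml
-
-# Verify that the Elastic Agent pods are running.
-kubectl get pods -n kube-system | grep elastic-agent
-```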
-
-## Prerequisites
-
-- An ((observability)) project. To learn more, refer to .
-- A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to .
-- A running Kubernetes cluster.
-- [Kubectl](https://kubernetes.io/docs/reference/kubectl/).
-
-## Collect your data
-
-1. Create a new ((observability)) project, or open an existing one.
-1. In your ((observability)) project, go to **Add Data**.
-1. Select **Monitor infrastructure**, and then select **Kubernetes**.
-
- 
-1. To install the Elastic Agent on your host, copy and run the install command.
-
-    You will use the kubectl command to download a manifest file, inject the API key generated by Kibana, and create the Kubernetes resources.
-
-1. Go back to the **Add Observability Data** page.
- There might be a slight delay before data is ingested. When ready, you will see the message **We are monitoring your cluster**.
-
-1. Click **Explore Kubernetes cluster** to navigate to dashboards and explore your data.
-
-## Visualize your data
-
-After installation is complete and all relevant data is flowing into Elastic,
-the **Visualize your data** section allows you to access the Kubernetes Cluster Overview dashboard that can be used to monitor the health of the cluster.
-
-
-
-You can also access other useful prebuilt dashboards for monitoring Kubernetes resources, such as the pods running in each namespace and the CPU and memory they consume.
-
-Refer to for a description of other useful features.
diff --git a/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.mdx b/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.mdx
deleted file mode 100644
index 72f1ecf6e1..0000000000
--- a/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.mdx
+++ /dev/null
@@ -1,115 +0,0 @@
----
-slug: /serverless/observability/quickstarts/monitor-hosts-with-elastic-agent
-title: Monitor hosts with ((agent))
-description: Learn how to scan your hosts to detect and collect logs and metrics.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-In this quickstart guide, you'll learn how to scan your host to detect and collect logs and metrics,
-then navigate to dashboards to further analyze and explore your observability data.
-You'll also learn how to get value out of your observability data.
-
-To scan your host, you'll run an auto-detection script that downloads and installs ((agent)),
-which is used to collect observability data from the host and send it to Elastic.
-
-The script also generates an ((agent)) configuration file that you can use with your existing Infrastructure-as-Code tooling.
-
-## Prerequisites
-
-- An ((observability)) project. To learn more, refer to .
-- A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to .
-- Root privileges on the host—required to run the auto-detection script used in this quickstart.
-
-## Limitations
-
-- The auto-detection script currently scans for metrics and logs from Apache, Docker, Nginx, and the host system.
- It also scans for custom log files.
-- The auto-detection script works on Linux and macOS only. Support for the `lsof` command is also required if you want to detect custom log files.
-- If you've installed Apache or Nginx in a non-standard location, you'll need to specify log file paths manually when you run the scan.
-- Because Docker Desktop runs in a VM, its logs are not auto-detected.
-
-## Collect your data
-
-1. Create a new ((observability)) project, or open an existing one.
-1. In your ((observability)) project, go to **Add Data**.
-1. Select **Collect and analyze logs**, and then select **Auto-detect logs and metrics**.
-1. Copy the command that's shown. For example:
-
- 
-
- You'll run this command to download the auto-detection script and scan your system for observability data.
-1. Open a terminal on the host you want to scan, and run the command.
-1. Review the list of log files:
- - Enter `Y` to ingest all the log files listed.
- - Enter `n` to either exclude log files or specify additional log paths. Enter `Y` to confirm your selections.
-
- When the script is done, you'll see a message like "((agent)) is configured and running."
-
-There might be a slight delay before logs and other data are ingested.
-
-
- You can re-run the script on the same host to detect additional logs.
- The script will scan the host and reconfigure ((agent)) with any additional logs that are found.
- If the script misses any custom logs, you can add them manually by entering `n` after the script has finished scanning the host.
-
-
-## Visualize your data
-
-After installation is complete and all relevant data is flowing into Elastic,
-the **Visualize your data** section will show links to assets you can use to analyze your data.
-Depending on what type of observability data was collected,
-the page may link to the following integration assets:
-
-
-
- **System**
- Prebuilt dashboard for monitoring host status and health using system metrics.
-
-
- **Apache**
- Prebuilt dashboard for monitoring Apache HTTP server health using error and access log data.
-
-
- **Docker**
- Prebuilt dashboard for monitoring the status and health of Docker containers.
-
-
- **Nginx**
- Prebuilt dashboard for monitoring Nginx server health using error and access log data.
-
-
- **Custom .log files**
- Logs Explorer for analyzing custom logs.
-
-
-
-For example, you can navigate the **Host overview** dashboard to explore detailed metrics about system usage and throughput.
-Metrics that indicate a possible problem are highlighted in red.
-
-
-
-## Get value out of your data
-
-After using the dashboards to examine your data and confirm you've ingested all the host logs and metrics you want to monitor,
-you can use ((observability)) to gain deeper insight into your data.
-
-For host monitoring, the following capabilities and features are recommended:
-
-- In the Infrastructure UI, analyze and compare data collected from your hosts.
-You can also:
- - Detect anomalies for memory usage and network traffic on hosts.
- - Create alerts that notify you when an anomaly is detected or a metric exceeds a given value.
-- In the Logs Explorer, search and filter your log data,
-get information about the structure of log fields, and display your findings in a visualization.
-You can also:
- - Monitor log data set quality to find degraded documents.
- - Run a pattern analysis to find patterns in unstructured log messages.
- - Create alerts that notify you when an Observability data type reaches or exceeds a given value.
-- Use AIOps features to apply predictive analytics and machine learning to your data:
- - Detect anomalies by comparing real-time and historical data from different sources to look for unusual, problematic patterns.
- - Analyze log spikes and drops.
- - Detect change points in your time series data.
-
-Refer to for a description of other useful features.
diff --git a/docs/en/serverless/quickstarts/overview.mdx b/docs/en/serverless/quickstarts/overview.mdx
deleted file mode 100644
index 970addc46f..0000000000
--- a/docs/en/serverless/quickstarts/overview.mdx
+++ /dev/null
@@ -1,20 +0,0 @@
----
-slug: /serverless/observability/quickstarts/overview
-title: Quickstarts
-description: Learn how to ingest your observability data and get immediate value.
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-Our quickstarts dramatically reduce your time-to-value by offering a fast path to ingest and visualize your Observability data.
-Each quickstart provides:
-
-- A highly opinionated, fast path to data ingestion
-- Sensible configuration defaults with minimal configuration required
-- Auto-detection of logs and metrics for monitoring hosts
-- Quick access to related dashboards and visualizations
-
-## Available quickstarts
-
--
--
-
diff --git a/docs/en/serverless/serverless-observability.docnav.json b/docs/en/serverless/serverless-observability.docnav.json
deleted file mode 100644
index ed45485402..0000000000
--- a/docs/en/serverless/serverless-observability.docnav.json
+++ /dev/null
@@ -1,667 +0,0 @@
-{
- "mission": "Elastic Observability",
- "id": "serverless-observability",
- "landingPageSlug": "/serverless/observability/what-is-observability-serverless",
- "icon": "logoObservability",
- "description": "Description to be written",
- "items": [
- {
- "slug": "/serverless/observability/serverless-observability-overview",
- "classic-sources": [ "enObservabilityObservabilityIntroduction" ],
- "classic-skip": true
- },
- {
- "slug": "/serverless/observability/quickstarts/overview",
- "items": [
- {
- "slug": "/serverless/observability/quickstarts/monitor-hosts-with-elastic-agent"
- },
- {
- "slug": "/serverless/observability/quickstarts/k8s-logs-metrics"
- }
- ]
- },
- {
- "slug": "/serverless/observability/observability-billing"
- },
- {
- "label": "Create an Observability project",
- "slug": "/serverless/observability/create-an-observability-project"
- },
- {
- "slug": "/serverless/observability/log-monitoring",
- "classic-sources": ["enObservabilityLogsObservabilityOverview"],
- "items": [
- {
- "slug": "/serverless/observability/get-started-with-logs"
- },
- {
- "slug": "/serverless/observability/stream-log-files",
- "classic-sources": ["enObservabilityLogsStream"]
- },
- {
- "slug": "/serverless/observability/correlate-application-logs",
- "classic-sources": [ "enObservabilityApplicationLogs" ],
- "items": [
- {
- "slug": "/serverless/observability/plaintext-application-logs",
- "classic-sources": [
- "enObservabilityPlaintextLogs"
- ]
- },
- {
- "slug": "/serverless/observability/ecs-application-logs",
- "classic-sources": [
- "enObservabilityEcsLoggingLogs"
- ]
- },
- {
- "slug": "/serverless/observability/send-application-logs",
- "classic-sources": [
- "enObservabilityApmAgentLogSending"
- ]
- }
- ]
- },
- {
- "slug": "/serverless/observability/parse-log-data",
- "classic-sources": ["enObservabilityLogsParse"]
- },
- {
- "slug": "/serverless/observability/filter-and-aggregate-logs",
- "classic-sources": ["enObservabilityLogsFilterAndAggregate"]
- },
- {
- "slug": "/serverless/observability/discover-and-explore-logs",
- "classic-sources": ["enObservabilityMonitorLogs"],
- "classic-skip": true
- },
- {
- "slug": "/serverless/observability/add-logs-service-name",
- "classic-sources": ["enObservabilityAddLogsServiceName"],
- "classic-skip": true
- },
- {
- "slug": "/serverless/observability/run-log-pattern-analysis",
- "classic-sources": ["enKibanaRunPatternAnalysisDiscover"]
- },
- {
- "slug": "/serverless/observability/troubleshoot-logs",
- "classic-sources": ["enObservabilityLogsTroubleshooting"]
- }
- ]
- },
- {
- "label": "Inventory",
- "slug": "/serverless/observability/inventory"
- },
- {
- "slug": "/serverless/observability/apm",
- "classic-sources": [ "enApmGuideApmOverview" ],
- "items": [
- {
- "slug": "/serverless/observability/apm-get-started",
- "classic-sources": [ "enObservabilityIngestTraces" ],
- "classic-skip": true
- },
- {
- "slug": "/serverless/observability/apm-send-data-to-elastic",
- "classic-sources": [],
- "items": [
- {
- "slug": "/serverless/observability/apm-agents-elastic-apm-agents"
- },
- {
- "slug": "/serverless/observability/apm-agents-opentelemetry",
- "classic-sources": [ "enApmGuideOpenTelemetry" ],
- "items": [
- {
- "slug": "/serverless/observability/apm-agents-opentelemetry-opentelemetry-native-support",
- "classic-sources": [
- "enApmGuideOpenTelemetryDirect"
- ]
- },
- {
- "slug": "/serverless/observability/apm-agents-opentelemetry-collect-metrics",
- "classic-sources": [
- "enApmGuideOpenTelemetryCollectMetrics"
- ]
- },
- {
- "slug": "/serverless/observability/apm-agents-opentelemetry-limitations",
- "classic-sources": [
- "enApmGuideOpenTelemetryKnownLimitations"
- ]
- },
- {
- "slug": "/serverless/observability/apm-agents-opentelemetry-resource-attributes",
- "classic-sources": [
- "enApmGuideOpenTelemetryResourceAttributes"
- ]
- }
- ]
- },
- {
- "slug": "/serverless/observability/apm-agents-aws-lambda-functions",
- "classic-sources": [
- "enApmGuideMonitoringAwsLambda",
- "enApmLambdaAwsLambdaArch"
- ]
- }
- ]
- },
- {
- "slug": "/serverless/observability/apm-view-and-analyze-traces",
- "classic-sources": [
- "enKibanaXpackApm"
- ],
- "items": [
- {
- "slug": "/serverless/observability/apm-find-transaction-latency-and-failure-correlations",
- "classic-sources": [
- "enKibanaCorrelations"
- ]
- },
- {
- "slug": "/serverless/observability/apm-integrate-with-machine-learning",
- "classic-sources": [
- "enKibanaMachineLearningIntegration"
- ]
- },
- {
- "slug": "/serverless/observability/apm-create-custom-links",
- "classic-sources": [
- "enKibanaCustomLinks"
- ]
- },
- {
- "slug": "/serverless/observability/apm-track-deployments-with-annotations",
- "classic-sources": [
- "enKibanaTransactionsAnnotations"
- ]
- },
- {
- "slug": "/serverless/observability/apm-query-your-data",
- "classic-sources": [
- "enKibanaAdvancedQueries"
- ]
- },
- {
- "slug": "/serverless/observability/apm-filter-your-data",
- "classic-sources": [
- "enKibanaFilters"
- ]
- },
- {
- "slug": "/serverless/observability/apm-observe-lambda-functions",
- "classic-sources": [
- "enKibanaApmLambda"
- ]
- },
- {
- "slug": "/serverless/observability/apm-ui-overview",
- "classic-sources": [
- "enKibanaApmGettingStarted"
- ],
- "items": [
- {
- "slug": "/serverless/observability/apm-services",
- "classic-sources": [
- "enKibanaServices"
- ]
- },
- {
- "slug": "/serverless/observability/apm-traces",
- "classic-sources": [
- "enKibanaTraces"
- ]
- },
- {
- "slug": "/serverless/observability/apm-dependencies",
- "classic-sources": [
- "enKibanaDependencies"
- ]
- },
- {
- "slug": "/serverless/observability/apm-service-map",
- "classic-sources": [
- "enKibanaServiceMaps"
- ]
- },
- {
- "slug": "/serverless/observability/apm-service-overview",
- "classic-sources": [
- "enKibanaServiceOverview"
- ]
- },
- {
- "slug": "/serverless/observability/apm-transactions",
- "classic-sources": [
- "enKibanaTransactions"
- ]
- },
- {
- "slug": "/serverless/observability/apm-trace-sample-timeline",
- "classic-sources": [
- "enKibanaSpans"
- ]
- },
- {
- "slug": "/serverless/observability/apm-errors",
- "classic-sources": [
- "enKibanaErrors"
- ]
- },
- {
- "slug": "/serverless/observability/apm-metrics",
- "classic-sources": [
- "enKibanaMetrics"
- ]
- },
- {
- "slug": "/serverless/observability/apm-infrastructure",
- "classic-sources": [
- "enKibanaInfrastructure"
- ]
- }, {
- "slug": "/serverless/observability/apm-logs",
- "classic-sources": [
- "enKibanaLogs"
- ]
- }
- ]
- }
- ]
- },
- {
- "slug": "/serverless/observability/apm-data-types",
- "classic-sources": [ "" ]
- },
- {
- "slug": "/serverless/observability/apm-distributed-tracing",
- "classic-sources": [
- "enApmGuideApmDistributedTracing"
- ]
- },
- {
- "slug": "/serverless/observability/apm-reduce-your-data-usage",
- "classic-sources": [ "" ],
- "items": [
- {
- "slug": "/serverless/observability/apm-transaction-sampling",
- "classic-sources": [
- "enApmGuideSampling",
- "enApmGuideConfigureHeadBasedSampling"
- ]
- },
- {
- "slug": "/serverless/observability/apm-compress-spans",
- "classic-sources": [
- "enApmGuideSpanCompression"
- ]
- },
- {
- "slug": "/serverless/observability/apm-stacktrace-collection"
- }
- ]
- },
- {
- "slug": "/serverless/observability/apm-keep-data-secure",
- "classic-sources": [ "enApmGuideSecureAgentCommunication" ]
- },
- {
- "slug": "/serverless/observability/apm-troubleshooting",
- "classic-sources": [
- "enApmGuideTroubleshootApm",
- "enApmGuideCommonProblems",
- "enApmGuideServerEsDown",
- "enApmGuideCommonResponseCodes",
- "enApmGuideProcessingAndPerformance"
- ]
- },
- {
- "slug": "/serverless/observability/apm-reference",
- "classic-sources": [],
- "items": [
- {
- "slug": "/serverless/observability/apm-kibana-settings"
- },
- {
- "slug": "/serverless/observability/apm-server-api",
- "classic-sources": [
- "enApmGuideApi",
- "enApmGuideApiEvents",
- "enApmGuideApiMetadata",
- "enApmGuideApiTransaction",
- "enApmGuideApiSpan",
- "enApmGuideApiError",
- "enApmGuideApiMetricset",
- "enApmGuideApiConfig",
- "enApmGuideApiInfo",
- "enApmGuideApiOtlp"
- ]
- }
- ]
- }
- ]
- },
- {
- "slug": "/serverless/observability/infrastructure-monitoring",
- "classic-sources": ["enObservabilityAnalyzeMetrics"],
- "items": [
- {
- "slug": "/serverless/observability/get-started-with-metrics"
- },
- {
- "slug": "/serverless/observability/view-infrastructure-metrics",
- "classic-sources": ["enObservabilityViewInfrastructureMetrics"]
- },
- {
- "slug": "/serverless/observability/analyze-hosts",
- "classic-sources": ["enObservabilityAnalyzeHosts"]
- },
- {
- "slug": "/serverless/observability/detect-metric-anomalies",
- "classic-sources": ["enObservabilityInspectMetricAnomalies"]
- },
- {
- "slug": "/serverless/observability/configure-intra-settings",
- "classic-sources": ["enObservabilityConfigureSettings"]
- },
- {
- "slug": "/serverless/observability/troubleshooting-infrastructure-monitoring",
- "items": [
- {
- "slug": "/serverless/observability/handle-no-results-found-message"
- }
- ]
- },
- {
- "slug": "/serverless/observability/metrics-reference",
- "classic-sources": ["enObservabilityMetricsReference"],
- "items": [
- {
- "slug": "/serverless/observability/host-metrics",
- "classic-sources": ["enObservabilityHostMetrics"]
- },
- {
- "slug": "/serverless/observability/container-metrics",
- "classic-sources": ["enObservabilityDockerContainerMetrics"]
- },
- {
- "slug": "/serverless/observability/kubernetes-pod-metrics",
- "classic-sources": ["enObservabilityKubernetesPodMetrics"]
- },
- {
- "slug": "/serverless/observability/aws-metrics",
- "classic-sources": ["enObservabilityAwsMetrics"]
- }
- ]
- },
- {
- "slug": "/serverless/observability/infrastructure-monitoring-required-fields",
- "classic-sources": ["enObservabilityMetricsAppFields"]
- }
- ]
- },
- {
- "label": "Synthetic monitoring",
- "slug": "/serverless/observability/monitor-synthetics",
- "classic-sources": ["enObservabilityMonitorUptimeSynthetics"],
- "items": [
- {
- "label": "Get started",
- "slug": "/serverless/observability/synthetics-get-started",
- "classic-sources": ["enObservabilitySyntheticsGetStarted"],
- "items": [
- {
- "label": "Use a Synthetics project",
- "slug": "/serverless/observability/synthetics-get-started-project",
- "classic-sources": ["enObservabilitySyntheticsGetStartedProject"]
- },
- {
- "label": "Use the Synthetics UI",
- "slug": "/serverless/observability/synthetics-get-started-ui",
- "classic-sources": ["enObservabilitySyntheticsGetStartedUi"]
- }
- ]
- },
- {
- "label": "Scripting browser monitors",
- "slug": "/serverless/observability/synthetics-journeys",
- "classic-sources": ["enObservabilitySyntheticsJourneys"],
- "items": [
- {
- "label": "Write a synthetic test",
- "slug": "/serverless/observability/synthetics-create-test",
- "classic-sources": ["enObservabilitySyntheticsCreateTest"]
- },
- {
- "label": "Configure individual monitors",
- "slug": "/serverless/observability/synthetics-monitor-use",
- "classic-sources": ["enObservabilitySyntheticsMonitorUse"]
- },
- {
- "label": "Use the Synthetics Recorder",
- "slug": "/serverless/observability/synthetics-recorder",
- "classic-sources": ["enObservabilitySyntheticsRecorder"]
- }
- ]
- },
- {
- "label": "Configure lightweight monitors",
- "slug": "/serverless/observability/synthetics-lightweight",
- "classic-sources": ["enObservabilitySyntheticsLightweight"]
- },
- {
- "label": "Manage monitors",
- "slug": "/serverless/observability/synthetics-manage-monitors",
- "classic-sources": ["enObservabilitySyntheticsManageMonitors"]
- },
- {
- "label": "Work with params and secrets",
- "slug": "/serverless/observability/synthetics-params-secrets",
- "classic-sources": ["enObservabilitySyntheticsParamsSecrets"]
- },
- {
- "label": "Analyze monitor data",
- "slug": "/serverless/observability/synthetics-analyze",
- "classic-sources": ["enObservabilitySyntheticsAnalyze"]
- },
- {
- "label": "Monitor resources on private networks",
- "slug": "/serverless/observability/synthetics-private-location",
- "classic-sources": ["enObservabilitySyntheticsPrivateLocation"]
- },
- {
- "label": "Use the CLI",
- "slug": "/serverless/observability/synthetics-command-reference",
- "classic-sources": ["enObservabilitySyntheticsCommandReference"]
- },
- {
- "label": "Configure a Synthetics project",
- "slug": "/serverless/observability/synthetics-configuration",
- "classic-sources": ["enObservabilitySyntheticsConfiguration"]
- },
- {
- "label": "Multifactor Authentication for browser monitors",
- "slug": "/serverless/observability/synthetics-mfa",
- "classic-sources": ["enObservabilitySyntheticsMFA"]
- },
- {
- "label": "Configure Synthetics settings",
- "slug": "/serverless/observability/synthetics-settings",
- "classic-sources": ["enObservabilitySyntheticsSettings"]
- },
- {
- "label": "Grant users access to secured resources",
- "slug": "/serverless/observability/synthetics-feature-roles",
- "classic-sources": ["enObservabilitySyntheticsFeatureRoles"]
- },
- {
- "label": "Manage data retention",
- "slug": "/serverless/observability/synthetics-manage-retention",
- "classic-sources": ["enObservabilitySyntheticsManageRetention"]
- },
- {
- "label": "Scale and architect a deployment",
- "slug": "/serverless/observability/synthetics-scale-and-architect",
- "classic-sources": ["enObservabilitySyntheticsScaleAndArchitect"]
- },
- {
- "label": "Synthetics Encryption and Security",
- "slug": "/serverless/observability/synthetics-security-encryption",
- "classic-sources": ["enObservabilitySyntheticsSecurityEncryption"]
- },
- {
- "label": "Troubleshooting",
- "slug": "/serverless/observability/synthetics-troubleshooting",
- "classic-sources": ["enObservabilitySyntheticsTroubleshooting"]
- }
- ]
- },
- {
- "slug": "/serverless/observability/dashboards"
- },
- {
- "slug": "/serverless/observability/alerting",
- "classic-sources": ["enObservabilityCreateAlerts"],
- "items": [
- {
- "slug": "/serverless/observability/create-manage-rules",
- "classic-sources": ["enKibanaCreateAndManageRules"],
- "items": [
- {
- "label": "Anomaly detection",
- "slug": "/serverless/observability/aiops-generate-anomaly-alerts"
- },
- {
- "label": "APM anomaly",
- "slug": "/serverless/observability/create-anomaly-alert-rule"
- },
- {
- "label": "Custom threshold",
- "slug": "/serverless/observability/create-custom-threshold-alert-rule"
- },
- {
- "label": "Elasticsearch query",
- "slug": "/serverless/observability/create-elasticsearch-query-rule",
- "classic-sources": ["enKibanaRuleTypeEsQuery"]
- },
- {
- "label": "Error count threshold",
- "slug": "/serverless/observability/create-error-count-threshold-alert-rule"
- },
- {
- "label": "Failed transaction rate threshold",
- "slug": "/serverless/observability/create-failed-transaction-rate-threshold-alert-rule"
- },
- {
- "label": "Inventory",
- "slug": "/serverless/observability/create-inventory-threshold-alert-rule",
- "classic-sources": ["enObservabilityInfrastructureThresholdAlert"]
- },
- {
- "label": "Latency threshold",
- "slug": "/serverless/observability/create-latency-threshold-alert-rule"
- },
- {
- "label": "SLO burn rate",
- "slug": "/serverless/observability/create-slo-burn-rate-alert-rule",
- "classic-sources": [ "enObservabilitySloBurnRateAlert" ]
- },
- {
- "label": "Synthetic monitor status",
- "slug": "/serverless/observability/monitor-status-alert"
- }
- ]
- },
- {
- "slug": "/serverless/observability/aggregationOptions",
- "items": [
- {
- "slug": "/serverless/observability/rateAggregation"
- }
- ]
- },
- {
- "slug": "/serverless/observability/view-alerts",
- "classic-sources": ["enObservabilityViewObservabilityAlerts"],
- "items": [
- {
- "slug": "/serverless/observability/triage-slo-burn-rate-breaches",
- "label": "SLO burn rate breaches"
- },
- {
- "slug": "/serverless/observability/triage-threshold-breaches",
- "label": "Threshold breaches"
- }
- ]
- }
- ]
- },
- {
- "slug": "/serverless/observability/slos",
- "classic-sources": [ "enObservabilitySlo" ],
- "items": [
- {
- "slug": "/serverless/observability/create-an-slo",
- "classic-sources": [ "enObservabilitySloCreate" ]
- }
- ]
- },
- {
- "slug": "/serverless/observability/cases",
- "classic-sources": [ "enObservabilityCreateCases" ],
- "items": [
- {
- "slug": "/serverless/observability/create-a-new-case",
- "classic-sources": [ "enObservabilityManageCases" ]
- },
- {
- "slug": "/serverless/observability/case-settings"
- }
- ]
- },
- {
- "slug": "/serverless/observability/aiops",
- "items": [
- {
- "slug": "/serverless/observability/aiops-detect-anomalies",
- "classic-sources": [ "enMachineLearningMlAdFindingAnomalies" ],
- "classic-skip": true,
- "items": [
- {
- "slug": "/serverless/observability/aiops-tune-anomaly-detection-job"
- },
- {
- "slug": "/serverless/observability/aiops-forecast-anomalies"
- }
- ]
- },
- {
- "slug": "/serverless/observability/aiops-analyze-spikes",
- "classic-sources": [ "enKibanaXpackMlAiops" ]
- },
- {
- "slug": "/serverless/observability/aiops-detect-change-points"
- }
- ]
- },
- {
- "slug": "/serverless/observability/monitor-datasets",
- "classic-sources": ["enObservabilityMonitorDatasets"],
- "classic-skip": true
- },
- {
- "slug": "/serverless/observability/ai-assistant",
- "classic-sources": [ "enObservabilityObsAiAssistant" ]
- },
- {
- "slug": "/serverless/observability/elastic-entity-model"
- },
- {
- "slug": "/serverless/observability/observability-technical-preview-limitations"
- }
- ]
-}
\ No newline at end of file
diff --git a/docs/en/serverless/slos/create-an-slo.mdx b/docs/en/serverless/slos/create-an-slo.mdx
deleted file mode 100644
index ef8e04a8aa..0000000000
--- a/docs/en/serverless/slos/create-an-slo.mdx
+++ /dev/null
@@ -1,237 +0,0 @@
----
-slug: /serverless/observability/create-an-slo
-title: Create an SLO
-description: Learn how to define a service-level indicator (SLI), set an objective, and create a service-level objective (SLO).
-tags: [ 'serverless', 'observability', 'how-to' ]
----
-
-
-
-import Roles from '../partials/roles.mdx'
-
-
-
-To create an SLO, in your ((observability)) project, go to **Observability** → **SLOs**:
-
-* If you're creating your first SLO, you'll see an introductory page. Click the **Create SLO** button.
-* If you've created SLOs before, click the **Create new SLO** button in the upper-right corner of the page.
-
-From here, complete the following steps:
-
-1. Define your service-level indicator (SLI).
-1. Set your objectives.
-1. Describe your SLO.
-
-
-
-## Define your SLI
-
-The type of SLI to use depends on the location of your data:
-
-* Custom KQL: Create an SLI based on raw logs coming from your services.
-* Timeslice metric: Create an SLI based on a custom equation that uses multiple aggregations.
-* Custom metric: Create an SLI to define custom equations from metric fields in your indices.
-* Histogram metric: Create an SLI based on histogram metrics.
-* APM latency and APM availability: Create an SLI based on services using application performance monitoring (APM).
-
-
-
-### Custom KQL
-
-Create an indicator based on any of your ((es)) indices or data views. You define two queries: one that yields the good events from your index, and one that yields the total events from your index.
-
-**Example:** You can define a custom KQL indicator based on the `service-logs` index with the **good query** defined as `nested.field.response.latency <= 100 and nested.field.env : "production"` and the **total query** defined as `nested.field.env : "production"`.
-
-When defining a custom KQL SLI, set the following fields:
-
-* **Index:** The data view or index pattern you want to base the SLI on. For example, `service-logs`.
-* **Timestamp field:** The timestamp field used by the index.
-* **Query filter:** A KQL filter to specify relevant criteria by which to filter the index documents.
-* **Good query:** The query yielding events that are considered good or successful. For example, `nested.field.response.latency <= 100 and nested.field.env : "production"`.
-* **Total query:** The query yielding all events to take into account for computing the SLI. For example, `nested.field.env : "production"`.
-* **Group by:** The field used to group the data based on the values of the specific field. For example, you could group by the `url.domain` field, which would create individual SLOs for each value of the selected field.
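-
-If you prefer to create this kind of SLO programmatically, the SLO API accepts a `sli.kql.custom` indicator. The following is a hedged sketch only: the project URL, API key, index, and queries are placeholders, and the exact request fields may vary with your project version.
-
-```shell
-curl -X POST "https://your-project.kb.elastic.cloud/api/observability/slos" \
-  -H "Authorization: ApiKey YOUR_API_KEY" \
-  -H "kbn-xsrf: true" \
-  -H "Content-Type: application/json" \
-  -d '{
-    "name": "Production latency under 100ms",
-    "description": "Ratio of fast requests to all requests in production",
-    "indicator": {
-      "type": "sli.kql.custom",
-      "params": {
-        "index": "service-logs",
-        "timestampField": "@timestamp",
-        "good": "nested.field.response.latency <= 100 and nested.field.env : \"production\"",
-        "total": "nested.field.env : \"production\""
-      }
-    },
-    "timeWindow": { "duration": "30d", "type": "rolling" },
-    "budgetingMethod": "occurrences",
-    "objective": { "target": 0.99 }
-  }'
-```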
-
-
-
-### Custom metric
-
-Create an indicator to define custom equations from metric fields in your indices.
-
-**Example:** You can define **Good events** as the sum of the field `processor.processed` with a filter of `"processor.outcome: \"success\""`, and the **Total events** as the sum of `processor.processed` with a filter of `"processor.outcome: *"`.
-
-When defining a custom metric SLI, set the following fields:
-
-* **Source**
- * **Index:** The data view or index pattern you want to base the SLI on. For example, `my-service-*`.
- * **Timestamp field:** The timestamp field used by the index.
- * **Query filter:** A KQL filter to specify relevant criteria by which to filter the index documents. For example, `'field.environment : "production" and service.name : "my-service"'`.
-* **Good events**
- * **Metric [A-Z]:** The field that is aggregated using the `sum` aggregation for good events. For example, `processor.processed`.
- * **Filter [A-Z]:** The filter to apply to the metric for good events. For example, `"processor.outcome: \"success\""`.
- * **Equation:** The equation that calculates the good metric. For example, `A`.
-* **Total events**
- * **Metric [A-Z]:** The field that is aggregated using the `sum` aggregation for total events. For example, `processor.processed`.
- * **Filter [A-Z]:** The filter to apply to the metric for total events. For example, `"processor.outcome: *"`.
- * **Equation:** The equation that calculates the total metric. For example, `A`.
-* **Group by:** The field used to group the data based on the values of the specific field. For example, you could group by the `url.domain` field, which would create individual SLOs for each value of the selected field.
-
-
-
-### Timeslice metric
-
-Create an indicator based on a custom equation that uses statistical aggregations and a threshold to determine whether a slice is good or bad.
-Supported aggregations include `Average`, `Max`, `Min`, `Sum`, `Cardinality`, `Last value`, `Std. deviation`, `Doc count`, and `Percentile`.
-The equation supports basic math and logic.
-
-
- This indicator requires you to use the `Timeslices` budgeting method.
-
-
-**Example:** You can define an indicator to determine whether a Kubernetes StatefulSet is healthy.
-First you set the query filter to `orchestrator.cluster.name: "elastic-k8s" AND kubernetes.namespace: "my-ns" AND data_stream.dataset: "kubernetes.state_statefulset"`.
-Then you define an equation that compares the number of ready (healthy) replicas to the number of observed replicas:
-`A == B ? 1 : 0`, where `A` retrieves the last value of `kubernetes.statefulset.replicas.ready` and `B` retrieves the last value of `kubernetes.statefulset.replicas.observed`.
-The equation returns `1` if the condition `A == B` is true (indicating the same number of replicas) or `0` if it's false. If the value is less than 1, you can determine that the Kubernetes StatefulSet is unhealthy.
-
-When defining a timeslice metric SLI, set the following fields:
-
-* **Source**
- * **Index:** The data view or index pattern you want to base the SLI on. For example, `metrics-*:metrics-*`.
- * **Timestamp field:** The timestamp field used by the index.
- * **Query filter:** A KQL filter to specify relevant criteria by which to filter the index documents. For example, `orchestrator.cluster.name: "elastic-k8s" AND kubernetes.namespace: "my-ns" AND data_stream.dataset: "kubernetes.state_statefulset"`.
-* **Metric definition**
- * **Aggregation [A-Z]:** The type of aggregation to use.
- * **Field [A-Z]:** The field to use in the aggregation. For example, `kubernetes.statefulset.replicas.ready`.
- * **Filter [A-Z]:** The filter to apply to the metric.
- * **Equation:** The equation that calculates the total metric. For example, `A == B ? 1 : 0`.
- * **Comparator:** The type of comparison to perform.
- * **Threshold:** The value to use along with the comparator to determine if the slice is good or bad.
-
-
-
-### Histogram metric
-
-Histograms record data in a compressed format and can record latency and delay metrics. You can create an SLI based on histogram metrics using a `range` aggregation or a `value_count` aggregation for both the good and total events. Filtering with KQL queries is supported on both event types.
-
-When using a `range` aggregation, both the `from` and `to` thresholds are required, and the event count is the number of events that fall within that range. The range includes the `from` value and excludes the `to` value.
-
-**Example:** You can define your **Good events** using the `processor.latency` field with a filter of `"processor.outcome: \"success\""`, and your **Total events** using the `processor.latency` field with a filter of `"processor.outcome: *"`.
-
-When defining a histogram metric SLI, set the following fields:
-
-* **Source**
- * **Index:** The data view or index pattern you want to base the SLI on. For example, `my-service-*`.
- * **Timestamp field:** The timestamp field used by the index.
- * **Query filter:** A KQL filter to specify relevant criteria by which to filter the index documents. For example, `field.environment : "production" and service.name : "my-service"`.
-* **Good events**
- * **Aggregation:** The type of aggregation to use for good events, either **Value count** or **Range**.
- * **Field:** The field used to aggregate events considered good or successful. For example, `processor.latency`.
- * **From:** (`range` aggregation only) The starting value of the range for good events. For example, `0`.
- * **To:** (`range` aggregation only) The ending value of the range for good events. For example, `100`.
- * **KQL filter:** The filter for good events. For example, `"processor.outcome: \"success\""`.
-* **Total events**
- * **Aggregation:** The type of aggregation to use for total events, either **Value count** or **Range**.
- * **Field:** The field used to aggregate total events. For example, `processor.latency`.
- * **From:** (`range` aggregation only) The starting value of the range for total events. For example, `0`.
- * **To:** (`range` aggregation only) The ending value of the range for total events. For example, `100`.
- * **KQL filter:** The filter for total events. For example, `"processor.outcome : *"`.
-* **Group by:** The field used to group the data based on the values of the specific field. For example, you could group by the `url.domain` field, which would create individual SLOs for each value of the selected field.
-
-
-
-### APM latency and APM availability
-
-There are two types of SLI you can create based on services using application performance monitoring (APM): APM latency and APM availability.
-
-Use **APM latency** to create an indicator based on latency data received from your instrumented services and a latency threshold.
-
-**Example:** You can define an indicator on an APM service named `banking-service` for the `production` environment, and the transaction name `POST /deposit` with a latency threshold value of 300ms.
-
-Use **APM availability** to create an indicator based on the availability of your instrumented services.
-Availability is determined by calculating the percentage of successful transactions (`event.outcome : "success"`) out of the total number of successful and failed transactions—unknown outcomes are excluded.
-
-**Example:** You can define an indicator on an APM service named `search-service` for the `production` environment, and the transaction name `POST /search`.
-
-When defining either an APM latency or APM availability SLI, set the following fields:
-
-* **Service name:** The APM service name.
-* **Service environment:** Either `all` or the specific environment.
-* **Transaction type:** Either `all` or the specific transaction type.
-* **Transaction name:** Either `all` or the specific transaction name.
-* **Threshold (APM latency only):** The latency threshold in milliseconds (ms) to consider the request as good.
-* **Query filter:** An optional query filter on the APM data.
-
-
-
-### Synthetics availability
-
-Create an indicator based on the availability of your synthetic monitors.
-Availability is determined by calculating the percentage of checks that are successful (`monitor.status : "up"`)
-out of the total number of checks.
-
-**Example:** You can define an indicator based on an HTTP monitor being "up" for at least 99% of the time.
-
-When defining a Synthetics availability SLI, set the following fields:
-
-* **Monitor name** — The name of one or more synthetic monitors.
-* **Project** — The ID of one or more projects containing synthetic monitors.
-* **Tags** — One or more tags assigned to synthetic monitors.
-* **Query filter** — An optional KQL query used to filter the Synthetics checks on some relevant criteria.
-
-
- Synthetics availability SLIs are automatically grouped by monitor and location.
-
-
-
-
-## Set your objectives
-
-After defining your SLI, set your objectives by completing the following:
-
-1. Select your budgeting method
-1. Set your time window
-1. Set your target/SLO percentage
-
-
-
-### Set your time window and duration
-
-Select the durations over which you want to compute your SLO. You can select either a **rolling** or **calendar aligned** time window:
-
-| | |
-|---|---|
-| **Rolling** | Uses data from a specified duration that depends on when the SLO was created, for example the last 30 days. |
-| **Calendar aligned** | Uses data from a specified duration that aligns with the calendar, for example weekly or monthly. |
-
-
-
-### Select your budgeting method
-
-You can select either an **occurrences** or a **timeslices** budgeting method:
-
-| | |
-|---|---|
-| **Occurrences** | Uses the number of good events and the number of total events to compute the SLI. |
-| **Timeslices** | Breaks the overall time window into smaller slices of a defined duration, and uses the number of good slices over the number of total slices to compute the SLI. |
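-
-To make the difference concrete, here is a small sketch with made-up numbers contrasting the two methods over the same window (the slice duration and counts are assumptions, not defaults):
-
-```js
-// Occurrences: ratio of good events to total events across the whole time window.
-const occurrencesSli = 9850 / 10000; // 98.5%
-
-// Timeslices: the window is cut into fixed-duration slices, each slice is judged
-// against its own objective, and the SLI is good slices over total slices.
-const slices = [true, true, false, true, true, true, true, false, true, true];
-const timeslicesSli = slices.filter(Boolean).length / slices.length; // 8 / 10 = 80%
-```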
-
-
-
-### Set your target/SLO (%)
-
-Set the target objective for your SLO as a percentage.
-
-
-
-## Describe your SLO
-
-After setting your objectives, give your SLO a name, a short description, and add any relevant tags.
-
-
-
-## SLO burn rate alert rule
-
-When you use the UI to create an SLO, a default SLO burn rate alert rule is created automatically.
-The burn rate rule will use the default configuration and no connector.
-You must configure a connector if you want to receive alerts for SLO breaches.
-
-For more information about configuring the rule, see Create an SLO burn rate rule.
diff --git a/docs/en/serverless/slos/slos.mdx b/docs/en/serverless/slos/slos.mdx
deleted file mode 100644
index 3e104c3498..0000000000
--- a/docs/en/serverless/slos/slos.mdx
+++ /dev/null
@@ -1,90 +0,0 @@
----
-slug: /serverless/observability/slos
-title: SLOs
-description: Set clear, measurable targets for your service performance with service-level objectives (SLOs).
-tags: [ 'serverless', 'observability', 'overview' ]
----
-
-
-
-Service-level objectives (SLOs) allow you to set clear, measurable targets for your service performance, based on factors like availability, response times, error rates, and other key metrics.
-You can define SLOs based on different types of data sources, such as custom KQL queries and APM latency or availability data.
-
-Once you've defined your SLOs, you can monitor them in real time, with detailed dashboards and alerts that help you quickly identify and troubleshoot any issues that may arise.
-You can also track your progress against your SLO targets over time, with a clear view of your error budgets and burn rates.
-
-
-
-## Important concepts
-The following table lists some important concepts related to SLOs:
-
-| | |
-|---|---|
-| **Service-level indicator (SLI)** | The measurement of your service's performance, such as service latency or availability. |
-| **SLO** | The target you set for your SLI. It specifies the level of performance you expect from your service over a period of time. |
-| **Error budget** | The amount of time that your SLI can fail to meet the SLO target before it violates your SLO. |
-| **Burn rate** | The rate at which your service consumes your error budget. |
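-
-These concepts are related by some simple arithmetic. A minimal sketch, assuming a 99% target (the numbers are illustrative, not defaults):
-
-```js
-const target = 0.99;                 // SLO: 99% of events should be good
-const errorBudget = 1 - target;      // 1% of events may fail before the SLO is violated
-const observedSli = 0.985;           // measured SLI over some lookback period
-const burnRate = (1 - observedSli) / errorBudget; // 1.5: consuming the budget 1.5 times faster than the SLO allows
-```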
-
-
-
-## SLO overview
-
-From the SLO overview, you can see all of your SLOs and a quick summary of what's happening in each one:
-
-
-
-Select an SLO from the overview to see additional details including:
-
-* **Burn rate:** the percentage of bad events over different time periods (1h, 6h, 24h, 72h) and the risk of exhausting your error budget within those time periods.
-* **Historical SLI:** the SLI value and how it's trending over the SLO time window.
-* **Error budget burn down:** the remaining error budget and how it's trending over the SLO time window.
-* **Alerts:** active alerts if you've set any SLO burn rate alert rules for the SLO.
-
-
-
-
-
-## Search and filter SLOs
-
-You can apply searches and filters to quickly find the SLOs you're interested in.
-
-
-
-* **Apply structured filters:** Next to the search field, click the **Add filter** icon to add a custom filter. Notice that you can use `OR` and `AND` to combine filters. The structured filter can be disabled, inverted, or pinned across all apps.
-* **Enter a semi-structured search:** In the search field, start typing a field name to get suggestions for field names and operators that you can use to build a structured query. The semi-structured search returns only the SLOs that match your query.
-* Use the **Status** and **Tags** menus to include or exclude SLOs from the view based on the status or defined tags.
-
-There are also options to sort and group the SLOs displayed in the overview:
-
-
-
-* **Sort by**: SLI value, SLO status, Error budget consumed, or Error budget remaining.
-* **Group by**: None, Tags, Status, or SLI type.
-* Click icons to switch between a card view (), list view (), or compact view ().
-
-## SLO dashboard panels
-
-SLO data is also available as Dashboard _panels_.
-Panels allow you to curate custom data views and visualizations to bring clarity to your data.
-
-Available SLO panels include:
-
-* **SLO Overview**: Visualize a selected SLO's health, including name, current SLI value, target, and status.
-* **SLO Alerts**: Visualize one or more SLO alerts, including status, rule name, duration, and reason. In addition, configure and update alerts, or create cases directly from the panel.
-
-
-
-To learn more about Dashboards, see Dashboards.
-
-
-
-## Next steps
-
-Get started using SLOs to measure your service performance:
-
-{/* TODO: Find out if any special privileges are required to grant access to SLOs and document as required. Classic doclink was Configure SLO access */}
-
-*
-*
-*
-*
diff --git a/docs/en/serverless/synthetics/synthetics-analyze.mdx b/docs/en/serverless/synthetics/synthetics-analyze.mdx
deleted file mode 100644
index 9f0be4eb41..0000000000
--- a/docs/en/serverless/synthetics/synthetics-analyze.mdx
+++ /dev/null
@@ -1,372 +0,0 @@
----
-slug: /serverless/observability/synthetics-analyze
-title: Analyze data from synthetic monitors
-# description: Description to be written
-tags: []
----
-
-
-
-
-
-The Synthetics UI in your Observability project provides a high-level overview of your service's
-availability and lets you dig into the details to diagnose what caused downtime.
-
-
-
-## Overview
-
-The Synthetics **Overview** tab provides you with a high-level view of all the services you are monitoring
-to help you quickly diagnose outages and other connectivity issues within your network.
-
-To access this page in your Observability project, go to **Synthetics** → **Overview**.
-
-This overview includes a snapshot of the current status of all monitors, the number of errors that
-occurred over the last 6 hours, and the number of alerts over the last 12 hours.
-All monitors created using a Synthetics project or using the UI will be listed below with information
-about the location, current status, and duration average.
-
-
-
-When you use a single monitor configuration to create monitors in multiple locations, each location
-is listed as a separate monitor because each location runs as an individual monitor, and the status
-and duration average can vary by location.
-
-
-
-
-
-To get started with your analysis in the Overview tab, you can search for monitors or
-use the filter options including current status (up, down, or disabled),
-monitor type (for example, journey or HTTP), location, and more.
-
-Then click an individual monitor to see some details in a flyout.
-From there, you can click **Go to monitor** to go to an individual monitor's page
-to see more details (as described below).
-
-
-
-## All monitor types
-
-When you go to an individual monitor's page, you'll see much more detail about the monitor's
-performance over time. The details vary by monitor type, but for every monitor at the top of the
-page you'll see:
-
-* The monitor's **name** with a down arrow icon that you can use to quickly move between monitors.
-* The **location** of the monitor. If the same monitor configuration was used to create monitors in
- multiple locations, you'll also see a down arrow icon that you can use to quickly move between
- locations that use the same configuration.
-
-* The latest **status** and when the monitor was **last run**.
-* The ** Run test manually** button that allows you to run the test on
- demand before the next scheduled run.
-
-
-
- This is only available for monitors running on Elastic's global managed testing infrastructure.
- It is not available for monitors running on ((private-location))s.
-
-
-
-* The ** Edit monitor** button that allows you to edit the monitor's
- configuration.
-
-
-
-Each individual monitor's page has three tabs: Overview, History, and Errors.
-
-
-
-### Overview
-
-The **Overview** tab has information about the monitor availability, duration, and any errors
-that have occurred since the monitor was created.
-The _Duration trends_ chart displays the timing for each check that was performed in the last 30 days.
-This visualization helps you gain insight into how quickly requests are resolved by the targeted endpoint
-and gives you a sense of how frequently a host or endpoint was down.
-
-
-
-
-
-### History
-
-The **History** tab has information on every time the monitor has run.
-It includes some high-level stats and a complete list of all test runs.
-Use the calendar icon () and search bar
-to filter for runs that occurred in a specific time period.
-
-{/* What you might do with this info */}
-{/* ... */}
-
-For browser monitors, you can click on any run in the **Test runs** list
-to see the details for that run. Read more about what information is
-included in the Details for one run section below.
-
-
-
-If the monitor is configured to retest on failure,
-you'll see retests listed in the **Test runs** table. Runs that are retests include a
-rerun icon () next to the result badge.
-
-
-
-
-
-### Errors
-
-The **Errors** tab has information on failed runs.
-If the monitor is configured to retest on failure,
-failed runs will only result in an error if both the initial run and the rerun fail.
-This can reduce noise related to transient problems.
-
-The Errors tab includes a high-level overview of all alerts and a complete list of all failures.
-Use the calendar icon () and search bar
-to filter for runs that occurred in a specific time period.
-
-{/* What you might do with this info */}
-{/* ... */}
-
-For browser monitors, you can click on any run in the **Error** list
-to open an **Error details** page that includes most of the same information
-that is included in the Details for one run section below.
-
-
-
-
-
-## Browser monitors
-
-For browser monitors, you can look at results at various levels of granularity:
-
-* See an overview of journey runs over time.
-* Drill down into the details of a single run.
-* Drill down further into the details of a single _step_ within a journey.
-
-
-
-### Journey runs over time
-
-The journey page on the Overview tab includes:
-
-* An overview of the **last test run** including high-level information for each step.
-* **Alerts** to date including both active and recovered alerts.
-* **Duration by step** over the last 24 hours.
-* A list of the **last 10 test runs** that link to the details for each run.
-
-
-
-From here, you can either drill down into:
-
-* The latest run of the full journey by clicking ** View test run**
- or a past run in the list of **Last 10 test runs**.
- This will take you to the view described below in Details for one run.
-
-* An individual step in this run by clicking the performance breakdown icon
- () next to one of the steps.
- This will take you to the view described below in Details for one step.
-
-
-
-### Details for one run
-
-The page detailing one run for a journey includes more information on each step in the current run
-and opportunities to compare each step to the same step in previous runs.
-
-{/* What info it includes */}
-At the top of the page, see the _Code executed_ and any _Console_ output for each step.
-If the step failed, this will also include a _Stacktrace_ tab that you can use to
-diagnose the cause of errors.
-
-Navigate through each step using ** Previous** and
-**Next **.
-
-{/* Screenshot of the viz */}
-
-
-{/* What info it includes */}
-Scroll down to dig into the steps in this journey run.
-Click the icon next to the step number to show details.
-The details include metrics for the step in the current run and the step in the last successful run.
-Read more about step-level metrics below in Timing and
-Metrics.
-
-{/* What you might do with this info */}
-This is particularly useful to compare the metrics for a failed step to the last time it completed successfully
-when trying to diagnose the reason it failed.
-
-{/* Screenshot of the viz */}
-
-
-Drill down to see even more details for an individual step by clicking the performance breakdown icon
-() next to one of the steps.
-This will take you to the view described below in Details for one step.
-
-
-
-### Details for one step
-
-After clicking the performance breakdown icon ()
-you'll see more detail for an individual step.
-
-
-
-#### Screenshot
-
-{/* What info it includes */}
-By default, the synthetics library captures a screenshot for each step, regardless of
-whether the step completed or failed.
-
-
-
-Customize screenshot behavior for all monitors in the configuration file,
-for one monitor using `monitor.use`, or for a run using
-the CLI.
-
-
-
-{/* What you might do with this info */}
-Screenshots can be particularly helpful to identify what went wrong when a step fails because of a change to the UI.
-You can compare the failed step to the last time the step successfully completed.
-
-{/* Screenshot of the viz */}
-
-
-
-
-#### Timing
-
-The **Timing** visualization shows a breakdown of the time spent in each part of
-the resource loading process for the step including:
-
-* **Blocked**: The request was initiated but is blocked or queued.
-* **DNS**: The DNS lookup to convert the hostname to an IP Address.
-* **Connect**: The time it took the request to connect to the server.
- Lengthy connections could indicate network issues, connection errors, or an overloaded server.
-
-* **TLS**: If your page is loading resources securely over TLS, this is the time it took to set up that connection.
-* **Wait**: The time it took for the response generated by the server to be received by the browser.
- A lengthy Waiting (TTFB) time could indicate server-side issues.
-
-* **Receive**: The time it took to receive the response from the server,
- which can be impacted by the size of the response.
-
-* **Send**: The time spent sending the request data to the server.
-
-Next to each network timing metric, there's an icon that indicates whether the value is
-higher (),
-lower (),
-or the same ()
-compared to the median of all runs in the last 24 hours.
-Hover over the icon to see more details in a tooltip.
-
-{/* What you might do with this info */}
-This gives you an overview of how much time is spent (and how that time is spent) loading resources.
-This high-level information may not help you diagnose a problem on its own, but it could act as a
-signal to look at more granular information in the Network requests section.
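-
-For reference, these phases correspond roughly to the browser's Resource Timing entries. The mapping below is an approximation for illustration only, not the exact formulas the Synthetics UI uses:
-
-```js
-// Rough mapping of the timing phases onto a PerformanceResourceTiming entry (browser-side).
-// Assumes the page has loaded at least one resource.
-const [entry] = performance.getEntriesByType('resource');
-const timings = {
-  dns: entry.domainLookupEnd - entry.domainLookupStart,
-  connect: entry.connectEnd - entry.connectStart, // includes TLS when present
-  tls: entry.secureConnectionStart > 0 ? entry.connectEnd - entry.secureConnectionStart : 0,
-  wait: entry.responseStart - entry.requestStart, // time to first byte
-  receive: entry.responseEnd - entry.responseStart,
-};
-console.log(timings);
-```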
-
-{/* Screenshot of the viz */}
-
-
-
-
-#### Metrics
-
-{/* What info it includes */}
-The **Metrics** visualization gives you insight into the performance of the web page visited in
-the step and what a user would experience when going through the current step.
-Metrics include:
-
-* **First contentful paint (FCP)** focuses on the initial rendering and measures the time from
- when the page starts loading to when any part of the page's content is displayed on the screen.
-
-* **Largest contentful paint (LCP)** measures loading performance. To provide a good user experience,
- LCP should occur within 2.5 seconds of when the page first starts loading.
-
-* **Cumulative layout shift (CLS)** measures visual stability. To provide a good user experience,
- pages should maintain a CLS of less than 0.1.
-
-* **`DOMContentLoaded` event (DCL)** is triggered when the browser completes parsing the document.
-  It's helpful when there are multiple listeners or when logic is executed on this event, and is measured as
-  `domContentLoadedEventEnd - domContentLoadedEventStart` (see the sketch below).
-
-* **Transfer size** represents the size of the fetched resource. The size includes the response header
- fields plus the response payload body.
-
-
-
-Largest contentful paint and Cumulative layout shift are part of Google's
-[Core Web Vitals](https://web.dev/vitals/), an initiative that introduces a set of metrics
-that help categorize good and bad sites by quantifying the real-world user experience.
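-
-If you want to reproduce the DCL measurement mentioned above on your own page, the Navigation Timing API exposes the underlying timestamps. A minimal, browser-side sketch for illustration:
-
-```js
-// DCL as reported by the Navigation Timing API.
-const [nav] = performance.getEntriesByType('navigation');
-const dcl = nav.domContentLoadedEventEnd - nav.domContentLoadedEventStart;
-console.log(`DOMContentLoaded handlers took ${dcl.toFixed(1)} ms`);
-```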
-
-
-
-Next to each metric, there's an icon that indicates whether the value is
-higher (),
-lower (),
-or the same ()
-compared to all runs over the last 24 hours.
-Hover over the icon to see more details in a tooltip.
-
-{/* Screenshot of the viz */}
-
-
-
-
-#### Object weight and count
-
-{/* What info it includes */}
-The **Object weight** visualization shows the cumulative size of downloaded resources by type,
-and **Object count** shows the number of individual resources by type.
-
-{/* What you might do with this info */}
-This provides a different kind of analysis.
-For example, you might have a large number of JavaScript files,
-each of which will need a separate download, but they may be collectively small.
-This could help you identify an opportunity to improve efficiency by combining multiple files into one.
-
-{/* Screenshot of the viz */}
-
-
-
-
-#### Network requests
-
-{/* What info it includes */}
-The **Network requests** visualization is a waterfall chart that shows every request
-the page made when the step was executed.
-Each line in the chart represents an HTTP network request and helps you quickly identify
-what resources are taking the longest to load and in what order they are loading.
-
-The colored bars within each line indicate the time spent per resource.
-Each color represents a different part of that resource's loading process
-(as defined in the Timing section above) and
-includes the time spent downloading content for specific
-Multipurpose Internet Mail Extensions (MIME) types:
-HTML, JS, CSS, Media, Font, XHR, and Other.
-
-Understanding each phase of a request can help you improve your site's speed by
-reducing the time spent in each phase.
-
-{/* Screenshot of the viz */}
-
-
-Without leaving the waterfall chart, you can view data points relating to each resource:
-resource details, request headers, response headers, and certificate headers.
-On the waterfall chart, select a resource name, or any part of each row,
-to display the resource details overlay.
-
-For additional analysis, whether to check the content of a CSS file or to view a specific image,
-click the icon located beside each resource
-to view its content in a new tab.
-
-You can also navigate between steps and checks at the top of the page to
-view the corresponding waterfall charts.
-
-{/* [discrete] */}
-{/* */}
-{/* = Anomalies */}
-
-{/* [discrete] */}
-{/* */}
-{/* = Alerts */}
diff --git a/docs/en/serverless/synthetics/synthetics-command-reference.mdx b/docs/en/serverless/synthetics/synthetics-command-reference.mdx
deleted file mode 100644
index 736d7edb16..0000000000
--- a/docs/en/serverless/synthetics/synthetics-command-reference.mdx
+++ /dev/null
@@ -1,393 +0,0 @@
----
-slug: /serverless/observability/synthetics-command-reference
-title: Use the Synthetics CLI
-# description: Description to be written
-tags: []
----
-
-
-
-
-
-
-
-## `@elastic/synthetics`
-
-Elastic uses the [@elastic/synthetics](https://www.npmjs.com/package/@elastic/synthetics)
-library to run synthetic browser tests and report the test results.
-The library also provides a CLI to help you scaffold, develop/run tests locally, and push tests to Elastic.
-
-```sh
-npx @elastic/synthetics [options] [files] [dir]
-```
-
-You will not need to use most command line flags.
-However, there are some you may find useful:
-
-
- `--match `
-
- Run tests with a name or tags that match the given glob pattern.
-
-
- `--tags Array`
-
-  Run tests whose tags match the given glob pattern.
-
-
- `--pattern `
-
- RegExp pattern to match journey files in the current working directory. Defaults
- to `/*.journey.(ts|js)$/`, which matches files ending with `.journey.ts` or `.journey.js`.
-
-
- `--params `
-
- JSON object that defines any variables your tests require.
- Read more in Work with params and secrets.
-
- Params passed will be merged with params defined in your
- `synthetics.config.js` file.
- Params defined via the CLI take precedence.
-
-
- `--playwright-options `
-
- JSON object to pass in custom Playwright options for the agent.
-  For more details on relevant Playwright options, refer to
-  the configuration docs.
-
- Options passed will be merged with Playwright options defined in your
- `synthetics.config.js` file.
- Options defined via the CLI take precedence.
-
-
- `--screenshots `
-
- Control whether or not to capture screenshots at the end of each step.
- Options include `'on'`, `'off'`, or `'only-on-failure'`.
-
- This can also be set in the configuration file using
- `monitor.screenshot`.
- The value defined via the CLI will take precedence.
-
-
- `-c, --config `
-
-  Path to the configuration file. By default, the test runner looks for a
-  `synthetics.config.(js|ts)` file in the current directory. The Synthetics
-  configuration provides options to configure how your tests are run and pushed to
-  Elastic. Allowed options are described in the configuration docs.
-
-
- `--reporter `
-
- One of `json`, `junit`, `buildkite-cli`, or `default`. Use the JUnit or Buildkite
- reporter to provide easily parsed output to CI systems.
-
-
- `--inline`
-
- Instead of reading from a file, `cat` inline scripted journeys and pipe them through `stdin`.
- For example, `cat path/to/file.js | npx @elastic/synthetics --inline`.
-
-
- `--no-throttling`
-
- Does not apply throttling.
-
- Throttling can also be disabled in the configuration file using
- `monitor.throttling`.
- The value defined via the CLI will take precedence.
-
-
-  Network throttling for browser-based monitors is disabled.
-  See this [documentation](https://github.com/elastic/synthetics/blob/main/docs/throttling.md) for more details.
-
-
-
- `--no-headless`
-
- Runs with the browser in headful mode.
-
- This is the same as setting [Playwright's `headless` option](https://playwright.dev/docs/api/class-testoptions#test-options-headless) to `false` by running `--playwright-options '{"headless": false}'`.
-
-
- Headful mode should only be used locally to see the browser and interact with DOM elements directly for testing purposes. Do not attempt to run in headful mode when running through Elastic's global managed testing infrastructure or ((private-location))s as this is not supported.
-
-
-
- `-h, --help`
-
- Shows help for the `npx @elastic/synthetics` command.
-
-
-
-
-
- The `--pattern`, `--tags`, and `--match` flags for filtering are only supported when you
- run synthetic tests locally or push them to Elastic. Filtering is _not_ supported in any other subcommands
- like `init` and `locations`.
-
-
-
- For debugging synthetic tests locally, you can set an environment variable,
- `DEBUG=synthetics npx @elastic/synthetics`, to capture Synthetics agent logs.
-
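-
-Several of the flags above have configuration-file equivalents. A minimal `synthetics.config.js` sketch showing where those settings live; the values are placeholders, not defaults:
-
-```js
-// synthetics.config.js: illustrative only; adjust values for your project.
-export default () => ({
-  params: { homepage: 'https://example.com' },     // merged with (and overridden by) --params
-  playwrightOptions: { ignoreHTTPSErrors: false }, // merged with (and overridden by) --playwright-options
-  monitor: {
-    screenshot: 'only-on-failure',                 // same setting as --screenshots
-    throttling: false,                             // assumed way to disable throttling (see --no-throttling)
-  },
-});
-```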
-
-
-
-## `@elastic/synthetics init`
-
-Scaffold a new Synthetics project using Elastic Synthetics.
-
-This will create a template Node.js project that includes the synthetics agent, required dependencies,
-a synthetics configuration file, and example browser and lightweight monitor files.
-These files can be edited and then pushed to Elastic to create monitors.
-
-```sh
-npx @elastic/synthetics init
-```
-
-Read more about what's included in a template Synthetics project in Create a Synthetics project.
-
-
-
-## `@elastic/synthetics push`
-
-Create monitors by using your local journeys. By default, running the
-`push` command will use the `project` settings field from the `synthetics.config.ts`
-file, which is set up using the `init` command. However, you can override these
-settings using the CLI flags.
-
-```sh
-SYNTHETICS_API_KEY= npx @elastic/synthetics push --url --id
-```
-
-
- The `push` command includes interactive prompts to prevent you from accidentally deleting or duplicating monitors.
- You will see a prompt when:
-
-  * You `push` a project that used to contain one or more monitors but no longer
-    contains some or all of those previously running monitors.
- Select `yes` to delete the monitors associated with the project ID being pushed.
- * You `push` a Synthetics project that's already been pushed using one Synthetics project ID and then try to `push`
- it using a _different_ ID.
- Select `yes` to create duplicates of all monitors in the project.
-    You can set the `DEBUG=synthetics` environment variable to capture the deleted monitors.
-
-
-
-  If the journey contains external NPM packages other than `@elastic/synthetics`,
-  those packages will be bundled along with the journey code when the `push` command is invoked.
-  However, there are some limitations when using external packages:
-
-  * Bundled journeys after compression should not be larger than 1500 kilobytes.
-  * Native Node.js modules will not work as expected due to platform inconsistencies.
-  * Uploading files in journey scripts (via `locator.setInputFiles`) is not supported.
-
-
-
- `--auth `
-
- API key used for authentication. You can also set the API key via the `SYNTHETICS_API_KEY` environment variable.
-
- To create an API key, you must be logged in as a user with
- Editor access.
-
-
- `--id `
-
- A unique id associated with your Synthetics project.
- It will be used for logically grouping monitors.
-
- If you used `init` to create a Synthetics project, this is the `` you specified.
-
- This can also be set in the configuration file using
- `project.id`.
- The value defined via the CLI will take precedence.
-
-
- `--url `
-
- The URL for the Observability project to which you want to upload the monitors.
-
- This can also be set in the configuration file using
- `project.url`.
- The value defined via the CLI will take precedence.
-
-
- `--schedule `
-
- The interval (in minutes) at which the monitor should run.
-
- This can also be set in the configuration file using
- `monitor.schedule`.
- The value defined via the CLI will take precedence.
-
-
- [`--locations Array`](https://github.com/elastic/synthetics/blob/((synthetics_version))/src/locations/public-locations.ts#L28-L37)
-
- Where to deploy the monitor. Monitors can be deployed in multiple locations so that you can detect differences in availability and response times across those locations.
-
- To list available locations, refer to `@elastic/synthetics locations`.
-
-  This can also be set in the configuration file using
-  `monitor.locations`.
- The value defined via the CLI will take precedence.
-
-
- `--private-locations Array`
-
- The ((private-location))s to which the monitors will be deployed. These ((private-location))s refer to locations hosted and managed by you, whereas
- `locations` are hosted by Elastic. You can specify a ((private-location)) using the location's name.
-
- To list available ((private-location))s, refer to `@elastic/synthetics locations`.
-
-  This can also be set in the configuration file using
-  `monitor.privateLocations`.
- The value defined via the CLI will take precedence.
-
-
- `--fields `
-
- A list of key-value pairs that will be sent with each monitor event.
- The `fields` are appended to ((es)) documents as `labels`,
- and those labels are displayed in ((kib)) in the _Monitor details_ panel in the individual monitor's _Overview_ tab.
-
-  Example: `--fields '{ "foo": "bar", "team": "synthetics" }'`
-
- This can also be set in the configuration file using the `monitor.fields` option.
- The value defined via the CLI will take precedence.
-
-
- `--yes`
-
- The `push` command includes interactive prompts to prevent you from accidentally deleting or duplicating monitors.
- If running the CLI non-interactively, you can override these prompts using the `--yes` option.
- When the `--yes` option is passed to `push`:
-
- * If you `push` a Synthetics project that used to contain one or more monitors but no longer contains any monitors,
- all monitors associated with the Synthetics project ID being pushed will be deleted.
-
- * If you `push` a Synthetics project that's already been pushed using one Synthetics project ID and then try to `push`
- it using a _different_ ID, it will create duplicates of all monitors in the Synthetics project.
-
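-
-When the corresponding flags are not passed, `push` typically falls back to the `project` and `monitor` sections of the configuration file created by `init`. A hedged sketch of what that might look like (IDs, URLs, and location names are placeholders):
-
-```js
-// synthetics.config.js: illustrative defaults that `push` can fall back to.
-export default () => ({
-  project: {
-    id: 'my-synthetics-project',                          // same as --id
-    url: 'https://my-observability-project.example.com',  // same as --url
-  },
-  monitor: {
-    schedule: 10,                                         // minutes, same as --schedule
-    locations: ['us_east'],                               // same as --locations (placeholder name)
-    privateLocations: ['My private location'],            // same as --private-locations
-    fields: { team: 'synthetics' },                       // same as --fields
-  },
-});
-```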
-
-
-
-## Tag monitors
-
-Synthetics journeys can be tagged with one or more tags. Use tags to
-filter journeys when running tests locally or pushing them to Elastic.
-
-To add tags to a single journey, add the `tags` parameter to the `journey` function or
-use the `monitor.use` method.
-
-```js
-import {journey, monitor} from "@elastic/synthetics";
-journey({name: "example journey", tags: ["env:qa"] }, ({ page }) => {
- monitor.use({
- tags: ["env:qa"]
- })
- // Add steps here
-});
-```
-
-For lightweight monitors, use the `tags` field in the yaml configuration file.
-```yaml
-name: example monitor
-tags:
- - env:qa
-```
-
-To apply tags to all browser and lightweight monitors, configure using the `monitor.tags` field in the `synthetics.config.ts` file.
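-
-For example, a minimal sketch of setting default tags for every monitor pushed from a project (the tag values are placeholders):
-
-```js
-// synthetics.config.js: apply tags to all browser and lightweight monitors in the project.
-export default () => ({
-  monitor: {
-    tags: ['env:qa', 'team:web'],
-  },
-});
-```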
-
-## Filter monitors
-
-When running the `npx @elastic/synthetics push` command, you can filter the monitors that are pushed to Elastic using the following flags:
-
-
- `--tags Array`
-
- Push monitors with the given tags that match the glob pattern.
-
-
- `--match `
-
- Push monitors with a name or tags that match the glob pattern.
-
-
- `--pattern `
-
- RegExp pattern to match the journey files in the current working directory.
- Defaults to `/*.journey.(ts|js)$/` for browser monitors and `/.(yml|yaml)$/` for
- lightweight monitors.
-
-
-
-You can combine these techniques and push the monitors to different projects based on the tags by using multiple configuration files.
-
-```sh
-npx @elastic/synthetics push --config synthetics.qa.config.ts --tags env:qa
-npx @elastic/synthetics push --config synthetics.prod.config.ts --tags env:prod
-```
-
-
-
-## `@elastic/synthetics locations`
-
-List all available locations for running synthetics monitors.
-
-```sh
-npx @elastic/synthetics locations --url --auth
-```
-
-Run `npx @elastic/synthetics locations` with no flags to list all the available global locations managed by Elastic for running synthetics monitors.
-
-To list both locations on Elastic's global managed infrastructure and ((private-location))s, include:
-
-
- `--url `
-
- The URL for the Observability project from which to fetch all available public and ((private-location))s.
-
-
- `--auth `
-
- API key used for authentication.
-
-
-
-{/*
- If an administrator has disabled Elastic managed locations for the role you are assigned
- and you do _not_ include `--url` and `--auth`, all global locations managed by Elastic will be listed.
- However, you will not be able to push to these locations with your API key and will see an error:
- _You don't have permission to use Elastic managed global locations_. For more details, refer to the
- troubleshooting docs.
- */}
-
-## `@elastic/synthetics totp `
-
-Generate a Time-based One-Time Password (TOTP) for multifactor authentication (MFA) in Synthetics.
-
-```sh
-npx @elastic/synthetics totp --issuer --label