diff --git a/solutions/images/observability-help-icon.png b/solutions/images/observability-help-icon.png
deleted file mode 100644
index 49eefac61a..0000000000
Binary files a/solutions/images/observability-help-icon.png and /dev/null differ
diff --git a/solutions/images/observability-help-icon.svg b/solutions/images/observability-help-icon.svg
new file mode 100644
index 0000000000..41c126555f
--- /dev/null
+++ b/solutions/images/observability-help-icon.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/solutions/images/observability-transaction-icon.png b/solutions/images/observability-transaction-icon.png
deleted file mode 100644
index 3534e6915d..0000000000
Binary files a/solutions/images/observability-transaction-icon.png and /dev/null differ
diff --git a/solutions/images/observability-transaction-icon.svg b/solutions/images/observability-transaction-icon.svg
new file mode 100644
index 0000000000..96c3bdddef
--- /dev/null
+++ b/solutions/images/observability-transaction-icon.svg
@@ -0,0 +1,6 @@
+
diff --git a/solutions/observability/apm/api-keys.md b/solutions/observability/apm/api-keys.md
index c27586dac2..7377f6c1ab 100644
--- a/solutions/observability/apm/api-keys.md
+++ b/solutions/observability/apm/api-keys.md
@@ -199,7 +199,6 @@ APM Server provides a command line interface for creating, retrieving, invalidat
| `manage_own_api_key` | Allow APM Server to create, retrieve, and invalidate API keys |
2. Depending on what the **API key role** will be used for, also assign the appropriate `apm` application-level privileges:
-
* To **receive Agent configuration**, assign `config_agent:read`.
* To **ingest agent data**, assign `event:write`.
* To **upload source maps**, assign `sourcemap:write`.
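As an illustration, these privileges can also be granted when creating a key through the {{es}} create API key API. The following is a minimal sketch that creates a key limited to ingesting agent data; the key and role names are placeholders, and you can adjust the `privileges` array to match the list above.

```shell
# Illustrative sketch: create an API key limited to ingesting agent data.
# The key name ("apm-ingest") and role name ("apm_writer") are placeholders.
curl -X POST "localhost:9200/_security/api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "apm-ingest",
    "role_descriptors": {
      "apm_writer": {
        "applications": [
          {
            "application": "apm",
            "privileges": ["event:write"],
            "resources": ["*"]
          }
        ]
      }
    }
  }'
```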
diff --git a/solutions/observability/apm/apm-server-command-reference.md b/solutions/observability/apm/apm-server-command-reference.md
index 4e1ecfd6c3..a7ab50e1dd 100644
--- a/solutions/observability/apm/apm-server-command-reference.md
+++ b/solutions/observability/apm/apm-server-command-reference.md
@@ -75,7 +75,6 @@ apm-server apikey SUBCOMMAND [FLAGS]
| `manage_own_api_key` | Allow APM Server to create, retrieve, and invalidate API keys |
2. Depending on what the **API key role** will be used for, also assign the appropriate `apm` application-level privileges:
-
* To **receive Agent configuration**, assign `config_agent:read`.
* To **ingest agent data**, assign `event:write`.
* To **upload source maps**, assign `sourcemap:write`.
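As a sketch, creating a key for ingestion only might look like the following. The `--name` value is a placeholder, and flag support can vary by version, so confirm the available flags with `apm-server apikey create --help`.

```shell
# Illustrative sketch: flag names are assumptions; verify with --help.
apm-server apikey create --name "apm-ingest" --ingest
```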
diff --git a/solutions/observability/apm/get-started-apm-server-binary.md b/solutions/observability/apm/get-started-apm-server-binary.md
index 5a4d7dc7b9..9e57af02cb 100644
--- a/solutions/observability/apm/get-started-apm-server-binary.md
+++ b/solutions/observability/apm/get-started-apm-server-binary.md
@@ -67,8 +67,7 @@ tar xzvf apm-server-{{apm_server_version}}-darwin-x86_64.tar.gz
$$$apm-installing-on-windows$$$
**Windows:**
-1. Download the APM Server Windows zip file from the
-https://www.elastic.co/downloads/apm/apm-server[downloads page].
+1. Download the APM Server Windows zip file from the [downloads page](https://www.elastic.co/downloads/apm/apm-server).
1. Extract the contents of the zip file into `C:\Program Files`.
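As a sketch, the download and extraction steps can also be scripted in PowerShell. The version in the file name below is a placeholder for the release you downloaded from the downloads page.

```powershell
# Illustrative sketch: replace 9.0.0 with the version you downloaded.
Invoke-WebRequest -Uri "https://artifacts.elastic.co/downloads/apm-server/apm-server-9.0.0-windows-x86_64.zip" -OutFile apm-server.zip
Expand-Archive -Path .\apm-server.zip -DestinationPath 'C:\Program Files'
```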
diff --git a/solutions/observability/apm/get-started-serverless.md b/solutions/observability/apm/get-started-serverless.md
index 45442cf781..fc0d938150 100644
--- a/solutions/observability/apm/get-started-serverless.md
+++ b/solutions/observability/apm/get-started-serverless.md
@@ -182,10 +182,9 @@ To send APM data to Elastic, you must install an APM agent and configure it to s
If you can’t find your distribution, you can install the agent by building it from source. The following instructions build the APM agent using the same Docker environment that Elastic uses to build the official packages.
- ::::{note}
+ ```{note}
The agent is currently only available for the Linux operating system.
-
- ::::
+ ```
1. Download the [agent source](https://github.com/elastic/apm-agent-php/).
2. Execute the following commands to build the agent and install it:
diff --git a/solutions/observability/apm/installation-layout.md b/solutions/observability/apm/installation-layout.md
index 87d7387b6c..5c875b036d 100644
--- a/solutions/observability/apm/installation-layout.md
+++ b/solutions/observability/apm/installation-layout.md
@@ -26,7 +26,7 @@ View the installation layout and default paths for both Fleet-managed APM Server
: Main {{agent}} {{fleet}} encrypted configuration
`/Library/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers [1]
+: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1)
`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
@@ -45,7 +45,7 @@ You can install {{agent}} in a custom base path other than `/Library`. When ins
: Main {{agent}} {{fleet}} encrypted configuration
`/opt/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers [1]
+: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1)
`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
@@ -64,7 +64,7 @@ You can install {{agent}} in a custom base path other than `/opt`. When install
: Main {{agent}} {{fleet}} encrypted configuration
`C:\Program Files\Elastic\Agent\data\elastic-agent-*\logs\elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers [1]
+: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1)
You can install {{agent}} in a custom base path other than `C:\Program Files`. When installing {{agent}} with the `.\elastic-agent.exe install` command, use the `--base-path` CLI option to specify the custom base path.
::::::
@@ -80,7 +80,7 @@ You can install {{agent}} in a custom base path other than `C:\Program Files`.
: Main {{agent}} {{fleet}} encrypted configuration
`/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers [1]
+: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1)
`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
@@ -97,7 +97,7 @@ You can install {{agent}} in a custom base path other than `C:\Program Files`.
: Main {{agent}} {{fleet}} encrypted configuration
`/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers [1]
+: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1)
`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
@@ -150,3 +150,6 @@ For the deb and rpm distributions, these paths are set in the init script or in
::::::
:::::::
+
+$$$footnote-1$$$
+¹ Log file names end with a date (`YYYYMMDD`) and an optional number: `elastic-agent-YYYYMMDD.ndjson`, `elastic-agent-YYYYMMDD-1.ndjson`, and so on, as new files are created during rotation.
\ No newline at end of file
diff --git a/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md b/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md
index 4240e5559d..0af998c041 100644
--- a/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md
+++ b/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md
@@ -53,4 +53,4 @@ Go to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs
:alt: scale APM
:::
-Congratulations -- you now have the latest and greatest in Elastic APM!
+Congratulations — you now have the latest and greatest in Elastic APM!
diff --git a/solutions/observability/apm/trace-sample-timeline.md b/solutions/observability/apm/trace-sample-timeline.md
index b70c9d2bee..2e761247b6 100644
--- a/solutions/observability/apm/trace-sample-timeline.md
+++ b/solutions/observability/apm/trace-sample-timeline.md
@@ -55,7 +55,7 @@ As application architectures are shifting from monolithic to more distributed, s
:screenshot:
:::
-Don’t forget; by definition, a distributed trace includes more than one transaction. When viewing distributed traces in the timeline waterfall, you’ll see this icon: , which indicates the next transaction in the trace. For easier problem isolation, transactions can be collapsed in the waterfall by clicking the icon to the left of the transactions. Transactions can also be expanded and viewed in detail by clicking on them.
+Don’t forget: by definition, a distributed trace includes more than one transaction. When viewing distributed traces in the timeline waterfall, you’ll see this icon: , which indicates the next transaction in the trace. For easier problem isolation, transactions can be collapsed in the waterfall by clicking the icon to the left of them. Transactions can also be expanded and viewed in detail by clicking on them.
After exploring these traces, you can return to the full trace by clicking **View full trace**.
diff --git a/solutions/observability/apm/transaction-sampling.md b/solutions/observability/apm/transaction-sampling.md
index a6d43e144f..4fa1d929ea 100644
--- a/solutions/observability/apm/transaction-sampling.md
+++ b/solutions/observability/apm/transaction-sampling.md
@@ -195,7 +195,7 @@ stack:
serverless:
```
-A sampled trace retains all data associated with it. A non-sampled trace drops all [span](/solutions/observability/apm/spans.md) and [transaction](/solutions/observability/apm/transactions.md) data1. Regardless of the sampling decision, all traces retain [error](/solutions/observability/apm/errors.md) data.
+A sampled trace retains all data associated with it. A non-sampled trace drops all [span](/solutions/observability/apm/spans.md) and [transaction](/solutions/observability/apm/transactions.md) data.[¹](#footnote-1) Regardless of the sampling decision, all traces retain [error](/solutions/observability/apm/errors.md) data.
Some visualizations in the {{apm-app}}, like latency, are powered by aggregated transaction and span [metrics](/solutions/observability/apm/metrics.md). The way these metrics are calculated depends on the sampling method used:
@@ -207,7 +207,7 @@ For all sampling methods, metrics are weighted by the inverse sampling rate of t
These calculation methods ensure that the APM app provides the most accurate metrics possible given the sampling strategy in use, while also accounting for the head-based sampling rate to estimate the full population of traces. For example, with a head-based sample rate of `0.1`, each sampled transaction is weighted by a factor of 10 (the inverse of the sample rate) when estimating metrics like throughput.
-1 Real User Monitoring (RUM) traces are an exception to this rule. The {{kib}} apps that utilize RUM data depend on transaction events, so non-sampled RUM traces retain transaction data — only span data is dropped.
+¹ $$$footnote-1$$$ Real User Monitoring (RUM) traces are an exception to this rule. The {{kib}} apps that utilize RUM data depend on transaction events, so non-sampled RUM traces retain transaction data — only span data is dropped.
## Sample rates [_sample_rates]
diff --git a/solutions/observability/applications/tutorial-monitor-java-application.md b/solutions/observability/applications/tutorial-monitor-java-application.md
index c96d55708f..4724463958 100644
--- a/solutions/observability/applications/tutorial-monitor-java-application.md
+++ b/solutions/observability/applications/tutorial-monitor-java-application.md
@@ -408,7 +408,9 @@ PS > cd 'C:\Program Files\Filebeat'
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
```
-NOTE: If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
+```{note}
+If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
+```
::::::
:::::::
@@ -1134,7 +1136,9 @@ tar xzvf metricbeat-{{stack-version}}-linux-x86_64.tar.gz
PS C:\Program Files\Metricbeat> .\install-service-metricbeat.ps1
```
-NOTE: If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-metricbeat.ps1`.
+```{note}
+If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-metricbeat.ps1`.
+```
::::::
:::::::
@@ -1760,7 +1764,9 @@ tar xzvf heartbeat-{{stack-version}}-linux-x86_64.tar.gz
PS C:\Program Files\Heartbeat> .\install-service-heartbeat.ps1
```
-NOTE: If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-heartbeat.ps1`.
+```{note}
+If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-heartbeat.ps1`.
+```
::::::
:::::::
diff --git a/solutions/observability/applications/user-experience.md b/solutions/observability/applications/user-experience.md
index 5e042b94c0..18f9e7bd8a 100644
--- a/solutions/observability/applications/user-experience.md
+++ b/solutions/observability/applications/user-experience.md
@@ -51,13 +51,13 @@ You won’t be able to fix any problems from viewing these metrics alone, but yo
::::{dropdown} Metric reference
First contentful paint
-: Focuses on the initial rendering and measures the time from when the page starts loading to when any part of the page’s content is displayed on the screen. The agent uses the [Paint timing API](https://www.w3.org/TR/paint-timing/#first-contentful-paint) available in the browser to capture the timing information. [2](https://developer.mozilla.org/en-US/docs/Glossary/First_contentful_paint)]
+: Focuses on the initial rendering and measures the time from when the page starts loading to when any part of the page’s content is displayed on the screen. The agent uses the [Paint timing API](https://www.w3.org/TR/paint-timing/#first-contentful-paint) available in the browser to capture the timing information.[¹](#footnote-1)
Total blocking time
-: The sum of the blocking time (duration above 50 ms) for each long task that occurs between the First contentful paint and the time when the transaction is completed. Total blocking time is a great companion metric for [Time to interactive](https://web.dev/tti/) (TTI) which is lab metric and not available in the field through browser APIs. The agent captures TBT based on the number of long tasks that occurred during the page load lifecycle. [3](https://web.dev/tbt/)]
+: The sum of the blocking time (duration above 50 ms) for each long task that occurs between the First contentful paint and the time when the transaction is completed. Total blocking time is a great companion metric for [Time to interactive](https://web.dev/tti/) (TTI), which is a lab metric and not available in the field through browser APIs. The agent captures TBT based on the number of long tasks that occurred during the page load lifecycle.[²](#footnote-2)
`Long Tasks`
-: A long task is any user activity or browser task that monopolize the UI thread for extended periods (greater than 50 milliseconds) and block other critical tasks (frame rate or input latency) from being executed. [4](https://developer.mozilla.org/en-US/docs/Web/API/Long_Tasks_API)]
+: A long task is any user activity or browser task that monopolizes the UI thread for extended periods (greater than 50 milliseconds) and blocks other critical tasks (frame rate or input latency) from being executed.[³](#footnote-3)
Number of long tasks
: The number of long tasks.
@@ -77,10 +77,10 @@ These metrics tell an important story about how users experience your website. B
[Core Web Vitals](https://web.dev/vitals/) is a recent initiative from Google to introduce a new set of metrics that better categorize good and bad sites by quantifying the real-world user experience. This is done by looking at three key metrics: loading performance, visual stability, and interactivity:
Largest contentful paint (LCP)
-: Loading performance. LCP is the timestamp when the main content of a page has likely loaded. To users, this is the *perceived* loading speed of your site. To provide a good user experience, Google recommends an LCP of fewer than 2.5 seconds. [5](https://web.dev/lcp/)]
+: Loading performance. LCP is the timestamp when the main content of a page has likely loaded. To users, this is the *perceived* loading speed of your site. To provide a good user experience, Google recommends an LCP of fewer than 2.5 seconds.[⁴](#footnote-4)
Interaction to next paint (INP)
-: Responsiveness to user interactions. The INP value comes from measuring the latency of all click, tap, and keyboard interactions that happen throughout a single page visit and choosing the longest interaction observed. To provide a good user experience, Google recommends an INP of fewer than 200 milliseconds. [6](https://web.dev/articles/inp)]
+: Responsiveness to user interactions. The INP value comes from measuring the latency of all click, tap, and keyboard interactions that happen throughout a single page visit and choosing the longest interaction observed. To provide a good user experience, Google recommends an INP of fewer than 200 milliseconds.[⁵](#footnote-5)
::::{note}
Previous {{kib}} versions included the metric [First input delay (FID)](https://web.dev/fid/) in the User Experience app. Starting with version 8.12, FID was replaced with *Interaction to next paint (INP)*. The APM RUM agent started collecting INP data in version 5.16.0. If you use an earlier version of the RUM agent with {{kib}} version 8.12 or later, it will *not* capture INP data and there will be *no data* to show in the User Experience app:
@@ -96,10 +96,10 @@ RUM agent version ≥ 5.16.0 will continue to collect FID metrics so, while FID
::::
Cumulative layout shift (CLS)
-: Visual stability. Is content moving around because of `async` resource loading or dynamic content additions? CLS measures these frustrating unexpected layout shifts. To provide a good user experience, Google recommends a CLS score of less than `.1`. [7](https://web.dev/cls/)]
+: Visual stability. Is content moving around because of `async` resource loading or dynamic content additions? CLS measures these frustrating unexpected layout shifts. To provide a good user experience, Google recommends a CLS score of less than `.1`.[⁶](#footnote-6)
::::{tip}
-[Beginning in May 2021](https://webmasters.googleblog.com/2020/11/timing-for-page-experience.md), Google will start using Core Web Vitals as part of their ranking algorithm and will open up the opportunity for websites to rank in the "top stories" position without needing to leverage [AMP](https://amp.dev/). [8](https://webmasters.googleblog.com/2020/05/evaluating-page-experience.md)]
+[Beginning in May 2021](https://webmasters.googleblog.com/2020/11/timing-for-page-experience.md), Google will start using Core Web Vitals as part of their ranking algorithm and will open up the opportunity for websites to rank in the "top stories" position without needing to leverage [AMP](https://amp.dev/).[⁷](#footnote-7)
::::
### Load/view distribution [user-experience-distribution]
@@ -130,3 +130,10 @@ Have a question? Want to leave feedback? Visit the [{{user-experience}} discussi
#### References [user-experience-references]
+¹ $$$footnote-1$$$ More information: [developer.mozilla.org](https://developer.mozilla.org/en-US/docs/Glossary/First_contentful_paint)
+² $$$footnote-2$$$ More information: [web.dev](https://web.dev/tbt/)
+³ $$$footnote-3$$$ More information: [developer.mozilla.org](https://developer.mozilla.org/en-US/docs/Web/API/Long_Tasks_API)
+⁴ $$$footnote-4$$$ Source: [web.dev](https://web.dev/lcp/)
+⁵ $$$footnote-5$$$ Source: [web.dev](https://web.dev/articles/inp)
+⁶ $$$footnote-6$$$ Source: [web.dev](https://web.dev/cls/)
+⁷ $$$footnote-7$$$ Source: [webmasters.googleblog.com](https://webmasters.googleblog.com/2020/05/evaluating-page-experience.md)
\ No newline at end of file
diff --git a/solutions/observability/cloud/gcp-dataflow-templates.md b/solutions/observability/cloud/gcp-dataflow-templates.md
index 955bac61cf..d9cf094226 100644
--- a/solutions/observability/cloud/gcp-dataflow-templates.md
+++ b/solutions/observability/cloud/gcp-dataflow-templates.md
@@ -49,7 +49,7 @@ To find the Cloud ID of your [deployment](https://cloud.elastic.co/deployments),

-Use [{{kib}}](../../../deploy-manage/api-keys/elasticsearch-api-keys.md#create-api-key) to create a Base64-encoded API key to authenticate on your deployment.
+Use [{{kib}}](/deploy-manage/api-keys/elasticsearch-api-keys.md#create-api-key) to create a Base64-encoded API key to authenticate on your deployment.
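If you prefer to script this step, a minimal sketch using the {{es}} create API key API follows; the key name is a placeholder, and the `encoded` field in the response is the Base64 value to use.

```shell
# Illustrative sketch: the "encoded" field of the response is the
# Base64-encoded API key. The key name is a placeholder.
curl -X POST "localhost:9200/_security/api_key" \
  -H "Content-Type: application/json" \
  -d '{ "name": "gcp-dataflow-ingest" }'
```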
::::{important}
You can optionally restrict the privileges of your API key; otherwise, they’ll be a point-in-time snapshot of the permissions of the authenticated user. For this tutorial, the data is written to the `logs-gcp.audit-default` data stream.
diff --git a/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md b/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md
index 716ee6bd4b..ed3ff317fb 100644
--- a/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md
+++ b/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md
@@ -60,7 +60,7 @@ For more information {{agent}} and integrations, refer to the [{{fleet}} and {{a
:::::
-{{agent}} is currently the preferred way to add EC2 metrics. For other ways, refer to [Adding data to {{es}}](../../../manage-data/ingest.md).
+{{agent}} is currently the preferred way to add EC2 metrics. For other ways, refer to [Adding data to {{es}}](/manage-data/ingest.md).
## Dashboards [dashboard-ec2]
diff --git a/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md b/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md
index 04b51302f4..0c8e9b916c 100644
--- a/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md
+++ b/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md
@@ -62,7 +62,7 @@ For more information {{agent}} and integrations, refer to the [{{fleet}} and {{a
:::::
-{{agent}} is currently the preferred way to add Kinesis data stream metrics. For other ways, refer to [Adding data to {{es}}](../../../manage-data/ingest.md).
+{{agent}} is currently the preferred way to add Kinesis data stream metrics. For other ways, refer to [Adding data to {{es}}](/manage-data/ingest.md).
## Dashboards [dashboard-kinesis]
diff --git a/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md b/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md
index 790a758004..02d9cc200f 100644
--- a/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md
+++ b/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md
@@ -58,7 +58,7 @@ For more information {{agent}} and integrations, refer to the [{{fleet}} and {{a
:::::
-{{agent}} is currently the preferred way to add SQS metrics. For other ways, refer to [Adding data to {{es}}](../../../manage-data/ingest.md).
+{{agent}} is currently the preferred way to add SQS metrics. For other ways, refer to [Adding data to {{es}}](/manage-data/ingest.md).
## Dashboards [dashboard-sqs]
diff --git a/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md b/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md
index f6bc070af9..216b25c54b 100644
--- a/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md
+++ b/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md
@@ -61,7 +61,7 @@ For more information {{agent}} and integrations, refer to the [{{fleet}} and {{a
:::::
-{{agent}} is currently the preferred way to add S3 metrics. For other ways, refer to [Adding data to {{es}}](../../../manage-data/ingest.md).
+{{agent}} is currently the preferred way to add S3 metrics. For other ways, refer to [Adding data to {{es}}](/manage-data/ingest.md).
## Dashboards [dashboard-s3]
diff --git a/solutions/observability/cloud/monitor-amazon-web-services-aws-with-amazon-data-firehose.md b/solutions/observability/cloud/monitor-amazon-web-services-aws-with-amazon-data-firehose.md
index 7365d266ea..73c3235358 100644
--- a/solutions/observability/cloud/monitor-amazon-web-services-aws-with-amazon-data-firehose.md
+++ b/solutions/observability/cloud/monitor-amazon-web-services-aws-with-amazon-data-firehose.md
@@ -53,7 +53,7 @@ For advanced use cases, source records can be transformed by invoking a custom L
From the **Destination settings** panel, specify the following settings:
* **Elastic endpoint URL**: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the {{ecloud}} Console and select **Connection details**. Make sure the endpoint is in the following format: `https://.es...elastic-cloud.com`.
-* **API key**: Enter the encoded Elastic API key. This can be created in Kibana by following the instructions under [API Keys](../../../deploy-manage/api-keys.md). If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least "auto_configure" & "write" permissions for the indices you will be using with this delivery stream.
+* **API key**: Enter the encoded Elastic API key. This can be created in Kibana by following the instructions under [API Keys](/deploy-manage/api-keys.md). If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least `auto_configure` and `write` permissions for the indices you will be using with this delivery stream.
* **Content encoding**: To reduce the data transfer costs, use GZIP encoding.
* **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration between 60 and 300 seconds should be suitable for most use cases.
* **Parameters**:
diff --git a/solutions/observability/cloud/monitor-amazon-web-services-aws-with-beats.md b/solutions/observability/cloud/monitor-amazon-web-services-aws-with-beats.md
index 2afe6b11e9..ab1af1ffb5 100644
--- a/solutions/observability/cloud/monitor-amazon-web-services-aws-with-beats.md
+++ b/solutions/observability/cloud/monitor-amazon-web-services-aws-with-beats.md
@@ -169,7 +169,9 @@ tar xzvf filebeat-{{stack-version}}-linux-x86_64.tar.gz
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
```
-NOTE: If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
+```{note}
+If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
+```
::::::
:::::::
@@ -464,7 +466,9 @@ tar xzvf metricbeat-{{stack-version}}-linux-x86_64.tar.gz
PS C:\Program Files\Metricbeat> .\install-service-metricbeat.ps1
```
-NOTE: If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-metricbeat.ps1`.
+```{note}
+If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-metricbeat.ps1`.
+```
::::::
:::::::
diff --git a/solutions/observability/cloud/monitor-google-cloud-platform-gcp.md b/solutions/observability/cloud/monitor-google-cloud-platform-gcp.md
index 2649d0b064..d95fd1bb20 100644
--- a/solutions/observability/cloud/monitor-google-cloud-platform-gcp.md
+++ b/solutions/observability/cloud/monitor-google-cloud-platform-gcp.md
@@ -163,7 +163,9 @@ tar xzvf metricbeat-{{stack-version}}-linux-x86_64.tar.gz
PS C:\Program Files\Metricbeat> .\install-service-metricbeat.ps1
```
-NOTE: If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-metricbeat.ps1`.
+```{note}
+If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-metricbeat.ps1`.
+```
::::::
:::::::
@@ -361,7 +363,9 @@ tar xzvf filebeat-{{stack-version}}-linux-x86_64.tar.gz
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
```
-NOTE: If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
+```{note}
+If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
+```
::::::
:::::::
diff --git a/solutions/observability/cloud/monitor-microsoft-azure-openai.md b/solutions/observability/cloud/monitor-microsoft-azure-openai.md
index dbe4ce9e7e..fe1e674f20 100644
--- a/solutions/observability/cloud/monitor-microsoft-azure-openai.md
+++ b/solutions/observability/cloud/monitor-microsoft-azure-openai.md
@@ -260,7 +260,7 @@ From here, you’ll find visualizations of important metrics for your Azure Open

-For more on dashboards and visualization, refer to the [Dashboards and visualizations](../../../explore-analyze/dashboards.md) documentation.
+For more on dashboards and visualization, refer to the [Dashboards and visualizations](/explore-analyze/dashboards.md) documentation.
### View logs and metrics with Discover [azure-openai-discover]
@@ -276,7 +276,7 @@ From here, filter your data and dive deeper into individual logs to find informa
:screenshot:
:::
-For more on using Discover and creating data views, refer to the [Discover](../../../explore-analyze/discover.md) documentation.
+For more on using Discover and creating data views, refer to the [Discover](/explore-analyze/discover.md) documentation.
## Step 6: Monitor Microsoft Azure OpenAI APM with OpenTelemetry [azure-openai-apm]
@@ -451,7 +451,7 @@ After ingesting your data, you can filter and explore it using Discover in {{kib
:screenshot:
:::
-Then, use these fields to create visualizations and build dashboards. Refer to the [Dashboard and visualizations](../../../explore-analyze/dashboards.md) documentation for more information.
+Then, use these fields to create visualizations and build dashboards. Refer to the [Dashboard and visualizations](/explore-analyze/dashboards.md) documentation for more information.
:::{image} /solutions/images/observability-azure-openai-apm-dashboard.png
:alt: screenshot of the Azure OpenAI APM dashboard
@@ -465,4 +465,4 @@ Now that you know how to find and visualize your Azure OpenAI logs and metrics,
* **Alerts**: Create threshold rules to notify you when your metrics or logs reach or exceed a specified value: Refer to [Metric threshold](../incident-management/create-metric-threshold-rule.md) and [Log threshold](../incident-management/create-log-threshold-rule.md) for more on setting up alerts.
* **SLOs**: Set measurable targets for your Azure OpenAI service performance based on your metrics. Once defined, you can monitor your SLOs with dashboards and alerts and track their progress against your targets over time. Refer to [Service-level objectives (SLOs)](../incident-management/service-level-objectives-slos.md) for more on setting up and tracking SLOs.
-* **Machine learning (ML) jobs**: Set up ML jobs to find anomalous events and patterns in your Azure OpenAI data. Refer to [Finding anomalies](../../../explore-analyze/machine-learning/anomaly-detection/ml-ad-finding-anomalies.md) for more on setting up ML jobs.
\ No newline at end of file
+* **Machine learning (ML) jobs**: Set up ML jobs to find anomalous events and patterns in your Azure OpenAI data. Refer to [Finding anomalies](/explore-analyze/machine-learning/anomaly-detection/ml-ad-finding-anomalies.md) for more on setting up ML jobs.
\ No newline at end of file
diff --git a/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md b/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md
index 9b5eef3c4c..1e6d7117c9 100644
--- a/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md
+++ b/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md
@@ -8,7 +8,7 @@ applies_to:
# Monitor Microsoft Azure with the Azure Native ISV Service [monitor-azure-native]
::::{note}
-The {{ecloud}} Azure Native ISV Service allows you to deploy managed instances of the {{stack}} directly in Azure, through the Azure integrated marketplace. The service includes native capabilities for consolidating Azure logs and metrics in Elastic. For more information, refer to [Azure Native ISV Service](../../../deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md).
+The {{ecloud}} Azure Native ISV Service allows you to deploy managed instances of the {{stack}} directly in Azure, through the Azure integrated marketplace. The service includes native capabilities for consolidating Azure logs and metrics in Elastic. For more information, refer to [Azure Native ISV Service](/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md).
**Using {{agent}} to monitor Azure?** Refer to [Monitor Microsoft Azure with {{agent}}](monitor-microsoft-azure-with-elastic-agent.md).
diff --git a/solutions/observability/cloud/monitor-microsoft-azure-with-beats.md b/solutions/observability/cloud/monitor-microsoft-azure-with-beats.md
index 10d2828f7f..cb32d25550 100644
--- a/solutions/observability/cloud/monitor-microsoft-azure-with-beats.md
+++ b/solutions/observability/cloud/monitor-microsoft-azure-with-beats.md
@@ -230,7 +230,7 @@ tar xzvf metricbeat-{{stack-version}}-linux-x86_64.tar.gz
2. Extract the contents of the zip file into `C:\Program Files`.
-3. Rename the `metricbeat-{{stack-version}}-windows-x86_64` directory to `Metricbeat`.
+3. Rename the _metricbeat-{{stack-version}}-windows-x86\_64_ directory to _Metricbeat_.
4. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select *Run As Administrator*).
@@ -241,7 +241,9 @@ tar xzvf metricbeat-{{stack-version}}-linux-x86_64.tar.gz
PS C:\Program Files\Metricbeat> .\install-service-metricbeat.ps1
```
-NOTE: If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-metricbeat.ps1`.
+```{note}
+If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-metricbeat.ps1`.
+```
::::::
:::::::
diff --git a/solutions/observability/cloud/monitor-virtual-private-cloud-vpc-flow-logs.md b/solutions/observability/cloud/monitor-virtual-private-cloud-vpc-flow-logs.md
index bb49f32ea5..fb88507679 100644
--- a/solutions/observability/cloud/monitor-virtual-private-cloud-vpc-flow-logs.md
+++ b/solutions/observability/cloud/monitor-virtual-private-cloud-vpc-flow-logs.md
@@ -36,7 +36,7 @@ Make sure the deployment is on AWS, because the Firehose delivery stream connect
## Use Elastic Analytics Discover to manually analyze data [aws-firehose-discover]
-In Elastic Analytics, you can search and filter your data, get information about the structure of the fields, and display your findings in a visualization. You can also customize and save your searches and place them on a dashboard. For more information, check the [Discover](../../../explore-analyze/discover.md) documentation.
+In Elastic Analytics, you can search and filter your data, get information about the structure of the fields, and display your findings in a visualization. You can also customize and save your searches and place them on a dashboard. For more information, check the [Discover](/explore-analyze/discover.md) documentation.
For example, for your VPC flow logs you want to know:
@@ -77,7 +77,7 @@ If you select the destination port field, the pop-up shows that port `8081` is b
## Use Machine Learning to detect anomalies [aws-firehose-ml]
-Elastic Observability provides the ability to detect anomalies on logs using Machine Learning (ML). To learn more about how to use the ML analysis with your logs, check the [Machine learning](../../../explore-analyze/machine-learning/machine-learning-in-kibana.md) documentation. You can select the following options:
+Elastic Observability provides the ability to detect anomalies on logs using Machine Learning (ML). To learn more about how to use the ML analysis with your logs, check the [Machine learning](/explore-analyze/machine-learning/machine-learning-in-kibana.md) documentation. You can select the following options:
* Log rate: Automatically detects anomalous log entry rates
* Categorization: Automatically categorizes log messages
diff --git a/solutions/observability/data-set-quality-monitoring.md b/solutions/observability/data-set-quality-monitoring.md
index 9f776c7be0..c0f5f882ee 100644
--- a/solutions/observability/data-set-quality-monitoring.md
+++ b/solutions/observability/data-set-quality-monitoring.md
@@ -17,7 +17,7 @@ To open **Data Set Quality**, find **Stack Management** in the main menu or use
::::{admonition} Requirements
:class: note
-Users with the `viewer` role can view the Data Sets Quality summary. To view the Active Data Sets and Estimated Data summaries, users need the `monitor` [index privilege](../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices) for the `logs-*-*` index.
+Users with the `viewer` role can view the Data Sets Quality summary. To view the Active Data Sets and Estimated Data summaries, users need the `monitor` [index privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices) for the `logs-*-*` index.
::::
diff --git a/solutions/observability/get-started.md b/solutions/observability/get-started.md
index 6742c26583..f3082669a9 100644
--- a/solutions/observability/get-started.md
+++ b/solutions/observability/get-started.md
@@ -19,9 +19,9 @@ New to Elastic {{observability}}? Discover more about our observability features
Learn about key features available to help you get value from your observability data:
-* [What is Elastic {{observability}}?](../../solutions/observability/get-started/what-is-elastic-observability.md)
+* [What is Elastic {{observability}}?](/solutions/observability/get-started/what-is-elastic-observability.md)
* [What’s new in Elastic Stack](https://www.elastic.co/guide/en/observability/current/whats-new.html)
-* [{{obs-serverless}} billing dimensions](../../deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md)
+* [{{obs-serverless}} billing dimensions](/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md)
## Get started with your use case [get-started-with-use-case]
@@ -37,7 +37,7 @@ Learn how to spin up a deployment on {{ech}} or create an Observability Serverle
3. **View your data.** Navigate seamlessly between Observability UIs and dashboards to identify and resolve problems quickly.
4. **Customize.** Expand your deployment and add features like alerting and anomaly detection.
-To get started with on serverless, [create an Observability project](../../solutions/observability/get-started/create-an-observability-project.md), then follow one of our [quickstarts](../../solutions/observability/get-started.md#quickstarts-overview) to learn how to ingest and visualize your observability data.
+To get started on serverless, [create an Observability project](/solutions/observability/get-started/create-an-observability-project.md), then follow one of our [quickstarts](/solutions/observability/get-started.md#quickstarts-overview) to learn how to ingest and visualize your observability data.
### Quickstarts [quickstarts-overview]
@@ -50,11 +50,11 @@ Our quickstarts dramatically reduce your time-to-value by offering a fast path t
Follow the steps in these guides to get started quickly:
-* [Quickstart: Monitor hosts with {{agent}}](../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md)
-* [Quickstart: Monitor your Kubernetes cluster with {{agent}}](../../solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md)
-* [Quickstart: Monitor hosts with OpenTelemetry](../../solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md)
-* [Quickstart: Unified Kubernetes Observability with Elastic Distributions of OpenTelemetry (EDOT)](../../solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md)
-* [Quickstart: Collect data with AWS Firehose](../../solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md)
+* [Quickstart: Monitor hosts with {{agent}}](/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md)
+* [Quickstart: Monitor your Kubernetes cluster with {{agent}}](/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md)
+* [Quickstart: Monitor hosts with OpenTelemetry](/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md)
+* [Quickstart: Unified Kubernetes Observability with Elastic Distributions of OpenTelemetry (EDOT)](/solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md)
+* [Quickstart: Collect data with AWS Firehose](/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md)
### Get started with other features [_get_started_with_other_features]
@@ -63,20 +63,20 @@ Want to use {{fleet}} or some other feature not covered in the quickstarts? Foll
% Stateful only for Universal profiling
-* [Get started with system metrics](../../solutions/observability/infra-and-hosts/get-started-with-system-metrics.md)
+* [Get started with system metrics](/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md)
* [Get started with application traces and APM](/solutions/observability/apm/get-started-fleet-managed-apm-server.md)
* [Get started with synthetic monitoring](/solutions/observability/synthetics/index.md)
-* [Get started with Universal Profiling](../../solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md)
+* [Get started with Universal Profiling](/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md)
## Additional guides [_additional_guides]
Ready to dig into more features of Elastic Observability? See these guides:
-* [Create an alert](../../solutions/observability/incident-management/alerting.md)
-* [Create a service-level objective (SLO)](../../solutions/observability/incident-management/create-an-slo.md)
+* [Create an alert](/solutions/observability/incident-management/alerting.md)
+* [Create a service-level objective (SLO)](/solutions/observability/incident-management/create-an-slo.md)
## Related content for Elastic Stack [_related_content]
* [Starting with the {{es}} Platform and its Solutions](/get-started/index.md) for new users
-* [Adding data to {{es}}](../../manage-data/ingest.md) for other ways to ingest data
\ No newline at end of file
+* [Adding data to {{es}}](/manage-data/ingest.md) for other ways to ingest data
\ No newline at end of file
diff --git a/solutions/observability/get-started/create-an-observability-project.md b/solutions/observability/get-started/create-an-observability-project.md
index 9181513d60..a5f24ffd3c 100644
--- a/solutions/observability/get-started/create-an-observability-project.md
+++ b/solutions/observability/get-started/create-an-observability-project.md
@@ -13,7 +13,7 @@ applies_to:
::::{note}
-The **Admin** role or higher is required to create projects. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).
+The **Admin** role or higher is required to create projects. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).
::::
@@ -27,7 +27,7 @@ An {{obs-serverless}} project allows you to run {{obs-serverless}} in an autosca
5. (Optional) Click **Edit settings** to change your project settings:
* **Cloud provider**: The cloud platform where you’ll deploy your project. We currently support Amazon Web Services (AWS).
- * **Region**: The [region](../../../deploy-manage/deploy/elastic-cloud/regions.md) where your project will live.
+ * **Region**: The [region](/deploy-manage/deploy/elastic-cloud/regions.md) where your project will live.
6. Click **Create project**. It takes a few minutes to create your project.
7. When the project is ready, click **Continue**.
diff --git a/solutions/observability/get-started/get-started-with-dashboards.md b/solutions/observability/get-started/get-started-with-dashboards.md
index aa37ec49fc..6a7c8c6191 100644
--- a/solutions/observability/get-started/get-started-with-dashboards.md
+++ b/solutions/observability/get-started/get-started-with-dashboards.md
@@ -31,7 +31,7 @@ To create a new dashboard, click **Create Dashboard** and begin adding visualiza
You can also add other types of panels — such as filters, links, and text — and add controls like time sliders.
-For more information about creating dashboards, refer to [Create your first dashboard](../../../explore-analyze/dashboards/create-dashboard-of-panels-with-web-server-data.md).
+For more information about creating dashboards, refer to [Create your first dashboard](/explore-analyze/dashboards/create-dashboard-of-panels-with-web-server-data.md).
::::{note}
The tutorial about creating your first dashboard is written for {{kib}} users, but the steps for serverless are very similar. To load the sample data in serverless, go to **Project Settings** → **Integrations** in the navigation pane, then search for "sample data".
diff --git a/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md b/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md
index 7f2ced0d9e..55f5fbfc79 100644
--- a/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md
+++ b/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md
@@ -74,12 +74,12 @@ Data collection with AWS Firehose is supported on {{ech}} deployments in AWS, Az
:sync: stack
* An [{{ech}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body) deployment. The deployment includes an {{es}} cluster for storing and searching your data, and {{kib}} for visualizing and managing your data.
-* A user with the `superuser` [built-in role](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) or the privileges required to onboard data.
+* A user with the `superuser` [built-in role](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) or the privileges required to onboard data.
::::{dropdown} Expand to view required privileges
- * [**Cluster**](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-cluster): `['monitor', 'manage_own_api_key']`
- * [**Index**](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices): `{ names: ['logs-*-*', 'metrics-*-*'], privileges: ['auto_configure', 'create_doc'] }`
- * [**Kibana**](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md): `{ spaces: ['*'], feature: { fleet: ['all'], fleetv2: ['all'] } }`
+ * [**Cluster**](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-cluster): `['monitor', 'manage_own_api_key']`
+ * [**Index**](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices): `{ names: ['logs-*-*', 'metrics-*-*'], privileges: ['auto_configure', 'create_doc'] }`
+ * [**Kibana**](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md): `{ spaces: ['*'], feature: { fleet: ['all'], fleetv2: ['all'] } }`
::::
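As a sketch, a role carrying these privileges could be created through the {{kib}} roles API; the role name is a placeholder.

```shell
# Illustrative sketch: create a role with the privileges listed above.
# The role name ("firehose-onboarding") is a placeholder.
curl -X PUT "localhost:5601/api/security/role/firehose-onboarding" \
  -H "Content-Type: application/json" -H "kbn-xsrf: true" \
  -d '{
    "elasticsearch": {
      "cluster": ["monitor", "manage_own_api_key"],
      "indices": [
        { "names": ["logs-*-*", "metrics-*-*"], "privileges": ["auto_configure", "create_doc"] }
      ]
    },
    "kibana": [
      { "spaces": ["*"], "feature": { "fleet": ["all"], "fleetv2": ["all"] } }
    ]
  }'
```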
@@ -94,8 +94,8 @@ The default CloudFormation stack is created in the AWS region selected for the u
::::{tab-item} Serverless
:sync: serverless
-* An {{obs-serverless}} project. To learn more, refer to [Create an Observability project](../../../solutions/observability/get-started/create-an-observability-project.md).
-* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+* An {{obs-serverless}} project. To learn more, refer to [Create an Observability project](/solutions/observability/get-started/create-an-observability-project.md).
+* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
* An active AWS account and the necessary permissions to create delivery streams.
::::
@@ -167,7 +167,7 @@ The following table shows the type of data ingested by the supported AWS service
:::{tab-item} Serverless
:sync: serverless
-1. [Create a new {{obs-serverless}} project](../../../solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
+1. [Create a new {{obs-serverless}} project](/solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
2. In your {{obs-serverless}} project, go to **Add Data**.
3. Under **What do you want to monitor?** select **Cloud**, **AWS**, and then select **AWS Firehose**.
@@ -199,4 +199,4 @@ Here is an example of the VPC Flow logs dashboard:
:screenshot:
:::
-Refer to [What is Elastic {{observability}}?](../../../solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features.
+Refer to [What is Elastic {{observability}}?](/solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features.
diff --git a/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md b/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md
index 7e59384014..53d10744d3 100644
--- a/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md
+++ b/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md
@@ -27,12 +27,12 @@ The script also generates an {{agent}} configuration file that you can use with
:sync: stack
* An {{es}} cluster for storing and searching your data, and {{kib}} for visualizing and managing your data. This quickstart is available for all Elastic deployment models. To get started quickly, try out [{{ecloud}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body).
-* A user with the `superuser` [built-in role](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) or the privileges required to onboard data.
+* A user with the `superuser` [built-in role](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) or the privileges required to onboard data.
::::{dropdown} Expand to view required privileges
- * [**Cluster**](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-cluster): `['monitor', 'manage_own_api_key']`
- * [**Index**](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices): `{ names: ['logs-*-*', 'metrics-*-*'], privileges: ['auto_configure', 'create_doc'] }`
- * [**Kibana**](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md): `{ spaces: ['*'], feature: { fleet: ['all'], fleetv2: ['all'] } }`
+ * [**Cluster**](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-cluster): `['monitor', 'manage_own_api_key']`
+ * [**Index**](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices): `{ names: ['logs-*-*', 'metrics-*-*'], privileges: ['auto_configure', 'create_doc'] }`
+ * [**Kibana**](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md): `{ spaces: ['*'], feature: { fleet: ['all'], fleetv2: ['all'] } }`
::::
@@ -43,8 +43,8 @@ The script also generates an {{agent}} configuration file that you can use with
:::{tab-item} Serverless
:sync: serverless
-* An {{obs-serverless}} project. To learn more, refer to [Create an Observability project](../../../solutions/observability/get-started/create-an-observability-project.md).
-* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+* An {{obs-serverless}} project. To learn more, refer to [Create an Observability project](/solutions/observability/get-started/create-an-observability-project.md).
+* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
* Root privileges on the host—required to run the auto-detection script used in this quickstart.
:::
@@ -90,7 +90,7 @@ The script also generates an {{agent}} configuration file that you can use with
:::{tab-item} Serverless
:sync: serverless
-1. [Create a new {{obs-serverless}} project](../../../solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
+1. [Create a new {{obs-serverless}} project](/solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
2. In your {{obs-serverless}} project, go to **Add Data**.
3. Under **What do you want to monitor?** select **Host**, and then select **Elastic Agent: Logs & Metrics**.
@@ -155,22 +155,22 @@ After using the dashboards to examine your data and confirm you’ve ingested al
For host monitoring, the following capabilities and features are recommended:
-* In the [Infrastructure UI](../../../solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md), analyze and compare data collected from your hosts. You can also:
+* In the [Infrastructure UI](/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md), analyze and compare data collected from your hosts. You can also:
- * [Detect anomalies](../../../solutions/observability/infra-and-hosts/detect-metric-anomalies.md) for memory usage and network traffic on hosts.
- * [Create alerts](../../../solutions/observability/incident-management/alerting.md) that notify you when an anomaly is detected or a metric exceeds a given value.
+ * [Detect anomalies](/solutions/observability/infra-and-hosts/detect-metric-anomalies.md) for memory usage and network traffic on hosts.
+ * [Create alerts](/solutions/observability/incident-management/alerting.md) that notify you when an anomaly is detected or a metric exceeds a given value.
-* In [Discover](../../../solutions/observability/logs/discover-logs.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also:
+* In [Discover](/solutions/observability/logs/discover-logs.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also:
- * [Monitor log data set quality](../../../solutions/observability/data-set-quality-monitoring.md) to find degraded documents.
- * [Run a pattern analysis](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-pattern-analysis) to find patterns in unstructured log messages.
- * [Create alerts](../../../solutions/observability/incident-management/alerting.md) that notify you when an Observability data type reaches or exceeds a given value.
+ * [Monitor log data set quality](/solutions/observability/data-set-quality-monitoring.md) to find degraded documents.
+ * [Run a pattern analysis](/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-pattern-analysis) to find patterns in unstructured log messages.
+ * [Create alerts](/solutions/observability/incident-management/alerting.md) that notify you when an Observability data type reaches or exceeds a given value.
-* Use [machine learning](../../../explore-analyze/machine-learning/machine-learning-in-kibana.md) to apply predictive analytics to your data:
+* Use [machine learning](/explore-analyze/machine-learning/machine-learning-in-kibana.md) to apply predictive analytics to your data:
- * [Detect anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md) by comparing real-time and historical data from different sources to look for unusual, problematic patterns.
- * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis).
- * [Detect change points](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#change-point-detection) in your time series data.
+ * [Detect anomalies](/explore-analyze/machine-learning/anomaly-detection.md) by comparing real-time and historical data from different sources to look for unusual, problematic patterns.
+ * [Analyze log spikes and drops](/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis).
+ * [Detect change points](/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#change-point-detection) in your time series data.
-Refer to the [OBservability overview](../../../solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features.
\ No newline at end of file
+Refer to the [Observability overview](/solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features.
\ No newline at end of file
diff --git a/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md b/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md
index 5c968affbe..b7543cc55b 100644
--- a/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md
+++ b/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md
@@ -27,7 +27,7 @@ In this quickstart guide, you’ll learn how to monitor your hosts using the Ela
* An {{es}} cluster for storing and searching your data, and {{kib}} for visualizing and managing your data. This quickstart is available for all Elastic deployment models. The quickest way to get started is with a trial project on [Elastic serverless](https://docs.elastic.co/serverless/quickstart-monitor-hosts-with-otel.html).
* This quickstart is only available for Linux and macOS systems.
-* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [User roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md).
+* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [User roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md).
* Root privileges on the host—required to run the OpenTelemetry collector because of these components:
* `hostmetrics` receiver to read all system metrics (all processes, memory, etc.).
@@ -38,9 +38,9 @@ In this quickstart guide, you’ll learn how to monitor your hosts using the Ela
:::{tab-item} Serverless
:sync: serverless
-* An {{observability}} project. To learn more, refer to [Create an Observability project](../../../solutions/observability/get-started/create-an-observability-project.md).
+* An {{observability}} project. To learn more, refer to [Create an Observability project](/solutions/observability/get-started/create-an-observability-project.md).
* This quickstart is only available for Linux and macOS systems.
-* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
* Root privileges on the host—required to run the OpenTelemetry collector because of these components:
* `hostmetrics` receiver to read all system metrics (all processes, memory, etc.).
@@ -94,7 +94,7 @@ Logs are collected from setup onward, so you won’t see logs that occurred befo
::::{tab-item} Serverless
:sync: serverless
-1. [Create a new {{obs-serverless}} project](../../../solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
+1. [Create a new {{obs-serverless}} project](/solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
2. To open the quickstart, go to **Add Data**.
3. Select **Collect and analyze logs**, and then select **OpenTelemetry**.
4. Under **What do you want to monitor?** select **Host**, and then select **Elastic Agent: Logs & Metrics**.
@@ -125,22 +125,22 @@ Under **Visualize your data**, you’ll see links to **Discover** to view your l
After using the Hosts page and Discover to confirm you’ve ingested all the host logs and metrics you want to monitor, use Elastic {{observability}} to gain deeper insight into your host data with the following capabilities and features:
-* In the [Infrastructure UI](../../../solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md), analyze and compare data collected from your hosts. You can also:
+* In the [Infrastructure UI](/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md), analyze and compare data collected from your hosts. You can also:
- * [Detect anomalies](../../../solutions/observability/infra-and-hosts/detect-metric-anomalies.md) for memory usage and network traffic on hosts.
- * [Create alerts](../../../solutions/observability/incident-management/create-manage-rules.md) that notify you when an anomaly is detected or a metric exceeds a given value.
+ * [Detect anomalies](/solutions/observability/infra-and-hosts/detect-metric-anomalies.md) for memory usage and network traffic on hosts.
+ * [Create alerts](/solutions/observability/incident-management/create-manage-rules.md) that notify you when an anomaly is detected or a metric exceeds a given value.
-* In [Discover](../../../solutions/observability/logs/discover-logs.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also:
+* In [Discover](/solutions/observability/logs/discover-logs.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also:
- * [Monitor log data set quality](../../../solutions/observability/data-set-quality-monitoring.md) to find degraded documents.
- * [Run a pattern analysis](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-pattern-analysis) to find patterns in unstructured log messages.
- * [Create alerts](../../../solutions/observability/incident-management/create-manage-rules.md) that notify you when an Observability data type reaches or exceeds a given value.
+ * [Monitor log data set quality](/solutions/observability/data-set-quality-monitoring.md) to find degraded documents.
+ * [Run a pattern analysis](/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-pattern-analysis) to find patterns in unstructured log messages.
+ * [Create alerts](/solutions/observability/incident-management/create-manage-rules.md) that notify you when an Observability data type reaches or exceeds a given value.
-* Use [machine learning](../../../explore-analyze/machine-learning/machine-learning-in-kibana.md) to apply predictive analytics to your data:
+* Use [machine learning](/explore-analyze/machine-learning/machine-learning-in-kibana.md) to apply predictive analytics to your data:
- * [Detect anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md) by comparing real-time and historical data from different sources to look for unusual, problematic patterns.
- * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis).
- * [Detect change points](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#change-point-detection) in your time series data.
+ * [Detect anomalies](/explore-analyze/machine-learning/anomaly-detection.md) by comparing real-time and historical data from different sources to look for unusual, problematic patterns.
+ * [Analyze log spikes and drops](/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis).
+ * [Detect change points](/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#change-point-detection) in your time series data.
-Refer to the [Elastic Observability](../../../solutions/observability.md) for a description of other useful features.
\ No newline at end of file
+Refer to [Elastic Observability](/solutions/observability.md) for a description of other useful features.
\ No newline at end of file
diff --git a/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md b/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md
index 7e13cf5c68..54b412bd73 100644
--- a/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md
+++ b/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md
@@ -25,12 +25,12 @@ The kubectl command installs the standalone Elastic Agent in your Kubernetes clu
:sync: stack
* An {{es}} cluster for storing and searching your data, and {{kib}} for visualizing and managing your data. This quickstart is available for all Elastic deployment models. To get started quickly, try out [{{ecloud}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body).
-* A user with the `superuser` [built-in role](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) or the privileges required to onboard data.
+* A user with the `superuser` [built-in role](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) or the privileges required to onboard data.
::::{dropdown} Expand to view required privileges
- * [**Cluster**](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-cluster): `['monitor', 'manage_own_api_key']`
- * [**Index**](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices): `{ names: ['logs-*-*', 'metrics-*-*'], privileges: ['auto_configure', 'create_doc'] }`
- * [**Kibana**](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md): `{ spaces: ['*'], feature: { fleet: ['all'], fleetv2: ['all'] } }`
+ * [**Cluster**](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-cluster): `['monitor', 'manage_own_api_key']`
+ * [**Index**](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices): `{ names: ['logs-*-*', 'metrics-*-*'], privileges: ['auto_configure', 'create_doc'] }`
+ * [**Kibana**](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md): `{ spaces: ['*'], feature: { fleet: ['all'], fleetv2: ['all'] } }`
::::
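
Rather than repeating the role definition here, a quick way to confirm that an existing user already carries the {{es}} privileges above is the standard has-privileges API; `ES_URL` and the credentials below are placeholders.

```sh
# Sketch: check the cluster and index privileges required by this quickstart.
curl -s -X POST "${ES_URL}/_security/user/_has_privileges" \
  -u "${USER}:${PASSWORD}" -H "Content-Type: application/json" \
  -d '{
    "cluster": ["monitor", "manage_own_api_key"],
    "index": [
      { "names": ["logs-*-*", "metrics-*-*"],
        "privileges": ["auto_configure", "create_doc"] }
    ]
  }'
# "has_all_requested": true in the response means the user can onboard data.
```
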
@@ -42,8 +42,8 @@ The kubectl command installs the standalone Elastic Agent in your Kubernetes clu
:::{tab-item} Serverless
:sync: serverless
-* An {{obs-serverless}} project. To learn more, refer to [Create an Observability project](../../../solutions/observability/get-started/create-an-observability-project.md).
-* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+* An {{obs-serverless}} project. To learn more, refer to [Create an Observability project](/solutions/observability/get-started/create-an-observability-project.md).
+* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
* A running Kubernetes cluster.
* [Kubectl](https://kubernetes.io/docs/reference/kubectl/).
@@ -83,7 +83,7 @@ The kubectl command installs the standalone Elastic Agent in your Kubernetes clu
:::{tab-item} Serverless
:sync: serverless
-1. [Create a new {{obs-serverless}} project](../../../solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
+1. [Create a new {{obs-serverless}} project](/solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
2. In your {{obs-serverless}} project, go to **Add Data**.
3. Under **What do you want to monitor?** select **Kubernetes**, and then select **Elastic Agent: Logs & Metrics**.
@@ -115,4 +115,4 @@ After installation is complete and all relevant data is flowing into Elastic, th
Furthermore, you can access other useful prebuilt dashboards for monitoring Kubernetes resources, for example running pods per namespace, as well as the resources they consume, like CPU and memory.
-Refer to [Observability overview](../../../solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features.
\ No newline at end of file
+Refer to [Observability overview](/solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features.
\ No newline at end of file
diff --git a/solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md b/solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md
index 1f26bd8011..f05435d369 100644
--- a/solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md
+++ b/solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md
@@ -45,7 +45,7 @@ For a more detailed description of the components and advanced configuration, re
:::{tab-item} Serverless
:sync: serverless
-* An {{obs-serverless}} project. To learn more, refer to [Create an Observability project](../../../solutions/observability/get-started/create-an-observability-project.md).
+* An {{obs-serverless}} project. To learn more, refer to [Create an Observability project](/solutions/observability/get-started/create-an-observability-project.md).
* A running Kubernetes cluster (v1.23 or newer).
* [Kubectl](https://kubernetes.io/docs/reference/kubectl/).
* [Helm](https://helm.sh/docs/intro/install/).
@@ -92,7 +92,7 @@ For a more detailed description of the components and advanced configuration, re
:::{tab-item} Serverless
:sync: serverless
-1. [Create a new {{obs-serverless}} project](../../../solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
+1. [Create a new {{obs-serverless}} project](/solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
2. In your {{obs-serverless}} project, go to **Add Data**.
3. Under **What do you want to monitor?** select **Kubernetes**, and then select **OpenTelemetry: Full Observability**.
@@ -139,4 +139,4 @@ After installation is complete and all relevant data is flowing into Elastic, th
* To troubleshoot deployment and installation, refer to [installation verification](https://github.com/elastic/opentelemetry/tree/main/docs/kubernetes/operator#installation-verification).
* For application instrumentation details, refer to [Instrumenting applications with EDOT SDKs on Kubernetes](https://github.com/elastic/opentelemetry/blob/main/docs/kubernetes/operator/instrumenting-applications.md).
* To customize the configuration, refer to [custom configuration](https://github.com/elastic/opentelemetry/tree/main/docs/kubernetes/operator#custom-configuration).
-* Refer to [Observability overview](../../../solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features.
\ No newline at end of file
+* Refer to [Observability overview](/solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features.
\ No newline at end of file
diff --git a/solutions/observability/get-started/what-is-elastic-observability.md b/solutions/observability/get-started/what-is-elastic-observability.md
index e6ca116765..a071a4b58a 100644
--- a/solutions/observability/get-started/what-is-elastic-observability.md
+++ b/solutions/observability/get-started/what-is-elastic-observability.md
@@ -26,7 +26,7 @@ In **Discover**, you can quickly search and filter your log data, get informatio
:class: screenshot
:::
-[Learn more about log monitoring →](../../../solutions/observability/logs.md)
+[Learn more about log monitoring →](/solutions/observability/logs.md)
## Application performance monitoring (APM) [observability-serverless-observability-overview-application-performance-monitoring-apm]
@@ -60,7 +60,7 @@ On the {{observability}} **Overview** page, the **Hosts** table shows your top h
You can then drill down into the {{infrastructure-app}} by clicking **Show inventory**. Here you can monitor and filter your data by hosts, pods, containers, or EC2 instances and create custom groupings such as availability zones or namespaces.
-[Learn more about infrastructure monitoring → ](../../../solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md)
+[Learn more about infrastructure monitoring →](/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md)
% Stateful only for RUM.
@@ -92,7 +92,7 @@ Simulate actions and requests that an end user would perform on your site at pre
Build stack traces to get visibility into your system without application source code changes or instrumentation. Use flamegraphs to explore system performance and identify the most expensive lines of code, increase CPU resource efficiency, debug performance regressions, and reduce cloud spend.
-[Learn more about Universal Profiling →](../../../solutions/observability/infra-and-hosts/universal-profiling.md)
+[Learn more about Universal Profiling →](/solutions/observability/infra-and-hosts/universal-profiling.md)
## Alerting [observability-serverless-observability-overview-alerting]
@@ -106,7 +106,7 @@ On the **Alerts** page, the **Alerts** table provides a snapshot of alerts occur
:screenshot:
:::
-[Learn more about alerting → ](../../../solutions/observability/incident-management/alerting.md)
+[Learn more about alerting →](/solutions/observability/incident-management/alerting.md)
## Service-level objectives (SLOs) [observability-serverless-observability-overview-service-level-objectives-slos]
@@ -120,7 +120,7 @@ From the SLO overview list, you can see all of your SLOs and a quick summary of
:screenshot:
:::
-[Learn more about SLOs → ](../../../solutions/observability/incident-management/service-level-objectives-slos.md)
+[Learn more about SLOs →](/solutions/observability/incident-management/service-level-objectives-slos.md)
## Cases [observability-serverless-observability-overview-cases]
@@ -131,7 +131,7 @@ Collect and share information about observability issues by creating cases. Case
:screenshot:
:::
-[Learn more about cases → ](../../../solutions/observability/incident-management/cases.md)
+[Learn more about cases →](/solutions/observability/incident-management/cases.md)
## Machine learning and AIOps [observability-serverless-observability-overview-aiops]
@@ -146,4 +146,4 @@ Reduce the time and effort required to detect, understand, investigate, and reso
:screenshot:
:::
-[Learn more about machine learning and AIOps →](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md)
\ No newline at end of file
+[Learn more about machine learning and AIOps →](/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md)
\ No newline at end of file
diff --git a/solutions/observability/incident-management.md b/solutions/observability/incident-management.md
index 9f823da562..9693dc76d0 100644
--- a/solutions/observability/incident-management.md
+++ b/solutions/observability/incident-management.md
@@ -13,6 +13,6 @@ Explore the topics in this section to learn how to respond to incidents detected
| | |
| --- | --- |
-| [Alerting](../../solutions/observability/incident-management/alerting.md) | Trigger alerts when incidents occur, and use built-in connectors to send the alerts to email, slack, or other third-party systems, such as your external incident management application. |
-| [Cases](../../solutions/observability/incident-management/cases.md) | Collect and share information about {{observability}} issues by opening cases and optionally sending them to your external incident management application. |
-| [Service-level objectives (SLOs)](../../solutions/observability/incident-management/service-level-objectives-slos.md) | Set clear, measurable targets for your service performance, based on factors like availability, response times, error rates, and other key metrics. |
\ No newline at end of file
+| [Alerting](/solutions/observability/incident-management/alerting.md) | Trigger alerts when incidents occur, and use built-in connectors to send the alerts to email, Slack, or other third-party systems, such as your external incident management application. |
+| [Cases](/solutions/observability/incident-management/cases.md) | Collect and share information about {{observability}} issues by opening cases and optionally sending them to your external incident management application. |
+| [Service-level objectives (SLOs)](/solutions/observability/incident-management/service-level-objectives-slos.md) | Set clear, measurable targets for your service performance, based on factors like availability, response times, error rates, and other key metrics. |
\ No newline at end of file
diff --git a/solutions/observability/incident-management/aggregation-options.md b/solutions/observability/incident-management/aggregation-options.md
index 050e015928..8bff1855af 100644
--- a/solutions/observability/incident-management/aggregation-options.md
+++ b/solutions/observability/incident-management/aggregation-options.md
@@ -18,5 +18,5 @@ The following aggregations are available in some rules:
| Max | Highest value of a numeric field. |
| Min | Lowest value of a numeric field. |
| Percentile | Numeric value which represents the point at which n% of all values in the selected dataset are lower (choices are 95th or 99th). |
-| Rate | Rate at which a specific field changes over time. To learn about how the rate is calculated, refer to [Rate aggregation](../../../solutions/observability/incident-management/rate-aggregation.md). |
+| Rate | Rate at which a specific field changes over time. To learn about how the rate is calculated, refer to [Rate aggregation](/solutions/observability/incident-management/rate-aggregation.md). |
| Sum | Total of a numeric field in the selected dataset. |
\ No newline at end of file
diff --git a/solutions/observability/incident-management/alerting.md b/solutions/observability/incident-management/alerting.md
index ea3b4a0d49..030090d22f 100644
--- a/solutions/observability/incident-management/alerting.md
+++ b/solutions/observability/incident-management/alerting.md
@@ -13,7 +13,7 @@ Alerting enables you to define *rules*, which detect complex conditions within d
Alerting works by running checks on a schedule to detect conditions defined by a rule. You can define rules at different levels (service, environment, transaction) or use custom KQL queries. When a condition is met, the rule tracks it as an *alert* and responds by triggering one or more *actions*.
-Actions typically involve interaction with Elastic services or third-party integrations. [Connectors](../../../deploy-manage/manage-connectors.md) enable actions to talk to these services and integrations.
+Actions typically involve interaction with Elastic services or third-party integrations. [Connectors](/deploy-manage/manage-connectors.md) enable actions to talk to these services and integrations.
Once you’ve defined your rules, you can monitor any alerts triggered by these rules in real time, with detailed dashboards that help you quickly identify and troubleshoot any issues that may arise. You can also extend your alerts with notifications via services or third-party incident management systems.
@@ -27,10 +27,10 @@ On the **Alerts** page, the Alerts table provides a snapshot of alerts occurring
:screenshot:
:::
-You can filter this table by alert status or time period, customize the visible columns, and search for specific alerts (for example, alerts related to a specific service or environment) using KQL. Select **View alert detail** from the **More actions** menu , or click the Reason link for any alert to [view alert](../../../solutions/observability/incident-management/view-alerts.md) in detail, and you can then either **View in app** or **View rule details**.
+You can filter this table by alert status or time period, customize the visible columns, and search for specific alerts (for example, alerts related to a specific service or environment) using KQL. Select **View alert detail** from the **More actions** menu, or click the Reason link for any alert to [view the alert](/solutions/observability/incident-management/view-alerts.md) in detail; from there, you can either **View in app** or **View rule details**.
## Next steps [observability-alerting-next-steps]
-* [Create and manage rules](../../../solutions/observability/incident-management/create-manage-rules.md)
-* [View alerts](../../../solutions/observability/incident-management/view-alerts.md)
\ No newline at end of file
+* [Create and manage rules](/solutions/observability/incident-management/create-manage-rules.md)
+* [View alerts](/solutions/observability/incident-management/view-alerts.md)
\ No newline at end of file
diff --git a/solutions/observability/incident-management/cases.md b/solutions/observability/incident-management/cases.md
index e17e9d9709..6bf4129357 100644
--- a/solutions/observability/incident-management/cases.md
+++ b/solutions/observability/incident-management/cases.md
@@ -6,7 +6,7 @@ mapped_pages:
# Cases [observability-cases]
-Collect and share information about observability issues by creating a case. Cases allow you to track key investigation details, add assignees and tags to your cases, set their severity and status, and add alerts, comments, and visualizations. You can also send cases to third-party systems by [configuring external connectors](../../../solutions/observability/incident-management/configure-case-settings.md).
+Collect and share information about observability issues by creating a case. Cases allow you to track key investigation details, add assignees and tags to your cases, set their severity and status, and add alerts, comments, and visualizations. You can also send cases to third-party systems by [configuring external connectors](/solutions/observability/incident-management/configure-case-settings.md).
:::{image} /solutions/images/observability-cases.png
:alt: Cases page
diff --git a/solutions/observability/incident-management/configure-access-to-cases.md b/solutions/observability/incident-management/configure-access-to-cases.md
index 8b9d93f701..9b4495d0bc 100644
--- a/solutions/observability/incident-management/configure-access-to-cases.md
+++ b/solutions/observability/incident-management/configure-access-to-cases.md
@@ -12,7 +12,7 @@ If you are using an on-premises {{kib}} deployment and want your email notificat
::::
-For more details, refer to [feature access based on user privileges](../../../deploy-manage/manage-spaces.md#spaces-control-user-access).
+For more details, refer to [feature access based on user privileges](/deploy-manage/manage-spaces.md#spaces-control-user-access).
:::{image} /solutions/images/observability-cases-privileges.png
:alt: cases privileges
diff --git a/solutions/observability/incident-management/configure-case-settings.md b/solutions/observability/incident-management/configure-case-settings.md
index d6d91db455..53d1317372 100644
--- a/solutions/observability/incident-management/configure-case-settings.md
+++ b/solutions/observability/incident-management/configure-case-settings.md
@@ -10,7 +10,7 @@ mapped_pages:
::::{note}
-For Observability serverless projects, the **Editor** role or higher is required to create and edit connectors. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+For Observability serverless projects, the **Editor** role or higher is required to create and edit connectors. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -33,21 +33,21 @@ To close cases when they are sent to an external system, select **Automatically
If you are using an external incident management system, you can integrate Elastic Observability cases with that system using *connectors*. These third-party systems are supported:
-* {ibm-r}
+* {{ibm-r}}
* {{jira}} (including {{jira}} Service Desk)
-* {sn-itsm}
-* {sn-sir}
-* {swimlane}
+* {{sn-itsm}}
+* {{sn-sir}}
+* {{swimlane}}
* TheHive
-* {webhook-cm}
+* {{webhook-cm}}
To send cases to an external system, you need to create a connector, which stores the information required to interact with that system. For each case, you can send the title, description, and comment when you choose to push the case — for the **Webhook - Case Management** connector, you can also send the status and severity fields.
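
For instance, a connector for one of the supported systems can be created through the {{kib}} connector API as well as through the UI steps below. This is a sketch only: the Jira URL, project key, and credentials are placeholders.

```sh
# Sketch: create a Jira connector for pushing cases (values are placeholders).
curl -X POST "${KIBANA_URL}/api/actions/connector" \
  -u "${USER}:${PASSWORD}" \
  -H "Content-Type: application/json" -H "kbn-xsrf: true" \
  -d '{
    "name": "jira-cases",
    "connector_type_id": ".jira",
    "config": { "apiUrl": "https://example.atlassian.net", "projectKey": "OBS" },
    "secrets": { "email": "user@example.com", "apiToken": "REDACTED" }
  }'
```
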
::::{important}
-To send cases to external systems, you need the appropriate license, and your role must have the **Cases** {{kib}} privilege as a user. For more details, refer to [Configure access to cases](../../../solutions/observability/incident-management/configure-access-to-cases.md).
+To send cases to external systems, you need the appropriate license, and your role must have the **Cases** {{kib}} privilege as a user. For more details, refer to [Configure access to cases](/solutions/observability/incident-management/configure-access-to-cases.md).
::::
-After creating a connector, you can set your cases to [automatically close](../../../solutions/observability/incident-management/configure-case-settings.md#close-connector-observability) when they are sent to an external system.
+After creating a connector, you can set your cases to [automatically close](/solutions/observability/incident-management/configure-case-settings.md#close-connector-observability) when they are sent to an external system.
### Create a connector [new-connector-observability]
@@ -77,7 +77,7 @@ After creating a connector, you can set your cases to [automatically close](../.
You can create additional connectors, update existing connectors, and change the connector used to send cases to external systems.
::::{tip}
-You can also configure which connector is used for each case individually. Refer to [Create and manage cases](../../../solutions/observability/incident-management/create-manage-cases.md).
+You can also configure which connector is used for each case individually. Refer to [Create and manage cases](/solutions/observability/incident-management/create-manage-cases.md).
::::
diff --git a/solutions/observability/incident-management/configure-service-level-objective-slo-access.md b/solutions/observability/incident-management/configure-service-level-objective-slo-access.md
index 610ceb21fc..71795701f8 100644
--- a/solutions/observability/incident-management/configure-service-level-objective-slo-access.md
+++ b/solutions/observability/incident-management/configure-service-level-objective-slo-access.md
@@ -21,10 +21,10 @@ You can enable access to SLOs in two different ways:
* [**SLO Editor**](#slo-all-access) — Create, edit, and manage SLOs and their historical summaries.
* [**SLO Viewer**](#slo-read-access) — Check SLOs and their historical summaries.
-* Using the `editor` [built-in role](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md). This role grants full access to all features in {{kib}} (including the {{observability}} solution) and read-only access to data indices. Users assigned to this role can create, edit, and manage SLOs.
+* Using the `editor` [built-in role](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md). This role grants full access to all features in {{kib}} (including the {{observability}} solution) and read-only access to data indices. Users assigned to this role can create, edit, and manage SLOs.
::::{note}
- The `editor` [built-in role](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) grants write access to *all* {{kib}} apps. If you want to limit access to the SLOs only, you have to manually create and assign the mentioned roles.
+ The `editor` [built-in role](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) grants write access to *all* {{kib}} apps. To limit access to SLOs only, you must manually create and assign the roles mentioned above.
::::
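
If you go the manual route, a starting point might look like the sketch below, which uses the {{kib}} role API. The `slo` feature id and the `.slo-*` index pattern are assumptions for illustration; use the exact privilege list documented on this page for your version.

```sh
# Sketch only: a role limited to SLO editing. Verify the feature id and
# index privileges against the list on this page before using it.
curl -X PUT "${KIBANA_URL}/api/security/role/slo-editor" \
  -u "${USER}:${PASSWORD}" \
  -H "Content-Type: application/json" -H "kbn-xsrf: true" \
  -d '{
    "elasticsearch": {
      "indices": [
        { "names": [".slo-*"], "privileges": ["read", "view_index_metadata"] }
      ]
    },
    "kibana": [ { "spaces": ["*"], "feature": { "slo": ["all"] } } ]
  }'
```
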
diff --git a/solutions/observability/incident-management/create-an-anomaly-detection-rule.md b/solutions/observability/incident-management/create-an-anomaly-detection-rule.md
index ea9bf3813e..d05d8f29c1 100644
--- a/solutions/observability/incident-management/create-an-anomaly-detection-rule.md
+++ b/solutions/observability/incident-management/create-an-anomaly-detection-rule.md
@@ -11,7 +11,7 @@ mapped_pages:
::::{note}
-The **Editor** role or higher is required to create anomaly detection rules. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).
+The **Editor** role or higher is required to create anomaly detection rules. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).
::::
@@ -29,7 +29,7 @@ Create an anomaly detection rule to check for anomalies in one or more anomaly d
To create an anomaly detection rule:
1. In your {{obs-serverless}} project, go to **Machine learning** → **Jobs**.
-2. In the list of anomaly detection jobs, find the job you want to check for anomalies. Haven’t created a job yet? [Create one now](../../../explore-analyze/machine-learning/anomaly-detection.md).
+2. In the list of anomaly detection jobs, find the job you want to check for anomalies. Haven’t created a job yet? [Create one now](/explore-analyze/machine-learning/anomaly-detection.md).
3. From the **Actions** menu next to the job, select **Create alert rule**.
4. Specify a name and optional tags for the rule. You can use these tags later to filter alerts.
5. Verify that the correct job is selected and configure the alert details:
@@ -103,7 +103,7 @@ Some connector types are paid commercial features, while others are free. For a
::::
-For more information on creating connectors, refer to [Connectors](../../../deploy-manage/manage-connectors.md).
+For more information on creating connectors, refer to [Connectors](/deploy-manage/manage-connectors.md).
:::::
@@ -136,7 +136,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.anomalyExplorerUrl`
: URL to open in the Anomaly Explorer.
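
For example, a minimal custom notification message that combines the rule-specific variable above with the common `rule.name` variable might look like the following (a sketch, not the default template):

```txt
Anomaly detected by rule "{{rule.name}}".
Investigate in the Anomaly Explorer: {{context.anomalyExplorerUrl}}
```
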
diff --git a/solutions/observability/incident-management/create-an-apm-anomaly-rule.md b/solutions/observability/incident-management/create-an-apm-anomaly-rule.md
index f400f869c5..d7f12d1323 100644
--- a/solutions/observability/incident-management/create-an-apm-anomaly-rule.md
+++ b/solutions/observability/incident-management/create-an-apm-anomaly-rule.md
@@ -15,7 +15,7 @@ To use the APM Anomaly rule, you have to enable [machine learning](/solutions/ob
::::{note}
-For Observability serverless projects, the **Editor** role or higher is required to create anomaly rules. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+For Observability serverless projects, the **Editor** role or higher is required to create anomaly rules. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -81,7 +81,7 @@ Some connector types are paid commercial features, while others are free. For a
::::
-For more information on creating connectors, refer to [Connectors](../../../deploy-manage/manage-connectors.md).
+For more information on creating connectors, refer to [Connectors](/deploy-manage/manage-connectors.md).
:::::
@@ -114,7 +114,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.alertDetailsUrl`
: Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
diff --git a/solutions/observability/incident-management/create-an-elasticsearch-query-rule.md b/solutions/observability/incident-management/create-an-elasticsearch-query-rule.md
index aec742e181..10d5545322 100644
--- a/solutions/observability/incident-management/create-an-elasticsearch-query-rule.md
+++ b/solutions/observability/incident-management/create-an-elasticsearch-query-rule.md
@@ -11,7 +11,7 @@ mapped_pages:
::::{note}
-The **Editor** role or higher is required to create Elasticsearch query rules. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).
+The **Editor** role or higher is required to create Elasticsearch query rules. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).
::::
@@ -36,7 +36,7 @@ When you create an {{es}} query rule, your choice of query type affects the info
1. Define your query
- If you use [query DSL](../../../explore-analyze/query-filter/languages/querydsl.md), you must select an index and time field then provide your query. Only the `query`, `fields`, `_source` and `runtime_mappings` fields are used, other DSL fields are not considered. For example:
+ If you use [query DSL](/explore-analyze/query-filter/languages/querydsl.md), you must select an index and time field, then provide your query. Only the `query`, `fields`, `_source`, and `runtime_mappings` fields are used; other DSL fields are not considered. For example:
```sh
{
@@ -46,9 +46,9 @@ When you create an {{es}} query rule, your choice of query type affects the info
}
```
- If you use [KQL](../../../explore-analyze/query-filter/languages/kql.md) or [Lucene](../../../explore-analyze/query-filter/languages/lucene-query-syntax.md), you must specify a data view then define a text-based query. For example, `http.request.referrer: "https://example.com"`.
+ If you use [KQL](/explore-analyze/query-filter/languages/kql.md) or [Lucene](/explore-analyze/query-filter/languages/lucene-query-syntax.md), you must specify a data view, then define a text-based query. For example, `http.request.referrer: "https://example.com"`.
- If you use [ES|QL](../../../explore-analyze/query-filter/languages/esql.md), you must provide a source command followed by an optional series of processing commands, separated by pipe characters (|). For example:
+ If you use [ES|QL](/explore-analyze/query-filter/languages/esql.md), you must provide a source command followed by an optional series of processing commands, separated by pipe characters (|). For example:
```sh
FROM kibana_sample_data_logs
@@ -135,7 +135,7 @@ Some connector types are paid commercial features, while others are free. For a
::::
-For more information on creating connectors, refer to [Connectors](../../../deploy-manage/manage-connectors.md).
+For more information on creating connectors, refer to [Connectors](/deploy-manage/manage-connectors.md).
:::::
@@ -170,7 +170,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.conditions`
: A string that describes the threshold condition. Example: `count greater than 4`.
@@ -192,7 +192,7 @@ The following variables are specific to this rule type. You can also specify [va
{{/context.hits}}
```
- The documents returned by `context.hits` include the [`_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) field. If the {{es}} query search API’s [`fields`](elasticsearch://reference/elasticsearch/rest-apis/retrieve-selected-fields.md#search-fields-param) parameter is used, documents will also return the `fields` field, which can be used to access any runtime fields defined by the [`runtime_mappings`](../../../manage-data/data-store/mapping/define-runtime-fields-in-search-request.md) parameter. For example:
+ The documents returned by `context.hits` include the [`_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) field. If the {{es}} query search API’s [`fields`](elasticsearch://reference/elasticsearch/rest-apis/retrieve-selected-fields.md#search-fields-param) parameter is used, documents will also return the `fields` field, which can be used to access any runtime fields defined by the [`runtime_mappings`](/manage-data/data-store/mapping/define-runtime-fields-in-search-request.md) parameter. For example:
```txt
{{#context.hits}}
diff --git a/solutions/observability/incident-management/create-an-error-count-threshold-rule.md b/solutions/observability/incident-management/create-an-error-count-threshold-rule.md
index 44b26cbfab..5cf84c6d49 100644
--- a/solutions/observability/incident-management/create-an-error-count-threshold-rule.md
+++ b/solutions/observability/incident-management/create-an-error-count-threshold-rule.md
@@ -11,7 +11,7 @@ navigation_title: "Error count threshold"
::::{note}
-For Observability serverless projects, the **Editor** role or higher is required to create error count threshold rules. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+For Observability serverless projects, the **Editor** role or higher is required to create error count threshold rules. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -79,7 +79,7 @@ Some connector types are paid commercial features, while others are free. For a
::::
-For more information on creating connectors, refer to [Connectors](../../../deploy-manage/manage-connectors.md).
+For more information on creating connectors, refer to [Connectors](/deploy-manage/manage-connectors.md).
:::::
@@ -112,7 +112,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.alertDetailsUrl`
: Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
diff --git a/solutions/observability/incident-management/create-an-inventory-rule.md b/solutions/observability/incident-management/create-an-inventory-rule.md
index c4fdbe2524..3b2d0c9928 100644
--- a/solutions/observability/incident-management/create-an-inventory-rule.md
+++ b/solutions/observability/incident-management/create-an-inventory-rule.md
@@ -11,7 +11,7 @@ navigation_title: "Inventory"
::::{note}
-For Observability serverless projects, the **Editor** role or higher is required to create inventory threshold rules. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+For Observability serverless projects, the **Editor** role or higher is required to create inventory threshold rules. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -78,7 +78,7 @@ Some connector types are paid commercial features, while others are free. For a
::::
-For more information on creating connectors, refer to [Connectors](../../../deploy-manage/manage-connectors.md).
+For more information on creating connectors, refer to [Connectors](/deploy-manage/manage-connectors.md).
:::::
@@ -119,7 +119,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.alertDetailsUrl`
: Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
@@ -181,6 +181,6 @@ The following variables are specific to this rule type. You can also specify [va
## Settings [infra-alert-settings]
-With infrastructure threshold rules, it’s not possible to set an explicit index pattern as part of the configuration. The index pattern is instead inferred from **Metrics indices** on the [Settings](../../../solutions/observability/infra-and-hosts/configure-settings.md) page of the {{infrastructure-app}}.
+With infrastructure threshold rules, it’s not possible to set an explicit index pattern as part of the configuration. The index pattern is instead inferred from **Metrics indices** on the [Settings](/solutions/observability/infra-and-hosts/configure-settings.md) page of the {{infrastructure-app}}.
With each execution of the rule check, the **Metrics indices** setting is checked, but it is not stored when the rule is created.
\ No newline at end of file
diff --git a/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md b/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md
index 9bedd81870..ebdbd7fc44 100644
--- a/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md
+++ b/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md
@@ -12,9 +12,9 @@ navigation_title: "SLO burn rate"
::::{important}
-**For Observability serverless projects**, The **Editor** role or higher is required to create SLOs. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Editor** role or higher is required to create SLOs. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
-**For Elastic Stack**, to create and manage SLOs, you need an [appropriate license](https://www.elastic.co/subscriptions), an {{es}} cluster with both `transform` and `ingest` [node roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles) present, and [SLO access](../../../solutions/observability/incident-management/configure-service-level-objective-slo-access.md) must be configured.
+**For Elastic Stack**, to create and manage SLOs, you need an [appropriate license](https://www.elastic.co/subscriptions) and an {{es}} cluster with both `transform` and `ingest` [node roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles) present, and [SLO access](/solutions/observability/incident-management/configure-service-level-objective-slo-access.md) must be configured.
::::
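
A quick way to confirm that the `transform` and `ingest` node roles are present (a convenience check, not part of the original prerequisite list) is the `_cat/nodes` API; in the `node.role` column, `t` is transform and `i` is ingest.

```sh
# List each node's roles; at least one node should show both "t" and "i".
curl -s -u "${USER}:${PASSWORD}" "${ES_URL}/_cat/nodes?v&h=name,node.role"
```
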
@@ -89,7 +89,7 @@ Some connector types are paid commercial features, while others are free. For a
::::
-For more information on creating connectors, refer to [Connectors](../../../deploy-manage/manage-connectors.md).
+For more information on creating connectors, refer to [Connectors](/deploy-manage/manage-connectors.md).
:::::
@@ -122,7 +122,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.alertDetailsUrl`
: Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
@@ -162,5 +162,5 @@ The following variables are specific to this rule type. You can also specify [va
Learn how to view alerts and triage SLO burn rate breaches:
-* [View alerts](../../../solutions/observability/incident-management/view-alerts.md)
-* [SLO burn rate breaches](../../../solutions/observability/incident-management/triage-slo-burn-rate-breaches.md)
\ No newline at end of file
+* [View alerts](/solutions/observability/incident-management/view-alerts.md)
+* [SLO burn rate breaches](/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md)
\ No newline at end of file
diff --git a/solutions/observability/incident-management/create-an-slo.md b/solutions/observability/incident-management/create-an-slo.md
index e4b3d7c54e..0614391f37 100644
--- a/solutions/observability/incident-management/create-an-slo.md
+++ b/solutions/observability/incident-management/create-an-slo.md
@@ -9,9 +9,9 @@ mapped_pages:
::::{important}
-**For Observability serverless projects**, The **Editor** role or higher is required to create SLOs. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Editor** role or higher is required to create SLOs. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
-**For Elastic Stack**, to create and manage SLOs, you need an [appropriate license](https://www.elastic.co/subscriptions), an {{es}} cluster with both `transform` and `ingest` [node roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles) present, and [SLO access](../../../solutions/observability/incident-management/configure-service-level-objective-slo-access.md) must be configured.
+**For Elastic Stack**, to create and manage SLOs, you need an [appropriate license](https://www.elastic.co/subscriptions) and an {{es}} cluster with both `transform` and `ingest` [node roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles) present, and [SLO access](/solutions/observability/incident-management/configure-service-level-objective-slo-access.md) must be configured.
::::
@@ -23,9 +23,9 @@ To create an SLO, find **SLOs** in the main menu or use the [global search field
From here, complete the following steps:
-1. [Define your service-level indicator (SLI)](../../../solutions/observability/incident-management/create-an-slo.md#define-sli).
-2. [Set your objectives](../../../solutions/observability/incident-management/create-an-slo.md#set-slo).
-3. [Describe your SLO](../../../solutions/observability/incident-management/create-an-slo.md#slo-describe).
+1. [Define your service-level indicator (SLI)](/solutions/observability/incident-management/create-an-slo.md#define-sli).
+2. [Set your objectives](/solutions/observability/incident-management/create-an-slo.md#set-slo).
+3. [Describe your SLO](/solutions/observability/incident-management/create-an-slo.md#slo-describe).
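
These steps describe the UI flow; the same SLO can also be created in one request with the {{kib}} SLO API. The sketch below assumes a custom KQL indicator, and every field value (name, index, queries, target) is illustrative rather than prescribed by this guide.

```sh
# Sketch: create an SLO with a custom KQL indicator (illustrative values).
curl -X POST "${KIBANA_URL}/api/observability/slos" \
  -u "${USER}:${PASSWORD}" \
  -H "Content-Type: application/json" -H "kbn-xsrf: true" \
  -d '{
    "name": "payments-availability",
    "indicator": {
      "type": "sli.kql.custom",
      "params": {
        "index": "logs-*",
        "good": "http.response.status_code < 500",
        "total": "http.response.status_code: *",
        "timestampField": "@timestamp"
      }
    },
    "timeWindow": { "duration": "30d", "type": "rolling" },
    "budgetingMethod": "occurrences",
    "objective": { "target": 0.99 }
  }'
```
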
::::{note}
**For Elastic Stack**, the cluster must include one or more nodes with both `ingest` and `transform` [roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles). The roles can exist on the same node or be distributed across separate nodes. On {{ech}} deployments, this is handled by the hot nodes, which serve as both `ingest` and `transform` nodes.
@@ -36,11 +36,11 @@ From here, complete the following steps:
The type of SLI to use depends on the location of your data:
-* [Custom KQL](../../../solutions/observability/incident-management/create-an-slo.md#custom-kql): Create an SLI based on raw logs coming from your services.
-* [Timeslice metric](../../../solutions/observability/incident-management/create-an-slo.md#timeslice-metric): Create an SLI based on a custom equation that uses multiple aggregations.
-* [Custom metric](../../../solutions/observability/incident-management/create-an-slo.md#custom-metric): Create an SLI to define custom equations from metric fields in your indices.
-* [Histogram metric](../../../solutions/observability/incident-management/create-an-slo.md#histogram-metric): Create an SLI based on histogram metrics.
-* [APM latency and APM availability](../../../solutions/observability/incident-management/create-an-slo.md#apm-latency-and-availability): Create an SLI based on services using application performance monitoring (APM).
+* [Custom KQL](/solutions/observability/incident-management/create-an-slo.md#custom-kql): Create an SLI based on raw logs coming from your services.
+* [Timeslice metric](/solutions/observability/incident-management/create-an-slo.md#timeslice-metric): Create an SLI based on a custom equation that uses multiple aggregations.
+* [Custom metric](/solutions/observability/incident-management/create-an-slo.md#custom-metric): Create an SLI to define custom equations from metric fields in your indices.
+* [Histogram metric](/solutions/observability/incident-management/create-an-slo.md#histogram-metric): Create an SLI based on histogram metrics.
+* [APM latency and APM availability](/solutions/observability/incident-management/create-an-slo.md#apm-latency-and-availability): Create an SLI based on services using application performance monitoring (APM).
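You can also create any of these SLI types programmatically. The following is a minimal sketch of creating a Custom KQL SLO through the SLO API, assuming API key authentication; the Kibana URL, index, and query values are placeholders, so check the SLO API reference for the exact schema in your version.

```sh
# A sketch only: create a Custom KQL SLO through the SLO API.
curl -X POST "<kibana-url>/api/observability/slos" \
  -H "Authorization: ApiKey <api-key>" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Payment service availability",
    "description": "Successful requests vs all requests",
    "indicator": {
      "type": "sli.kql.custom",
      "params": {
        "index": "logs-*",
        "good": "http.response.status_code < 500",
        "total": "http.response.status_code : *",
        "timestampField": "@timestamp"
      }
    },
    "timeWindow": { "duration": "30d", "type": "rolling" },
    "budgetingMethod": "occurrences",
    "objective": { "target": 0.99 }
  }'
```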
### Custom KQL [custom-kql]
@@ -200,9 +200,9 @@ Synthetics availability SLIs are automatically grouped by monitor and location.
After defining your SLI, you need to set your objectives. To set your objectives, complete the following:
-1. [Select your budgeting method](../../../solutions/observability/incident-management/create-an-slo.md#slo-budgeting-method)
-2. [Set your time window](../../../solutions/observability/incident-management/create-an-slo.md#slo-time-window)
-3. [Set your target/SLO percentage](../../../solutions/observability/incident-management/create-an-slo.md#slo-target)
+1. [Select your budgeting method](/solutions/observability/incident-management/create-an-slo.md#slo-budgeting-method)
+2. [Set your time window](/solutions/observability/incident-management/create-an-slo.md#slo-time-window)
+3. [Set your target/SLO percentage](/solutions/observability/incident-management/create-an-slo.md#slo-target)
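Before stepping through these settings, it can help to see how the target, time window, and budgeting method combine into an error budget. A rough sketch of the arithmetic, using illustrative targets:

```
error budget = 1 - target

target 99%   (occurrences method, 30-day window)
  -> up to 1% of all events may be bad

target 99.9% (timeslices method, 30-day window, 1-minute slices)
  -> up to 0.1% x 43,200 slices ≈ 43 bad minutes
```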
### Set your time window and duration [slo-time-window]
@@ -239,11 +239,11 @@ After setting your objectives, give your SLO a name, a short description, and ad
When you use the UI to create an SLO, a default SLO burn rate alert rule is created automatically. The burn rate rule will use the default configuration and no connector. You must configure a connector if you want to receive alerts for SLO breaches.
-For more information about configuring the rule, see [Create an SLO burn rate rule](../../../solutions/observability/incident-management/create-an-slo-burn-rate-rule.md).
+For more information about configuring the rule, see [Create an SLO burn rate rule](/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md).
## Add an SLO Overview panel to a custom dashboard [slo-dashboard]
-After you’ve created your SLO, you can monitor it from the *SLOs* page in Observability, but you can also add an *SLO Overview* panel to a custom dashboard. Read more about dashboards in [Dashboard and visualizations](../../../explore-analyze/dashboards.md).
+After you’ve created your SLO, you can monitor it from the *SLOs* page in Observability, but you can also add an *SLO Overview* panel to a custom dashboard. Read more about dashboards in [Dashboard and visualizations](/explore-analyze/dashboards.md).
:::{image} /solutions/images/observability-slo-overview-embeddable-widget.png
:alt: Using the Add panel button to add an SLO Overview widget to a dashboard
diff --git a/solutions/observability/incident-management/create-custom-threshold-rule.md b/solutions/observability/incident-management/create-custom-threshold-rule.md
index ee81db81ab..e1e28d3ac2 100644
--- a/solutions/observability/incident-management/create-custom-threshold-rule.md
+++ b/solutions/observability/incident-management/create-custom-threshold-rule.md
@@ -11,7 +11,7 @@ navigation_title: "Custom threshold"
::::{note}
-**For Observability serverless projects**, the **Editor** role or higher is required to create a custom threshold rule. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Editor** role or higher is required to create a custom threshold rule. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -32,7 +32,7 @@ Create a custom threshold rule to trigger an alert when an {{obs-serverless}} da
Specify the following settings to define the data the rule applies to:
-* **Select a data view:** Click the data view field to search for and select a data view that points to the indices or data streams that you’re creating a rule for. You can also create a *new* data view by clicking **Create a data view**. Refer to [Create a data view](../../../explore-analyze/find-and-organize/data-views.md) for more on creating data views.
+* **Select a data view:** Click the data view field to search for and select a data view that points to the indices or data streams that you’re creating a rule for. You can also create a *new* data view by clicking **Create a data view**. Refer to [Create a data view](/explore-analyze/find-and-organize/data-views.md) for more on creating data views.
* **Define query filter (optional):** Use a query filter to narrow down the data that the rule applies to. For example, set a query filter to a specific host name using the query filter `host.name:host-1` to only apply the rule to that host.
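Query filters accept full KQL, so you can combine several conditions in one filter. An illustrative example, with placeholder field values:

```
host.name : "host-1" and service.name : "frontend"
```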
@@ -43,7 +43,7 @@ Set the conditions for the rule to detect using aggregations, an equation, and a
### Set aggregations [custom-threshold-aggregation]
-Aggregations summarize your data to make it easier to analyze. Set any of the following aggregation types to gather data to create your rule: `Average`, `Max`, `Min`, `Cardinality`, `Count`, `Sum,` `Percentile`, or `Rate`. For more information about these options, refer to [Aggregation options](../../../solutions/observability/incident-management/aggregation-options.md).
+Aggregations summarize your data to make it easier to analyze. Set any of the following aggregation types to gather data to create your rule: `Average`, `Max`, `Min`, `Cardinality`, `Count`, `Sum`, `Percentile`, or `Rate`. For more information about these options, refer to [Aggregation options](/solutions/observability/incident-management/aggregation-options.md).
For example, to gather the total number of log documents with a log level of `warn`:
@@ -171,7 +171,7 @@ Some connector types are paid commercial features, while others are free. For a
::::
-For more information on creating connectors, refer to [Connectors](../../../deploy-manage/manage-connectors.md).
+For more information on creating connectors, refer to [Connectors](/deploy-manage/manage-connectors.md).
:::::
@@ -205,7 +205,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.alertDetailsUrl`
: Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
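Because `context.alertDetailsUrl` stays empty until `server.publicBaseUrl` is set, you may want to configure that setting in `kibana.yml` before relying on this variable. A minimal sketch, with a placeholder URL:

```yaml
# kibana.yml — needed for working alert links in notifications; URL is a placeholder
server.publicBaseUrl: "https://kibana.example.com:5601"
```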
diff --git a/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md b/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md
index 87d833cd2b..aa4ec6ec5e 100644
--- a/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md
+++ b/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md
@@ -11,7 +11,7 @@ navigation_title: "Failed transaction rate threshold"
::::{note}
-**For Observability serverless projects**, the **Editor** role or higher is required to create failed transaction rate threshold rules. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Editor** role or higher is required to create failed transaction rate threshold rules. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -79,7 +79,7 @@ Some connector types are paid commercial features, while others are free. For a
::::
-For more information on creating connectors, refer to [Connectors](../../../deploy-manage/manage-connectors.md).
+For more information on creating connectors, refer to [Connectors](/deploy-manage/manage-connectors.md).
:::::
@@ -112,7 +112,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.alertDetailsUrl`
: Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
diff --git a/solutions/observability/incident-management/create-latency-threshold-rule.md b/solutions/observability/incident-management/create-latency-threshold-rule.md
index aff3b870cc..bddd0694b5 100644
--- a/solutions/observability/incident-management/create-latency-threshold-rule.md
+++ b/solutions/observability/incident-management/create-latency-threshold-rule.md
@@ -11,7 +11,7 @@ navigation_title: "Latency threshold"
::::{note}
-**For Observability serverless projects**, the **Editor** role or higher is required to create latency threshold rules. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Editor** role or higher is required to create latency threshold rules. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -83,7 +83,7 @@ Some connector types are paid commercial features, while others are free. For a
::::
-For more information on creating connectors, refer to [Connectors](../../../deploy-manage/manage-connectors.md).
+For more information on creating connectors, refer to [Connectors](/deploy-manage/manage-connectors.md).
:::::
@@ -116,7 +116,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.alertDetailsUrl`
: Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
diff --git a/solutions/observability/incident-management/create-log-threshold-rule.md b/solutions/observability/incident-management/create-log-threshold-rule.md
index 8592669269..e1c0996b4e 100644
--- a/solutions/observability/incident-management/create-log-threshold-rule.md
+++ b/solutions/observability/incident-management/create-log-threshold-rule.md
@@ -152,7 +152,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.alertDetailsUrl`
: Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
diff --git a/solutions/observability/incident-management/create-manage-cases.md b/solutions/observability/incident-management/create-manage-cases.md
index 2a9112bfd6..d98649f02a 100644
--- a/solutions/observability/incident-management/create-manage-cases.md
+++ b/solutions/observability/incident-management/create-manage-cases.md
@@ -8,7 +8,7 @@ mapped_pages:
::::{note}
-**For Observability serverless projects**, the **Editor** role or higher is required to create and manage cases. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Editor** role or higher is required to create and manage cases. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -17,7 +17,7 @@ Open a new case to keep track of issues and share the details with colleagues. T
1. Find **Cases** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Click **Create case**.
-3. (Optional) If you defined [templates](../../../solutions/observability/incident-management/configure-case-settings.md#observability-case-templates), select one to use its default field values. [preview]
+3. (Optional) If you defined [templates](/solutions/observability/incident-management/configure-case-settings.md#observability-case-templates), select one to use its default field values. [preview]
4. Give the case a name, severity, and description.
::::{tip}
@@ -29,10 +29,10 @@ Open a new case to keep track of issues and share the details with colleagues. T
**For Observability serverless projects**, you can add users who are assigned the Editor user role (or a more permissive role) for the project.
- **For Elastic Stack**, You can add users only if they meet the necessary [prerequisites](../../../solutions/observability/incident-management/configure-access-to-cases.md).
+ **For Elastic Stack**, you can add users only if they meet the necessary [prerequisites](/solutions/observability/incident-management/configure-access-to-cases.md).
-6. If you defined [custom fields](../../../solutions/observability/incident-management/configure-case-settings.md#case-custom-fields), they appear in the **Additional fields** section.
-7. (Optional) Under External incident management system, you can select a connector to send cases to an external system. If you’ve created any connectors previously, they will be listed here. If there are no connectors listed, you can [create one](../../../solutions/observability/incident-management/configure-case-settings.md).
+6. If you defined [custom fields](/solutions/observability/incident-management/configure-case-settings.md#case-custom-fields), they appear in the **Additional fields** section.
+7. (Optional) Under External incident management system, you can select a connector to send cases to an external system. If you’ve created any connectors previously, they will be listed here. If there are no connectors listed, you can [create one](/solutions/observability/incident-management/configure-case-settings.md).
8. After you’ve completed all of the required fields, click **Create case**.
::::{tip}
@@ -75,7 +75,7 @@ There is a 10 MiB size limit for images. For all other MIME types, the limit is
To send a case to an external system, click the  button in the *External incident management system* section of the individual case page. This information is not sent automatically. If you make further changes to the shared case fields, you should push the case again.
-For more information about configuring connections to external incident management systems, refer to [Configure case settings](../../../solutions/observability/incident-management/configure-case-settings.md).
+For more information about configuring connections to external incident management systems, refer to [Configure case settings](/solutions/observability/incident-management/configure-case-settings.md).
## Manage existing cases [observability-create-a-new-case-manage-existing-cases]
diff --git a/solutions/observability/incident-management/create-manage-rules.md b/solutions/observability/incident-management/create-manage-rules.md
index 5309b34f0d..a1aada86bb 100644
--- a/solutions/observability/incident-management/create-manage-rules.md
+++ b/solutions/observability/incident-management/create-manage-rules.md
@@ -8,7 +8,7 @@ mapped_pages:
::::{note}
-**For Observability serverless projects**, the **Editor** role or higher is required to create and manage rules for alerting. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Editor** role or higher is required to create and manage rules for alerting. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -24,17 +24,17 @@ Learn more about Observability rules and how to create them:
| Rule type | Name | Detects when… |
| --- | --- | --- |
-| AIOps | [Anomaly detection](../../../solutions/observability/incident-management/create-an-apm-anomaly-rule.md) | Anomalies match specific conditions. |
-| APM | [APM anomaly](../../../solutions/observability/incident-management/create-an-apm-anomaly-rule.md) | The latency, throughput, or failed transaction rate of a service is abnormal. |
-| Observability | [Custom threshold](../../../solutions/observability/incident-management/create-an-apm-anomaly-rule.md) | An Observability data type reaches or exceeds a given value. |
-| Stack | [{{es}} query](../../../solutions/observability/incident-management/create-an-elasticsearch-query-rule.md) | Matches are found during the latest query run. |
-| APM | [Error count threshold](../../../solutions/observability/incident-management/create-an-error-count-threshold-rule.md) | The number of errors in a service exceeds a defined threshold. |
-| APM | [Failed transaction rate threshold](../../../solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md) | The rate of transaction errors in a service exceeds a defined threshold. |
-| Metrics | [Inventory](../../../solutions/observability/incident-management/create-an-inventory-rule.md) | The infrastructure inventory exceeds a defined threshold. |
-| Logs | [Log threshold](../../../solutions/observability/incident-management/create-log-threshold-rule.md) | An Observability data type reaches or exceeds a given value. |
-| Metrics | [Metric threshold](../../../solutions/observability/incident-management/create-metric-threshold-rule.md)| An Observability data type reaches or exceeds a given value. |
-| APM | [Latency threshold](../../../solutions/observability/incident-management/create-latency-threshold-rule.md) | The latency of a specific transaction type in a service exceeds a defined threshold. |
-| SLO | [SLO burn rate rule](../../../solutions/observability/incident-management/create-an-slo-burn-rate-rule.md) | The burn rate is above a defined threshold. |
+| AIOps | [Anomaly detection](/solutions/observability/incident-management/create-an-apm-anomaly-rule.md) | Anomalies match specific conditions. |
+| APM | [APM anomaly](/solutions/observability/incident-management/create-an-apm-anomaly-rule.md) | The latency, throughput, or failed transaction rate of a service is abnormal. |
+| Observability | [Custom threshold](/solutions/observability/incident-management/create-an-apm-anomaly-rule.md) | An Observability data type reaches or exceeds a given value. |
+| Stack | [{{es}} query](/solutions/observability/incident-management/create-an-elasticsearch-query-rule.md) | Matches are found during the latest query run. |
+| APM | [Error count threshold](/solutions/observability/incident-management/create-an-error-count-threshold-rule.md) | The number of errors in a service exceeds a defined threshold. |
+| APM | [Failed transaction rate threshold](/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md) | The rate of transaction errors in a service exceeds a defined threshold. |
+| Metrics | [Inventory](/solutions/observability/incident-management/create-an-inventory-rule.md) | The infrastructure inventory exceeds a defined threshold. |
+| Logs | [Log threshold](/solutions/observability/incident-management/create-log-threshold-rule.md) | An Observability data type reaches or exceeds a given value. |
+| Metrics | [Metric threshold](/solutions/observability/incident-management/create-metric-threshold-rule.md)| An Observability data type reaches or exceeds a given value. |
+| APM | [Latency threshold](/solutions/observability/incident-management/create-latency-threshold-rule.md) | The latency of a specific transaction type in a service exceeds a defined threshold. |
+| SLO | [SLO burn rate rule](/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md) | The burn rate is above a defined threshold. |
## Creating rules and alerts [observability-create-manage-rules-creating-rules-and-alerts]
@@ -42,7 +42,7 @@ Learn more about Observability rules and how to create them:
You start by defining the rule and how often it should be evaluated. You can extend these rules by adding an appropriate action (for example, send an email or create an issue) to be triggered when the rule conditions are met. These actions are defined within each rule and implemented by the appropriate connector for that action e.g. Slack, Jira. You can create any rules from scratch using the **Manage Rules** page, or you can create specific rule types from their respective UIs and benefit from some of the details being pre-filled (for example, Name and Tags).
* For APM alert types, you can select **Alerts and rules** and create rules directly from the **Services**, **Traces**, and **Dependencies** UIs.
-* For SLO alert types, from the **SLOs** page open the **More actions** menu  for an SLO and select **Create new alert rule**. Alternatively, when you create a new SLO, the **Create new SLO burn rate alert rule** checkbox is enabled by default and will prompt you to [Create SLO burn rate rule](../../../solutions/observability/incident-management/create-an-slo-burn-rate-rule.md) upon saving the SLO.
+* For SLO alert types, from the **SLOs** page open the **More actions** menu for an SLO and select **Create new alert rule**. Alternatively, when you create a new SLO, the **Create new SLO burn rate alert rule** checkbox is enabled by default and will prompt you to [Create SLO burn rate rule](/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md) upon saving the SLO.
After a rule is created, you can open the **More actions** menu  and select **Edit rule** to check or change the definition, and/or add or modify actions.
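Rules can also be created programmatically through the alerting API. The following is a minimal sketch for an {{es}} query rule, assuming API key authentication; the consumer and `params` values are illustrative, so check the Alerting API reference for the exact schema in your version.

```sh
# A sketch only: create an Elasticsearch query rule through the alerting API.
curl -X POST "<kibana-url>/api/alerting/rule" \
  -H "Authorization: ApiKey <api-key>" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Too many error logs",
    "rule_type_id": ".es-query",
    "consumer": "stackAlerts",
    "schedule": { "interval": "1m" },
    "params": {
      "searchType": "esQuery",
      "index": ["logs-*"],
      "timeField": "@timestamp",
      "esQuery": "{\"query\":{\"match\":{\"log.level\":\"error\"}}}",
      "threshold": [100],
      "thresholdComparator": ">",
      "size": 100,
      "timeWindowSize": 5,
      "timeWindowUnit": "m"
    }
  }'
```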
@@ -89,11 +89,11 @@ When you snooze a rule, the rule checks continue to run on a schedule but the al
When a rule is in a snoozed state, you can cancel or change the duration of this state.
-To temporarily suppress notifications for *all* rules, create a [maintenance window](../../../explore-analyze/alerts-cases/alerts/maintenance-windows.md).
+To temporarily suppress notifications for *all* rules, create a [maintenance window](/explore-analyze/alerts-cases/alerts/maintenance-windows.md).
## Import and export rules [observability-create-manage-rules-import-and-export-rules]
-To import and export rules, use [{{saved-objects-app}}](../../../explore-analyze/find-and-organize.md).
+To import and export rules, use [{{saved-objects-app}}](/explore-analyze/find-and-organize.md).
Rules are disabled on export. You are prompted to re-enable the rule on successful import.
\ No newline at end of file
diff --git a/solutions/observability/incident-management/create-metric-threshold-rule.md b/solutions/observability/incident-management/create-metric-threshold-rule.md
index ab08293215..04aa682adb 100644
--- a/solutions/observability/incident-management/create-metric-threshold-rule.md
+++ b/solutions/observability/incident-management/create-metric-threshold-rule.md
@@ -121,7 +121,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.alertDetailsUrl`
: Link to the alert troubleshooting view for further context and details. This will be an empty string if the `server.publicBaseUrl` is not configured.
diff --git a/solutions/observability/incident-management/create-monitor-status-rule.md b/solutions/observability/incident-management/create-monitor-status-rule.md
index 4f7b8b68f3..8efd0d3071 100644
--- a/solutions/observability/incident-management/create-monitor-status-rule.md
+++ b/solutions/observability/incident-management/create-monitor-status-rule.md
@@ -27,7 +27,7 @@ The **Filter by** section controls the scope of the rule. The rule will only che
## Conditions [observability-monitor-status-alert-conditions]
-Conditions for each rule will be applied to all monitors that match the filters in the [**Filter by** section](../../../solutions/observability/incident-management/create-monitor-status-rule.md#observability-monitor-status-alert-filters). You can choose the number of times the monitor has to be down relative to either a number of checks run or a time range in which checks were run, and the minimum number of locations the monitor must be down in.
+Conditions for each rule will be applied to all monitors that match the filters in the [**Filter by** section](/solutions/observability/incident-management/create-monitor-status-rule.md#observability-monitor-status-alert-filters). You can choose how many times the monitor must be down, relative to either a number of checks run or a time range in which checks were run, and the minimum number of locations in which the monitor must be down.
::::{note}
Retests are included in the number of checks.
@@ -111,7 +111,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.checkedAt`
: Timestamp of the monitor run.
@@ -174,7 +174,7 @@ serverless: unavailable
If you are using the Uptime monitor status rule with the Uptime app, you should migrate the Uptime monitor and the Uptime monitor status rule to Elastic Synthetics and the Synthetics monitor rule.
-If you are using the Uptime monitor status rule with a monitor created with Elastic Synthetics, you should migrate the Uptime monitor status rule to the Synthetics monitor rule. Learn how in [Migrate from the Uptime rule to the Synthetics rule](../../../solutions/observability/incident-management/create-monitor-status-rule.md#migrate-monitor-rule).
+If you are using the Uptime monitor status rule with a monitor created with Elastic Synthetics, you should migrate the Uptime monitor status rule to the Synthetics monitor rule. Learn how in [Migrate from the Uptime rule to the Synthetics rule](/solutions/observability/incident-management/create-monitor-status-rule.md#migrate-monitor-rule).
::::
@@ -263,8 +263,8 @@ To receive a notification when the alert recovers, select **Run when Recovered**
If you are currently using the Uptime monitor status with a monitor created with Elastic Synthetics, you should migrate the Uptime monitor status rule to:
-* If you were using the Uptime rule for **synthetic monitor *status* checks**, you can recreate similar functionality using the [Synthetics monitor rule](../../../solutions/observability/incident-management/create-monitor-status-rule.md#migrate-monitor-rule-synthetics-rule).
-* If you were using the Uptime rule for **synthetic monitor *availability* checks**, there is no equivalent in the Synthetics monitor rule. Instead, you can use the [Synthetics availability SLI](../../../solutions/observability/incident-management/create-monitor-status-rule.md#migrate-monitor-rule-synthetics-sli) to create similar functionality.
+* If you were using the Uptime rule for **synthetic monitor *status* checks**, you can recreate similar functionality using the [Synthetics monitor rule](/solutions/observability/incident-management/create-monitor-status-rule.md#migrate-monitor-rule-synthetics-rule).
+* If you were using the Uptime rule for **synthetic monitor *availability* checks**, there is no equivalent in the Synthetics monitor rule. Instead, you can use the [Synthetics availability SLI](/solutions/observability/incident-management/create-monitor-status-rule.md#migrate-monitor-rule-synthetics-sli) to create similar functionality.
### Uptime status check to Synthetics monitor rule [migrate-monitor-rule-synthetics-rule]
@@ -278,7 +278,7 @@ The KQL syntax that you used in the Uptime monitor status rule is also valid in
#### Conditions [monitor-status-alert-checks-conditions]
::::{note}
-If you are using the *Uptime availability condition* refer to [Uptime availability check to Synthetics availability SLI](../../../solutions/observability/incident-management/create-monitor-status-rule.md#migrate-monitor-rule-synthetics-sli).
+If you are using the *Uptime availability condition* refer to [Uptime availability check to Synthetics availability SLI](/solutions/observability/incident-management/create-monitor-status-rule.md#migrate-monitor-rule-synthetics-sli).
::::
@@ -293,12 +293,12 @@ If you’re using the Uptime status check condition, you can recreate similar ef
#### Actions [monitor-status-alert-checks-actions]
-The default messages for the Uptime monitor status rule and Synthetics monitor status rule are different, but you can recreate similar messages using [Synthetics monitor status rule action variables](../../../solutions/observability/incident-management/create-monitor-status-rule.md#observability-monitor-status-alert-action-variables).
+The default messages for the Uptime monitor status rule and Synthetics monitor status rule are different, but you can recreate similar messages using [Synthetics monitor status rule action variables](/solutions/observability/incident-management/create-monitor-status-rule.md#observability-monitor-status-alert-action-variables).
### Uptime availability check to Synthetics availability SLI [migrate-monitor-rule-synthetics-sli]
-SLOs allow you to set clear, measurable targets for your service performance, based on factors like availability. The [Synthetics availability SLI](../../../solutions/observability/incident-management/create-an-slo.md#synthetics-availability-sli) is a service-level indicator (SLI) based on the availability of your synthetic monitors.
+SLOs allow you to set clear, measurable targets for your service performance, based on factors like availability. The [Synthetics availability SLI](/solutions/observability/incident-management/create-an-slo.md#synthetics-availability-sli) is a service-level indicator (SLI) based on the availability of your synthetic monitors.
#### Filters [monitor-status-alert-checks-filters-uptime]
@@ -318,4 +318,4 @@ Use the following Synthetics availability SLI fields to replace the Uptime monit
#### Actions [monitor-status-alert-checks-actions-uptime]
-After creating a new SLO using the Synthetics availability SLI, you can use the SLO burn rate rule. For more information about configuring the rule, see [Create an SLO burn rate rule](../../../solutions/observability/incident-management/create-an-slo-burn-rate-rule.md).
\ No newline at end of file
+After creating a new SLO using the Synthetics availability SLI, you can use the SLO burn rate rule. For more information about configuring the rule, see [Create an SLO burn rate rule](/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md).
\ No newline at end of file
diff --git a/solutions/observability/incident-management/create-tls-certificate-rule.md b/solutions/observability/incident-management/create-tls-certificate-rule.md
index 592c2a7182..2c2b865ead 100644
--- a/solutions/observability/incident-management/create-tls-certificate-rule.md
+++ b/solutions/observability/incident-management/create-tls-certificate-rule.md
@@ -104,7 +104,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.checkedAt`
: Timestamp of the monitor run.
@@ -256,7 +256,7 @@ Use the default notification message or customize it. You can add more context t
:screenshot:
:::
-The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to this rule type. You can also specify [variables common to all rules](/explore-analyze/alerts-cases/alerts/rule-action-variables.md).
`context.agingCommonNameAndDate`
: The common names and expiration date/time of the detected certs.
diff --git a/solutions/observability/incident-management/service-level-objectives-slos.md b/solutions/observability/incident-management/service-level-objectives-slos.md
index 49dc6eac4e..5a224d7451 100644
--- a/solutions/observability/incident-management/service-level-objectives-slos.md
+++ b/solutions/observability/incident-management/service-level-objectives-slos.md
@@ -8,7 +8,7 @@ mapped_pages:
% Stateful only for the following admon.
::::{important}
-**For Elastic Stack v9,** to create and manage SLOs, you need an [appropriate license](https://www.elastic.co/subscriptions), an {{es}} cluster with both `transform` and `ingest` [node roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles) present, and [SLO access](../../../solutions/observability/incident-management/configure-service-level-objective-slo-access.md) must be configured.
+**For Elastic Stack v9,** to create and manage SLOs, you need an [appropriate license](https://www.elastic.co/subscriptions), an {{es}} cluster with both `transform` and `ingest` [node roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles) present, and you must configure [SLO access](/solutions/observability/incident-management/configure-service-level-objective-slo-access.md).
::::
@@ -28,7 +28,7 @@ The following table lists some important concepts related to SLOs:
| **Error budget** | The amount of time that your SLI can fail to meet the SLO target before it violates your SLO. |
| **Burn rate** | The rate at which your service consumes your error budget. |
-In addition to these key concepts related to SLO functionality, see [Understanding SLO internals](../../../troubleshoot/observability/troubleshoot-service-level-objectives-slos.md#slo-understanding-slos) for more information on how SLOs work and their relationship with other system components, such as [{{es}} Transforms](../../../explore-analyze/transforms.md).
+In addition to these key concepts related to SLO functionality, see [Understanding SLO internals](/troubleshoot/observability/troubleshoot-service-level-objectives-slos.md#slo-understanding-slos) for more information on how SLOs work and their relationship with other system components, such as [{{es}} Transforms](/explore-analyze/transforms.md).
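As a rough sketch of how the error budget and burn rate relate, using an illustrative target:

```
error budget = 1 - target
burn rate    = observed error rate / error budget

target 99% -> budget = 1%
if 5% of events in the lookback window are bad:
burn rate = 0.05 / 0.01 = 5   (budget is consumed 5x faster than allowed)
```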
## SLO overview [slo-in-elastic]
@@ -45,7 +45,7 @@ Select an SLO from the overview to see additional details including:
* **Burn rate:** the percentage of bad events over different time periods (1h, 6h, 24h, 72h) and the risk of exhausting your error budget within those time periods.
* **Historical SLI:** the SLI value and how it’s trending over the SLO time window.
* **Error budget burn down:** the remaining error budget and how it’s trending over the SLO time window.
-* **Alerts:** active alerts if you’ve set any [SLO burn rate alert rules](../../../solutions/observability/incident-management/create-an-slo-burn-rate-rule.md) for the SLO.
+* **Alerts:** active alerts if you’ve set any [SLO burn rate alert rules](/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md) for the SLO.
:::{image} /solutions/images/serverless-slo-detailed-view.png
:alt: Detailed view of a single SLO
@@ -75,7 +75,7 @@ There are also options to sort and group the SLOs displayed in the overview:
* **Sort by**: SLI value, SLO status, Error budget consumed, or Error budget remaining.
* **Group by**: None, Tags, Status, or SLI type.
-* Click icons to switch between a card view (), list view (), or compact view (]).
+* Click the icons to switch between a card view, list view, or compact view.
## SLO dashboard panels [observability-slos-slo-dashboard-panels]
@@ -92,7 +92,7 @@ Available SLO panels include:
:screenshot:
:::
-To learn more about Dashboards, see [Dashboards](../../../solutions/observability/get-started/get-started-with-dashboards.md).
+To learn more about Dashboards, see [Dashboards](/solutions/observability/get-started/get-started-with-dashboards.md).
% Stateful only for upgrade.
@@ -100,7 +100,7 @@ To learn more about Dashboards, see [Dashboards](../../../solutions/observabilit
Starting in version 8.12.0, SLOs are generally available (GA). If you’re upgrading from a beta version of SLOs (available in 8.11.0 and earlier), you must migrate your SLO definitions to a new format.
-Refer to [Upgrade from beta to GA](../../../troubleshoot/observability/troubleshoot-service-level-objectives-slos.
+Refer to [Upgrade from beta to GA](/troubleshoot/observability/troubleshoot-service-level-objectives-slos.md).
## Next steps [slo-overview-next-steps]
@@ -109,8 +109,8 @@ Refer to [Upgrade from beta to GA](../../../troubleshoot/observability/troublesh
Get started using SLOs to measure your service performance:
-* [Configure SLO access](../../../solutions/observability/incident-management/configure-service-level-objective-slo-access.md)
-* [Create an SLO](../../../solutions/observability/incident-management/create-an-slo.md)
-* [SLO burn rate](../../../solutions/observability/incident-management/create-an-slo-burn-rate-rule.md)
-* [View alerts](../../../solutions/observability/incident-management/view-alerts.md)
-* [SLO burn rate breaches](../../../solutions/observability/incident-management/triage-slo-burn-rate-breaches.md)
\ No newline at end of file
+* [Configure SLO access](/solutions/observability/incident-management/configure-service-level-objective-slo-access.md)
+* [Create an SLO](/solutions/observability/incident-management/create-an-slo.md)
+* [SLO burn rate](/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md)
+* [View alerts](/solutions/observability/incident-management/view-alerts.md)
+* [SLO burn rate breaches](/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md)
\ No newline at end of file
diff --git a/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md b/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md
index d3307cb176..48ea691b00 100644
--- a/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md
+++ b/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md
@@ -9,7 +9,7 @@ navigation_title: "SLO burn rate breaches"
# Triage SLO burn rate breaches [triage-slo-burn-rate-breaches]
-SLO burn rate breaches occur when the percentage of bad events over a specified time period exceeds the threshold set in your [SLO burn rate rule](../../../solutions/observability/incident-management/create-an-slo-burn-rate-rule.md). When this happens, you are at risk of exhausting your error budget and violating your SLO.
+SLO burn rate breaches occur when the percentage of bad events over a specified time period exceeds the threshold set in your [SLO burn rate rule](/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md). When this happens, you are at risk of exhausting your error budget and violating your SLO.
To triage issues quickly, go to the alert details page:
@@ -50,5 +50,5 @@ The contents of the alert details page may vary depending on the type of SLI tha
After investigating the alert, you may want to:
* Click **Snooze the rule** to snooze notifications for a specific time period or indefinitely.
-* Click the  icon and select **Add to case** to add the alert to a new or existing case. To learn more, refer to [Cases](../../../solutions/observability/incident-management/cases.md).
+* Click the icon and select **Add to case** to add the alert to a new or existing case. To learn more, refer to [Cases](/solutions/observability/incident-management/cases.md).
* Click the  icon and select **Mark as untracked**. When an alert is marked as untracked, actions are no longer generated. You can choose to move active alerts to this state when you disable or delete rules.
\ No newline at end of file
diff --git a/solutions/observability/incident-management/triage-threshold-breaches.md b/solutions/observability/incident-management/triage-threshold-breaches.md
index 7988832f21..9c83a919e7 100644
--- a/solutions/observability/incident-management/triage-threshold-breaches.md
+++ b/solutions/observability/incident-management/triage-threshold-breaches.md
@@ -9,7 +9,7 @@ navigation_title: "Threshold breaches"
# Triage threshold breaches [triage-threshold-breaches]
-Threshold breaches occur when an {{observability}} data type reaches or exceeds the threshold set in your [custom threshold rule](../../../solutions/observability/incident-management/create-custom-threshold-rule.md). For example, you might have a custom threshold rule that triggers an alert when the total number of log documents with a log level of `error` reaches 100.
+Threshold breaches occur when an {{observability}} data type reaches or exceeds the threshold set in your [custom threshold rule](/solutions/observability/incident-management/create-custom-threshold-rule.md). For example, you might have a custom threshold rule that triggers an alert when the total number of log documents with a log level of `error` reaches 100.
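As a sketch, that example rule could be configured with settings along these lines (the data view and values are illustrative):

```
data view:     logs-*
query filter:  log.level : "error"
aggregation:   count (document count)
threshold:     is above or equals 100
```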
To triage issues quickly, go to the alert details page:
@@ -32,7 +32,7 @@ Explore charts on the page to learn more about the threshold breach:
::::
-* **Log rate analysis chart**. If your rule is intended to detect log threshold breaches (that is, it has a single condition that uses a count aggregation), you can run a log rate analysis, assuming you have the required license. Running a log rate analysis is useful for detecting significant dips or spikes in the number of logs. Notice that you can adjust the baseline and deviation, and then run the analysis again. For more information about using the log rate analysis feature, refer to the [AIOps Labs](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis) documentation.
+* **Log rate analysis chart**. If your rule is intended to detect log threshold breaches (that is, it has a single condition that uses a count aggregation), you can run a log rate analysis, assuming you have the required license. Running a log rate analysis is useful for detecting significant dips or spikes in the number of logs. Notice that you can adjust the baseline and deviation, and then run the analysis again. For more information about using the log rate analysis feature, refer to the [AIOps Labs](/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis) documentation.
:::{image} /solutions/images/observability-log-threshold-breach-log-rate-analysis.png
:alt: Log rate analysis chart in alert details for log threshold breach
@@ -52,5 +52,5 @@ Analyze these charts to better understand when the breach started, it’s curren
After investigating the alert, you may want to:
* Click **Snooze the rule** to snooze notifications for a specific time period or indefinitely.
-* Click the  icon and select **Add to case** to add the alert to a new or existing case. To learn more, refer to [Cases](../../../solutions/observability/incident-management/cases.md).
+* Click the icon and select **Add to case** to add the alert to a new or existing case. To learn more, refer to [Cases](/solutions/observability/incident-management/cases.md).
* Click the  icon and select **Mark as untracked**. When an alert is marked as untracked, actions are no longer generated. You can choose to move active alerts to this state when you disable or delete rules.
\ No newline at end of file
diff --git a/solutions/observability/incident-management/view-alerts.md b/solutions/observability/incident-management/view-alerts.md
index 2de8bc309b..0c80bc6357 100644
--- a/solutions/observability/incident-management/view-alerts.md
+++ b/solutions/observability/incident-management/view-alerts.md
@@ -8,7 +8,7 @@ mapped_pages:
::::{note}
-**For Observability serverless projects**, the **Editor** role or higher is required to perform this task. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Editor** role or higher is required to perform this task. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -18,7 +18,7 @@ You can track and manage alerts for your applications and SLOs from the **Alerts
% Stateful only for the following note
::::{note}
-You can centrally manage rules from the [{{kib}} Management UI](../../../explore-analyze/alerts-cases/alerts/create-manage-rules.md) that provides a set of built-in [rule types](../../../explore-analyze/alerts-cases/alerts/rule-types.md) and [connectors](../../../deploy-manage/manage-connectors.md) for you to use. Click **Manage Rules**.
+You can centrally manage rules from the [{{kib}} Management UI](/explore-analyze/alerts-cases/alerts/create-manage-rules.md), which provides a set of built-in [rule types](/explore-analyze/alerts-cases/alerts/rule-types.md) and [connectors](/deploy-manage/manage-connectors.md) for you to use. Click **Manage Rules**.
::::
:::{image} /solutions/images/serverless-observability-alerts-view.png
@@ -29,7 +29,7 @@ You can centrally manage rules from the [{{kib}} Management UI](../../../explore
## Filter alerts [observability-view-alerts-filter-alerts]
-To help you get started with your analysis faster, use the KQL bar to create structured queries using [{{kib}} Query Language](../../../explore-analyze/query-filter/languages/kql.md).
+To help you get started with your analysis faster, use the KQL bar to create structured queries using [{{kib}} Query Language](/explore-analyze/query-filter/languages/kql.md).
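For example, a sketch of a query that narrows the table to active alerts for a single rule, using alerts-as-data field names (the rule name is a placeholder):

```
kibana.alert.status : "active" and kibana.alert.rule.name : "My latency rule"
```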
You can use the time filter to define a specific date and time range. By default, this filter is set to search for the last 15 minutes.
@@ -85,10 +85,10 @@ To view the alert in the app that triggered it:
Use the toolbar buttons in the upper-left of the alerts table to customize the columns you want displayed:
* **Columns**: Reorder the columns.
-* **x* fields sorted**: Sort the table by one or more columns.
+* **x fields sorted**: Sort the table by one or more columns.
* **Fields**: Select the fields to display in the table.
-For example, click **Fields** and choose the `Maintenance Windows` field. If an alert was affected by a maintenance window, its identifier appears in the new column. For more information about their impact on alert notifications, refer to [{{maint-windows-cap}}](../../../explore-analyze/alerts-cases/alerts/maintenance-windows.md).
+For example, click **Fields** and choose the `Maintenance Windows` field. If an alert was affected by a maintenance window, its identifier appears in the new column. For more information about how maintenance windows affect alert notifications, refer to [{{maint-windows-cap}}](/explore-analyze/alerts-cases/alerts/maintenance-windows.md).
You can also use the toolbar buttons in the upper-right to customize the display options or view the table in full-screen mode.
@@ -111,7 +111,7 @@ To add an alert to a new case:
1. Select **Add to new case**.
2. Enter a case name, add relevant tags, and include a case description.
3. Under **External incident management system**, select a connector. If you’ve previously added one, that connector displays as the default selection. Otherwise, the default setting is `No connector selected`.
-4. After you’ve completed all of the required fields, click **Create case**. A notification message confirms you successfully created the case. To view the case details, click the notification link or go to the [Cases](../../../solutions/observability/incident-management/cases.md) page.
+4. After you’ve completed all of the required fields, click **Create case**. A notification message confirms you successfully created the case. To view the case details, click the notification link or go to the [Cases](/solutions/observability/incident-management/cases.md) page.
### Add an alert to an existing case [observability-view-alerts-add-an-alert-to-an-existing-case]
diff --git a/solutions/observability/infra-and-hosts.md b/solutions/observability/infra-and-hosts.md
index a487c01e48..1efd7d7dd0 100644
--- a/solutions/observability/infra-and-hosts.md
+++ b/solutions/observability/infra-and-hosts.md
@@ -17,9 +17,9 @@ Explore the topics in this section to learn how to observe and monitor hosts and
| | |
| --- | --- |
-| [Analyze infrastructure and host metrics](../../solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) | Visualize infrastructure metrics to help diagnose problematic spikes, identify high resource utilization, automatically discover and track pods, and unify your metrics with other observability data. |
-| [Universal Profiling](../../solutions/observability/infra-and-hosts/universal-profiling.md) | Profile all the code running on a machine, including application code, kernel, and third-party libraries. |
-| [Tutorial: Observe your Kubernetes deployments](../../solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md) | Observe all layers of your application, including the orchestration software itself. |
-| [Tutorial: Observe your nginx instances](../../solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md) | Collect valuable metrics and logs from your nginx instances. |
-| [Troubleshooting](../../troubleshoot/observability/troubleshooting-infrastructure-monitoring.md) | Troubleshoot common issues on your own or ask for help. |
+| [Analyze infrastructure and host metrics](/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) | Visualize infrastructure metrics to help diagnose problematic spikes, identify high resource utilization, automatically discover and track pods, and unify your metrics with other observability data. |
+| [Universal Profiling](/solutions/observability/infra-and-hosts/universal-profiling.md) | Profile all the code running on a machine, including application code, kernel, and third-party libraries. |
+| [Tutorial: Observe your Kubernetes deployments](/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md) | Observe all layers of your application, including the orchestration software itself. |
+| [Tutorial: Observe your nginx instances](/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md) | Collect valuable metrics and logs from your nginx instances. |
+| [Troubleshooting](/troubleshoot/observability/troubleshooting-infrastructure-monitoring.md) | Troubleshoot common issues on your own or ask for help. |
| [Metrics reference](/reference/data-analysis/observability/index.md) | Learn about the key metrics displayed in the Infrastructure UI and how they are calculated. |
\ No newline at end of file
diff --git a/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md b/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md
index 312f54162b..15276a10e0 100644
--- a/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md
+++ b/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md
@@ -25,7 +25,7 @@ The `symbtool` binary currently requires a Linux machine.
## Use the `symbtool` binary [profiling-use-symbtool]
-Before using the `symbtool` binary, create an [Elasticsearch API token](../../../deploy-manage/api-keys/elasticsearch-api-keys.md#create-api-key). Pass this token using the `-t` or `--api-key` argument.
+Before using the `symbtool` binary, create an [Elasticsearch API token](/deploy-manage/api-keys/elasticsearch-api-keys.md#create-api-key). Pass this token using the `-t` or `--api-key` argument.
You also need to copy the **Symbols** endpoint from the deployment overview page. Pass this URL using the `-u` or `--url` argument.
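Putting those two arguments together, a hypothetical invocation might look like the following. The `push-symbols executable` subcommand and `-e` flag are assumptions for illustration only; run `symbtool --help` for the exact syntax.

```sh
# A sketch only: upload symbols for a native executable.
# Endpoint, API key, and binary path are placeholders.
./symbtool push-symbols executable \
  -u "https://<symbols-endpoint>" \
  -t "<elasticsearch-api-key>" \
  -e ./path/to/my-binary
```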
diff --git a/solutions/observability/infra-and-hosts/analyze-compare-hosts.md b/solutions/observability/infra-and-hosts/analyze-compare-hosts.md
index 01aa099714..838ef232ce 100644
--- a/solutions/observability/infra-and-hosts/analyze-compare-hosts.md
+++ b/solutions/observability/infra-and-hosts/analyze-compare-hosts.md
@@ -35,7 +35,7 @@ To learn more about the metrics shown on this page, refer to the [Metrics refere
If you haven’t added data yet, click **Add data** to search for and install an Elastic integration.
-Need help getting started? Follow the steps in [Get started with system metrics](../../../solutions/observability/infra-and-hosts/get-started-with-system-metrics.md).
+Need help getting started? Follow the steps in [Get started with system metrics](/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md).
::::
@@ -45,12 +45,12 @@ The **Hosts** page provides several ways to view host metrics:
* Overview tiles show the number of hosts returned by your search plus averages of key metrics, including CPU usage, normalized load, and memory usage. Max disk usage is also shown.
* The Host limit controls the maximum number of hosts shown on the page. The default is 50, which means the page shows data for the top 50 hosts based on the most recent timestamps. You can increase the host limit to see data for more hosts, but doing so may impact query performance.
* The Hosts table shows a breakdown of metrics for each host along with an alert count for any hosts with active alerts. You may need to page through the list or change the number of rows displayed on each page to see all of your hosts.
-* Each host name is an active link to a [host details](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details) page, where you can explore enhanced metrics and other observability data related to the selected host.
+* Each host name is an active link to a [host details](/solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details) page, where you can explore enhanced metrics and other observability data related to the selected host.
* Table columns are sortable, but note that the sorting behavior is applied to the already returned data set.
* The tabs at the bottom of the page show an overview of the metrics, logs, and alerts for all hosts returned by your search.
::::{tip}
-For more information about creating and viewing alerts, refer to [Alerting](../../../solutions/observability/incident-management/alerting.md).
+For more information about creating and viewing alerts, refer to [Alerting](/solutions/observability/incident-management/alerting.md).
::::
@@ -60,7 +60,7 @@ For more information about creating and viewing alerts, refer to [Alerting](../.
The **Hosts** page provides several mechanisms for filtering the data on the page:
-* Enter a search query using [{{kib}} Query Language](../../../explore-analyze/query-filter/languages/kql.md) to show metrics that match your search criteria. For example, to see metrics for hosts running on linux, enter `host.os.type : "linux"`. Otherwise you’ll see metrics for all your monitored hosts (up to the number of hosts specified by the host limit).
+* Enter a search query using [{{kib}} Query Language](/explore-analyze/query-filter/languages/kql.md) to show metrics that match your search criteria. For example, to see metrics for hosts running on Linux, enter `host.os.type : "linux"`. Otherwise, you’ll see metrics for all your monitored hosts (up to the number of hosts specified by the host limit).
* Select additional criteria to filter the view:
* In the **Operating System** list, select one or more operating systems to include (or exclude) metrics for hosts running the selected operating systems.
@@ -77,14 +77,14 @@ The **Hosts** page provides several mechanisms for filtering the data on the pag
% Stateful only for filtering data?
-To learn more about filtering data in {{kib}}, refer to [{{kib}} concepts](../../../explore-analyze/query-filter/filtering.md).
+To learn more about filtering data in {{kib}}, refer to [{{kib}} concepts](/explore-analyze/query-filter/filtering.md).
## View metrics [analyze-hosts-inspect-data]
On the **Metrics** tab, view metrics trending over time, including CPU usage, normalized load, memory usage, disk usage, and other metrics related to disk IOPs and throughput. Place your cursor over a line to view metrics at a specific point in time. From within each visualization, you can choose to open the visualization in Lens.
-To see metrics for a specific host, refer to [View host details](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details).
+To see metrics for a specific host, refer to [View host details](/solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details).
### Open in Lens [analyze-hosts-open-in-lens]
@@ -98,7 +98,7 @@ Metrics visualizations are powered by Lens, meaning you can continue your analys
In Lens, you can examine all the fields and formulas used to create the visualization, make modifications to the visualization, and save your changes.
-For more information about using Lens, refer to the [{{kib}} documentation about Lens](../../../explore-analyze/visualize/lens.md).
+For more information about using Lens, refer to the [{{kib}} documentation about Lens](/explore-analyze/visualize/lens.md).
## View logs [analyze-hosts-view-logs]
@@ -110,7 +110,7 @@ On the **Logs** tab of the **Hosts** page, view logs for the systems you are mon
:screenshot:
:::
-To see logs for a specific host, refer to [View host details](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details).
+To see logs for a specific host, refer to [View host details](/solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details).
## View alerts [analyze-hosts-view-alerts]
@@ -128,7 +128,7 @@ From the **Actions** menu, you can choose to:
:screenshot:
:::
-To see alerts for a specific host, refer to [View host details](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details).
+To see alerts for a specific host, refer to [View host details](/solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details).
::::{note}
**Why are alerts missing from the Hosts page?**
@@ -138,7 +138,7 @@ If your rules are triggering alerts that don’t appear on the **Hosts** page, e
* For Metric threshold or Custom threshold rules, select `host.name` in the **Group alerts by** field.
* For Inventory rules, select **Host** for the node type under **Conditions**.
-To learn more about creating and managing rules, refer to [Alerting](../../../solutions/observability/incident-management/alerting.md).
+To learn more about creating and managing rules, refer to [Alerting](/solutions/observability/incident-management/alerting.md).
::::
@@ -231,7 +231,7 @@ The processes listed in the **Top processes** table are based on an aggregation
% Stateful only for Profiling
-:::::{dropdown} **Universal Profiling**
+:::::{dropdown} Universal Profiling
The **Universal Profiling** tab shows CPU usage down to the application code level. From here, you can find the sources of resource usage, and identify code that can be optimized to reduce infrastructure costs. The Universal Profiling tab has the following views.
| | |
@@ -239,7 +239,7 @@ The **Universal Profiling** tab shows CPU usage down to the application code lev
| **Flamegraph** | A visual representation of the functions that consume the most resources. Each rectangle represents a function. The rectangle width represents the time spent in the function. The number of stacked rectangles represents the stack depth, or the number of functions called to reach the current function. |
| **Top 10 Functions** | A list of the most expensive lines of code on your host. See the most frequently sampled functions, broken down by CPU time, annualized CO2, and annualized cost estimates. |
-For more on Universal Profiling, refer to the [Universal Profiling](../../../solutions/observability/infra-and-hosts/universal-profiling.md) docs.
+For more on Universal Profiling, refer to the [Universal Profiling](/solutions/observability/infra-and-hosts/universal-profiling.md) docs.
:::{image} /solutions/images/observability-universal-profiling-overlay.png
:alt: Host Universal Profiling
@@ -291,7 +291,7 @@ To drill down and analyze the metric anomaly, select **Actions** → **Open in A
* **Editor**: Has limited access. Editors can run pre-configured queries, but may have restricted permissions for setting up and scheduling new queries, especially queries that require broader access or permissions adjustments.
* **Viewer**: Has read-only access to data, including viewing Osquery results if configured by a user with higher permissions. Viewers cannot initiate or schedule Osquery queries themselves.
-To learn more about roles, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+To learn more about roles, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -302,14 +302,14 @@ You must have an active [{{agent}}](https://www.elastic.co/guide/en/fleet/curren
::::
-The **Osquery** tab allows you to build SQL statements to query your host data. You can create and run live or saved queries against the {{agent}}. Osquery results are stored in {{es}} so that you can use the {{stack}} to search, analyze, and visualize your host metrics. To create saved queries and add scheduled query groups, refer to [Osquery](../../../solutions/security/investigate/osquery.md).
+The **Osquery** tab allows you to build SQL statements to query your host data. You can create and run live or saved queries against the {{agent}}. Osquery results are stored in {{es}} so that you can use the {{stack}} to search, analyze, and visualize your host metrics. A sample SQL statement follows the list below. To create saved queries and add scheduled query groups, refer to [Osquery](/solutions/security/investigate/osquery.md).
To view more information about the query, click the **Status** tab. A query status can result in `success`, `error` (along with an error message), or `pending` (if the {{agent}} is offline).
Other options include:
-* View in Discover to search, filter, and view information about the structure of host metric fields. To learn more, refer to [Discover](../../../explore-analyze/discover.md).
-* View in Lens to create visualizations based on your host metric fields. To learn more, refer to [Lens](../../../explore-analyze/visualize/lens.md).
+* View in Discover to search, filter, and view information about the structure of host metric fields. To learn more, refer to [Discover](/explore-analyze/discover.md).
+* View in Lens to create visualizations based on your host metric fields. To learn more, refer to [Lens](/explore-analyze/visualize/lens.md).
* View the results in full screen mode.
* Add, remove, reorder, and resize columns.
* Sort field names in ascending or descending order.
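To make the Osquery workflow above concrete, here is a minimal sketch of the kind of SQL statement the **Osquery** tab accepts. The query uses osquery's standard `processes` table; treat it as illustrative rather than a query this page prescribes. If osquery is installed locally, you can dry-run it with `osqueryi` before saving it as a live query:

```sh
# Illustrative only: list the five processes using the most resident memory.
# The same SELECT statement can be pasted into the Osquery tab as a live query.
osqueryi --json \
  "SELECT name, pid, resident_size
   FROM processes
   ORDER BY resident_size DESC
   LIMIT 5;"
```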
@@ -333,9 +333,9 @@ The metrics shown on the **Hosts** page are also available when viewing hosts on
There are a few reasons why you may see dashed lines in your charts.
-* [The chart interval is too short](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md#dashed-interval)
-* [Data is missing](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md#dashed-missing)
-* [The chart interval is too short and data is missing](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md#observability-analyze-hosts-the-chart-interval-is-too-short-and-data-is-missing)
+* [The chart interval is too short](/solutions/observability/infra-and-hosts/analyze-compare-hosts.md#dashed-interval)
+* [Data is missing](/solutions/observability/infra-and-hosts/analyze-compare-hosts.md#dashed-missing)
+* [The chart interval is too short and data is missing](/solutions/observability/infra-and-hosts/analyze-compare-hosts.md#observability-analyze-hosts-the-chart-interval-is-too-short-and-data-is-missing)
### The chart interval is too short [dashed-interval]
diff --git a/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md b/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md
index 434fa0c581..8cfd39e54c 100644
--- a/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md
+++ b/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md
@@ -15,12 +15,12 @@ Using {{agent}} integrations, you can ingest and analyze metrics from servers, D
For more information, refer to the following links:
-* [Get started with system metrics](../../../solutions/observability/infra-and-hosts/get-started-with-system-metrics.md): Learn how to onboard your system metrics data quickly.
-* [View infrastructure metrics by resource type](../../../solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md): Use the **Inventory page** to get a metrics-driven view of your infrastructure grouped by resource type.
-* [Analyze and compare hosts](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md): Use the **Hosts** page to get a metrics-driven view of your infrastructure backed by an easy-to-use interface called Lens.
-* [Detect metric anomalies](../../../solutions/observability/infra-and-hosts/detect-metric-anomalies.md): Detect and inspect memory usage and network traffic anomalies for hosts and Kubernetes pods.
-* [Configure settings](../../../solutions/observability/infra-and-hosts/configure-settings.md): Learn how to configure infrastructure UI settings.
+* [Get started with system metrics](/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md): Learn how to onboard your system metrics data quickly.
+* [View infrastructure metrics by resource type](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md): Use the **Inventory page** to get a metrics-driven view of your infrastructure grouped by resource type.
+* [Analyze and compare hosts](/solutions/observability/infra-and-hosts/analyze-compare-hosts.md): Use the **Hosts** page to get a metrics-driven view of your infrastructure backed by an easy-to-use interface called Lens.
+* [Detect metric anomalies](/solutions/observability/infra-and-hosts/detect-metric-anomalies.md): Detect and inspect memory usage and network traffic anomalies for hosts and Kubernetes pods.
+* [Configure settings](/solutions/observability/infra-and-hosts/configure-settings.md): Learn how to configure infrastructure UI settings.
* [Metrics reference](https://www.elastic.co/guide/en/serverless/current/observability-metrics-reference.html): Learn about key metrics used for infrastructure monitoring.
* [Infrastructure app fields](https://www.elastic.co/guide/en/serverless/current/observability-infrastructure-monitoring-required-fields.html): Learn about the fields required to display data in the Infrastructure UI.
-By default, the Infrastructure UI displays metrics from {{es}} indices that match the `metrics-*` and `metricbeat-*` index patterns. To learn how to change this behavior, refer to [Configure settings](../../../solutions/observability/infra-and-hosts/configure-settings.md).
\ No newline at end of file
+By default, the Infrastructure UI displays metrics from {{es}} indices that match the `metrics-*` and `metricbeat-*` index patterns. To learn how to change this behavior, refer to [Configure settings](/solutions/observability/infra-and-hosts/configure-settings.md).
\ No newline at end of file
diff --git a/solutions/observability/infra-and-hosts/configure-settings.md b/solutions/observability/infra-and-hosts/configure-settings.md
index 55fe61038f..7e63472667 100644
--- a/solutions/observability/infra-and-hosts/configure-settings.md
+++ b/solutions/observability/infra-and-hosts/configure-settings.md
@@ -8,7 +8,7 @@ mapped_pages:
::::{note}
-The **Editor** role or higher is required to configure settings. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+The **Editor** role or higher is required to configure settings. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -34,6 +34,6 @@ If the fields are grayed out and cannot be edited, you may not have sufficient p
% Stateful only for spaces.
::::{tip}
-If [Spaces](../../../deploy-manage/manage-spaces.md) are enabled in your {{kib}} instance, any configuration changes you make here are specific to the current space. You can make different subsets of data available by creating multiple spaces with different data source configurations.
+If [Spaces](/deploy-manage/manage-spaces.md) are enabled in your {{kib}} instance, any configuration changes you make here are specific to the current space. You can make different subsets of data available by creating multiple spaces with different data source configurations.
::::
\ No newline at end of file
diff --git a/solutions/observability/infra-and-hosts/detect-metric-anomalies.md b/solutions/observability/infra-and-hosts/detect-metric-anomalies.md
index 4d9a1ee4c9..fd59806ba6 100644
--- a/solutions/observability/infra-and-hosts/detect-metric-anomalies.md
+++ b/solutions/observability/infra-and-hosts/detect-metric-anomalies.md
@@ -11,7 +11,7 @@ applies_to:
::::{note}
-**For Observability serverless projects**, the **Editor** role or higher is required to create {{ml}} jobs. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Editor** role or higher is required to create {{ml}} jobs. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
diff --git a/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md b/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md
index d6b7811733..b72e2ed2fd 100644
--- a/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md
+++ b/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md
@@ -2,7 +2,7 @@
mapped_pages:
- https://www.elastic.co/guide/en/observability/current/explore-metrics.html
applies_to:
- stack:
+ stack:
---
# Explore infrastructure metrics over time [explore-metrics]
@@ -47,7 +47,7 @@ As an example, let’s view the system load metrics for hosts we’re currently
3. Select **Actions** in the top right-hand corner of one of the graphs and then click **Add filter**.
- This graph now displays the metrics only for that host. The filter has added a [{{kib}} Query Language](../../../explore-analyze/query-filter/languages/kql.md) filter for `host.name` in the second row of the Metrics Explorer configuration.
+ This graph now displays the metrics only for that host. The filter has added a [{{kib}} Query Language](/explore-analyze/query-filter/languages/kql.md) filter for `host.name` in the second row of the Metrics Explorer configuration.
4. Let’s analyze some host-specific metrics. In the **of** field, delete each one of the system load metrics.
5. To explore the outbound network traffic, enter the `host.network.egress.bytes` metric. This is a monotonically increasing value, so from the aggregation dropdown, select `Rate`.
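One way to see why `Rate` is the right aggregation for a monotonically increasing counter is to express it directly against {{es}}: take the `max` of the counter per time bucket, then the `derivative` between buckets. The sketch below assumes a cluster at `localhost:9200`, the default `metrics-*` indices, and a placeholder host name; it illustrates the idea rather than reproducing the exact query Metrics Explorer runs:

```sh
# Sketch: bytes-per-bucket egress for one host, computed as the change in a counter.
curl -s -X POST "localhost:9200/metrics-*/_search?size=0" \
  -H 'Content-Type: application/json' -d'
{
  "query": { "term": { "host.name": "my-host" } },
  "aggs": {
    "per_minute": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1m" },
      "aggs": {
        "egress_max":  { "max": { "field": "host.network.egress.bytes" } },
        "egress_rate": { "derivative": { "buckets_path": "egress_max" } }
      }
    }
  }
}'
```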
diff --git a/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md b/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md
index f4ad6f67a2..fab4670231 100644
--- a/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md
+++ b/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md
@@ -11,7 +11,7 @@ applies_to:
# Get started with system metrics [logs-metrics-get-started]
-In this guide you’ll learn how to onboard system metrics data from a machine or server, then observe the data in Elastic Observability. This guide describes how to use a {{fleet}}-managed {{agent}}. To get started quickly with a standalone agent that does not require {{fleet}}, follow the steps described in the [quickstart](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md).
+In this guide you’ll learn how to onboard system metrics data from a machine or server, then observe the data in Elastic Observability. This guide describes how to use a {{fleet}}-managed {{agent}}. To get started quickly with a standalone agent that does not require {{fleet}}, follow the steps described in the [quickstart](/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md).
## Prerequisites [logs-metrics-prereqs]
@@ -36,7 +36,7 @@ To get started quickly, create an {{ech}} deployment and host it on AWS, GCP, or
:::{tab-item} Serverless
:sync: serverless
-The **Admin** role or higher is required to onboard system metrics data. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+The **Admin** role or higher is required to onboard system metrics data. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
:::
@@ -86,7 +86,7 @@ In this step, add the System integration to monitor host logs and metrics.
:::{tab-item} Serverless
:sync: serverless
-1. [Create a new {{obs-serverless}} project](../../../solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
+1. [Create a new {{obs-serverless}} project](/solutions/observability/get-started/create-an-observability-project.md), or open an existing one.
2. In your {{obs-serverless}} project, go to **Project Settings** → **Integrations**.
3. Type **System** in the search bar, then select the integration to see more details about it.
4. Click **Add System**.
@@ -117,7 +117,7 @@ The **Add agent** flyout has two options: **Enroll in {{fleet}}** and **Run stan
Notice that you can also configure the integration to collect logs.
::::{note}
-** What if {{agent}} is already running on my host?**
+**What if {{agent}} is already running on my host?**
Do not try to deploy a second {{agent}} to the same system. You have a couple options:
@@ -132,13 +132,13 @@ Do not try to deploy a second {{agent}} to the same system. You have a couple op
::::
-After the agent is installed and successfully streaming metrics data, go to **Infrastructure** → **Infrastructure inventory** or **Hosts** to see a metrics-driven view of your infrastructure. To learn more, refer to [View infrastructure metrics by resource type](../../../solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md) or [Analyze and compare hosts](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md).
+After the agent is installed and successfully streaming metrics data, go to **Infrastructure** → **Infrastructure inventory** or **Hosts** to see a metrics-driven view of your infrastructure. To learn more, refer to [View infrastructure metrics by resource type](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md) or [Analyze and compare hosts](/solutions/observability/infra-and-hosts/analyze-compare-hosts.md).
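Before looking for data in the UI, it can help to confirm the agent itself is healthy. A minimal check, assuming a default Linux install (the command location and required privileges vary by platform and install method):

```sh
# Verify the locally installed agent is healthy and connected to Fleet.
sudo elastic-agent status
```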
## Next steps [observability-get-started-with-metrics-next-steps]
Now that you’ve added metrics and explored your data, learn how to onboard other types of data:
-* [Get started with system logs](../../../solutions/observability/logs/get-started-with-system-logs.md)
-* [Stream any log file](../../../solutions/observability/logs/stream-any-log-file.md)
+* [Get started with system logs](/solutions/observability/logs/get-started-with-system-logs.md)
+* [Stream any log file](/solutions/observability/logs/stream-any-log-file.md)
* [Get started with traces and APM](/solutions/observability/apm/get-started.md)
\ No newline at end of file
diff --git a/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md b/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md
index e7fef52a14..87ab915c0e 100644
--- a/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md
+++ b/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md
@@ -18,7 +18,7 @@ On this page, you’ll learn how to configure and use Universal Profiling. This
* Installing the Universal Profiling Agent
* Installing the Universal Profiling Agent integration
-We would appreciate feedback on your experience with this product and any other profiling pain points you may have. See the [send feedback](../../../troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment.md#profiling-send-feedback) section of the troubleshooting documentation for more information.
+We would appreciate feedback on your experience with this product and any other profiling pain points you may have. See the [send feedback](/troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment.md#profiling-send-feedback) section of the troubleshooting documentation for more information.
## Prerequisites [profiling-prereqs]
diff --git a/solutions/observability/infra-and-hosts/install-backend.md b/solutions/observability/infra-and-hosts/install-backend.md
index f5e81954ea..9c70d51407 100644
--- a/solutions/observability/infra-and-hosts/install-backend.md
+++ b/solutions/observability/infra-and-hosts/install-backend.md
@@ -2,7 +2,7 @@
mapped_pages:
- https://www.elastic.co/guide/en/observability/current/profiling-self-managed-installation.html
applies_to:
- stack:
+ stack:
---
# Install the backend [profiling-self-managed-installation]
@@ -15,7 +15,7 @@ To install the Universal Profiling backend, complete these steps:
4. [Run the backend applications](step-4-run-backend-applications.md).
5. [Next steps](step-5-next-steps.md).
-If you face any issues during installation, refer to [Troubleshooting Universal Profiling backend](../../../troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment/troubleshoot-universal-profiling-backend.md).
+If you face any issues during installation, refer to [Troubleshooting Universal Profiling backend](/troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment/troubleshoot-universal-profiling-backend.md).
After the Universal Profiling installation is complete, refer to [Operating the Universal Profiling backend](operate-universal-profiling-backend.md) for more on monitoring and scaling the backend.
diff --git a/solutions/observability/infra-and-hosts/step-2-enable-universal-profiling-in-kibana.md b/solutions/observability/infra-and-hosts/step-2-enable-universal-profiling-in-kibana.md
index 1035ddd1b3..f9885bc43b 100644
--- a/solutions/observability/infra-and-hosts/step-2-enable-universal-profiling-in-kibana.md
+++ b/solutions/observability/infra-and-hosts/step-2-enable-universal-profiling-in-kibana.md
@@ -2,7 +2,7 @@
mapped_pages:
- https://www.elastic.co/guide/en/observability/current/profiling-self-managed-enable-kibana.html
applies_to:
- stack:
+ stack:
---
# Step 2: Enable Universal Profiling in Kibana [profiling-self-managed-enable-kibana]
@@ -31,7 +31,7 @@ In ECE, you don’t need to perform any additional steps to enable the Universal
## Kubernetes [_kubernetes]
-If you’re using ECK, add the previous configuration line to the `kibana.k8s.elastic.co/v1` CRD, placing it under the `spec.config` key. Refer to the [ECK documentation](../../../deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md#k8s-kibana-configuration) for more on configuring {{kib}}.
+If you’re using ECK, add the previous configuration line to the `kibana.k8s.elastic.co/v1` CRD, placing it under the `spec.config` key. Refer to the [ECK documentation](/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md#k8s-kibana-configuration) for more on configuring {{kib}}.
If you’re not using ECK, edit the `secret` or `configMap` holding the `kibana.yml` configuration file. Add the previously mentioned config line, and then perform a rolling restart of the Kibana deployment to reload the configuration.
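As a sketch of the ECK variant, the configuration line goes under `spec.config` of the `Kibana` resource. Everything here other than the `spec.config` placement is a placeholder: the resource names, the version, and the setting itself, which this page defines earlier (`xpack.profiling.enabled: true` is shown only as an assumed example):

```sh
# Sketch only: apply a Kibana CRD with the Universal Profiling setting under spec.config.
cat <<'EOF' | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana            # placeholder resource name
spec:
  version: 8.16.0         # placeholder version
  count: 1
  elasticsearchRef:
    name: elasticsearch   # placeholder Elasticsearch resource
  config:
    xpack.profiling.enabled: true   # assumed example; use the line given on this page
EOF
```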
diff --git a/solutions/observability/infra-and-hosts/step-4-run-backend-applications.md b/solutions/observability/infra-and-hosts/step-4-run-backend-applications.md
index 352ee36ed1..baefa3d8a5 100644
--- a/solutions/observability/infra-and-hosts/step-4-run-backend-applications.md
+++ b/solutions/observability/infra-and-hosts/step-4-run-backend-applications.md
@@ -17,7 +17,7 @@ The next step is to run the backend applications. To do this:
Both the collector and symbolizer need to authenticate to Elasticsearch to process profiling data. For this, you need to create an API key for each application.
-Refer to [Create an API key](../../../deploy-manage/api-keys/elasticsearch-api-keys.md#create-api-key) to create an API key using {{kib}}. Select a **User API key** and assign the following permissions under **Control security privileges**:
+Refer to [Create an API key](/deploy-manage/api-keys/elasticsearch-api-keys.md#create-api-key) to create an API key using {{kib}}. Select a **User API key** and assign the following permissions under **Control security privileges**:
```json
{
@@ -459,7 +459,7 @@ sudo journalctl -xu pf-elastic-collector
sudo journalctl -xu pf-elastic-symbolizer
```
-Refer to [Troubleshooting Universal Profiling backend](../../../troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment/troubleshoot-universal-profiling-backend.md) for more information on troubleshooting possible errors in the logs.
+Refer to [Troubleshooting Universal Profiling backend](/troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment/troubleshoot-universal-profiling-backend.md) for more information on troubleshooting possible errors in the logs.
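Returning to the API key step above: the same kind of key can also be created with the {{es}} security API instead of {{kib}}. The credentials, key name, and privileges below are placeholders; grant exactly the privileges this page lists for the collector and symbolizer:

```sh
# Hypothetical sketch: create a user API key for pf-elastic-collector via the security API.
curl -s -X POST "localhost:9200/_security/api_key" \
  -u elastic:changeme \
  -H 'Content-Type: application/json' -d'
{
  "name": "pf-elastic-collector",
  "role_descriptors": {
    "profiling-backend": {
      "cluster": ["monitor"],
      "indices": [
        { "names": ["profiling-*"], "privileges": ["read", "write"] }
      ]
    }
  }
}'
```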
### OCI containers [profiling-self-managed-running-linux-container]
diff --git a/solutions/observability/infra-and-hosts/step-5-next-steps.md b/solutions/observability/infra-and-hosts/step-5-next-steps.md
index 64929b3c7b..24fa0d0883 100644
--- a/solutions/observability/infra-and-hosts/step-5-next-steps.md
+++ b/solutions/observability/infra-and-hosts/step-5-next-steps.md
@@ -16,7 +16,7 @@ Follow the steps described in [Install the Universal Profiling Agent](get-starte
The agent logs will show that the agent is sending data to the backend, and when you navigate to Kibana you should be able to see data in the **Stacktraces** view. Inspect the backend services logs to verify that the data is being received and ingested. If needed, re-configure the backend services with `verbose: true` to get more detailed logs.
-If you find issues in the logs, refer to [Troubleshooting Universal Profiling backend](../../../troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment/troubleshoot-universal-profiling-backend.md).
+If you find issues in the logs, refer to [Troubleshooting Universal Profiling backend](/troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment/troubleshoot-universal-profiling-backend.md).
## Operating the backend [_operating_the_backend]
diff --git a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md
index 161ff51c61..ce676d7bf7 100644
--- a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md
+++ b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md
@@ -207,7 +207,7 @@ Collecting metrics from `kube-state-metrics` is on by default. The `kube-state-m
With the Kubernetes integration, you can collect a number of metrics using the `kube-state-metrics`. Expand the following list to see all available metrics from `kube-state-metrics`.
-::::{dropdown} Expand to see available metrics from `kube-state-metrics`
+::::{dropdown} Expand to see available metrics from kube-state-metrics
**Container metrics**
: Monitor Container performance to ensure efficiency and stability in pods. Learn more at [`kube-state-metrics` container metrics](https://docs.elastic.co/en/integrations/kubernetes/kube-state-metrics#state_container).
diff --git a/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md b/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md
index dcd669d98f..34a3a1fdcc 100644
--- a/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md
+++ b/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md
@@ -2,7 +2,7 @@
mapped_pages:
- https://www.elastic.co/guide/en/observability/current/monitor-nginx.html
applies_to:
- stack:
+ stack:
---
# Tutorial: Observe your nginx instances [monitor-nginx]
@@ -275,7 +275,7 @@ These anomaly detection jobs are available when you have data that matches the q
### Before you begin [monitor-nginx-ml-prereqs]
-Verify that your environment is set up properly to use the {{ml-features}}. If {{es}} {{security-features}} are enabled, you need a user with permissions to manage {{anomaly-jobs}}. Refer to [Set up ML features](../../../explore-analyze/machine-learning/setting-up-machine-learning.md).
+Verify that your environment is set up properly to use the {{ml-features}}. If {{es}} {{security-features}} are enabled, you need a user with permissions to manage {{anomaly-jobs}}. Refer to [Set up ML features](/explore-analyze/machine-learning/setting-up-machine-learning.md).
### Add nginx ML jobs [monitor-nginx-ml-add-jobs]
@@ -297,11 +297,11 @@ Back on the **Anomaly Detection Jobs** page, you should see the nginx anomaly de
View your anomaly detection job results using the Anomaly Explorer or Single Metric Viewer found under **Anomaly Detection** in the Machine Learning menu. The Anomaly Explorer shows the results from all or any combination of your nginx ML jobs. The Single Metric Viewer focuses on a specific job. These tools offer a comprehensive view of anomalies and help find patterns and irregularities across data points and time intervals.
-Refer to [View anomaly detection job results](../../../explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md) for more on viewing and understanding your anomaly detection job results.
+Refer to [View anomaly detection job results](/explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md) for more on viewing and understanding your anomaly detection job results.
### Set up alerts [monitor-nginx-ml-alert]
With the nginx ML jobs detecting anomalies, you can set rules to generate alerts when your jobs meet specific conditions. For example, you could set up a rule on the `low_request_rate_nginx` job to alert when low request rates hit a specific severity threshold. When you get alerted, you can make sure your server isn’t experiencing issues.
-Refer to [Generating alerts for anomaly detection jobs](../../../explore-analyze/machine-learning/anomaly-detection/ml-configuring-alerts.md) for more on setting these rules and generating alerts.
+Refer to [Generating alerts for anomaly detection jobs](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-alerts.md) for more on setting these rules and generating alerts.
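If you also want to check job results programmatically, anomaly records can be pulled from the {{ml}} API. A sketch, assuming the `low_request_rate_nginx` job from this section and an arbitrary minimum record score of 75:

```sh
# Sketch: fetch high-scoring anomaly records for the low_request_rate_nginx job.
curl -s -X GET "localhost:9200/_ml/anomaly_detectors/low_request_rate_nginx/results/records" \
  -H 'Content-Type: application/json' -d'
{
  "record_score": 75,
  "sort": "record_score",
  "desc": true
}'
```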
diff --git a/solutions/observability/infra-and-hosts/universal-profiling-index-life-cycle-management.md b/solutions/observability/infra-and-hosts/universal-profiling-index-life-cycle-management.md
index e6a72c8cc4..78d402f6d7 100644
--- a/solutions/observability/infra-and-hosts/universal-profiling-index-life-cycle-management.md
+++ b/solutions/observability/infra-and-hosts/universal-profiling-index-life-cycle-management.md
@@ -54,7 +54,7 @@ Complete the following steps to configure a custom index lifecycle policy.
5. Click **Save policy**.
::::{tip}
-See [Manage the index lifecycle](../../../manage-data/lifecycle/index-lifecycle-management.md) to learn more about {{ilm-init}} policies.
+See [Manage the index lifecycle](/manage-data/lifecycle/index-lifecycle-management.md) to learn more about {{ilm-init}} policies.
::::
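For reference, a custom policy like the one configured in these steps can also be defined through the {{ilm-init}} API. The policy name and phase timings below are placeholders, not recommendations:

```sh
# Illustrative custom ILM policy: roll over hot data, delete after 30 days.
curl -s -X PUT "localhost:9200/_ilm/policy/profiling-custom" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d", "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}'
```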
diff --git a/solutions/observability/infra-and-hosts/upgrade-universal-profiling.md b/solutions/observability/infra-and-hosts/upgrade-universal-profiling.md
index c2016b73ae..e22d7392b3 100644
--- a/solutions/observability/infra-and-hosts/upgrade-universal-profiling.md
+++ b/solutions/observability/infra-and-hosts/upgrade-universal-profiling.md
@@ -106,4 +106,4 @@ Click any subheadings under Universal Profiling in the navigation menu. You shou
If you see instructions on how to deploy the Universal Profiling Agent like in the [examples](get-started-with-universal-profiling.md#profiling-install-profiling-agent) from the [Get Started](get-started-with-universal-profiling.md) documentation, the agents did not reconnect to the Integrations Server replicas.
-Refer to the [troubleshooting](../../../troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment.md) documentation and the [Get Started](get-started-with-universal-profiling.md) documentation to investigate the issue.
+Refer to the [troubleshooting](/troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment.md) documentation and the [Get Started](get-started-with-universal-profiling.md) documentation to investigate the issue.
diff --git a/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md b/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md
index b4867f77a5..57fb787898 100644
--- a/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md
+++ b/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md
@@ -27,7 +27,7 @@ To learn more about the metrics shown on this page, refer to the [Metrics refere
If you haven’t added data yet, click **Add data** to search for and install an Elastic integration.
-Need help getting started? Follow the steps in [Get started with system metrics](../../../solutions/observability/infra-and-hosts/get-started-with-system-metrics.md).
+Need help getting started? Follow the steps in [Get started with system metrics](/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md).
::::
@@ -50,7 +50,7 @@ You can sort by resource, group the resource by specific fields related to it, a
:screenshot:
:::
-You can also use the search bar to create structured queries using [{{kib}} Query Language](../../../explore-analyze/query-filter/languages/kql.md). For example, enter `host.hostname : "host1"` to view only the information for `host1`.
+You can also use the search bar to create structured queries using [{{kib}} Query Language](/explore-analyze/query-filter/languages/kql.md). For example, enter `host.hostname : "host1"` to view only the information for `host1`.
To examine the metrics for a specific time, use the time filter to select the date and time.
@@ -152,7 +152,7 @@ The **Universal Profiling** tab shows CPU usage down to the application code lev
| **Flamegraph** | A visual representation of the functions that consume the most resources. Each rectangle represents a function. The rectangle width represents the time spent in the function. The number of stacked rectangles represents the stack depth, or the number of functions called to reach the current function. |
| **Top 10 Functions** | A list of the most expensive lines of code on your host. See the most frequently sampled functions, broken down by CPU time, annualized CO2, and annualized cost estimates. |
-For more on Universal Profiling, refer to the [Universal Profiling](../../../solutions/observability/infra-and-hosts/universal-profiling.md) docs.
+For more on Universal Profiling, refer to the [Universal Profiling](/solutions/observability/infra-and-hosts/universal-profiling.md) docs.
:::{image} /solutions/images/observability-universal-profiling-overlay.png
:alt: Host Universal Profiling
@@ -205,7 +205,7 @@ To drill down and analyze the metric anomaly, select **Actions** → **Open in A
* **Editor**: Has limited access. Editors can run pre-configured queries, but may have restricted permissions for setting up and scheduling new queries, especially queries that require broader access or permissions adjustments.
* **Viewer**: Has read-only access to data, including viewing Osquery results if configured by a user with higher permissions. Viewers cannot initiate or schedule Osquery queries themselves.
-To learn more about roles, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+To learn more about roles, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -216,14 +216,14 @@ You must have an active [{{agent}}](https://www.elastic.co/guide/en/fleet/curren
::::
-The **Osquery** tab allows you to build SQL statements to query your host data. You can create and run live or saved queries against the {{agent}}. Osquery results are stored in {{es}} so that you can use the {{stack}} to search, analyze, and visualize your host metrics. To create saved queries and add scheduled query groups, refer to [Osquery](../../../solutions/security/investigate/osquery.md).
+The **Osquery** tab allows you to build SQL statements to query your host data. You can create and run live or saved queries against the {{agent}}. Osquery results are stored in {{es}} so that you can use the {{stack}} to search, analyze, and visualize your host metrics. To create saved queries and add scheduled query groups, refer to [Osquery](/solutions/security/investigate/osquery.md).
To view more information about the query, click the **Status** tab. A query status can result in `success`, `error` (along with an error message), or `pending` (if the {{agent}} is offline).
Other options include:
-* View in Discover to search, filter, and view information about the structure of host metric fields. To learn more, refer to [Discover](../../../explore-analyze/discover.md).
-* View in Lens to create visualizations based on your host metric fields. To learn more, refer to [Lens](../../../explore-analyze/visualize/lens.md).
+* View in Discover to search, filter, and view information about the structure of host metric fields. To learn more, refer to [Discover](/explore-analyze/discover.md).
+* View in Lens to create visualizations based on your host metric fields. To learn more, refer to [Lens](/explore-analyze/visualize/lens.md).
* View the results in full screen mode.
* Add, remove, reorder, and resize columns.
* Sort field names in ascending or descending order.
@@ -361,6 +361,6 @@ Select your resource, and from the **Metric** filter menu, click **Add metric**.
Depending on the features you have installed and configured, you can view logs or traces relating to a specific resource. For example, in the high-level view, when you click a Kubernetes Pod resource, you can choose:
-* **Kubernetes Pod logs** to [view corresponding logs](../../../solutions/observability/logs.md) in the {{logs-app}}.
+* **Kubernetes Pod logs** to [view corresponding logs](/solutions/observability/logs.md) in the {{logs-app}}.
* **Kubernetes Pod APM traces** to [view corresponding APM traces](/solutions/observability/apm/index.md) in the {{apm-app}}.
* **Kubernetes Pod in Uptime** to [view related uptime information](/solutions/observability/synthetics/index.md) in the {{uptime-app}}.
\ No newline at end of file
diff --git a/solutions/observability/logs.md b/solutions/observability/logs.md
index 52acfec0f1..3318861292 100644
--- a/solutions/observability/logs.md
+++ b/solutions/observability/logs.md
@@ -10,13 +10,13 @@ navigation_title: "Logs"
Elastic Observability allows you to deploy and manage logs at a petabyte scale, giving you insights into your logs in minutes. You can also search across your logs in one place, troubleshoot in real time, and detect patterns and outliers with categorization and anomaly detection. For more information, refer to the following links:
-* [Get started with system logs](../../solutions/observability/logs/get-started-with-system-logs.md): Onboard system log data from a machine or server.
-* [Stream any log file](../../solutions/observability/logs/stream-any-log-file.md): Send log files to your Observability project using a standalone {{agent}}.
-* [Parse and route logs](../../solutions/observability/logs/parse-route-logs.md): Parse your log data and extract structured fields that you can use to analyze your data.
-* [Filter and aggregate logs](../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently.
-* [Explore logs](../../solutions/observability/logs/discover-logs.md): Find information on visualizing and analyzing logs.
-* [Run pattern analysis on log data](../../solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data.
-* [Troubleshoot logs](../../troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs.
+* [Get started with system logs](/solutions/observability/logs/get-started-with-system-logs.md): Onboard system log data from a machine or server.
+* [Stream any log file](/solutions/observability/logs/stream-any-log-file.md): Send log files to your Observability project using a standalone {{agent}}.
+* [Parse and route logs](/solutions/observability/logs/parse-route-logs.md): Parse your log data and extract structured fields that you can use to analyze your data.
+* [Filter and aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently.
+* [Explore logs](/solutions/observability/logs/discover-logs.md): Find information on visualizing and analyzing logs.
+* [Run pattern analysis on log data](/solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data.
+* [Troubleshoot logs](/troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs.
## Send logs data to your project [observability-log-monitoring-send-logs-data-to-your-project]
@@ -26,7 +26,7 @@ You can send logs data to your project in different ways depending on your needs
* {{agent}}
* {{filebeat}}
-When choosing between {{agent}} and {{filebeat}}, consider the different features and functionalities between the two options. See [{{beats}} and {{agent}} capabilities](../../manage-data/ingest/tools.md) for more information on which option best fits your situation.
+When choosing between {{agent}} and {{filebeat}}, consider the different features and functionalities between the two options. See [{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md) for more information on which option best fits your situation.
### {{agent}} [observability-log-monitoring-agent]
@@ -68,11 +68,11 @@ See [install {{agent}} in containers](/reference/fleet/install-elastic-agents-in
The following resources provide information on configuring your logs:
-* [Data streams](../../manage-data/data-store/data-streams.md): Efficiently store append-only time series data in multiple backing indices partitioned by time and size.
-* [Data views](../../explore-analyze/find-and-organize/data-views.md): Query log entries from the data streams of specific datasets or namespaces.
-* [Index lifecycle management](../../manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md): Configure the built-in logs policy based on your application’s performance, resilience, and retention requirements.
-* [Ingest pipeline](../../manage-data/ingest/transform-enrich/ingest-pipelines.md): Parse and transform log entries into a suitable format before indexing.
-* [Mapping](../../manage-data/data-store/mapping.md): Define how data is stored and indexed.
+* [Data streams](/manage-data/data-store/data-streams.md): Efficiently store append-only time series data in multiple backing indices partitioned by time and size.
+* [Data views](/explore-analyze/find-and-organize/data-views.md): Query log entries from the data streams of specific datasets or namespaces.
+* [Index lifecycle management](/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md): Configure the built-in logs policy based on your application’s performance, resilience, and retention requirements.
+* [Ingest pipeline](/manage-data/ingest/transform-enrich/ingest-pipelines.md): Parse and transform log entries into a suitable format before indexing.
+* [Mapping](/manage-data/data-store/mapping.md): Define how data is stored and indexed.
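To make the ingest pipeline item above concrete, here is a minimal sketch that splits a raw `LEVEL message` log line into structured fields with a `dissect` processor. The pipeline name and field layout are assumptions for illustration:

```sh
# Minimal illustrative pipeline: extract log.level from the front of the message.
curl -s -X PUT "localhost:9200/_ingest/pipeline/logs-example-parse" \
  -H 'Content-Type: application/json' -d'
{
  "description": "Example: split \"LEVEL message\" lines into fields",
  "processors": [
    { "dissect": { "field": "message", "pattern": "%{log.level} %{message}" } }
  ]
}'
```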
## View and monitor logs [observability-log-monitoring-view-and-monitor-logs]
@@ -81,30 +81,30 @@ Use **Discover** to search, filter, and tail all your logs ingested into your pr
The following resources provide information on viewing and monitoring your logs:
-* [Discover and explore](../../solutions/observability/logs/discover-logs.md): Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view.
-* [Detect log anomalies](../../explore-analyze/machine-learning/anomaly-detection.md): Use {{ml}} to detect log anomalies automatically.
+* [Discover and explore](/solutions/observability/logs/discover-logs.md): Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view.
+* [Detect log anomalies](/explore-analyze/machine-learning/anomaly-detection.md): Use {{ml}} to detect log anomalies automatically.
## Monitor data sets [observability-log-monitoring-monitor-data-sets]
The **Data Set Quality** page provides an overview of your data sets and their quality. Use this information to get an idea of your overall data set quality, and find data sets that contain incorrectly parsed documents.
-[Monitor data sets](../../solutions/observability/data-set-quality-monitoring.md)
+[Monitor data sets](/solutions/observability/data-set-quality-monitoring.md)
## Application logs [observability-log-monitoring-application-logs]
-Application logs provide valuable insight into events that have occurred within your services and applications. See [Application logs](../../solutions/observability/logs/stream-application-logs.md).
+Application logs provide valuable insight into events that have occurred within your services and applications. See [Application logs](/solutions/observability/logs/stream-application-logs.md).
## Log threshold alert [logs-alerts-checklist]
You can create a rule to send an alert when the log aggregation exceeds a threshold.
-Refer to [Log threshold](../../solutions/observability/incident-management/create-log-threshold-rule.md).
+Refer to [Log threshold](/solutions/observability/incident-management/create-log-threshold-rule.md).
## Default logs template [logs-template-checklist]
Configure the default `logs` template using the `logs@custom` component template.
-Refer to the [Logs index template reference](../../solutions/observability/logs/logs-index-template-reference.md).
\ No newline at end of file
+Refer to the [Logs index template reference](/solutions/observability/logs/logs-index-template-reference.md).
\ No newline at end of file
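As a sketch of the `logs@custom` mechanism above: the component template is merged into the managed `logs` index template, so settings placed there apply to new `logs-*-*` backing indices. The lifecycle policy name here is a placeholder:

```sh
# Example override: point all logs data streams at a custom ILM policy via logs@custom.
curl -s -X PUT "localhost:9200/_component_template/logs@custom" \
  -H 'Content-Type: application/json' -d'
{
  "template": {
    "settings": {
      "index.lifecycle.name": "my-custom-logs-policy"
    }
  }
}'
```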
diff --git a/solutions/observability/logs/add-service-name-to-logs.md b/solutions/observability/logs/add-service-name-to-logs.md
index 7efc136762..794e652685 100644
--- a/solutions/observability/logs/add-service-name-to-logs.md
+++ b/solutions/observability/logs/add-service-name-to-logs.md
@@ -54,12 +54,12 @@ For logs that with an existing field being used to represent the service name, m
7. Under **Field path**, select the existing field you want to map to the service name.
8. Select **Add field**.
-For more ways to add a field to your mapping, refer to [add a field to an existing mapping](../../../manage-data/data-store/mapping/explicit-mapping.md#add-field-mapping).
+For more ways to add a field to your mapping, refer to [add a field to an existing mapping](/manage-data/data-store/mapping/explicit-mapping.md#add-field-mapping).
## Additional ways to process data [observability-add-logs-service-name-additional-ways-to-process-data]
The {{stack}} provides additional ways to process your data:
-* **[Ingest pipelines](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md):** convert data to ECS, normalize field data, or enrich incoming data.
+* **[Ingest pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md):** convert data to ECS, normalize field data, or enrich incoming data.
* **[Logstash](https://www.elastic.co/guide/en/logstash/current):** enrich your data using input, output, and filter plugins.
\ No newline at end of file
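The UI steps above create a field alias; the equivalent mapping change can be sketched with the update mapping API. Here `app_id` stands in for your existing field and `logs-example-default` for your target, and an alias can only point at a field that already exists in the mapping:

```sh
# Hypothetical: expose an existing app_id field as service.name via a field alias.
curl -s -X PUT "localhost:9200/logs-example-default/_mapping" \
  -H 'Content-Type: application/json' -d'
{
  "properties": {
    "service": {
      "properties": {
        "name": { "type": "alias", "path": "app_id" }
      }
    }
  }
}'
```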
diff --git a/solutions/observability/logs/categorize-log-entries.md b/solutions/observability/logs/categorize-log-entries.md
index 6d0a5593e5..8cc51b78d8 100644
--- a/solutions/observability/logs/categorize-log-entries.md
+++ b/solutions/observability/logs/categorize-log-entries.md
@@ -12,7 +12,7 @@ Application log events are often unstructured and contain variable data. Many lo
The **Categories** page enables you to identify patterns in your log events quickly. Instead of manually identifying similar logs, the logs categorization view lists log events that have been grouped based on their messages and formats so that you can take action quicker.
::::{note}
-This feature makes use of {{ml}} {{anomaly-jobs}}. To set up jobs, you must have `all` {{kib}} feature privileges for **{{ml-app}}**. Users that have full or read-only access to {{ml-features}} within a {{kib}} space can view the results of *all* {{anomaly-jobs}} that are visible in that space, even if they do not have access to the source indices of those jobs. You must carefully consider who is given access to {{ml-features}}; {{anomaly-job}} results may propagate field values that contain sensitive information from the source indices to the results. For more details, refer to [Set up {{ml-features}}](../../../explore-analyze/machine-learning/setting-up-machine-learning.md).
+This feature makes use of {{ml}} {{anomaly-jobs}}. To set up jobs, you must have `all` {{kib}} feature privileges for **{{ml-app}}**. Users that have full or read-only access to {{ml-features}} within a {{kib}} space can view the results of *all* {{anomaly-jobs}} that are visible in that space, even if they do not have access to the source indices of those jobs. You must carefully consider who is given access to {{ml-features}}; {{anomaly-job}} results may propagate field values that contain sensitive information from the source indices to the results. For more details, refer to [Set up {{ml-features}}](/explore-analyze/machine-learning/setting-up-machine-learning.md).
::::
@@ -51,4 +51,4 @@ To view a log message under a particular category, click the arrow at the end of
:screenshot:
:::
-For more information about categorization, go to [Detecting anomalous categories of data](../../../explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md).
+For more information about categorization, go to [Detecting anomalous categories of data](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md).
diff --git a/solutions/observability/logs/discover-logs.md b/solutions/observability/logs/discover-logs.md
index 1b3e848b61..5cd84b684b 100644
--- a/solutions/observability/logs/discover-logs.md
+++ b/solutions/observability/logs/discover-logs.md
@@ -14,7 +14,7 @@ From the `logs-*` or `All logs` data view in Discover, you can quickly search an
To open **Discover**, find `Discover` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Select the `logs-*` or `All logs` data view from the **Data view** menu.
:::{note}
-For a contextual logs experience, set the **Solution view** for your space to **Observability**. Refer to [Managing spaces](../../../deploy-manage/manage-spaces.md) for more information.
+For a contextual logs experience, set the **Solution view** for your space to **Observability**. Refer to [Managing spaces](/deploy-manage/manage-spaces.md) for more information.
:::
:::{image} ../../images/observability-log-explorer.png
@@ -24,23 +24,23 @@ For a contextual logs experience, set the **Solution view** for your space to **
## Required {{kib}} privileges [logs-explorer-privileges]
-Viewing data in Discover logs data views requires `read` privileges for **Discover**, **Index**, and **Logs**. For more on assigning {{kib}} privileges, refer to the [{{kib}} privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) docs.
+Viewing data in Discover logs data views requires `read` privileges for **Discover**, **Index**, and **Logs**. For more on assigning {{kib}} privileges, refer to the [{{kib}} privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) docs.
## Find your logs [find-your-logs]
By default, the **All logs** data view shows all of your logs, according to the index patterns set in the **logs sources** advanced setting. To open **Advanced settings**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
-To focus on logs from a specific source or sources, create a data view using the index patterns of those source. For more information on creating data views, refer to [Create a data view](../../../explore-analyze/find-and-organize/data-views.md#settings-create-pattern)
+To focus on logs from a specific source or sources, create a data view using the index patterns of those sources. For more information on creating data views, refer to [Create a data view](/explore-analyze/find-and-organize/data-views.md#settings-create-pattern).
-Once you have the logs you want to focus on displayed, you can drill down further to find the information you need. For more on filtering your data in Discover, refer to [Filter logs in Discover](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-discover).
+Once you have the logs you want to focus on displayed, you can drill down further to find the information you need. For more on filtering your data in Discover, refer to [Filter logs in Discover](/solutions/observability/logs/filter-aggregate-logs.md#logs-filter-discover).
## Review log data in the documents table [review-log-data-in-the-documents-table]
The documents table lets you add fields, order table columns, sort fields, and update the row height in the same way you would in Discover.
-Refer to the [Discover](../../../explore-analyze/discover.md) documentation for more information on updating the table.
+Refer to the [Discover](/explore-analyze/discover.md) documentation for more information on updating the table.
### Actions column [actions-column]
@@ -72,4 +72,4 @@ The following actions help you filter and focus on specific fields in the log de
Go to **Data Sets** to view more details about your data sets and monitor their overall quality. To open **Data Set Quality**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
-Refer to [*Data set quality*](../../../solutions/observability/data-set-quality-monitoring.md) for more information.
\ No newline at end of file
+Refer to [*Data set quality*](/solutions/observability/data-set-quality-monitoring.md) for more information.
\ No newline at end of file
diff --git a/solutions/observability/logs/ecs-formatted-application-logs.md b/solutions/observability/logs/ecs-formatted-application-logs.md
index 4bfad8a2d1..43e28ed50a 100644
--- a/solutions/observability/logs/ecs-formatted-application-logs.md
+++ b/solutions/observability/logs/ecs-formatted-application-logs.md
@@ -13,8 +13,8 @@ Logs formatted in Elastic Common Schema (ECS) don’t require manual parsing, an
You can format your logs in ECS format in the following ways:
-* [ECS loggers](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ecs-loggers): plugins for your logging libraries that reformat your logs into ECS format.
-* [APM agent ECS reformatting](../../../solutions/observability/logs/ecs-formatted-application-logs.md#apm-agent-ecs-reformatting): Java, Ruby, and Python {{apm-agent}}s automatically reformat application logs to ECS format without a logger.
+* [ECS loggers](/solutions/observability/logs/ecs-formatted-application-logs.md#ecs-loggers): plugins for your logging libraries that reformat your logs into ECS format.
+* [APM agent ECS reformatting](/solutions/observability/logs/ecs-formatted-application-logs.md#apm-agent-ecs-reformatting): Java, Ruby, and Python {{apm-agent}}s automatically reformat application logs to ECS format without a logger.
## ECS loggers [ecs-loggers]
@@ -41,9 +41,9 @@ Java, Ruby, and Python {{apm-agent}}s can automatically reformat application log
To set up log ECS reformatting:
-1. [Enable {{apm-agent}} reformatting](../../../solutions/observability/logs/ecs-formatted-application-logs.md#enable-log-ecs-reformatting)
-2. [Ingest logs with {{filebeat}} or {{agent}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs)
-3. [View logs in Discover](../../../solutions/observability/logs/ecs-formatted-application-logs.md#view-ecs-logs)
+1. [Enable {{apm-agent}} reformatting](/solutions/observability/logs/ecs-formatted-application-logs.md#enable-log-ecs-reformatting)
+2. [Ingest logs with {{filebeat}} or {{agent}}](/solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs)
+3. [View logs in Discover](/solutions/observability/logs/ecs-formatted-application-logs.md#view-ecs-logs)
### Enable log ECS reformatting [enable-log-ecs-reformatting]
@@ -59,8 +59,8 @@ Log ECS reformatting is controlled by the `log_ecs_reformatting` configuration o
After enabling log ECS reformatting, send your application logs to your project using one of the following shipping tools:
-* [{{filebeat}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs-with-filebeat): A lightweight data shipper that sends log data to your project.
-* [{{agent}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs-with-agent): A single agent for logs, metrics, security data, and threat prevention. With Fleet, you can centrally manage {{agent}} policies and lifecycles directly from your project.
+* [{{filebeat}}](/solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs-with-filebeat): A lightweight data shipper that sends log data to your project.
+* [{{agent}}](/solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs-with-agent): A single agent for logs, metrics, security data, and threat prevention. With Fleet, you can centrally manage {{agent}} policies and lifecycles directly from your project.
#### Ingest logs with {{filebeat}} [ingest-ecs-logs-with-filebeat]
@@ -76,22 +76,36 @@ Install {{filebeat}} on the server you want to monitor by running the commands t
::::::{tab-item} DEB
```sh subs=true
+curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-amd64.deb
+sudo dpkg -i filebeat-{{version}}-amd64.deb
+```
+::::::
+
+::::::{tab-item} RPM
+```sh subs=true
+curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-x86_64.rpm
+sudo rpm -vi filebeat-{{version}}-x86_64.rpm
+```
+::::::
+
+::::::{tab-item} macOS
+```sh subs=true
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-darwin-x86_64.tar.gz
tar xzvf filebeat-{{version}}-darwin-x86_64.tar.gz
```
::::::
-::::::{tab-item} RPM
+::::::{tab-item} Linux
```sh subs=true
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-linux-x86_64.tar.gz
tar xzvf filebeat-{{version}}-linux-x86_64.tar.gz
```
::::::
-::::::{tab-item} macOS
-1. Download the {{filebeat}} Windows zip file: `https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-windows-x86_64.zip`
+::::::{tab-item} Windows
+1. Download the [{{filebeat}} Windows zip file](https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-windows-x86_64.zip).
2. Extract the contents of the zip file into `C:\Program Files`.
-3. Rename the `filebeat-{{version}}-windows-x86_64` directory to `{{filebeat}}`.
+3. Rename the _filebeat-{{version}}-windows-x86\_64_ directory to _Filebeat_.
4. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**).
5. From the PowerShell prompt, run the following commands to install {{filebeat}} as a Windows service:
@@ -100,24 +114,9 @@ tar xzvf filebeat-{{version}}-linux-x86_64.tar.gz
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
```
-
If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
::::::
-::::::{tab-item} Linux
-```sh subs=true
-curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-amd64.deb
-sudo dpkg -i filebeat-{{version}}-amd64.deb
-```
-::::::
-
-::::::{tab-item} Windows
-```sh subs=true
-curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-x86_64.rpm
-sudo rpm -vi filebeat-{{version}}-x86_64.rpm
-```
-::::::
-
:::::::
#### Step 2: Connect to your project [step-2-ecs-connect-to-your-project]
@@ -130,7 +129,7 @@ output.elasticsearch:
api_key: "id:api_key"
```
-1. Set the `hosts` to your deployment’s {{es}} endpoint. Copy the {{es}} endpoint from **Help menu () → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`.
+1. Set the `hosts` to your deployment’s {{es}} endpoint. Copy the {{es}} endpoint from **Help menu → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`.
2. From **Developer tools**, run the following command to create an API key that grants `manage` permissions for the `cluster` and the `filebeat-*` indices:
```console
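// Sketch of the request (the key name and role name below are illustrative):
POST /_security/api_key
{
  "name": "filebeat-api-key",
  "role_descriptors": {
    "filebeat_writer": {
      "cluster": ["manage"],
      "index": [
        {
          "names": ["filebeat-*"],
          "privileges": ["manage"]
        }
      ]
    }
  }
}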
@@ -173,7 +172,7 @@ filebeat.inputs:
#### Step 4: Set up and start {{filebeat}} [step-4-ecs-set-up-and-start-filebeat]
-From the {{filebeat}} installation directory, set the [index template](../../../manage-data/data-store/templates.md) by running the command that aligns with your system:
+From the {{filebeat}} installation directory, set the [index template](/manage-data/data-store/templates.md) by running the command that aligns with your system:
:::::::{tab-set}
@@ -314,7 +313,7 @@ To add the custom logs integration to your project:
## View logs [view-ecs-logs]
-Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for more information on viewing and filtering your logs in {{kib}}.
+Refer to the [Filter and aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md) documentation for more information on viewing and filtering your logs in {{kib}}.
% What needs to be done: Align serverless/stateful
diff --git a/solutions/observability/logs/filter-aggregate-logs.md b/solutions/observability/logs/filter-aggregate-logs.md
index cb17a27b2c..0ce47aade7 100644
--- a/solutions/observability/logs/filter-aggregate-logs.md
+++ b/solutions/observability/logs/filter-aggregate-logs.md
@@ -13,20 +13,20 @@ Filter and aggregate your log data to find specific information, gain insight, a
This guide shows you how to:
-* [Filter logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Narrow down your log data by applying specific criteria.
-* [Aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-aggregate): Analyze and summarize data to find patterns and gain insight.
+* [Filter logs](/solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Narrow down your log data by applying specific criteria.
+* [Aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md#logs-aggregate): Analyze and summarize data to find patterns and gain insight.
## Before you get started [logs-filter-and-aggregate-prereq]
::::{note}
-**For Observability serverless projects**, the **Admin** role or higher is required to create ingest pipelines and set the index template. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Admin** role or higher is required to create ingest pipelines and set the index template. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
-The examples on this page use the following ingest pipeline and index template, which you can set in **Developer Tools**. If you haven’t used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) documentation.
+The examples on this page use the following ingest pipeline and index template, which you can set in **Developer Tools**. If you haven’t used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the [Parse and organize logs](/solutions/observability/logs/parse-route-logs.md) documentation.
Set the ingest pipeline with the following command:
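A minimal sketch of that pipeline, assuming the dissect pattern implied by the fields used in the examples on this page (the description string is illustrative):

```console
PUT _ingest/pipeline/logs-example-default
{
  "description": "Extracts the timestamp, log level, and host IP",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
      }
    }
  ]
}
```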
@@ -73,15 +73,15 @@ PUT _index_template/logs-example-default-template
Filter your data using the fields you’ve extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways:
-* [Filter logs in Discover](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-discover): Filter and visualize log data in Discover.
-* [Filter logs with Query DSL](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-qdsl): Filter log data from Developer Tools using Query DSL.
+* [Filter logs in Discover](/solutions/observability/logs/filter-aggregate-logs.md#logs-filter-discover): Filter and visualize log data in Discover.
+* [Filter logs with Query DSL](/solutions/observability/logs/filter-aggregate-logs.md#logs-filter-qdsl): Filter log data from Developer Tools using Query DSL.
### Filter logs in Discover [logs-filter-discover]
Discover is a tool that provides views of your log data based on data views and index patterns. To open **Discover**, find `Discover` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
-From Discover, open the `logs-*` or `All logs` data views from the **Data views** menu. From here, you can use the [{{kib}} Query Language (KQL)](../../../explore-analyze/query-filter/languages/kql.md) in the search bar to narrow down the log data that’s displayed. For example, you might want to look into an event that occurred within a specific time range.
+From Discover, open the `logs-*` or `All logs` data views from the **Data views** menu. From here, you can use the [{{kib}} Query Language (KQL)](/explore-analyze/query-filter/languages/kql.md) in the search bar to narrow down the log data that’s displayed. For example, you might want to look into an event that occurred within a specific time range.
Add some logs with varying timestamps and log levels to your data stream:
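A sketch of such a request, following the log format used on this page (the timestamps, hosts, and messages are illustrative):

```console
POST logs-example-default/_bulk
{ "create": {} }
{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
{ "create": {} }
{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
{ "create": {} }
{ "message": "2023-09-15T12:30:45.789Z DEBUG 192.168.1.103 Debugging connection issue." }
```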
@@ -124,12 +124,12 @@ Under the **Documents** tab, you’ll see the filtered log data matching your qu
:screenshot:
:::
-For more on using Discover, refer to the [Discover](../../../explore-analyze/discover.md) documentation.
+For more on using Discover, refer to the [Discover](/explore-analyze/discover.md) documentation.
### Filter logs with Query DSL [logs-filter-qdsl]
-[Query DSL](../../../explore-analyze/query-filter/languages/querydsl.md) is a JSON-based language that sends requests and retrieves data from indices and data streams. You can filter your log data using Query DSL from **Developer Tools**.
+[Query DSL](/explore-analyze/query-filter/languages/querydsl.md) is a JSON-based language that sends requests and retrieves data from indices and data streams. You can filter your log data using Query DSL from **Developer Tools**.
For example, you might want to troubleshoot an issue that happened on a specific date or at a specific time. To do this, use a boolean query with a [range query](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) to filter for the specific timestamp range and a [term query](elasticsearch://reference/query-languages/query-dsl/query-dsl-term-query.md) to filter for `WARN` and `ERROR` log levels.
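For example, a sketch of such a query (the timestamp range is illustrative, and a `terms` query is used here to match both log levels):

```console
POST logs-example-default/_search
{
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2023-09-14T00:00:00",
              "lte": "2023-09-15T23:59:59"
            }
          }
        },
        {
          "terms": {
            "log.level": ["WARN", "ERROR"]
          }
        }
      ]
    }
  }
}
```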
@@ -344,4 +344,4 @@ The results should show an aggregate of logs that occurred within your timestamp
}
```
-For more on aggregation types and available aggregations, refer to the [Aggregations](../../../explore-analyze/query-filter/aggregations.md) documentation.
\ No newline at end of file
+For more on aggregation types and available aggregations, refer to the [Aggregations](/explore-analyze/query-filter/aggregations.md) documentation.
\ No newline at end of file
diff --git a/solutions/observability/logs/get-started-with-system-logs.md b/solutions/observability/logs/get-started-with-system-logs.md
index d58743c5db..f283a53f0b 100644
--- a/solutions/observability/logs/get-started-with-system-logs.md
+++ b/solutions/observability/logs/get-started-with-system-logs.md
@@ -10,7 +10,7 @@ applies_to:
::::{note}
-**For Observability Serverless projects**, the **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).
+**For Observability Serverless projects**, the **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).
::::
diff --git a/solutions/observability/logs/inspect-log-anomalies.md b/solutions/observability/logs/inspect-log-anomalies.md
index db2a031612..acd364ba88 100644
--- a/solutions/observability/logs/inspect-log-anomalies.md
+++ b/solutions/observability/logs/inspect-log-anomalies.md
@@ -14,10 +14,10 @@ When the {{anomaly-detect}} features of {{ml}} are enabled, you can use the **Lo
* A significant drop in the log rate might suggest that a piece of infrastructure stopped responding, and thus we’re serving fewer requests.
* A spike in the log rate could denote a DDoS attack. This may lead to an investigation of IP addresses from incoming requests.
-You can also view log anomalies directly in the [{{ml-app}} app](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-anomalies.md).
+You can also view log anomalies directly in the [{{ml-app}} app](/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-anomalies.md).
::::{note}
-This feature makes use of {{ml}} {{anomaly-jobs}}. To set up jobs, you must have `all` {{kib}} feature privileges for **{{ml-app}}**. Users that have full or read-only access to {{ml-features}} within a {{kib}} space can view the results of *all* {{anomaly-jobs}} that are visible in that space, even if they do not have access to the source indices of those jobs. You must carefully consider who is given access to {{ml-features}}; {{anomaly-job}} results may propagate field values that contain sensitive information from the source indices to the results. For more details, refer to [Set up {{ml-features}}](../../../explore-analyze/machine-learning/setting-up-machine-learning.md).
+This feature makes use of {{ml}} {{anomaly-jobs}}. To set up jobs, you must have `all` {{kib}} feature privileges for **{{ml-app}}**. Users that have full or read-only access to {{ml-features}} within a {{kib}} space can view the results of *all* {{anomaly-jobs}} that are visible in that space, even if they do not have access to the source indices of those jobs. You must carefully consider who is given access to {{ml-features}}; {{anomaly-job}} results may propagate field values that contain sensitive information from the source indices to the results. For more details, refer to [Set up {{ml-features}}](/explore-analyze/machine-learning/setting-up-machine-learning.md).
::::
diff --git a/solutions/observability/logs/parse-route-logs.md b/solutions/observability/logs/parse-route-logs.md
index 3fca7731e3..b51c1753d3 100644
--- a/solutions/observability/logs/parse-route-logs.md
+++ b/solutions/observability/logs/parse-route-logs.md
@@ -11,7 +11,7 @@ applies_to:
::::{note}
-**For Observability serverless projects**, the **Admin** role or higher is required to create ingest pipelines that parse and route logs. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+**For Observability serverless projects**, the **Admin** role or higher is required to create ingest pipelines that parse and route logs. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
::::
@@ -22,8 +22,8 @@ After parsing, you can use the structured fields to further organize your logs b
Refer to the following sections for more on parsing and organizing your log data:
-* [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields): Extract structured fields like timestamps, log levels, or IP addresses to make querying and filtering your data easier.
-* [Reroute log data to specific data streams](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-reroute-log-data-to-specific-data-streams): Route data from the generic data stream to a target data stream for more granular control over data retention, permissions, and processing.
+* [Extract structured fields](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields): Extract structured fields like timestamps, log levels, or IP addresses to make querying and filtering your data easier.
+* [Reroute log data to specific data streams](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-reroute-log-data-to-specific-data-streams): Route data from the generic data stream to a target data stream for more granular control over data retention, permissions, and processing.
## Extract structured fields [observability-parse-log-data-extract-structured-fields]
@@ -124,10 +124,10 @@ When you added the log to Elastic in the previous section, the `@timestamp` fiel
When looking into issues, you want to filter for logs by when the issue occurred not when the log was added to Elastic. To do this, extract the timestamp from the unstructured `message` field to the structured `@timestamp` field by completing the following:
-1. [Use an ingest pipeline to extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field)
-2. [Test the pipeline with the simulate pipeline API](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-pipeline-api)
-3. [Configure a data stream with an index template](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template)
-4. [Create a data stream](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-create-a-data-stream)
+1. [Use an ingest pipeline to extract the `@timestamp` field](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field)
+2. [Test the pipeline with the simulate pipeline API](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-pipeline-api)
+3. [Configure a data stream with an index template](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template)
+4. [Create a data stream](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-create-a-data-stream)
#### Use an ingest pipeline to extract the `@timestamp` field [observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field]
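At this first step, the pipeline only needs a dissect processor that splits the leading timestamp from the rest of the line. A minimal sketch (the description string is illustrative):

```console
PUT _ingest/pipeline/logs-example-default
{
  "description": "Extracts the timestamp",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{message}"
      }
    }
  ]
}
```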
@@ -320,14 +320,14 @@ Extracting the `log.level` field lets you filter by severity and focus on critic
To extract and use the `log.level` field:
-1. [Add the `log.level` field to the dissect processor pattern in your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-loglevel-to-your-ingest-pipeline)
-2. [Test the pipeline with the simulate API.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-api)
-3. [Query your logs based on the `log.level` field.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-query-logs-based-on-loglevel)
+1. [Add the `log.level` field to the dissect processor pattern in your ingest pipeline.](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-loglevel-to-your-ingest-pipeline)
+2. [Test the pipeline with the simulate API.](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-api)
+3. [Query your logs based on the `log.level` field.](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-query-logs-based-on-loglevel)
#### Add `log.level` to your ingest pipeline [observability-parse-log-data-add-loglevel-to-your-ingest-pipeline]
-Add the `%{log.level}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) section with this command:
+Add the `%{log.level}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) section with this command:
```console
PUT _ingest/pipeline/logs-example-default
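// Sketch of the request body (the description string is illustrative):
{
  "description": "Extracts the timestamp and log level",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{message}"
      }
    }
  ]
}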
@@ -350,7 +350,7 @@ Now your pipeline will extract these fields:
* The `log.level` field: `WARN`
* The `message` field: `192.168.1.101 Disk usage exceeds 90%.`
-In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section.
+In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section.
#### Test the pipeline with the simulate API [observability-parse-log-data-test-the-pipeline-with-the-simulate-api]
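The request follows the same shape as the earlier simulate call, now with the log level in the message. A sketch (the sample timestamp is illustrative):

```console
POST _ingest/pipeline/logs-example-default/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
      }
    }
  ]
}
```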
@@ -491,14 +491,14 @@ This section shows you how to extract the `host.ip` field from the following exa
To extract and use the `host.ip` field:
-1. [Add the `host.ip` field to your dissect processor in your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-hostip-to-your-ingest-pipeline)
-2. [Test the pipeline with the simulate API.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-api)
-3. [Query your logs based on the `host.ip` field.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-query-logs-based-on-hostip)
+1. [Add the `host.ip` field to your dissect processor in your ingest pipeline.](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-hostip-to-your-ingest-pipeline)
+2. [Test the pipeline with the simulate API.](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-api)
+3. [Query your logs based on the `host.ip` field.](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-query-logs-based-on-hostip)
#### Add `host.ip` to your ingest pipeline [observability-parse-log-data-add-hostip-to-your-ingest-pipeline]
-Add the `%{host.ip}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) section:
+Add the `%{host.ip}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) section:
```console
PUT _ingest/pipeline/logs-example-default
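// Sketch of the request body (the description string is illustrative):
{
  "description": "Extracts the timestamp, log level, and host IP",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
      }
    }
  ]
}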
@@ -522,7 +522,7 @@ Your pipeline will extract these fields:
* The `host.ip` field: `192.168.1.101`
* The `message` field: `Disk usage exceeds 90%.`
-In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section.
+In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section.
#### Test the pipeline with the simulate API [observability-parse-log-data-test-the-pipeline-with-the-simulate-api-1]
@@ -758,16 +758,16 @@ This section shows you how to use a reroute processor to send the high-severity
```
::::{note}
-When routing data to different data streams, we recommend picking a field with a limited number of distinct values to prevent an excessive increase in the number of data streams. For more details, refer to the [Size your shards](../../../deploy-manage/production-guidance/optimize-performance/size-shards.md) documentation.
+When routing data to different data streams, we recommend picking a field with a limited number of distinct values to prevent an excessive increase in the number of data streams. For more details, refer to the [Size your shards](/deploy-manage/production-guidance/optimize-performance/size-shards.md) documentation.
::::
To use a reroute processor:
-1. [Add a reroute processor to your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-a-reroute-processor-to-the-ingest-pipeline)
-2. [Add the example logs to your data stream.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-logs-to-a-data-stream)
-3. [Query your logs and verify the high-severity logs were routed to the new data stream.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-verify-the-reroute-processor-worked)
+1. [Add a reroute processor to your ingest pipeline.](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-a-reroute-processor-to-the-ingest-pipeline)
+2. [Add the example logs to your data stream.](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-logs-to-a-data-stream)
+3. [Query your logs and verify the high-severity logs were routed to the new data stream.](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-verify-the-reroute-processor-worked)
### Add a reroute processor to the ingest pipeline [observability-parse-log-data-add-a-reroute-processor-to-the-ingest-pipeline]
@@ -802,7 +802,7 @@ The previous command sets the following values for your reroute processor:
* `if`: Conditionally runs the processor. In the example, `"ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'"` means the processor runs when the `log.level` field is `WARN` or `ERROR`.
* `dataset`: The data stream dataset to route your document to if the previous condition is `true`. In the example, logs with a `log.level` of `WARN` or `ERROR` are routed to the `logs-critical-default` data stream, as sketched below.
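Putting those options together, a sketch of the pipeline with the reroute processor appended after the dissect processor (the `tag` value is illustrative):

```console
PUT _ingest/pipeline/logs-example-default
{
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
      }
    },
    {
      "reroute": {
        "tag": "high_severity_logs",
        "if": "ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'",
        "dataset": "critical"
      }
    }
  ]
}
```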
-In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section.
+In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section.
### Add logs to a data stream [observability-parse-log-data-add-logs-to-a-data-stream]
diff --git a/solutions/observability/logs/plaintext-application-logs.md b/solutions/observability/logs/plaintext-application-logs.md
index e37bf39277..dcfe1e6faf 100644
--- a/solutions/observability/logs/plaintext-application-logs.md
+++ b/solutions/observability/logs/plaintext-application-logs.md
@@ -14,21 +14,21 @@ Ingest and parse plaintext logs, including existing logs, from any programming l
Plaintext logs require some additional setup that structured logs do not require:
* To search, filter, and aggregate effectively, you need to parse plaintext logs using an ingest pipeline to extract structured fields. Parsing is based on log format, so you might have to maintain different settings for different applications.
-* To [correlate plaintext logs](../../../solutions/observability/logs/plaintext-application-logs.md#correlate-plaintext-logs), you need to inject IDs into log messages and parse them using an ingest pipeline.
+* To [correlate plaintext logs](/solutions/observability/logs/plaintext-application-logs.md#correlate-plaintext-logs), you need to inject IDs into log messages and parse them using an ingest pipeline.
To ingest, parse, and correlate plaintext logs:
-1. Ingest plaintext logs with [{{filebeat}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-filebeat) or [{{agent}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-the-agent) and parse them before indexing with an ingest pipeline.
-2. [Correlate plaintext logs with an {{apm-agent}}.](../../../solutions/observability/logs/plaintext-application-logs.md#correlate-plaintext-logs)
-3. [View logs in Discover](../../../solutions/observability/logs/plaintext-application-logs.md#view-plaintext-logs)
+1. Ingest plaintext logs with [{{filebeat}}](/solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-filebeat) or [{{agent}}](/solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-the-agent) and parse them before indexing with an ingest pipeline.
+2. [Correlate plaintext logs with an {{apm-agent}}.](/solutions/observability/logs/plaintext-application-logs.md#correlate-plaintext-logs)
+3. [View logs in Discover](/solutions/observability/logs/plaintext-application-logs.md#view-plaintext-logs)
## Ingest logs [ingest-plaintext-logs]
Send application logs to {{es}} using one of the following shipping tools:
-* [{{filebeat}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-filebeat) A lightweight data shipper that sends log data to {{es}}.
-* [{{agent}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-the-agent) A single agent for logs, metrics, security data, and threat prevention. Combined with Fleet, you can centrally manage {{agent}} policies and lifecycles directly from {{kib}}.
+* [{{filebeat}}](/solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-filebeat) A lightweight data shipper that sends log data to {{es}}.
+* [{{agent}}](/solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-the-agent) A single agent for logs, metrics, security data, and threat prevention. Combined with Fleet, you can centrally manage {{agent}} policies and lifecycles directly from {{kib}}.
### Ingest logs with {{filebeat}} [ingest-plaintext-logs-with-filebeat]
@@ -44,22 +44,36 @@ Install {{filebeat}} on the server you want to monitor by running the commands t
::::::{tab-item} DEB
```sh subs=true
+curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-amd64.deb
+sudo dpkg -i filebeat-{{version}}-amd64.deb
+```
+::::::
+
+::::::{tab-item} RPM
+```sh subs=true
+curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-x86_64.rpm
+sudo rpm -vi filebeat-{{version}}-x86_64.rpm
+```
+::::::
+
+::::::{tab-item} macOS
+```sh subs=true
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-darwin-x86_64.tar.gz
tar xzvf filebeat-{{version}}-darwin-x86_64.tar.gz
```
::::::
-::::::{tab-item} RPM
+::::::{tab-item} Linux
```sh subs=true
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-linux-x86_64.tar.gz
tar xzvf filebeat-{{version}}-linux-x86_64.tar.gz
```
::::::
-::::::{tab-item} macOS
-1. Download the {{filebeat}} Windows zip file: `https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-windows-x86_64.zip`
+::::::{tab-item} Windows
+1. Download the [{{filebeat}} Windows zip file](https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-windows-x86_64.zip).
2. Extract the contents of the zip file into `C:\Program Files`.
-3. Rename the `filebeat-{{version}}-windows-x86_64` directory to `{{filebeat}}`.
+3. Rename the _filebeat-{{version}}-windows-x86\_64_ directory to _{{filebeat}}_.
4. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**).
5. From the PowerShell prompt, run the following commands to install {{filebeat}} as a Windows service:
@@ -68,24 +82,9 @@ tar xzvf filebeat-{{version}}-linux-x86_64.tar.gz
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
```
-
If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
::::::
-::::::{tab-item} Linux
-```sh subs=true
-curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-amd64.deb
-sudo dpkg -i filebeat-{{version}}-amd64.deb
-```
-::::::
-
-::::::{tab-item} Windows
-```sh subs=true
-curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version}}-x86_64.rpm
-sudo rpm -vi filebeat-{{version}}-x86_64.rpm
-```
-::::::
-
:::::::
#### Step 2: Connect to {{es}} [step-2-plaintext-connect-to-your-project]
@@ -98,7 +97,7 @@ output.elasticsearch:
api_key: "id:api_key"
```
-1. Set the `hosts` to your deployment’s {{es}} endpoint. Copy the {{es}} endpoint from **Help menu () → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`.
+1. Set the `hosts` to your deployment’s {{es}} endpoint. Copy the {{es}} endpoint from **Help menu → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`.
2. From **Developer tools**, run the following command to create an API key that grants `manage` permissions for the `cluster` and the `filebeat-*` indices:
```console
@@ -143,7 +142,7 @@ filebeat.inputs:
{{filebeat}} comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:
-From the {{filebeat}} installation directory, set the [index template](../../../manage-data/data-store/templates.md) by running the command that aligns with your system:
+From the {{filebeat}} installation directory, set the [index template](/manage-data/data-store/templates.md) by running the command that aligns with your system:
:::::::{tab-set}
@@ -257,7 +256,7 @@ PUT _ingest/pipeline/filebeat* <1>
4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}` is required. `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](ecs://reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.`
-Refer to [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields) for more on using ingest pipelines to parse your log data.
+Refer to [Extract structured fields](/solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields) for more on using ingest pipelines to parse your log data.
After creating your pipeline, specify the pipeline for filebeat in the `filebeat.yml` file:
@@ -351,4 +350,4 @@ Learn about correlating plaintext logs in the agent-specific ingestion guides:
## View logs [view-plaintext-logs]
-To view logs ingested by {{filebeat}}, go to **Discover** from the main menu and create a data view based on the `filebeat-*` index pattern. You can also select **All logs** from the **Data views** menu as it includes the `filebeat-*` index pattern by default. Refer to [Create a data view](../../../explore-analyze/find-and-organize/data-views.md) for more information.
\ No newline at end of file
+To view logs ingested by {{filebeat}}, go to **Discover** from the main menu and create a data view based on the `filebeat-*` index pattern. You can also select **All logs** from the **Data views** menu as it includes the `filebeat-*` index pattern by default. Refer to [Create a data view](/explore-analyze/find-and-organize/data-views.md) for more information.
\ No newline at end of file
diff --git a/solutions/observability/logs/stream-any-log-file.md b/solutions/observability/logs/stream-any-log-file.md
index 47bb4e5c71..e4f770b39f 100644
--- a/solutions/observability/logs/stream-any-log-file.md
+++ b/solutions/observability/logs/stream-any-log-file.md
@@ -11,7 +11,7 @@ applies_to:
This guide shows you how to manually configure a standalone {{agent}} to send your log data to {{es}} using the `elastic-agent.yml` file.
-To get started quickly without manually configuring the {{agent}}, you can use the **Monitor hosts with {{agent}}** quickstart. Refer to the [quickstart documentation](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) for more information.
+To get started quickly without manually configuring the {{agent}}, you can use the **Monitor hosts with {{agent}}** quickstart. Refer to the [quickstart documentation](/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) for more information.
Continue with this guide for instructions on manual configuration.
@@ -39,7 +39,7 @@ To get started quickly, create an {{ech}} deployment and host it on AWS, GCP, or
:::{tab-item} Serverless
:sync: serverless
-The **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
+The **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles).
:::
@@ -49,9 +49,9 @@ The **Admin** role or higher is required to onboard log data. To learn more, ref
Complete these steps to install and configure the standalone {{agent}} and send your log data to {{es}}:
-1. [Download and extract the {{agent}} installation package.](../../../solutions/observability/logs/stream-any-log-file.md#logs-stream-extract-agent)
-2. [Install and start the {{agent}}.](../../../solutions/observability/logs/stream-any-log-file.md#logs-stream-install-agent)
-3. [Configure the {{agent}}.](../../../solutions/observability/logs/stream-any-log-file.md#logs-stream-agent-config)
+1. [Download and extract the {{agent}} installation package.](/solutions/observability/logs/stream-any-log-file.md#logs-stream-extract-agent)
+2. [Install and start the {{agent}}.](/solutions/observability/logs/stream-any-log-file.md#logs-stream-install-agent)
+3. [Configure the {{agent}}.](/solutions/observability/logs/stream-any-log-file.md#logs-stream-agent-config)
### Step 1: Download and extract the {{agent}} installation package [logs-stream-extract-agent]
@@ -64,29 +64,24 @@ On your host, download and extract the installation package that corresponds wit
::::::{tab-item} macOS
-```shell
-
+```shell subs=true
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-darwin-x86_64.tar.gz
tar xzvf elastic-agent-{{stack-version}}-darwin-x86_64.tar.gz
-
```
::::::
::::::{tab-item} Linux
-```shell
-
+```shell subs=true
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-linux-x86_64.tar.gz
tar xzvf elastic-agent-{{stack-version}}-linux-x86_64.tar.gz
-
```
::::::
::::::{tab-item} Windows
-```powershell
-
+```powershell subs=true
# PowerShell 5.0+
wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-windows-x86_64.zip -OutFile elastic-agent-{{stack-version}}-windows-x86_64.zip
Expand-Archive .\elastic-agent-{{stack-version}}-windows-x86_64.zip
@@ -103,11 +98,9 @@ To simplify upgrading to future versions of Elastic Agent, we recommend that y
You can install Elastic Agent in an unprivileged mode that does not require root privileges.
:::
-```shell
-
+```shell subs=true
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-amd64.deb
sudo dpkg -i elastic-agent-{{stack-version}}-amd64.deb
-
```
::::::
@@ -118,11 +111,9 @@ To simplify upgrading to future versions of Elastic Agent, we recommend that y
You can install Elastic Agent in an unprivileged mode that does not require root privileges.
:::
-```shell
-
+```shell subs=true
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-x86_64.rpm
sudo rpm -vi elastic-agent-{{stack-version}}-x86_64.rpm
-
```
::::::
@@ -272,7 +263,7 @@ inputs:
Next, set the values for these fields:
-* `hosts` – Copy the {{es}} endpoint from **Help menu () → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`.
+* `hosts` – Copy the {{es}} endpoint from **Help menu → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`.
* `api-key` – Use an API key to grant the agent access to {{es}}. To create an API key for your agent, refer to the [Create API keys for standalone agents](/reference/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent) documentation.
::::{note}
@@ -430,12 +421,12 @@ If you’re not seeing your log files in the UI, verify the following in the `el
* The path to your logs file under `paths` is correct.
* Your API key is in `id:api_key` format. If not, your API key may be in an unsupported format, and you’ll need to create an API key in **Beats** format.
-If you’re still running into issues, see [{{agent}} troubleshooting](../../../troubleshoot/ingest/fleet/common-problems.md) and [Configure standalone Elastic Agents](/reference/fleet/configure-standalone-elastic-agents.md).
+If you’re still running into issues, see [{{agent}} troubleshooting](/troubleshoot/ingest/fleet/common-problems.md) and [Configure standalone Elastic Agents](/reference/fleet/configure-standalone-elastic-agents.md).
## Next steps [logs-stream-next-steps]
After you have your agent configured and are streaming log data to {{es}}:
-* Refer to the [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data.
-* Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently.
\ No newline at end of file
+* Refer to the [Parse and organize logs](/solutions/observability/logs/parse-route-logs.md) documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data.
+* Refer to the [Filter and aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md) documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently.
\ No newline at end of file
diff --git a/solutions/observability/logs/stream-application-logs.md b/solutions/observability/logs/stream-application-logs.md
index 3988c35815..47fab9fe4c 100644
--- a/solutions/observability/logs/stream-application-logs.md
+++ b/solutions/observability/logs/stream-application-logs.md
@@ -46,7 +46,7 @@ With {{filebeat}} or {{agent}}, you can ingest plaintext logs, including existin
For plaintext logs to be useful, you need to use {{filebeat}} or {{agent}} to parse the log data.
-** Learn more in [Plaintext logs](../../../solutions/observability/logs/plaintext-application-logs.md)**
+**Learn more in [Plaintext logs](/solutions/observability/logs/plaintext-application-logs.md)**
### ECS formatted logs [observability-correlate-application-logs-ecs-formatted-logs]
@@ -60,7 +60,7 @@ Add ECS logging plugins to your logging libraries to format your logs into ECS-c
To use ECS logging, you need to modify your application and its log configuration.
-** Learn more in [ECS formatted logs](../../../solutions/observability/logs/ecs-formatted-application-logs.md)**
+**Learn more in [ECS formatted logs](/solutions/observability/logs/ecs-formatted-application-logs.md)**
#### {{apm-agent}} log reformatting [observability-correlate-application-logs-apm-agent-log-reformatting]
@@ -73,7 +73,7 @@ This feature is supported for the following {{apm-agent}}s:
* [Python](apm-agent-python://reference/logs.md#log-reformatting)
* [Java](apm-agent-java://reference/logs.md#log-reformatting)
-** Learn more in [ECS formatted logs](../../../solutions/observability/logs/ecs-formatted-application-logs.md)**
+**Learn more in [ECS formatted logs](/solutions/observability/logs/ecs-formatted-application-logs.md)**
### {{apm-agent}} log sending [observability-correlate-application-logs-apm-agent-log-sending]
@@ -82,7 +82,7 @@ Automatically capture and send logs directly to the managed intake service using
Log sending is supported in the Java {{apm-agent}}.
-** Learn more in [{{apm-agent}} log sending](../../../solutions/observability/logs/apm-agent-log-sending.md)**
+**Learn more in [{{apm-agent}} log sending](/solutions/observability/logs/apm-agent-log-sending.md)**
## Log correlation [observability-correlate-application-logs-log-correlation]
diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md
index 4afeb6df99..8817afab63 100644
--- a/solutions/observability/observability-ai-assistant.md
+++ b/solutions/observability/observability-ai-assistant.md
@@ -25,7 +25,7 @@ The {{obs-ai-assistant}} helps you:
* **Decode error messages**: Interpret stack traces and error logs to pinpoint root causes
* **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in Elasticsearch
* **Generate reports**: Create alert summaries and incident timelines with key metrics
-* **Build and execute queries**: Build Elasticsearch queries from natural language, convert Query DSL to ES|QL syntax, and execute queries directly from the chat interface
+* **Build and execute queries**: Build Elasticsearch queries from natural language, convert Query DSL to ES|QL syntax, and execute queries directly from the chat interface
* **Visualize data**: Create time-series charts and distribution graphs from your Elasticsearch data
## Requirements [obs-ai-requirements]
@@ -35,16 +35,16 @@ The AI assistant requires the following:
- An **Elastic deployment**:
- For **Observability**: {{stack}} version **8.9** or later, or an **{{observability}} serverless project**.
-
+
- For **Search**: {{stack}} version **8.16.0** or later, or a **{{serverless-short}} {{es}} project**.
-
+
- To run {{obs-ai-assistant}} on a self-hosted Elastic stack, you need an [appropriate license](https://www.elastic.co/subscriptions).
-
+
- An account with a third-party generative AI provider that preferably supports function calling. If your AI provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
- The free tier offered by a third-party generative AI provider may not be sufficient for the proper functioning of the AI assistant. In most cases, a paid subscription to one of the supported providers is required.
- Refer to the [documentation](../../deploy-manage/manage-connectors.md) for your provider to learn about supported and default models.
+ Refer to the [documentation](/deploy-manage/manage-connectors.md) for your provider to learn about supported and default models.
* The knowledge base requires a 4 GB {{ml}} node.
- In {{ecloud}} or {{ece}}, if you have Machine Learning autoscaling enabled, Machine Learning nodes will be started when using the knowledge base and AI Assistant. Therefore, using these features will incur additional costs.
@@ -105,7 +105,7 @@ The AI Assistant connects to one of these supported LLM providers:
::::
:::::
-The AI Assistant uses [ELSER](../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md), Elastic’s semantic search engine, to recall data from its internal knowledge base index to create retrieval augmented generation (RAG) responses. Adding data such as Runbooks, GitHub issues, internal documentation, and Slack messages to the knowledge base gives the AI Assistant context to provide more specific assistance.
+The AI Assistant uses [ELSER](/explore-analyze/machine-learning/nlp/ml-nlp-elser.md), Elastic’s semantic search engine, to recall data from its internal knowledge base index to create retrieval augmented generation (RAG) responses. Adding data such as Runbooks, GitHub issues, internal documentation, and Slack messages to the knowledge base gives the AI Assistant context to provide more specific assistance.
Add data to the knowledge base with one or more of the following methods:
@@ -119,7 +119,7 @@ You can also add information to the knowledge base by asking the AI Assistant to
To add external data to the knowledge base in {{kib}}:
-1. To open AI Assistant settings, find `AI Assistants` in the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md).
+1. To open AI Assistant settings, find `AI Assistants` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Under **{{obs-ai-assistant}}**, click **Manage settings**.
3. Switch to the **Knowledge base** tab.
4. Click the **New entry** button, and choose either:
@@ -146,7 +146,7 @@ To add external data to the knowledge base in {{kib}}:
**Setup process:**
1. **Create a connector**
-
+
**Use the UI**:
- Navigate to `Content / Connectors` in the global search field
@@ -356,7 +356,7 @@ To learn more about alerting, actions, and connectors, refer to [Alerting](incid
To access the AI Assistant Settings page, you can:
-* Find `AI Assistants` in the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md).
+* Find `AI Assistants` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
* Use the **More actions** menu inside the AI Assistant window.
The AI Assistant Settings page contains the following tabs:
diff --git a/solutions/observability/synthetics/create-monitors-ui.md b/solutions/observability/synthetics/create-monitors-ui.md
index 13397450e5..3bfc8411ec 100644
--- a/solutions/observability/synthetics/create-monitors-ui.md
+++ b/solutions/observability/synthetics/create-monitors-ui.md
@@ -42,16 +42,15 @@ To use the UI to add a lightweight monitor:
::::{note}
If you don’t see any locations listed, refer to the [troubleshooting guide](/troubleshoot/observability/troubleshooting-synthetics.md#synthetics-troubleshooting-no-locations) for guidance.
-
::::
:::::{note}
If you’ve [added a {{private-location}}](/solutions/observability/synthetics/monitor-resources-on-private-networks.md), you’ll see the {{private-location}} in the list of *Locations*.
- :::{image} /solutions/images/serverless-private-locations-monitor-locations.png
+ ```{image} /solutions/images/serverless-private-locations-monitor-locations.png
:alt: Screenshot of Monitor locations options including a {private-location}
:screenshot:
- :::
+ ```
:::::
diff --git a/solutions/observability/synthetics/create-monitors-with-projects.md b/solutions/observability/synthetics/create-monitors-with-projects.md
index e9e486ed37..c345a08488 100644
--- a/solutions/observability/synthetics/create-monitors-with-projects.md
+++ b/solutions/observability/synthetics/create-monitors-with-projects.md
@@ -193,10 +193,10 @@ For more details on writing journeys and configuring browser monitors, refer to
## Test and connect to your Observability project or Elastic Stack deployment[synthetics-get-started-project-test-and-connect-to-your-observability-project]
-::::{tab-set}
+:::::{tab-set}
:group: stack-serverless
-:::{tab-item} Elastic Stack
+::::{tab-item} Elastic Stack
:sync: stack
While inside the project directory you can do two things with the `npx @elastic/synthetics` command:
@@ -215,16 +215,16 @@ While inside the project directory you can do two things with the `npx @elastic/
One monitor will appear in the {{synthetics-app}} for each journey or lightweight monitor, and you’ll manage all monitors from your local environment. For more details on using the `push` command, refer to [`@elastic/synthetics push`](/solutions/observability/synthetics/cli.md#elastic-synthetics-push-command).
-::::{note}
+:::{note}
If you’ve [added a {{private-location}}](/solutions/observability/synthetics/monitor-resources-on-private-networks.md), you can `push` to that {{private-location}}.
To list available {{private-location}}s, run the [`elastic-synthetics locations` command](/solutions/observability/synthetics/cli.md#elastic-synthetics-locations-command) with the {{kib}} URL for the deployment from which to fetch available locations.
-::::
-
:::
-:::{tab-item} Serverless
+::::
+
+::::{tab-item} Serverless
:sync: serverless
While inside the Synthetics project directory you can do two things with the `npx @elastic/synthetics` command:
@@ -243,17 +243,17 @@ While inside the Synthetics project directory you can do two things with the `np
One monitor will appear in the Synthetics UI for each journey or lightweight monitor, and you’ll manage all monitors from your local environment. For more details on using the `push` command, refer to [`@elastic/synthetics push`](/solutions/observability/synthetics/cli.md#elastic-synthetics-push-command).
-::::{note}
+:::{note}
If you’ve [added a {{private-location}}](/solutions/observability/synthetics/monitor-resources-on-private-networks.md), you can `push` to that {{private-location}}.
To list available {{private-location}}s, run the [`elastic-synthetics locations` command](/solutions/observability/synthetics/cli.md#elastic-synthetics-locations-command) with the URL for the Observability project from which to fetch available locations.
-::::
-
:::
::::
+:::::
+
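+A minimal sketch of the test-then-push loop described in both tabs. The URL, API key, and project id values are placeholders, and flags may vary with your CLI version:
+
+```sh
+# Run all journeys in the current project locally, without pushing anything:
+npx @elastic/synthetics .
+
+# Push every monitor in the project to your deployment or Observability project:
+npx @elastic/synthetics push --url "https://my-deployment.kb.example.com" --auth "<api-key>" --id "my-synthetics-project"
+
+# List the locations (including any private locations) available to push to:
+npx @elastic/synthetics locations --url "https://my-deployment.kb.example.com"
+```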
## View in the Synthetics UI [synthetics-get-started-project-view-in-your-observability-project]
Then, go to **Synthetics** in your serverless Observability project or in {{kib}}. You should see your newly pushed monitors running. You can also go to the **Management** tab to see the monitors' configuration settings.
diff --git a/solutions/security/get-started/automatic-migration.md b/solutions/security/get-started/automatic-migration.md
index 250cb2188b..1028dc8208 100644
--- a/solutions/security/get-started/automatic-migration.md
+++ b/solutions/security/get-started/automatic-migration.md
@@ -12,12 +12,12 @@ This feature is in technical preview. It may change in the future, and you shoul
Automatic Migration for detection rules helps you quickly convert SIEM rules from the Splunk Processing Language (SPL) to the Elasticsearch Query Language ({{esql}}). If comparable Elastic-authored rules exist, it simplifies onboarding by mapping your rules to them. Otherwise, it creates custom rules on the fly so you can verify and edit them instead of writing them from scratch.
-You can ingest your data before migrating your rules, or migrate your rules first in which case the tool will recommend which data sources you need to power your migrated rules.
+You can ingest your data before migrating your rules, or migrate your rules first, in which case the tool will recommend which data sources you need to power your migrated rules.
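+To make the conversion concrete, here is a purely illustrative sketch of the kind of translation involved. The source rule, index pattern, and field names are invented for this example and are not output from the tool:
+
+```esql
+// Hypothetical SPL source rule (invented for illustration):
+//   index=windows EventCode=1 Image="*\\powershell.exe" | stats count by host
+// One plausible ES|QL translation, with assumed index pattern and ECS field names:
+FROM logs-windows.sysmon_operational-*
+| WHERE event.code == "1" AND process.name == "powershell.exe"
+| STATS count = COUNT(*) BY host.name
+```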
::::{admonition} Requirements
* The `SIEM migrations: All` Security sub-feature privilege.
* A working [LLM connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md).
-* {{stack}} users: an [Enterprise](https://www.elastic.co/pricing) subscription.
+* {{stack}} users: an [Enterprise](https://www.elastic.co/pricing) subscription.
* {{stack}} users: {{ml}} must be enabled.
* {{serverless-short}} users: a [Security Complete](/deploy-manage/deploy/elastic-cloud/project-settings.md) subscription.
* {{ecloud}} users: {{ml}} must be enabled. We recommend a minimum size of 4GB of RAM per {{ml}} zone.
@@ -29,7 +29,7 @@ You can ingest your data before migrating your rules, or migrate your rules firs
1. Find **Get started** in the navigation menu or use the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Under **Configure AI provider**, select a configured model or [add a new one](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md). For information on how different models perform, refer to the [LLM performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md).
3. Next, under **Migrate rules & add data**, click **Translate your existing SIEM rules to Elastic**, then **Upload rules**.
-4. Follow the instructions on the **Upload Splunk SIEM rules** flyout to export your rules from Splunk as JSON.
+4. Follow the instructions on the **Upload Splunk SIEM rules** flyout to export your rules from Splunk as JSON.
:::{image} /solutions/images/security-siem-migration-1.png
:alt: the Upload Splunk SIEM rules flyout
@@ -45,30 +45,30 @@ You can ingest your data before migrating your rules, or migrate your rules firs
| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1 AND eai:acl.app=splunksysmonsecurity
| where disabled=0
- | table id, title, search, description, action.escu.eli5,
+ | table id, title, search, description, action.escu.eli5,
```
The above sample query would download rules related to just the `splunksysmonsecurity` app.
- We don't recommend downloading all searches (for example with `| rest /servicesNS/-/-/saved/searches`) since most of the data will be irrelevant to SIEM rule migration.
+ We don't recommend downloading all searches (for example with `| rest /servicesNS/-/-/saved/searches`) since most of the data will be irrelevant to SIEM rule migration.
::::
-5. Select your JSON file and click **Upload**.
+5. Select your JSON file and click **Upload**.
::::{note}
If the file is large, you may need to separate it into multiple parts and upload them individually to avoid exceeding your LLM's context window. One way to split the file is sketched after these steps.
::::
6. After you upload your Splunk rules, Automatic Migration will detect whether they use any Splunk macros or lookups. If so, follow the instructions that appear to export and upload them. Alternatively, you can complete this step later — however, until you upload them, some of your migrated rules will have a `partially translated` status. If you upload them now, you don't have to wait on the page for them to be processed — a notification will appear when processing is complete.
-7. Click **Translate** to start the rule translation process. You don't need to stay on this page. A notification will appear when the process is complete.
+7. Click **Translate** to start the rule translation process. You don't need to stay on this page. A notification will appear when the process is complete.
-8. When migration is complete, click the notification or return to the **Get started** page then click **View translated rules** to open the **Translated rules** page.
+8. When migration is complete, click the notification, or return to the **Get started** page and click **View translated rules**, to open the **Translated rules** page.
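+A sketch of one way to do the split mentioned in step 5 from the command line, assuming the export is a single JSON array and that `jq` is available. The file name, chunk prefix, and chunk size of 50 are placeholders:
+
+```sh
+# Stream the array out one rule per line, then cut it into files of 50 rules each:
+jq -c '.[]' exported_rules.json | split -l 50 - rules_chunk_
+
+# Re-wrap each chunk as a JSON array so it can be uploaded like the original file:
+for f in rules_chunk_*; do
+  jq -s '.' "$f" > "$f.json"
+done
+```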
## The Translated rules page
-This section describes the **Translated rules** page's interface and explains how the data that appears here is derived.
+This section describes the **Translated rules** page's interface and explains how the data that appears here is derived.
-When you upload a new batch of rules, they are assigned a name and number, for example `SIEM rule migration 1`, or `SIEM rule migration 2`. Use the **Migrations** dropdown menu in the upper right to select which batch appears.
+When you upload a new batch of rules, it is assigned a name and number, for example `SIEM rule migration 1` or `SIEM rule migration 2`. Use the **Migrations** dropdown menu in the upper right to select which batch appears.
::::{image} /solutions/images/security-siem-migration-processed-rules.png
:alt: The translated rules page
@@ -105,12 +105,12 @@ The table's fields are as follows:
## Finalize translated rules
-Once you're on the **Translated rules** page, to install any rules that were partially translated or not translated, you will need to edit them. Optionally, you can also edit custom rules that were successfully translated to finetune them.
+Once you're on the **Translated rules** page, you must edit any rules that were partially translated or not translated before you can install them. Optionally, you can also edit custom rules that were successfully translated to fine-tune them.
:::{note}
-You cannot edit Elastic-authored rules using this interface, but after they are installed you can [edit them](/solutions/security/detect-and-alert/manage-detection-rules.md) from the **Rules** page.
+You cannot edit Elastic-authored rules using this interface, but after they are installed you can [edit them](/solutions/security/detect-and-alert/manage-detection-rules.md) from the **Rules** page.
:::
-
+
### Edit a custom rule
Click the rule's name to open the rule's details flyout to the **Translation** tab, which shows the source rule alongside the translated — or partially translated — Elastic version. You can update any part of the rule. When finished, click **Save**.
@@ -153,4 +153,4 @@ No matter how many times you use Automatic Migration, migration data will contin
**How does Automatic Migration handle Splunk rules which lookup other indices?**
-Rules that fall into this category will typically appear with a status of partially translated. You can use the [`LOOKUP JOIN`](elasticsearch://reference/query-languages/esql/esql-lookup-join.md) capability to help in this situation.
+Rules that fall into this category will typically appear with a `partially translated` status. You can use the [`LOOKUP JOIN`](elasticsearch://reference/query-languages/esql/esql-lookup-join.md) capability to address this.
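+For example, a hedged sketch of what such a rewrite could look like; the index names and field names here are assumptions, not output from the tool:
+
+```esql
+FROM logs-network.traffic-*
+| LOOKUP JOIN threat_indicators ON source.ip
+| WHERE threat.severity IS NOT NULL
+| KEEP @timestamp, source.ip, threat.severity
+```
+
+For `LOOKUP JOIN` to work, the joined index (`threat_indicators` here) must be created with the `lookup` index mode, and the join field must exist under the same name on both sides.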