diff --git a/deploy-manage/_snippets/deployment-options-overview.md b/deploy-manage/_snippets/deployment-options-overview.md
index 027587e5dc..da1dccaed6 100644
--- a/deploy-manage/_snippets/deployment-options-overview.md
+++ b/deploy-manage/_snippets/deployment-options-overview.md
@@ -8,5 +8,5 @@
**Advanced options**
* [**Self-managed**](/deploy-manage/deploy/self-managed.md): Install, configure, and run Elastic on your own premises.
-* [**{{ece}}**](https://www.elastic.co/guide/en/cloud-enterprise/current/Elastic-Cloud-Enterprise-overview.html): Deploy {{ecloud}} on public or private clouds, virtual machines, or your own premises.
+* [**{{ece}}**](/deploy-manage/deploy/cloud-enterprise.md): Deploy {{ecloud}} on public or private clouds, virtual machines, or your own premises.
* [**{{eck}}**](/deploy-manage/deploy/cloud-on-k8s.md): Deploy {{eck}}.
\ No newline at end of file
diff --git a/deploy-manage/deploy/cloud-enterprise/connect-elasticsearch.md b/deploy-manage/deploy/cloud-enterprise/connect-elasticsearch.md
index 4ab9b768c1..795daa62b0 100644
--- a/deploy-manage/deploy/cloud-enterprise/connect-elasticsearch.md
+++ b/deploy-manage/deploy/cloud-enterprise/connect-elasticsearch.md
@@ -60,7 +60,7 @@ Once you have the endpoint, use it in your client application. To test connectiv
## Connect using Cloud ID [ece-cloud-id]
-The Cloud ID reduces the number of steps required to start sending data from [Beats](https://www.elastic.co/guide/en/beats/libbeat/current/index.html) or [Logstash](https://www.elastic.co/guide/en/logstash/current/index.html) to your hosted {{es}} cluster on ECE, by assigning a unique ID to your cluster.
+The Cloud ID reduces the number of steps required to start sending data from [Beats](beats://reference/index.md) or [Logstash](logstash://reference/index.md) to your hosted {{es}} cluster on ECE, by assigning a unique ID to your cluster.
::::{note}
Connections through Cloud IDs are only supported in Beats and Logstash.
diff --git a/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md b/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md
index 7c39941a0b..3687c14dfb 100644
--- a/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md
+++ b/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md
@@ -10,7 +10,7 @@ mapped_pages:
For deployments that are version 8.0 and later, you have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apm/index.md) and [Fleet Server](/reference/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the {{es}} cluster. Fleet allows you to centrally manage Elastic Agents on many hosts.
-As part of provisioning, the APM Server and Fleet Server are already configured to work with {{es}} and {{kib}}. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications.
+As part of provisioning, the APM Server and Fleet Server are already configured to work with {{es}} and {{kib}}. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](/reference/apm-agents/index.md). The APM Agents get deployed within your services and applications.
From the deployment **Integrations Server** page you can also:
diff --git a/deploy-manage/deploy/cloud-enterprise/resource-overrides.md b/deploy-manage/deploy/cloud-enterprise/resource-overrides.md
index e25d36fad2..057dd08b4e 100644
--- a/deploy-manage/deploy/cloud-enterprise/resource-overrides.md
+++ b/deploy-manage/deploy/cloud-enterprise/resource-overrides.md
@@ -14,7 +14,7 @@ The RAM to CPU proportions can’t be overridden per instance.
## Override disk quota
-You can override the RAM to disk storage capacity for an instance under **Override disk quota** from the instance’s drop-down menu. This can be helpful when troubleshooting [watermark errors](../../../troubleshoot/elasticsearch/fix-watermark-errors.md) that result in a red [cluster health](https://www.elastic.co/guide/en/elasticsearch/reference/current/_cluster_health.html) status, which blocks configuration changes. A **Reset system default** message appears while disk quota overrides are set.
+You can override the RAM to disk storage capacity for an instance under **Override disk quota** from the instance’s drop-down menu. This can be helpful when troubleshooting [watermark errors](../../../troubleshoot/elasticsearch/fix-watermark-errors.md) that result in a red [cluster health](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-health) status, which blocks configuration changes. A **Reset system default** message appears while disk quota overrides are set.
::::{note}
Overriding the disk storage capacity does not restart the {{es}} node.
diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md
index fb27368775..846b9205ce 100644
--- a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md
+++ b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md
@@ -175,4 +175,4 @@ By default the operator manages a private CA and generates a self-signed certifi
This behavior and the relevant configuration is identical to what is done for {{es}} and {{kib}}. Check [Setting up your own certificate](/deploy-manage/security/secure-cluster-communications.md) for more information on how to use your own certificate to configure the TLS endpoint of the APM Server.
-For more details on how to configure the APM agents to work with custom certificates, check the [APM agents documentation](https://www.elastic.co/guide/en/apm/agent/index.html).
+For more details on how to configure the APM agents to work with custom certificates, check the [APM agents documentation](/reference/apm-agents/index.md).
diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md b/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md
index 9aa04b81e4..27b8c7f0ed 100644
--- a/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md
+++ b/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md
@@ -103,7 +103,7 @@ ECK supports the deployment of any Community Beat.
2. Set the `image` element to point to the image to be deployed.
3. Make sure the following roles exist in {{es}}:
- * If `elasticsearchRef` is provided, create the role `eck_beat_es_$type_role`, where `$type` is the Beat type. For example, when deploying `kafkabeat`, the role name is `eck_beat_es_kafkabeat_role`. This role must have the permissions required by the Beat. Check the [{{es}} documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/defining-roles.html) for more details.
+ * If `elasticsearchRef` is provided, create the role `eck_beat_es_$type_role`, where `$type` is the Beat type. For example, when deploying `kafkabeat`, the role name is `eck_beat_es_kafkabeat_role`. This role must have the permissions required by the Beat. Check the [{{es}} documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) for more details.
* If `kibanaRef` is provided, create the role `eck_beat_kibana_$type_role` with the permissions required to setup {{kib}} dashboards.
diff --git a/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md b/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md
index 97a36b0732..78eaef5620 100644
--- a/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md
+++ b/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md
@@ -40,11 +40,11 @@ This token is stored in a secret named `{{APM-server-name}}-apm-token` and can b
kubectl get secret/apm-server-quickstart-apm-token -o go-template='{{index .data "secret-token" | base64decode}}'
```
-For more information, check [APM Server Reference](https://www.elastic.co/guide/en/apm/server/current/index.html).
+For more information, check [APM Server Reference](/solutions/observability/apm/index.md).
## APM Server API keys [k8s-apm-api-keys]
-If you want to configure API keys to authorize requests to the APM Server, instead of using the APM Server CLI, you have to create API keys using the {{es}} [create API key API](https://www.elastic.co/guide/en/elasticsearch/reference/7.14/security-api-create-api-key.html), check the [APM Server documentation](/solutions/observability/apm/api-keys.md).
+If you want to configure API keys to authorize requests to the APM Server, instead of using the APM Server CLI, you have to create API keys using the {{es}} [create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key). For details, check the [APM Server documentation](/solutions/observability/apm/api-keys.md).
diff --git a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
index 7f2d8d8973..5bae5daba5 100644
--- a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
+++ b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
@@ -254,7 +254,7 @@ Some {{ls}} plugins need to write "checkpoints" to local storage in order to kee
Not all external data sources have mechanisms to track state internally, and {{ls}} checkpoints can help persist data.
-In the plugin documentation, look for configurations that call for a `path` with a settings like `sincedb`, `sincedb_path`, `sequence_path`, or `last_run_metadata_path`. Check out specific plugin documentation in the [Logstash Reference](https://www.elastic.co/guide/en/logstash/current) for details.
+In the plugin documentation, look for configurations that call for a `path` with settings such as `sincedb`, `sincedb_path`, `sequence_path`, or `last_run_metadata_path`. Check out specific plugin documentation in the [Logstash Reference](logstash://reference/index.md) for details.
```yaml
spec:
diff --git a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md
index 8d4509248f..c12989f760 100644
--- a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md
+++ b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md
@@ -82,7 +82,7 @@ helm install eck-stack-with-apm-server elastic/eck-stack \
## Enterprise Search server along with {{es}} and {{kib}} [k8s-install-enterprise-search-elasticsearch-kibana-helm]
-Enterprise Search is not available in {{stack}} versions 9.0 and later. For an example deployment of {{es}} version 8.x, {{kib}} 8.x, and an 8.x Enterprise Search server using the Helm chart, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/{{eck_release_branch}}/k8s-stack-helm-chart.html).
+Enterprise Search is not available in {{stack}} versions 9.0 and later. For an example deployment of {{es}} version 8.x, {{kib}} 8.x, and an 8.x Enterprise Search server using the Helm chart, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-stack-helm-chart.html).
## Install individual components of the {{stack}} [k8s-eck-stack-individual-components]
diff --git a/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md b/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md
index cab1a32a1d..6785da7562 100644
--- a/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md
+++ b/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md
@@ -15,7 +15,7 @@ The following guides provide specific instructions for deploying and configuring
* [{{ls}}](logstash.md)
::::{note}
-Enterprise Search is not available in {{stack}} versions 9.0 and later. To deploy or manage Enterprise Search in earlier versions, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/{{eck_release_branch}}/k8s-enterprise-search.html).
+Enterprise Search is not available in {{stack}} versions 9.0 and later. To deploy or manage Enterprise Search in earlier versions, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-enterprise-search.html).
::::
When orchestrating any of these applications, also consider the following topics:
diff --git a/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md b/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md
index 94ac7d75a6..324eccf1d3 100644
--- a/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md
+++ b/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md
@@ -10,7 +10,7 @@ mapped_pages:
## Enable HTTPS [k8s-logstash-https]
-Access to the [Logstash Monitoring APIs](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html#monitoring-api-security) use HTTPS by default - the operator will set the values `api.ssl.enabled: true`, `api.ssl.keystore.path` and `api.ssl.keystore.password`.
+Access to the [Logstash Monitoring APIs](logstash://reference/monitoring-logstash.md#monitoring-api-security) uses HTTPS by default - the operator will set the values `api.ssl.enabled: true`, `api.ssl.keystore.path`, and `api.ssl.keystore.password`.
You can further secure the {{ls}} Monitoring APIs by requiring HTTP Basic authentication by setting `api.auth.type: basic`, and providing the relevant credentials `api.auth.basic.username` and `api.auth.basic.password`:
diff --git a/deploy-manage/deploy/elastic-cloud/cloud-hosted.md b/deploy-manage/deploy/elastic-cloud/cloud-hosted.md
index dc9e44c3f6..24771704d3 100644
--- a/deploy-manage/deploy/elastic-cloud/cloud-hosted.md
+++ b/deploy-manage/deploy/elastic-cloud/cloud-hosted.md
@@ -130,7 +130,7 @@ $$$faq-subscriptions$$$**Do you offer support?**
: Yes, all subscription levels for {{ech}} include support, handled by email or through the Elastic Support Portal. Different subscription levels include different levels of support. For the Standard subscription level, there is no service-level agreement (SLA) on support response times. Gold and Platinum subscription levels include an SLA on response times to tickets and dedicated resources. To learn more, check [Getting Help](/troubleshoot/index.md).
$$$faq-where$$$**Where are deployments hosted?**
-: We host our {{es}} clusters on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Check out which [regions we support](https://www.elastic.co/guide/en/cloud/current/ec-reference-regions.html) and what [hardware we use](https://www.elastic.co/guide/en/cloud/current/ec-reference-hardware.html). New data centers are added all the time.
+: We host our {{es}} clusters on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Check out which [regions we support](cloud://reference/cloud-hosted/regions.md) and what [hardware we use](cloud://reference/cloud-hosted/hardware.md). New data centers are added all the time.
$$$faq-vs-aws$$$**What is the difference between {{ech}} and the Amazon {{es}} Service?**
: {{ech}} is the only hosted and managed {{es}} service built, managed, and supported by the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. With {{ech}}, you always get the latest versions of the software. Our service is built on best practices and years of experience hosting and managing thousands of {{es}} clusters in the Cloud and on premise. For more information, check the following Amazon and Elastic {{es}} Service [comparison page](https://www.elastic.co/aws-elasticsearch-service).
diff --git a/deploy-manage/deploy/elastic-cloud/create-an-organization.md b/deploy-manage/deploy/elastic-cloud/create-an-organization.md
index 96de797180..1f25783323 100644
--- a/deploy-manage/deploy/elastic-cloud/create-an-organization.md
+++ b/deploy-manage/deploy/elastic-cloud/create-an-organization.md
@@ -99,7 +99,7 @@ You can subscribe to {{ecloud}} at any time during your trial. [Billing](../../.
### Get started with your trial [general-sign-up-trial-how-do-i-get-started-with-my-trial]
-Start by checking out some common approaches for [moving data into {{ecloud}}](https://www.elastic.co/guide/en/cloud/current/ec-cloud-ingest-data.html).
+Start by checking out some common approaches for [moving data into {{ecloud}}](/manage-data/ingest.md).
### Maintain access to your trial projects and data [general-sign-up-trial-what-happens-at-the-end-of-the-trial]
diff --git a/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md b/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md
index 80e37fbdb2..0c996a80c8 100644
--- a/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md
+++ b/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md
@@ -90,7 +90,7 @@ This table compares Elasticsearch capabilities between {{ech}} deployments and S
| [**Reindexing from remote**](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) | ✅ | **Planned** | Anticipated in a future release |
| **Repository management** | ✅ | Managed | Automatically managed by Elastic |
| [**Scripted metric aggregations**](elasticsearch://reference/aggregations/search-aggregations-metrics-scripted-metric-aggregation.md) | ✅ | ❌ | Not available in Serverless
The alternative for this in Serverless is [ES|QL](/explore-analyze/query-filter/languages/esql.md) |
-| [**Search applications**](https://www.elastic.co/guide/en/elasticsearch/reference/8.18/search-application-overview.html) | - UI and APIs
- Maintenance mode (beta) | - API-only
- Maintenance mode (beta) | UI not available in Serverless |
+| [**Search applications**](/solutions/search/search-applications.md) | - UI and APIs
- Maintenance mode (beta) | - API-only
- Maintenance mode (beta) | UI not available in Serverless |
| **Shard management** | User-configurable | Managed by Elastic | No manual shard allocation in Serverless |
| [**Watcher**](/explore-analyze/alerts-cases/watcher.md) | ✅ | ❌ | Use **Kibana Alerts** instead, which provides rich integrations across use cases |
| **Web crawler** | ❌ (Managed Elastic Crawler discontinued with Enterprise Search in 9.0) | Self-managed only | Use [**self-managed crawler**](https://github.com/elastic/crawler) |
diff --git a/deploy-manage/deploy/elastic-cloud/heroku.md b/deploy-manage/deploy/elastic-cloud/heroku.md
index 148d1090f6..45ca652d4f 100644
--- a/deploy-manage/deploy/elastic-cloud/heroku.md
+++ b/deploy-manage/deploy/elastic-cloud/heroku.md
@@ -28,7 +28,7 @@ Not all features of {{ecloud}} are available to Heroku users. Specifically, you
Generally, if a feature is shown as available in the [{{heroku}} console](https://cloud.elastic.co?page=docs&placement=docs-body), you can use it.
-[{{es}} Machine Learning](https://www.elastic.co/guide/en/machine-learning/current/index.html), [Elastic APM](/solutions/observability/apm/index.md) and [Elastic Fleet Server](https://www.elastic.co/guide/en/fleet/current/fleet-overview.html) are not supported by the {{es}} Add-On for Heroku.
+[{{es}} Machine Learning](/explore-analyze/machine-learning.md), [Elastic APM](/solutions/observability/apm/index.md), and [Elastic Fleet Server](/reference/fleet/index.md) are not supported by the {{es}} Add-On for Heroku.
For other restrictions that apply to all of {{ecloud}}, refer to [](/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md).
diff --git a/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md b/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md
index 9702a149eb..8c3c1aae7f 100644
--- a/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md
+++ b/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md
@@ -10,7 +10,7 @@ mapped_pages:
For deployments that are version 8.0 and later, you have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apm/index.md) and [Fleet Server](/reference/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the {{es}} cluster. Fleet allows you to centrally manage Elastic Agents on many hosts.
-As part of provisioning, the APM Server and Fleet Server are already configured to work with {{es}} and {{kib}}. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications.
+As part of provisioning, the APM Server and Fleet Server are already configured to work with {{es}} and {{kib}}. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](/reference/apm-agents/index.md). The APM Agents get deployed within your services and applications.
From the deployment **Integrations Server** page you can also:
diff --git a/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md b/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md
index ccb51e4337..95dc4d188b 100644
--- a/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md
+++ b/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md
@@ -70,7 +70,7 @@ $$$ec-restrictions-apis-kibana$$$
* {{es}} plugins, are not enabled by default for security purposes. Reach out to support if you would like to enable {{es}} plugins support on your account.
* Some {{es}} plugins do not apply to {{ecloud}}. For example, you won’t ever need to change discovery, as {{ecloud}} handles how nodes discover one another.
% * In {{es}} 5.0 and later, site plugins are no longer supported. This change does not affect the site plugins {{ecloud}} might provide out of the box, such as Kopf or Head, since these site plugins are serviced by our proxies and not {{es}} itself.
-% * In {{es}} 5.0 and later, site plugins such as Kopf and Paramedic are no longer provided. We recommend that you use our [cluster performance metrics](../../monitor/stack-monitoring.md), [X-Pack monitoring features](../../monitor/stack-monitoring.md) and Kibana’s (6.3+) [Index Management UI](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-mgmt.html) if you want more detailed information or perform index management actions.
+% * In {{es}} 5.0 and later, site plugins such as Kopf and Paramedic are no longer provided. We recommend that you use our [cluster performance metrics](../../monitor/stack-monitoring.md), [X-Pack monitoring features](../../monitor/stack-monitoring.md) and Kibana’s (6.3+) [Index Management UI](/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md) if you want more detailed information or perform index management actions.
## Watcher [ec-restrictions-watcher]
diff --git a/deploy-manage/deploy/elastic-cloud/serverless.md b/deploy-manage/deploy/elastic-cloud/serverless.md
index 127bb227da..fa66cfbae8 100644
--- a/deploy-manage/deploy/elastic-cloud/serverless.md
+++ b/deploy-manage/deploy/elastic-cloud/serverless.md
@@ -67,7 +67,7 @@ Migration paths between hosted deployments and serverless projects are currently
**How can I move data to or from serverless projects?**
-We are working on data migration tools! In the interim, [use Logstash](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-logstash.html) with {{es}} input and output plugins to move data to and from serverless projects.
+We are working on data migration tools! In the interim, [use Logstash](logstash://reference/index.md) with {{es}} input and output plugins to move data to and from serverless projects.
**How does serverless ensure compatibility between software versions?**
diff --git a/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md b/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md
index 9d8ec1265b..26171263b7 100644
--- a/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md
+++ b/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md
@@ -381,7 +381,7 @@ Beginning with {{stack}} version 8.0, [Integrations Server](manage-integrations-
You have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apm/index.md) and [Fleet Server](/reference/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the {{es}} cluster. Fleet allows you to centrally manage Elastic Agents on many hosts.
-As part of provisioning, the APM Server and Fleet Server are already configured to work with {{es}} and {{kib}}. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications.
+As part of provisioning, the APM Server and Fleet Server are already configured to work with {{es}} and {{kib}}. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](/reference/apm-agents/index.md). The APM Agents get deployed within your services and applications.
From the deployment **APM & Fleet** page you can also:
diff --git a/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md b/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md
index a3907cdd11..1a5dbb4925 100644
--- a/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md
+++ b/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md
@@ -85,7 +85,7 @@ Bundles
The dictionary `synonyms.txt` can be used as `synonyms.txt` or using the full path `/app/config/synonyms.txt` in the `synonyms_path` of the `synonym-filter`.
- To learn more about analyzing with synonyms, check [Synonym token filter](https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-synonym-tokenfilter.html) and [Formatting Synonyms](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/synonym-formats.html).
+ To learn more about analyzing with synonyms, check [Synonym token filter](elasticsearch://reference/text-analysis/analysis-synonym-tokenfilter.md) and [Formatting Synonyms](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/synonym-formats.html).
**GeoIP database bundle**
diff --git a/deploy-manage/deploy/self-managed/_snippets/connect-clients.md b/deploy-manage/deploy/self-managed/_snippets/connect-clients.md
index 10e0dc187e..e67eade0b2 100644
--- a/deploy-manage/deploy/self-managed/_snippets/connect-clients.md
+++ b/deploy-manage/deploy/self-managed/_snippets/connect-clients.md
@@ -7,6 +7,6 @@ When you start {{es}} for the first time, TLS is configured automatically for th
{{es-conf}}{{slash}}certs{{slash}}http_ca.crt
```
-The hex-encoded SHA-256 fingerprint of this certificate is also output to the terminal. Any clients that connect to {{es}}, such as the [{{es}} Clients](https://www.elastic.co/guide/en/elasticsearch/client/index.html), {{beats}}, standalone {{agent}}s, and {{ls}} must validate that they trust the certificate that {{es}} uses for HTTPS. {{fleet-server}} and {{fleet}}-managed {{agent}}s are automatically configured to trust the CA certificate. Other clients can establish trust by using either the fingerprint of the CA certificate or the CA certificate itself.
+The hex-encoded SHA-256 fingerprint of this certificate is also output to the terminal. Any clients that connect to {{es}}, such as the [{{es}} Clients](/reference/elasticsearch-clients/index.md), {{beats}}, standalone {{agent}}s, and {{ls}}, must validate that they trust the certificate that {{es}} uses for HTTPS. {{fleet-server}} and {{fleet}}-managed {{agent}}s are automatically configured to trust the CA certificate. Other clients can establish trust by using either the fingerprint of the CA certificate or the CA certificate itself.
If the auto-configuration process already completed, you can still obtain the fingerprint of the security certificate. You can also copy the CA certificate to your machine and configure your client to use it.
\ No newline at end of file
diff --git a/deploy-manage/deploy/self-managed/_snippets/start-local.md b/deploy-manage/deploy/self-managed/_snippets/start-local.md
index e32f4e8b94..461d3d6fa1 100644
--- a/deploy-manage/deploy/self-managed/_snippets/start-local.md
+++ b/deploy-manage/deploy/self-managed/_snippets/start-local.md
@@ -36,4 +36,4 @@ For more detailed information about the `start-local` setup, refer to the [READM
## Next steps [local-dev-next-steps]
-Use our [quick start guides](https://www.elastic.co/guide/en/elasticsearch/reference/current/quickstart.html) to learn the basics of {{es}}.
+Use our [quick start guides](/solutions/search/api-quickstarts.md) to learn the basics of {{es}}.
diff --git a/deploy-manage/distributed-architecture/shard-request-cache.md b/deploy-manage/distributed-architecture/shard-request-cache.md
index f7d41621d2..4a258ebae7 100644
--- a/deploy-manage/distributed-architecture/shard-request-cache.md
+++ b/deploy-manage/distributed-architecture/shard-request-cache.md
@@ -18,7 +18,7 @@ The shard-level request cache module caches the local results on each shard. Thi
You can control the size and expiration of the cache at the node level using the [shard request cache settings](elasticsearch://reference/elasticsearch/configuration-reference/shard-request-cache-settings.md).
::::{important}
-By default, the requests cache will only cache the results of search requests where `size=0`, so it will not cache `hits`, but it will cache `hits.total`, [aggregations](/explore-analyze/query-filter/aggregations.md), and [suggestions](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-suggesters.html).
+By default, the requests cache will only cache the results of search requests where `size=0`, so it will not cache `hits`, but it will cache `hits.total`, [aggregations](/explore-analyze/query-filter/aggregations.md), and [suggestions](elasticsearch://reference/elasticsearch/rest-apis/search-suggesters.md).
Most queries that use `now` (see [Date Math](elasticsearch://reference/elasticsearch/rest-apis/common-options.md#date-math)) cannot be cached.
diff --git a/deploy-manage/kibana-reporting-configuration.md b/deploy-manage/kibana-reporting-configuration.md
index 1817bd0162..e14f4a6cc5 100644
--- a/deploy-manage/kibana-reporting-configuration.md
+++ b/deploy-manage/kibana-reporting-configuration.md
@@ -129,7 +129,7 @@ Granting the privilege to generate reports also grants the user the privilege to
With [{{kib}} application privileges](#grant-user-access), you can use the [role APIs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-roles) to grant access to the {{report-features}}, using **All** privileges, or sub-feature privileges.
:::{note}
-This API request needs to be run against the [{{kib}} API endpoint](https://www.elastic.co/guide/en/kibana/current/api.html).
+This API request needs to be run against the [{{kib}} API endpoint](https://www.elastic.co/docs/api/doc/kibana/).
:::
```console
diff --git a/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md b/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md
index 1146eeff72..4c0805c62a 100644
--- a/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md
+++ b/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md
@@ -78,7 +78,7 @@ If you later need to update the remote connection with different permissions, yo
::::::{tab-item} TLS certificate (deprecated)
### Configuring trust with clusters in {{ecloud}} [ece-trust-ec]
-A deployment can be configured to trust all or specific deployments from an organization in [{{ecloud}}](https://www.elastic.co/guide/en/cloud/current):
+A deployment can be configured to trust all or specific deployments from an organization in [{{ecloud}}](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md):
1. From the **Security** menu, select **Remote Connections > Add trusted environment** and select **{{ecloud}} Organization**.
2. Enter the organization ID (which can be found near the organization name).
diff --git a/deploy-manage/security/k8s-network-policies.md b/deploy-manage/security/k8s-network-policies.md
index e8391aca7d..d785fbf0af 100644
--- a/deploy-manage/security/k8s-network-policies.md
+++ b/deploy-manage/security/k8s-network-policies.md
@@ -435,4 +435,4 @@ spec:
## Isolating Enterprise Search [k8s-network-policies-enterprise-search-isolation]
-Enterprise Search is not available in {{stack}} versions 9.0 and later. For an example of Enterprise Search isolation using network policies in previous {{stack}} versions, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/{{eck_release_branch}}/k8s_prerequisites.html#k8s-network-policies-enterprise-search-isolation).
+Enterprise Search is not available in {{stack}} versions 9.0 and later. For an example of Enterprise Search isolation using network policies in previous {{stack}} versions, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s_prerequisites.html#k8s-network-policies-enterprise-search-isolation).
diff --git a/deploy-manage/security/kibana-session-management.md b/deploy-manage/security/kibana-session-management.md
index b3c47763ea..d268868289 100644
--- a/deploy-manage/security/kibana-session-management.md
+++ b/deploy-manage/security/kibana-session-management.md
@@ -15,7 +15,7 @@ When you log in, {{kib}} creates a session that is used to authenticate subseque
When your session expires, or you log out, {{kib}} will invalidate your cookie and remove session information from the index. {{kib}} also periodically invalidates and removes any expired sessions that weren’t explicitly invalidated.
-To manage user sessions programmatically, {{kib}} exposes [session management APIs](https://www.elastic.co/guide/en/kibana/current/session-management-api.html). For details, check out [Session and cookie security settings](kibana://reference/configuration-reference/security-settings.md#security-session-and-cookie-settings).
+To manage user sessions programmatically, {{kib}} exposes [session management APIs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-user-session). For details, check out [Session and cookie security settings](kibana://reference/configuration-reference/security-settings.md#security-session-and-cookie-settings).
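For example, an administrator with the required privileges could invalidate all active sessions with the invalidate sessions API (a minimal sketch; run it against your {{kib}} endpoint):

```console
POST /api/security/session/_invalidate
{
  "match": "all"
}
```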
## Session idle timeout [session-idle-timeout]
diff --git a/deploy-manage/security/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md b/deploy-manage/security/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md
index b9df2544ba..122a231dc9 100644
--- a/deploy-manage/security/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md
+++ b/deploy-manage/security/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md
@@ -29,11 +29,11 @@ Refer to [Audit events](elasticsearch://reference/elasticsearch/elasticsearch-au
## `trace.id` field in {{kib}} audit events
-In {{kib}}, the [trace.id](https://www.elastic.co/guide/en/kibana/current/xpack-security-audit-logging.html#field-trace-id) field allows to correlate multiple events that originate from the same request.
+In {{kib}}, the [trace.id](kibana://reference/kibana-audit-events.md#tracing-fields) field allows you to correlate multiple events that originate from the same request.
Additionally, this field helps correlate events from one request with the backend calls that create {{es}} audit events. When {{kib}} sends requests to {{es}}, the `trace.id` value is propagated and stored in the `opaque_id` attribute of {{es}} audit logs, allowing cross-component correlation.
-Refer to [{{kib}} audit events](https://www.elastic.co/guide/en/kibana/current/xpack-security-audit-logging.html#xpack-security-ecs-audit-logging) for a complete description of {{kib}} auditing events.
+Refer to [{{kib}} audit events](kibana://reference/kibana-audit-events.md#xpack-security-ecs-audit-logging) for a complete description of {{kib}} auditing events.
## Examples
diff --git a/deploy-manage/security/using-kibana-with-security.md b/deploy-manage/security/using-kibana-with-security.md
index bb1a46003c..c527374981 100644
--- a/deploy-manage/security/using-kibana-with-security.md
+++ b/deploy-manage/security/using-kibana-with-security.md
@@ -38,7 +38,7 @@ The {{kib}} server can instruct browsers to enable additional security controls
1. Enable `HTTP Strict Transport Security (HSTS)`.
- Use [`strictTransportSecurity`](https://www.elastic.co/guide/en/kibana/current/settings.html#server-securityResponseHeaders-strictTransportSecurity) to ensure that browsers will only attempt to access [{{kib}} with SSL/TLS encryption](./set-up-basic-security-plus-https.md#encrypt-kibana-browser). This is designed to prevent manipulator-in-the-middle attacks. To configure this with a lifetime of one year in your [`kibana.yml`](/deploy-manage/stack-settings.md):
+ Use [`strictTransportSecurity`](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-stricttransportsecurity) to ensure that browsers will only attempt to access [{{kib}} with SSL/TLS encryption](./set-up-basic-security-plus-https.md#encrypt-kibana-browser). This is designed to prevent manipulator-in-the-middle attacks. To configure this with a lifetime of one year in your [`kibana.yml`](/deploy-manage/stack-settings.md):
```js
server.securityResponseHeaders.strictTransportSecurity: "max-age=31536000"
@@ -50,7 +50,7 @@ The {{kib}} server can instruct browsers to enable additional security controls
2. Disable embedding.
- Use [`disableEmbedding`](https://www.elastic.co/guide/en/kibana/current/settings.html#server-securityResponseHeaders-disableEmbedding) to ensure that {{kib}} cannot be embedded in other websites. To configure this in your [`kibana.yml`](/deploy-manage/stack-settings.md):
+ Use [`disableEmbedding`](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-disableembedding) to ensure that {{kib}} cannot be embedded in other websites. To configure this in your [`kibana.yml`](/deploy-manage/stack-settings.md):
```js
server.securityResponseHeaders.disableEmbedding: true
diff --git a/deploy-manage/tools/snapshot-and-restore.md b/deploy-manage/tools/snapshot-and-restore.md
index ceb997133e..0a62d49b0f 100644
--- a/deploy-manage/tools/snapshot-and-restore.md
+++ b/deploy-manage/tools/snapshot-and-restore.md
@@ -84,7 +84,7 @@ By default, a snapshot of a cluster contains the cluster state, all regular data
- [Persistent cluster settings](/deploy-manage/deploy/self-managed/configure-elasticsearch.md#cluster-setting-types)
- [Index templates](/manage-data/data-store/templates.md)
-- [Legacy index templates](https://www.elastic.co/guide/en/elasticsearch/reference/8.17/indices-templates-v1.html)
+- [Legacy index templates](https://www.elastic.co/guide/en/elasticsearch/reference/8.18/indices-templates-v1.html)
- [Ingest pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md)
- [ILM policies](/manage-data/lifecycle/index-lifecycle-management.md)
- [Stored scripts](/explore-analyze/scripting/modules-scripting-using.md#script-stored-scripts)
@@ -154,7 +154,7 @@ You can’t restore an index to an earlier version of {{es}}. For example, you c
A compatible snapshot can contain indices created in an older incompatible version. For example, a snapshot of a 7.17 cluster can contain an index created in 6.8. Restoring the 6.8 index to an 8.17 cluster fails unless you can use the [archive functionality](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md). Keep this in mind if you take a snapshot before upgrading a cluster.
-As a workaround, you can first restore the index to another cluster running the latest version of {{es}} that’s compatible with both the index and your current cluster. You can then use [reindex-from-remote](https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-reindex.html#reindex-from-remote) to rebuild the index on your current cluster. Reindex from remote is only possible if the index’s [`_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) is enabled.
+As a workaround, you can first restore the index to another cluster running the latest version of {{es}} that’s compatible with both the index and your current cluster. You can then use [reindex-from-remote](https://www.elastic.co/guide/en/elasticsearch/reference/8.18/docs-reindex.html#reindex-from-remote) to rebuild the index on your current cluster. Reindex from remote is only possible if the index’s [`_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) is enabled.
Reindexing from remote can take significantly longer than restoring a snapshot. Before you start, test the reindex from remote process with a subset of the data to estimate your time requirements.
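The reindex-from-remote step can be sketched as follows (the remote host and index name are placeholders; the remote cluster must be allowlisted in `reindex.remote.whitelist` and may require credentials):

```console
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://oldhost:9200"
    },
    "index": "my-index"
  },
  "dest": {
    "index": "my-index"
  }
}
```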
diff --git a/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md b/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md
index 0a382f1bf6..544f6179d7 100644
--- a/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md
+++ b/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md
@@ -3,7 +3,7 @@ navigation_title: "AWS S3 repository"
applies_to:
deployment:
- ece:
+ ece:
---
# Configure a snapshot repository using AWS S3 [ece-aws-custom-repository]
@@ -41,7 +41,7 @@ To add a snapshot repository:
Used for Microsoft Azure, Google Cloud Platform, or for some Amazon S3 repositories where you need to provide additional configuration parameters not supported by the S3 repository option. Configurations must be specified in a valid JSON format. For example:
- Amazon S3 (check [supported settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/repository-s3.html#repository-s3-repository)):
+ Amazon S3 (check [supported settings](/deploy-manage/tools/snapshot-and-restore/s3-repository.md#repository-s3-repository)):
```json
{
diff --git a/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md b/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md
index 19f746edf2..bc4e507d9b 100644
--- a/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md
+++ b/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md
@@ -277,7 +277,7 @@ If you’re restoring to a different cluster, see [Restore to a different cluste
}
```
-3. $$$restore-create-file-realm-user$$$If you use {{es}} security features, log in to a node host, navigate to the {{es}} installation directory, and add a user with the `superuser` role to the file realm using the [`elasticsearch-users`](https://www.elastic.co/guide/en/elasticsearch/reference/current/users-command.html) tool.
+3. $$$restore-create-file-realm-user$$$If you use {{es}} security features, log in to a node host, navigate to the {{es}} installation directory, and add a user with the `superuser` role to the file realm using the [`elasticsearch-users`](elasticsearch://reference/elasticsearch/command-line-tools/users-command.md) tool.
For example, the following command creates a user named `restore_user`.
@@ -287,7 +287,7 @@ If you’re restoring to a different cluster, see [Restore to a different cluste
Use this file realm user to authenticate requests until the restore operation is complete.
-4. Use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) to set [`action.destructive_requires_name`](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-management-settings.html#action-destructive-requires-name) to `false`. This lets you delete data streams and indices using wildcards.
+4. Use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) to set [`action.destructive_requires_name`](elasticsearch://reference/elasticsearch/configuration-reference/index-management-settings.md#action-destructive-requires-name) to `false`. This lets you delete data streams and indices using wildcards.
```console
PUT _cluster/settings
@@ -464,7 +464,7 @@ Before you start a restore operation, ensure the new cluster has enough capacity
* Add nodes or upgrade your hardware to increase capacity.
* Restore fewer indices and data streams.
-* Reduce the [number of replicas](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#dynamic-index-number-of-replicas) for restored indices.
+* Reduce the [number of replicas](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) for restored indices.
For example, the following restore snapshot API request uses the `index_settings` option to set `index.number_of_replicas` to `1`.
diff --git a/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-eck.md b/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-eck.md
index 71b80fad5a..4cda6ba237 100644
--- a/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-eck.md
+++ b/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-eck.md
@@ -8,16 +8,16 @@ applies_to:
# Upgrade your deployment on {{eck}} (ECK)
-The ECK orchestrator can safely perform upgrades to newer versions of the {{stack}}.
+The ECK orchestrator can safely perform upgrades to newer versions of the {{stack}}.
-Once you're [prepared to upgrade](/deploy-manage/upgrade/prepare-to-upgrade.md), ensure the ECK version is [compatible](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-supported.html) with the {{stack}} version you’re upgrading to. For example, if you're upgrading to 9.0.0, the minimum required ECK version is 3.0.0. If it's incompatible, [upgrade your orchestrator](/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md).
+Once you're [prepared to upgrade](/deploy-manage/upgrade/prepare-to-upgrade.md), ensure the ECK version is [compatible](/deploy-manage/deploy/cloud-on-k8s.md) with the {{stack}} version you’re upgrading to. For example, if you're upgrading to 9.0.0, the minimum required ECK version is 3.0.0. If it's incompatible, [upgrade your orchestrator](/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md).
-## Perform the upgrade
+## Perform the upgrade
-1. In the resource spec file, modify the `version` field for the desired {{stack}} version.
-2. Save your changes. The orchestrator will start the upgrade process automatically.
+1. In the resource spec file, modify the `version` field for the desired {{stack}} version.
+2. Save your changes. The orchestrator will start the upgrade process automatically.
-In this example, we’re modifying the version to `9.0.0`.
+In this example, we’re modifying the version to `9.0.0`.
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
@@ -142,4 +142,4 @@ Check out [Nodes orchestration](/deploy-manage/deploy/cloud-on-k8s/nodes-orchest
## Next steps
-Once you've successfully upgraded your deployment, [upgrade your ingest components](/deploy-manage/upgrade/ingest-components.md), such as {{ls}}, {{agents}}, or {{beats}}.
+Once you've successfully upgraded your deployment, [upgrade your ingest components](/deploy-manage/upgrade/ingest-components.md), such as {{ls}}, {{agents}}, or {{beats}}.
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md b/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md
index a5ef2a1555..b67e83210f 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md
@@ -44,7 +44,7 @@ If your Active Directory domain supports authentication with user-provided crede
1. Add a realm configuration of type `active_directory` to [`elasticsearch.yml`](/deploy-manage/stack-settings.md) under the `xpack.security.authc.realms.active_directory` namespace. At a minimum, you must specify the Active Directory `domain_name` and `order`.
- See [Active Directory realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#ref-ad-settings) for all of the options you can set for an `active_directory` realm.
+ See [Active Directory realm settings](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-ad-settings) for all of the options you can set for an `active_directory` realm.
:::{note}
Binding to Active Directory fails if the domain name is not mapped in DNS.
@@ -116,7 +116,7 @@ If your Active Directory domain supports authentication with user-provided crede
1. (Optional) Configure how {{es}} should interact with multiple Active Directory servers.
- The `load_balance.type` setting can be used at the realm level. Two modes of operation are supported: failover and load balancing. See [Active Directory realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#ref-ad-settings).
+ The `load_balance.type` setting can be used at the realm level. Two modes of operation are supported: failover and load balancing. See [Active Directory realm settings](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-ad-settings).
2. (Optional) To protect passwords, [encrypt communications](/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md#tls-active-directory) between {{es}} and the Active Directory server.
@@ -264,7 +264,7 @@ Additional metadata can be extracted from the Active Directory server by configu
The `load_balance.type` setting can be used at the realm level to configure how the {{security-features}} should interact with multiple Active Directory servers. Two modes of operation are supported: failover and load balancing.
-See [Load balancing and failover](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#load-balancing).
+See [Load balancing and failover](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#load-balancing).
## Encrypting communications between {{es}} and Active Directory [tls-active-directory]
@@ -312,7 +312,7 @@ xpack:
certificate_authorities: [ "ES_PATH_CONF/cacert.pem" ]
```
-For more information about these settings, see [Active Directory realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#ref-ad-settings).
+For more information about these settings, see [Active Directory realm settings](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-ad-settings).
::::{note}
By default, when you configure {{es}} to connect to Active Directory using SSL/TLS, it attempts to verify the hostname or IP address specified with the `url` attribute in the realm configuration with the values in the certificate. If the values in the certificate and realm configuration do not match, {{es}} does not allow a connection to the Active Directory server. This is done to protect against man-in-the-middle attacks. If necessary, you can disable this behavior by setting the `ssl.verification_mode` property to `certificate`.
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md b/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md
index b35129baad..a4151fded9 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md
@@ -85,7 +85,7 @@ jacknich:{PBKDF2}50000$z1CLJt0MEFjkIK5iEfgvfnA6xq7lF25uasspsTKSo5Q=$XxCVLbaKDimO
```
:::{tip}
-To limit exposure to credential theft and mitigate credential compromise, the file realm stores passwords and caches user credentials according to security best practices. By default, a hashed version of user credentials is stored in memory, using a salted sha-256 hash algorithm and a hashed version of passwords is stored on disk salted and hashed with the bcrypt hash algorithm. To use different hash algorithms, see [User cache and password hash algorithms](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#hashing-settings).
+To limit exposure to credential theft and mitigate credential compromise, the file realm stores passwords and caches user credentials according to security best practices. By default, a hashed version of user credentials is stored in memory using a salted sha-256 hash algorithm, and a hashed version of passwords is stored on disk, salted and hashed with the bcrypt hash algorithm. To use different hash algorithms, see [User cache and password hash algorithms](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#hashing-settings).
:::
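For instance (a sketch, not a recommendation — the realm name `file1` is hypothetical), you could change both hash algorithms in [`elasticsearch.yml`](/deploy-manage/stack-settings.md):

```yaml
# On-disk password hashing (cluster-wide)
xpack.security.authc.password_hashing.algorithm: pbkdf2
# In-memory credential cache hashing for a file realm named "file1"
xpack.security.authc.realms.file.file1.cache.hash_algo: pbkdf2
```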
#### `users_roles`
@@ -158,7 +158,7 @@ stringData:
**Using a tool**
-To avoid editing these files manually, you can use the [elasticsearch-users](https://www.elastic.co/guide/en/elasticsearch/reference/current/users-command.html) tool:
+To avoid editing these files manually, you can use the [elasticsearch-users](elasticsearch://reference/elasticsearch/command-line-tools/users-command.md) tool:
::::{tab-set}
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md
index 751d084fd7..e70b9f56e8 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md
@@ -357,7 +357,7 @@ xpack.security.authc.providers:
One of the most popular use cases for anonymous access is when you embed {{kib}} into other applications and don’t want to force your users to log in to view it. If you configured {{kib}} to use anonymous access as the sole authentication mechanism, you don’t need to do anything special while embedding {{kib}}.
-For information on how to embed, refer to [Embed {{kib}} content in a web page](https://www.elastic.co/guide/en/kibana/current/embedding.html).
+For information on how to embed, refer to [Embed {{kib}} content in a web page](/explore-analyze/report-and-share.md#embed-code).
#### Anonymous access session [anonymous-access-session]
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md b/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md
index b9ce7eb6c2..3ca3b4065b 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md
@@ -303,7 +303,7 @@ xpack:
You can also specify the individual server certificates rather than the CA certificate, but this is only recommended if you have a single LDAP server or the certificates are self-signed
-For more information about these settings, see [LDAP realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#ref-ldap-settings).
+For more information about these settings, see [LDAP realm settings](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-ldap-settings).
::::{note}
By default, when you configure {{es}} to connect to an LDAP server using SSL/TLS, it attempts to verify the hostname or IP address specified with the `url` attribute in the realm configuration with the values in the certificate. If the values in the certificate and realm configuration do not match, {{es}} does not allow a connection to the LDAP server. This is done to protect against man-in-the-middle attacks. If necessary, you can disable this behavior by setting the `ssl.verification_mode` property to `certificate`.
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/native.md b/deploy-manage/users-roles/cluster-or-deployment-auth/native.md
index 394d2d6be0..dcc442036b 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/native.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/native.md
@@ -47,7 +47,7 @@ You can configure a `native` realm in the `xpack.security.authc.realms.native` n
::::
- See [Native realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#ref-native-settings) for all of the options you can set for the `native` realm. For example, the following snippet shows a `native` realm configuration that sets the `order` to zero so the realm is checked first:
+ See [Native realm settings](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-native-settings) for all of the options you can set for the `native` realm. For example, the following snippet shows a `native` realm configuration that sets the `order` to zero so the realm is checked first:
```yaml
xpack.security.authc.realms.native.native1:
@@ -55,7 +55,7 @@ You can configure a `native` realm in the `xpack.security.authc.realms.native` n
```
::::{note}
- To limit exposure to credential theft and mitigate credential compromise, the native realm stores passwords and caches user credentials according to security best practices. By default, a hashed version of user credentials is stored in memory, using a salted `sha-256` hash algorithm and a hashed version of passwords is stored on disk salted and hashed with the `bcrypt` hash algorithm. To use different hash algorithms, see [User cache and password hash algorithms](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#hashing-settings).
+ To limit exposure to credential theft and mitigate credential compromise, the native realm stores passwords and caches user credentials according to security best practices. By default, a hashed version of user credentials is stored in memory using a salted `sha-256` hash algorithm, and a hashed version of passwords is stored on disk, salted and hashed with the `bcrypt` hash algorithm. To use different hash algorithms, see [User cache and password hash algorithms](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#hashing-settings).
::::
2. Restart {{es}}.
@@ -100,7 +100,7 @@ deployment:
self: all
```
-You can also reset passwords for users in the native realm through the command line using the [`elasticsearch-reset-password`](https://www.elastic.co/guide/en/elasticsearch/reference/current/reset-password.html) tool.
+You can also reset passwords for users in the native realm through the command line using the [`elasticsearch-reset-password`](elasticsearch://reference/elasticsearch/command-line-tools/reset-password.md) tool.
For example, the following command changes the password for a user with the username `user1` to an auto-generated value, and prints the new password to the terminal:
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md b/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md
index 0f55d31026..eae40448cb 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md
@@ -27,7 +27,7 @@ The default realm chain contains the `file` and `native` realms. To explicitly c
If your realm chain does not contain `file` or `native` realm or does not disable them explicitly, `file` and `native` realms will be added automatically to the beginning of the realm chain in that order. To opt out from this automatic behavior, you can explicitly configure the `file` and `native` realms with the `order` and `enabled` settings.
-Each realm has a unique name that identifies it. Each type of realm dictates its own set of required and optional settings. There are also settings that are common to all realms. To explore these settings, refer to [Realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#realm-settings).
+Each realm has a unique name that identifies it. Each type of realm dictates its own set of required and optional settings. There are also settings that are common to all realms. To explore these settings, refer to [Realm settings](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#realm-settings).
The following snippet configures a realm chain that enables the `file` realm, two LDAP realms, and an Active Directory realm, and disables the `native` realm.
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md b/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md
index 5e0bbdf3ef..afad6d4711 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md
@@ -55,7 +55,7 @@ You must create a service token to use a service account. You can create a servi
* The [create service account token API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-service-token), which saves the new service token in the `.security` index and returns the bearer token in the HTTP response.
* Self-managed and {{eck}} deployments only: The [elasticsearch-service-tokens](elasticsearch://reference/elasticsearch/command-line-tools/service-tokens-command.md) CLI tool, which saves the new service token in the `$ES_HOME/config/service_tokens` file and outputs the bearer token to your terminal
-We recommend that you create service tokens via the REST API rather than the CLI. The API stores service tokens within the `.security` index which means that the tokens are available for authentication on all nodes, and will be backed up within cluster snapshots. The use of the CLI is intended for cases where there is an external orchestration process (such as [{{ece}}](https://www.elastic.co/guide/en/cloud-enterprise/current) or [{{eck}}](https://www.elastic.co/guide/en/cloud-on-k8s/current)) that will manage the creation and distribution of the `service_tokens` file.
+We recommend that you create service tokens via the REST API rather than the CLI. The API stores service tokens within the `.security` index which means that the tokens are available for authentication on all nodes, and will be backed up within cluster snapshots. The use of the CLI is intended for cases where there is an external orchestration process (such as [{{ece}}](/deploy-manage/deploy/cloud-enterprise.md) or [{{eck}}](/deploy-manage/deploy/cloud-on-k8s.md)) that will manage the creation and distribution of the `service_tokens` file.
Both of these methods (API and CLI) create a service token with a guaranteed secret string length of `22`. The minimal, acceptable length of a secret string for a service token is `10`. If the secret string doesn’t meet this minimal length, authentication with {{es}} will fail without even checking the value of the service token.
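For example, the following request creates a token named `token1` (a hypothetical name) for the built-in `elastic/fleet-server` service account; the response includes the bearer secret, which is shown only once:

```console
POST /_security/service/elastic/fleet-server/credential/token/token1
```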
diff --git a/explore-analyze/alerts-cases/alerts/alerting-getting-started.md b/explore-analyze/alerts-cases/alerts/alerting-getting-started.md
index a16090dbf8..883c3261b5 100644
--- a/explore-analyze/alerts-cases/alerts/alerting-getting-started.md
+++ b/explore-analyze/alerts-cases/alerts/alerting-getting-started.md
@@ -119,4 +119,4 @@ Functionally, the {{alert-features}} differ in that:
* {{kib}} rules track and persist the state of each detected condition through alerts. This makes it possible to mute and throttle individual alerts, and detect changes in state such as resolution.
* Actions are linked to alerts. Actions are fired for each occurrence of a detected condition, rather than for the entire rule.
-At a higher level, the {{alert-features}} allow rich integrations across use cases like [**APM**](https://www.elastic.co/guide/en/kibana/current/observability.html#apm-app), [**Metrics**](https://www.elastic.co/guide/en/kibana/current/observability.html#metrics-app), [**Security**](https://www.elastic.co/guide/en/kibana/current/xpack-siem.html), and [**Uptime**](https://www.elastic.co/guide/en/kibana/current/observability.html#uptime-app). Prepackaged rule types simplify setup and hide the details of complex, domain-specific detections, while providing a consistent interface across {{kib}}.
+At a higher level, the {{alert-features}} allow rich integrations across use cases like [**APM**](/solutions/observability/apm/index.md), [**Metrics**](/solutions/observability/infra-and-hosts.md), [**Security**](/solutions/security.md), and [**Uptime**](/solutions/observability/uptime/index.md). Prepackaged rule types simplify setup and hide the details of complex, domain-specific detections, while providing a consistent interface across {{kib}}.
diff --git a/explore-analyze/alerts-cases/alerts/alerting-setup.md b/explore-analyze/alerts-cases/alerts/alerting-setup.md
index 075c08f2a9..429e1e0a83 100644
--- a/explore-analyze/alerts-cases/alerts/alerting-setup.md
+++ b/explore-analyze/alerts-cases/alerts/alerting-setup.md
@@ -95,7 +95,7 @@ When you disable a rule, it retains the associated API key which is reused when
You can generate a new API key at any time in **{{stack-manage-app}} > {{rules-ui}}** or in the rule details page by selecting **Update API key** in the actions menu.
-If you manage your rules by using {{kib}} APIs, they support support both key- and token-based authentication as described in [Authentication](https://www.elastic.co/guide/en/kibana/current/api.html#api-authentication). To use key-based authentication, create API keys and use them in the header of your API calls as described in [API Keys](../../../deploy-manage/api-keys/elasticsearch-api-keys.md). To use token-based authentication, provide a username and password; an API key that matches the current privileges of the user is created automatically. In both cases, the API key is subsequently associated with the rule and used when it runs.
+If you manage your rules by using {{kib}} APIs, they support both key- and token-based authentication as described in [Authentication](https://www.elastic.co/docs/api/doc/kibana/authentication). To use key-based authentication, create API keys and use them in the header of your API calls as described in [API Keys](../../../deploy-manage/api-keys/elasticsearch-api-keys.md). To use token-based authentication, provide a username and password; an API key that matches the current privileges of the user is created automatically. In both cases, the API key is subsequently associated with the rule and used when it runs.
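
For example, a key-based call to the find rules API might look like the following sketch (the `KIBANA_URL` and `API_KEY` placeholders are assumptions for illustration):

```shell
# List rules using an API key; the kbn-xsrf header is required on
# write requests to Kibana and is harmless on reads.
curl -s "${KIBANA_URL}/api/alerting/rules/_find" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "kbn-xsrf: true"
```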
::::{important}
If a rule requires certain privileges, such as index privileges, to run and a user without those privileges updates the rule, the rule will no longer function. Conversely, if a user with greater or administrator privileges modifies the rule, it will begin running with increased privileges. The same behavior occurs when you change the API key in the header of your API calls.
diff --git a/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md b/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md
index d5421794b8..92387ef9bc 100644
--- a/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md
+++ b/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md
@@ -92,7 +92,7 @@ Task Manager provides a visible status which can be used to diagnose issues and
When a rule is created, a task is created and scheduled to run at the specified interval. For example, when a rule is created and configured to check every 5 minutes, the underlying task is expected to run every 5 minutes. In practice, after each time the rule runs, the task is scheduled to run again in 5 minutes, rather than being scheduled to run every 5 minutes indefinitely.
-If you use the [alerting APIs](https://www.elastic.co/guide/en/kibana/current/alerting-apis.html), such as the get rule API or find rules API, you’ll get an object that contains rule details:
+If you use the [alerting APIs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-alerting), such as the get rule API or find rules API, you’ll get an object that contains rule details:
```txt
{
@@ -156,7 +156,7 @@ For example:
}
```
-For the rule to work, this task must be in a healthy state. Its health information is available in the [Task Manager health API](https://www.elastic.co/guide/en/kibana/current/task-manager-api-health.html) or in verbose logs if debug logging is enabled. When diagnosing the health state of the task, you will most likely be interested in the following fields:
+For the rule to work, this task must be in a healthy state. Its health information is available in the [Task Manager health API](https://www.elastic.co/docs/api/doc/kibana/operation/operation-task-manager-health) or in verbose logs if debug logging is enabled. When diagnosing the health state of the task, you will most likely be interested in the following fields:
`status`
: This is the current status of the task. Is Task Manager currently running? Is Task Manager idle, and you’re waiting for it to run? Or has Task Manager tried to run and failed?
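
Assuming basic authentication, the health information above can be fetched directly from the Task Manager health endpoint (a sketch; replace the URL and credentials with those of your deployment):

```shell
curl -s -u elastic:changeme "http://localhost:5601/api/task_manager/_health"
```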
diff --git a/explore-analyze/alerts-cases/alerts/create-manage-rules.md b/explore-analyze/alerts-cases/alerts/create-manage-rules.md
index a2b236c92c..6873d6079e 100644
--- a/explore-analyze/alerts-cases/alerts/create-manage-rules.md
+++ b/explore-analyze/alerts-cases/alerts/create-manage-rules.md
@@ -29,7 +29,7 @@ Access to rules is granted based on your {{alert-features}} privileges. For more
## Create and edit rules [create-edit-rules]
-Some rules must be created within the context of a {{kib}} app like [Metrics](https://www.elastic.co/guide/en/kibana/current/observability.html#metrics-app), [**APM**](https://www.elastic.co/guide/en/kibana/current/observability.html#apm-app), or [Uptime](https://www.elastic.co/guide/en/kibana/current/observability.html#uptime-app), but others are generic. Generic rule types can be created in **{{rules-ui}}** by clicking the **Create rule** button. This will launch a flyout that guides you through selecting a rule type and configuring its conditions and actions.
+Some rules must be created within the context of a {{kib}} app like [Metrics](/solutions/observability/infra-and-hosts.md), [APM](/solutions/observability/apm/index.md), or [Uptime](/solutions/observability/uptime/index.md), but others are generic. Generic rule types can be created in **{{rules-ui}}** by clicking the **Create rule** button. This will launch a flyout that guides you through selecting a rule type and configuring its conditions and actions.
After a rule is created, you can open the action menu (…) and select **Edit rule** to re-open the flyout and change the rule properties.
diff --git a/explore-analyze/dashboards/building.md b/explore-analyze/dashboards/building.md
index d7d633e79e..0cc3be5852 100644
--- a/explore-analyze/dashboards/building.md
+++ b/explore-analyze/dashboards/building.md
@@ -18,7 +18,7 @@ mapped_pages:
$$$dashboard-minimum-requirements$$$
To create or edit dashboards, you first need to:
-* have [data indexed into {{es}}](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-general-purpose.html#gp-gs-add-data) and a [data view](../find-and-organize/data-views.md). A data view is a subset of your {{es}} data, and allows you to load just the right data when building a visualization or exploring it.
+* have [data indexed into {{es}}](/manage-data/ingest.md) and a [data view](../find-and-organize/data-views.md). A data view is a subset of your {{es}} data, and allows you to load just the right data when building a visualization or exploring it.
::::{tip}
If you don’t have data at hand and still want to explore dashboards, you can import one of the [sample data sets](../../manage-data/ingest/sample-data.md) available.
diff --git a/explore-analyze/discover/discover-get-started.md b/explore-analyze/discover/discover-get-started.md
index e0985d9cb8..8542ace816 100644
--- a/explore-analyze/discover/discover-get-started.md
+++ b/explore-analyze/discover/discover-get-started.md
@@ -28,7 +28,7 @@ Select the data you want to explore, and then specify the time range in which to
1. Find **Discover** in the navigation menu or by using the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Select the data view that contains the data you want to explore.
::::{tip}
- By default, {{kib}} requires a [{{data-source}}](../find-and-organize/data-views.md) to access your Elasticsearch data. A {{data-source}} can point to one or more indices, [data streams](../../manage-data/data-store/data-streams.md), or [index aliases](https://www.elastic.co/guide/en/elasticsearch/reference/current/alias.html). When adding data to {{es}} using one of the many integrations available, sometimes data views are created automatically, but you can also create your own.
+ By default, {{kib}} requires a [{{data-source}}](../find-and-organize/data-views.md) to access your Elasticsearch data. A {{data-source}} can point to one or more indices, [data streams](../../manage-data/data-store/data-streams.md), or [index aliases](/manage-data/data-store/aliases.md). When adding data to {{es}} using one of the many integrations available, sometimes data views are created automatically, but you can also create your own.
You can also [try {{esql}}](try-esql.md), which lets you query any data you have in {{es}} without specifying a {{data-source}} first.
::::
@@ -69,7 +69,7 @@ You can later filter the data that shows in the chart and in the table by specif
3. Select the **Plus** icon to add fields to the results table. You can also drag them from the list into the table.

-
+
When you add fields to the table, the **Summary** column is replaced.

@@ -127,7 +127,7 @@ In the following example, we’re adding 2 fields: A simple "Hello world" field,
If a field can be [aggregated](../query-filter/aggregations.md), you can quickly visualize it in detail by opening it in **Lens** from **Discover**. **Lens** is the default visualization editor in {{kib}}.
1. In the list of fields, find an aggregatable field. For example, with the sample data, you can look for `day_of_week`.
-
+

2. In the popup, click **Visualize**.
diff --git a/explore-analyze/find-and-organize/data-views.md b/explore-analyze/find-and-organize/data-views.md
index 4704118f3e..572a90f80c 100644
--- a/explore-analyze/find-and-organize/data-views.md
+++ b/explore-analyze/find-and-organize/data-views.md
@@ -21,7 +21,7 @@ $$$management-cross-cluster-search$$$
$$$data-views-read-only-access$$$
-By default, analytics features such as Discover require a {{data-source}} to access the {{es}} data that you want to explore. A {{data-source}} can point to one or more indices, [data streams](../../manage-data/data-store/data-streams.md), or [index aliases](https://www.elastic.co/guide/en/elasticsearch/reference/current/alias.html). For example, a {{data-source}} can point to your log data from yesterday, or all indices that contain your data.
+By default, analytics features such as Discover require a {{data-source}} to access the {{es}} data that you want to explore. A {{data-source}} can point to one or more indices, [data streams](../../manage-data/data-store/data-streams.md), or [index aliases](/manage-data/data-store/aliases.md). For example, a {{data-source}} can point to your log data from yesterday, or all indices that contain your data.
::::{note}
In certain apps, you can also query your {{es}} data using [{{esql}}](../query-filter/languages/esql.md). With {{esql}}, data views aren't required.
diff --git a/explore-analyze/find-and-organize/saved-objects.md b/explore-analyze/find-and-organize/saved-objects.md
index ff9e9287fb..7c622a0075 100644
--- a/explore-analyze/find-and-organize/saved-objects.md
+++ b/explore-analyze/find-and-organize/saved-objects.md
@@ -167,7 +167,7 @@ When you import a saved object and it is created with a different ID, if 1. it c
If you are using the saved objects APIs directly, you should be aware of these changes:
::::{warning}
-Some of the saved objects APIs are deprecated since version 8.7.0. For more information, refer to the [API docs](https://www.elastic.co/guide/en/kibana/current/saved-objects-api.html)
+Some of the saved objects APIs are deprecated since version 8.7.0. For more information, refer to the [API docs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-saved-objects)
::::
diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md b/explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md
index a2d402f4f5..0c0792aa1b 100644
--- a/explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md
+++ b/explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md
@@ -32,7 +32,7 @@ In this example, you can see that some of the anomalies fall within the shaded b
Both the **Anomaly Explorer** and the **Single Metric Viewer** contain an **Anomalies** table that shows key details about each anomaly such as time, typical and actual values, and probability. The **Anomaly explanation** section helps you to interpret a given anomaly by providing further insights about its type, impact, and score.
-If you have [{{anomaly-detect-cap}} alert rules](https://www.elastic.co/guide/en/machine-learning/current/creating-anomaly-alert-rules.html) applied to an {{anomaly-job}} and an alert has occured for the rule, you can view how the alert correlates with the {{anomaly-detect}} results in the **Anomaly Explorer** by using the **Anomaly timeline** swimlane and the **Alerts** panel. The **Alerts** panel contains a line chart with the alerts count over time. The cursor on the line chart is in sync with the anomaly swimlane making it easier to review anomalous buckets with the spike produced by the alerts. The panel also contains aggregated information for each alert rule associated with the job selection such as the total number of active, recovered, and untracked alerts for the selected job and time range. An alert context menu is displayed when an anomaly swimlane cell is selected with alerts in the chosen time range. The context menu contains the alert counters for the selected time buckets.
+If you have [{{anomaly-detect-cap}} alert rules](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-alerts.md#creating-anomaly-alert-rules) applied to an {{anomaly-job}} and an alert has occurred for the rule, you can view how the alert correlates with the {{anomaly-detect}} results in the **Anomaly Explorer** by using the **Anomaly timeline** swimlane and the **Alerts** panel. The **Alerts** panel contains a line chart with the alerts count over time. The cursor on the line chart is in sync with the anomaly swimlane, making it easier to review anomalous buckets with the spike produced by the alerts. The panel also contains aggregated information for each alert rule associated with the job selection, such as the total number of active, recovered, and untracked alerts for the selected job and time range. An alert context menu is displayed when an anomaly swimlane cell is selected with alerts in the chosen time range. The context menu contains the alert counters for the selected time buckets.
:::{image} /explore-analyze/images/machine-learning-anomaly-explorer-alerts.png
:alt: Alerts table in the Anomaly Explorer
diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-anomaly-detection-job-types.md b/explore-analyze/machine-learning/anomaly-detection/ml-anomaly-detection-job-types.md
index feb68b58d0..5db834dafd 100644
--- a/explore-analyze/machine-learning/anomaly-detection/ml-anomaly-detection-job-types.md
+++ b/explore-analyze/machine-learning/anomaly-detection/ml-anomaly-detection-job-types.md
@@ -39,7 +39,7 @@ In the case of the population jobs, the analyzed data is split by the distinct v
For example, if you want to detect IP addresses with unusual request rates compared to the number of requests coming from other IP addresses, you can use a population job. That job has a `count` function to detect an unusual number of requests, and the analysis is split by the `client_ip` field. In this context, an event is anomalous if the request rate of an IP address is unusually high or low compared to the request rate of all IP addresses in the population. The population job builds a model of the typical number of requests for the IP addresses collectively and compares the behavior of each IP address against that collective model to detect outliers.
-Refer to [Performing population analysis](https://www.elastic.co/guide/en/machine-learning/current/ml-configuring-populations.html) to learn more.
+Refer to [Performing population analysis](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-populations.md) to learn more.
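
As a hedged sketch, the population job described above could be defined through the create {{anomaly-jobs}} API roughly as follows (the job ID, bucket span, and time field are illustrative):

```console
PUT _ml/anomaly_detectors/unusual-request-rates
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "detector_description": "Unusual request counts over client IPs",
        "function": "count",
        "over_field_name": "client_ip"
      }
    ],
    "influencers": ["client_ip"]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

The `over_field_name` setting is what makes this a population analysis: each `client_ip` is compared against the collective model rather than against its own history.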
## Advanced jobs [advanced-jobs]
diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-feature-processors.md b/explore-analyze/machine-learning/data-frame-analytics/ml-feature-processors.md
index 3af77a3f20..56f8b026c7 100644
--- a/explore-analyze/machine-learning/data-frame-analytics/ml-feature-processors.md
+++ b/explore-analyze/machine-learning/data-frame-analytics/ml-feature-processors.md
@@ -35,7 +35,7 @@ With this encoding technique, it is not possible to get back to the categorical
*The figure shows a simple frequency encoding example. The Animal_freq value of `cat` is 0.5 as the feature is present at half of the number of related values. The labels `dog` and `crocodile` occur only once each. For this reason, the Animal_freq value of these labels is 0.25.*
-## Multi encoding [multi-encoding]
+## Multi encoding [multi-encoding]
Multi encoding enables you to use multiple processors in the same {{dfanalytics-job}}.
You can define an ordered sequence of processors in which the output of a processor can be forwarded to the next processor as an input.
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md b/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md
index 3f03a0eefb..24dcb3603f 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md
@@ -39,7 +39,7 @@ Trained models must be in a TorchScript representation for use with {{stack-ml-f
```
1. Specify the Elastic Cloud identifier. Alternatively, use `--url`.
- 2. Provide authentication details to access your cluster. Refer to [Authentication methods](https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-authentication.html) to learn more.
+ 2. Provide authentication details to access your cluster. Refer to [Authentication methods](#ml-nlp-authentication) to learn more.
3. Specify the identifier for the model in the Hugging Face model hub.
4. Specify the type of NLP task. Supported values are `fill_mask`, `ner`, `question_answering`, `text_classification`, `text_embedding`, `text_expansion`, `text_similarity`, and `zero_shot_classification`.
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md
index 3ef74a35bf..163b5fa6be 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md
@@ -21,7 +21,7 @@ To follow along the process on this page, you must have:
## Deploy a NER model [ex-ner-deploy]
-You can use the [Eland client](https://www.elastic.co/guide/en/elasticsearch/client/eland/current) to install the {{nlp}} model. Use the prebuilt Docker image to run the Eland install model commands. Pull the latest image with:
+You can use the [Eland client](eland://reference/index.md) to install the {{nlp}} model. Use the prebuilt Docker image to run the Eland install model commands. Pull the latest image with:
```shell
docker pull docker.elastic.co/eland/eland
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md
index deaaa2f725..45d04437dc 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md
@@ -25,7 +25,7 @@ To follow along the process on this page, you must have:
## Deploy a text embedding model [ex-te-vs-deploy]
-You can use the [Eland client](https://www.elastic.co/guide/en/elasticsearch/client/eland/current) to install the {{nlp}} model. Use the prebuilt Docker image to run the Eland install model commands. Pull the latest image with:
+You can use the [Eland client](eland://reference/index.md) to install the {{nlp}} model. Use the prebuilt Docker image to run the Eland install model commands. Pull the latest image with:
```shell
docker pull docker.elastic.co/eland/eland
diff --git a/explore-analyze/query-filter.md b/explore-analyze/query-filter.md
index 26ecaa3719..47de4ce1a6 100644
--- a/explore-analyze/query-filter.md
+++ b/explore-analyze/query-filter.md
@@ -20,7 +20,7 @@ You’ll use a combination of an API endpoint and a query language to interact w
- Elasticsearch provides a number of [query languages](/explore-analyze/query-filter/languages.md). From Query DSL to the newest ES|QL, find the one that's most appropriate for you.
-- You can call Elasticsearch's REST APIs by submitting requests directly from the command line or through the Dev Tools [Console](/explore-analyze/query-filter/tools/console.md) in {{kib}}. From your applications, you can use a [client](https://www.elastic.co/guide/en/elasticsearch/client/index.html) in your programming language of choice.
+- You can call Elasticsearch's REST APIs by submitting requests directly from the command line or through the Dev Tools [Console](/explore-analyze/query-filter/tools/console.md) in {{kib}}. From your applications, you can use a [client](/reference/elasticsearch-clients/index.md) in your programming language of choice.
- A number of [tools](/explore-analyze/query-filter/tools.md) are available for you to save, debug, and optimize your queries.
diff --git a/explore-analyze/query-filter/languages/eql.md b/explore-analyze/query-filter/languages/eql.md
index f90e113823..a3fc073e5c 100644
--- a/explore-analyze/query-filter/languages/eql.md
+++ b/explore-analyze/query-filter/languages/eql.md
@@ -24,14 +24,14 @@ Event Query Language (EQL) is a query language for event-based time series data,
## Required fields [eql-required-fields]
-With the exception of sample queries, EQL searches require that the searched data stream or index contains a *timestamp* field. By default, EQL uses the `@timestamp` field from the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current).
+With the exception of sample queries, EQL searches require that the searched data stream or index contains a *timestamp* field. By default, EQL uses the `@timestamp` field from the [Elastic Common Schema (ECS)](ecs://reference/index.md).
EQL searches also require an *event category* field, unless you use the [`any` keyword](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-syntax-match-any-event-category) to search for documents without an event category field. By default, EQL uses the ECS `event.category` field.
To use a different timestamp or event category field, see [Specify a timestamp or event category field](#specify-a-timestamp-or-event-category-field).
::::{tip}
-While no schema is required to use EQL, we recommend using the [ECS](https://www.elastic.co/guide/en/ecs/current). EQL searches are designed to work with core ECS fields by default.
+While no schema is required to use EQL, we recommend using the [ECS](ecs://reference/index.md). EQL searches are designed to work with core ECS fields by default.
::::
@@ -1042,7 +1042,7 @@ The API returns:
## Specify a timestamp or event category field [specify-a-timestamp-or-event-category-field]
-The EQL search API uses the `@timestamp` and `event.category` fields from the [ECS](https://www.elastic.co/guide/en/ecs/current) by default. To specify different fields, use the `timestamp_field` and `event_category_field` parameters:
+The EQL search API uses the `@timestamp` and `event.category` fields from the [ECS](ecs://reference/index.md) by default. To specify different fields, use the `timestamp_field` and `event_category_field` parameters:
```console
GET /my-data-stream/_eql/search
@@ -1064,7 +1064,7 @@ By default, the EQL search API returns matching hits by timestamp. If two or mor
If you don’t specify a tiebreaker field or the events also share the same tiebreaker value, {{es}} considers the events concurrent and may not return them in a consistent sort order.
-To specify a tiebreaker field, use the `tiebreaker_field` parameter. If you use the [ECS](https://www.elastic.co/guide/en/ecs/current), we recommend using `event.sequence` as the tiebreaker field.
+To specify a tiebreaker field, use the `tiebreaker_field` parameter. If you use the [ECS](ecs://reference/index.md), we recommend using `event.sequence` as the tiebreaker field.
```console
GET /my-data-stream/_eql/search
diff --git a/explore-analyze/query-filter/languages/esql-cross-clusters.md b/explore-analyze/query-filter/languages/esql-cross-clusters.md
index 88c6e5e379..f92908f032 100644
--- a/explore-analyze/query-filter/languages/esql-cross-clusters.md
+++ b/explore-analyze/query-filter/languages/esql-cross-clusters.md
@@ -284,13 +284,13 @@ Which returns:
```
1. How long the entire search (across all clusters) took, in milliseconds.
-2. This section of counters shows all possible cluster search states and how many cluster searches are currently in that state. The clusters can have one of the following statuses: **running**, **successful** (searches on all shards were successful), **skipped** (the search failed on a cluster marked with `skip_unavailable`=`true`), **failed** (the search failed on a cluster marked with `skip_unavailable`=`false`) or **partial** (the search was [interrupted](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-async-query-stop-api.html) before finishing or has partially failed).
+2. This section of counters shows all possible cluster search states and how many cluster searches are currently in that state. The clusters can have one of the following statuses: **running**, **successful** (searches on all shards were successful), **skipped** (the search failed on a cluster marked with `skip_unavailable`=`true`), **failed** (the search failed on a cluster marked with `skip_unavailable`=`false`) or **partial** (the search was [interrupted](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-esql) before finishing or has partially failed).
3. The `_clusters/details` section shows metadata about the search on each cluster.
4. If you included indices from the local cluster you sent the request to in your {{ccs}}, it is identified as "(local)".
5. How long (in milliseconds) the search took on each cluster. This can be useful to determine which clusters have slower response times than others.
6. The shard details for the search on that cluster, including a count of shards that were skipped due to the can-match phase results. Shards are skipped when they cannot have any matching data and therefore are not included in the full ES|QL query.
7. The `is_partial` field is set to `true` if the search has partial results for any reason, for example due to partial shard failures,
-failures in remote clusters, or if the async query was stopped by calling the [async query stop API](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-async-query-stop-api.html).
+failures in remote clusters, or if the async query was stopped by calling the [async query stop API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-esql).
The cross-cluster metadata can be used to determine whether any data came back from a cluster. For instance, in the query below, the wildcard expression for `cluster-two` did not resolve to a concrete index (or indices). The cluster is, therefore, marked as *skipped* and the total number of shards searched is set to zero.
diff --git a/explore-analyze/query-filter/languages/example-detect-threats-with-eql.md b/explore-analyze/query-filter/languages/example-detect-threats-with-eql.md
index 42d7f06187..a8b30e66d1 100644
--- a/explore-analyze/query-filter/languages/example-detect-threats-with-eql.md
+++ b/explore-analyze/query-filter/languages/example-detect-threats-with-eql.md
@@ -21,7 +21,7 @@ One common variant of regsvr32 misuse is a [Squiblydoo attack](https://attack.mi
## Setup [eql-ex-threat-detection-setup]
-This tutorial uses a test dataset from [Atomic Red Team](https://github.com/redcanaryco/atomic-red-team) that includes events imitating a Squiblydoo attack. The data has been mapped to [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current) fields.
+This tutorial uses a test dataset from [Atomic Red Team](https://github.com/redcanaryco/atomic-red-team) that includes events imitating a Squiblydoo attack. The data has been mapped to [Elastic Common Schema (ECS)](ecs://reference/index.md) fields.
To get started:
diff --git a/explore-analyze/report-and-share.md b/explore-analyze/report-and-share.md
index 3a1708e464..f13fc65a14 100644
--- a/explore-analyze/report-and-share.md
+++ b/explore-analyze/report-and-share.md
@@ -120,7 +120,7 @@ We recommend using CSV reports to export moderate amounts of data only. The feat
To work around the limitations, use filters to create multiple smaller reports, or extract the data you need directly with the Elasticsearch APIs.
-For more information on using Elasticsearch APIs directly, see [Scroll API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-scroll), [Point in time API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-open-point-in-time), [ES|QL](/explore-analyze/query-filter/languages/esql-rest.md) or [SQL](/explore-analyze/query-filter/languages/sql-rest-format.md#_csv) with CSV response data format. We recommend that you use an official Elastic language client: details for each programming language library that Elastic provides are in the [{{es}} Client documentation](https://www.elastic.co/guide/en/elasticsearch/client/index.html).
+For more information on using Elasticsearch APIs directly, see [Scroll API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-scroll), [Point in time API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-open-point-in-time), [ES|QL](/explore-analyze/query-filter/languages/esql-rest.md) or [SQL](/explore-analyze/query-filter/languages/sql-rest-format.md#_csv) with CSV response data format. We recommend that you use an official Elastic language client: details for each programming language library that Elastic provides are in the [{{es}} Client documentation](/reference/elasticsearch-clients/index.md).
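
For instance, a paginated export with the point in time API could start with a request like this (the index name and `keep_alive` value are illustrative):

```console
POST /my-index-000001/_pit?keep_alive=1m
```

The returned `id` is then passed in subsequent `_search` requests together with `search_after` to page through the full result set without the size limits of CSV reports.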
[Reporting parameters](kibana://reference/configuration-reference/reporting-settings.md) can be adjusted to overcome some of these limiting scenarios. Results are dependent on data size, availability, and latency factors and are not guaranteed.
diff --git a/explore-analyze/report-and-share/automating-report-generation.md b/explore-analyze/report-and-share/automating-report-generation.md
index 6b5a9f7157..c6d1fed37d 100644
--- a/explore-analyze/report-and-share/automating-report-generation.md
+++ b/explore-analyze/report-and-share/automating-report-generation.md
@@ -105,7 +105,7 @@ curl \
1. The required `POST` method.
2. The user credentials for a user with permission to access {{kib}} and {{report-features}}.
-3. The required `kbn-xsrf` header for all `POST` requests to {{kib}}. For more information, refer to [API Request Headers](https://www.elastic.co/guide/en/kibana/current/api.html#api-request-headers).
+3. The required `kbn-xsrf` header for all `POST` requests to {{kib}}. For more information, refer to [API Request Headers](https://www.elastic.co/docs/api/doc/kibana/).
4. The POST URL. You can copy and paste the URL for any report.
diff --git a/explore-analyze/report-and-share/reporting-troubleshooting-csv.md b/explore-analyze/report-and-share/reporting-troubleshooting-csv.md
index e9e2a90f00..7ae87e2b65 100644
--- a/explore-analyze/report-and-share/reporting-troubleshooting-csv.md
+++ b/explore-analyze/report-and-share/reporting-troubleshooting-csv.md
@@ -24,7 +24,7 @@ We recommend using CSV reports to export moderate amounts of data only. The feat
To work around the limitations, use filters to create multiple smaller reports, or extract the data you need directly with the Elasticsearch APIs.
-For more information on using Elasticsearch APIs directly, see [Scroll API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-scroll), [Point in time API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-open-point-in-time), [ES|QL](../query-filter/languages/esql-rest.md) or [SQL](../query-filter/languages/sql-rest-format.md#_csv) with CSV response data format. We recommend that you use an official Elastic language client: details for each programming language library that Elastic provides are in the [{{es}} Client documentation](https://www.elastic.co/guide/en/elasticsearch/client/index.html).
+For more information on using Elasticsearch APIs directly, see [Scroll API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-scroll), [Point in time API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-open-point-in-time), [ES|QL](../query-filter/languages/esql-rest.md) or [SQL](../query-filter/languages/sql-rest-format.md#_csv) with CSV response data format. We recommend that you use an official Elastic language client: details for each programming language library that Elastic provides are in the [{{es}} Client documentation](/reference/elasticsearch-clients/index.md).
[Reporting parameters](kibana://reference/configuration-reference/reporting-settings.md) can be adjusted to overcome some of these limiting scenarios. Results depend on data size, availability, and latency factors, and are not guaranteed.
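As a minimal sketch of the direct-API alternative mentioned above, you can page through large result sets with a point in time (PIT). The index name and sort field here are illustrative:

```console
POST /my-logs/_pit?keep_alive=1m

GET /_search
{
  "size": 1000,
  "pit": { "id": "<PIT ID returned by the previous call>", "keep_alive": "1m" },
  "sort": [ { "@timestamp": "asc" } ]
}
```

To fetch the next page, repeat the search with `search_after` set to the sort values of the last hit, and keep going until no hits are returned. Remember to delete the PIT when you're done.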
diff --git a/explore-analyze/scripting/grok.md b/explore-analyze/scripting/grok.md
index 0d6136b071..0b7040690c 100644
--- a/explore-analyze/scripting/grok.md
+++ b/explore-analyze/scripting/grok.md
@@ -44,7 +44,7 @@ The first value is a number, followed by what appears to be an IP address. You c
## Migrating to Elastic Common Schema (ECS) [grok-ecs]
-To ease migration to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current), a new set of ECS-compliant patterns is available in addition to the existing patterns. The new ECS pattern definitions capture event field names that are compliant with the schema.
+To ease migration to the [Elastic Common Schema (ECS)](ecs://reference/index.md), a new set of ECS-compliant patterns is available in addition to the existing patterns. The new ECS pattern definitions capture event field names that are compliant with the schema.
The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs_compatibility`](logstash-docs-md://lsr/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes.
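As a sketch, switching an individual grok filter to ECS mode looks like this in a {{ls}} pipeline (the pattern choice is illustrative):

```
filter {
  grok {
    ecs_compatibility => "v1"   # "disabled" selects the legacy pattern set
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```

With `v1`, a pattern such as `COMBINEDAPACHELOG` captures ECS field names (for example `[source][address]`) instead of the legacy names (for example `clientip`).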
diff --git a/explore-analyze/visualize.md b/explore-analyze/visualize.md
index bd395ef258..e48d407351 100644
--- a/explore-analyze/visualize.md
+++ b/explore-analyze/visualize.md
@@ -36,7 +36,7 @@ $$$panels-editors$$$
| | [SLO Alerts](/solutions/observability/incident-management/service-level-objectives-slos.md) | Visualize one or more SLO alerts, including status, rule name, duration, and reason. In addition, configure and update alerts, or create cases directly from the panel. |
| | [SLO Error Budget](/solutions/observability/incident-management/service-level-objectives-slos.md) | Visualize the consumption of your SLO’s error budget |
| Legacy | | |
-| | [Log stream](https://www.elastic.co/guide/en/kibana/current/observability.html#logs-app) (deprecated) | Display a table of live streaming logs |
+| | [Log stream](/solutions/observability/logs.md) (deprecated) | Display a table of live streaming logs |
| | [Aggregation based](visualize/legacy-editors/aggregation-based.md) | Create visualizations including area, line, and pie charts and split them up to three aggregation levels. While these panel types are still available, we recommend using [Lens](visualize/lens.md) instead. |
| | [TSVB](visualize/legacy-editors/tsvb.md) | Visualize time-based data through various panel types |
diff --git a/explore-analyze/visualize/canvas.md b/explore-analyze/visualize/canvas.md
index e88c70002b..652318a120 100644
--- a/explore-analyze/visualize/canvas.md
+++ b/explore-analyze/visualize/canvas.md
@@ -36,7 +36,7 @@ A *workpad* provides you with a space where you can build presentations of your
To create workpads, you must meet the minimum requirements.
* If you need to set up {{kib}}, use [our free trial](https://www.elastic.co/cloud/elasticsearch-service/signup?baymax=docs-body&elektra=docs).
-* Make sure you have [data indexed into {{es}}](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-general-purpose.html#gp-gs-add-data) and a [data view](../find-and-organize/data-views.md).
+* Make sure you have [data indexed into {{es}}](/manage-data/ingest.md) and a [data view](../find-and-organize/data-views.md).
* Have an understanding of [{{es}} documents and indices](../../manage-data/data-store/index-basics.md).
* Make sure you have sufficient privileges to create and save workpads. When the read-only indicator appears, you have insufficient privileges, and the options to create and save workpads are unavailable. For more information, refer to [Granting access to {{kib}}](../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md).
diff --git a/explore-analyze/visualize/maps/maps-connect-to-ems.md b/explore-analyze/visualize/maps/maps-connect-to-ems.md
index 76a2acbcae..e074ffe54c 100644
--- a/explore-analyze/visualize/maps/maps-connect-to-ems.md
+++ b/explore-analyze/visualize/maps/maps-connect-to-ems.md
@@ -628,4 +628,4 @@ With {{hosted-ems}} running, add the `map.emsUrl` configuration key in your [kib
### Logging [elastic-maps-server-logging]
-Logs are generated in [ECS JSON format](https://www.elastic.co/guide/en/ecs/current) and emitted to the standard output and to `/var/log/elastic-maps-server/elastic-maps-server.log`. The server won’t rotate the logs automatically but the `logrotate` tool is installed in the image. Mount `/dev/null` to the default log path if you want to disable the output to that file.
+Logs are generated in [ECS JSON format](ecs://reference/index.md) and emitted to standard output and to `/var/log/elastic-maps-server/elastic-maps-server.log`. The server won’t rotate the logs automatically, but the `logrotate` tool is installed in the image. Mount `/dev/null` to the default log path if you want to disable the output to that file.
diff --git a/manage-data/data-store/data-streams/set-up-data-stream.md b/manage-data/data-store/data-streams/set-up-data-stream.md
index 79d5b98aae..4b46e10980 100644
--- a/manage-data/data-store/data-streams/set-up-data-stream.md
+++ b/manage-data/data-store/data-streams/set-up-data-stream.md
@@ -96,7 +96,7 @@ When creating your component templates, include:
* Your lifecycle policy in the `index.lifecycle.name` index setting.
::::{tip}
-Use the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current) when mapping your fields. ECS fields integrate with several {{stack}} features by default.
+Use the [Elastic Common Schema (ECS)](ecs://reference/index.md) when mapping your fields. ECS fields integrate with several {{stack}} features by default.
If you’re unsure how to map your fields, use [runtime fields](../mapping/define-runtime-fields-in-search-request.md) to extract fields from [unstructured content](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#mapping-unstructured-content) at search time. For example, you can index a log message to a `wildcard` field and later extract IP addresses and other data from this field during a search.
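The runtime-field approach described above can be sketched as a mapping update that extracts an IP address from an unstructured `message` field at search time (index and field names are illustrative):

```console
PUT my-index/_mappings
{
  "runtime": {
    "client.ip": {
      "type": "ip",
      "script": {
        "source": "String ip = grok('%{IP:ip}').extract(doc[\"message\"].value)?.ip; if (ip != null) emit(ip);"
      }
    }
  }
}
```

Queries and aggregations can then target `client.ip` like any mapped field, without reindexing.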
diff --git a/manage-data/data-store/index-basics.md b/manage-data/data-store/index-basics.md
index 421abe9ba8..a5a6aca2e6 100644
--- a/manage-data/data-store/index-basics.md
+++ b/manage-data/data-store/index-basics.md
@@ -13,7 +13,7 @@ applies_to:
An index is a fundamental unit of storage in {{es}}. It is a collection of documents uniquely identified by a name or an [alias](/manage-data/data-store/aliases.md). This unique name is important because it’s used to target the index in search queries and other operations.
::::{tip}
-A closely related concept is a [data stream](/manage-data/data-store/data-streams.md). This index abstraction is optimized for append-only timestamped data, and is made up of hidden, auto-generated backing indices. If you’re working with timestamped data, we recommend the [Elastic Observability](https://www.elastic.co/guide/en/observability/current) solution for additional tools and optimized content.
+A closely related concept is a [data stream](/manage-data/data-store/data-streams.md). This index abstraction is optimized for append-only timestamped data, and is made up of hidden, auto-generated backing indices. If you’re working with timestamped data, we recommend the [Elastic Observability](/solutions/observability/get-started.md) solution for additional tools and optimized content.
::::
## Index components
@@ -94,7 +94,7 @@ Investigate your data streams and address lifecycle management needs in the **Da
:screenshot:
:::
-In {{es-serverless}}, indices matching the `logs-*-*` pattern use the logsDB index mode by default. The logsDB index mode creates a [logs data stream](https://www.elastic.co/guide/en/elasticsearch/reference/master/logs-data-stream.html).
+In {{es-serverless}}, indices matching the `logs-*-*` pattern use the logsDB index mode by default. The logsDB index mode creates a [logs data stream](/manage-data/data-store/data-streams/logs-data-stream.md).
* To view information about the stream's backing indices, click the number in the **Indices** column.
* A value in the **Data retention** column indicates that the data stream is managed by a data stream lifecycle policy. This value is the time period for which your data is guaranteed to be stored. Data older than this period can be deleted by {{es}} at a later time.
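Outside serverless defaults, logsDB mode can also be enabled explicitly through an index template; a minimal sketch (template and pattern names are hypothetical):

```console
PUT _index_template/my-logs-template
{
  "index_patterns": ["logs-myapp-*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.mode": "logsdb"
    }
  }
}
```

New data streams matching `logs-myapp-*` then use logsDB backing indices.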
diff --git a/manage-data/data-store/manage-data-from-the-command-line.md b/manage-data/data-store/manage-data-from-the-command-line.md
index 7b6337bc15..0390b134c9 100644
--- a/manage-data/data-store/manage-data-from-the-command-line.md
+++ b/manage-data/data-store/manage-data-from-the-command-line.md
@@ -118,7 +118,7 @@ curl -u USER:PASSWORD https://ELASTICSEARCH_URL/my_index/_doc/_search?pretty=tru
For performance reasons, `?pretty=true` is not recommended in production. You can verify the performance difference yourself by checking the `took` field in the JSON response, which tells you how long Elasticsearch took to evaluate the search in milliseconds. When we tested these examples ourselves, the difference was `"took" : 4` against `"took" : 18`, a substantial difference.
-For a full explanation of how the request body is structured, check [Elasticsearch Request Body documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-body.html). You can also execute multiple queries in one request with the [Multi Search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch).
+For a full explanation of how the request body is structured, check [Elasticsearch Request Body documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search#operation-search-body-application-json). You can also execute multiple queries in one request with the [Multi Search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch).
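The Multi Search API mentioned above uses a newline-delimited body that alternates header and query lines; a minimal sketch (index names and queries are illustrative):

```console
GET /my_index/_msearch
{ }
{ "query": { "match": { "user.id": "kimchy" } }, "size": 5 }
{ "index": "other_index" }
{ "query": { "match_all": {} } }
```

An empty header `{ }` targets the index in the request path; a header with an `"index"` key redirects that query elsewhere. The response contains one result object per query, in order.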
## Deleting [deleting]
diff --git a/manage-data/ingest.md b/manage-data/ingest.md
index 48980e38d3..f013d6879f 100644
--- a/manage-data/ingest.md
+++ b/manage-data/ingest.md
@@ -28,7 +28,7 @@ You can ingest:
Elastic offers tools designed to ingest specific types of general content. The content type determines the best ingest option.
* To index **documents** directly into {{es}}, use the {{es}} [document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document).
-* To send **application data** directly to {{es}}, use an [{{es}} language client](https://www.elastic.co/guide/en/elasticsearch/client/index.html).
+* To send **application data** directly to {{es}}, use an [{{es}} language client](/reference/elasticsearch-clients/index.md).
* To index **web page content**, use the Elastic [web crawler](https://www.elastic.co/web-crawler).
* To sync **data from third-party sources**, use [connectors](elasticsearch://reference/search-connectors/index.md). A connector syncs content from an original data source to an {{es}} index. Using connectors you can create *searchable*, read-only replicas of your data sources.
* To index **single files** for testing in a non-production environment, use the {{kib}} [file uploader](ingest/upload-data-files.md).
@@ -43,7 +43,7 @@ The best approach for ingesting data is the *simplest option* that *meets your n
In most cases, the *simplest option* for ingesting time series data is using {{agent}} paired with an Elastic integration.
-* Install [Elastic Agent](https://www.elastic.co/guide/en/fleet/current) on the computer(s) from which you want to collect data.
+* Install [Elastic Agent](/reference/fleet/index.md) on the computer(s) from which you want to collect data.
* Add the [Elastic integration](https://docs.elastic.co/en/integrations) for the data source to your deployment.
Integrations are available for many popular platforms and services, and are a good place to start for ingesting data into Elastic solutions—Observability, Security, and Search—or your own search application.
diff --git a/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md b/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md
index e534cfa71d..70c5f49a7f 100644
--- a/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md
+++ b/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md
@@ -26,17 +26,17 @@ Notes
Info on {{agent}} and agent integrations:
-* [Fleet and Elastic Agent Guide](https://www.elastic.co/guide/en/fleet/current)
+* [Fleet and Elastic Agent Guide](/reference/fleet/index.md)
* [{{agent}} integrations](https://docs.elastic.co/en/integrations)
Info on {{ls}} and {{ls}} plugins:
-* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
+* [{{ls}} Reference](logstash://reference/index.md)
* [{{ls}} {{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
* [{{ls}} Kafka output](logstash-docs-md://lsr/plugins-outputs-kafka.md)
Info on {{es}}:
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
* ES sink [ToDo: Add link]
diff --git a/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md b/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md
index 68cec26f22..a9bd28c9bc 100644
--- a/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md
+++ b/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md
@@ -26,12 +26,12 @@ Notes
Info on {{agent}} and agent integrations:
-* [Fleet and Elastic Agent Guide](https://www.elastic.co/guide/en/fleet/current)
+* [Fleet and Elastic Agent Guide](/reference/fleet/index.md)
* [{{agent}} integrations](https://docs.elastic.co/en/integrations)
Info on {{ls}} and {{ls}} Kafka plugins:
-* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
+* [{{ls}} Reference](logstash://reference/index.md)
* [{{ls}} {{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
* [{{ls}} Kafka input](logstash-docs-md://lsr/plugins-inputs-kafka.md)
* [{{ls}} Kafka output](logstash-docs-md://lsr/plugins-outputs-kafka.md)
@@ -39,5 +39,5 @@ Info on {{ls}} and {{ls}} Kafka plugins:
Info on {{es}}:
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
diff --git a/manage-data/ingest/ingest-reference-architectures/agent-proxy.md b/manage-data/ingest/ingest-reference-architectures/agent-proxy.md
index 00e9ae8d1f..31fbbb0199 100644
--- a/manage-data/ingest/ingest-reference-architectures/agent-proxy.md
+++ b/manage-data/ingest/ingest-reference-architectures/agent-proxy.md
@@ -34,7 +34,7 @@ Currently {{agent}} is not able to present a certificate for connectivity to {{f
Info on {{agent}} and agent integrations:
-* [Fleet and Elastic Agent Guide](https://www.elastic.co/guide/en/fleet/current)
+* [Fleet and Elastic Agent Guide](/reference/fleet/index.md)
* [{{agent}} integrations](https://docs.elastic.co/en/integrations)
Info on using a proxy server:
@@ -43,5 +43,5 @@ Info on using a proxy server:
Info on {{es}}:
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
diff --git a/manage-data/ingest/ingest-reference-architectures/agent-to-es.md b/manage-data/ingest/ingest-reference-architectures/agent-to-es.md
index 13dd9cfda3..4db900fa56 100644
--- a/manage-data/ingest/ingest-reference-architectures/agent-to-es.md
+++ b/manage-data/ingest/ingest-reference-architectures/agent-to-es.md
@@ -24,16 +24,16 @@ Integrations offer advantages beyond easier data collection—advantages such
Info on {{agent}} and agent integrations:
-* [Fleet and Elastic Agent Guide](https://www.elastic.co/guide/en/fleet/current)
+* [Fleet and Elastic Agent Guide](/reference/fleet/index.md)
* [{{agent}} integrations](https://docs.elastic.co/en/integrations)
Info on {{es}}:
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
This basic architecture is a common approach for ingesting data for the [Elastic Observability](https://www.elastic.co/observability) and [Elastic Security](https://www.elastic.co/security) solutions:
-* [Elastic Observability tutorials](https://www.elastic.co/guide/en/observability/current/observability-tutorials.html)
+* [Elastic Observability tutorials](/solutions/observability/get-started.md#_get_started_with_other_features)
* [Ingest data to Elastic Security](../../../solutions/security/get-started/ingest-data-to-elastic-security.md)
diff --git a/manage-data/ingest/ingest-reference-architectures/ls-enrich.md b/manage-data/ingest/ingest-reference-architectures/ls-enrich.md
index fdf53d0af3..dc338dcc8e 100644
--- a/manage-data/ingest/ingest-reference-architectures/ls-enrich.md
+++ b/manage-data/ingest/ingest-reference-architectures/ls-enrich.md
@@ -30,10 +30,10 @@ Examples
Info on configuring {{agent}}:
-* [Fleet and Elastic Agent Guide](https://www.elastic.co/guide/en/fleet/current)
+* [Fleet and Elastic Agent Guide](/reference/fleet/index.md)
* [Configuring outputs for {{agent}}](/reference/fleet/elastic-agent-output-configuration.md)
-For info on {{ls}} for enriching data, check out these sections in the [Logstash Reference](https://www.elastic.co/guide/en/logstash/current):
+For info on {{ls}} for enriching data, check out these sections in the [Logstash Reference](logstash://reference/index.md):
* [{{ls}} {{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
* [{{ls}} plugins for enriching data](logstash://reference/lookup-enrichment.md)
@@ -42,5 +42,5 @@ For info on {{ls}} for enriching data, check out these sections in the [Logstash
Info on {{es}}:
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
diff --git a/manage-data/ingest/ingest-reference-architectures/ls-for-input.md b/manage-data/ingest/ingest-reference-architectures/ls-for-input.md
index 8fb6acee81..d365e9b025 100644
--- a/manage-data/ingest/ingest-reference-architectures/ls-for-input.md
+++ b/manage-data/ingest/ingest-reference-architectures/ls-for-input.md
@@ -28,12 +28,12 @@ Before you implement this approach, check to see if an {{agent}} integration exi
Info on {{ls}} and {{ls}} input and output plugins:
* [{{ls}} plugin support matrix](https://www.elastic.co/support/matrix#logstash_plugins)
-* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
+* [{{ls}} Reference](logstash://reference/index.md)
* [{{ls}} input plugins](logstash-docs-md://lsr/input-plugins.md)
* [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)
Info on {{es}} and ingest pipelines:
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
* [{{es}} Ingest Pipelines](../transform-enrich/ingest-pipelines.md)
diff --git a/manage-data/ingest/ingest-reference-architectures/ls-multi.md b/manage-data/ingest/ingest-reference-architectures/ls-multi.md
index 7209969882..e5e73b4a59 100644
--- a/manage-data/ingest/ingest-reference-architectures/ls-multi.md
+++ b/manage-data/ingest/ingest-reference-architectures/ls-multi.md
@@ -56,16 +56,16 @@ output {
Info on configuring {{agent}}:
-* [Fleet and Elastic Agent Guide](https://www.elastic.co/guide/en/fleet/current)
+* [Fleet and Elastic Agent Guide](/reference/fleet/index.md)
* [Configuring outputs for {{agent}}](/reference/fleet/elastic-agent-output-configuration.md)
Info on {{ls}} and {{ls}} outputs:
-* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
+* [{{ls}} Reference](logstash://reference/index.md)
* [{{ls}} {{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)
* [{{ls}} output plugins](logstash-docs-md://lsr/output-plugins.md)
Info on {{es}}:
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
diff --git a/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md b/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md
index 4b36c3ed4f..9bb071b8d5 100644
--- a/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md
+++ b/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md
@@ -23,15 +23,15 @@ Example
Info on configuring {{agent}}:
-* [Fleet and Elastic Agent Guide](https://www.elastic.co/guide/en/fleet/current)
+* [Fleet and Elastic Agent Guide](/reference/fleet/index.md)
* [Configuring outputs for {{agent}}](/reference/fleet/elastic-agent-output-configuration.md)
Info on {{ls}} and {{ls}} plugins:
-* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
+* [{{ls}} Reference](logstash://reference/index.md)
* [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)
Info on {{es}}:
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
diff --git a/manage-data/ingest/ingest-reference-architectures/lspq.md b/manage-data/ingest/ingest-reference-architectures/lspq.md
index c9fab17c3e..02bdd26dc1 100644
--- a/manage-data/ingest/ingest-reference-architectures/lspq.md
+++ b/manage-data/ingest/ingest-reference-architectures/lspq.md
@@ -20,7 +20,7 @@ Use when
Info on configuring {{agent}}:
-* [Fleet and Elastic Agent Guide](https://www.elastic.co/guide/en/fleet/current)
+* [Fleet and Elastic Agent Guide](/reference/fleet/index.md)
* [Configuring outputs for {{agent}}](/reference/fleet/elastic-agent-output-configuration.md)
For info on {{ls}} plugins:
@@ -28,11 +28,11 @@ For info on {{ls}} plugins:
* [{{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
* [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)
-For info on using {{ls}} for buffering and data resiliency, check out this section in the [Logstash Reference](https://www.elastic.co/guide/en/logstash/current):
+For info on using {{ls}} for buffering and data resiliency, check out this section in the [Logstash Reference](logstash://reference/index.md):
* [{{ls}} Persistent Queues (PQ)](logstash://reference/persistent-queues.md)
Info on {{es}}:
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
diff --git a/manage-data/ingest/ingesting-data-for-elastic-solutions.md b/manage-data/ingest/ingesting-data-for-elastic-solutions.md
index d5a2d606ec..c8687ebb05 100644
--- a/manage-data/ingest/ingesting-data-for-elastic-solutions.md
+++ b/manage-data/ingest/ingesting-data-for-elastic-solutions.md
@@ -8,12 +8,12 @@ applies_to:
# Ingesting data for Elastic solutions [ingest-for-solutions]
-Elastic solutions—Security, Observability, and Search—are loaded with features and functionality to help you get value and insights from your data. [Elastic Agent](https://www.elastic.co/guide/en/fleet/current) and [Elastic integrations](https://docs.elastic.co/en/integrations) can help, and are the best place to start.
+Elastic solutions—Security, Observability, and Search—are loaded with features and functionality to help you get value and insights from your data. [Elastic Agent](/reference/fleet/index.md) and [Elastic integrations](https://docs.elastic.co/en/integrations) can help, and are the best place to start.
When you use integrations with solutions, you have an integrated experience that offers easier implementation and decreases the time it takes to get insights and value from your data.
::::{admonition} High-level overview
-To use [Elastic Agent](https://www.elastic.co/guide/en/fleet/current) and [Elastic integrations](https://docs.elastic.co/en/integrations) with Elastic solutions:
+To use [Elastic Agent](/reference/fleet/index.md) and [Elastic integrations](https://docs.elastic.co/en/integrations) with Elastic solutions:
1. Create an [{{ecloud}}](https://www.elastic.co/cloud) deployment for your solution. If you don’t have an {{ecloud}} account, you can sign up for a [free trial](https://cloud.elastic.co/registration) to get started.
2. Add the [Elastic integration](https://docs.elastic.co/en/integrations) for your data source to the deployment.
@@ -36,10 +36,10 @@ To use [Elastic Agent](https://www.elastic.co/guide/en/fleet/current) and [Elast
* [Install {{agent}}](/reference/fleet/install-elastic-agents.md)
* [Elastic Search for integrations](https://www.elastic.co/integrations/data-integrations?solution=search)
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
* [{{es}} document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)
- * [{{es}} language clients](https://www.elastic.co/guide/en/elasticsearch/client/index.html)
+ * [{{es}} language clients](/reference/elasticsearch-clients/index.md)
* [Elastic web crawler](https://www.elastic.co/web-crawler)
* [Elastic connectors](elasticsearch://reference/search-connectors/index.md)
@@ -51,7 +51,7 @@ With [Elastic Observability](https://www.elastic.co/observability), you can moni
**Guides for popular Observability use cases**
-* [Monitor applications and systems with Elastic Observability](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-observability.html)
+* [Monitor applications and systems with Elastic Observability](/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md)
* [Get started with logs and metrics](/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md)
* [Step 1: Add the {{agent}} System integration](/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md#add-system-integration)
@@ -77,8 +77,8 @@ You can detect and respond to threats when you use [Elastic Security](https://ww
**Guides for popular Security use cases**
-* [Use Elastic Security for SIEM](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-siem-security.html)
-* [Protect hosts with endpoint threat intelligence from Elastic Security](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-endpoint-security.html)
+* [Use Elastic Security for SIEM](https://www.elastic.co/getting-started/security/detect-threats-in-my-data-with-siem)
+* [Protect hosts with endpoint threat intelligence from Elastic Security](https://www.elastic.co/getting-started/security/secure-my-hosts-with-endpoint-security)
**Resources**
@@ -96,11 +96,10 @@ Bring your ideas and use {{es}} and the {{stack}} to store, search, and visualiz
**Resources**
* [Install {{agent}}](/reference/fleet/install-elastic-agents.md)
-* [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current)
+* [{{es}} Guide](elasticsearch://reference/index.md)
* [{{es}} document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)
- * [{{es}} language clients](https://www.elastic.co/guide/en/elasticsearch/client/index.html)
+ * [{{es}} language clients](/reference/elasticsearch-clients/index.md)
* [Elastic web crawler](https://www.elastic.co/web-crawler)
* [Elastic connectors](elasticsearch://reference/search-connectors/index.md)
-* [Tutorial: Get started with vector search and generative AI](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-general-purpose.html)
diff --git a/manage-data/ingest/ingesting-timeseries-data.md b/manage-data/ingest/ingesting-timeseries-data.md
index 0742b57604..6a79c6d42d 100644
--- a/manage-data/ingest/ingesting-timeseries-data.md
+++ b/manage-data/ingest/ingesting-timeseries-data.md
@@ -20,13 +20,13 @@ In this section, we’ll help you determine which option is best for you.
## {{agent}} and Elastic integrations [ingest-ea]
-A single [{{agent}}](https://www.elastic.co/guide/en/fleet/current) can collect multiple types of data when it is [installed](/reference/fleet/install-elastic-agents.md) on a host computer. You can use standalone {{agent}}s and manage them locally on the systems where they are installed, or you can manage all of your agents and policies with the [Fleet UI in {{kib}}](/reference/fleet/manage-elastic-agents-in-fleet.md).
+A single [{{agent}}](/reference/fleet/index.md) can collect multiple types of data when it is [installed](/reference/fleet/install-elastic-agents.md) on a host computer. You can use standalone {{agent}}s and manage them locally on the systems where they are installed, or you can manage all of your agents and policies with the [Fleet UI in {{kib}}](/reference/fleet/manage-elastic-agents-in-fleet.md).
Use {{agent}} with one of hundreds of [Elastic integrations](https://docs.elastic.co/en/integrations) to simplify collecting, transforming, and visualizing data. Integrations include default ingestion rules, dashboards, and visualizations to help you start analyzing your data right away. Check out the [Integration quick reference](https://docs.elastic.co/en/integrations/all_integrations) to search for available integrations that can reduce your time to value.
-{{agent}} is the best option for collecting timestamped data for most data sources and use cases. If your data requires additional processing before going to {{es}}, you can use [{{agent}} processors](/reference/fleet/agent-processors.md), [{{ls}}](https://www.elastic.co/guide/en/logstash/current), or additional processing features in {{es}}. Check out [additional processing](/manage-data/ingest/transform-enrich.md) to see options.
+{{agent}} is the best option for collecting timestamped data for most data sources and use cases. If your data requires additional processing before going to {{es}}, you can use [{{agent}} processors](/reference/fleet/agent-processors.md), [{{ls}}](logstash://reference/index.md), or additional processing features in {{es}}. Check out [additional processing](/manage-data/ingest/transform-enrich.md) to see options.
-Ready to try [{{agent}}](https://www.elastic.co/guide/en/fleet/current)? Check out the [installation instructions](/reference/fleet/install-elastic-agents.md).
+Ready to try [{{agent}}](/reference/fleet/index.md)? Check out the [installation instructions](/reference/fleet/install-elastic-agents.md).
## {{beats}} [ingest-beats]
@@ -35,19 +35,19 @@ Ready to try [{{agent}}](https://www.elastic.co/guide/en/fleet/current)? Check o
Beats require that you install a separate Beat for each type of data you want to collect. A single Elastic Agent installed on a host can collect and transport multiple types of data.
-**Best practice:** Use [{{agent}}](https://www.elastic.co/guide/en/fleet/current) whenever possible. If your data source is not yet supported by {{agent}}, use {{beats}}. Check out the {{beats}} and {{agent}} [comparison](/manage-data/ingest/tools.md#additional-capabilities-beats-and-agent) for more info. When you are ready to upgrade, check out [Migrate from {{beats}} to {{agent}}](/reference/fleet/migrate-from-beats-to-elastic-agent.md).
+**Best practice:** Use [{{agent}}](/reference/fleet/index.md) whenever possible. If your data source is not yet supported by {{agent}}, use {{beats}}. Check out the {{beats}} and {{agent}} [comparison](/manage-data/ingest/tools.md#additional-capabilities-beats-and-agent) for more info. When you are ready to upgrade, check out [Migrate from {{beats}} to {{agent}}](/reference/fleet/migrate-from-beats-to-elastic-agent.md).
## OpenTelemetry (OTel) collectors [ingest-otel]
[OpenTelemetry](https://opentelemetry.io/docs) is a vendor-neutral observability framework for collecting, processing, and exporting telemetry data. Elastic is a member of the Cloud Native Computing Foundation (CNCF) and active contributor to the OpenTelemetry project.
-In addition to supporting upstream OTel development, Elastic provides [Elastic Distributions of OpenTelemetry](https://elastic.github.io/opentelemetry/), specifically designed to work with Elastic Observability. We’re also expanding [{{agent}}](https://www.elastic.co/guide/en/fleet/current) to use OTel collection.
+In addition to supporting upstream OTel development, Elastic provides [Elastic Distributions of OpenTelemetry](https://elastic.github.io/opentelemetry/), specifically designed to work with Elastic Observability. We’re also expanding [{{agent}}](/reference/fleet/index.md) to use OTel collection.
## Logstash [ingest-logstash]
-[{{ls}}](https://www.elastic.co/guide/en/logstash/current) is a versatile open source data ETL (extract, transform, load) engine that can expand your ingest capabilities. {{ls}} can *collect data* from a wide variety of data sources with {{ls}} [input plugins](logstash-docs-md://lsr//input-plugins.md), *enrich and transform* the data with {{ls}} [filter plugins](logstash-docs-md://lsr/filter-plugins.md), and *output* the data to {{es}} and other destinations with the {{ls}} [output plugins](logstash-docs-md://lsr/output-plugins.md).
+[{{ls}}](logstash://reference/index.md) is a versatile open source data ETL (extract, transform, load) engine that can expand your ingest capabilities. {{ls}} can *collect data* from a wide variety of data sources with {{ls}} [input plugins](logstash-docs-md://lsr/input-plugins.md), *enrich and transform* the data with {{ls}} [filter plugins](logstash-docs-md://lsr/filter-plugins.md), and *output* the data to {{es}} and other destinations with the {{ls}} [output plugins](logstash-docs-md://lsr/output-plugins.md).
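The three plugin stages can be sketched as a minimal pipeline definition; the file path and {{es}} endpoint are placeholders:

```conf
input {
  # Collect: read events from a local log file (placeholder path)
  file { path => "/var/log/app/app.log" }
}
filter {
  # Enrich and transform: parse key-value pairs out of the message field
  kv { source => "message" }
}
output {
  # Output: send the events to an {{es}} cluster (placeholder endpoint)
  elasticsearch { hosts => ["https://localhost:9200"] }
}
```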
Many users never need to use {{ls}}, but it’s available if you need it for:
diff --git a/manage-data/ingest/transform-enrich.md b/manage-data/ingest/transform-enrich.md
index 018c23b545..42a69ea9e1 100644
--- a/manage-data/ingest/transform-enrich.md
+++ b/manage-data/ingest/transform-enrich.md
@@ -38,7 +38,7 @@ Note that you can also perform transforms on existing {{es}} indices to pivot da
{{ls}} and the {{ls}} `elastic_integration filter`
: If you're using {{ls}} as your primary ingest tool, you can take advantage of its built-in pipeline capabilities to transform your data. You configure a pipeline by stringing together a series of input, output, filtering, and optional codec plugins to manipulate all incoming data.
-: If you're ingesting using {{agent}} with Elastic {{integrations}}, you can use the {{ls}} [`elastic_integration filter`](https://www.elastic.co/guide/en/logstash/current/) and other [{{ls}} filters](logstash-docs-md://lsr/filter-plugins.md) to [extend Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md) by transforming data before it goes to {{es}}.
+: If you're ingesting with {{agent}} and Elastic {{integrations}}, you can use the {{ls}} [`elastic_integration` filter](logstash://reference/index.md) and other [{{ls}} filters](logstash-docs-md://lsr/filter-plugins.md) to [extend Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md) by transforming data before it goes to {{es}}.
Index mapping
: Index mapping lets you control the structure that incoming data has within an {{es}} index. You can define all of the fields that are included in the index and their respective data types. For example, you can set fields for dates, numbers, or geolocations, and define the fields to have specific formats.
diff --git a/manage-data/ingest/transform-enrich/ingest-pipelines.md b/manage-data/ingest/transform-enrich/ingest-pipelines.md
index 7dd8eaa405..4e506cf66a 100644
--- a/manage-data/ingest/transform-enrich/ingest-pipelines.md
+++ b/manage-data/ingest/transform-enrich/ingest-pipelines.md
@@ -44,7 +44,7 @@ In {{kib}}, open the main menu and click **Stack Management > Ingest Pipelines**
To create a pipeline, click **Create pipeline > New pipeline**. For an example tutorial, see [Example: Parse logs](example-parse-logs.md).
::::{tip}
-The **New pipeline from CSV** option lets you use a CSV to create an ingest pipeline that maps custom data to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other datasets. To get started, check [Map custom data to ECS](ecs://reference/ecs-converting.md).
+The **New pipeline from CSV** option lets you use a CSV to create an ingest pipeline that maps custom data to the [Elastic Common Schema (ECS)](ecs://reference/index.md). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other datasets. To get started, check [Map custom data to ECS](ecs://reference/ecs-converting.md).
::::
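For illustration, a simple pipeline that renames custom fields to their ECS equivalents can be created directly in Console; the pipeline name and source field names here are hypothetical:

```console
PUT _ingest/pipeline/my-ecs-pipeline
{
  "description": "Rename custom fields to ECS equivalents (hypothetical field names)",
  "processors": [
    { "rename": { "field": "srcip", "target_field": "source.ip", "ignore_missing": true } },
    { "rename": { "field": "user_name", "target_field": "user.name", "ignore_missing": true } }
  ]
}
```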
diff --git a/manage-data/lifecycle/rollup/getting-started-api.md b/manage-data/lifecycle/rollup/getting-started-api.md
index d874afba84..f2c677150e 100644
--- a/manage-data/lifecycle/rollup/getting-started-api.md
+++ b/manage-data/lifecycle/rollup/getting-started-api.md
@@ -273,5 +273,5 @@ In addition to being more complicated (date histogram and a terms aggregation, p
## Conclusion [_conclusion]
-This quickstart should have provided a concise overview of the core functionality that Rollup exposes. There are more tips and things to consider when setting up Rollups, which you can find throughout the rest of this section. You may also explore the [REST API](https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-api-quickref.html) for an overview of what is available.
+This quickstart should have provided a concise overview of the core functionality that Rollup exposes. There are more tips and things to consider when setting up Rollups, which you can find throughout the rest of this section. You may also explore the [REST API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-get-jobs) for an overview of what is available.
diff --git a/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md b/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md
index 848b7353d9..381b151c19 100644
--- a/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md
+++ b/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md
@@ -253,7 +253,7 @@ When creating your component templates, include:
* Your lifecycle policy in the `index.lifecycle.name` index setting.
::::{tip}
-Use the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current) when mapping your fields. ECS fields integrate with several {{stack}} features by default.
+Use the [Elastic Common Schema (ECS)](ecs://reference/index.md) when mapping your fields. ECS fields integrate with several {{stack}} features by default.
If you’re unsure how to map your fields, use [runtime fields](data-store/mapping/define-runtime-fields-in-search-request.md) to extract fields from [unstructured content](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#mapping-unstructured-content) at search time. For example, you can index a log message to a `wildcard` field and later extract IP addresses and other data from this field during a search.
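As a sketch of that search-time approach, a runtime field can extract an IP address from an unparsed message with a grok pattern in the search request itself; the index name and field names are hypothetical:

```console
GET my-index/_search
{
  "runtime_mappings": {
    "client_ip": {
      "type": "ip",
      "script": """
        String clientip = grok('%{COMMONAPACHELOG}').extract(doc["message"].value)?.clientip;
        if (clientip != null) emit(clientip);
      """
    }
  },
  "fields": ["client_ip"]
}
```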
diff --git a/reference/fleet/install-elastic-agents.md b/reference/fleet/install-elastic-agents.md
index fc8aa43cb0..7b82bfb477 100644
--- a/reference/fleet/install-elastic-agents.md
+++ b/reference/fleet/install-elastic-agents.md
@@ -108,7 +108,7 @@ For containerized environments, the servers {{agent}} flavor is installed using
#### Complete flavor [elastic-agent-complete-flavor]
-For containerized environments, the complete {{agent}} flavor is installed using the `elastic-agent-complete` command with an agent container package. This flavor includes all of the components in the servers flavor, and also includes additional dependencies to run browser monitors through Elastic Synthetics. It also includes the [journald](https://www.freedesktop.org/software/systemd/man/latest/systemd-journald.service.html) dependences necessary to use the [journald input](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-journald.html). Refer to [Synthetic monitoring via Elastic Agent and Fleet](/solutions/observability/synthetics/get-started.md) for more information.
+For containerized environments, the complete {{agent}} flavor is installed using the `elastic-agent-complete` command with an agent container package. This flavor includes all of the components in the servers flavor, plus additional dependencies to run browser monitors through Elastic Synthetics. It also includes the [journald](https://www.freedesktop.org/software/systemd/man/latest/systemd-journald.service.html) dependencies necessary to use the [journald input](beats://reference/filebeat/filebeat-input-journald.md). Refer to [Synthetic monitoring via Elastic Agent and Fleet](/solutions/observability/synthetics/get-started.md) for more information.
## Resource requirements [elastic-agent-installation-resource-requirements]
diff --git a/reference/fleet/kafka-output-settings.md b/reference/fleet/kafka-output-settings.md
index 338a7669c6..95080a47cd 100644
--- a/reference/fleet/kafka-output-settings.md
+++ b/reference/fleet/kafka-output-settings.md
@@ -126,7 +126,7 @@ Use this option to set the Kafka topic for each {{agent}} event.
**Default topic** $$$kafka-output-topics-default$$$
: Set a default topic to use for events sent by {{agent}} to the Kafka output.
- You can set a static topic, for example `elastic-agent`, or you can choose to set a topic dynamically based on an [Elastic Common Scheme (ECS)][Elastic Common Schema (ECS)](ecs://reference/index.md)) field. Available fields include:
+ You can set a static topic, for example `elastic-agent`, or you can choose to set a topic dynamically based on an [Elastic Common Schema (ECS)](ecs://reference/index.md) field. Available fields include:
* `data_stream_type`
* `data_stream.dataset`
diff --git a/reference/fleet/remote-elasticsearch-output.md b/reference/fleet/remote-elasticsearch-output.md
index 3694da8857..2865b5583a 100644
--- a/reference/fleet/remote-elasticsearch-output.md
+++ b/reference/fleet/remote-elasticsearch-output.md
@@ -80,7 +80,7 @@ Remote clusters require access to the [{{package-registry}}](/reference/fleet/in
1. Configure {{ccr}} on the remote cluster.
1. In the remote cluster, open the {{kib}} menu and go to **Stack Management > Remote Clusters**.
- 2. Refer to [Remote clusters](https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters.html) to add your main cluster (where the remote {{es}} output is configured) as a remote cluster.
+ 2. Refer to [Remote clusters](/deploy-manage/remote-clusters/remote-clusters-self-managed.md) to add your main cluster (where the remote {{es}} output is configured) as a remote cluster.
3. Go to **Stack Management > Cross-Cluster Replication**.
4. Create a follower index named `fleet-synced-integrations-ccr-