diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-suse-cloud.md b/deploy-manage/deploy/cloud-enterprise/configure-host-suse-cloud.md
index e3f6d26512..d397f7a1dc 100644
--- a/deploy-manage/deploy/cloud-enterprise/configure-host-suse-cloud.md
+++ b/deploy-manage/deploy/cloud-enterprise/configure-host-suse-cloud.md
@@ -20,9 +20,9 @@ If you want to install Elastic Cloud Enterprise on your own hosts, the steps for
Regardless of which approach you take, the steps in this section need to be performed on every host that you want to use with Elastic Cloud Enterprise.
-## Install Docker [ece-install-docker-sles12-cloud]
+## Install Docker [ece-install-docker-sles12-cloud]
-::::{important}
+::::{important}
Make sure to use a combination of Linux distribution and Docker version that is supported, following our official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise). Using unsupported combinations can cause multiple issues with your ECE environment, such as failures to create system deployments, failures to upgrade workload deployments, proxy timeouts, and more.
::::
@@ -63,7 +63,7 @@ Make sure to use a combination of Linux distribution and Docker version that is
-## Set up OS groups and user [ece_set_up_os_groups_and_user]
+## Set up OS groups and user [ece_set_up_os_groups_and_user]
1. If they don’t already exist, create the following OS groups:
@@ -80,18 +80,18 @@ Make sure to use a combination of Linux distribution and Docker version that is
-## Set up XFS on SLES [ece-xfs-setup-sles12-cloud]
+## Set up XFS on SLES [ece-xfs-setup-sles12-cloud]
XFS is required to support disk space quotas for Elasticsearch data directories. Some Linux distributions such as RHEL and Rocky Linux already provide XFS as the default file system. On SLES 12 and 15, you need to set up an XFS file system and have quotas enabled.
Disk space quotas set a limit on the amount of disk space an Elasticsearch cluster node can use. Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space.
-::::{note}
+::::{note}
Using LVM, `mdadm`, or a combination of the two for block device management is possible, but the configuration is not covered here, nor is it provided as part of supporting Elastic Cloud Enterprise.
::::
-::::{important}
+::::{important}
You must use XFS and have quotas enabled on all allocators; otherwise, disk usage won’t display correctly.
::::
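As an illustration only, a dedicated block device can be formatted as XFS and mounted with project quotas enabled. The device name and mount point below are assumptions; substitute the values for your environment, and add a matching entry with the `pquota` option to `/etc/fstab` so the mount persists across reboots.

```sh
# Hypothetical device and mount point -- adjust for your environment.
sudo mkfs.xfs /dev/xvdb1
sudo mkdir -p /mnt/data
sudo mount -o pquota /dev/xvdb1 /mnt/data
```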
@@ -124,7 +124,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
-## Update the configurations settings [ece-update-config-sles-cloud]
+## Update the configuration settings [ece-update-config-sles-cloud]
1. Stop the Docker service:
@@ -162,7 +162,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
EOF
```
- ::::{important}
+ ::::{important}
The `net.ipv4.tcp_retries2` setting applies to all TCP connections and affects the reliability of communication with systems other than Elasticsearch clusters too. If your clusters communicate with external systems over a low-quality network, you may need to select a higher value for `net.ipv4.tcp_retries2`.
::::
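To confirm the value that is actually in effect after the changes are applied, you can query the kernel directly:

```sh
# Print the currently active TCP retransmission setting
sysctl net.ipv4.tcp_retries2
```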
@@ -178,7 +178,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
Add the following configuration values to the `/etc/security/limits.conf` file. These values are derived from our experience with the Elastic Cloud hosted offering and should be used for Elastic Cloud Enterprise as well.
- ::::{tip}
+ ::::{tip}
If you are using a user name other than `elastic`, adjust the configuration values accordingly.
::::
@@ -243,7 +243,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
-## Configure the Docker daemon [ece-configure-docker-daemon-sles12-cloud]
+## Configure the Docker daemon [ece-configure-docker-daemon-sles12-cloud]
1. Edit `/etc/docker/daemon.json`, and make sure that the following configuration values are present:
@@ -321,7 +321,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
5. Reboot your system to ensure that all configuration changes take effect:
- ```literal
+ ```sh
sudo reboot
```
@@ -333,7 +333,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
7. After rebooting, verify that your Docker settings persist as expected:
- ```literal
+ ```sh
sudo docker info | grep Root
```
@@ -342,4 +342,3 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
If the command returns `Docker Root Dir: /var/lib/docker`, then you need to troubleshoot the previous configuration steps until the Docker settings are applied successfully before continuing with the installation process. For more information, check [Custom Docker daemon options](https://docs.docker.com/engine/admin/systemd/#/custom-docker-daemon-options) in the Docker documentation.
8. Repeat these steps on other hosts that you want to use with Elastic Cloud Enterprise or follow the steps in the next section to start installing Elastic Cloud Enterprise.
-
diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-suse-onprem.md b/deploy-manage/deploy/cloud-enterprise/configure-host-suse-onprem.md
index 632b9a0114..cd18a0c708 100644
--- a/deploy-manage/deploy/cloud-enterprise/configure-host-suse-onprem.md
+++ b/deploy-manage/deploy/cloud-enterprise/configure-host-suse-onprem.md
@@ -20,9 +20,9 @@ If you want to install Elastic Cloud Enterprise on your own hosts, the steps for
Regardless of which approach you take, the steps in this section need to be performed on every host that you want to use with Elastic Cloud Enterprise.
-## Install Docker [ece-install-docker-sles12-onprem]
+## Install Docker [ece-install-docker-sles12-onprem]
-::::{important}
+::::{important}
Make sure to use a combination of Linux distribution and Docker version that is supported, following our official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise). Using unsupported combinations can cause multiple issues with your ECE environment, such as failures to create system deployments, failures to upgrade workload deployments, proxy timeouts, and more.
::::
@@ -63,7 +63,7 @@ Make sure to use a combination of Linux distribution and Docker version that is
-## Set up OS groups and user [ece_set_up_os_groups_and_user_2]
+## Set up OS groups and user [ece_set_up_os_groups_and_user_2]
1. If they don’t already exist, create the following OS groups:
@@ -80,18 +80,18 @@ Make sure to use a combination of Linux distribution and Docker version that is
-## Set up XFS on SLES [ece-xfs-setup-sles12-onprem]
+## Set up XFS on SLES [ece-xfs-setup-sles12-onprem]
XFS is required to support disk space quotas for Elasticsearch data directories. Some Linux distributions such as RHEL and Rocky Linux already provide XFS as the default file system. On SLES 12 and 15, you need to set up an XFS file system and have quotas enabled.
Disk space quotas set a limit on the amount of disk space an Elasticsearch cluster node can use. Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space.
-::::{note}
+::::{note}
Using LVM, `mdadm`, or a combination of the two for block device management is possible, but the configuration is not covered here, nor is it provided as part of supporting Elastic Cloud Enterprise.
::::
-::::{important}
+::::{important}
You must use XFS and have quotas enabled on all allocators; otherwise, disk usage won’t display correctly.
::::
@@ -124,7 +124,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
-## Update the configurations settings [ece-update-config-sles-onprem]
+## Update the configuration settings [ece-update-config-sles-onprem]
1. Stop the Docker service:
@@ -162,7 +162,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
EOF
```
- ::::{important}
+ ::::{important}
The `net.ipv4.tcp_retries2` setting applies to all TCP connections and affects the reliability of communication with systems other than Elasticsearch clusters too. If your clusters communicate with external systems over a low-quality network, you may need to select a higher value for `net.ipv4.tcp_retries2`.
::::
@@ -178,7 +178,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
Add the following configuration values to the `/etc/security/limits.conf` file. These values are derived from our experience with the Elastic Cloud hosted offering and should be used for Elastic Cloud Enterprise as well.
- ::::{tip}
+ ::::{tip}
If you are using a user name other than `elastic`, adjust the configuration values accordingly.
::::
@@ -243,7 +243,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
-## Configure the Docker daemon [ece-configure-docker-daemon-sles12-onprem]
+## Configure the Docker daemon [ece-configure-docker-daemon-sles12-onprem]
1. Edit `/etc/docker/daemon.json`, and make sure that the following configuration values are present:
@@ -321,7 +321,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
5. Reboot your system to ensure that all configuration changes take effect:
- ```literal
+ ```sh
sudo reboot
```
@@ -333,7 +333,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
7. After rebooting, verify that your Docker settings persist as expected:
- ```literal
+ ```sh
sudo docker info | grep Root
```
@@ -342,4 +342,3 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
If the command returns `Docker Root Dir: /var/lib/docker`, then you need to troubleshoot the previous configuration steps until the Docker settings are applied successfully before continuing with the installation process. For more information, check [Custom Docker daemon options](https://docs.docker.com/engine/admin/systemd/#/custom-docker-daemon-options) in the Docker documentation.
8. Repeat these steps on other hosts that you want to use with Elastic Cloud Enterprise or follow the steps in the next section to start installing Elastic Cloud Enterprise.
-
diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu-cloud.md b/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu-cloud.md
index 28203ca466..86d667482a 100644
--- a/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu-cloud.md
+++ b/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu-cloud.md
@@ -13,16 +13,16 @@ The following instructions show you how to prepare your hosts on 20.04 LTS (Foca
* [Configure the Docker daemon options](#ece-configure-docker-daemon-ubuntu-cloud)
-## Install Docker [ece-install-docker-ubuntu-cloud]
+## Install Docker [ece-install-docker-ubuntu-cloud]
Install Docker LTS version 24.0 for Ubuntu 20.04 or 22.04.
-::::{important}
+::::{important}
Make sure to use a combination of Linux distribution and Docker version that is supported, following our official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise). Using unsupported combinations can cause multiple issues with your ECE environment, such as failures to create system deployments, failures to upgrade workload deployments, proxy timeouts, and more.
::::
-::::{note}
+::::{note}
Docker 25 and higher are not compatible with ECE 3.7.
::::
@@ -56,18 +56,18 @@ Docker 25 and higher are not compatible with ECE 3.7.
-## Set up XFS quotas [ece-xfs-setup-ubuntu-cloud]
+## Set up XFS quotas [ece-xfs-setup-ubuntu-cloud]
XFS is required to support disk space quotas for Elasticsearch data directories. Some Linux distributions such as RHEL and Rocky Linux already provide XFS as the default file system. On Ubuntu, you need to set up an XFS file system and have quotas enabled.
Disk space quotas set a limit on the amount of disk space an Elasticsearch cluster node can use. Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space.
-::::{note}
+::::{note}
Using LVM, `mdadm`, or a combination of the two for block device management is possible, but the configuration is not covered here, and it is not supported by Elastic Cloud Enterprise.
::::
-::::{important}
+::::{important}
You must use XFS and have quotas enabled on all allocators; otherwise, disk usage won’t display correctly.
::::
@@ -101,7 +101,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
-## Update the configurations settings [ece-update-config-ubuntu-cloud]
+## Update the configuration settings [ece-update-config-ubuntu-cloud]
1. Stop the Docker service:
@@ -139,7 +139,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
EOF
```
- ::::{important}
+ ::::{important}
The `net.ipv4.tcp_retries2` setting applies to all TCP connections and affects the reliability of communication with systems other than Elasticsearch clusters too. If your clusters communicate with external systems over a low-quality network, you may need to select a higher value for `net.ipv4.tcp_retries2`.
::::
@@ -154,7 +154,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
Add the following configuration values to the `/etc/security/limits.conf` file. These values are derived from our experience with the Elastic Cloud hosted offering and should be used for Elastic Cloud Enterprise as well.
- ::::{tip}
+ ::::{tip}
If you are using a user name other than `elastic`, adjust the configuration values accordingly.
::::
@@ -219,14 +219,14 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
-## Configure the Docker daemon options [ece-configure-docker-daemon-ubuntu-cloud]
+## Configure the Docker daemon options [ece-configure-docker-daemon-ubuntu-cloud]
-::::{tip}
+::::{tip}
Docker creates a bridge IP address that can conflict with IP addresses on your internal network. To avoid an IP address conflict, change the `--bip=172.17.42.1/16` parameter in our examples to something that you know will work. If there is no conflict, you can omit the `--bip` parameter. The `--bip` parameter is internal to the host and can be set to the same IP for each host in the cluster. More information on Docker daemon options can be found in the [dockerd command line reference](https://docs.docker.com/engine/reference/commandline/dockerd/).
::::
-::::{tip}
+::::{tip}
You can specify `--log-opt max-size` and `--log-opt max-file` to configure log rotation for the containers run by the Docker daemon.
::::
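As a sketch only, the two tips above map onto the Docker daemon configuration like this; whether you pass the settings as `dockerd` flags or in `/etc/docker/daemon.json`, the effect is the same. The bridge IP and rotation values below are placeholders, not recommendations:

```json
{
  "bip": "172.17.42.1/16",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
```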
@@ -292,13 +292,13 @@ You can specify `--log-opt max-size` and `--log-opt max-file` to define the Dock
6. Reboot your system to ensure that all configuration changes take effect:
- ```literal
+ ```sh
sudo reboot
```
7. After rebooting, verify that your Docker settings persist as expected:
- ```literal
+ ```sh
sudo docker info | grep Root
```
@@ -307,4 +307,3 @@ You can specify `--log-opt max-size` and `--log-opt max-file` to define the Dock
If the command returns `Docker Root Dir: /var/lib/docker`, then you need to troubleshoot the previous configuration steps until the Docker settings are applied successfully before continuing with the installation process. For more information, check [Custom Docker daemon options](https://docs.docker.com/engine/admin/systemd/#/custom-docker-daemon-options) in the Docker documentation.
8. Repeat these steps on other hosts that you want to use with Elastic Cloud Enterprise or follow the steps in the next section to start installing Elastic Cloud Enterprise.
-
diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu-onprem.md b/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu-onprem.md
index 302651ef83..c0a23de7db 100644
--- a/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu-onprem.md
+++ b/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu-onprem.md
@@ -13,16 +13,16 @@ The following instructions show you how to prepare your hosts on 20.04 LTS (Foca
* [Configure the Docker daemon options](#ece-configure-docker-daemon-ubuntu-onprem)
-## Install Docker [ece-install-docker-ubuntu-onprem]
+## Install Docker [ece-install-docker-ubuntu-onprem]
Install Docker LTS version 24.0 for Ubuntu 20.04 or 22.04.
-::::{important}
+::::{important}
Make sure to use a combination of Linux distribution and Docker version that is supported, following our official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise). Using unsupported combinations can cause multiple issues with your ECE environment, such as failures to create system deployments, failures to upgrade workload deployments, proxy timeouts, and more.
::::
-::::{note}
+::::{note}
Docker 25 and higher are not compatible with ECE 3.7.
::::
@@ -56,18 +56,18 @@ Docker 25 and higher are not compatible with ECE 3.7.
-## Set up XFS quotas [ece-xfs-setup-ubuntu-onprem]
+## Set up XFS quotas [ece-xfs-setup-ubuntu-onprem]
XFS is required to support disk space quotas for Elasticsearch data directories. Some Linux distributions such as RHEL and Rocky Linux already provide XFS as the default file system. On Ubuntu, you need to set up an XFS file system and have quotas enabled.
Disk space quotas set a limit on the amount of disk space an Elasticsearch cluster node can use. Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space.
-::::{note}
+::::{note}
Using LVM, `mdadm`, or a combination of the two for block device management is possible, but the configuration is not covered here, and it is not supported by Elastic Cloud Enterprise.
::::
-::::{important}
+::::{important}
You must use XFS and have quotas enabled on all allocators; otherwise, disk usage won’t display correctly.
::::
@@ -101,7 +101,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
-## Update the configurations settings [ece-update-config-ubuntu-onprem]
+## Update the configuration settings [ece-update-config-ubuntu-onprem]
1. Stop the Docker service:
@@ -139,7 +139,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
EOF
```
- ::::{important}
+ ::::{important}
The `net.ipv4.tcp_retries2` setting applies to all TCP connections and affects the reliability of communication with systems other than Elasticsearch clusters too. If your clusters communicate with external systems over a low-quality network, you may need to select a higher value for `net.ipv4.tcp_retries2`.
::::
@@ -154,7 +154,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
Add the following configuration values to the `/etc/security/limits.conf` file. These values are derived from our experience with the Elastic Cloud hosted offering and should be used for Elastic Cloud Enterprise as well.
- ::::{tip}
+ ::::{tip}
If you are using a user name other than `elastic`, adjust the configuration values accordingly.
::::
@@ -219,14 +219,14 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
-## Configure the Docker daemon options [ece-configure-docker-daemon-ubuntu-onprem]
+## Configure the Docker daemon options [ece-configure-docker-daemon-ubuntu-onprem]
-::::{tip}
+::::{tip}
Docker creates a bridge IP address that can conflict with IP addresses on your internal network. To avoid an IP address conflict, change the `--bip=172.17.42.1/16` parameter in our examples to something that you know will work. If there is no conflict, you can omit the `--bip` parameter. The `--bip` parameter is internal to the host and can be set to the same IP for each host in the cluster. More information on Docker daemon options can be found in the [dockerd command line reference](https://docs.docker.com/engine/reference/commandline/dockerd/).
::::
-::::{tip}
+::::{tip}
You can specify `--log-opt max-size` and `--log-opt max-file` to configure log rotation for the containers run by the Docker daemon.
::::
@@ -292,13 +292,13 @@ You can specify `--log-opt max-size` and `--log-opt max-file` to define the Dock
6. Reboot your system to ensure that all configuration changes take effect:
- ```literal
+ ```sh
sudo reboot
```
7. After rebooting, verify that your Docker settings persist as expected:
- ```literal
+ ```sh
sudo docker info | grep Root
```
@@ -307,4 +307,3 @@ You can specify `--log-opt max-size` and `--log-opt max-file` to define the Dock
If the command returns `Docker Root Dir: /var/lib/docker`, then you need to troubleshoot the previous configuration steps until the Docker settings are applied successfully before continuing with the installation process. For more information, check [Custom Docker daemon options](https://docs.docker.com/engine/admin/systemd/#/custom-docker-daemon-options) in the Docker documentation.
8. Repeat these steps on other hosts that you want to use with Elastic Cloud Enterprise or follow the steps in the next section to start installing Elastic Cloud Enterprise.
-
diff --git a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
index cf5820db76..a3736cb6d6 100644
--- a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
+++ b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
@@ -396,7 +396,7 @@ The [`elasticsearch output`](https://www.elastic.co/guide/en/logstash/current/pl
You can customize roles in {{es}}. Check out [creating custom roles](../../users-roles/cluster-or-deployment-auth/native.md)
-```logstash
+```yaml
kind: Secret
apiVersion: v1
metadata:
@@ -418,7 +418,7 @@ stringData:
The [`elastic_integration filter`](https://www.elastic.co/guide/en/logstash/current/plugins-filters-elastic_integration.html) plugin allows the use of [`ElasticsearchRef`](configuration-logstash.md#k8s-logstash-esref) and environment variables.
-```logstash
+```json
elastic_integration {
pipeline_name => "logstash-pipeline"
hosts => [ "${ECK_ES_HOSTS}" ]
diff --git a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md
index fde3dcb275..6e8e5077c7 100644
--- a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md
+++ b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md
@@ -9,7 +9,7 @@ mapped_pages:
# Kibana task manager health monitoring [task-manager-health-monitoring]
-::::{warning}
+::::{warning}
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
::::
@@ -27,7 +27,7 @@ $ curl -X GET api/task_manager/_health
Monitoring the `_health` endpoint of each {{kib}} instance in the cluster is the recommended method of ensuring confidence in mission-critical services such as Alerting, Actions, and Reporting.
-## Configuring the monitored health statistics [task-manager-configuring-health-monitoring]
+## Configuring the monitored health statistics [task-manager-configuring-health-monitoring]
The health monitoring API monitors the performance of Task Manager out of the box. However, certain performance considerations are deployment-specific, and you can configure them.
@@ -53,7 +53,7 @@ xpack.task_manager.monitored_task_execution_thresholds:
-## Consuming health stats [task-manager-consuming-health-stats]
+## Consuming health stats [task-manager-consuming-health-stats]
The health API is best consumed via the `/api/task_manager/_health` endpoint.
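For example, a minimal authenticated request against a local {{kib}} instance might look like the following; the base URL and credentials are placeholders for your own deployment:

```sh
# Placeholders: adjust the Kibana URL and credentials for your deployment.
curl -u elastic:changeme "http://localhost:5601/api/task_manager/_health"
```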
@@ -79,14 +79,14 @@ By default, the health API runs at a regular cadence, and each time it runs, it
This message looks like:
-```log
+```txt
Detected potential performance issue with Task Manager. Set 'xpack.task_manager.monitored_stats_health_verbose_log.enabled: true' in your Kibana.yml to enable debug logging
```
If this message appears, set [`xpack.task_manager.monitored_stats_health_verbose_log.enabled`](https://www.elastic.co/guide/en/kibana/current/task-manager-settings-kb.html#task-manager-settings) to `true` in your `kibana.yml`. This will start logging the health metrics at either a `warn` or `error` log level, depending on the detected severity of the potential problem.
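In `kibana.yml`, that setting looks like this:

```yaml
xpack.task_manager.monitored_stats_health_verbose_log.enabled: true
```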
-## Making sense of Task Manager health stats [making-sense-of-task-manager-health-stats]
+## Making sense of Task Manager health stats [making-sense-of-task-manager-health-stats]
The health monitoring API exposes three sections: `configuration`, `workload`, and `runtime`:
@@ -103,7 +103,7 @@ The root `status` indicates the `status` of the system overall.
The Runtime `status` indicates whether task executions have exceeded any of the [configured health thresholds](#task-manager-configuring-health-monitoring). An `OK` status means none of the thresholds have been exceeded. A `Warning` status means that at least one warning threshold has been exceeded. An `Error` status means that at least one error threshold has been exceeded.
-::::{important}
+::::{important}
Some tasks (such as [connectors](../manage-connectors.md)) will incorrectly report their status as successful even if the task failed. The runtime and workload blocks will return data about successes and failures and will not take this into consideration.
To get a better sense of action failures, refer to the [Event log index](../../explore-analyze/alerts/kibana/event-log-index.md) for more accurate context on failures and successes.
@@ -114,4 +114,3 @@ To get a better sense of action failures, please refer to the [Event log index](
The Capacity Estimation `status` indicates the sufficiency of the observed capacity. An `OK` status means capacity is sufficient. A `Warning` status means that capacity is sufficient for the scheduled recurring tasks, but non-recurring tasks often cause the cluster to exceed capacity. An `Error` status means that there is insufficient capacity across all types of tasks.
By monitoring the `status` of the system overall, and the `status` of specific task types of interest, you can evaluate the health of the {{kib}} Task Management system.
-
diff --git a/deploy-manage/tools/cross-cluster-replication/ccr-tutorial-initial-setup.md b/deploy-manage/tools/cross-cluster-replication/ccr-tutorial-initial-setup.md
index 1bb5907c4f..c1030f66a1 100644
--- a/deploy-manage/tools/cross-cluster-replication/ccr-tutorial-initial-setup.md
+++ b/deploy-manage/tools/cross-cluster-replication/ccr-tutorial-initial-setup.md
@@ -70,22 +70,22 @@ mapped_pages:
}
```
- ::::{important}
+ ::::{important}
Existing data on the cluster will not be replicated by `_ccr/auto_follow` even though the patterns may match. This function will only replicate newly created backing indices (as part of the data stream).
::::
- ::::{important}
+ ::::{important}
Use `leader_index_exclusion_patterns` to avoid recursion.
::::
- ::::{tip}
+ ::::{tip}
`follow_index_pattern` allows lowercase characters only.
::::
- ::::{tip}
+ ::::{tip}
This step cannot be executed via the {{kib}} UI because the UI lacks an exclusion pattern option. Use the API for this step.
::::
@@ -93,7 +93,7 @@ mapped_pages:
This example uses the input generator to demonstrate the document count in the clusters. Reconfigure this section to suit your own use case.
- ```logstash
+ ```json
### On Logstash server ###
### This is a logstash config file ###
input {
@@ -111,12 +111,12 @@ mapped_pages:
}
```
- ::::{important}
+ ::::{important}
The key point is that when `cluster A` is down, all traffic will be automatically redirected to `cluster B`. Once `cluster A` comes back, traffic is automatically redirected back to `cluster A` again. This is achieved by the `hosts` option, where multiple {{es}} cluster endpoints are specified in the array `[clusterA, clusterB]`.
::::
- ::::{tip}
+ ::::{tip}
Set up the same password for the same user on both clusters to use this load-balancing feature.
::::
@@ -148,5 +148,3 @@ mapped_pages:
```console
GET logs*/_search?size=0
```
-
-
diff --git a/docset.yml b/docset.yml
index 838501ee01..05de5870cf 100644
--- a/docset.yml
+++ b/docset.yml
@@ -493,3 +493,188 @@ subs:
icon-bug: "pass:[]"
icon-checkInCircleFilled: "pass:[]"
icon-warningFilled: "pass:[]"
+
+external_hosts:
+ - 50.10
+ - aka.ms
+ - aliyun.com
+ - amazon.com
+ - amazonaws.com
+ - amp.dev
+ - android.com
+ - ansible.com
+ - anthropic.com
+ - apache.org
+ - apple.com
+ - arxiv.org
+ - atlassian.com
+ - azure.com
+ - bouncycastle.org
+ - cbor.io
+ - census.gov
+ - cert-manager.io
+ - chromium.org
+ - cisa.gov
+ - cisecurity.org
+ - cmu.edu
+ - cncf.io
+ - co.
+ - codesandbox.io
+ - cohere.com
+ - columbia.edu
+ - concourse-ci.org
+ - contentstack.io
+ - curl.se
+ - dbeaver.io
+ - dbvis.com
+ - deque.com
+ - die.net
+ - digitalocean.com
+ - direnv.net
+ - dnschecker.org
+ - docker.com
+ - dso.mil
+ - eicar.org
+ - ela.st
+ - elastic-cloud.com
+ - elasticsearch.org
+ - elstc.co
+ - epsg.org
+ - example.com
+ - falco.org
+ - freedesktop.org
+ - gdal.org
+ - gin-gonic.com
+ - git-lfs.com
+ - github.io
+ - githubusercontent.com
+ - go.dev
+ - godoc.org
+ - golang.org
+ - google.com
+ - google.dev
+ - googleapis.com
+ - googleblog.com
+ - gorillatoolkit.org
+ - gradle.org
+ - handlebarsjs.com
+ - haxx.se
+ - helm.io
+ - helm.sh
+ - heroku.com
+ - huggingface.co
+ - ibm.com
+ - ietf.org
+ - ijmlc.org
+ - istio.io
+ - jaegertracing.io
+ - java.net
+ - javadoc.io
+ - javalin.io
+ - jenkins.io
+ - jina.ai
+ - json.org
+ - kernel.org
+ - kubernetes.io
+ - letsencrypt.org
+ - linkerd.io
+ - lmstudio.ai
+ - loft.sh
+ - man7.org
+ - mariadb.org
+ - markdownguide.org
+ - maven.org
+ - maxmind.com
+ - metacpan.org
+ - micrometer.io
+ - microsoft.com
+ - microstrategy.com
+ - min.io
+ - minio.io
+ - mistral.ai
+ - mit.edu
+ - mitre.org
+ - momentjs.com
+ - mozilla.org
+ - mvnrepository.com
+ - mysql.com
+ - navattic.com
+ - nginx.com
+ - nginx.org
+ - ngrok.com
+ - nist.gov
+ - nlog-project.org
+ - nodejs.dev
+ - nodejs.org
+ - npmjs.com
+ - ntp.org
+ - nuget.org
+ - numeraljs.com
+ - oasis-open.org
+ - office.com
+ - okta.com
+ - openai.com
+ - openebs.io
+ - opengroup.org
+ - openid.net
+ - openjdk.org
+ - openmaptiles.org
+ - openpolicyagent.org
+ - openshift.com
+ - openssl.org
+ - openstreetmap.org
+ - opentelemetry.io
+ - openweathermap.org
+ - operatorhub.io
+ - oracle.com
+ - osquery.io
+ - outlook.com
+ - owasp.org
+ - pagerduty.com
+ - palletsprojects.com
+ - pastebin.com
+ - playwright.dev
+ - podman.io
+ - postgresql.org
+ - pypi.org
+ - python.org
+ - qlik.com
+ - readthedocs.io
+ - recurly.com
+ - redhat.com
+ - rust-lang.org
+ - salesforce.com
+ - scikit-learn.org
+ - sdkman.io
+ - searchkit.co
+ - semver.org
+ - serilog.net
+ - sigstore.dev
+ - slack.com
+ - snowballstem.org
+ - sonatype.org
+ - sourceforge.net
+ - sourcemaps.info
+ - spring.io
+ - sql-workbench.eu
+ - stackexchange.com
+ - stunnel.org
+ - swiftype.com
+ - tableau.com
+ - talosintelligence.com
+ - telerik.com
+ - terraform.io
+ - trimet.org
+ - umd.edu
+ - urlencoder.org
+ - vaultproject.io
+ - victorops.com
+ - virustotal.com
+ - w3.org
+ - web.dev
+ - webhook.site
+ - wikipedia.org
+ - wolfi.dev
+ - wttr.in
+ - yaml.org
+ - youtube.com
diff --git a/explore-analyze/visualize/maps/maps-connect-to-ems.md b/explore-analyze/visualize/maps/maps-connect-to-ems.md
index 524ff0511c..988d64ae63 100644
--- a/explore-analyze/visualize/maps/maps-connect-to-ems.md
+++ b/explore-analyze/visualize/maps/maps-connect-to-ems.md
@@ -47,7 +47,7 @@ curl -I 'https://tiles.maps.elastic.co/v9.0/manifest?elastic_tile_service_tos=ag
Server response
-```regex
+```txt
HTTP/2 200
server: BaseHTTP/0.6 Python/3.11.4
date: Mon, 20 Nov 2023 15:08:46 GMT
@@ -71,7 +71,7 @@ alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
::::::
::::::{tab-item} Request
-```regex
+```txt
Host: tiles.maps.elastic.co
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/119.0
Accept: */*
@@ -90,7 +90,7 @@ TE: trailers
::::::
::::::{tab-item} Response
-```regex
+```txt
server: BaseHTTP/0.6 Python/3.11.4
date: Mon, 20 Nov 2023 17:53:10 GMT
content-type: application/json; charset=utf-8
@@ -127,7 +127,7 @@ $ curl -I 'https://tiles.maps.elastic.co/data/v3/1/1/0.pbf?elastic_tile_service_
Server response
-```regex
+```txt
HTTP/2 200
content-encoding: gzip
content-length: 144075
@@ -153,7 +153,7 @@ alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
::::::
::::::{tab-item} Request
-```regex
+```txt
Host: tiles.maps.elastic.co
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/119.0
Accept: */*
@@ -170,7 +170,7 @@ TE: trailers
::::::
::::::{tab-item} Response
-```regex
+```txt
content-encoding: gzip
content-length: 101691
access-control-allow-origin: *
@@ -208,7 +208,7 @@ curl -I 'https://tiles.maps.elastic.co/styles/osm-bright-desaturated/sprite.png'
Server response
-```regex
+```txt
HTTP/2 200
content-length: 17181
access-control-allow-origin: *
@@ -231,7 +231,7 @@ alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
::::::
::::::{tab-item} Request
-```regex
+```txt
Host: tiles.maps.elastic.co
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/119.0
Accept: image/avif,image/webp,*/*
@@ -250,7 +250,7 @@ TE: trailers
::::::
::::::{tab-item} Response
-```regex
+```txt
content-length: 17181
access-control-allow-origin: *
access-control-allow-methods: GET, OPTIONS, HEAD
@@ -290,7 +290,7 @@ curl -I 'https://vector.maps.elastic.co/v9.0/manifest?elastic_tile_service_tos=a
Server response
-```regex
+```txt
HTTP/2 200
x-guploader-uploadid: ABPtcPp_BvMdBDO5jVlutETVHmvpOachwjilw4AkIKwMrOQJ4exR9Eln4g0LkW3V_LLSEpvjYLtUtFmO0Uwr61XXUhoP_A
x-goog-generation: 1689593295246576
@@ -320,7 +320,7 @@ alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
::::::
::::::{tab-item} Request
-```regex
+```txt
Host: vector.maps.elastic.co
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/119.0
Accept: */*
@@ -338,7 +338,7 @@ Cache-Control: no-cache
::::::
::::::{tab-item} Response
-```regex
+```txt
x-guploader-uploadid: ABPtcPoUFrCmjBeebnfRxSZp44ZHsZ-_iQg7794RU1Z7Lb2cNNxXsMRkIDa5s7VBEfyehvo-_9rcm1A3HfYW8geguUxKrw
x-goog-generation: 1689593295246576
x-goog-metageneration: 1
@@ -381,7 +381,7 @@ curl -I 'https://vector.maps.elastic.co/files/world_countries_v7.topo.json?elast
Server response
-```regex
+```txt
HTTP/2 200
x-guploader-uploadid: ABPtcPpmMffchVgfHIr-SSC00WORo145oV-1q0asjqRvjLV_7cIgyfLRfofXV-BG7huMYABFypblcgdgXRBARhpo2c88ow
x-goog-generation: 1689593325442971
@@ -411,7 +411,7 @@ alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
::::::
::::::{tab-item} Request
-```regex
+```txt
Host: vector.maps.elastic.co
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/119.0
Accept: */*
@@ -429,7 +429,7 @@ Cache-Control: no-cache
::::::
::::::{tab-item} Response
-```regex
+```txt
x-guploader-uploadid: ABPtcPqIDSg5tyavvwwtJQa8a8iycoXOCkHBp_2YJbJJnQgb5XMD7nFwRUogg00Ou27VFIs95v7L99OMnvXR1bcb9RW-xQ
x-goog-generation: 1689593325442971
x-goog-metageneration: 1
@@ -631,4 +631,3 @@ With {{hosted-ems}} running, add the `map.emsUrl` configuration key in your [kib
### Logging [elastic-maps-server-logging]
Logs are generated in [ECS JSON format](https://www.elastic.co/guide/en/ecs/{{ecs_version}}) and emitted to the standard output and to `/var/log/elastic-maps-server/elastic-maps-server.log`. The server won’t rotate the logs automatically, but the `logrotate` tool is installed in the image. Mount `/dev/null` to the default log path if you want to disable the output to that file.
-
diff --git a/manage-data/ingest/transform-enrich/example-parse-logs.md b/manage-data/ingest/transform-enrich/example-parse-logs.md
index 84550859eb..14f3c06e94 100644
--- a/manage-data/ingest/transform-enrich/example-parse-logs.md
+++ b/manage-data/ingest/transform-enrich/example-parse-logs.md
@@ -13,7 +13,7 @@ In this example tutorial, you’ll use an [ingest pipeline](ingest-pipelines.md)
The logs you want to parse look similar to this:
-```log
+```txt
212.87.37.154 - - [05/May/2099:16:21:15 +0000] "GET /favicon.ico HTTP/1.1" 200 3638 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36"
```
diff --git a/raw-migrated-files/docs-content/serverless/observability-correlate-application-logs.md b/raw-migrated-files/docs-content/serverless/observability-correlate-application-logs.md
index e5c4fd0bf8..2c0359b326 100644
--- a/raw-migrated-files/docs-content/serverless/observability-correlate-application-logs.md
+++ b/raw-migrated-files/docs-content/serverless/observability-correlate-application-logs.md
@@ -9,7 +9,7 @@ The format of your logs (structured or plaintext) influences your log ingestion
Logs are typically produced as either plaintext or structured. Plaintext logs contain only text and have no special formatting, for example:
-```log
+```txt
2019-08-06T12:09:12.375Z INFO:spring-petclinic: Tomcat started on port(s): 8080 (http) with context path, org.springframework.boot.web.embedded.tomcat.TomcatWebServer
2019-08-06T12:09:12.379Z INFO:spring-petclinic: Started PetClinicApplication in 7.095 seconds (JVM running for 9.082), org.springframework.samples.petclinic.PetClinicApplication
2019-08-06T14:08:40.199Z DEBUG:spring-petclinic: init find form, org.springframework.samples.petclinic.owner.OwnerController
diff --git a/raw-migrated-files/docs-content/serverless/observability-parse-log-data.md b/raw-migrated-files/docs-content/serverless/observability-parse-log-data.md
index 6bae3cc8fb..120cc51a74 100644
--- a/raw-migrated-files/docs-content/serverless/observability-parse-log-data.md
+++ b/raw-migrated-files/docs-content/serverless/observability-parse-log-data.md
@@ -24,7 +24,7 @@ Make your logs more useful by extracting structured fields from your unstructure
Follow the steps below to see how the following unstructured log data is indexed by default:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
```
@@ -306,7 +306,7 @@ Check the following common issues and solutions with timestamps:
Extracting the `log.level` field lets you filter by severity and focus on critical issues. This section shows you how to extract the `log.level` field from this example log:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
```
@@ -393,7 +393,7 @@ Once you’ve extracted the `log.level` field, you can query for high-severity l
Let’s say you have the following logs with varying severities:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
@@ -474,7 +474,7 @@ The `host.ip` field is part of the [Elastic Common Schema (ECS)](https://www.ela
This section shows you how to extract the `host.ip` field from the following example logs and query based on the extracted fields:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
@@ -742,7 +742,7 @@ By default, an ingest pipeline sends your log data to a single data stream. To s
This section shows you how to use a reroute processor to send the high-severity logs (`WARN` or `ERROR`) from the following example logs to a specific data stream and keep the regular logs (`DEBUG` and `INFO`) in the default data stream:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/search-with-synonyms.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/search-with-synonyms.md
index fe904aa7db..22d786b354 100644
--- a/raw-migrated-files/elasticsearch/elasticsearch-reference/search-with-synonyms.md
+++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/search-with-synonyms.md
@@ -16,19 +16,19 @@ In order to use synonyms sets in {{es}}, you need to:
* [Configure synonyms token filters and analyzers](../../../solutions/search/full-text/search-with-synonyms.md#synonyms-synonym-token-filters)
-## Store your synonyms set [synonyms-store-synonyms]
+## Store your synonyms set [synonyms-store-synonyms]
Your synonyms sets need to be stored in {{es}} so your analyzers can refer to them. There are three ways to store your synonyms sets:
-### Synonyms API [synonyms-store-synonyms-api]
+### Synonyms API [synonyms-store-synonyms-api]
You can use the [synonyms APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/synonyms-apis.html) to manage synonyms sets. This is the most flexible approach, as it allows you to dynamically define and modify synonyms sets.
Changes in your synonyms sets will automatically reload the associated analyzers.
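For example, a synonyms set can be created or replaced with a single request. The set name and rules here are illustrative only:

```console
PUT _synonyms/my-synonyms-set
{
  "synonyms_set": [
    {
      "id": "rule-1",
      "synonyms": "hello, hi, howdy"
    }
  ]
}
```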
-### Synonyms File [synonyms-store-synonyms-file]
+### Synonyms File [synonyms-store-synonyms-file]
You can store your synonyms set in a file.
@@ -36,7 +36,7 @@ A synonyms set file needs to be uploaded to all your cluster nodes, and be locat
An example synonyms file:
-```synonyms
+```markdown
# Blank lines and lines starting with pound are comments.
# Explicit mappings match any token sequence on the left hand side of "=>"
@@ -77,28 +77,28 @@ When a synonyms set is updated, search analyzers that use it need to be refreshe
This manual syncing and reloading makes this approach less flexible than using the [synonyms API](../../../solutions/search/full-text/search-with-synonyms.md#synonyms-store-synonyms-api).
-### Inline [synonyms-store-synonyms-inline]
+### Inline [synonyms-store-synonyms-inline]
You can test your synonyms by adding them directly inline in your token filter definition.
-::::{warning}
+::::{warning}
Inline synonyms are not recommended for production usage. A large number of inline synonyms increases cluster size unnecessarily and can lead to performance issues.
::::
-### Configure synonyms token filters and analyzers [synonyms-synonym-token-filters]
+### Configure synonyms token filters and analyzers [synonyms-synonym-token-filters]
Once your synonyms sets are created, you can start configuring your token filters and analyzers to use them.
-::::{warning}
+::::{warning}
Synonyms sets must exist before they can be added to indices. If an index is created referencing a nonexistent synonyms set, the index will remain in a partially created and inoperable state. The only way to recover from this scenario is to ensure the synonyms set exists, then either delete and re-create the index, or close and re-open the index.
::::
-::::{warning}
+::::{warning}
Invalid synonym rules can cause errors when applying analyzer changes. For reloadable analyzers, this prevents reloading and applying changes. You must correct errors in the synonym rules and reload the analyzer.
An index with invalid synonym rules cannot be reopened, making it inoperable when:
@@ -118,7 +118,7 @@ An index with invalid synonym rules cannot be reopened, making it inoperable whe
Check each synonym token filter documentation for configuration details and instructions on adding it to an analyzer.
-### Test your analyzer [synonyms-test-analyzer]
+### Test your analyzer [synonyms-test-analyzer]
You can test an analyzer configuration without modifying your index settings. Use the [analyze API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-analyze.html) to test your analyzer chain:
@@ -138,7 +138,7 @@ GET /_analyze
```
-### Apply synonyms at index or search time [synonyms-apply-synonyms]
+### Apply synonyms at index or search time [synonyms-apply-synonyms]
Analyzers can be applied at [index time or search time](../../../manage-data/data-store/text-analysis/index-search-analysis.md).
@@ -184,4 +184,3 @@ The following example adds `my_analyzer` as a search analyzer to the `title` fie
}
}
```
-
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md
index d6201c012f..ca386c9978 100644
--- a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md
+++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md
@@ -1082,7 +1082,7 @@ GET cohere-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `cohere-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "cohere-embeddings",
@@ -1145,7 +1145,7 @@ GET elser-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `cohere-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "elser-embeddings",
@@ -1203,7 +1203,7 @@ GET hugging-face-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `hugging-face-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "hugging-face-embeddings",
@@ -1270,7 +1270,7 @@ GET openai-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `openai-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "openai-embeddings",
@@ -1328,7 +1328,7 @@ GET azure-openai-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `azure-openai-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "azure-openai-embeddings",
@@ -1386,7 +1386,7 @@ GET azure-ai-studio-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `azure-ai-studio-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "azure-ai-studio-embeddings",
@@ -1502,7 +1502,7 @@ GET mistral-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `mistral-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "mistral-embeddings",
@@ -1560,7 +1560,7 @@ GET amazon-bedrock-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `amazon-bedrock-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "amazon-bedrock-embeddings",
@@ -1618,7 +1618,7 @@ GET alibabacloud-ai-search-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `alibabacloud-ai-search-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "alibabacloud-ai-search-embeddings",
diff --git a/raw-migrated-files/observability-docs/observability/apm-common-problems.md b/raw-migrated-files/observability-docs/observability/apm-common-problems.md
index 7e2e411ae6..20d18876e1 100644
--- a/raw-migrated-files/observability-docs/observability/apm-common-problems.md
+++ b/raw-migrated-files/observability-docs/observability/apm-common-problems.md
@@ -127,7 +127,7 @@ I/O Timeouts can occur when your timeout settings across the stack are not confi
You may see an error like the one below in the {{apm-agent}} logs, and/or a similar error on the APM Server side:
-```logs
+```txt
[ElasticAPM] APM Server responded with an error:
"read tcp 123.34.22.313:8200->123.34.22.40:41602: i/o timeout"
```
@@ -156,7 +156,7 @@ The symptom of a mapping explosion is that transactions and spans are not indexe
In the agent logs, you won’t see a sign of failures as the APM server asynchronously sends the data it received from the agents to {{es}}. However, the APM server and {{es}} log a warning like this:
-```logs
+```txt
{\"type\":\"illegal_argument_exception\",\"reason\":\"Limit of total fields [1000] in [INDEX_NAME] has been exceeded\"}
```
diff --git a/raw-migrated-files/observability-docs/observability/application-logs.md b/raw-migrated-files/observability-docs/observability/application-logs.md
index f20c7f73aa..81df8cc230 100644
--- a/raw-migrated-files/observability-docs/observability/application-logs.md
+++ b/raw-migrated-files/observability-docs/observability/application-logs.md
@@ -9,7 +9,7 @@ The format of your logs (structured or plaintext) influences your log ingestion
Logs are typically produced as either plaintext or structured. Plaintext logs contain only text and have no special formatting, for example:
-```log
+```txt
2019-08-06T12:09:12.375Z INFO:spring-petclinic: Tomcat started on port(s): 8080 (http) with context path, org.springframework.boot.web.embedded.tomcat.TomcatWebServer
2019-08-06T12:09:12.379Z INFO:spring-petclinic: Started PetClinicApplication in 7.095 seconds (JVM running for 9.082), org.springframework.samples.petclinic.PetClinicApplication
2019-08-06T14:08:40.199Z DEBUG:spring-petclinic: init find form, org.springframework.samples.petclinic.owner.OwnerController
diff --git a/raw-migrated-files/observability-docs/observability/logs-parse.md b/raw-migrated-files/observability-docs/observability/logs-parse.md
index 0802eef084..0cffc98997 100644
--- a/raw-migrated-files/observability-docs/observability/logs-parse.md
+++ b/raw-migrated-files/observability-docs/observability/logs-parse.md
@@ -16,7 +16,7 @@ Make your logs more useful by extracting structured fields from your unstructure
Follow the steps below to see how the following unstructured log data is indexed by default:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
```
@@ -291,7 +291,7 @@ Check the following common issues and solutions with timestamps:
Extracting the `log.level` field lets you filter by severity and focus on critical issues. This section shows you how to extract the `log.level` field from this example log:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
```
@@ -378,7 +378,7 @@ Once you’ve extracted the `log.level` field, you can query for high-severity l
Let’s say you have the following logs with varying severities:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
@@ -459,7 +459,7 @@ The `host.ip` field is part of the [Elastic Common Schema (ECS)](https://www.ela
This section shows you how to extract the `host.ip` field from the following example logs and query based on the extracted fields:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
@@ -727,7 +727,7 @@ By default, an ingest pipeline sends your log data to a single data stream. To s
This section shows you how to use a reroute processor to send the high-severity logs (`WARN` or `ERROR`) from the following example logs to a specific data stream and keep the regular logs (`DEBUG` and `INFO`) in the default data stream:
-```log
+```txt
2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.
2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed.
2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue.
diff --git a/solutions/observability/apps/configure-apm-agent-central-configuration.md b/solutions/observability/apps/configure-apm-agent-central-configuration.md
index 492b4acc16..834532d52a 100644
--- a/solutions/observability/apps/configure-apm-agent-central-configuration.md
+++ b/solutions/observability/apps/configure-apm-agent-central-configuration.md
@@ -58,13 +58,13 @@ You may see either of the following HTTP 403 errors from APM Server when it atte
APM agent log:
-```log
+```txt
"Your Elasticsearch configuration does not support agent config queries. Check your configurations at `output.elasticsearch` or `apm-server.agent.config.elasticsearch`."
```
APM Server log:
-```log
+```txt
rejecting fetch request: no valid elasticsearch config
```
@@ -76,4 +76,3 @@ To fix this error, ensure that APM Server has all the required privileges. For m
#### HTTP 401 errors [_http_401_errors]
If you get an HTTP 401 error from APM Server, make sure that you’re using an API key that is configured to **Beats**. For details on how to create and configure a compatible API key, refer to [Create an API key for writing events](grant-access-using-api-keys.md#apm-beats-api-key-publish).
-
diff --git a/solutions/search/semantic-search/cohere-es.md b/solutions/search/semantic-search/cohere-es.md
index 74e3b5ddf3..f2d434f534 100644
--- a/solutions/search/semantic-search/cohere-es.md
+++ b/solutions/search/semantic-search/cohere-es.md
@@ -309,10 +309,9 @@ for document in response.documents:
The response will look similar to this:
-```consol-result
+```console-result
Query: What is biosimilarity?
Response: Biosimilarity is based on the comparability concept, which has been used successfully for several decades to ensure close similarity of a biological product before and after a manufacturing change. Over the last 10 years, experience with biosimilars has shown that even complex biotechnology-derived proteins can be copied successfully.
Sources:
Interchangeability of Biosimilars: A European Perspective: (...)
```
-
diff --git a/solutions/search/semantic-search/semantic-search-elser.md b/solutions/search/semantic-search/semantic-search-elser.md
index b05ea6c165..565ee0c107 100644
--- a/solutions/search/semantic-search/semantic-search-elser.md
+++ b/solutions/search/semantic-search/semantic-search-elser.md
@@ -164,7 +164,7 @@ GET my-index/_search
The result is the top 10 documents that are closest in meaning to your query text from the `my-index` index sorted by their relevancy. The result also contains the extracted tokens for each of the relevant search results with their weights. Tokens are learned associations capturing relevance; they are not synonyms. To learn more about what tokens are, refer to [this page](../../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md#elser-tokens). It is possible to exclude tokens from source; refer to [this section](../vector/sparse-vector-elser.md#save-space) to learn more.
-```consol-result
+```console-result
"hits": {
"total": {
"value": 10000,
diff --git a/solutions/search/semantic-search/semantic-search-inference.md b/solutions/search/semantic-search/semantic-search-inference.md
index 5928409d91..e797533284 100644
--- a/solutions/search/semantic-search/semantic-search-inference.md
+++ b/solutions/search/semantic-search/semantic-search-inference.md
@@ -1086,7 +1086,7 @@ GET cohere-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `cohere-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "cohere-embeddings",
@@ -1149,7 +1149,7 @@ GET elser-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `cohere-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "elser-embeddings",
@@ -1207,7 +1207,7 @@ GET hugging-face-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `hugging-face-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "hugging-face-embeddings",
@@ -1274,7 +1274,7 @@ GET openai-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `openai-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "openai-embeddings",
@@ -1332,7 +1332,7 @@ GET azure-openai-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `azure-openai-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "azure-openai-embeddings",
@@ -1390,7 +1390,7 @@ GET azure-ai-studio-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `azure-ai-studio-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "azure-ai-studio-embeddings",
@@ -1506,7 +1506,7 @@ GET mistral-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `mistral-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "mistral-embeddings",
@@ -1564,7 +1564,7 @@ GET amazon-bedrock-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `amazon-bedrock-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "amazon-bedrock-embeddings",
@@ -1622,7 +1622,7 @@ GET alibabacloud-ai-search-embeddings/_search
As a result, you receive the top 10 documents that are closest in meaning to the query from the `alibabacloud-ai-search-embeddings` index sorted by their proximity to the query:
-```consol-result
+```console-result
"hits": [
{
"_index": "alibabacloud-ai-search-embeddings",
diff --git a/solutions/search/vector/sparse-vector-elser.md b/solutions/search/vector/sparse-vector-elser.md
index 5ce91d5fd8..e523f751da 100644
--- a/solutions/search/vector/sparse-vector-elser.md
+++ b/solutions/search/vector/sparse-vector-elser.md
@@ -164,7 +164,7 @@ GET my-index/_search
The result is the top 10 documents that are closest in meaning to your query text from the `my-index` index, sorted by their relevancy. The result also contains the extracted tokens for each of the relevant search results with their weights. Tokens are learned associations capturing relevance; they are not synonyms. To learn more about what tokens are, refer to [this page](../../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md#elser-tokens). It is possible to exclude tokens from source; refer to [this section](#save-space) to learn more.
-```consol-result
+```console-result
"hits": {
"total": {
"value": 10000,
diff --git a/troubleshoot/elasticsearch/high-jvm-memory-pressure.md b/troubleshoot/elasticsearch/high-jvm-memory-pressure.md
index 3ada700bfa..fa0dc6e8ed 100644
--- a/troubleshoot/elasticsearch/high-jvm-memory-pressure.md
+++ b/troubleshoot/elasticsearch/high-jvm-memory-pressure.md
@@ -51,7 +51,7 @@ JVM Memory Pressure = `used_in_bytes` / `max_in_bytes`
As memory usage increases, garbage collection becomes more frequent and takes longer. You can track the frequency and length of garbage collection events in [`elasticsearch.log`](../../deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md). For example, the following event states that {{es}} spent more than 50% (21 seconds) of the last 40 seconds performing garbage collection.
-```log
+```txt
[timestamp_short_interval_from_last][INFO ][o.e.m.j.JvmGcMonitorService] [node_id] [gc][number] overhead, spent [21s] collecting in the last [40s]
```
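The two values in the formula above come from the node stats API, scoped to the old generation pool. A minimal sketch of the request:

```console
GET _nodes/stats?filter_path=nodes.*.jvm.mem.pools.old
```

Dividing the returned `used_in_bytes` by `max_in_bytes` gives the JVM memory pressure for each node.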
diff --git a/troubleshoot/ingest/fleet/common-problems.md b/troubleshoot/ingest/fleet/common-problems.md
index faed3c5d75..2870141a9b 100644
--- a/troubleshoot/ingest/fleet/common-problems.md
+++ b/troubleshoot/ingest/fleet/common-problems.md
@@ -248,7 +248,7 @@ You will also need to set `ssl.verification_mode: none` in the Output settings i
To enroll in {{fleet}}, {{agent}} must connect to the {{fleet-server}} instance. If the agent is unable to connect, you see the following failure:
-```output
+```txt
fail to enroll: fail to execute request to {fleet-server}:Post http://fleet-server:8220/api/fleet/agents/enroll?: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```
diff --git a/troubleshoot/observability/apm-agent-go/apm-go-agent.md b/troubleshoot/observability/apm-agent-go/apm-go-agent.md
index c42db51fdf..3fb0301a18 100644
--- a/troubleshoot/observability/apm-agent-go/apm-go-agent.md
+++ b/troubleshoot/observability/apm-agent-go/apm-go-agent.md
@@ -28,7 +28,7 @@ With logging enabled, use [`ELASTIC_APM_LOG_LEVEL`](https://www.elastic.co/guide
Be sure to execute a few requests to your application before posting your log files. Each request should add lines similar to these in the logs:
-```log
+```txt
{"level":"debug","time":"2020-07-23T11:46:32+08:00","message":"sent request with 100 transactions, 0 spans, 0 errors, 0 metricsets"}
```
@@ -42,4 +42,3 @@ In the unlikely event the agent causes disruptions to a production application,
If you have access to [dynamic configuration](https://www.elastic.co/guide/en/apm/agent/go/current/configuration.html#dynamic-configuration), you can disable the recording of events by setting [`ELASTIC_APM_RECORDING`](https://www.elastic.co/guide/en/apm/agent/go/current/configuration.html#config-recording) to `false`. When this setting is changed at runtime from a supported source, there’s no need to restart your application.
If that doesn’t work, or you don’t have access to dynamic configuration, you can disable the agent by setting [`ELASTIC_APM_ACTIVE`](https://www.elastic.co/guide/en/apm/agent/go/current/configuration.html#config-active) to `false`. Restart your application for the changes to apply.
-
diff --git a/troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment.md b/troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment.md
index fff4b465db..49c901a611 100644
--- a/troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment.md
+++ b/troubleshoot/observability/troubleshoot-your-universal-profiling-agent-deployment.md
@@ -13,7 +13,7 @@ You can use the Universal Profiling Agent logs to find errors.
The following is an example of *healthy* Universal Profiling Agent output:
-```logs
+```txt
time="..." level=info msg="Starting Prodfiler Host Agent v2.4.0 (revision develop-5cce978a, build timestamp 12345678910)"
time="..." level=info msg="Interpreter tracers: perl,php,python,hotspot,ruby,v8"
time="..." level=info msg="Automatically determining environment and machine ID ..."
@@ -34,7 +34,7 @@ time="..." level=info msg="Attached sched monitor"
A Universal Profiling Agent deployment is working if the output of the following command is empty:
-```logs
+```sh
head host-agent.log -n 15 | grep "level=error"
```
@@ -44,25 +44,25 @@ If running this command outputs error-level logs, the following are possible cau
If the Universal Profiling Agent is running on an unsupported kernel version, the following is logged:
- ```logs
+ ```txt
Universal Profiling Agent requires kernel version 4.19 or newer but got 3.10.0
```
If eBPF features are not available in the kernel, the Universal Profiling Agent fails to start, and one of the following is logged:
- ```logs
+ ```txt
Failed to probe eBPF syscall
```
or
- ```logs
+ ```txt
Failed to probe tracepoint
```
* The Universal Profiling Agent is not able to connect to {{ecloud}}. In this case, a message similar to the following is logged:
- ```logs
+ ```txt
Failed to setup gRPC connection (retrying...): context deadline exceeded
```
@@ -70,13 +70,13 @@ If running this command outputs error-level logs, the following are possible cau
* The secret token is not valid, or it has been changed. In this case, the Universal Profiling Agent shuts down and logs a message similar to the following:
- ```logs
+ ```txt
rpc error: code = Unauthenticated desc = authentication failed
```
* The Universal Profiling Agent is unable to send data to your deployment. In this case, a message similar to the following is logged:
- ```logs
+ ```txt
Failed to report hostinfo (retrying...): rpc error: code = Unimplemented desc = unknown service collectionagent.CollectionAgent"
```
@@ -84,7 +84,7 @@ If running this command outputs error-level logs, the following are possible cau
* The collector (part of the backend in {{ecloud}} that receives data from the Universal Profiling Agent) ran out of memory. In this case, a message similar to the following is logged:
- ```logs
+ ```txt
Error: failed to invoke XXX(): Unavailable rpc error: code = Unavailable desc = unexpected HTTP status code received from server: 502 (Bad Gateway); transport: received unexpected content-type "application/json; charset=UTF-8"
```
@@ -94,7 +94,7 @@ If running this command outputs error-level logs, the following are possible cau
* The Universal Profiling Agent is incompatible with the {{stack}} version. In this case, the following message is logged:
- ```logs
+ ```txt
rpc error: code = FailedPrecondition desc= HostAgent version is unsupported, please upgrade to the latest version
```
@@ -102,7 +102,7 @@ If running this command outputs error-level logs, the following are possible cau
* You are using a Universal Profiling Agent from a newer {{stack}} version, configured to connect to a cluster on an older {{stack}} version. In this case, the following message is logged:
- ```logs
+ ```txt
rpc error: code = FailedPrecondition desc= Backend is incompatible with HostAgent, please check your configuration
```
@@ -230,4 +230,3 @@ In the support request, specify if your issue deals with the Universal Profiling
## Send feedback [profiling-send-feedback]
If troubleshooting and support don’t resolve your issues, or you have any other feedback that you want to share about the product, send the Universal Profiling team an email at `profiling-feedback@elastic.co`.
-