diff --git a/deploy-manage/api-keys/serverless-project-api-keys.md b/deploy-manage/api-keys/serverless-project-api-keys.md
index b50e8971d6..1581f03812 100644
--- a/deploy-manage/api-keys/serverless-project-api-keys.md
+++ b/deploy-manage/api-keys/serverless-project-api-keys.md
@@ -14,7 +14,7 @@ In serverless projects, the following types of API keys exist:
- **Personal** API keys, which you can create to allow external services to access your serverless project on behalf of a user.
- **Managed** API keys, created and managed by {{kib}} to correctly run background tasks.
-You can manage your keys in **{{project-settings}} → {{manage-app}} → {{api-keys-app}}**:
+You can manage your keys in **{{project-settings}} → {{manage-app}} → API keys**:
:::{image} /deploy-manage/images/serverless-api-key-management.png
:alt: API keys UI
@@ -24,7 +24,7 @@ You can manage your keys in **{{project-settings}} → {{manage-app}} → {{api-
## Create an API key [api-keys-create-an-api-key]
-In **{{api-keys-app}}**, click **Create API key**:
+In **API keys**, click **Create API key**:
:::{image} /deploy-manage/images/serverless-create-personal-api-key.png
:alt: Create API key UI
@@ -77,11 +77,11 @@ For the `role_descriptors` object schema, check out the [`/_security/api_key` en
## Update an API key [api-keys-update-an-api-key]
-In **{{api-keys-app}}**, click on the name of the key. You can update only **Restrict privileges** and **Include metadata**.
+In **API keys**, click on the name of the key. You can update only **Restrict privileges** and **Include metadata**.
## View and delete API keys [api-keys-view-and-delete-api-keys]
-The **{{api-keys-app}}** app lists your API keys, including the name, date created, and status. When API keys expire, the status changes from `Active` to `Expired`.
+The **API keys** app lists your API keys, including the name, date created, and status. When API keys expire, the status changes from `Active` to `Expired`.
You can delete API keys individually or in bulk.
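Beyond the UI flow described here, a personal key can also be created through the `/_security/api_key` endpoint referenced earlier. A minimal sketch, in which the key name, expiration, and index pattern are illustrative placeholders:

```console
POST /_security/api_key
{
  "name": "my-external-service-key",
  "expiration": "30d",
  "role_descriptors": {
    "read-only-role": {
      "indices": [
        {
          "names": ["my-index*"],
          "privileges": ["read"]
        }
      ]
    }
  }
}
```

Omitting `role_descriptors` gives the key the full privileges of the creating user, so restricting it as shown is generally the safer default.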
diff --git a/deploy-manage/autoscaling/autoscaling-deciders.md b/deploy-manage/autoscaling/autoscaling-deciders.md
index 920898cd98..baf8f0d223 100644
--- a/deploy-manage/autoscaling/autoscaling-deciders.md
+++ b/deploy-manage/autoscaling/autoscaling-deciders.md
@@ -68,7 +68,7 @@ The estimation of expected additional data is based on past indexing that occurr
: (Optional, [time value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units)) The window of time to use for forecasting. Defaults to 30 minutes.
-### {{api-examples-title}} [autoscaling-proactive-storage-decider-examples]
+### Examples [autoscaling-proactive-storage-decider-examples]
This example puts an autoscaling policy named `my_autoscaling_policy`, overriding the proactive decider’s `forecast_window` to be 10 minutes.
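Such a policy could be sketched as the following request; the `data_hot` role is an illustrative choice for a policy governing hot-tier data nodes:

```console
PUT /_autoscaling/policy/my_autoscaling_policy
{
  "roles": ["data_hot"],
  "deciders": {
    "proactive_storage": {
      "forecast_window": "10m"
    }
  }
}
```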
@@ -143,7 +143,7 @@ Both `num_anomaly_jobs_in_queue` and `num_analytics_jobs_in_queue` are designed
: (Optional, [time value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units)) Specifies the time to delay before scaling down. Defaults to 1 hour. If a scale down is possible for the entire time window, then a scale down is requested. If the cluster requires a scale up during the window, the window is reset.
-### {{api-examples-title}} [autoscaling-machine-learning-decider-examples]
+### Examples [autoscaling-machine-learning-decider-examples]
This example creates an autoscaling policy named `my_autoscaling_policy` that overrides the default configuration of the {{ml}} decider.
@@ -198,7 +198,7 @@ The [autoscaling](../../deploy-manage/autoscaling.md) `fixed` decider responds w
: (Optional, integer) Number of nodes to use when calculating capacity. Defaults to `1`.
-### {{api-examples-title}} [autoscaling-fixed-decider-examples]
+### Examples [autoscaling-fixed-decider-examples]
This example puts an autoscaling policy named `my_autoscaling_policy`, enabling and configuring the fixed decider.
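A request of this shape might look as follows; the `ml` role and the capacity values are illustrative only:

```console
PUT /_autoscaling/policy/my_autoscaling_policy
{
  "roles": ["ml"],
  "deciders": {
    "fixed": {
      "storage": "1tb",
      "memory": "32gb",
      "nodes": 1
    }
  }
}
```

Because the `fixed` decider always responds with the configured capacity regardless of load, it is intended for testing rather than production policies.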
diff --git a/deploy-manage/deploy/self-managed/air-gapped-install.md b/deploy-manage/deploy/self-managed/air-gapped-install.md
index b1b3a6ac1f..0508f9b11a 100644
--- a/deploy-manage/deploy/self-managed/air-gapped-install.md
+++ b/deploy-manage/deploy/self-managed/air-gapped-install.md
@@ -243,7 +243,7 @@ WantedBy=default.target
## {{artifact-registry}} [air-gapped-elastic-artifact-registry]
-Air-gapped install of the {{artifact-registry}} is necessary in order to enable {{agent}} deployments to perform self-upgrades and install certain components which are needed for some of the data integrations (that is, in addition to what is also retrieved from the EPR). To learn more, refer to [Host your own artifact registry for binary downloads](/reference/fleet/air-gapped.md#host-artifact-registry) in the {{fleet}} and {{elastic-agent}} Guide.
+An air-gapped install of the {{artifact-registry}} is necessary to enable {{agent}} deployments to perform self-upgrades and to install certain components that are needed for some of the data integrations (that is, in addition to what is also retrieved from the EPR). To learn more, refer to [Host your own artifact registry for binary downloads](/reference/fleet/air-gapped.md#host-artifact-registry) in the {{fleet}} and {{agent}} Guide.
::::{note}
When setting up your own web server, such as NGINX, to function as the {{artifact-registry}}, it is recommended not to use TLS because there are currently no direct ways to establish certificate trust between {{agents}} and this service.
diff --git a/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md b/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md
index a4be49658a..c1025436de 100644
--- a/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md
+++ b/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md
@@ -41,13 +41,13 @@ Some {{stack}} features also require specific node roles:
* {{ccs-cap}} and {{ccr}} require the `remote_cluster_client` role.
* {{stack-monitor-app}} and ingest pipelines require the `ingest` role.
-* {{fleet}}, the {{security-app}}, and {{transforms}} require the `transform` role. The `remote_cluster_client` role is also required to use {{ccs}} with these features.
+* {{fleet}}, the {{security-app}}, and transforms require the `transform` role. The `remote_cluster_client` role is also required to use {{ccs}} with these features.
* {{ml-cap}} features, such as {{anomaly-detect}}, require the `ml` role.
::::
-As the cluster grows and in particular if you have large {{ml}} jobs or {{ctransforms}}, consider separating dedicated master-eligible nodes from dedicated data nodes, {{ml}} nodes, and {{transform}} nodes.
+As the cluster grows and in particular if you have large {{ml}} jobs or {{ctransforms}}, consider separating dedicated master-eligible nodes from dedicated data nodes, {{ml}} nodes, and transform nodes.
## Change the role of a node [change-node-role]
@@ -82,7 +82,7 @@ The following is a list of the roles that a node can perform in a cluster. A nod
* [Ingest node](#node-ingest-node) (`ingest`): Ingest nodes are able to apply an [ingest pipeline](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md) to a document in order to transform and enrich the document before indexing. With a heavy ingest load, it makes sense to use dedicated ingest nodes and to exclude the `ingest` role from nodes that have the `master` or `data` roles.
* [Remote-eligible node](#remote-node) (`remote_cluster_client`): A node that is eligible to act as a remote client.
* [Machine learning node](#ml-node-role) (`ml`): A node that can run {{ml-features}}. If you want to use {{ml-features}}, there must be at least one {{ml}} node in your cluster. For more information, see [Machine learning settings](../../deploy/self-managed/configure-elasticsearch.md) and [Machine learning in the {{stack}}](/explore-analyze/machine-learning.md).
-* [{{transform-cap}} node](#transform-node-role) (`transform`): A node that can perform {{transforms}}. If you want to use {{transforms}}, there must be at least one {{transform}} node in your cluster. For more information, see [{{transforms-cap}} settings](../../deploy/self-managed/configure-elasticsearch.md) and [*Transforming data*](../../../explore-analyze/transforms.md).
+* [Transform node](#transform-node-role) (`transform`): A node that can perform transforms. If you want to use transforms, there must be at least one transform node in your cluster. For more information, see [Transforms settings](../../deploy/self-managed/configure-elasticsearch.md) and [*Transforming data*](../../../explore-analyze/transforms.md).
::::{admonition} Coordinating node
:class: note
@@ -299,15 +299,15 @@ node.roles: [ ml, remote_cluster_client]
The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in {{ml}} jobs or {{dfeeds}}. If you use {{ccs}} in your {{anomaly-jobs}}, the `remote_cluster_client` role is also required on all master-eligible nodes. Otherwise, the {{dfeed}} cannot start. See [Remote-eligible node](#remote-node).
-### {{transform-cap}} node [transform-node-role]
+### Transform node [transform-node-role]
-{{transform-cap}} nodes run {{transforms}} and handle {{transform}} API requests. For more information, see [{{transforms-cap}} settings](../../deploy/self-managed/configure-elasticsearch.md).
+Transform nodes run transforms and handle transform API requests. For more information, see [Transforms settings](../../deploy/self-managed/configure-elasticsearch.md).
-To create a dedicated {{transform}} node, set:
+To create a dedicated transform node, set:
```yaml
node.roles: [ transform, remote_cluster_client ]
```
-The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in {{transforms}}. See [Remote-eligible node](#remote-node).
+The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in transforms. See [Remote-eligible node](#remote-node).
diff --git a/deploy-manage/monitor/stack-monitoring/es-local-exporter.md b/deploy-manage/monitor/stack-monitoring/es-local-exporter.md
index 250082a9ef..d83b5883e8 100644
--- a/deploy-manage/monitor/stack-monitoring/es-local-exporter.md
+++ b/deploy-manage/monitor/stack-monitoring/es-local-exporter.md
@@ -13,7 +13,7 @@ products:
:::{include} _snippets/legacy-warning.md
:::
-The `local` exporter is the default exporter in {{monitoring}}. It routes data back into the same (local) cluster. In other words, it uses the production cluster as the monitoring cluster. For example:
+The `local` exporter is the default exporter in {{monitor-features}}. It routes data back into the same (local) cluster. In other words, it uses the production cluster as the monitoring cluster. For example:
```yaml
xpack.monitoring.exporters.my_local_exporter: <1>
@@ -27,7 +27,7 @@ This exporter exists to provide a convenient option when hardware is simply not
* All indexing impacts the local cluster and the nodes that hold the monitoring indices' shards.
* Most collectors run on the elected master node. Therefore most indexing occurs with the elected master node as the coordinating node, which is a bad practice.
-* Any usage of {{monitoring}} for {{kib}} uses the local cluster’s resources for searches and aggregations, which means that they might not be available for non-monitoring tasks.
+* Any usage of {{monitor-features}} for {{kib}} uses the local cluster’s resources for searches and aggregations, which means that they might not be available for non-monitoring tasks.
* If the local cluster goes down, the monitoring cluster has inherently gone down with it (and vice versa), which generally defeats the purpose of monitoring.
For the `local` exporter, all setup occurs only on the elected master node. This means that if you do not see any monitoring templates or ingest pipelines, the elected master node is having issues or it is not configured in the same way. Unlike the `http` exporter, the `local` exporter has the advantage of accessing the monitoring cluster’s up-to-date cluster state. It can therefore always check that the templates and ingest pipelines exist without a performance penalty. If the elected master node encounters errors while trying to create the monitoring resources, it logs errors, ignores that collection, and tries again after the next collection.
diff --git a/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md b/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md
index 58335ae002..e9d76aa1d8 100644
--- a/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md
+++ b/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md
@@ -14,7 +14,7 @@ products:
:::{include} _snippets/legacy-warning.md
:::
-Collectors, as their name implies, collect things. Each collector runs once for each collection interval to obtain data from the public APIs in {{es}} and {{xpack}} that it chooses to monitor. When the data collection is finished, the data is handed in bulk to the [exporters](es-monitoring-exporters.md) to be sent to the monitoring clusters. Regardless of the number of exporters, each collector only runs once per collection interval.
+Collectors, as their name implies, collect things. Each collector runs once for each collection interval to obtain data from the public APIs in {{es}} that it chooses to monitor. When the data collection is finished, the data is handed in bulk to the [exporters](es-monitoring-exporters.md) to be sent to the monitoring clusters. Regardless of the number of exporters, each collector only runs once per collection interval.
There is only one collector per data type gathered. In other words, for any monitoring document that is created, it comes from a single collector rather than being merged from multiple collectors. The {{es}} {{monitor-features}} currently have a few collectors because the goal is to minimize overlap between them for optimal performance.
diff --git a/deploy-manage/monitor/stack-monitoring/es-pause-export.md b/deploy-manage/monitor/stack-monitoring/es-pause-export.md
index 92cabcde1b..7331d07002 100644
--- a/deploy-manage/monitor/stack-monitoring/es-pause-export.md
+++ b/deploy-manage/monitor/stack-monitoring/es-pause-export.md
@@ -14,7 +14,7 @@ products:
:::{include} _snippets/legacy-warning.md
:::
-To stop generating {{monitoring}} data in {{es}}, disable data collection:
+To stop generating monitoring data in {{es}}, disable data collection:
```yaml
xpack.monitoring.collection.enabled: false
@@ -43,7 +43,7 @@ xpack.monitoring.exporters.my_http_exporter:
1. Disable the named exporter. If the name does not match an existing exporter, this creates a new exporter that is ignored entirely. This value can be set dynamically by using cluster settings.
-::::{note}
+::::{note}
Defining a disabled exporter prevents the default exporter from being created.
::::
diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-metricbeat.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-metricbeat.md
index e91a9ce0c9..419c8e5e5c 100644
--- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-metricbeat.md
+++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-metricbeat.md
@@ -72,7 +72,7 @@ To learn about monitoring in general, refer to [](/deploy-manage/monitor/stack-m
For more information, see [Monitoring settings in {{es}}](elasticsearch://reference/elasticsearch/configuration-reference/monitoring-settings.md) and [the Cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings).
4. [Install {{metricbeat}}](beats://reference/metricbeat/metricbeat-installation-configuration.md) on the same server as {{kib}}.
-5. Enable the {{kib}} {{xpack}} module in {{metricbeat}}.
+5. Enable the {{kib}} X-Pack module in {{metricbeat}}.
For example, to enable the default configuration in the `modules.d` directory, run the following command:
@@ -82,7 +82,7 @@ To learn about monitoring in general, refer to [](/deploy-manage/monitor/stack-m
For more information, see [Specify which modules to run](beats://reference/metricbeat/configuration-metricbeat.md) and [{{kib}} module](beats://reference/metricbeat/metricbeat-module-kibana.md).
-6. Configure the {{kib}} {{xpack}} module in {{metricbeat}}.
+6. Configure the {{kib}} X-Pack module in {{metricbeat}}.
The `modules.d/kibana-xpack.yml` file contains the following settings:
diff --git a/deploy-manage/remote-clusters/remote-clusters-migrate.md b/deploy-manage/remote-clusters/remote-clusters-migrate.md
index 9753902523..01a0961f49 100644
--- a/deploy-manage/remote-clusters/remote-clusters-migrate.md
+++ b/deploy-manage/remote-clusters/remote-clusters-migrate.md
@@ -109,7 +109,7 @@ On the remote cluster:
On the local cluster, stop any persistent tasks that refer to the remote cluster:
-* Use the [Stop {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) API to stop any transforms.
+* Use the [Stop transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) API to stop any transforms.
* Use the [Close jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-close-job) API to close any anomaly detection jobs.
* Use the [Pause auto-follow pattern](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-pause-auto-follow-pattern) API to pause any auto-follow {{ccr}}.
* Use the [Pause follower](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-pause-follow) API to pause any manual {{ccr}} or existing indices that were created from the auto-follow pattern.
@@ -218,7 +218,7 @@ On the local cluster:
Resume any persistent tasks that you stopped earlier. Tasks should be restarted by the same user or API key that created the task before the migration. Ensure the roles of this user or API key have been updated with the required `remote_indices` or `remote_cluster` privileges. For users, tasks capture the caller’s credentials when started and run in that user’s security context. For API keys, restarting a task will update the task with the updated API key.
-* Use the [Start {{transform}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API to start any transforms.
+* Use the [Start transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API to start any transforms.
* Use the [Open jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-open-job) API to open any anomaly detection jobs.
* Use the [Resume follower](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-resume-follow) API to resume any manual {{ccr}} or existing indices that were created from the auto-follow pattern.
* Use the [Resume auto-follow pattern](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-resume-auto-follow-pattern) API to resume any auto-follow {{ccr}}.
diff --git a/docset.yml b/docset.yml
index b1588ec869..77f3e46c2a 100644
--- a/docset.yml
+++ b/docset.yml
@@ -129,10 +129,7 @@ subs:
project-settings: "Project settings"
manage-app: "Management"
index-manage-app: "Index Management"
- rules-app: "Rules"
saved-objects-app: "Saved Objects"
- api-keys-app: "API keys"
- connectors-app: "Connectors"
reports-app: "Reports"
app-search-crawler: "App Search web crawler"
agent: "Elastic Agent"
@@ -145,7 +142,6 @@ subs:
artifact-registry: "Elastic Artifact Registry"
aws: "AWS"
stack: "Elastic Stack"
- xpack: "X-Pack"
es: "Elasticsearch"
kib: "Kibana"
ls: "Logstash"
@@ -157,7 +153,6 @@ subs:
packetbeat: "Packetbeat"
winlogbeat: "Winlogbeat"
esql: "ES|QL"
- elastic-agent: "Elastic Agent"
k8s: "Kubernetes"
security-features: "security features"
operator-feature: "operator privileges feature"
@@ -178,7 +173,6 @@ subs:
webhook-cm: "Webhook - Case Management"
opsgenie: "Opsgenie"
hive: "TheHive"
- monitoring: "X-Pack monitoring"
monitor-features: "monitoring features"
stack-monitor-features: "Elastic Stack monitoring features"
watcher: "Watcher"
@@ -226,10 +220,6 @@ subs:
watcher-transform: "payload transform"
watcher-transforms: "payload transforms"
watcher-transforms-cap: "Payload transforms"
- transform: "transform"
- transforms: "transforms"
- transform-cap: "Transform"
- transforms-cap: "Transforms"
dfanalytics-cap: "Data frame analytics"
dfanalytics: "data frame analytics"
  dfanalytics-job: "data frame analytics job"
@@ -271,8 +261,6 @@ subs:
private-locations: "Private Locations"
esh: "ES-Hadoop"
observability: "Observability"
- api-query-parms-title: "Query parameters"
- api-examples-title: "Examples"
agent-pull: "https://github.com/elastic/elastic-agent/pull/"
es-pull: "https://github.com/elastic/elasticsearch/pull/"
kib-pull: "https://github.com/elastic/kibana/pull/"
diff --git a/explore-analyze/alerts-cases/alerts.md b/explore-analyze/alerts-cases/alerts.md
index 9c5ce6f123..711c95b8c5 100644
--- a/explore-analyze/alerts-cases/alerts.md
+++ b/explore-analyze/alerts-cases/alerts.md
@@ -14,7 +14,7 @@ products:
# Alerts
-## {{rules-app}} [rules]
+## {{rules-ui}} [rules]
In general, a rule consists of three parts:
@@ -56,7 +56,7 @@ When defining actions in a rule, you specify:
* An action frequency
* A mapping of rule values to properties exposed for that type of action
-Each action uses a connector, which provides connection information for a {{kib}} service or third party integration, depending on where you want to send the notifications. The specific list of connectors that you can use in your rule vary by project type. Refer to [{{connectors-app}}](../../deploy-manage/manage-connectors.md).
+Each action uses a connector, which provides connection information for a {{kib}} service or third-party integration, depending on where you want to send the notifications. The specific list of connectors that you can use in your rule varies by project type. Refer to [{{connectors-ui}}](../../deploy-manage/manage-connectors.md).
After you select a connector, set the *action frequency*. If you want to reduce the number of notifications you receive without affecting their timeliness, some rule types support alert summaries. For example, if you create an {{es}} query rule, you can set the action frequency such that you receive summaries of the new, ongoing, and recovered alerts on a custom interval:
diff --git a/explore-analyze/alerts-cases/alerts/rule-types.md b/explore-analyze/alerts-cases/alerts/rule-types.md
index 23d3764326..6696da2983 100644
--- a/explore-analyze/alerts-cases/alerts/rule-types.md
+++ b/explore-analyze/alerts-cases/alerts/rule-types.md
@@ -25,7 +25,7 @@ Some rule types are subscription features, while others are free features. For a
| --- | --- |
| [{{es}} query](rule-type-es-query.md) | Run a user-configured {{es}} query, compare the number of matches to a configured threshold, and schedule actions to run when the threshold condition is met. |
| [Index threshold](rule-type-index-threshold.md) | Aggregate field values from documents using {{es}} queries, compare them to threshold values, and schedule actions to run when the thresholds are met. |
-| [{{transform-cap}} rules](../../transforms/transform-alerts.md) | {applies_to}`stack: beta` {applies_to}`serverless: beta` Run scheduled checks on a {{ctransform}} to check its health. If a {{ctransform}} meets the conditions, an alert is created and the associated action is triggered. |
+| [Transform rules](../../transforms/transform-alerts.md) | {applies_to}`stack: beta` {applies_to}`serverless: beta` Run scheduled checks on a {{ctransform}} to check its health. If a {{ctransform}} meets the conditions, an alert is created and the associated action is triggered. |
| [Tracking containment](geo-alerting.md) | Run an {{es}} query to determine if any documents are currently contained in any boundaries from a specified boundary index and generate alerts when a rule’s conditions are met. |
## {{observability}} rules [observability-rules]
diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-ad-troubleshooting.md b/explore-analyze/machine-learning/anomaly-detection/ml-ad-troubleshooting.md
index f49bbd556a..72cbb43814 100644
--- a/explore-analyze/machine-learning/anomaly-detection/ml-ad-troubleshooting.md
+++ b/explore-analyze/machine-learning/anomaly-detection/ml-ad-troubleshooting.md
@@ -69,7 +69,7 @@ It’s an online model and updated continuously. Old parts of the model are prun
There is a set of benchmarks to monitor the performance of the {{anomaly-detect}} algorithms and to ensure no regression occurs as the methods are continuously developed and refined. They are called "data scenarios" and consist of 3 things:
* a dataset (stored as an {{es}} snapshot),
-* a {{ml}} config ({{anomaly-detect}}, dfanalysis, {{transform}}, or {{infer}}),
+* a {{ml}} config ({{anomaly-detect}}, dfanalysis, transform, or {{infer}}),
* an arbitrary set of static assertions (bucket counts, anomaly scores, accuracy value, and so on).
Performance metrics are collected from each and every scenario run and they are persisted in an Elastic Cloud cluster. This information is then used to track the performance over time, across the different builds, mainly to detect any regressions in the performance (both result quality and compute time).
diff --git a/explore-analyze/machine-learning/data-frame-analytics.md b/explore-analyze/machine-learning/data-frame-analytics.md
index 38a7d17dd7..2cbb90500e 100644
--- a/explore-analyze/machine-learning/data-frame-analytics.md
+++ b/explore-analyze/machine-learning/data-frame-analytics.md
@@ -13,7 +13,7 @@ products:
# Data frame analytics [ml-dfanalytics]
::::{important}
-Using {{dfanalytics}} requires source data to be structured as a two dimensional "tabular" data structure, in other words a {{dataframe}}. [{{transforms-cap}}](../transforms.md) enable you to create {{dataframes}} which can be used as the source for {{dfanalytics}}.
+Using {{dfanalytics}} requires source data to be structured as a two-dimensional "tabular" data structure, in other words, a {{dataframe}}. [Transforms](../transforms.md) enable you to create {{dataframes}} which can be used as the source for {{dfanalytics}}.
::::
{{dfanalytics-cap}} enable you to perform different analyses of your data and annotate it with the results. Consult [Setup and security](setting-up-machine-learning.md) to learn more about the license and the security privileges that are required to use {{dfanalytics}}.
diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md
index 8c86773a1c..a4ddf3c618 100644
--- a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md
+++ b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md
@@ -38,7 +38,7 @@ Before you can use the {{stack-ml-features}}, there are some configuration requi
If possible, prepare your input data such that it has fewer classes. A {{classanalysis}} with many classes takes more time to run than a binary {{classification}} job. The relationship between the number of classes and the runtime is roughly linear.
-You might also need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{classification}}.
+You might also need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{classification}}.
To learn more about how to prepare your data, refer to [the relevant section](ml-dfa-overview.md#prepare-transform-data) of the supervised learning overview.
diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md
index 475a8e58a5..c0feeabfb9 100644
--- a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md
+++ b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md
@@ -45,7 +45,7 @@ Before you can use the {{stack-ml-features}}, there are some configuration requi
## 3. Prepare and transform data [dfa-outlier-detection-prepare-data]
-{{oldetection-cap}} requires specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{oldetection}}.
+{{oldetection-cap}} requires specifically structured source data: a two-dimensional tabular data structure. For this reason, you might need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{oldetection}}.
You can find an example of how to transform your data into an entity-centric index in [this section](#weblogs-outliers).
@@ -116,17 +116,17 @@ The evaluate {{dfanalytics}} API can return the false positive rate (`fpr`) and
The goal of {{oldetection}} is to find the most unusual documents in an index. Let’s try to detect unusual behavior in the [data logs sample data set](../../index.md#gs-get-data-into-kibana).
-1. Verify that your environment is set up properly to use {{ml-features}}. If the {{es}} {{security-features}} are enabled, you need a user that has authority to create and manage {{dfanalytics}} jobs. See [Setup and security](../setting-up-machine-learning.md). Since we’ll be creating {{transforms}}, you also need `manage_data_frame_transforms` cluster privileges.
+1. Verify that your environment is set up properly to use {{ml-features}}. If the {{es}} {{security-features}} are enabled, you need a user that has authority to create and manage {{dfanalytics}} jobs. See [Setup and security](../setting-up-machine-learning.md). Since we’ll be creating transforms, you also need `manage_data_frame_transforms` cluster privileges.
-2. Create a {{transform}} that generates an entity-centric index with numeric or boolean data to analyze.
+2. Create a transform that generates an entity-centric index with numeric or boolean data to analyze.
In this example, we’ll use the web logs sample data and pivot the data such that we get a new index that contains a network usage summary for each client IP.
- In particular, create a {{transform}} that calculates the number of occasions when a specific client IP communicated with the network (`@timestamp.value_count`), the sum of the bytes that are exchanged between the network and the client’s machine (`bytes.sum`), the maximum exchanged bytes during a single occasion (`bytes.max`), and the total number of requests (`request.value_count`) initiated by a specific client IP.
- You can preview the {{transform}} before you create it. Go to the **Transforms** page in the main menu or by using the [global search field](../../find-and-organize/find-apps-and-objects.md) in {{kib}}.:
+ In particular, create a transform that calculates the number of occasions when a specific client IP communicated with the network (`@timestamp.value_count`), the sum of the bytes that are exchanged between the network and the client’s machine (`bytes.sum`), the maximum exchanged bytes during a single occasion (`bytes.max`), and the total number of requests (`request.value_count`) initiated by a specific client IP.
+   You can preview the transform before you create it. Go to the **Transforms** page in the main menu or use the [global search field](../../find-and-organize/find-apps-and-objects.md) in {{kib}}:
:::{image} /explore-analyze/images/machine-learning-logs-transform-preview.jpg
- :alt: Creating a {{transform}} in {{kib}}
+ :alt: Creating a transform in {{kib}}
:screenshot:
:::
- Alternatively, you can use the [preview {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
+ Alternatively, you can use the [preview transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
::::{dropdown} API example
@@ -218,15 +218,15 @@ POST _transform/_preview
::::
- For more details about creating {{transforms}}, see [Transforming the eCommerce sample data](../../transforms/ecommerce-transforms.md).
+ For more details about creating transforms, see [Transforming the eCommerce sample data](../../transforms/ecommerce-transforms.md).
-3. Start the {{transform}}.
+3. Start the transform.
::::{tip}
-Even though resource utilization is automatically adjusted based on the cluster load, a {{transform}} increases search and indexing load on your cluster while it runs. If you’re experiencing an excessive load, however, you can stop it.
+Even though resource utilization is automatically adjusted based on the cluster load, a transform increases search and indexing load on your cluster while it runs. If you’re experiencing an excessive load, however, you can stop it.
::::
- You can start, stop, and manage {{transforms}} in {{kib}}. Alternatively, you can use the [start {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API.
+ You can start, stop, and manage transforms in {{kib}}. Alternatively, you can use the [start transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API.
::::{dropdown} API example
@@ -352,7 +352,7 @@ GET weblog-outliers/_search?q="111.237.144.54"
Now that you’ve found unusual behavior in the sample data set, consider how you might apply these steps to other data sets. If you have data that is already marked up with true outliers, you can determine how well the {{oldetection}} algorithms perform by using the evaluate {{dfanalytics}} API. See [6. Evaluate the results](#ml-outlier-detection-evaluate).
::::{tip}
-If you do not want to keep the {{transform}} and the {{dfanalytics}} job, you can delete them in {{kib}} or use the [delete {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform) and [delete {{dfanalytics}} job API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-delete-data-frame-analytics). When you delete {{transforms}} and {{dfanalytics}} jobs in {{kib}}, you have the option to also remove the destination indices and {{data-sources}}.
+If you do not want to keep the transform and the {{dfanalytics}} job, you can delete them in {{kib}} or use the [delete transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform) and [delete {{dfanalytics}} job API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-delete-data-frame-analytics). When you delete transforms and {{dfanalytics}} jobs in {{kib}}, you have the option to also remove the destination indices and {{data-sources}}.
::::
## Further reading [outlier-detection-reading]
diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-overview.md b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-overview.md
index 54f1cddba0..f8f6577918 100644
--- a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-overview.md
+++ b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-overview.md
@@ -58,7 +58,7 @@ An important requirement is a data set that is large enough to train a model. Fo
Before you train the model, consider preprocessing the data. In practice, the type of preprocessing depends on the nature of the data set. Preprocessing can include, but is not limited to, mitigating redundancy, reducing biases, applying standards and/or conventions, data normalization, and so on.
-{{regression-cap}} and {{classification}} require specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for these types of {{dfanalytics}}.
+{{regression-cap}} and {{classification}} require specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for these types of {{dfanalytics}}.
### Train, test, iterate [train-test-iterate]
@@ -76,7 +76,7 @@ Once the model is trained, you can evaluate how well it predicts previously unse
You have trained the model and are satisfied with the performance. The last step is to deploy your trained model and start using it on new data.
-The Elastic {{ml}} feature called {{infer}} enables you to make predictions for new data either by using it as a processor in an ingest pipeline, in a continuous {{transform}} or as an aggregation at search time. When new data comes into your ingest pipeline or you run a search on your data with an {{infer}} aggregation, the model is used to infer against the data and make predictions on it.
+The Elastic {{ml}} feature called {{infer}} enables you to make predictions for new data either by using it as a processor in an ingest pipeline, in a continuous transform or as an aggregation at search time. When new data comes into your ingest pipeline or you run a search on your data with an {{infer}} aggregation, the model is used to infer against the data and make predictions on it.
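As a sketch of the ingest pipeline route, an [`inference` processor](elasticsearch://reference/enrichment-processors/inference-processor.md) applies a trained model to each document that passes through the pipeline. The pipeline name and `model_id` below are illustrative placeholders, not values from this tutorial:

```console
PUT _ingest/pipeline/flight-delay-predictions
{
  "description": "Apply a trained regression model to incoming documents",
  "processors": [
    {
      "inference": {
        "model_id": "my-trained-regression-model"
      }
    }
  ]
}
```

Documents indexed through this pipeline then carry the model's prediction alongside their original fields.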
### Next steps [next-steps]
diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md
index c30051bb85..b765dd8636 100644
--- a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md
+++ b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md
@@ -32,7 +32,7 @@ Before you can use the {{stack-ml-features}}, there are some configuration requi
{{regression-cap}} is a supervised {{ml}} method, which means you need to supply a labeled training data set. This data set must have values for the {{feature-vars}} and the {{depvar}} which are used to train the model. This information is used during training to identify relationships among the various characteristics of the data and the predicted value. This labeled data set also plays a critical role in model evaluation.
-You might also need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{regression}}.
+You might also need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{regression}}.
To learn more about how to prepare your data, refer to [the relevant section](ml-dfa-overview.md#prepare-transform-data) of the supervised learning overview.
@@ -234,16 +234,16 @@ To predict the number of minutes delayed for each flight:
3. Optionally improve the quality of the analysis by adding a query that removes erroneous data. In this case, we omit flights with a distance of 0 kilometers or less.
4. Choose `FlightDelayMin` as the {{depvar}}, which is the field that we want to predict.
5. Add `Cancelled`, `FlightDelay`, and `FlightDelayType` to the list of excluded fields. These fields will be excluded from the analysis. It is recommended to exclude fields that either contain erroneous data or describe the `dependent_variable`.
-
+
The wizard includes a scatterplot matrix, which enables you to explore the relationships between the numeric fields. The color of each point is affected by the value of the {{depvar}} for that document, as shown in the legend. You can highlight an area in one of the charts and the corresponding area is also highlighted in the rest of the chart. You can use this matrix to help you decide which fields to include or exclude from the analysis.
-
+
:::{image} /explore-analyze/images/machine-learning-flightdata-regression-scatterplot.png
:alt: A scatterplot matrix for three fields in {{kib}}
:screenshot:
:::
If you want these charts to represent data from a larger sample size or from a randomized selection of documents, you can change the default behavior. However, a larger sample size might slow down the performance of the matrix and a randomized selection might put more load on the cluster due to the more intensive query.
-
+
6. Choose a training percent of `90` which means it randomly selects 90% of the source data for training.
7. If you want to experiment with [{{feat-imp}}](ml-feature-importance.md), specify a value in the advanced configuration options. In this example, we choose to return a maximum of 5 {{feat-imp}} values per document. This option affects the speed of the analysis, so by default it is disabled.
8. Use a model memory limit of at least 50 MB. If the job requires more than this amount of memory, it fails to start. If the available memory on the node is limited, this setting makes it possible to prevent job execution.
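The wizard steps above correspond roughly to the following [create {{dfanalytics}} job API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-data-frame-analytics) request; the job ID and destination index name are illustrative:

```console
PUT _ml/data_frame/analytics/model-flight-delays
{
  "source": {
    "index": "kibana_sample_data_flights",
    "query": {
      "range": { "DistanceKilometers": { "gt": 0 } }
    }
  },
  "dest": { "index": "model-flight-delays-dest" },
  "analysis": {
    "regression": {
      "dependent_variable": "FlightDelayMin",
      "training_percent": 90,
      "num_top_feature_importance_values": 5
    }
  },
  "analyzed_fields": {
    "excludes": ["Cancelled", "FlightDelay", "FlightDelayType"]
  },
  "model_memory_limit": "100mb"
}
```

The `query` removes flights with a distance of 0 kilometers or less, and the excluded fields match those chosen in the wizard.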
diff --git a/explore-analyze/transforms.md b/explore-analyze/transforms.md
index fefad235d8..62f022ca85 100644
--- a/explore-analyze/transforms.md
+++ b/explore-analyze/transforms.md
@@ -12,17 +12,17 @@ products:
---
# Transforming data [transforms]
-{{transforms-cap}} enable you to convert existing {{es}} indices into summarized indices, which provide opportunities for new insights and analytics. For example, you can use {{transforms}} to pivot your data into entity-centric indices that summarize the behavior of users or sessions or other entities in your data. Or you can use {{transforms}} to find the latest document among all the documents that have a certain unique key.
+Transforms enable you to convert existing {{es}} indices into summarized indices, which provide opportunities for new insights and analytics. For example, you can use transforms to pivot your data into entity-centric indices that summarize the behavior of users or sessions or other entities in your data. Or you can use transforms to find the latest document among all the documents that have a certain unique key.
* [Overview](transforms/transform-overview.md)
* [Setup](transforms/transform-setup.md)
-* [When to use {{transforms}}](transforms/transform-usage.md)
-* [Generating alerts for {{transforms}}](transforms/transform-alerts.md)
-* [{{transforms-cap}} at scale](transforms/transform-scale.md)
+* [When to use transforms](transforms/transform-usage.md)
+* [Generating alerts for transforms](transforms/transform-alerts.md)
+* [Transforms at scale](transforms/transform-scale.md)
* [How checkpoints work](transforms/transform-checkpoints.md)
* [API quick reference](transforms/transform-api-quickref.md)
* [Tutorial: Transforming the eCommerce sample data](transforms/ecommerce-transforms.md)
* [Examples](transforms/transform-examples.md)
* [Painless examples](transforms/transform-painless-examples.md)
-* [Troubleshooting {{transforms}}](../troubleshoot/elasticsearch/transform-troubleshooting.md)
+* [Troubleshooting transforms](../troubleshoot/elasticsearch/transform-troubleshooting.md)
* [Limitations](transforms/transform-limitations.md)
diff --git a/explore-analyze/transforms/ecommerce-transforms.md b/explore-analyze/transforms/ecommerce-transforms.md
index 50673a0748..8f0f787ec8 100644
--- a/explore-analyze/transforms/ecommerce-transforms.md
+++ b/explore-analyze/transforms/ecommerce-transforms.md
@@ -10,29 +10,29 @@ products:
# Tutorial: Transforming the eCommerce sample data [ecommerce-transforms]
-[{{transforms-cap}}](../transforms.md) enable you to retrieve information from an {{es}} index, transform it, and store it in another index. Let’s use the [{{kib}} sample data](/explore-analyze/index.md) to demonstrate how you can pivot and summarize your data with {{transforms}}.
+[Transforms](../transforms.md) enable you to retrieve information from an {{es}} index, transform it, and store it in another index. Let’s use the [{{kib}} sample data](/explore-analyze/index.md) to demonstrate how you can pivot and summarize your data with transforms.
-1. Verify that your environment is set up properly to use {{transforms}}. If the {{es}} {{security-features}} are enabled, to complete this tutorial you need a user that has authority to preview and create {{transforms}}. You must also have specific index privileges for the source and destination indices. See [Setup](transform-setup.md).
+1. Verify that your environment is set up properly to use transforms. If the {{es}} {{security-features}} are enabled, to complete this tutorial you need a user that has authority to preview and create transforms. You must also have specific index privileges for the source and destination indices. See [Setup](transform-setup.md).
2. Choose your *source index*.
In this example, we’ll use the eCommerce orders sample data. If you’re not already familiar with the `kibana_sample_data_ecommerce` index, use the **Revenue** dashboard in {{kib}} to explore the data. Consider what insights you might want to derive from this eCommerce data.
-3. Choose the pivot type of {{transform}} and play with various options for grouping and aggregating the data.
+3. Choose the pivot type of transform and play with various options for grouping and aggregating the data.
- There are two types of {{transforms}}, but first we’ll try out *pivoting* your data, which involves using at least one field to group it and applying at least one aggregation. You can preview what the transformed data will look like, so go ahead and play with it! You can also enable histogram charts to get a better understanding of the distribution of values in your data.
+ There are two types of transforms, but first we’ll try out *pivoting* your data, which involves using at least one field to group it and applying at least one aggregation. You can preview what the transformed data will look like, so go ahead and play with it! You can also enable histogram charts to get a better understanding of the distribution of values in your data.
For example, you might want to group the data by product ID and calculate the total number of sales for each product and its average price. Alternatively, you might want to look at the behavior of individual customers and calculate how much each customer spent in total and how many different categories of products they purchased. Or you might want to take the currencies or geographies into consideration. What are the most interesting ways you can transform and interpret this data?
- Go to the **Transforms** management page in {{kib}} using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then use the wizard to create a {{transform}}:
+ Go to the **Transforms** management page in {{kib}} using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then use the wizard to create a transform:
:::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-pivot1.png
- :alt: Creating a simple {{transform}} in {{kib}}
+ :alt: Creating a simple transform in {{kib}}
:screenshot:
:::
Group the data by customer ID and add one or more aggregations to learn more about each customer’s orders. For example, let’s calculate the sum of products they purchased, the total price of their purchases, the maximum number of products that they purchased in a single order, and their total number of orders. We’ll accomplish this by using the [`sum` aggregation](elasticsearch://reference/aggregations/search-aggregations-metrics-sum-aggregation.md) on the `total_quantity` and `taxless_total_price` fields, the [`max` aggregation](elasticsearch://reference/aggregations/search-aggregations-metrics-max-aggregation.md) on the `total_quantity` field, and the [`cardinality` aggregation](elasticsearch://reference/aggregations/search-aggregations-metrics-cardinality-aggregation.md) on the `order_id` field:
:::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-pivot2.png
- :alt: Adding multiple aggregations to a {{transform}} in {{kib}}
+ :alt: Adding multiple aggregations to a transform in {{kib}}
:screenshot:
:::
@@ -40,7 +40,7 @@ products:
If you’re interested in a subset of the data, you can optionally include a [query](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) element. In this example, we’ve filtered the data so that we’re only looking at orders with a `currency` of `EUR`. Alternatively, we could group the data by that field too. If you want to use more complex queries, you can create your {{dataframe}} from a [saved search](../discover/save-open-search.md).
::::
- If you prefer, you can use the [preview {{transforms}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform).
+ If you prefer, you can use the [preview transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform).
::::{dropdown} API example
```console
@@ -92,22 +92,22 @@ products:
::::
-4. When you are satisfied with what you see in the preview, create the {{transform}}.
- 1. Supply a {{transform}} ID, the name of the destination index and optionally a description. If the destination index does not exist, it will be created automatically when you start the {{transform}}.
- 2. Decide whether you want the {{transform}} to run once or continuously. Since this sample data index is unchanging, let's use the default behavior and run the {{transform}} once. If you want to try it out, however, go ahead and click on **Continuous mode**. You must choose a field that the {{transform}} can use to check which entities have changed. In general, it's a good idea to use the ingest timestamp field. In this example, however, you can use the `order_date` field.
- 3. Optionally, you can configure a retention policy that applies to your {{transform}}. Select a date field that is used to identify old documents in the destination index and provide a maximum age. Documents that are older than the configured value are removed from the destination index.
+4. When you are satisfied with what you see in the preview, create the transform.
+ 1. Supply a transform ID, the name of the destination index, and optionally a description. If the destination index does not exist, it will be created automatically when you start the transform.
+ 2. Decide whether you want the transform to run once or continuously. Since this sample data index is unchanging, let's use the default behavior and run the transform once. If you want to try it out, however, go ahead and click on **Continuous mode**. You must choose a field that the transform can use to check which entities have changed. In general, it's a good idea to use the ingest timestamp field. In this example, however, you can use the `order_date` field.
+ 3. Optionally, you can configure a retention policy that applies to your transform. Select a date field that is used to identify old documents in the destination index and provide a maximum age. Documents that are older than the configured value are removed from the destination index.
:::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-pivot3.png
- :alt: Adding transfrom ID and retention policy to a {{transform}} in {{kib}}
+ :alt: Adding transform ID and retention policy to a transform in {{kib}}
:screenshot:
:::
- In {{kib}}, before you finish creating the {{transform}}, you can copy the preview {{transform}} API request to your clipboard. This information is useful later when you’re deciding whether you want to manually create the destination index.
+ In {{kib}}, before you finish creating the transform, you can copy the preview transform API request to your clipboard. This information is useful later when you’re deciding whether you want to manually create the destination index.
:::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-pivot4.png
:alt: Copy the Dev Console statement of the transform preview to the clipboard
:screenshot:
:::
- If you prefer, you can use the [create {{transforms}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
+ If you prefer, you can use the [create transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
::::{dropdown} API example
```console
@@ -173,9 +173,9 @@ products:
::::
5. Optional: Create the destination index.
- If the destination index does not exist, it is created the first time you start your {{transform}}. A pivot transform deduces the mappings for the destination index from the source indices and the transform aggregations. If there are fields in the destination index that are derived from scripts (for example, if you use [`scripted_metrics`](elasticsearch://reference/aggregations/search-aggregations-metrics-scripted-metric-aggregation.md) or [`bucket_scripts`](elasticsearch://reference/aggregations/search-aggregations-pipeline-bucket-script-aggregation.md) aggregations), they’re created with [dynamic mappings](../../manage-data/data-store/mapping/dynamic-mapping.md). You can use the preview {{transform}} API to preview the mappings it will use for the destination index. In {{kib}}, if you copied the API request to your clipboard, paste it into the console, then refer to the `generated_dest_index` object in the API response.
+ If the destination index does not exist, it is created the first time you start your transform. A pivot transform deduces the mappings for the destination index from the source indices and the transform aggregations. If there are fields in the destination index that are derived from scripts (for example, if you use [`scripted_metrics`](elasticsearch://reference/aggregations/search-aggregations-metrics-scripted-metric-aggregation.md) or [`bucket_scripts`](elasticsearch://reference/aggregations/search-aggregations-pipeline-bucket-script-aggregation.md) aggregations), they’re created with [dynamic mappings](../../manage-data/data-store/mapping/dynamic-mapping.md). You can use the preview transform API to preview the mappings it will use for the destination index. In {{kib}}, if you copied the API request to your clipboard, paste it into the console, then refer to the `generated_dest_index` object in the API response.
::::{note}
- {{transforms-cap}} might have more configuration options provided by the APIs than the options available in {{kib}}. For example, you can set an ingest pipeline for `dest` by calling the [Create {{transform}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform). For all the {{transform}} configuration options, refer to the [documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
+ The transform APIs provide more configuration options than are available in {{kib}}. For example, you can set an ingest pipeline for `dest` by calling the [create transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform). For all the transform configuration options, refer to the [documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
::::
::::{dropdown} API example
@@ -249,10 +249,10 @@ products:
::::
- In some instances the deduced mappings might be incompatible with the actual data. For example, numeric overflows might occur or dynamically mapped fields might contain both numbers and strings. To avoid this problem, create your destination index before you start the {{transform}}. For more information, see the [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create).
+ In some instances, the deduced mappings might be incompatible with the actual data. For example, numeric overflows might occur, or dynamically mapped fields might contain both numbers and strings. To avoid this problem, create your destination index before you start the transform. For more information, see the [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create).
::::{dropdown} API example
- You can use the information from the {{transform}} preview to create the destination index. For example:
+ You can use the information from the transform preview to create the destination index. For example:
```console
PUT /ecommerce-customers
@@ -290,20 +290,20 @@ products:
::::
-6. Start the {{transform}}.
+6. Start the transform.
::::{tip}
- Even though resource utilization is automatically adjusted based on the cluster load, a {{transform}} increases search and indexing load on your cluster while it runs. If you’re experiencing an excessive load, however, you can stop it.
+ Even though resource utilization is automatically adjusted based on the cluster load, a transform increases search and indexing load on your cluster while it runs. If you’re experiencing an excessive load, however, you can stop it.
::::
- You can start, stop, reset, and manage {{transforms}} in {{kib}}:
+ You can start, stop, reset, and manage transforms in {{kib}}:
:::{image} /explore-analyze/images/elasticsearch-reference-manage-transforms.png
- :alt: Managing {{transforms}} in {{kib}}
+ :alt: Managing transforms in {{kib}}
:screenshot:
:::
- Alternatively, you can use the [start {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform), [stop {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) and [reset {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-reset-transform) APIs.
+ Alternatively, you can use the [start transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform), [stop transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform), and [reset transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-reset-transform) APIs.
- If you reset a {{transform}}, all checkpoints, states, and the destination index (if it was created by the {{transform}}) are deleted. The {{transform}} is ready to start again as if it were newly created.
+ If you reset a transform, all checkpoints, states, and the destination index (if it was created by the transform) are deleted. The transform is ready to start again as if it were newly created.
::::{dropdown} API example
```console
@@ -313,7 +313,7 @@ products:
::::
::::{tip}
- If you chose a batch {{transform}}, it is a single operation that has a single checkpoint. You cannot restart it when it’s complete. {{ctransforms-cap}} differ in that they continually increment and process checkpoints as new source data is ingested.
+ If you chose a batch transform, it is a single operation that has a single checkpoint. You cannot restart it when it’s complete. {{ctransforms-cap}} differ in that they continually increment and process checkpoints as new source data is ingested.
::::
7. Explore the data in your new index.
@@ -323,10 +323,10 @@ products:
:screenshot:
:::
-8. Optional: Create another {{transform}}, this time using the `latest` method.
+8. Optional: Create another transform, this time using the `latest` method.
This method populates the destination index with the latest documents for each unique key value. For example, you might want to find the latest orders (sorted by the `order_date` field) for each customer or for each country and region.
:::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-latest1.png
- :alt: Creating a latest {{transform}} in {{kib}}
+ :alt: Creating a latest transform in {{kib}}
:screenshot:
:::
@@ -354,9 +354,9 @@ products:
::::
::::{tip}
- If the destination index does not exist, it is created the first time you start your {{transform}}. Unlike pivot {{transforms}}, however, latest {{transforms}} do not deduce mapping definitions when they create the index. Instead, they use dynamic mappings. To use explicit mappings, create the destination index before you start the {{transform}}.
+ If the destination index does not exist, it is created the first time you start your transform. Unlike pivot transforms, however, latest transforms do not deduce mapping definitions when they create the index. Instead, they use dynamic mappings. To use explicit mappings, create the destination index before you start the transform.
::::
-9. If you do not want to keep a {{transform}}, you can delete it in {{kib}} or use the [delete {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform). By default, when you delete a {{transform}}, its destination index and {{kib}} index patterns remain.
+9. If you do not want to keep a transform, you can delete it in {{kib}} or use the [delete transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform). By default, when you delete a transform, its destination index and {{kib}} index patterns remain.
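For example, to delete a transform and remove its destination index in the same call, you can set the `delete_dest_index` query parameter (the transform ID below is illustrative):

```console
DELETE _transform/ecommerce-customer-transform?delete_dest_index=true
```

Without the parameter, the destination index and any {{kib}} index patterns are left in place, as described above.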
-Now that you’ve created simple {{transforms}} for {{kib}} sample data, consider possible use cases for your own data. For more ideas, see [When to use {{transforms}}](transform-usage.md) and [Examples](transform-examples.md).
+Now that you’ve created simple transforms for {{kib}} sample data, consider possible use cases for your own data. For more ideas, see [When to use transforms](transform-usage.md) and [Examples](transform-examples.md).
diff --git a/explore-analyze/transforms/transform-alerts.md b/explore-analyze/transforms/transform-alerts.md
index d536617331..3b27450846 100644
--- a/explore-analyze/transforms/transform-alerts.md
+++ b/explore-analyze/transforms/transform-alerts.md
@@ -10,15 +10,15 @@ products:
# Generating alerts for transforms [transform-alerts]
-{{kib}} {{alert-features}} include support for {{transform}} health rules, which check the health of {{ctransforms}} with certain conditions. If the conditions of the rule are met, an alert is created and the associated actions run. For example, you can create a rule to check if a {{ctransform}} is started and to notify you in an email if it is not. To learn more about {{kib}} {{alert-features}}, refer to [Alerting](../alerts-cases/alerts/alerting-getting-started.md).
+{{kib}} {{alert-features}} include support for transform health rules, which check the health of {{ctransforms}} with certain conditions. If the conditions of the rule are met, an alert is created and the associated actions run. For example, you can create a rule to check if a {{ctransform}} is started and to notify you in an email if it is not. To learn more about {{kib}} {{alert-features}}, refer to [Alerting](../alerts-cases/alerts/alerting-getting-started.md).
## Creating a rule [creating-transform-rules]
-You can create {{transform}} rules under **{{stack-manage-app}} > {{rules-ui}}**.
+You can create transform rules under **{{stack-manage-app}} > {{rules-ui}}**.
-1. Click **Create rule** and select the {{transform}} health rule type.
+1. Click **Create rule** and select the transform health rule type.
2. Give a name to the rule and optionally provide tags.
-3. Select the {{transform}} or {{transforms}} to include. You can also use a special character (`*`) to apply the rule to all your {{transforms}}. {{transforms-cap}} created after the rule are automatically included.
+3. Select the transform or transforms to include. You can also use the wildcard character (`*`) to apply the rule to all your transforms. Transforms created after the rule is saved are automatically included.
:::{image} /explore-analyze/images/elasticsearch-reference-transform-check-config.png
:alt: Selecting health check
:screenshot:
@@ -26,11 +26,11 @@ You can create {{transform}} rules under **{{stack-manage-app}} > {{rules-ui}}**
4. The following health checks are available and enabled by default:
- *{{transform-cap}} is not started*
- : Get alerts when a {{transform}} is not started or is not indexing any data. The notification message recommends the necessary actions to solve the error.
+ *Transform is not started*
+ : Get alerts when a transform is not started or is not indexing any data. The notification message recommends the necessary actions to solve the error.
- *Unhealthy {{transform}}*
- : Get alerts when a {{transform}} has an unhealthy status. The notification message contains status details and related issues.
+ *Unhealthy transform*
+ : Get alerts when a transform has an unhealthy status. The notification message contains status details and related issues.
5. Set the check interval, which defines how often to evaluate the rule conditions.
6. In the advanced options, you can change the number of consecutive runs that must meet the rule conditions before an alert occurs. The default value is 1.
@@ -71,11 +71,11 @@ There is a set of variables that you can use to customize the notification messa
After you save the configurations, the rule appears in the **{{rules-ui}}** list where you can check its status and see the overview of its configuration information.
-The name of an alert is always the same as the {{transform}} ID of the associated {{transform}} that triggered it. You can mute the notifications for a particular {{transform}} on the page of the rule that lists the individual alerts. You can open it through **{{rules-ui}}** by selecting the rule name.
+The name of an alert is always the same as the transform ID of the associated transform that triggered it. You can mute the notifications for a particular transform on the page of the rule that lists the individual alerts. You can open this page through **{{rules-ui}}** by selecting the rule name.
## Action variables [transform-action-variables]
-The following variables are specific to the {{transform}} health rule type. You can also specify [variables common to all rules](../alerts-cases/alerts/rule-action-variables.md).
+The following variables are specific to the transform health rule type. You can also specify [variables common to all rules](../alerts-cases/alerts/rule-action-variables.md).
`context.message`
: A preconstructed message for the rule. For example: `Transform test-1 is not started.`
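In an action's message, these variables are referenced with Mustache syntax. A minimal sketch of a notification message that combines `context.message` with the common `rule.name` variable (the exact template depends on your connector):

```
Transform health alert for rule "{{rule.name}}": {{context.message}}
```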
diff --git a/explore-analyze/transforms/transform-api-quickref.md b/explore-analyze/transforms/transform-api-quickref.md
index dd7057fa25..6b505ffe26 100644
--- a/explore-analyze/transforms/transform-api-quickref.md
+++ b/explore-analyze/transforms/transform-api-quickref.md
@@ -9,23 +9,23 @@ products:
- id: elasticsearch
---
-# {{transforms-cap}} API quick reference [transform-api-quickref]
+# Transform API quick reference [transform-api-quickref]
-All {{transform}} endpoints have the following base:
+All transform endpoints have the following base:
```js
_transform/
```
-* [Create {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform)
-* [Delete {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform)
-* [Get {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform)
-* [Get {{transforms}} statistics](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform-stats)
-* [Preview {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform)
-* [Reset {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-reset-transform)
-* [Schedule now {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-schedule-now-transform)
-* [Start {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform)
-* [Stop {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform)
-* [Update {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-update-transform)
+* [Create transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform)
+* [Delete transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform)
+* [Get transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform)
+* [Get transforms statistics](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform-stats)
+* [Preview transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform)
+* [Reset transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-reset-transform)
+* [Schedule transforms now](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-schedule-now-transform)
+* [Start transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform)
+* [Stop transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform)
+* [Update transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-update-transform)
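For example, assuming a transform with the hypothetical ID `ecommerce-customer-sales`, a few of these endpoints combine with the base as:

```console
GET _transform/ecommerce-customer-sales

GET _transform/ecommerce-customer-sales/_stats

POST _transform/ecommerce-customer-sales/_stop
```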
-For the full list, see [*{{transform-cap}} APIs*](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
+For the full list, see [*Transform APIs*](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
diff --git a/explore-analyze/transforms/transform-checkpoints.md b/explore-analyze/transforms/transform-checkpoints.md
index d632d32883..f7751e4999 100644
--- a/explore-analyze/transforms/transform-checkpoints.md
+++ b/explore-analyze/transforms/transform-checkpoints.md
@@ -11,17 +11,17 @@ products:
# How checkpoints work [transform-checkpoints]
-Each time a {{transform}} examines the source indices and creates or updates the destination index, it generates a *checkpoint*.
+Each time a transform examines the source indices and creates or updates the destination index, it generates a *checkpoint*.
-If your {{transform}} runs only once, there is logically only one checkpoint. If your {{transform}} runs continuously, however, it creates checkpoints as it ingests and transforms new source data. The `sync` property of the {{transform}} configures checkpointing by specifying a time field.
+If your transform runs only once, there is logically only one checkpoint. If your transform runs continuously, however, it creates checkpoints as it ingests and transforms new source data. The `sync` property of the transform configures checkpointing by specifying a time field.
To create a checkpoint, the {{ctransform}}:
1. Checks for changes to source indices.
- Using a simple periodic timer, the {{transform}} checks for changes to the source indices. This check is done based on the interval defined in the transform’s `frequency` property.
+    Using a simple periodic timer, the transform checks for changes to the source indices. This check runs at the interval defined in the transform’s `frequency` property.
- If new data is ingested with a slight delay, it might not be immediately available when the {{transform}} runs. To prevent missing documents, you can use the `delay` parameter in the `sync` configuration. This shifts the search window backward, ensuring that late-arriving data is included before a checkpoint processes it. Adjusting this value based on your data ingestion patterns can help ensure completeness.
+ If new data is ingested with a slight delay, it might not be immediately available when the transform runs. To prevent missing documents, you can use the `delay` parameter in the `sync` configuration. This shifts the search window backward, ensuring that late-arriving data is included before a checkpoint processes it. Adjusting this value based on your data ingestion patterns can help ensure completeness.
If the source indices remain unchanged or if a checkpoint is already in progress then it waits for the next timer.
@@ -29,21 +29,21 @@ To create a checkpoint, the {{ctransform}}:
2. Identifies which entities or time buckets have changed.
- The {{transform}} searches to see which entities or time buckets have changed between the last and the new checkpoint. The {{transform}} uses the values to synchronize the source and destination indices with fewer operations than a full re-run.
+ The transform searches to see which entities or time buckets have changed between the last and the new checkpoint. The transform uses the values to synchronize the source and destination indices with fewer operations than a full re-run.
3. Updates the destination index (the {{dataframe}}) with the changes.
- The {{transform}} applies changes related to either new or changed entities or time buckets to the destination index. The set of changes can be paginated. The {{transform}} performs a composite aggregation similarly to the batch {{transform}} operation, however it also injects query filters based on the previous step to reduce the amount of work. After all changes have been applied, the checkpoint is complete.
+    The transform applies changes related to either new or changed entities or time buckets to the destination index. The set of changes can be paginated. The transform performs a composite aggregation similarly to the batch transform operation; however, it also injects query filters based on the previous step to reduce the amount of work. After all changes have been applied, the checkpoint is complete.
-This checkpoint process involves both search and indexing activity on the cluster. We have attempted to favor control over performance while developing {{transforms}}. We decided it was preferable for the {{transform}} to take longer to complete, rather than to finish quickly and take precedence in resource consumption. That being said, the cluster still requires enough resources to support both the composite aggregation search and the indexing of its results.
+This checkpoint process involves both search and indexing activity on the cluster. We have attempted to favor control over performance while developing transforms. We decided it was preferable for the transform to take longer to complete, rather than to finish quickly and take precedence in resource consumption. That being said, the cluster still requires enough resources to support both the composite aggregation search and the indexing of its results.
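The `frequency` and `delay` settings described above live in the transform configuration. A minimal sketch of a continuous transform that checkpoints on `event.ingested` (the index names, field names, and values are hypothetical):

```console
PUT _transform/web-log-summary
{
  "source": { "index": "web-logs" },
  "dest": { "index": "web-log-summary" },
  "frequency": "1m",
  "sync": {
    "time": {
      "field": "event.ingested",
      "delay": "60s"
    }
  },
  "pivot": {
    "group_by": {
      "clientip": { "terms": { "field": "clientip" } }
    },
    "aggregations": {
      "bytes.sum": { "sum": { "field": "bytes" } }
    }
  }
}
```

With `"frequency": "1m"`, the transform checks for source changes every minute; with `"delay": "60s"`, each checkpoint's search window is shifted back one minute to allow late-arriving data to be indexed first.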
::::{tip}
-If the cluster experiences unsuitable performance degradation due to the {{transform}}, stop the {{transform}} and refer to [Performance considerations](transform-overview.md#transform-performance).
+If the cluster experiences unsuitable performance degradation due to the transform, stop the transform and refer to [Performance considerations](transform-overview.md#transform-performance).
::::
-## Using the ingest timestamp for syncing the {{transform}} [sync-field-ingest-timestamp]
+## Using the ingest timestamp for syncing the transform [sync-field-ingest-timestamp]
-In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](ecs://reference/index.md), you might already have an [`event.ingested`](ecs://reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}.
+In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the transform. This is the optimal way for transforms to identify new changes. If your data source follows the [ECS standard](ecs://reference/index.md), you might already have an [`event.ingested`](ecs://reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your transform.
If you don't have an `event.ingested` field or it isn't populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or through {{kib}}'s **Ingest Pipelines** management page. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.
@@ -62,13 +62,13 @@ PUT _ingest/pipeline/set_ingest_time
}
```
-After you created the ingest pipeline, apply it to the source indices of your {{transform}}. The pipeline adds the field `event.ingested` to every document with the value of the ingest timestamp. Configure the `sync`.`time`.`field` property of your {{transform}} to use the field by using the [create {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform) for new {{transforms}} or the [update {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-update-transform) for existing {{transforms}}. The `event.ingested` field is used for syncing the {{transform}}.
+After you create the ingest pipeline, apply it to the source indices of your transform. The pipeline adds the field `event.ingested` to every document with the value of the ingest timestamp. Configure the `sync`.`time`.`field` property of your transform to use the field by using the [create transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform) for new transforms or the [update transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-update-transform) for existing transforms. The `event.ingested` field is used for syncing the transform.
Refer to [Add a pipeline to an indexing request](../../manage-data/ingest/transform-enrich/ingest-pipelines.md#add-pipeline-to-indexing-request) and [Ingest pipelines](../../manage-data/ingest/transform-enrich/ingest-pipelines.md) to learn more about how to use an ingest pipeline.
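Putting the steps above together, a sketch with a hypothetical index and transform ID: make the pipeline the index default so every new document gets `event.ingested`, then point the transform's `sync`.`time`.`field` at it:

```console
PUT my-source-index/_settings
{
  "index.default_pipeline": "set_ingest_time"
}

POST _transform/my-transform/_update
{
  "sync": {
    "time": {
      "field": "event.ingested",
      "delay": "120s"
    }
  }
}
```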
## Change detection heuristics [ml-transform-checkpoint-heuristics]
-When the {{transform}} runs in continuous mode, it updates the documents in the destination index as new data comes in. The {{transform}} uses a set of heuristics called change detection to update the destination index with fewer operations.
+When the transform runs in continuous mode, it updates the documents in the destination index as new data comes in. The transform uses a set of heuristics called change detection to update the destination index with fewer operations.
In this example, the data is grouped by host names. Change detection detects which host names have changed, for example, hosts `A`, `C`, and `G`, and only updates documents for those hosts; it does not update documents that store information about host `B`, `D`, or any other host that has not changed.
@@ -76,12 +76,12 @@ Another heuristic can be applied for time buckets when a `date_histogram` is use
## Error handling [ml-transform-checkpoint-errors]
-Failures in {{transforms}} tend to be related to searching or indexing. To increase the resiliency of {{transforms}}, the cursor positions of the aggregated search and the changed entities search are tracked in memory and persisted periodically.
+Failures in transforms tend to be related to searching or indexing. To increase the resiliency of transforms, the cursor positions of the aggregated search and the changed entities search are tracked in memory and persisted periodically.
Checkpoint failures can be categorized as follows:
-* Temporary failures: The checkpoint is retried. If 10 consecutive failures occur, the {{transform}} has a failed status. For example, this situation might occur when there are shard failures and queries return only partial results.
-* Irrecoverable failures: The {{transform}} immediately fails. For example, this situation occurs when the source index is not found.
-* Adjustment failures: The {{transform}} retries with adjusted settings. For example, if a parent circuit breaker memory errors occur during the composite aggregation, the {{transform}} receives partial results. The aggregated search is retried with a smaller number of buckets. This retry is performed at the interval defined in the `frequency` property for the {{transform}}. If the search is retried to the point where it reaches a minimal number of buckets, an irrecoverable failure occurs.
+* Temporary failures: The checkpoint is retried. If 10 consecutive failures occur, the transform has a failed status. For example, this situation might occur when there are shard failures and queries return only partial results.
+* Irrecoverable failures: The transform immediately fails. For example, this situation occurs when the source index is not found.
+* Adjustment failures: The transform retries with adjusted settings. For example, if parent circuit breaker memory errors occur during the composite aggregation, the transform receives partial results. The aggregated search is retried with a smaller number of buckets. This retry is performed at the interval defined in the `frequency` property for the transform. If the search is retried to the point where it reaches a minimal number of buckets, an irrecoverable failure occurs.
-If the node running the {{transforms}} fails, the {{transform}} restarts from the most recent persisted cursor position. This recovery process might repeat some of the work the {{transform}} had already done, but it ensures data consistency.
+If the node running a transform fails, the transform restarts from the most recent persisted cursor position. This recovery process might repeat some of the work the transform had already done, but it ensures data consistency.
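To see which category a failure falls into, you can inspect the transform's state and failure reason (the transform ID is hypothetical):

```console
GET _transform/my-transform/_stats
```

For a failed transform, the response includes the transform's `state` (for example, `failed`) together with a `reason` describing the failure.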
diff --git a/explore-analyze/transforms/transform-examples.md b/explore-analyze/transforms/transform-examples.md
index 3059e610db..3ab53c8f5e 100644
--- a/explore-analyze/transforms/transform-examples.md
+++ b/explore-analyze/transforms/transform-examples.md
@@ -9,9 +9,9 @@ products:
- id: elasticsearch
---
-# {{transforms-cap}} examples [transform-examples]
+# Transform examples [transform-examples]
-These examples demonstrate how to use {{transforms}} to derive useful insights from your data. All the examples use one of the [{{kib}} sample datasets](/explore-analyze/index.md). For a more detailed, step-by-step example, see [Tutorial: Transforming the eCommerce sample data](ecommerce-transforms.md).
+These examples demonstrate how to use transforms to derive useful insights from your data. All the examples use one of the [{{kib}} sample datasets](/explore-analyze/index.md). For a more detailed, step-by-step example, see [Tutorial: Transforming the eCommerce sample data](ecommerce-transforms.md).
* [Finding your best customers](#example-best-customers)
* [Finding air carriers with the most delays](#example-airline)
@@ -22,14 +22,14 @@ These examples demonstrate how to use {{transforms}} to derive useful insights f
## Finding your best customers [example-best-customers]
-This example uses the eCommerce orders sample data set to find the customers who spent the most in a hypothetical webshop. Let’s use the `pivot` type of {{transform}} such that the destination index contains the number of orders, the total price of the orders, the amount of unique products and the average price per order, and the total amount of ordered products for each customer.
+This example uses the eCommerce orders sample data set to find the customers who spent the most in a hypothetical webshop. Let’s use the `pivot` type of transform such that the destination index contains the number of orders, the total price of the orders, the number of unique products, the average price per order, and the total number of ordered products for each customer.
:::{image} /explore-analyze/images/elasticsearch-reference-transform-ex1-1.jpg
-:alt: Finding your best customers with {{transforms}} in {{kib}}
+:alt: Finding your best customers with transforms in {{kib}}
:screenshot:
:::
-Alternatively, you can use the [preview {{transform}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
+Alternatively, you can use the [preview transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
:::::{dropdown} API example
```console
@@ -57,14 +57,14 @@ POST _transform/_preview
}
```
-1. The destination index for the {{transform}}. It is ignored by `_preview`.
-2. Two `group_by` fields is selected. This means the {{transform}} contains a unique row per `user` and `customer_id` combination. Within this data set, both these fields are unique. By including both in the {{transform}}, it gives more context to the final results.
+1. The destination index for the transform. It is ignored by `_preview`.
+2. Two `group_by` fields are selected. This means the transform contains a unique row per `user` and `customer_id` combination. Within this data set, both these fields are unique. Including both in the transform gives more context to the final results.
::::{note}
In the example above, condensed JSON formatting is used for easier readability of the pivot object.
::::
-The preview {{transforms}} API enables you to see the layout of the {{transform}} in advance, populated with some sample values. For example:
+The preview transforms API enables you to see the layout of the transform in advance, populated with some sample values. For example:
```js
{
@@ -85,14 +85,14 @@ The preview {{transforms}} API enables you to see the layout of the {{transform}
:::::
-This {{transform}} makes it easier to answer questions such as:
+This transform makes it easier to answer questions such as:
* Which customers spend the most?
* Which customers spend the most per order?
* Which customers order most often?
* Which customers ordered the least number of different products?
-It’s possible to answer these questions using aggregations alone, however {{transforms}} allow us to persist this data as a customer centric index. This enables us to analyze data at scale and gives more flexibility to explore and navigate data from a customer centric perspective. In some cases, it can even make creating visualizations much simpler.
+It’s possible to answer these questions using aggregations alone; however, transforms allow us to persist this data as a customer-centric index. This enables us to analyze data at scale and gives more flexibility to explore and navigate data from a customer-centric perspective. In some cases, it can even make creating visualizations much simpler.
## Finding air carriers with the most delays [example-airline]
@@ -137,7 +137,7 @@ POST _transform/_preview
```
1. Filter the source data to select only flights that are not cancelled.
-2. The destination index for the {{transform}}. It is ignored by `_preview`.
+2. The destination index for the transform. It is ignored by `_preview`.
3. The data is grouped by the `Carrier` field which contains the airline name.
4. This `bucket_script` performs calculations on the results that are returned by the aggregation. In this particular example, it calculates what percentage of travel time was taken up by delays.
@@ -158,7 +158,7 @@ The preview shows you that the new index would contain data like this for each c
}
```
-This {{transform}} makes it easier to answer questions such as:
+This transform makes it easier to answer questions such as:
* Which air carrier has the most delays as a percentage of flight time?
@@ -224,13 +224,13 @@ PUT _transform/suspicious_client_ips
}
```
-1. The destination index for the {{transform}}.
-2. Configures the {{transform}} to run continuously. It uses the `timestamp` field to synchronize the source and destination indices. The worst case ingestion delay is 60 seconds.
+1. The destination index for the transform.
+2. Configures the transform to run continuously. It uses the `timestamp` field to synchronize the source and destination indices. The worst-case ingestion delay is 60 seconds.
3. The data is grouped by the `clientip` field.
4. Filter aggregation that counts the occurrences of successful (`200`) responses in the `response` field. The following two aggregations (`error404` and `error5xx`) count the error responses by error codes, matching an exact value or a range of response codes.
5. This `bucket_script` calculates the duration of the `clientip` access based on the results of the aggregation.
-After you create the {{transform}}, you must start it:
+After you create the transform, you must start it:
```console
POST _transform/suspicious_client_ips/_start
@@ -279,7 +279,7 @@ The search result shows you data like this for each client IP:
Like other Kibana sample data sets, the web log sample dataset contains timestamps relative to when you installed it, including timestamps in the future. The {{ctransform}} will pick up the data points once they are in the past. If you installed the web log sample dataset some time ago, you can uninstall and reinstall it and the timestamps will change.
::::
-This {{transform}} makes it easier to answer questions such as:
+This transform makes it easier to answer questions such as:
* Which client IPs are transferring the most amounts of data?
* Which client IPs are interacting with a high number of different URLs?
@@ -288,23 +288,23 @@ This {{transform}} makes it easier to answer questions such as:
## Finding the last log event for each IP address [example-last-log]
-This example uses the web log sample data set to find the last log from an IP address. Let’s use the `latest` type of {{transform}} in continuous mode. It copies the most recent document for each unique key from the source index to the destination index and updates the destination index as new data comes into the source index.
+This example uses the web log sample data set to find the last log from an IP address. Let’s use the `latest` type of transform in continuous mode. It copies the most recent document for each unique key from the source index to the destination index and updates the destination index as new data comes into the source index.
Pick the `clientip` field as the unique key; the data is grouped by this field. Select `timestamp` as the date field that sorts the data chronologically. For continuous mode, specify a date field that is used to identify new documents, and an interval between checks for changes in the source index.
:::{image} /explore-analyze/images/elasticsearch-reference-transform-ex4-1.jpg
-:alt: Finding the last log event for each IP address with {{transforms}} in {{kib}}
+:alt: Finding the last log event for each IP address with transforms in {{kib}}
:screenshot:
:::
Let’s assume that we’re interested in retaining documents only for IP addresses that appeared recently in the log. You can define a retention policy and specify a date field that is used to calculate the age of a document. This example uses the same date field that is used to sort the data. Then set the maximum age of a document; documents that are older than the value you set will be removed from the destination index.
:::{image} /explore-analyze/images/elasticsearch-reference-transform-ex4-2.jpg
-:alt: Defining retention policy for {{transforms}} in {{kib}}
+:alt: Defining retention policy for transforms in {{kib}}
:screenshot:
:::
-This {{transform}} creates the destination index that contains the latest login date for each client IP. As the {{transform}} runs in continuous mode, the destination index will be updated as new data that comes into the source index. Finally, every document that is older than 30 days will be removed from the destination index due to the applied retention policy.
+This transform creates the destination index that contains the latest login date for each client IP. As the transform runs in continuous mode, the destination index will be updated as new data comes into the source index. Finally, every document that is older than 30 days will be removed from the destination index due to the applied retention policy.
::::{dropdown} API example
```console
@@ -345,11 +345,11 @@ PUT _transform/last-log-from-clientip
1. Specifies the field for grouping the data.
2. Specifies the date field that is used for sorting the data.
-3. Sets the interval for the {{transform}} to check for changes in the source index.
+3. Sets the interval for the transform to check for changes in the source index.
4. Contains the time field and delay settings used to synchronize the source and destination indices.
5. Specifies the retention policy for the transform. Documents that are older than the configured value will be removed from the destination index.
-After you create the {{transform}}, start it:
+After you create the transform, start it:
```console
POST _transform/last-log-from-clientip/_start
@@ -357,7 +357,7 @@ POST _transform/last-log-from-clientip/_start
::::
-After the {{transform}} processes the data, search the destination index:
+After the transform processes the data, search the destination index:
```console
GET last-log-from-clientip/_search
@@ -411,15 +411,15 @@ The search result shows you data like this for each client IP:
}
```
-This {{transform}} makes it easier to answer questions such as:
+This transform makes it easier to answer questions such as:
* What was the most recent log event associated with a specific IP address?
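For instance, to retrieve the most recent log event for one specific address from the destination index created above (the IP value is illustrative):

```console
GET last-log-from-clientip/_search
{
  "query": {
    "term": {
      "clientip": "30.156.16.164"
    }
  }
}
```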
## Finding client IPs that sent the most bytes to the server [example-bytes]
-This example uses the web log sample data set to find the client IP that sent the most bytes to the server in every hour. The example uses a `pivot` {{transform}} with a [`top_metrics`](elasticsearch://reference/aggregations/search-aggregations-metrics-top-metrics.md) aggregation.
+This example uses the web log sample data set to find the client IP that sent the most bytes to the server in every hour. The example uses a `pivot` transform with a [`top_metrics`](elasticsearch://reference/aggregations/search-aggregations-metrics-top-metrics.md) aggregation.
-Group the data by a [date histogram](elasticsearch://reference/aggregations/search-aggregations-bucket-composite-aggregation.md#_date_histogram) on the time field with an interval of one hour. Use a [max aggregation](elasticsearch://reference/aggregations/search-aggregations-metrics-max-aggregation.md) on the `bytes` field to get the maximum amount of data that is sent to the server. Without the `max` aggregation, the API call still returns the client IP that sent the most bytes, however, the amount of bytes that it sent is not returned. In the `top_metrics` property, specify `clientip` and `geo.src`, then sort them by the `bytes` field in descending order. The {{transform}} returns the client IP that sent the biggest amount of data and the 2-letter ISO code of the corresponding location.
+Group the data by a [date histogram](elasticsearch://reference/aggregations/search-aggregations-bucket-composite-aggregation.md#_date_histogram) on the time field with an interval of one hour. Use a [max aggregation](elasticsearch://reference/aggregations/search-aggregations-metrics-max-aggregation.md) on the `bytes` field to get the maximum amount of data that is sent to the server. Without the `max` aggregation, the API call still returns the client IP that sent the most bytes; however, the number of bytes it sent is not returned. In the `top_metrics` property, specify `clientip` and `geo.src`, then sort them by the `bytes` field in descending order. The transform returns the client IP that sent the largest amount of data and the 2-letter ISO code of the corresponding location.
```console
POST _transform/_preview
diff --git a/explore-analyze/transforms/transform-limitations.md b/explore-analyze/transforms/transform-limitations.md
index e0bbad2090..f9f396cc6d 100644
--- a/explore-analyze/transforms/transform-limitations.md
+++ b/explore-analyze/transforms/transform-limitations.md
@@ -9,97 +9,97 @@ products:
- id: elasticsearch
---
-# {{transforms-cap}} limitations [transform-limitations]
+# Transforms limitations [transform-limitations]
-The following limitations and known problems apply to the Elastic {{transform}} feature. The limitations are grouped into the following categories:
+The following limitations and known problems apply to the Elastic transform feature. The limitations are grouped into the following categories:
-* [Configuration limitations](#transform-config-limitations) apply to the configuration process of the {{transforms}}.
-* [Operational limitations](#transform-operational-limitations) affect the behavior of the {{transforms}} that are running.
-* [Limitations in {{kib}}](#transform-ui-limitations) only apply to {{transforms}} managed through the user interface.
+* [Configuration limitations](#transform-config-limitations) apply to the configuration process of transforms.
+* [Operational limitations](#transform-operational-limitations) affect the behavior of running transforms.
+* [Limitations in {{kib}}](#transform-ui-limitations) only apply to transforms managed through the user interface.
## Configuration limitations [transform-config-limitations]
-### Field names prefixed with underscores are omitted from latest {{transforms}} [transforms-underscore-limitation]
+### Field names prefixed with underscores are omitted from latest transforms [transforms-underscore-limitation]
-If you use the `latest` type of {{transform}} and the source index has field names that start with an underscore (_) character, they are assumed to be internal fields. Those fields are omitted from the documents in the destination index.
+If you use the `latest` type of transform and the source index has field names that start with an underscore (_) character, they are assumed to be internal fields. Those fields are omitted from the documents in the destination index.
-### {{transforms-cap}} support {{ccs}} if the remote cluster is configured properly [transforms-ccs-limitation]
+### Transforms support {{ccs}} if the remote cluster is configured properly [transforms-ccs-limitation]
-If you use [{{ccs}}](../../solutions/search/cross-cluster-search.md), the remote cluster must support the search and aggregations you use in your {{transforms}}. {{transforms-cap}} validate their configuration; if you use {{ccs}} and the validation fails, make sure that the remote cluster supports the query and aggregations you use.
+If you use [{{ccs}}](../../solutions/search/cross-cluster-search.md), the remote cluster must support the search and aggregations you use in your transforms. Transforms validate their configuration; if you use {{ccs}} and the validation fails, make sure that the remote cluster supports the query and aggregations you use.
-### Using scripts in {{transforms}} [transform-painless-limitation]
+### Using scripts in transforms [transform-painless-limitation]
-{{transforms-cap}} support scripting in every case when aggregations support them. However, there are certain factors you might want to consider when using scripts in {{transforms}}:
+Transforms support scripting wherever aggregations support it. However, there are certain factors you might want to consider when using scripts in transforms:
-* {{transforms-cap}} cannot deduce index mappings for output fields when the fields are created by a script. In this case, you might want to create the mappings of the destination index yourself prior to creating the transform.
-* Scripted fields may increase the runtime of the {{transform}}.
-* {{transforms-cap}} cannot optimize queries when you use scripts for all the groupings defined in `group_by`, you will receive a warning message when you use scripts this way.
+* Transforms cannot deduce index mappings for output fields when the fields are created by a script. In this case, you might want to create the mappings of the destination index yourself prior to creating the transform.
+* Scripted fields may increase the runtime of the transform.
+* Transforms cannot optimize queries when you use scripts for all the groupings defined in `group_by`; you will receive a warning message when you use scripts this way.
-### Deprecation warnings for Painless scripts in {{transforms}} [transform-painless-warning-limitation]
+### Deprecation warnings for Painless scripts in transforms [transform-painless-warning-limitation]
-If a {{transform}} contains Painless scripts that use deprecated syntax, deprecation warnings are displayed when the {{transform}} is previewed or started. However, it is not possible to check for deprecation warnings across all {{transforms}} as a bulk action because running the required queries might be a resource intensive process. Therefore any deprecation warnings due to deprecated Painless syntax are not available in the Upgrade Assistant.
+If a transform contains Painless scripts that use deprecated syntax, deprecation warnings are displayed when the transform is previewed or started. However, it is not possible to check for deprecation warnings across all transforms as a bulk action because running the required queries might be a resource-intensive process. Therefore, any deprecation warnings due to deprecated Painless syntax are not available in the Upgrade Assistant.
-### {{transforms-cap}} perform better on indexed fields [transform-runtime-field-limitation]
+### Transforms perform better on indexed fields [transform-runtime-field-limitation]
-{{transforms-cap}} sort data by a user-defined time field, which is frequently accessed. If the time field is a [runtime field](../../manage-data/data-store/mapping/runtime-fields.md), the performance impact of calculating field values at query time can significantly slow the {{transform}}. Use an indexed field as a time field when using {{transforms}}.
+Transforms sort data by a user-defined time field, which is frequently accessed. If the time field is a [runtime field](../../manage-data/data-store/mapping/runtime-fields.md), the performance impact of calculating field values at query time can significantly slow the transform. Use an indexed field as a time field when using transforms.
### {{ctransform-cap}} scheduling limitations [transform-scheduling-limitations]
-A {{ctransform}} periodically checks for changes to source data. The functionality of the scheduler is currently limited to a basic periodic timer which can be within the `frequency` range from 1s to 1h. The default is 1m. This is designed to run little and often. When choosing a `frequency` for this timer consider your ingest rate along with the impact that the {{transform}} search/index operations has other users in your cluster. Also, retries occur at `frequency` interval.
+A {{ctransform}} periodically checks for changes to source data. The functionality of the scheduler is currently limited to a basic periodic timer whose `frequency` can range from 1s to 1h. The default is 1m. This is designed to run little and often. When choosing a `frequency` for this timer, consider your ingest rate along with the impact that the transform search/index operations have on other users in your cluster. Also, retries occur at the `frequency` interval.
## Operational limitations [transform-operational-limitations]
### Aggregation responses may be incompatible with destination index mappings [transform-aggresponse-limitations]
-When a pivot {{transform}} is first started, it will deduce the mappings required for the destination index. This process is based on the field types of the source index and the aggregations used. If the fields are derived from [`scripted_metrics`](elasticsearch://reference/aggregations/search-aggregations-metrics-scripted-metric-aggregation.md) or [`bucket_scripts`](elasticsearch://reference/aggregations/search-aggregations-pipeline-bucket-script-aggregation.md), [dynamic mappings](../../manage-data/data-store/mapping/dynamic-mapping.md) will be used. In some instances the deduced mappings may be incompatible with the actual data. For example, numeric overflows might occur or dynamically mapped fields might contain both numbers and strings. Check {{es}} logs if you think this may have occurred.
+When a pivot transform is first started, it will deduce the mappings required for the destination index. This process is based on the field types of the source index and the aggregations used. If the fields are derived from [`scripted_metrics`](elasticsearch://reference/aggregations/search-aggregations-metrics-scripted-metric-aggregation.md) or [`bucket_scripts`](elasticsearch://reference/aggregations/search-aggregations-pipeline-bucket-script-aggregation.md), [dynamic mappings](../../manage-data/data-store/mapping/dynamic-mapping.md) will be used. In some instances the deduced mappings may be incompatible with the actual data. For example, numeric overflows might occur or dynamically mapped fields might contain both numbers and strings. Check {{es}} logs if you think this may have occurred.
You can view the deduced mappings by using the [preview transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform). See the `generated_dest_index` object in the API response.
-If it’s required, you may define custom mappings prior to starting the {{transform}} by creating a custom destination index using the [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create). As deduced mappings cannot be overwritten by an index template, use the create index API to define custom mappings. The index templates only apply to fields derived from scripts that use dynamic mappings.
+If required, you can define custom mappings before starting the transform by creating a custom destination index using the [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create). Because deduced mappings cannot be overwritten by an index template, use the create index API to define custom mappings. Index templates only apply to fields derived from scripts that use dynamic mappings.
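For example, a minimal sketch of creating a custom destination index with explicit mappings before starting the transform (the index and field names here are hypothetical):

```console
PUT my-transform-dest
{
  "mappings": {
    "properties": {
      "max_bytes": { "type": "long" },
      "clientip":  { "type": "ip" }
    }
  }
}
```

When the transform starts, it writes into this index as-is instead of deducing mappings for these fields.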
-### Batch {{transforms}} may not account for changed documents [transform-batch-limitations]
+### Batch transforms may not account for changed documents [transform-batch-limitations]
-A batch {{transform}} uses a [composite aggregation](elasticsearch://reference/aggregations/search-aggregations-bucket-composite-aggregation.md) which allows efficient pagination through all buckets. Composite aggregations do not yet support a search context, therefore if the source data is changed (deleted, updated, added) while the batch {{dataframe}} is in progress, then the results may not include these changes.
+A batch transform uses a [composite aggregation](elasticsearch://reference/aggregations/search-aggregations-bucket-composite-aggregation.md) which allows efficient pagination through all buckets. Composite aggregations do not yet support a search context; therefore, if the source data is changed (deleted, updated, added) while the batch {{dataframe}} is in progress, the results may not include these changes.
### {{ctransform-cap}} consistency does not account for deleted or updated documents [transform-consistency-limitations]
-While the process for {{transforms}} allows the continual recalculation of the {{transform}} as new data is being ingested, it does also have some limitations.
+While the process for transforms allows the continual recalculation of the transform as new data is being ingested, it also has some limitations.
Changed entities will only be identified if their time field has also been updated and falls within the range of the action to check for changes. This has been designed in principle for, and is suited to, the use case where new data is given a timestamp for the time of ingest.
If the indices that fall within the scope of the source index pattern are removed, for example when deleting historical time-based indices, then the composite aggregation performed in consecutive checkpoint processing will search over different source data, and entities that only existed in the deleted index will not be removed from the {{dataframe}} destination index.
-Depending on your use case, you may wish to recreate the {{transform}} entirely after deletions. Alternatively, if your use case is tolerant to historical archiving, you may wish to include a max ingest timestamp in your aggregation. This will allow you to exclude results that have not been recently updated when viewing the destination index.
+Depending on your use case, you may wish to recreate the transform entirely after deletions. Alternatively, if your use case is tolerant of historical archiving, you may wish to include a max ingest timestamp in your aggregation. This allows you to exclude results that have not been recently updated when viewing the destination index.
-### Deleting a {{transform}} does not delete the destination index or {{kib}} index pattern [transform-deletion-limitations]
+### Deleting a transform does not delete the destination index or {{kib}} index pattern [transform-deletion-limitations]
-When deleting a {{transform}} using `DELETE _transform/index` neither the destination index nor the {{kib}} index pattern, should one have been created, are deleted. These objects must be deleted separately.
+When you delete a transform using `DELETE _transform/index`, neither the destination index nor the {{kib}} index pattern (should one have been created) is deleted. These objects must be deleted separately.
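For example, after deleting a transform you would remove the destination index yourself (the transform ID and index name here are hypothetical):

```console
DELETE _transform/my-transform

DELETE my-transform-dest
```

Any {{kib}} index pattern created for the destination index must also be deleted separately in {{kib}}.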
### Handling dynamic adjustment of aggregation page size [transform-aggregation-page-limitations]
-During the development of {{transforms}}, control was favoured over performance. In the design considerations, it is preferred for the {{transform}} to take longer to complete quietly in the background rather than to finish quickly and take precedence in resource consumption.
+During the development of transforms, control was favored over performance. By design, it is preferable for a transform to take longer to complete quietly in the background rather than to finish quickly and take precedence in resource consumption.
-Composite aggregations are well suited for high cardinality data enabling pagination through results. If a [circuit breaker](elasticsearch://reference/elasticsearch/configuration-reference/circuit-breaker-settings.md) memory exception occurs when performing the composite aggregated search then we try again reducing the number of buckets requested. This circuit breaker is calculated based upon all activity within the cluster, not just activity from {{transforms}}, so it therefore may only be a temporary resource availability issue.
+Composite aggregations are well suited for high cardinality data, enabling pagination through results. If a [circuit breaker](elasticsearch://reference/elasticsearch/configuration-reference/circuit-breaker-settings.md) memory exception occurs when performing the composite aggregation search, the search is retried with a reduced number of requested buckets. The circuit breaker is calculated based on all activity within the cluster, not just activity from transforms, so the failure may be only a temporary resource availability issue.
-For a batch {{transform}}, the number of buckets requested is only ever adjusted downwards. The lowering of value may result in a longer duration for the {{transform}} checkpoint to complete. For {{ctransforms}}, the number of buckets requested is reset back to its default at the start of every checkpoint and it is possible for circuit breaker exceptions to occur repeatedly in the {{es}} logs.
+For a batch transform, the number of buckets requested is only ever adjusted downwards. Lowering the value may result in a longer duration for the transform checkpoint to complete. For {{ctransforms}}, the number of buckets requested is reset back to its default at the start of every checkpoint, and it is possible for circuit breaker exceptions to occur repeatedly in the {{es}} logs.
-The {{transform}} retrieves data in batches which means it calculates several buckets at once. Per default this is 500 buckets per search/index operation. The default can be changed using `max_page_search_size` and the minimum value is 10. If failures still occur once the number of buckets requested has been reduced to its minimum, then the {{transform}} will be set to a failed state.
+The transform retrieves data in batches, which means it calculates several buckets at once. By default, this is 500 buckets per search/index operation. You can change the default using `max_page_search_size`; the minimum value is 10. If failures still occur once the number of buckets requested has been reduced to its minimum, the transform is set to a failed state.
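As a sketch, `max_page_search_size` can be set under `settings` when creating or updating a transform (the transform ID and value here are illustrative):

```console
POST _transform/my-transform/_update
{
  "settings": {
    "max_page_search_size": 200
  }
}
```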
### Handling dynamic adjustments for many terms [transform-dynamic-adjustments-limitations]
-For each checkpoint, entities are identified that have changed since the last time the check was performed. This list of changed entities is supplied as a [terms query](elasticsearch://reference/query-languages/query-dsl/query-dsl-terms-query.md) to the {{transform}} composite aggregation, one page at a time. Then updates are applied to the destination index for each page of entities.
+For each checkpoint, entities are identified that have changed since the last time the check was performed. This list of changed entities is supplied as a [terms query](elasticsearch://reference/query-languages/query-dsl/query-dsl-terms-query.md) to the transform composite aggregation, one page at a time. Then updates are applied to the destination index for each page of entities.
The page `size` is defined by `max_page_search_size` which is also used to define the number of buckets returned by the composite aggregation search. The default value is 500, the minimum is 10.
-The index setting [`index.max_terms_count`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-settings) defines the maximum number of terms that can be used in a terms query. The default value is 65536. If `max_page_search_size` exceeds `index.max_terms_count` the {{transform}} will fail.
+The index setting [`index.max_terms_count`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-settings) defines the maximum number of terms that can be used in a terms query. The default value is 65536. If `max_page_search_size` exceeds `index.max_terms_count` the transform will fail.
-Using smaller values for `max_page_search_size` may result in a longer duration for the {{transform}} checkpoint to complete.
+Using smaller values for `max_page_search_size` may result in a longer duration for the transform checkpoint to complete.
-### Handling of failed {{transforms}} [transform-failed-limitations]
+### Handling of failed transforms [transform-failed-limitations]
-Failed {{transforms}} remain as a persistent task and should be handled appropriately, either by deleting it or by resolving the root cause of the failure and re-starting.
+A failed transform remains as a persistent task and should be handled appropriately, either by deleting it or by resolving the root cause of the failure and restarting it.
-When using the API to delete a failed {{transform}}, first stop it using `_stop?force=true`, then delete it.
+When using the API to delete a failed transform, first stop it using `_stop?force=true`, then delete it.
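For example, assuming a failed transform named `my-failed-transform`:

```console
POST _transform/my-failed-transform/_stop?force=true

DELETE _transform/my-failed-transform
```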
### {{ctransforms-cap}} may give incorrect results if documents are not yet available to search [transform-availability-limitations]
@@ -111,39 +111,39 @@ If using a `sync.time.field` that represents the data ingest time and using a ze
### Support for date nanoseconds data type [transform-date-nanos]
-If your data uses the [date nanosecond data type](elasticsearch://reference/elasticsearch/mapping-reference/date_nanos.md), aggregations are nonetheless on millisecond resolution. This limitation also affects the aggregations in your {{transforms}}.
+If your data uses the [date nanosecond data type](elasticsearch://reference/elasticsearch/mapping-reference/date_nanos.md), aggregations are nonetheless on millisecond resolution. This limitation also affects the aggregations in your transforms.
### Data streams as destination indices are not supported [transform-data-streams-destination]
-{{transforms-cap}} update data in the destination index which requires writing into the destination. [Data streams](../../manage-data/data-store/data-streams.md) are designed to be append-only, which means you cannot send update or delete requests directly to a data stream. For this reason, data streams are not supported as destination indices for {{transforms}}.
+Transforms update data in the destination index which requires writing into the destination. [Data streams](../../manage-data/data-store/data-streams.md) are designed to be append-only, which means you cannot send update or delete requests directly to a data stream. For this reason, data streams are not supported as destination indices for transforms.
### ILM as destination index may cause duplicated documents [transform-ilm-destination]
-[ILM](../../manage-data/lifecycle/index-lifecycle-management.md) is not recommended to use as a {{transform}} destination index. {{transforms-cap}} update documents in the current destination, and cannot delete documents in the indices previously used by ILM. This may lead to duplicated documents when you use {{transforms}} combined with ILM in case of a rollover.
+Using [ILM](../../manage-data/lifecycle/index-lifecycle-management.md) for a transform destination index is not recommended. Transforms update documents in the current destination index and cannot delete documents in indices previously used by ILM. This may lead to duplicated documents when you combine transforms with ILM and a rollover occurs.
-If you use ILM to have time-based indices, consider using the [Date index name](elasticsearch://reference/enrich-processor/date-index-name-processor.md) instead. The processor works without duplicated documents if your {{transform}} contains a `group_by` based on `date_histogram`.
+If you use ILM to create time-based indices, consider using the [date index name processor](elasticsearch://reference/enrich-processor/date-index-name-processor.md) instead. The processor works without creating duplicated documents if your transform contains a `group_by` based on `date_histogram`.
### SLO transforms at scale may misreport state and increase PIT activity
-When running a large number of SLO {{transforms}}, two types of limitations can appear:
+When running a large number of SLO transforms, two types of limitations can appear:
-- {{transforms-cap}} may [misreport](#transforms-inaccurate-errors) their state and suggest deletion even though they are still valid
+- Transforms may [misreport](#transforms-inaccurate-errors) their state and suggest deletion even though they are still valid
- Point-in-time (PIT) requests may [accumulate](#transforms-pit-overloads), creating extra activity on the cluster and slowing down operations
-#### {{transforms-cap}} can return inaccurate errors that suggest deletion [transforms-inaccurate-errors]
+#### Transforms can return inaccurate errors that suggest deletion [transforms-inaccurate-errors]
-The {{transforms-cap}} API and the {{transforms-cap}} management page in {{kib}} may display misleading error messages for {{transforms}} created by service level objectives (SLOs).
+The Transforms API and the Transforms management page in {{kib}} may display misleading error messages for transforms created by service level objectives (SLOs).
The message typically reads:
```console
Found task for transform [...], but no configuration for it. To delete this transform use DELETE with force=true.
```
-Although the error suggests that the {{transform}} is broken and should be deleted, this is usually not the case. The error occurs because of a limitation in how the {{transforms}} are queried and presented by the API and UI. Deleting the {{transform}} can break related SLOs.
+Although the error suggests that the transform is broken and should be deleted, this is usually not the case. The error occurs because of a limitation in how the transforms are queried and presented by the API and UI. Deleting the transform can break related SLOs.
-If you encounter this error, do not delete the {{transform}}. If a {{transform}} was already deleted, you can restore it by resetting the SLO, either in the {{kib}} or with the API:
+If you encounter this error, do not delete the transform. If a transform was already deleted, you can restore it by resetting the SLO, either in {{kib}} or with the API:
-- In {{kib}}, go to the **Observability** > **SLOs application**, open the menu for the affected SLO, and click **Reset**.
+- In {{kib}}, go to the **Observability** > **SLOs** application, open the menu for the affected SLO, and click **Reset**.
- If you use the API, run the following request:
@@ -151,15 +151,15 @@ If you encounter this error, do not delete the {{transform}}. If a {{transform}}
POST kbn:/api/observability/slos//_reset
```
-#### Large numbers of {{transform}} can cause PIT overloads [transforms-pit-overloads]
+#### Large numbers of transforms can cause PIT overloads [transforms-pit-overloads]
-{{transforms-cap}} rely on point-in-time (PIT) searches to ensure that queries remain consistent during data changes. Each {{transform}} can open and close multiple PITs during its lifetime.
+Transforms rely on point-in-time (PIT) searches to ensure that queries remain consistent during data changes. Each transform can open and close multiple PITs during its lifetime.
-When many {{transforms}} run concurrently, especially in environments with large numbers of SLOs (hundreds to more than a thousand transforms), PITs can be opened and closed in quick succession. Because PITs are closed asynchronously, the close operation does not wait for the previous request to complete. This can create a backlog of PIT close requests, known as a PIT overload.
+When many transforms run concurrently, especially in environments with large numbers of SLOs (hundreds to more than a thousand transforms), PITs can be opened and closed in quick succession. Because PITs are closed asynchronously, the close operation does not wait for the previous request to complete. This can create a backlog of PIT close requests, known as a PIT overload.
A PIT overload increases the memory pressure on hot nodes. This may result in rising memory usage and long garbage collection cycles. In practice, searches and writes may slow down or queue until the node recovers, and in severe cases a restart may be required.
-As a workaround, you can disable PIT for a specific {{transform}}:
+As a workaround, you can disable PIT for a specific transform:
```console
POST _transform//_update
@@ -174,14 +174,14 @@ To fix this issue permanently, upgrade to the following versions or later: 8.18.
## Limitations in {{kib}} [transform-ui-limitations]
-### {{transforms-cap}} are visible in all {{kib}} spaces [transform-space-limitations]
+### Transforms are visible in all {{kib}} spaces [transform-space-limitations]
-[Spaces](../../deploy-manage/manage-spaces.md) enable you to organize your source and destination indices and other saved objects in {{kib}} and to see only the objects that belong to your space. However, a {{transform}} is a long running task which is managed on cluster level and therefore not limited in scope to certain spaces. Space awareness can be implemented for a {{data-source}} under **Stack Management > Kibana** which allows privileges to the {{transform}} destination index.
+[Spaces](../../deploy-manage/manage-spaces.md) enable you to organize your source and destination indices and other saved objects in {{kib}} and to see only the objects that belong to your space. However, a transform is a long-running task that is managed at the cluster level and is therefore not limited in scope to certain spaces. Space awareness can be implemented for a {{data-source}} under **Stack Management > Kibana**, which allows privileges on the transform destination index.
-### Up to 1,000 {{transforms}} are listed in {{kib}} [transform-kibana-limitations]
+### Up to 1,000 transforms are listed in {{kib}} [transform-kibana-limitations]
-The {{transforms}} management page in {{kib}} lists up to 1000 {{transforms}}.
+The transforms management page in {{kib}} lists up to 1,000 transforms.
-### {{kib}} might not support every {{transform}} configuration option [transform-ui-support]
+### {{kib}} might not support every transform configuration option [transform-ui-support]
-There might be configuration options available through the {{transform}} APIs that are not supported in {{kib}}. For an exhaustive list of configuration options, refer to the [documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
+There might be configuration options available through the transform APIs that are not supported in {{kib}}. For an exhaustive list of configuration options, refer to the [documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
diff --git a/explore-analyze/transforms/transform-overview.md b/explore-analyze/transforms/transform-overview.md
index 1908c41283..3b653c73ac 100644
--- a/explore-analyze/transforms/transform-overview.md
+++ b/explore-analyze/transforms/transform-overview.md
@@ -9,64 +9,64 @@ products:
- id: elasticsearch
---
-# {{transforms-cap}} overview [transform-overview]
+# Transforms overview [transform-overview]
You can choose either of the following methods to transform your data: [pivot](#pivot-transform-overview) or [latest](#latest-transform-overview).
::::{important}
-* All {{transforms}} leave your source index intact. They create a new index that is dedicated to the transformed data.
-* {{transforms-cap}} might have more configuration options provided by the APIs than the options available in {{kib}}. For all the {{transform}} configuration options, refer to the [API documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
+* All transforms leave your source index intact. They create a new index that is dedicated to the transformed data.
+* Transforms might have more configuration options provided by the APIs than the options available in {{kib}}. For all the transform configuration options, refer to the [API documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
::::
-{{transforms-cap}} are persistent tasks; they are stored in cluster state which makes them resilient for node failures. Refer to [How checkpoints work](transform-checkpoints.md) and [Error handling](transform-checkpoints.md#ml-transform-checkpoint-errors) to learn more about the machinery behind {{transforms}}.
+Transforms are persistent tasks; they are stored in the cluster state, which makes them resilient to node failures. Refer to [How checkpoints work](transform-checkpoints.md) and [Error handling](transform-checkpoints.md#ml-transform-checkpoint-errors) to learn more about the machinery behind transforms.
-## Pivot {{transforms}} [pivot-transform-overview]
+## Pivot transforms [pivot-transform-overview]
-You can use {{transforms}} to *pivot* your data into a new entity-centric index. By transforming and summarizing your data, it becomes possible to visualize and analyze it in alternative and interesting ways.
+You can use transforms to *pivot* your data into a new entity-centric index. By transforming and summarizing your data, it becomes possible to visualize and analyze it in alternative and interesting ways.
-A lot of {{es}} indices are organized as a stream of events: each event is an individual document, for example a single item purchase. {{transforms-cap}} enable you to summarize this data, bringing it into an organized, more analysis-friendly format. For example, you can summarize all the purchases of a single customer.
+A lot of {{es}} indices are organized as a stream of events: each event is an individual document, for example a single item purchase. Transforms enable you to summarize this data, bringing it into an organized, more analysis-friendly format. For example, you can summarize all the purchases of a single customer.
-{{transforms-cap}} enable you to define a pivot, which is a set of features that transform the index into a different, more digestible format. Pivoting results in a summary of your data in a new index.
+Transforms enable you to define a pivot, which is a set of features that transform the index into a different, more digestible format. Pivoting results in a summary of your data in a new index.
To define a pivot, first you select one or more fields that you will use to group your data. You can select categorical fields (terms) and numerical fields for grouping. If you use numerical fields, the field values are bucketed using an interval that you specify.
-The second step is deciding how you want to aggregate the grouped data. When using aggregations, you practically ask questions about the index. There are different types of aggregations, each with its own purpose and output. To learn more about the supported aggregations and group-by fields, see [Create {{transform}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
+The second step is deciding how you want to aggregate the grouped data. When using aggregations, you are essentially asking questions about the index. There are different types of aggregations, each with its own purpose and output. To learn more about the supported aggregations and group-by fields, see [Create transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
As an optional step, you can also add a query to further limit the scope of the aggregation.
-The {{transform}} performs a composite aggregation that paginates through all the data defined by the source index query. The output of the aggregation is stored in a *destination index*. Each time the {{transform}} queries the source index, it creates a *checkpoint*. You can decide whether you want the {{transform}} to run once or continuously. A *batch {{transform}}* is a single operation that has a single checkpoint. *{{ctransforms-cap}}* continually increment and process checkpoints as new source data is ingested.
+The transform performs a composite aggregation that paginates through all the data defined by the source index query. The output of the aggregation is stored in a *destination index*. Each time the transform queries the source index, it creates a *checkpoint*. You can decide whether you want the transform to run once or continuously. A *batch transform* is a single operation that has a single checkpoint. *{{ctransforms-cap}}* continually increment and process checkpoints as new source data is ingested.
Imagine that you run a webshop that sells clothes. Every order creates a document that contains a unique order ID, the name and the category of the ordered product, its price, the ordered quantity, the exact date of the order, and some customer information (name, gender, location, and so on). Your data set contains all the transactions from last year.
-If you want to check the sales in the different categories in your last fiscal year, define a {{transform}} that groups the data by the product categories (women's shoes, men's clothing, and so on) and the order date. Use the last year as the interval for the order date. Then add a sum aggregation on the ordered quantity. The result is an entity-centric index that shows the number of sold items in every product category in the last year.
+If you want to check the sales in the different categories in your last fiscal year, define a transform that groups the data by the product categories (women's shoes, men's clothing, and so on) and the order date. Use the last year as the interval for the order date. Then add a sum aggregation on the ordered quantity. The result is an entity-centric index that shows the number of sold items in every product category in the last year.
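Continuing the webshop example, such a pivot could be sketched as follows. This is an illustrative configuration only; the transform ID, index names, and field names are hypothetical, not taken from an actual data set:

```js
PUT _transform/ecommerce_yearly_category_sales
{
  "source": { "index": "webshop-orders" },
  "pivot": {
    "group_by": {
      "category": { "terms": { "field": "category.keyword" } },
      "order_year": {
        "date_histogram": { "field": "order_date", "calendar_interval": "1y" }
      }
    },
    "aggregations": {
      "total_quantity": { "sum": { "field": "quantity" } }
    }
  },
  "dest": { "index": "yearly-category-sales" }
}
```

Each document in the destination index then represents one product category per year, with the summed order quantity.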
:::{image} /explore-analyze/images/elasticsearch-reference-pivot-preview.png
-:alt: Example of a pivot {{transform}} preview in {{kib}}
+:alt: Example of a pivot transform preview in {{kib}}
:screenshot:
:::
-## Latest {{transforms}} [latest-transform-overview]
+## Latest transforms [latest-transform-overview]
-You can use the `latest` type of {{transform}} to copy the most recent documents into a new index. You must identify one or more fields as the unique key for grouping your data, as well as a date field that sorts the data chronologically. For example, you can use this type of {{transform}} to keep track of the latest purchase for each customer or the latest event for each host.
+You can use the `latest` type of transform to copy the most recent documents into a new index. You must identify one or more fields as the unique key for grouping your data, as well as a date field that sorts the data chronologically. For example, you can use this type of transform to keep track of the latest purchase for each customer or the latest event for each host.
:::{image} /explore-analyze/images/elasticsearch-reference-latest-preview.png
-:alt: Example of a latest {{transform}} preview in {{kib}}
+:alt: Example of a latest transform preview in {{kib}}
:screenshot:
:::
-As in the case of a pivot, a latest {{transform}} can run once or continuously. It performs a composite aggregation on the data in the source index and stores the output in the destination index. If the {{transform}} runs continuously, new unique key values are automatically added to the destination index and the most recent documents for existing key values are automatically updated at each checkpoint.
+As in the case of a pivot, a latest transform can run once or continuously. It performs a composite aggregation on the data in the source index and stores the output in the destination index. If the transform runs continuously, new unique key values are automatically added to the destination index and the most recent documents for existing key values are automatically updated at each checkpoint.
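As a sketch, a latest transform that keeps track of the most recent purchase for each customer might look like the following. The transform ID, index names, and field names are hypothetical:

```js
PUT _transform/latest_purchase_per_customer
{
  "source": { "index": "purchases" },
  "latest": {
    "unique_key": ["customer_id"],
    "sort": "@timestamp"
  },
  "dest": { "index": "latest-purchases" }
}
```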
## Performance considerations [transform-performance]
-{{transforms-cap}} perform search aggregations on the source indices then index the results into the destination index. Therefore, a {{transform}} never takes less time or uses less resources than the aggregation and indexing processes.
+Transforms perform search aggregations on the source indices and then index the results into the destination index. Therefore, a transform never takes less time or uses fewer resources than the aggregation and indexing processes.
-If your {{transform}} must process a lot of historic data, it has high resource usage initially—particularly during the first checkpoint.
+If your transform must process a lot of historical data, its resource usage is initially high, particularly during the first checkpoint.
-For better performance, make sure that your search aggregations and queries are optimized and that your {{transform}} is processing only necessary data. Consider whether you can apply a source query to the {{transform}} to reduce the scope of data it processes. Also consider whether the cluster has sufficient resources in place to support both the composite aggregation search and the indexing of its results.
+For better performance, make sure that your search aggregations and queries are optimized and that your transform is processing only necessary data. Consider whether you can apply a source query to the transform to reduce the scope of data it processes. Also consider whether the cluster has sufficient resources in place to support both the composite aggregation search and the indexing of its results.
-If you prefer to spread out the impact on your cluster (at the cost of a slower {{transform}}), you can throttle the rate at which it performs search and index requests. Set the `docs_per_second` limit when you [create](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform) or [update](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-update-transform) your {{transform}}. If you want to calculate the current rate, use the following information from the [get {{transform}} stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform-stats):
+If you prefer to spread out the impact on your cluster (at the cost of a slower transform), you can throttle the rate at which it performs search and index requests. Set the `docs_per_second` limit when you [create](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform) or [update](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-update-transform) your transform. If you want to calculate the current rate, use the following information from the [get transform stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform-stats):
```
documents_processed / search_time_in_ms * 1000
diff --git a/explore-analyze/transforms/transform-painless-examples.md b/explore-analyze/transforms/transform-painless-examples.md
index d36ada8ff3..7319bf02f3 100644
--- a/explore-analyze/transforms/transform-painless-examples.md
+++ b/explore-analyze/transforms/transform-painless-examples.md
@@ -15,7 +15,7 @@ products:
The examples that use the `scripted_metric` aggregation are not supported on {{es}} Serverless.
::::
-These examples demonstrate how to use Painless in {{transforms}}. You can learn more about the Painless scripting language in the [Painless guide](elasticsearch://reference/scripting-languages/painless/painless.md).
+These examples demonstrate how to use Painless in transforms. You can learn more about the Painless scripting language in the [Painless guide](elasticsearch://reference/scripting-languages/painless/painless.md).
* [Getting top hits by using scripted metric aggregation](#painless-top-hits)
* [Getting time features by using aggregations](#painless-time-features)
@@ -26,14 +26,14 @@ These examples demonstrate how to use Painless in {{transforms}}. You can learn
::::{note}
-* While the context of the following examples is the {{transform}} use case, the Painless scripts in the snippets below can be used in other {{es}} search aggregations, too.
-* All the following examples use scripts, {{transforms}} cannot deduce mappings of output fields when the fields are created by a script. {{transforms-cap}} don’t create any mappings in the destination index for these fields, which means they get dynamically mapped. Create the destination index prior to starting the {{transform}} in case you want explicit mappings.
+* While the context of the following examples is the transform use case, the Painless scripts in the snippets below can be used in other {{es}} search aggregations, too.
+* All the following examples use scripts. When fields are created by a script, transforms cannot deduce their mappings and don’t create any mappings in the destination index for these fields, which means they get dynamically mapped. Create the destination index prior to starting the transform if you want explicit mappings.
::::
## Getting top hits by using scripted metric aggregation [painless-top-hits]
-This snippet shows how to find the latest document, in other words the document with the latest timestamp. From a technical perspective, it helps to achieve the function of a [Top hits](elasticsearch://reference/aggregations/search-aggregations-metrics-top-hits-aggregation.md) by using scripted metric aggregation in a {{transform}}, which provides a metric output.
+This snippet shows how to find the latest document, in other words the document with the latest timestamp. From a technical perspective, it replicates the function of a [Top hits](elasticsearch://reference/aggregations/search-aggregations-metrics-top-hits-aggregation.md) aggregation by using a scripted metric aggregation in a transform, which provides a metric output.
::::{important}
This example uses a `scripted_metric` aggregation which is not supported on {{es}} Serverless.
@@ -178,7 +178,7 @@ You can also use the power of [stored scripts](https://www.elastic.co/docs/api/d
## Getting time features by using aggregations [painless-time-features]
-This snippet shows how to extract time based features by using Painless in a {{transform}}. The snippet uses an index where `@timestamp` is defined as a `date` type field.
+This snippet shows how to extract time-based features by using Painless in a transform. The snippet uses an index where `@timestamp` is defined as a `date` type field.
```js
"aggregations": {
@@ -264,7 +264,7 @@ PUT _transform/data_log
## Counting HTTP responses by using scripted metric aggregation [painless-count-http]
-You can count the different HTTP response types in a web log data set by using scripted metric aggregation as part of the {{transform}}. You can achieve a similar function with filter aggregations, check the [Finding suspicious client IPs](transform-examples.md#example-clientips) example for details.
+You can count the different HTTP response types in a web log data set by using a scripted metric aggregation as part of the transform. You can achieve a similar function with filter aggregations; check the [Finding suspicious client IPs](transform-examples.md#example-clientips) example for details.
The example below assumes that the HTTP response codes are stored as keywords in the `response` field of the documents.
@@ -303,7 +303,7 @@ This example uses a `scripted_metric` aggregation which is not supported on {{es
}
```
-1. The `aggregations` object of the {{transform}} that contains all aggregations.
+1. The `aggregations` object of the transform that contains all aggregations.
2. Object of the `scripted_metric` aggregation.
3. This `scripted_metric` performs a distributed operation on the web log data to count specific types of HTTP responses (error, success, and other).
4. The `init_script` creates a `responses` array in the `state` object with three properties (`error`, `success`, `other`) with long data type.
@@ -313,7 +313,7 @@ This example uses a `scripted_metric` aggregation which is not supported on {{es
## Comparing indices by using scripted metric aggregations [painless-compare]
-This example shows how to compare the content of two indices by a {{transform}} that uses a scripted metric aggregation.
+This example shows how to compare the content of two indices by a transform that uses a scripted metric aggregation.
::::{important}
This example uses a `scripted_metric` aggregation which is not supported on {{es}} Serverless.
diff --git a/explore-analyze/transforms/transform-scale.md b/explore-analyze/transforms/transform-scale.md
index e4a789854f..2aa4be65e3 100644
--- a/explore-analyze/transforms/transform-scale.md
+++ b/explore-analyze/transforms/transform-scale.md
@@ -1,5 +1,5 @@
---
-navigation_title: "{{transforms-cap}} at scale"
+navigation_title: "Transforms at scale"
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/transform-scale.html
applies_to:
@@ -11,27 +11,27 @@ products:
# Transforms at scale [transform-scale]
-{{transforms-cap}} convert existing {{es}} indices into summarized indices, which provide opportunities for new insights and analytics. The search and index operations performed by {{transforms}} use standard {{es}} features so similar considerations for working with {{es}} at scale are often applicable to {{transforms}}. If you experience performance issues, start by identifying the bottleneck areas (search, indexing, processing, or storage) then review the relevant considerations in this guide to improve performance. It also helps to understand how {{transforms}} work as different considerations apply depending on whether or not your transform is running in continuous mode or in batch.
+Transforms convert existing {{es}} indices into summarized indices, which provide opportunities for new insights and analytics. The search and index operations performed by transforms use standard {{es}} features, so similar considerations for working with {{es}} at scale often apply to transforms. If you experience performance issues, start by identifying the bottleneck areas (search, indexing, processing, or storage), then review the relevant considerations in this guide to improve performance. It also helps to understand how transforms work, as different considerations apply depending on whether your transform runs in continuous mode or in batch mode.
In this guide, you’ll learn how to:
-* Understand the impact of configuration options on the performance of {{transforms}}.
+* Understand the impact of configuration options on the performance of transforms.
**Prerequisites:**
-These guildelines assume you have a {{transform}} you want to tune, and you’re already familiar with:
+These guidelines assume you have a transform you want to tune, and you’re already familiar with:
-* [How {{transforms}} work](transform-overview.md).
-* [How to set up {{transforms}}](transform-setup.md).
-* [How {{transform}} checkpoints work in continuous mode](transform-checkpoints.md).
+* [How transforms work](transform-overview.md).
+* [How to set up transforms](transform-setup.md).
+* [How transform checkpoints work in continuous mode](transform-checkpoints.md).
-The following considerations are not sequential – the numbers help to navigate between the list items; you can take action on one or more of them in any order. Most of the recommendations apply to both continuous and batch {{transforms}}. If a list item only applies to one {{transform}} type, this exception is highlighted in the description.
+The following considerations are not sequential – the numbers help to navigate between the list items; you can take action on one or more of them in any order. Most of the recommendations apply to both continuous and batch transforms. If a list item only applies to one transform type, this exception is highlighted in the description.
The keywords in parentheses at the end of each recommendation title indicate the bottleneck area that may be improved by following the given recommendation.
-## Measure {{transforms}} performance [measure-performance]
+## Measure transforms performance [measure-performance]
-In order to optimize {{transform}} performance, start by identifying the areas where most work is being done. The **Stats** interface of the **{{transforms-cap}}** page in {{kib}} contains information that covers three main areas: indexing, searching, and processing time (alternatively, you can use the [{{transforms}} stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform-stats)). If, for example, the results show that the highest proportion of time is spent on search, then prioritize efforts on optimizing the search query of the {{transform}}. {{transforms-cap}} also has [Rally support](https://esrally.readthedocs.io) that makes it possible to run performance checks on {{transforms}} configurations if it is required. If you optimized the crucial factors and you still experience performance issues, you may also want to consider improving your hardware.
+In order to optimize transform performance, start by identifying the areas where most work is being done. The **Stats** interface of the **Transforms** page in {{kib}} contains information that covers three main areas: indexing, searching, and processing time (alternatively, you can use the [transforms stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform-stats)). If, for example, the results show that the highest proportion of time is spent on search, then prioritize efforts on optimizing the search query of the transform. Transforms also have [Rally support](https://esrally.readthedocs.io) that makes it possible to run performance checks on transform configurations if required. If you have optimized the crucial factors and still experience performance issues, you may also want to consider improving your hardware.
## 1. Optimize `frequency` (index) [frequency]
@@ -39,17 +39,17 @@ In a {{ctransform}}, the `frequency` configuration option sets the interval betw
## 2. Increase the number of shards of the destination index (index) [increase-shards-dest-index]
-Depending on the size of the destination index, you may consider increasing its shard count. {{transforms-cap}} use one shard by default when creating the destination index. To override the index settings, create the destination index before starting the {{transform}}. For more information about how the number of shards affects scalability and resilience, refer to [Get ready for production](../../deploy-manage/index.md)
+Depending on the size of the destination index, you may consider increasing its shard count. Transforms use one shard by default when creating the destination index. To override the index settings, create the destination index before starting the transform. For more information about how the number of shards affects scalability and resilience, refer to [Get ready for production](../../deploy-manage/index.md).
::::{tip}
-Use the [Preview {{transform}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) to check the settings that the {{transform}} would use to create the destination index. You can copy and adjust these in order to create the destination index prior to starting the {{transform}}.
+Use the [Preview transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) to check the settings that the transform would use to create the destination index. You can copy and adjust these in order to create the destination index prior to starting the transform.
::::
## 3. Profile and optimize your search queries (search) [search-queries]
-If you have defined a {{transform}} source index `query`, ensure it is as efficient as possible. Use the **Search Profiler** under **Dev Tools** in {{kib}} to get detailed timing information about the execution of individual components in the search request. Alternatively, you can use the [Profile](elasticsearch://reference/elasticsearch/rest-apis/search-profile.md). The results give you insight into how search requests are executed at a low level so that you can understand why certain requests are slow, and take steps to improve them.
+If you have defined a transform source index `query`, ensure it is as efficient as possible. Use the **Search Profiler** under **Dev Tools** in {{kib}} to get detailed timing information about the execution of individual components in the search request. Alternatively, you can use the [Profile API](elasticsearch://reference/elasticsearch/rest-apis/search-profile.md). The results give you insight into how search requests are executed at a low level so that you can understand why certain requests are slow, and take steps to improve them.
-{{transforms-cap}} execute standard {{es}} search requests. There are different ways to write {{es}} queries, and some of them are more efficient than others. Consult [*Tune for search speed*](../../deploy-manage/production-guidance/optimize-performance/search-speed.md) to learn more about {{es}} performance tuning.
+Transforms execute standard {{es}} search requests. There are different ways to write {{es}} queries, and some of them are more efficient than others. Consult [*Tune for search speed*](../../deploy-manage/production-guidance/optimize-performance/search-speed.md) to learn more about {{es}} performance tuning.
## 4. Limit the scope of the source query (search) [limit-source-query]
@@ -76,13 +76,13 @@ Too few shards may mean that the benefits of distributing the workload cannot be
## 6. Tune `max_page_search_size` (search) [tune-max-page-search-size]
-The `max_page_search_size` {{transform}} configuration option defines the number of buckets that are returned for each search request. The default value is 500. If you increase this value, you get better throughput at the cost of higher latency and memory usage.
+The `max_page_search_size` transform configuration option defines the number of buckets that are returned for each search request. The default value is 500. If you increase this value, you get better throughput at the cost of higher latency and memory usage.
-The ideal value of this parameter is highly dependent on your use case. If your {{transform}} executes memory-intensive aggregations – for example, cardinality or percentiles – then increasing `max_page_search_size` requires more available memory. If memory limits are exceeded, a circuit breaker exception occurs.
+The ideal value of this parameter is highly dependent on your use case. If your transform executes memory-intensive aggregations – for example, cardinality or percentiles – then increasing `max_page_search_size` requires more available memory. If memory limits are exceeded, a circuit breaker exception occurs.
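For example, the limit can be raised through the update transform API. This is an illustrative sketch; the transform ID and value are hypothetical and depend on your use case and available memory:

```js
POST _transform/my-transform/_update
{
  "settings": {
    "max_page_search_size": 1000
  }
}
```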
## 7. Use indexed fields in your source indices (search) [indexed-fields-in-source]
-Runtime fields and scripted fields are not indexed fields; their values are only extracted or computed at search time. While these fields provide flexibility in how you access your data, they increase performance costs at search time. If {{transform}} performance using runtime fields or scripted fields is a concern, you may wish to consider using indexed fields instead. For performance reasons, we do not recommend using a runtime field as the time field that synchronizes a {{ctransform}}.
+Runtime fields and scripted fields are not indexed fields; their values are only extracted or computed at search time. While these fields provide flexibility in how you access your data, they increase performance costs at search time. If transform performance using runtime fields or scripted fields is a concern, you may wish to consider using indexed fields instead. For performance reasons, we do not recommend using a runtime field as the time field that synchronizes a {{ctransform}}.
## 8. Use index sorting (search, process) [index-sorting-group-by-ordering]
diff --git a/explore-analyze/transforms/transform-setup.md b/explore-analyze/transforms/transform-setup.md
index a65109af41..0109cd7f44 100644
--- a/explore-analyze/transforms/transform-setup.md
+++ b/explore-analyze/transforms/transform-setup.md
@@ -13,30 +13,30 @@ products:
## Requirements overview [requirements-overview]
-To use {{transforms}}, you must have:
+To use transforms, you must have:
-* at least one [{{transform}} node](../../deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#transform-node-role),
+* at least one [transform node](../../deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#transform-node-role),
* management features visible in the {{kib}} space, and
* security privileges that:
- * grant use of {{transforms}}, and
+ * grant use of transforms, and
* grant access to source and destination indices
## Security privileges [transform-privileges]
-Assigning security privileges affects how users access {{transforms}}. Consider the two main categories:
+Assigning security privileges affects how users access transforms. Consider the two main categories:
-* **[{{es}} API user](#transform-es-security-privileges)**: uses an {{es}} client, cURL, or {{kib}} **{{dev-tools-app}}** to access {{transforms}} through {{es}} APIs. This scenario requires {{es}} security privileges.
-* **[{{kib}} user](#transform-kib-security-privileges)**: uses {{transforms}} in {{kib}}. This scenario requires {{kib}} feature privileges *and* {{es}} security privileges.
+* **[{{es}} API user](#transform-es-security-privileges)**: uses an {{es}} client, cURL, or {{kib}} **{{dev-tools-app}}** to access transforms through {{es}} APIs. This scenario requires {{es}} security privileges.
+* **[{{kib}} user](#transform-kib-security-privileges)**: uses transforms in {{kib}}. This scenario requires {{kib}} feature privileges *and* {{es}} security privileges.
### {{es}} API user [transform-es-security-privileges]
-To *manage* {{transforms}}, you must meet all of the following requirements:
+To *manage* transforms, you must meet all of the following requirements:
* `transform_admin` built-in role or `manage_transform` cluster privileges,
* `read` and `view_index_metadata` index privileges on source indices, and
* `create_index`, `index`, `manage`, and `read` index privileges on destination indices. If a `retention_policy` is configured, `delete` index privilege is also required on the destination index.
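As a sketch, a role that satisfies the management requirements above could be created with the security API. The role name and index names here are placeholders:

```js
POST /_security/role/transform_admin_example
{
  "cluster": ["manage_transform"],
  "indices": [
    {
      "names": ["my-source-index"],
      "privileges": ["read", "view_index_metadata"]
    },
    {
      "names": ["my-dest-index"],
      "privileges": ["create_index", "index", "manage", "read"]
    }
  ]
}
```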
-To view only the configuration and status of {{transforms}}, you must have:
+To view only the configuration and status of transforms, you must have:
* `transform_user` built-in role or `monitor_transform` cluster privileges
@@ -44,7 +44,7 @@ For more information about {{es}} roles and privileges, refer to [Built-in roles
### {{kib}} user [transform-kib-security-privileges]
-Within a {{kib}} space, for full access to {{transforms}}, you must meet all of the following requirements:
+Within a {{kib}} space, for full access to transforms, you must meet all of the following requirements:
* Management features visible in the {{kib}} space, including `Data View Management` and `Stack Monitoring`,
* `monitoring_user` built-in role,
@@ -53,9 +53,9 @@ Within a {{kib}} space, for full access to {{transforms}}, you must meet all of
* data views for your source indices,
* `read` and `view_index_metadata` index privileges on source indices, and
* `create_index`, `index`, `manage`, and `read` index privileges on destination indices. Additionally, when using a `retention_policy`, `delete` index privilege is required on destination indices.
-* `read_pipeline` cluster privileges, if the {{transform}} uses an ingest pipeline
+* `read_pipeline` cluster privileges, if the transform uses an ingest pipeline
-Within a {{kib}} space, for read-only access to {{transforms}}, you must meet all of the following requirements:
+Within a {{kib}} space, for read-only access to transforms, you must meet all of the following requirements:
* Management features visible in the {{kib}} space, including `Stack Monitoring`,
* `monitoring_user` built-in role,
@@ -68,6 +68,6 @@ For more information and {{kib}} security features, see [{{kib}} role management
## {{kib}} spaces [transform-kib-spaces]
-[Spaces](../../deploy-manage/manage-spaces.md) enable you to organize your source and destination indices and other saved objects in {{kib}} and to see only the objects that belong to your space. However, a {{transform}} is a long running task which is managed on cluster level and therefore not limited in scope to certain spaces. Space awareness can be implemented for a {{data-source}} under **Stack Management > Kibana** which allows privileges to the {{transform}} destination index.
+[Spaces](../../deploy-manage/manage-spaces.md) enable you to organize your source and destination indices and other saved objects in {{kib}} and to see only the objects that belong to your space. However, a transform is a long-running task that is managed at the cluster level and is therefore not limited in scope to certain spaces. Space awareness can be implemented for a {{data-source}} under **Stack Management > Kibana**, which allows granting privileges to the transform destination index.
-To successfully create {{transforms}} in {{kib}}, you must be logged into a space where the source indices are visible and the `Data View Management` and `Stack Monitoring` features are visible.
+To successfully create transforms in {{kib}}, you must be logged into a space where the source indices are visible and the `Data View Management` and `Stack Monitoring` features are visible.
diff --git a/explore-analyze/transforms/transform-usage.md b/explore-analyze/transforms/transform-usage.md
index 14a118b75d..ab4a53b892 100644
--- a/explore-analyze/transforms/transform-usage.md
+++ b/explore-analyze/transforms/transform-usage.md
@@ -14,24 +14,24 @@ products:
Why does this occur? A web session duration is an example of a behavioral attribute not held on any one log record; it has to be derived by finding the first and last records for each session in our weblogs. This derivation requires some complex query expressions and a lot of memory to connect all the data points. If you have an ongoing background process that fuses related events from one index into entity-centric summaries in another index, you get a more useful, joined-up picture. This new index is sometimes referred to as a *{{dataframe}}*.
-You might want to consider using {{transforms}} instead of aggregations when:
+You might want to consider using transforms instead of aggregations when:
* You need a complete *feature index* rather than a top-N set of items.
- In {{ml}}, you often need a complete set of behavioral features rather than the top-N. For example, if you are predicting customer churn, you might look at features such as the number of website visits in the last week, the total number of sales, or the number of emails sent. The {{stack}} {{ml-features}} create models based on this multi-dimensional feature space, so they benefit from the full feature indices that are created by {{transforms}}.
+ In {{ml}}, you often need a complete set of behavioral features rather than the top-N. For example, if you are predicting customer churn, you might look at features such as the number of website visits in the last week, the total number of sales, or the number of emails sent. The {{stack}} {{ml-features}} create models based on this multi-dimensional feature space, so they benefit from the full feature indices that are created by transforms.
- This scenario also applies when you are trying to search across the results of an aggregation or multiple aggregations. Aggregation results can be ordered or filtered, but there are [limitations to ordering](elasticsearch://reference/aggregations/search-aggregations-bucket-terms-aggregation.md#search-aggregations-bucket-terms-aggregation-order) and [filtering by bucket selector](elasticsearch://reference/aggregations/search-aggregations-pipeline-bucket-selector-aggregation.md) is constrained by the maximum number of buckets returned. If you want to search all aggregation results, you need to create the complete {{dataframe}}. If you need to sort or filter the aggregation results by multiple fields, {{transforms}} are particularly useful.
+ This scenario also applies when you are trying to search across the results of an aggregation or multiple aggregations. Aggregation results can be ordered or filtered, but there are [limitations to ordering](elasticsearch://reference/aggregations/search-aggregations-bucket-terms-aggregation.md#search-aggregations-bucket-terms-aggregation-order) and [filtering by bucket selector](elasticsearch://reference/aggregations/search-aggregations-pipeline-bucket-selector-aggregation.md) is constrained by the maximum number of buckets returned. If you want to search all aggregation results, you need to create the complete {{dataframe}}. If you need to sort or filter the aggregation results by multiple fields, transforms are particularly useful.
* You need to sort aggregation results by a pipeline aggregation.
- [Pipeline aggregations](elasticsearch://reference/aggregations/pipeline.md) cannot be used for sorting. Technically, this is because pipeline aggregations are run during the reduce phase after all other aggregations have already completed. If you create a {{transform}}, you can effectively perform multiple passes over the data.
+ [Pipeline aggregations](elasticsearch://reference/aggregations/pipeline.md) cannot be used for sorting. Technically, this is because pipeline aggregations are run during the reduce phase after all other aggregations have already completed. If you create a transform, you can effectively perform multiple passes over the data.
* You want to create summary tables to optimize queries.
- For example, if you have a high level dashboard that is accessed by a large number of users and it uses a complex aggregation over a large dataset, it may be more efficient to create a {{transform}} to cache results. Therefore, each user doesn't need to run the aggregation query.
+    For example, if you have a high-level dashboard that is accessed by a large number of users and it uses a complex aggregation over a large dataset, it may be more efficient to create a transform to cache results. That way, each user doesn't need to run the aggregation query.
* You need to account for late-arriving data.
- In some cases, data might not be immediately available when a {{transform}} runs, leading to missing records in the destination index. This can happen due to ingestion delays, where documents take a few seconds or minutes to become searchable after being indexed. To handle this, the `delay` parameter in the {{transform}}’s sync configuration allows you to postpone processing new data. Instead of always querying the most recent records, the {{transform}} will skip a short period of time (for example, 60 seconds) to ensure all relevant data has arrived before processing.
+ In some cases, data might not be immediately available when a transform runs, leading to missing records in the destination index. This can happen due to ingestion delays, where documents take a few seconds or minutes to become searchable after being indexed. To handle this, the `delay` parameter in the transform’s sync configuration allows you to postpone processing new data. Instead of always querying the most recent records, the transform will skip a short period of time (for example, 60 seconds) to ensure all relevant data has arrived before processing.
- For example, if a {{transform}} runs every 5 minutes, it usually processes data from 5 minutes ago up to the current time. However, if you set `delay` to 60 seconds, the {{transform}} will instead process data from 6 minutes ago up to 1 minute ago, making sure that any documents that arrived late are included. By adjusting the `delay` parameter, you can improve the accuracy of transformed data while still maintaining near real-time results.
+ For example, if a transform runs every 5 minutes, it usually processes data from 5 minutes ago up to the current time. However, if you set `delay` to 60 seconds, the transform will instead process data from 6 minutes ago up to 1 minute ago, making sure that any documents that arrived late are included. By adjusting the `delay` parameter, you can improve the accuracy of transformed data while still maintaining near real-time results.
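The `delay` behavior described above is configured in the transform's `sync` block. A minimal sketch of such a configuration follows; the transform ID, index names, and field names (`weblogs`, `@timestamp`, `session.id`, `event.duration`) are illustrative, not defaults:

```console
PUT _transform/web-sessions
{
  "source": { "index": "weblogs" },
  "dest": { "index": "web-sessions" },
  "frequency": "5m",
  "sync": {
    "time": {
      "field": "@timestamp",
      "delay": "60s"
    }
  },
  "pivot": {
    "group_by": {
      "session_id": { "terms": { "field": "session.id" } }
    },
    "aggregations": {
      "session_duration": { "max": { "field": "event.duration" } }
    }
  }
}
```

With this configuration, each checkpoint processes documents with timestamps up to one minute before the current time, giving late-arriving events a chance to become searchable before they are summarized.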
diff --git a/raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md b/raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md
index 7f5b384499..fa79f53551 100644
--- a/raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md
+++ b/raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md
@@ -18,7 +18,7 @@ This tutorial is a follow-on to [installing a self-managed {{stack}}](/deploy-ma
Since {{stack}} 8.0, security is enabled by default, meaning that traffic between {{es}} nodes and between {{kib}} and {{es}} is SSL-encrypted. While this is suitable for testing non-production viability of the Elastic platform, most production networks have requirements for the use of trusted CA-signed certificates. These steps demonstrate how to update the out-of-the-box self-signed certificates with your own trusted CA-signed certificates.
-For traffic to be encrypted between {{es}} cluster nodes and between {{kib}} and {{es}}, SSL certificates must be created for the transport ({{es}} inter-node communication) and HTTP (for the {{es}} REST API) layers. Similarly, when setting up {{fleet-server}} you’ll generate and configure a new certificate bundle, and then {{elastic-agent}} uses the generated certificates to communicate with both {{fleet-server}} and {{es}}. The process to set things up is as follows:
+For traffic to be encrypted between {{es}} cluster nodes and between {{kib}} and {{es}}, SSL certificates must be created for the transport ({{es}} inter-node communication) and HTTP (for the {{es}} REST API) layers. Similarly, when setting up {{fleet-server}} you’ll generate and configure a new certificate bundle, and then {{agent}} uses the generated certificates to communicate with both {{fleet-server}} and {{es}}. The process to set things up is as follows:
* [Prerequisites and assumptions](#install-stack-demo-secure-prereqs)
* [Step 1: Generate a new self-signed CA certificate](#install-stack-demo-secure-ca)
diff --git a/reference/fleet/agent-environment-variables.md b/reference/fleet/agent-environment-variables.md
index a46225d5d5..ca8d146503 100644
--- a/reference/fleet/agent-environment-variables.md
+++ b/reference/fleet/agent-environment-variables.md
@@ -73,7 +73,7 @@ Settings used to bootstrap {{fleet-server}} on this {{agent}}. At least one {{fl
| $$$env-bootstrap-fleet-fleet-server-cert-key-passphrase$$$
`FLEET_SERVER_CERT_KEY_PASSPHRASE`
| (string) The path to the private key passphrase for an encrypted private key file.
**Default:** none
|
| $$$env-bootstrap-fleet-fleet-server-client-auth$$$
`FLEET_SERVER_CLIENT_AUTH`
| (string) One of `none`, `optional`, or `required`. {{fleet-server}}'s client authentication option for client mTLS connections. If `optional` or `required` is specified, client certificates are verified using CAs.
**Default:** `none`
|
| $$$env-bootstrap-fleet-fleet-server-es-ca-trusted-fingerprint$$$
`FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT`
| (string) The SHA-256 fingerprint (hash) of the certificate authority used to self-sign {{es}} certificates. This fingerprint is used to verify self-signed certificates presented by {{fleet-server}} and any inputs started by {{agent}} for communication. This flag is required when using self-signed certificates with {{es}}.
**Default:** `""`
|
-| $$$env-bootstrap-fleet-fleet-daemon-timeout$$$
`FLEET_DAEMON_TIMEOUT`
| (duration) Set to indicate how long {{fleet-server}} will wait during the bootstrap process for {{elastic-agent}}.
|
+| $$$env-bootstrap-fleet-fleet-daemon-timeout$$$
`FLEET_DAEMON_TIMEOUT`
| (duration) Set to indicate how long {{fleet-server}} will wait during the bootstrap process for {{agent}}.
|
| $$$env-bootstrap-fleet-fleet-server-timeout$$$
`FLEET_SERVER_TIMEOUT`
| (duration) Set to indicate how long {{agent}} will wait for {{fleet-server}} to check in as healthy.
|
diff --git a/reference/fleet/elastic-agent-container.md b/reference/fleet/elastic-agent-container.md
index 18c4cbb96c..b482392887 100644
--- a/reference/fleet/elastic-agent-container.md
+++ b/reference/fleet/elastic-agent-container.md
@@ -10,7 +10,7 @@ products:
You can run {{agent}} inside a container — either with {{fleet-server}} or standalone. Docker images for all versions of {{agent}} are available from the [Elastic Docker registry](https://www.docker.elastic.co/r/elastic-agent/elastic-agent). If you are running in Kubernetes, refer to [run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md).
-Running {{elastic-agent}} in a container is supported only in Linux environments. For this reason we don't currently provide {{agent}} container images for Windows.
+Running {{agent}} in a container is supported only in Linux environments. For this reason we don't currently provide {{agent}} container images for Windows.
In version 9.0.0, the default Ubuntu-based Docker images used for {{agent}} have been changed to Red Hat UBI (Universal Base Image) minimal based images, to reduce the overall footprint of the agent Docker images and to improve compliance with enterprise standards. Refer to [#6427]({{agent-pull}}6427) for details.
diff --git a/reference/fleet/index.md b/reference/fleet/index.md
index c3c0b9612f..637bbc97a4 100644
--- a/reference/fleet/index.md
+++ b/reference/fleet/index.md
@@ -12,7 +12,7 @@ products:
- id: kibana
---
-# {{fleet}} and {{elastic-agent}} overview [fleet-and-elastic-agent]
+# {{fleet}} and {{agent}} overview [fleet-and-elastic-agent]
% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
$$$package-registry-intro$$$
@@ -85,7 +85,7 @@ You can see the state of all your {{agents}} in {{fleet}}. On the **Agents** pag
:alt: Agents page
:::
-{{fleet}} in {{kib}} enables you to manage {{elastic-agent}} installations in standalone or {{fleet}} mode.
+{{fleet}} in {{kib}} enables you to manage {{agent}} installations in standalone or {{fleet}} mode.
Standalone mode requires you to manually configure and manage the agent locally. It is recommended for advanced users only.
diff --git a/reference/machine-learning/ootb-ml-jobs-siem.md b/reference/machine-learning/ootb-ml-jobs-siem.md
index 912c4e6d52..1c645da83c 100644
--- a/reference/machine-learning/ootb-ml-jobs-siem.md
+++ b/reference/machine-learning/ootb-ml-jobs-siem.md
@@ -148,7 +148,7 @@ If there are additional requirements such as installing the Windows System Monit
## Security: Elastic Integrations [security-integrations-jobs]
-[Elastic Integrations](integration-docs://reference/index.md) are a streamlined way to add Elastic assets to your environment, such as data ingestion, {{transforms}}, and in this case, {{ml}} capabilities for Security.
+[Elastic Integrations](integration-docs://reference/index.md) are a streamlined way to add Elastic assets to your environment, such as data ingestion, transforms, and in this case, {{ml}} capabilities for Security.
The following Integrations use {{ml}} to analyze patterns of user and entity behavior, and help detect and alert when there is related suspicious activity in your environment.
diff --git a/reference/security/defend-advanced-settings.md b/reference/security/defend-advanced-settings.md
index c5a7cd66f7..28d53c3a16 100644
--- a/reference/security/defend-advanced-settings.md
+++ b/reference/security/defend-advanced-settings.md
@@ -18,7 +18,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*How long to wait for agent connectivity before sending first policy reply, in seconds. Default: `60`.*
- {{elastic-endpoint}} applies a cached configuration from disk immediately on start up. However, before generating a policy response document, {{elastic-endpoint}} waits to first establish a connection to {{elastic-agent}} to see if there are configuration updates. Use this setting to specify how long that delay should be. Regardless of this setting, {{elastic-endpoint}} will periodically attempt to (re)connect to {{elastic-agent}} if it isn't connected.
+ {{elastic-endpoint}} applies a cached configuration from disk immediately on start up. However, before generating a policy response document, {{elastic-endpoint}} waits to first establish a connection to {{agent}} to see if there are configuration updates. Use this setting to specify how long that delay should be. Regardless of this setting, {{elastic-endpoint}} will periodically attempt to (re)connect to {{agent}} if it isn't connected.
`[mac,windows].advanced.alerts.cloud_lookup`
@@ -56,7 +56,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable self-healing by erasing attack artifacts when prevention alerts are triggered. Warning: data loss can occur. Default: `false`.*
When a prevention alert is generated, {{elastic-endpoint}} can [roll back](/solutions/security/configure-elastic-defend/configure-self-healing-rollback-for-windows-endpoints.md) recent filesystem changes likely associated with the attack. Use this setting to enable the self-healing rollback feature.
-
+
::::{warning}
This feature can cause permanent data loss.
::::
@@ -80,7 +80,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable self-healing of registry based malware artifacts when prevention alerts are triggered. Requires `rollback.self_healing.enabled` to also be enabled. Default: `true`.*
As an extension to the base-level self-healing rollback feature, {{elastic-endpoint}} can roll back recent registry changes when an attack occurs. Use this setting to enable this feature.
-
+
::::{warning}
This feature can cause permanent data loss.
::::
@@ -175,7 +175,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Provide an additional PEM-encoded certificate for {{fleet}} Server SSL/TLS verification. Default: none.*
- Specifies the certificate used to verify the SSL/TLS connection to the {{fleet}} Server. We typically recommend configuring this at the {{fleet}} level, so it applies consistently across {{elastic-agent}} and all integrations, rather than setting it specifically for {{elastic-endpoint}}.
+ Specifies the certificate used to verify the SSL/TLS connection to the {{fleet}} Server. We typically recommend configuring this at the {{fleet}} level, so it applies consistently across {{agent}} and all integrations, rather than setting it specifically for {{elastic-endpoint}}.
`[linux,mac,windows].advanced.artifacts.user.proxy_disable`
@@ -191,7 +191,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Override all other proxy settings for use when downloading user artifact updates from {{fleet}} Server. Default: none.*
- Use this setting to configure a proxy server when communicating with the {{fleet}} Server. We typically recommend [configuring proxy settings at the {{elastic-agent}} level](/reference/fleet/fleet-agent-proxy-support.md) to ensure consistent and centralized proxy handling.
+ Use this setting to configure a proxy server when communicating with the {{fleet}} Server. We typically recommend [configuring proxy settings at the {{agent}} level](/reference/fleet/fleet-agent-proxy-support.md) to ensure consistent and centralized proxy handling.
`[linux,mac,windows].advanced.capture_command_line`
@@ -221,12 +221,12 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable diagnostic features. Default: `true`.*
- Use this setting to disable diagnostic mode, which tests new protections and rules to ensure low false positive rates upon production release. The results of this testing are only collected if telemetry is enabled.
-
+ Use this setting to disable diagnostic mode, which tests new protections and rules to ensure low false positive rates upon production release. The results of this testing are only collected if telemetry is enabled.
+
::::{note}
We recommend keeping diagnostic mode enabled to help improve product quality and ensure new protections perform effectively in your environment before they’re released.
::::
-
+
`windows.advanced.diagnostic.rollback_telemetry_enabled`
: Added in 8.1.0.
@@ -241,9 +241,9 @@ Advanced settings are not recommended for most users. Use them only if you have
*Provide a comma-delimited set of key=value pairs of values to add into all documents. Each key must begin with `Custom`. An example is `Custom.key=value1,Custom.key2=value2`. Default: none.*
- Use this setting to add custom key/value pairs into all {{elastic-endpoint}} documents. It works similarly to the [**Custom fields** {{elastic-agent}} policy configuration](/reference/fleet/agent-policy.md#add-custom-fields), which {{elastic-endpoint}} doesn't support.
+ Use this setting to add custom key/value pairs into all {{elastic-endpoint}} documents. It works similarly to the [**Custom fields** {{agent}} policy configuration](/reference/fleet/agent-policy.md#add-custom-fields), which {{elastic-endpoint}} doesn't support.
+
-
`[linux,mac,windows].advanced.elasticsearch.delay`
: Added in 7.9.0.
@@ -314,7 +314,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable ETW API events. `false` disables them even if they are needed by other features. Default: `true`.*
Use this setting to disable API event collection, even if other {{elastic-endpoint}} features require them.
-
+
:::{warning}
Disabling API event collection may break other {{elastic-endpoint}} features.
:::
@@ -503,7 +503,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable kernel image load events. `false` disables them even if they are needed by other features. Default: `true`.*
- Use this setting to disable image load events, even if other {{elastic-endpoint}} features require them.
+ Use this setting to disable image load events, even if other {{elastic-endpoint}} features require them.
:::{warning}
Disabling image load events may break other {{elastic-endpoint}} features.
@@ -579,7 +579,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Provide a comma-separated list of additional filesystems for fanotify subsystem to monitor. Names should be as they appear in `/proc/filesystems`, for example `jfs,ufs,ramfs`. When `ignore_unknown_filesystems` is `false`, this option is ignored. Warning: it's recommended to avoid network backed filesystems. Default: none.*
- Use this setting to specify filesystems for fanotify to monitor for malware. Filesystems that aren't monitored won't generate malware alerts.
+ Use this setting to specify filesystems for fanotify to monitor for malware. Filesystems that aren't monitored won't generate malware alerts.
`linux.advanced.fanotify.seccomp_restricted`
@@ -613,7 +613,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable self-protection hardening on macOS. Default: `true`.*
- Use this setting to enable self-protection on macOS. This hardens {{elastic-endpoint}} and {{elastic-agent}} processes, files, and services against basic tampering attempts. This is distinct from [tamper protection](https://www.elastic.co/docs/solutions/security/configure-elastic-defend/prevent-elastic-agent-uninstallation).
+ Use this setting to enable self-protection on macOS. This hardens {{elastic-endpoint}} and {{agent}} processes, files, and services against basic tampering attempts. This is distinct from [tamper protection](https://www.elastic.co/docs/solutions/security/configure-elastic-defend/prevent-elastic-agent-uninstallation).
`linux.advanced.host_isolation.allowed`
@@ -628,7 +628,7 @@ Advanced settings are not recommended for most users. Use them only if you have
: Added in 8.11.0.
*Collect and send image load events to {{es}}. Warning: this can lead to very high data volumes; use of event filters to drop unwanted events is strongly recommended. Default: `false`.*
-
+
If malicious behavior protection is enabled, {{elastic-endpoint}} by default monitors for image load events, but doesn't emit them to {{es}} due to high data volume. Use this setting to allow those events to be emitted to {{es}}, regardless of whether malicious behavior protection is enabled.
@@ -657,7 +657,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Control whether to connect to the kernel driver. Warning: disabling this will break most features. Default: `true`.*
- On macOS, {{elastic-endpoint}} uses a system extension, and on Windows, a kernel driver. Use this setting to disable {{elastic-endpoint}}'s attempt to connect to those subsystems.
+ On macOS, {{elastic-endpoint}} uses a system extension, and on Windows, a kernel driver. Use this setting to disable {{elastic-endpoint}}'s attempt to connect to those subsystems.
:::{warning}
Disabling the connection will break many {{elastic-endpoint}} features.
@@ -700,7 +700,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable kernel file write events. `false` disables them even if they are needed by other features. Default: `true`.*
Use this setting to disable kernel file write events, even if other {{elastic-endpoint}} features require them.
-
+
:::{warning}
Disabling file write events may break other {{elastic-endpoint}} features.
:::
@@ -724,7 +724,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable kernel network events. `false` disables them even if they are needed by other features. Default: `true`.*
Use this setting to disable kernel network events, even if other {{elastic-endpoint}} features require them.
-
+
:::{warning}
Disabling network events may break other {{elastic-endpoint}} features.
:::
@@ -736,7 +736,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable the network content filter, which will enable network eventing. Warning: host isolation will fail if this is disabled. Default: `true`.*
Use this setting to enable or disable the macOS network content filter.
-
+
:::{warning}
Disabling the network content filter will break other {{elastic-endpoint}} features.
:::
@@ -748,7 +748,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable the network packet filter. Warning: host isolation will fail if this is disabled. Default: `true`.*
Use this setting to enable or disable the macOS network packet filter.
-
+
:::{warning}
Disabling the network packet filter will break other {{elastic-endpoint}} features.
:::
@@ -784,7 +784,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable kernel process events. `false` disables them even if they are needed by other features. Default: `true`.*
Use this setting to disable kernel process events, even if other {{elastic-endpoint}} features require them.
-
+
:::{warning}
Disabling process events may break other {{elastic-endpoint}} features.
:::
@@ -796,7 +796,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable process and thread handle events. `false` disables them even if they are needed by other features. Default: `true`.*
Use this setting to disable process and thread handle events, even if other {{elastic-endpoint}} features require them.
-
+
:::{warning}
Disabling process and thread handle events may break other {{elastic-endpoint}} features.
:::
@@ -808,7 +808,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable kernel registry events. `false` disables them even if they are needed by other features. Default: `true`.*
Use this setting to disable registry modification events, even if other {{elastic-endpoint}} features require them.
-
+
:::{warning}
Disabling registry modification events may break other {{elastic-endpoint}} features.
:::
@@ -826,7 +826,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable kernel sync image load events. `false` disables them even if they are needed by other features. Default: `true`.*
Use this setting to disable synchronous image load events, even if other {{elastic-endpoint}} features require them.
-
+
:::{warning}
Disabling synchronous image load events may break other {{elastic-endpoint}} features.
:::
@@ -905,7 +905,7 @@ Advanced settings are not recommended for most users. Use them only if you have
*Enable scanning for malicious memory regions as a part of memory protection. Default: `true`.*
- Use this setting to disable memory scanning using YARA rules, even if Memory Threat protection is enabled. On Windows, Memory Threat protection will remain effective even without this scan. On macOS and Linux, disabling this scan will effectively disable Memory Threat protection.
+ Use this setting to disable memory scanning using YARA rules, even if Memory Threat protection is enabled. On Windows, Memory Threat protection will remain effective even without this scan. On macOS and Linux, disabling this scan will effectively disable Memory Threat protection.
`[linux,mac,windows].advanced.memory_protection.memory_scan_collect_sample`
@@ -1028,7 +1028,7 @@ Advanced settings are not recommended for most users. Use them only if you have
: Added in 8.3.0.
*The percentage of the aggregate system CPU to which Endpoint is restricted. The range is `20`-`100%`. Values under `20` are ignored and trigger a policy warning. Default: `50`.*
-
+
:::{important}
Setting the limit too low will impact system performance, since {{elastic-endpoint}} pauses application loads during malware scans.
:::
diff --git a/release-notes/elastic-security/index.md b/release-notes/elastic-security/index.md
index 252ef8d469..d4d829f265 100644
--- a/release-notes/elastic-security/index.md
+++ b/release-notes/elastic-security/index.md
@@ -131,7 +131,7 @@ To check for security updates, go to [Security announcements for the Elastic sta
* Fixes a bug where Linux capabilities were included in {{elastic-endpoint}} network events despite being disabled.
* Fixes an issue where {{elastic-defend}} would incorrectly calculate throughput capacity when sending documents to output. This may have limited event throughput on extremely busy endpoints.
* Improves the reliability of local {{elastic-defend}} administrative shell commands. In rare cases, a command could fail to execute due to issues with interprocess communication.
-* Fixes an issue in {{elastic-defend}} where host isolation could auto-release incorrectly. Host isolation now only releases when {{elastic-endpoint}} becomes orphaned. Intermittent {{elastic-agent}} connectivity changes no longer alter the host isolation state.
+* Fixes an issue in {{elastic-defend}} where host isolation could auto-release incorrectly. Host isolation now only releases when {{elastic-endpoint}} becomes orphaned. Intermittent {{agent}} connectivity changes no longer alter the host isolation state.
* Fixes a bug in {{elastic-defend}} where Linux endpoints would report `process.executable` as a relative, instead of absolute, path.
* Fixes an issue which could cause {{elastic-defend}} to improperly report success when self-healing rollback attempted to terminate a process with an active debugger on Windows.
* Fixes an issue in {{elastic-defend}} installation logging where only the first character of install paths (usually 'C') was logged.
@@ -193,7 +193,7 @@ To check for security updates, go to [Security announcements for the Elastic sta
* Adds support in {{elastic-defend}} for installing eBPF probes on Linux endpoints when taskstats is compiled out of the kernel.
* Fixes an issue in {{elastic-defend}} where Linux network events could have source and destination bytes swapped.
* Removes `.process.thread.capabilities.permitted` and `.process.thread.capabilities.effective` from Linux network events in {{elastic-defend}}.
-* Fixes an issue in {{elastic-defend}} where host isolation could auto-release incorrectly. Host isolation now only releases when {{elastic-endpoint}} becomes orphaned. Intermittent {{elastic-agent}} connectivity changes no longer alter the host isolation state.
+* Fixes an issue in {{elastic-defend}} where host isolation could auto-release incorrectly. Host isolation now only releases when {{elastic-endpoint}} becomes orphaned. Intermittent {{agent}} connectivity changes no longer alter the host isolation state.
* Fixes an issue where {{elastic-defend}} would incorrectly calculate throughput capacity when sending documents to output. This may have limited event throughput on extremely busy endpoints.
* Fixes an issue in {{elastic-defend}} installation logging where only the first character of install paths (usually 'C') would be logged.
@@ -350,7 +350,7 @@ To check for security updates, go to [Security announcements for the Elastic sta
* Adds support in {{elastic-defend}} for installing eBPF probes on Linux endpoints when taskstats is compiled out of the kernel.
* Fixes an issue in {{elastic-defend}} where Linux network events could have source and destination bytes swapped.
* Removes `.process.thread.capabilities.permitted` and `.process.thread.capabilities.effective` from Linux network events in {{elastic-defend}}.
-* Fixes an issue in {{elastic-defend}} where host isolation could auto-release incorrectly. Host isolation now only releases when {{elastic-endpoint}} becomes orphaned. Intermittent {{elastic-agent}} connectivity changes no longer alter the host isolation state.
+* Fixes an issue in {{elastic-defend}} where host isolation could auto-release incorrectly. Host isolation now only releases when {{elastic-endpoint}} becomes orphaned. Intermittent {{agent}} connectivity changes no longer alter the host isolation state.
* Improves the reliability of local {{elastic-defend}} administrative shell commands. In rare cases, a command could fail to execute due to an issue with interprocess communication.
* Fixes an issue where {{elastic-defend}} would incorrectly calculate throughput capacity when sending documents to output. This may have limited event throughput on extremely busy endpoints.
* Fixes an issue in {{elastic-defend}} installation logging where only the first character of install paths (usually 'C') would be logged.
diff --git a/release-notes/elastic-security/known-issues.md b/release-notes/elastic-security/known-issues.md
index 5026b1382c..8e022ddd20 100644
--- a/release-notes/elastic-security/known-issues.md
+++ b/release-notes/elastic-security/known-issues.md
@@ -19,13 +19,13 @@ Known issues are significant defects or limitations that may impact your impleme
:::{dropdown} Deploying integrations using AWS CloudFormation doesn't work
Applies to: 9.2.0 and 9.2.1
-**Impact**
+**Impact**
New deployments of integrations that collect data from cloud service providers, such as Asset Discovery and Cloud Security Posture Management, do not work when deployed to AWS using agent-based deployment and the AWS CloudFormation deployment option. The problem results from a malformed CloudFormation parameter: `ElasticAgentVersion`. The default value for `ElasticAgentVersion` has a space instead of a `+`. This produces an invalid agent version value.
For more information, check [#242365](https://github.com/elastic/kibana/pull/242365).
-**Workaround**
+**Workaround**
To work around this issue, update the default CloudFormation template by replacing the space in the `ElasticAgentVersion` parameter with a `+`.
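As an illustrative sketch of the workaround above (the sample default value is hypothetical; only the space-to-`+` fix comes from this known issue):

```python
# Hypothetical malformed default for the ElasticAgentVersion parameter:
# a space where the build-metadata "+" separator should be.
malformed_default = "9.2.0 build.12345"  # sample value, not the real default

def fix_agent_version(value: str) -> str:
    """Replace the first space with '+' to form a valid agent version."""
    return value.replace(" ", "+", 1)

fixed_default = fix_agent_version(malformed_default)
```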
@@ -35,7 +35,7 @@ Resolved in {{stack}} 9.2.2.
:::
-:::{dropdown} Entity store transform is unavailable
+:::{dropdown} Entity store transform is unavailable
Applies to: 9.2.0
@@ -43,7 +43,7 @@ Applies to: 9.2.0
A new feature introduced to the entity store in 9.2.0 caused the transform to scan for nonexistent indices.
-**Workaround**
+**Workaround**
Restart the entity store:
1. Find **Entity Store** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
@@ -61,17 +61,17 @@ Applies to: ECH 9.2.0 deployments hosted on AWS
**Impact**
-If your ECH deployment is hosted on AWS, new Cloud Security Posture Management (CSPM) and Asset Inventory integrations will fail to produce findings when deployed using agent-based deployment. ECH deployments hosted on GCP or Azure are not affected. Integrations that use agentless deployment are not affected.
+If your ECH deployment is hosted on AWS, new Cloud Security Posture Management (CSPM) and Asset Inventory integrations will fail to produce findings when deployed using agent-based deployment. ECH deployments hosted on GCP or Azure are not affected. Integrations that use agentless deployment are not affected.
-**Workaround**
+**Workaround**
Two workarounds are available:
-1. Turn off the **Enable Cloud Connector** advanced setting.
+1. Turn off the **Enable Cloud Connector** advanced setting.
1. Go to the **Advanced Settings** menu using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. In the **Security Solution** section, turn off the **Enable Cloud Connector** option.
3. Your agent-based integration deployments will work as expected.
-2. Use agentless deployment.
+2. Use agentless deployment.
1. Instead of using agent-based deployment, use agentless deployment. Agentless deployment works as expected.
**Resolved**
@@ -82,22 +82,22 @@ Resolved in {{stack}} 9.2.1
::::{dropdown} Filters may not apply correctly on the Alerts page
-Applies to: 9.1.0, 9.1.1, 9.1.2, and 9.1.3
+Applies to: 9.1.0, 9.1.1, 9.1.2, and 9.1.3
**Impact**
-After upgrading to 9.1.0 and later, some users may experience inconsistent results when applying filters to the Alerts page.
+After upgrading to 9.1.0 or later, some users may experience inconsistent results when applying filters to the Alerts page.
**Workaround**
-You can turn off the {{kib}} `courier:ignoreFilterIfFieldNotInIndex` [advanced setting](kibana://reference/advanced-settings.md#kibana-search-settings), which only applies to the current space. However, turning off this setting might prevent dashboards and visualizations with applied filters from displaying properly. If you have important dashboards that this will impact, you can temporarily move them to a new space by doing the following:
+You can turn off the {{kib}} `courier:ignoreFilterIfFieldNotInIndex` [advanced setting](kibana://reference/advanced-settings.md#kibana-search-settings), which only applies to the current space. However, turning off this setting might prevent dashboards and visualizations with applied filters from displaying properly. If you have important dashboards that this will impact, you can temporarily move them to a new space by doing the following:
-1. Create a [new space](/deploy-manage/manage-spaces.md#spaces-managing).
-2. Turn on the {{kib}} `courier:ignoreFilterIfFieldNotInIndex` [advanced setting](kibana://reference/advanced-settings.md#kibana-search-settings) so that filters apply to visualizations only if the index contains the filtering field.
-3. Use the [import saved objects tool](/explore-analyze/find-and-organize/saved-objects.md#saved-objects-import-and-export) to move the dashboards or visualizations to the space you just created.
+1. Create a [new space](/deploy-manage/manage-spaces.md#spaces-managing).
+2. Turn on the {{kib}} `courier:ignoreFilterIfFieldNotInIndex` [advanced setting](kibana://reference/advanced-settings.md#kibana-search-settings) so that filters apply to visualizations only if the index contains the filtering field.
+3. Use the [import saved objects tool](/explore-analyze/find-and-organize/saved-objects.md#saved-objects-import-and-export) to move the dashboards or visualizations to the space you just created.
:::{note}
-Ensure you give any users who will need access to the new space the appropriate permissions.
+Ensure you give any users who will need access to the new space the appropriate permissions.
:::
**Resolved**
@@ -106,13 +106,13 @@ Resolved in {{stack}} 9.1.4
::::
-:::{dropdown} The {{elastic-agent}} Docker image is not available at `docker.elastic.co/beats/elastic-agent:9.0.0`
+:::{dropdown} The {{agent}} Docker image is not available at `docker.elastic.co/beats/elastic-agent:9.0.0`
Applies to: 9.0.0
**Impact**
-The {{elastic-agent}} image is not available from `docker.elastic.co/beats/elastic-agent:9.0.0`. The default manifests for integrations that run {{elastic-agent}} on Kubernetes—such as CSPM or CNVM—use this image location, resulting in an error.
+The {{agent}} image is not available from `docker.elastic.co/beats/elastic-agent:9.0.0`. The default manifests for integrations that run {{agent}} on Kubernetes—such as CSPM or CNVM—use this image location, resulting in an error.
**Workaround**
diff --git a/solutions/observability/apm/apm-server/use-internal-collection-to-send-monitoring-data.md b/solutions/observability/apm/apm-server/use-internal-collection-to-send-monitoring-data.md
index e102df9f19..8674abbc48 100644
--- a/solutions/observability/apm/apm-server/use-internal-collection-to-send-monitoring-data.md
+++ b/solutions/observability/apm/apm-server/use-internal-collection-to-send-monitoring-data.md
@@ -80,7 +80,7 @@ You specify these settings in the X-Pack monitoring section of the `apm-server.y
### `monitoring.enabled` [_monitoring_enabled]
-The `monitoring.enabled` config is a boolean setting to enable or disable {{monitoring}}. If set to `true`, monitoring is enabled.
+The `monitoring.enabled` config is a boolean setting to enable or disable {{monitor-features}}. If set to `true`, monitoring is enabled.
The default value is `false`.
diff --git a/solutions/observability/incident-management/create-manage-rules.md b/solutions/observability/incident-management/create-manage-rules.md
index 38145a9d81..f9c532ab99 100644
--- a/solutions/observability/incident-management/create-manage-rules.md
+++ b/solutions/observability/incident-management/create-manage-rules.md
@@ -66,7 +66,7 @@ From the action menu you can also:
## View rule details [observability-create-manage-rules-view-rule-details]
-Click on an individual rule on the **{{rules-app}}** page to view details including the rule name, status, definition, execution history, related alerts, and more.
+Click on an individual rule on the **{{rules-ui}}** page to view details including the rule name, status, definition, execution history, related alerts, and more.
:::{image} /solutions/images/serverless-alerts-detail-apm-anomaly.png
:alt: Rule details (APM anomaly)
@@ -106,7 +106,7 @@ Rules are disabled on export. You are prompted to re-enable the rule on successf
When creating or editing a rule, add the following resources to help you get started with investigating alerts:
-* {applies_to}`stack: ga 9.1` **Investigation guide**: Investigation guides can help you respond to alerts more efficiently and consistently. When creating them, you can include instructions for responding to alerts, links to external supporting materials, and more. When the rule generates an alert, the investigation guide can be accessed from the **Investigation guide** tab on the alert's details page.
+* {applies_to}`stack: ga 9.1` **Investigation guide**: Investigation guides can help you respond to alerts more efficiently and consistently. When creating them, you can include instructions for responding to alerts, links to external supporting materials, and more. When the rule generates an alert, the investigation guide can be accessed from the **Investigation guide** tab on the alert's details page.
::::{tip}
Use Markdown to format and structure text in your investigation guide.
diff --git a/solutions/search/using-resolve-cluster-endpoint-before-cross-cluster-search.md b/solutions/search/using-resolve-cluster-endpoint-before-cross-cluster-search.md
index f7158819f1..9c1c8eee92 100644
--- a/solutions/search/using-resolve-cluster-endpoint-before-cross-cluster-search.md
+++ b/solutions/search/using-resolve-cluster-endpoint-before-cross-cluster-search.md
@@ -19,7 +19,7 @@ You may want to exclude a cluster or index from a search when:
3. The index expression, combined with any query parameters you specify, might trigger exceptions. In these cases, the "error" field in the `_resolve/cluster` response will be present. This is also where security/permission errors will be shown.
4. A remote cluster is running an older version that does not support features needed for your search.
-## {{api-examples-title}} [resolve-cluster-api-example]
+## Examples [resolve-cluster-api-example]
```console
GET /_resolve/cluster/my-index*,clust*:my-index*
diff --git a/solutions/security/configure-elastic-defend/configure-data-volume-for-elastic-endpoint.md b/solutions/security/configure-elastic-defend/configure-data-volume-for-elastic-endpoint.md
index 29c063f96a..1b3d285764 100644
--- a/solutions/security/configure-elastic-defend/configure-data-volume-for-elastic-endpoint.md
+++ b/solutions/security/configure-elastic-defend/configure-data-volume-for-elastic-endpoint.md
@@ -28,7 +28,7 @@ Each setting has several OS-specific variants, represented by `[linux|mac|window
## Network event deduplication [network-event-deduplication]
:::{admonition} Added in 8.15.0
-This functionality was added in {{elastic-agent}} 8.15.0.
+This functionality was added in {{agent}} 8.15.0.
:::
When repeated network connections are detected from the same process, {{elastic-endpoint}} will not produce network events for subsequent connections. To disable or reduce deduplication of network events, use these advanced settings:
@@ -43,7 +43,7 @@ When repeated network connections are detected from the same process, {{elastic-
## Data in `host.*` fields [host-fields]
:::{admonition} Added in 8.18.0
-This functionality was added in {{elastic-agent}} 8.18.0.
+This functionality was added in {{agent}} 8.18.0.
:::
{{elastic-endpoint}} includes only a small subset of the data in the `host.*` fieldset in event documents. Full `host.*` information is still included in documents written to the `metrics-*` index pattern and in {{elastic-endpoint}} alerts. To override this behavior and include all `host.*` data for events, use this advanced setting:
@@ -60,7 +60,7 @@ Users should take note of how a lack of some `host.*` information may affect the
## Merged process and network events [merged-process-network]
:::{admonition} Added in 8.18.0
-This functionality was added in {{elastic-agent}} 8.18.0.
+This functionality was added in {{agent}} 8.18.0.
:::
{{elastic-endpoint}} merges process `create`/`terminate` events (Windows) and `fork`/`exec`/`end` events (macOS/Linux) when possible. This means short-lived processes only generate a single event containing the details from when the process terminated. {{elastic-endpoint}} also merges network `connection/termination` events (Windows/macOS/Linux) when possible for short-lived connections. To disable this behavior, use these advanced settings:
@@ -80,7 +80,7 @@ Merged events can affect the results of [event filters](../manage-elastic-defend
## MD5 and SHA-1 hashes [md5-sha1-hashes]
:::{admonition} Added in 8.18.0
-This functionality was added in {{elastic-agent}} 8.18.0.
+This functionality was added in {{agent}} 8.18.0.
:::
{{elastic-endpoint}} does not report MD5 and SHA-1 hashes in event data by default. These will still be reported if any [trusted applications](../manage-elastic-defend/trusted-applications.md), [blocklist entries](../manage-elastic-defend/blocklist.md), [event filters](../manage-elastic-defend/event-filters.md), or [Endpoint exceptions](../detect-and-alert/add-manage-exceptions.md#endpoint-rule-exceptions) require them. To include these hashes in all event data, use these advanced settings:
diff --git a/solutions/security/get-started/spaces-defend-faq.md b/solutions/security/get-started/spaces-defend-faq.md
index ebaf88c1bd..a07629f504 100644
--- a/solutions/security/get-started/spaces-defend-faq.md
+++ b/solutions/security/get-started/spaces-defend-faq.md
@@ -14,9 +14,9 @@ This page introduces {{elastic-sec}} space awareness and answers frequently aske
::::{admonition} Key points
* Artifacts such as trusted applications, event filters, and response action history are scoped by space to provide granular control over access.
-* Role-based access control (RBAC) defines who can manage global and space-specific resources. Users can view, edit, or manage artifacts based on their role privileges and the space context.
+* Role-based access control (RBAC) defines who can manage global and space-specific resources. Users can view, edit, or manage artifacts based on their role privileges and the space context.
* You need the **Global artifact management** privilege to manage global artifacts (those not associated with specific policies).
-::::
+::::
::::{note}
{{elastic-sec}}'s space awareness works in conjunction with {{fleet}}'s space awareness. Space awareness is enabled by default in both applications, but for {{stack}} deployments that existed prior to version 9.1, {{fleet}} requires you to manually “opt-in” so that existing data can become space aware. For instructions, refer to [Enable space awareness in {{fleet}}](/deploy-manage/manage-spaces-fleet.md#spaces-fleet-enable).
@@ -27,7 +27,7 @@ This is a one-time migration that copies your existing {{fleet}} data into a new
## General FAQ [spaces-security-faq-general]
**What are spaces in {{kib}}, and how do they affect what I see?**
-Spaces allow your organization to segment data and configurations within {{kib}}. If you're working in a specific space, you’ll only see the policies, {{agents}}, endpoints, and data that belong to that space.
+Spaces allow your organization to segment data and configurations within {{kib}}. If you're working in a specific space, you'll only see the policies, {{agents}}, endpoints, and data that belong to that space.
**Does this matter to me if my organization doesn't use spaces?**
@@ -37,16 +37,16 @@ If your organization doesn't use spaces, the only thing you need to know is that
When you upgrade your {{stack}} deployment to 9.1.0, the **Global Artifact Management** privilege is automatically granted to any role that grants the **All** privilege to at least one artifact type.
-**How do I use spaces with {{elastic-agent}} and {{elastic-defend}}?**
+**How do I use spaces with {{agent}} and {{elastic-defend}}?**
-Spaces are defined at the {{kib}} level. Once a space is created, {{elastic-agent}} policies can be assigned to it. To do this, go to your list of agent policies in {{fleet}}, and select the policy you want to assign. Navigate to the **Settings** tab, find the **Spaces** section, and select the space(s) where you want the policy to appear.
+Spaces are defined at the {{kib}} level. Once a space is created, {{agent}} policies can be assigned to it. To do this, go to your list of agent policies in {{fleet}}, and select the policy you want to assign. Navigate to the **Settings** tab, find the **Spaces** section, and select the space(s) where you want the policy to appear.
-Once assigned, the {{agents}}—and {{elastic-defend}} endpoints, if applicable—associated with this policy are visible and manageable only within the designated space(s).
+Once assigned, the {{agents}}—and {{elastic-defend}} endpoints, if applicable—associated with this policy are visible and manageable only within the designated space(s).
**Can artifacts be assigned to multiple spaces?**
-Yes, {{elastic-agent}} policies and all associated artifacts can be assigned to more than one space.
+Yes, {{agent}} policies and all associated artifacts can be assigned to more than one space.
## {{elastic-defend}} policies [spaces-security-faq-defend-policies]
@@ -76,7 +76,7 @@ Global artifacts are space agnostic and appear in all spaces.
Users can assign artifacts to any policies they have access to within their assigned space.
-When an artifact entry is created within a space, it is owned by that space. To edit or delete the artifact, you must either be in the owning space or have **Global artifact management** privileges.
+When an artifact entry is created within a space, it is owned by that space. To edit or delete the artifact, you must either be in the owning space or have **Global artifact management** privileges.
**What happens if my policy uses an artifact owned by a space I don't have access to?**
@@ -104,7 +104,7 @@ This information is not currently visible in the Kibana UI. It is, however, avai
**How does RBAC work for artifacts assigned to a particular space?**
-Specific {{kib}} privileges for each artifact type—such as event filters or trusted applications—allow you to manage (create, edit, delete, and assign) those artifact types globally or per policy, but only for policies within the spaces you have access to. These artifact types include:
+Specific {{kib}} privileges for each artifact type—such as event filters or trusted applications—allow you to manage (create, edit, delete, and assign) those artifact types globally or per policy, but only for policies within the spaces you have access to. These artifact types include:
* Trusted applications
* Host isolation exceptions
@@ -137,7 +137,7 @@ When a space is deleted, artifacts that were previously created from the deleted
**How do spaces impact response actions?**
-Response actions for both {{elastic-defend}} and third-party EDR solutions are associated with the {{fleet}} integration policy that's connected to the {{elastic-agent}} that executed the response action. A user authorized to view the response actions history log can only view items associated with integration policies that are accessible in the active space. If you share an integration policy with a new space, the associated response actions will automatically become visible in that space. There are some conditions that can result in response action history not being accessible by default—we call these ["orphan” response actions](#spaces-security-faq-orphan-response-actions).
+Response actions for both {{elastic-defend}} and third-party EDR solutions are associated with the {{fleet}} integration policy that's connected to the {{agent}} that executed the response action. A user authorized to view the response actions history log can only view items associated with integration policies that are accessible in the active space. If you share an integration policy with a new space, the associated response actions will automatically become visible in that space. There are some conditions that can result in response action history not being accessible by default—we call these ["orphan" response actions](#spaces-security-faq-orphan-response-actions).
**How are response actions visible across spaces?**
@@ -193,7 +193,7 @@ To use this API, you need {{kib}}'s built-in `superuser` role.
API call:
`GET /internal/api/endpoint/action/_orphan_actions_space`
-Response:
+Response:
```
{
"data": {
@@ -227,9 +227,9 @@ To remove the space ID where orphan response actions appear, call the API with a
## Endpoint protection rules [spaces-security-faq-endpoint-protection-rules]
-By default, [endpoint protection rules](/solutions/security/manage-elastic-defend/endpoint-protection-rules.md) use an index pattern that may be too broad for use in a particular space. In order to ensure that the space only shows the desired data in that space, you may need to customize the rule. For example, the Endpoint Security ({{elastic-defend}}) rule has an index pattern that picks up all data sent to `logs-endpoint.alerts-*`. This index pattern would pick up all events sent by {{elastic-defend}}, which may not be desirable.
+By default, [endpoint protection rules](/solutions/security/manage-elastic-defend/endpoint-protection-rules.md) use an index pattern that may be too broad for use in a particular space. To ensure that only the desired data appears in that space, you may need to customize the rule. For example, the Endpoint Security ({{elastic-defend}}) rule has an index pattern that picks up all data sent to `logs-endpoint.alerts-*`. This index pattern would pick up all events sent by {{elastic-defend}}, which may not be desirable.
-One option in this situation is to add a rule exception that ensures that only data with a `data_stream.namespace` that matches the namespace defined in the {{fleet}} policy that contains the applicable {{elastic-defend}} integration policies. {{fleet}} allows you to configure a space to restrict which namespace values can apply to policies, which can help you manage rules when new {{fleet}} policies are created, or existing policies are updated. Existing rules would not have to be adjusted to keep the namespace values in sync.
+One option in this situation is to add a rule exception that ensures the rule only matches data whose `data_stream.namespace` matches the namespace defined in the {{fleet}} policy that contains the applicable {{elastic-defend}} integration policies. {{fleet}} allows you to configure a space to restrict which namespace values can apply to policies, which can help you manage rules when new {{fleet}} policies are created, or existing policies are updated. Existing rules would not have to be adjusted to keep the namespace values in sync.
**What happens to protection rules when a policy is shared with or moved to a new space?**
diff --git a/solutions/security/investigate/manage-integration.md b/solutions/security/investigate/manage-integration.md
index ec90776cba..d3b147f442 100644
--- a/solutions/security/investigate/manage-integration.md
+++ b/solutions/security/investigate/manage-integration.md
@@ -99,7 +99,7 @@ The [Osquery version](https://github.com/osquery/osquery/releases) available on
## Debug issues [_debug_issues]
-If you encounter issues with **Osquery Manager**, find the relevant logs for {{elastic-agent}} and Osquerybeat in the agent directory. Refer to the [Fleet Installation layout](/reference/fleet/installation-layout.md) to find the log file location for your OS.
+If you encounter issues with **Osquery Manager**, find the relevant logs for {{agent}} and Osquerybeat in the agent directory. Refer to the [Fleet Installation layout](/reference/fleet/installation-layout.md) to find the log file location for your OS.
```ts
../data/elastic-agent-*/logs/elastic-agent-json.log-*
diff --git a/solutions/security/manage-elastic-defend/elastic-endpoint-self-protection-features.md b/solutions/security/manage-elastic-defend/elastic-endpoint-self-protection-features.md
index 23a6c4c607..04ba91afac 100644
--- a/solutions/security/manage-elastic-defend/elastic-endpoint-self-protection-features.md
+++ b/solutions/security/manage-elastic-defend/elastic-endpoint-self-protection-features.md
@@ -41,4 +41,4 @@ For {{stack}} version >= 7.11.0, self-protection defines the following permissio
* Users — even Administrator/root — **cannot** delete {{elastic-endpoint}} files (located at `c:\Program Files\Elastic\Endpoint` on Windows, and `/Library/Elastic/Endpoint` on macOS).
* Users **cannot** terminate the {{elastic-endpoint}} program or service.
* Administrator/root users **can** read {{elastic-endpoint}}'s files. On Windows, the easiest way to read {{elastic-endpoint}} files is to start an Administrator `cmd.exe` prompt. On macOS, an Administrator can use the `sudo` command.
-* Administrator/root users **can** stop the {{elastic-agent}}'s service. On Windows, run the `sc stop "Elastic Agent"` command. On macOS, run the `sudo launchctl stop elastic-agent` command.
+* Administrator/root users **can** stop the {{agent}}'s service. On Windows, run the `sc stop "Elastic Agent"` command. On macOS, run the `sudo launchctl stop elastic-agent` command.
diff --git a/troubleshoot/elasticsearch/increase-tier-capacity.md b/troubleshoot/elasticsearch/increase-tier-capacity.md
index 3c1c9d1696..a3492763e3 100644
--- a/troubleshoot/elasticsearch/increase-tier-capacity.md
+++ b/troubleshoot/elasticsearch/increase-tier-capacity.md
@@ -119,7 +119,7 @@ If it is not possible to increase the size per zone or the number of availabilit
himrst
```
- You can count the rows containing the letter representing the target tier to know how many nodes you have. See [{{api-query-parms-title}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier.
+ You can count the rows containing the letter representing the target tier to know how many nodes you have. See [Query parameters](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier.
4. [Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1.
@@ -197,7 +197,7 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec
himrst
```
- You can count the rows containing the letter representing the target tier to know how many nodes you have. See [{{api-query-parms-title}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier.
+ You can count the rows containing the letter representing the target tier to know how many nodes you have. See [Query parameters](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier.
3. [Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1.
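The node-counting step above can be sketched in a few lines of Python (a non-authoritative illustration; the sample `_cat/nodes` output follows the example in this section):

```python
# Count tier nodes from `GET /_cat/nodes?h=node.role` output.
# Each row lists one node's roles; 'h' marks a hot-tier node.
cat_nodes_output = "himrst\nhimrst"  # sample output from the doc

def count_tier_nodes(cat_output: str, tier_letter: str) -> int:
    """Count rows whose role column contains the tier's letter."""
    return sum(
        tier_letter in line
        for line in cat_output.splitlines()
        if line.strip()
    )

hot_nodes = count_tier_nodes(cat_nodes_output, "h")
# Replicas cannot share a node with their primary, so the maximum
# usable value for index.number_of_replicas is nodes-in-tier minus one.
max_replicas = max(hot_nodes - 1, 0)
```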
diff --git a/troubleshoot/elasticsearch/transform-troubleshooting.md b/troubleshoot/elasticsearch/transform-troubleshooting.md
index 265eb54dd3..fe79cca7f4 100644
--- a/troubleshoot/elasticsearch/transform-troubleshooting.md
+++ b/troubleshoot/elasticsearch/transform-troubleshooting.md
@@ -21,10 +21,10 @@ Use the information in this section to troubleshoot common problems.
If you can't find your issue here, explore the other [troubleshooting topics](/troubleshoot/index.md) or [contact us](/troubleshoot/index.md#contact-us).
:::
-If you encounter problems with your {{transforms}}, you can gather more information from the following files and APIs:
+If you encounter problems with your transforms, you can gather more information from the following files and APIs:
* Lightweight audit messages are stored in `.transform-notifications-read`. Search by your `transform_id`.
-* The [get {{transform}} statistics API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform-stats) provides information about the {{transform}} status and failures.
-* If the {{transform}} exists as a task, you can use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks) to gather task information. For example: `GET _tasks?actions=data_frame/transforms*&detailed`. Typically, the task exists when the {{transform}} is in a started or failed state.
-* The {{es}} logs from the node that was running the {{transform}} might also contain useful information. You can identify the node from the notification messages. Alternatively, if the task still exists, you can get that information from the get {{transform}} statistics API. For more information, see [*Elasticsearch application logging*](../../deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md).
+* The [get transform statistics API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform-stats) provides information about the transform status and failures.
+* If the transform exists as a task, you can use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks) to gather task information. For example: `GET _tasks?actions=data_frame/transforms*&detailed`. Typically, the task exists when the transform is in a started or failed state.
+* The {{es}} logs from the node that was running the transform might also contain useful information. You can identify the node from the notification messages. Alternatively, if the task still exists, you can get that information from the get transform statistics API. For more information, see [*Elasticsearch application logging*](../../deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md).
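+As a sketch, the three lookups above could be assembled into a small diagnostic script (the base URL and transform ID are assumptions, not from this page; the endpoints follow the APIs named above):
+
+```python
+# Assemble the diagnostic lookups described above. ES_URL and
+# TRANSFORM_ID are hypothetical; the endpoints follow the doc.
+ES_URL = "http://localhost:9200"
+TRANSFORM_ID = "my-transform"
+
+diagnostic_requests = [
+    # 1. Audit messages, searched by transform_id:
+    f"{ES_URL}/.transform-notifications-read/_search?q=transform_id:{TRANSFORM_ID}",
+    # 2. Transform status and failures (get transform statistics API):
+    f"{ES_URL}/_transform/{TRANSFORM_ID}/_stats",
+    # 3. Task information, if the transform exists as a task:
+    f"{ES_URL}/_tasks?actions=data_frame/transforms*&detailed",
+]
+```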
diff --git a/troubleshoot/ingest/fleet/common-problems.md b/troubleshoot/ingest/fleet/common-problems.md
index 4f49d8246e..85af96f7e3 100644
--- a/troubleshoot/ingest/fleet/common-problems.md
+++ b/troubleshoot/ingest/fleet/common-problems.md
@@ -10,7 +10,7 @@ products:
- id: elastic-agent
---
-# Common problems with {{fleet}} and {{elastic-agent}} [fleet-troubleshooting]
+# Common problems with {{fleet}} and {{agent}} [fleet-troubleshooting]
We have collected the most common known problems and listed them here. If your problem is not described here, review the open issues in the following GitHub repositories:
diff --git a/troubleshoot/ingest/fleet/fleet-elastic-agent.md b/troubleshoot/ingest/fleet/fleet-elastic-agent.md
index f1f13f3ae5..ddc091ccd1 100644
--- a/troubleshoot/ingest/fleet/fleet-elastic-agent.md
+++ b/troubleshoot/ingest/fleet/fleet-elastic-agent.md
@@ -10,7 +10,7 @@ products:
- id: elastic-agent
---
-# Troubleshoot {{fleet}} and {{elastic-agent}} [troubleshooting-intro]
+# Troubleshoot {{fleet}} and {{agent}} [troubleshooting-intro]
This section provides an [FAQ](frequently-asked-questions.md) and [troubleshooting](common-problems.md) tips to help you resolve common problems with {{fleet}} and {{agent}}.
diff --git a/troubleshoot/ingest/fleet/frequently-asked-questions.md b/troubleshoot/ingest/fleet/frequently-asked-questions.md
index 9658c962a0..929d268067 100644
--- a/troubleshoot/ingest/fleet/frequently-asked-questions.md
+++ b/troubleshoot/ingest/fleet/frequently-asked-questions.md
@@ -87,7 +87,7 @@ The `state.yml` file (located under `data/elastic-agent-*`) contains the entire,
## Why can’t I see the data {{agent}} is sending? [where-is-the-data-agent-is-sending]
-If {{elastic-agent}} is set up and running, but you don’t see data in {{kib}}:
+If {{agent}} is set up and running, but you don’t see data in {{kib}}:
1. Go to **Management > {{dev-tools-app}}** in {{kib}}, and in the Console, search your index for data. For example:
@@ -97,7 +97,7 @@ If {{elastic-agent}} is set up and running, but you don’t see data in {{kib}}:
Or if you prefer, go to the **Discover** app.
-2. Look at the data that {{elastic-agent}} has sent and see if the `name.host` field contains your host machine name.
+2. Look at the data that {{agent}} has sent and see if the `host.name` field contains your host machine name.
If you don’t see data for your host, it’s possible that the data is blocked in the network, or that a firewall or security problem is preventing the {{agent}} from sending the data.
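+For example, the following Console request (with a placeholder index pattern and hostname) checks whether any documents from your host have arrived:
+
+```console
+# Replace the index pattern and hostname with your own values
+GET logs-*/_search
+{
+  "size": 1,
+  "query": { "term": { "host.name": "my-hostname" } }
+}
+```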
diff --git a/troubleshoot/observability/troubleshoot-mapping-issues.md b/troubleshoot/observability/troubleshoot-mapping-issues.md
index 872f2bf922..8533abca46 100644
--- a/troubleshoot/observability/troubleshoot-mapping-issues.md
+++ b/troubleshoot/observability/troubleshoot-mapping-issues.md
@@ -11,28 +11,28 @@ products:
# Troubleshoot Uptime mapping issues [troubleshoot-uptime-mapping-issues]
-## Mapping issues [_mapping_issues]
+## Mapping issues [_mapping_issues]
-There are situations in which {{heartbeat}} data can be indexed without the correct mappings applied. These situations cannot occur with the {{elastic-agent}} configured via {{fleet}}, only with standalone {{heartbeat}} or {{elastic-agent}} running in standalone mode. This can occur when the underlying `heartbeat-VERSION` {{ilm-init}} alias is deleted manually or when {{heartbeat}} writes data through an intermediary such as {{ls}} without the `setup` command being run. When running {{elastic-agent}} in standalone mode this can happen if manually setup data streams have incorrect mappings.
+There are situations in which {{heartbeat}} data can be indexed without the correct mappings applied. These situations cannot occur with the {{agent}} configured via {{fleet}}, only with standalone {{heartbeat}} or {{agent}} running in standalone mode. This can occur when the underlying `heartbeat-VERSION` {{ilm-init}} alias is deleted manually or when {{heartbeat}} writes data through an intermediary such as {{ls}} without the `setup` command being run. When running {{agent}} in standalone mode, this can happen if manually created data streams have incorrect mappings.
To fix this problem, you typically need to remove your {{heartbeat}} indices and data streams. Then you must create new ones with the appropriate mappings installed. To achieve this, follow the steps below.
-### Stop your {{heartbeat}}/{{elastic-agent}} instances [_stop_your_heartbeatelastic_agent_instances]
+### Stop your {{heartbeat}}/{{agent}} instances [_stop_your_heartbeatelastic_agent_instances]
-It is necessary to stop all {{heartbeat}}/{{elastic-agent}} instances that are targeting the cluster, so they will not write to or re-create indices prematurely.
+It is necessary to stop all {{heartbeat}}/{{agent}} instances that are targeting the cluster, so they will not write to or re-create indices prematurely.
-### Delete your {{heartbeat}} indices / {{elastic-agent}} data streams [_delete_your_heartbeat_indices_elastic_agent_data_streams]
+### Delete your {{heartbeat}} indices / {{agent}} data streams [_delete_your_heartbeat_indices_elastic_agent_data_streams]
To ensure the mapping is applied to all {{heartbeat}} data going forward, delete all the {{heartbeat}} indices that match the pattern the {{uptime-app}} will use.
There are multiple ways to achieve this. You can read about performing this using the [Index Management UI](/manage-data/data-store/index-basics.md#index-management) or with the [Delete index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete).
-If using {{elastic-agent}} you will want to fix any issues with custom data stream mappings. We encourage the use of {{fleet}} to eliminate this issue.
+If using {{agent}}, you will want to fix any issues with custom data stream mappings. We encourage the use of {{fleet}} to eliminate this issue.
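+For example, assuming default index and data stream names, the deletion might look like this in Console:
+
+```console
+# Standalone {{heartbeat}}: delete its indices (wildcard deletes may be
+# blocked by the `action.destructive_requires_name` cluster setting)
+DELETE heartbeat-*
+
+# Standalone {{agent}}: delete the synthetics data streams, assuming the
+# default `synthetics-*` naming
+DELETE _data_stream/synthetics-*
+```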
-### If using {{heartbeat}}, perform {{heartbeat}} setup [_if_using_heartbeat_perform_heartbeat_setup]
+### If using {{heartbeat}}, perform {{heartbeat}} setup [_if_using_heartbeat_perform_heartbeat_setup]
The below command will cause {{heartbeat}} to perform its setup processes and recreate the index template properly.
@@ -45,7 +45,7 @@ For more information on how to use this command, or if you’re using DEB, RPM,
This command performs the necessary startup tasks and ensures that your indices have the appropriate mapping going forward.
-### Run {{heartbeat}}/{{elastic-agent}} again [_run_heartbeatelastic_agent_again]
+### Run {{heartbeat}}/{{agent}} again [_run_heartbeatelastic_agent_again]
-Now, when you run {{heartbeat}}/{{elastic-agent}}, your data will be indexed with the appropriate mappings. When the {{uptime-app}} attempts to fetch your data, it should be able to render without issues.
+Now, when you run {{heartbeat}}/{{agent}}, your data will be indexed with the appropriate mappings. When the {{uptime-app}} attempts to fetch your data, it should be able to render without issues.