Commit 5325326

remove transform, transforms, transform-cap, transforms-cap subs
1 parent 5646034 commit 5325326

File tree: 24 files changed (+261, -265 lines)

deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md

Lines changed: 7 additions & 7 deletions

@@ -41,13 +41,13 @@ Some {{stack}} features also require specific node roles:
 
 * {{ccs-cap}} and {{ccr}} require the `remote_cluster_client` role.
 * {{stack-monitor-app}} and ingest pipelines require the `ingest` role.
-* {{fleet}}, the {{security-app}}, and {{transforms}} require the `transform` role. The `remote_cluster_client` role is also required to use {{ccs}} with these features.
+* {{fleet}}, the {{security-app}}, and transforms require the `transform` role. The `remote_cluster_client` role is also required to use {{ccs}} with these features.
 * {{ml-cap}} features, such as {{anomaly-detect}}, require the `ml` role.
 
 ::::
 
-As the cluster grows and in particular if you have large {{ml}} jobs or {{ctransforms}}, consider separating dedicated master-eligible nodes from dedicated data nodes, {{ml}} nodes, and {{transform}} nodes.
+As the cluster grows and in particular if you have large {{ml}} jobs or {{ctransforms}}, consider separating dedicated master-eligible nodes from dedicated data nodes, {{ml}} nodes, and transform nodes.
 
 ## Change the role of a node [change-node-role]

@@ -82,7 +82,7 @@ The following is a list of the roles that a node can perform in a cluster. A nod
 * [Ingest node](#node-ingest-node) (`ingest`): Ingest nodes are able to apply an [ingest pipeline](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md) to a document in order to transform and enrich the document before indexing. With a heavy ingest load, it makes sense to use dedicated ingest nodes and to not include the `ingest` role from nodes that have the `master` or `data` roles.
 * [Remote-eligible node](#remote-node) (`remote_cluster_client`): A node that is eligible to act as a remote client.
 * [Machine learning node](#ml-node-role) (`ml`): A node that can run {{ml-features}}. If you want to use {{ml-features}}, there must be at least one {{ml}} node in your cluster. For more information, see [Machine learning settings](../../deploy/self-managed/configure-elasticsearch.md) and [Machine learning in the {{stack}}](/explore-analyze/machine-learning.md).
-* [{{transform-cap}} node](#transform-node-role) (`transform`): A node that can perform {{transforms}}. If you want to use {{transforms}}, there must be at least one {{transform}} node in your cluster. For more information, see [{{transforms-cap}} settings](../../deploy/self-managed/configure-elasticsearch.md) and [*Transforming data*](../../../explore-analyze/transforms.md).
+* [Transform node](#transform-node-role) (`transform`): A node that can perform transforms. If you want to use transforms, there must be at least one transform node in your cluster. For more information, see [Transforms settings](../../deploy/self-managed/configure-elasticsearch.md) and [*Transforming data*](../../../explore-analyze/transforms.md).
 
 ::::{admonition} Coordinating node
 :class: note

@@ -299,15 +299,15 @@ node.roles: [ ml, remote_cluster_client]
 The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in {{ml}} jobs or {{dfeeds}}. If you use {{ccs}} in your {{anomaly-jobs}}, the `remote_cluster_client` role is also required on all master-eligible nodes. Otherwise, the {{dfeed}} cannot start. See [Remote-eligible node](#remote-node).
 
-### {{transform-cap}} node [transform-node-role]
+### Transform node [transform-node-role]
 
-{{transform-cap}} nodes run {{transforms}} and handle {{transform}} API requests. For more information, see [{{transforms-cap}} settings](../../deploy/self-managed/configure-elasticsearch.md).
+Transform nodes run transforms and handle transform API requests. For more information, see [Transforms settings](../../deploy/self-managed/configure-elasticsearch.md).
 
-To create a dedicated {{transform}} node, set:
+To create a dedicated transform node, set:
 
 ```yaml
 node.roles: [ transform, remote_cluster_client ]
 ```
 
-The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in {{transforms}}. See [Remote-eligible node](#remote-node).
+The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in transforms. See [Remote-eligible node](#remote-node).
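Not part of the diff, but a quick way to confirm a role change like the dedicated transform node above took effect is the `_cat/nodes` API, which can list each node's roles. A minimal sketch (no assumptions beyond a running {{es}} cluster):

```console
GET _cat/nodes?v=true&h=name,node.roles
```

A dedicated transform node shows `rt` (or `transform,remote_cluster_client` with expanded output) in the `node.roles` column.
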
deploy-manage/remote-clusters/remote-clusters-migrate.md

Lines changed: 2 additions & 2 deletions

@@ -109,7 +109,7 @@ On the remote cluster:
 
 On the local cluster, stop any persistent tasks that refer to the remote cluster:
 
-* Use the [Stop {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) API to stop any transforms.
+* Use the [Stop transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) API to stop any transforms.
 * Use the [Close jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-close-job) API to close any anomaly detection jobs.
 * Use the [Pause auto-follow pattern](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-pause-auto-follow-pattern) API to pause any auto-follow {{ccr}}.
 * Use the [Pause follower](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-pause-follow) API to pause any manual {{ccr}} or existing indices that were created from the auto-follow pattern.

@@ -218,7 +218,7 @@ On the local cluster:
 
 Resume any persistent tasks that you stopped earlier. Tasks should be restarted by the same user or API key that created the task before the migration. Ensure the roles of this user or API key have been updated with the required `remote_indices` or `remote_cluster` privileges. For users, tasks capture the caller’s credentials when started and run in that user’s security context. For API keys, restarting a task will update the task with the updated API key.
 
-* Use the [Start {{transform}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API to start any transforms.
+* Use the [Start transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API to start any transforms.
 * Use the [Open jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-open-job) API to open any anomaly detection jobs.
 * Use the [Resume follower](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-resume-follow) API to resume any auto-follow {{ccr}}.
 * Use the [Resume auto-follow pattern](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-resume-auto-follow-pattern) API to resume any manual {{ccr}} or existing indices that were created from the auto-follow pattern.
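As an illustrative sketch of the stop/start transform calls listed in this file's diff (the transform name `my-transform` is hypothetical):

```console
# Before migrating: stop the transform
POST _transform/my-transform/_stop?wait_for_completion=true

# After migrating: restart it under the updated credentials or API key
POST _transform/my-transform/_start
```
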

docset.yml

Lines changed: 0 additions & 4 deletions

@@ -222,10 +222,6 @@ subs:
   watcher-transform: "payload transform"
   watcher-transforms: "payload transforms"
   watcher-transforms-cap: "Payload transforms"
-  transform: "transform"
-  transforms: "transforms"
-  transform-cap: "Transform"
-  transforms-cap: "Transforms"
   dfanalytics-cap: "Data frame analytics"
   dfanalytics: "data frame analytics"
   dfanalytics-job: "data frame analytics analytics job"

explore-analyze/alerts-cases/alerts/rule-types.md

Lines changed: 1 addition & 1 deletion

@@ -25,7 +25,7 @@ Some rule types are subscription features, while others are free features. For a
 | --- | --- |
 | [{{es}} query](rule-type-es-query.md) | Run a user-configured {{es}} query, compare the number of matches to a configured threshold, and schedule actions to run when the threshold condition is met. |
 | [Index threshold](rule-type-index-threshold.md) | Aggregate field values from documents using {{es}} queries, compare them to threshold values, and schedule actions to run when the thresholds are met. |
-| [{{transform-cap}} rules](../../transforms/transform-alerts.md) | {applies_to}`stack: beta` {applies_to}`serverless: beta` Run scheduled checks on a {{ctransform}} to check its health. If a {{ctransform}} meets the conditions, an alert is created and the associated action is triggered. |
+| [Transform rules](../../transforms/transform-alerts.md) | {applies_to}`stack: beta` {applies_to}`serverless: beta` Run scheduled checks on a {{ctransform}} to check its health. If a {{ctransform}} meets the conditions, an alert is created and the associated action is triggered. |
 | [Tracking containment](geo-alerting.md) | Run an {{es}} query to determine if any documents are currently contained in any boundaries from a specified boundary index and generate alerts when a rule’s conditions are met. |
 
 ## {{observability}} rules [observability-rules]

explore-analyze/machine-learning/anomaly-detection/ml-ad-troubleshooting.md

Lines changed: 1 addition & 1 deletion

@@ -69,7 +69,7 @@ It’s an online model and updated continuously. Old parts of the model are prun
 There is a set of benchmarks to monitor the performance of the {{anomaly-detect}} algorithms and to ensure no regression occurs as the methods are continuously developed and refined. They are called "data scenarios" and consist of 3 things:
 
 * a dataset (stored as an {{es}} snapshot),
-* a {{ml}} config ({{anomaly-detect}}, dfanalysis, {{transform}}, or {{infer}}),
+* a {{ml}} config ({{anomaly-detect}}, dfanalysis, transform, or {{infer}}),
 * an arbitrary set of static assertions (bucket counts, anomaly scores, accuracy value, and so on).
 
 Performance metrics are collected from each and every scenario run and they are persisted in an Elastic Cloud cluster. This information is then used to track the performance over time, across the different builds, mainly to detect any regressions in the performance (both result quality and compute time).

explore-analyze/machine-learning/data-frame-analytics.md

Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ products:
 # Data frame analytics [ml-dfanalytics]
 
 ::::{important}
-Using {{dfanalytics}} requires source data to be structured as a two dimensional "tabular" data structure, in other words a {{dataframe}}. [{{transforms-cap}}](../transforms.md) enable you to create {{dataframes}} which can be used as the source for {{dfanalytics}}.
+Using {{dfanalytics}} requires source data to be structured as a two dimensional "tabular" data structure, in other words a {{dataframe}}. [Transforms](../transforms.md) enable you to create {{dataframes}} which can be used as the source for {{dfanalytics}}.
 ::::
 
 {{dfanalytics-cap}} enable you to perform different analyses of your data and annotate it with the results. Consult [Setup and security](setting-up-machine-learning.md) to learn more about the license and the security privileges that are required to use {{dfanalytics}}.

explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md

Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ Before you can use the {{stack-ml-features}}, there are some configuration requi
 
 If possible, prepare your input data such that it has less classes. A {{classanalysis}} with many classes takes more time to run than a binary {{classification}} job. The relationship between the number of classes and the runtime is roughly linear.
 
-You might also need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{classification}}.
+You might also need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{classification}}.
 
 To learn more about how to prepare your data, refer to [the relevant section](ml-dfa-overview.md#prepare-transform-data) of the supervised learning overview.
explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md

Lines changed: 12 additions & 12 deletions

@@ -45,7 +45,7 @@ Before you can use the {{stack-ml-features}}, there are some configuration requi
 
 ## 3. Prepare and transform data [dfa-outlier-detection-prepare-data]
 
-{{oldetection-cap}} requires specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{oldetection}}.
+{{oldetection-cap}} requires specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{oldetection}}.
 
 You can find an example of how to transform your data into an entity-centric index in [this section](#weblogs-outliers).

@@ -116,17 +116,17 @@ The evaluate {{dfanalytics}} API can return the false positive rate (`fpr`) and
 
 The goal of {{oldetection}} is to find the most unusual documents in an index. Let’s try to detect unusual behavior in the [data logs sample data set](../../index.md#gs-get-data-into-kibana).
 
-1. Verify that your environment is set up properly to use {{ml-features}}. If the {{es}} {{security-features}} are enabled, you need a user that has authority to create and manage {{dfanalytics}} jobs. See [Setup and security](../setting-up-machine-learning.md). Since we’ll be creating {{transforms}}, you also need `manage_data_frame_transforms` cluster privileges.
+1. Verify that your environment is set up properly to use {{ml-features}}. If the {{es}} {{security-features}} are enabled, you need a user that has authority to create and manage {{dfanalytics}} jobs. See [Setup and security](../setting-up-machine-learning.md). Since we’ll be creating transforms, you also need `manage_data_frame_transforms` cluster privileges.
 
-2. Create a {{transform}} that generates an entity-centric index with numeric or boolean data to analyze.
+2. Create a transform that generates an entity-centric index with numeric or boolean data to analyze.
    In this example, we’ll use the web logs sample data and pivot the data such that we get a new index that contains a network usage summary for each client IP.
-   In particular, create a {{transform}} that calculates the number of occasions when a specific client IP communicated with the network (`@timestamp.value_count`), the sum of the bytes that are exchanged between the network and the client’s machine (`bytes.sum`), the maximum exchanged bytes during a single occasion (`bytes.max`), and the total number of requests (`request.value_count`) initiated by a specific client IP.
-   You can preview the {{transform}} before you create it. Go to the **Transforms** page in the main menu or by using the [global search field](../../find-and-organize/find-apps-and-objects.md) in {{kib}}.:
+   In particular, create a transform that calculates the number of occasions when a specific client IP communicated with the network (`@timestamp.value_count`), the sum of the bytes that are exchanged between the network and the client’s machine (`bytes.sum`), the maximum exchanged bytes during a single occasion (`bytes.max`), and the total number of requests (`request.value_count`) initiated by a specific client IP.
+   You can preview the transform before you create it. Go to the **Transforms** page in the main menu or by using the [global search field](../../find-and-organize/find-apps-and-objects.md) in {{kib}}.:
    :::{image} /explore-analyze/images/machine-learning-logs-transform-preview.jpg
-   :alt: Creating a {{transform}} in {{kib}}
+   :alt: Creating a transform in {{kib}}
   :screenshot:
   :::
-   Alternatively, you can use the [preview {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
+   Alternatively, you can use the [preview transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
 
 ::::{dropdown} API example

@@ -218,15 +218,15 @@ POST _transform/_preview
 
 ::::
 
-For more details about creating {{transforms}}, see [Transforming the eCommerce sample data](../../transforms/ecommerce-transforms.md).
+For more details about creating transforms, see [Transforming the eCommerce sample data](../../transforms/ecommerce-transforms.md).
 
-3. Start the {{transform}}.
+3. Start the transform.
 
 ::::{tip}
-Even though resource utilization is automatically adjusted based on the cluster load, a {{transform}} increases search and indexing load on your cluster while it runs. If you’re experiencing an excessive load, however, you can stop it.
+Even though resource utilization is automatically adjusted based on the cluster load, a transform increases search and indexing load on your cluster while it runs. If you’re experiencing an excessive load, however, you can stop it.
 ::::
 
-You can start, stop, and manage {{transforms}} in {{kib}}. Alternatively, you can use the [start {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API.
+You can start, stop, and manage transforms in {{kib}}. Alternatively, you can use the [start transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API.
 
 ::::{dropdown} API example

@@ -352,7 +352,7 @@ GET weblog-outliers/_search?q="111.237.144.54"
 Now that you’ve found unusual behavior in the sample data set, consider how you might apply these steps to other data sets. If you have data that is already marked up with true outliers, you can determine how well the {{oldetection}} algorithms perform by using the evaluate {{dfanalytics}} API. See [6. Evaluate the results](#ml-outlier-detection-evaluate).
 
 ::::{tip}
-If you do not want to keep the {{transform}} and the {{dfanalytics}} job, you can delete them in {{kib}} or use the [delete {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform) and [delete {{dfanalytics}} job API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-delete-data-frame-analytics). When you delete {{transforms}} and {{dfanalytics}} jobs in {{kib}}, you have the option to also remove the destination indices and {{data-sources}}.
+If you do not want to keep the transform and the {{dfanalytics}} job, you can delete them in {{kib}} or use the [delete transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform) and [delete {{dfanalytics}} job API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-delete-data-frame-analytics). When you delete transforms and {{dfanalytics}} jobs in {{kib}}, you have the option to also remove the destination indices and {{data-sources}}.
 ::::
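For context, the pivot described in step 2 of this file (counts, sums, and maxima per client IP) can be sketched roughly as the following preview request, assuming the web logs sample data field names (`clientip`, `timestamp`, `bytes`, `request.keyword`):

```console
POST _transform/_preview
{
  "source": { "index": "kibana_sample_data_logs" },
  "pivot": {
    "group_by": {
      "clientip": { "terms": { "field": "clientip" } }
    },
    "aggregations": {
      "@timestamp.value_count": { "value_count": { "field": "timestamp" } },
      "bytes.sum": { "sum": { "field": "bytes" } },
      "bytes.max": { "max": { "field": "bytes" } },
      "request.value_count": { "value_count": { "field": "request.keyword" } }
    }
  }
}
```
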

 ## Further reading [outlier-detection-reading]

explore-analyze/machine-learning/data-frame-analytics/ml-dfa-overview.md

Lines changed: 2 additions & 2 deletions

@@ -58,7 +58,7 @@ An important requirement is a data set that is large enough to train a model. Fo
 
 Before you train the model, consider preprocessing the data. In practice, the type of preprocessing depends on the nature of the data set. Preprocessing can include, but is not limited to, mitigating redundancy, reducing biases, applying standards and/or conventions, data normalization, and so on.
 
-{{regression-cap}} and {{classification}} require specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for these types of {{dfanalytics}}.
+{{regression-cap}} and {{classification}} require specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for these types of {{dfanalytics}}.
 
 ### Train, test, iterate [train-test-iterate]

@@ -76,7 +76,7 @@ Once the model is trained, you can evaluate how well it predicts previously unse
 
 You have trained the model and are satisfied with the performance. The last step is to deploy your trained model and start using it on new data.
 
-The Elastic {{ml}} feature called {{infer}} enables you to make predictions for new data either by using it as a processor in an ingest pipeline, in a continuous {{transform}} or as an aggregation at search time. When new data comes into your ingest pipeline or you run a search on your data with an {{infer}} aggregation, the model is used to infer against the data and make predictions on it.
+The Elastic {{ml}} feature called {{infer}} enables you to make predictions for new data either by using it as a processor in an ingest pipeline, in a continuous transform or as an aggregation at search time. When new data comes into your ingest pipeline or you run a search on your data with an {{infer}} aggregation, the model is used to infer against the data and make predictions on it.
 
 ### Next steps [next-steps]
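The first deployment option that this file's diff mentions ({{infer}} as an ingest processor) can be sketched as follows; the pipeline name and `model_id` are hypothetical placeholders:

```console
PUT _ingest/pipeline/my_inference_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "my_trained_model",
        "target_field": "ml.inference"
      }
    }
  ]
}
```

Documents indexed through this pipeline would then carry the model's predictions under `ml.inference`.
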