deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md (7 additions, 7 deletions)
@@ -41,13 +41,13 @@ Some {{stack}} features also require specific node roles:
 * {{ccs-cap}} and {{ccr}} require the `remote_cluster_client` role.
 * {{stack-monitor-app}} and ingest pipelines require the `ingest` role.
-* {{fleet}}, the {{security-app}}, and {{transforms}} require the `transform` role. The `remote_cluster_client` role is also required to use {{ccs}} with these features.
+* {{fleet}}, the {{security-app}}, and transforms require the `transform` role. The `remote_cluster_client` role is also required to use {{ccs}} with these features.
 * {{ml-cap}} features, such as {{anomaly-detect}}, require the `ml` role.
 ::::
-As the cluster grows and in particular if you have large {{ml}} jobs or {{ctransforms}}, consider separating dedicated master-eligible nodes from dedicated data nodes, {{ml}} nodes, and {{transform}} nodes.
+As the cluster grows and in particular if you have large {{ml}} jobs or {{ctransforms}}, consider separating dedicated master-eligible nodes from dedicated data nodes, {{ml}} nodes, and transform nodes.
 ## Change the role of a node [change-node-role]
@@ -82,7 +82,7 @@ The following is a list of the roles that a node can perform in a cluster. A nod
 *[Ingest node](#node-ingest-node) (`ingest`): Ingest nodes are able to apply an [ingest pipeline](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md) to a document in order to transform and enrich the document before indexing. With a heavy ingest load, it makes sense to use dedicated ingest nodes and to not include the `ingest` role from nodes that have the `master` or `data` roles.
 *[Remote-eligible node](#remote-node) (`remote_cluster_client`): A node that is eligible to act as a remote client.
 *[Machine learning node](#ml-node-role) (`ml`): A node that can run {{ml-features}}. If you want to use {{ml-features}}, there must be at least one {{ml}} node in your cluster. For more information, see [Machine learning settings](../../deploy/self-managed/configure-elasticsearch.md) and [Machine learning in the {{stack}}](/explore-analyze/machine-learning.md).
-*[{{transform-cap}} node](#transform-node-role) (`transform`): A node that can perform {{transforms}}. If you want to use {{transforms}}, there must be at least one {{transform}} node in your cluster. For more information, see [{{transforms-cap}} settings](../../deploy/self-managed/configure-elasticsearch.md) and [*Transforming data*](../../../explore-analyze/transforms.md).
+*[Transform node](#transform-node-role) (`transform`): A node that can perform transforms. If you want to use transforms, there must be at least one transform node in your cluster. For more information, see [Transforms settings](../../deploy/self-managed/configure-elasticsearch.md) and [*Transforming data*](../../../explore-analyze/transforms.md).
 The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in {{ml}} jobs or {{dfeeds}}. If you use {{ccs}} in your {{anomaly-jobs}}, the `remote_cluster_client` role is also required on all master-eligible nodes. Otherwise, the {{dfeed}} cannot start. See [Remote-eligible node](#remote-node).
-### {{transform-cap}} node [transform-node-role]
+### Transform node [transform-node-role]
-{{transform-cap}} nodes run {{transforms}} and handle {{transform}} API requests. For more information, see [{{transforms-cap}} settings](../../deploy/self-managed/configure-elasticsearch.md).
+Transform nodes run transforms and handle transform API requests. For more information, see [Transforms settings](../../deploy/self-managed/configure-elasticsearch.md).
-To create a dedicated {{transform}} node, set:
+To create a dedicated transform node, set:
 ```yaml
 node.roles: [ transform, remote_cluster_client ]
 ```
-The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in {{transforms}}. See [Remote-eligible node](#remote-node).
+The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in transforms. See [Remote-eligible node](#remote-node).
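As an illustrative sketch only, the dedicated transform node setting above is a single `elasticsearch.yml` line; the role names `transform` and `remote_cluster_client` are the documented values, and the code merely assembles that line:

```python
# Sketch: build the elasticsearch.yml line for a dedicated transform node.
# remote_cluster_client is optional but strongly recommended for {{ccs}}.
roles = ["transform", "remote_cluster_client"]
node_roles_line = f"node.roles: [ {', '.join(roles)} ]"
print(node_roles_line)  # node.roles: [ transform, remote_cluster_client ]
```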

deploy-manage/remote-clusters/remote-clusters-migrate.md (2 additions, 2 deletions)
@@ -109,7 +109,7 @@ On the remote cluster:
 On the local cluster, stop any persistent tasks that refer to the remote cluster:
-* Use the [Stop {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) API to stop any transforms.
+* Use the [Stop transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) API to stop any transforms.
 * Use the [Close jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-close-job) API to close any anomaly detection jobs.
 * Use the [Pause auto-follow pattern](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-pause-auto-follow-pattern) API to pause any auto-follow {{ccr}}.
 * Use the [Pause follower](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-pause-follow) API to pause any manual {{ccr}} or existing indices that were created from the auto-follow pattern.
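The four calls above map onto plain REST endpoints. A hedged sketch of the request paths, where `my-transform`, `my-job`, `my-pattern`, and `follower-index` are hypothetical placeholders rather than values from this guide:

```python
# Sketch: REST paths for the stop/close/pause steps above.
# All IDs below are hypothetical placeholders.
transform_id = "my-transform"
job_id = "my-job"
pattern_name = "my-pattern"
follower_index = "follower-index"

requests_to_send = [
    f"POST _transform/{transform_id}/_stop",        # stop transforms
    f"POST _ml/anomaly_detectors/{job_id}/_close",  # close anomaly detection jobs
    f"POST _ccr/auto_follow/{pattern_name}/pause",  # pause auto-follow pattern
    f"POST {follower_index}/_ccr/pause_follow",     # pause follower
]
```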
@@ -218,7 +218,7 @@ On the local cluster:
 Resume any persistent tasks that you stopped earlier. Tasks should be restarted by the same user or API key that created the task before the migration. Ensure the roles of this user or API key have been updated with the required `remote_indices` or `remote_cluster` privileges. For users, tasks capture the caller’s credentials when started and run in that user’s security context. For API keys, restarting a task updates the task with the new API key.
-* Use the [Start {{transform}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API to start any transforms.
+* Use the [Start transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API to start any transforms.
 * Use the [Open jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-open-job) API to open any anomaly detection jobs.
 * Use the [Resume follower](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-resume-follow) API to resume any manual {{ccr}} or existing indices that were created from the auto-follow pattern.
 * Use the [Resume auto-follow pattern](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-resume-auto-follow-pattern) API to resume any auto-follow {{ccr}}.

explore-analyze/alerts-cases/alerts/rule-types.md (1 addition, 1 deletion)
@@ -25,7 +25,7 @@ Some rule types are subscription features, while others are free features. For a
 | --- | --- |
 |[{{es}} query](rule-type-es-query.md)| Run a user-configured {{es}} query, compare the number of matches to a configured threshold, and schedule actions to run when the threshold condition is met. |
 |[Index threshold](rule-type-index-threshold.md)| Aggregate field values from documents using {{es}} queries, compare them to threshold values, and schedule actions to run when the thresholds are met. |
-|[{{transform-cap}} rules](../../transforms/transform-alerts.md)| {applies_to}`stack: beta` {applies_to}`serverless: beta` Run scheduled checks on a {{ctransform}} to check its health. If a {{ctransform}} meets the conditions, an alert is created and the associated action is triggered. |
+|[Transform rules](../../transforms/transform-alerts.md)| {applies_to}`stack: beta` {applies_to}`serverless: beta` Run scheduled checks on a {{ctransform}} to check its health. If a {{ctransform}} meets the conditions, an alert is created and the associated action is triggered. |
 |[Tracking containment](geo-alerting.md)| Run an {{es}} query to determine if any documents are currently contained in any boundaries from a specified boundary index and generate alerts when a rule’s conditions are met. |

explore-analyze/machine-learning/anomaly-detection/ml-ad-troubleshooting.md (1 addition, 1 deletion)
@@ -69,7 +69,7 @@ It’s an online model and updated continuously. Old parts of the model are prun
 There is a set of benchmarks to monitor the performance of the {{anomaly-detect}} algorithms and to ensure no regression occurs as the methods are continuously developed and refined. They are called "data scenarios" and consist of 3 things:
 * a dataset (stored as an {{es}} snapshot),
-* a {{ml}} config ({{anomaly-detect}}, dfanalysis, {{transform}}, or {{infer}}),
+* a {{ml}} config ({{anomaly-detect}}, dfanalysis, transform, or {{infer}}),
 * an arbitrary set of static assertions (bucket counts, anomaly scores, accuracy value, and so on).
 Performance metrics are collected from each and every scenario run and they are persisted in an Elastic Cloud cluster. This information is then used to track the performance over time, across the different builds, mainly to detect any regressions in the performance (both result quality and compute time).

explore-analyze/machine-learning/data-frame-analytics.md (1 addition, 1 deletion)
@@ -13,7 +13,7 @@ products:
 # Data frame analytics [ml-dfanalytics]
 ::::{important}
-Using {{dfanalytics}} requires source data to be structured as a two dimensional "tabular" data structure, in other words a {{dataframe}}. [{{transforms-cap}}](../transforms.md) enable you to create {{dataframes}} which can be used as the source for {{dfanalytics}}.
+Using {{dfanalytics}} requires source data to be structured as a two dimensional "tabular" data structure, in other words a {{dataframe}}. [Transforms](../transforms.md) enable you to create {{dataframes}} which can be used as the source for {{dfanalytics}}.
 ::::
 {{dfanalytics-cap}} enable you to perform different analyses of your data and annotate it with the results. Consult [Setup and security](setting-up-machine-learning.md) to learn more about the license and the security privileges that are required to use {{dfanalytics}}.

explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md (1 addition, 1 deletion)
@@ -38,7 +38,7 @@ Before you can use the {{stack-ml-features}}, there are some configuration requi
 If possible, prepare your input data such that it has less classes. A {{classanalysis}} with many classes takes more time to run than a binary {{classification}} job. The relationship between the number of classes and the runtime is roughly linear.
-You might also need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{classification}}.
+You might also need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{classification}}.
 To learn more about how to prepare your data, refer to [the relevant section](ml-dfa-overview.md#prepare-transform-data) of the supervised learning overview.

explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md (12 additions, 12 deletions)
@@ -45,7 +45,7 @@ Before you can use the {{stack-ml-features}}, there are some configuration requi
 ## 3. Prepare and transform data [dfa-outlier-detection-prepare-data]
-{{oldetection-cap}} requires specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{oldetection}}.
+{{oldetection-cap}} requires specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for {{oldetection}}.
 You can find an example of how to transform your data into an entity-centric index in [this section](#weblogs-outliers).
@@ -116,17 +116,17 @@ The evaluate {{dfanalytics}} API can return the false positive rate (`fpr`) and
 The goal of {{oldetection}} is to find the most unusual documents in an index. Let’s try to detect unusual behavior in the [data logs sample data set](../../index.md#gs-get-data-into-kibana).
-1. Verify that your environment is set up properly to use {{ml-features}}. If the {{es}} {{security-features}} are enabled, you need a user that has authority to create and manage {{dfanalytics}} jobs. See [Setup and security](../setting-up-machine-learning.md). Since we’ll be creating {{transforms}}, you also need `manage_data_frame_transforms` cluster privileges.
+1. Verify that your environment is set up properly to use {{ml-features}}. If the {{es}} {{security-features}} are enabled, you need a user that has authority to create and manage {{dfanalytics}} jobs. See [Setup and security](../setting-up-machine-learning.md). Since we’ll be creating transforms, you also need `manage_data_frame_transforms` cluster privileges.
-2. Create a {{transform}} that generates an entity-centric index with numeric or boolean data to analyze.
+2. Create a transform that generates an entity-centric index with numeric or boolean data to analyze.
 In this example, we’ll use the web logs sample data and pivot the data such that we get a new index that contains a network usage summary for each client IP.
-In particular, create a {{transform}} that calculates the number of occasions when a specific client IP communicated with the network (`@timestamp.value_count`), the sum of the bytes that are exchanged between the network and the client’s machine (`bytes.sum`), the maximum exchanged bytes during a single occasion (`bytes.max`), and the total number of requests (`request.value_count`) initiated by a specific client IP.
-You can preview the {{transform}} before you create it. Go to the **Transforms** page in the main menu or by using the [global search field](../../find-and-organize/find-apps-and-objects.md) in {{kib}}.
+In particular, create a transform that calculates the number of occasions when a specific client IP communicated with the network (`@timestamp.value_count`), the sum of the bytes that are exchanged between the network and the client’s machine (`bytes.sum`), the maximum exchanged bytes during a single occasion (`bytes.max`), and the total number of requests (`request.value_count`) initiated by a specific client IP.
+You can preview the transform before you create it. Go to the **Transforms** page in the main menu or by using the [global search field](../../find-and-organize/find-apps-and-objects.md) in {{kib}}.
-Alternatively, you can use the [preview {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
+Alternatively, you can use the [preview transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform).
 ::::{dropdown} API example
@@ -218,15 +218,15 @@ POST _transform/_preview
 ::::
-For more details about creating {{transforms}}, see [Transforming the eCommerce sample data](../../transforms/ecommerce-transforms.md).
+For more details about creating transforms, see [Transforming the eCommerce sample data](../../transforms/ecommerce-transforms.md).
-3. Start the {{transform}}.
+3. Start the transform.
 ::::{tip}
-Even though resource utilization is automatically adjusted based on the cluster load, a {{transform}} increases search and indexing load on your cluster while it runs. If you’re experiencing an excessive load, however, you can stop it.
+Even though resource utilization is automatically adjusted based on the cluster load, a transform increases search and indexing load on your cluster while it runs. If you’re experiencing an excessive load, however, you can stop it.
 ::::
-You can start, stop, and manage {{transforms}} in {{kib}}. Alternatively, you can use the [start {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API.
+You can start, stop, and manage transforms in {{kib}}. Alternatively, you can use the [start transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API.
 ::::{dropdown} API example
@@ -352,7 +352,7 @@ GET weblog-outliers/_search?q="111.237.144.54"
 Now that you’ve found unusual behavior in the sample data set, consider how you might apply these steps to other data sets. If you have data that is already marked up with true outliers, you can determine how well the {{oldetection}} algorithms perform by using the evaluate {{dfanalytics}} API. See [6. Evaluate the results](#ml-outlier-detection-evaluate).
 ::::{tip}
-If you do not want to keep the {{transform}} and the {{dfanalytics}} job, you can delete them in {{kib}} or use the [delete {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform) and [delete {{dfanalytics}} job API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-delete-data-frame-analytics). When you delete {{transforms}} and {{dfanalytics}} jobs in {{kib}}, you have the option to also remove the destination indices and {{data-sources}}.
+If you do not want to keep the transform and the {{dfanalytics}} job, you can delete them in {{kib}} or use the [delete transform API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-delete-transform) and [delete {{dfanalytics}} job API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-delete-data-frame-analytics). When you delete transforms and {{dfanalytics}} jobs in {{kib}}, you have the option to also remove the destination indices and {{data-sources}}.
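The pivot transform from step 2 of this file can be sketched as a `_transform/_preview` request body. This is a hedged reconstruction: the source index and field names (for example `request.keyword`) are assumptions about the web logs sample data, and the API example dropdowns in the original docs are authoritative.

```python
# Sketch (assumed field names): pivot body for the web logs example,
# grouping by client IP and computing the four metrics named in the text.
preview_body = {
    "source": {"index": "kibana_sample_data_logs"},  # assumed sample index name
    "pivot": {
        "group_by": {
            "clientip": {"terms": {"field": "clientip"}}
        },
        "aggregations": {
            "@timestamp.value_count": {"value_count": {"field": "@timestamp"}},
            "bytes.sum": {"sum": {"field": "bytes"}},
            "bytes.max": {"max": {"field": "bytes"}},
            "request.value_count": {"value_count": {"field": "request.keyword"}},
        },
    },
}
```

Sending a body like this to `POST _transform/_preview` returns a sample of the pivoted documents without creating the transform.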

explore-analyze/machine-learning/data-frame-analytics/ml-dfa-overview.md (2 additions, 2 deletions)
@@ -58,7 +58,7 @@ An important requirement is a data set that is large enough to train a model. Fo
 Before you train the model, consider preprocessing the data. In practice, the type of preprocessing depends on the nature of the data set. Preprocessing can include, but is not limited to, mitigating redundancy, reducing biases, applying standards and/or conventions, data normalization, and so on.
-{{regression-cap}} and {{classification}} require specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [{{transform}}](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for these types of {{dfanalytics}}.
+{{regression-cap}} and {{classification}} require specifically structured source data: a two dimensional tabular data structure. For this reason, you might need to [transform](../../transforms.md) your data to create a {{dataframe}} which can be used as the source for these types of {{dfanalytics}}.
 ### Train, test, iterate [train-test-iterate]
@@ -76,7 +76,7 @@ Once the model is trained, you can evaluate how well it predicts previously unse
 You have trained the model and are satisfied with the performance. The last step is to deploy your trained model and start using it on new data.
-The Elastic {{ml}} feature called {{infer}} enables you to make predictions for new data either by using it as a processor in an ingest pipeline, in a continuous {{transform}} or as an aggregation at search time. When new data comes into your ingest pipeline or you run a search on your data with an {{infer}} aggregation, the model is used to infer against the data and make predictions on it.
+The Elastic {{ml}} feature called {{infer}} enables you to make predictions for new data either by using it as a processor in an ingest pipeline, in a continuous transform or as an aggregation at search time. When new data comes into your ingest pipeline or you run a search on your data with an {{infer}} aggregation, the model is used to infer against the data and make predictions on it.
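As a hedged illustration of the first deployment option named above (inference as an ingest processor), a pipeline definition might look like the following sketch; `my-trained-model` is a hypothetical model ID, not one from this document:

```python
# Sketch: an ingest pipeline that applies a trained model to incoming
# documents via the inference processor. The model ID is hypothetical.
inference_pipeline = {
    "description": "Run a trained model against each ingested document",
    "processors": [
        {"inference": {"model_id": "my-trained-model"}}  # hypothetical ID
    ],
}
```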
0 commit comments