Review and fix Vale check issues corrected by Cursor for the Dashboards and Transforms folders under E&A.
Targets fixes for most of the warnings.
---------
Co-authored-by: florent-leborgne <[email protected]>
**explore-analyze/dashboards/arrange-panels.md** (+1 −1)

```diff
@@ -29,7 +29,7 @@ To add a collapsible section:
 :::{tip}
 The section must be expanded in order to place panels into it.
 :::
-5.Just like any other panel, you can drag and drop the collapsible section to a different position in the dashboard.
+5.Like any other panel, you can drag and drop the collapsible section to a different position in the dashboard.
 6. Save the dashboard.
 
 Users viewing the dashboard will find the section in the same state as when you saved the dashboard. If you saved it with the section collapsed, then it will also be collapsed by default for users.
```
**explore-analyze/dashboards/building.md** (+1 −1)

```diff
@@ -20,7 +20,7 @@ products:
 $$$dashboard-minimum-requirements$$$
 To create or edit dashboards, you first need to:
 
-* have [data indexed into {{es}}](/manage-data/ingest.md) and a [data view](../find-and-organize/data-views.md). A data view is a subset of your {{es}} data, and allows you to load just the right data when building a visualization or exploring it.
+* have [data indexed into {{es}}](/manage-data/ingest.md) and a [data view](../find-and-organize/data-views.md). A data view is a subset of your {{es}} data, and allows you to load the right data when building a visualization or exploring it.
 
 ::::{tip}
 If you don’t have data at hand and still want to explore dashboards, you can import one of the [sample data sets](../../manage-data/ingest/sample-data.md) available.
```
**explore-analyze/dashboards/duplicate-dashboards.md** (+1 −1)

```diff
@@ -17,7 +17,7 @@ products:
 
 You will be redirected to the duplicated dashboard.
 
-To duplicate a managed dashboard, follow the instructions above or click the **Managed** badge in the toolbar. Then click **Duplicate** in the dialogue that appears.
+To duplicate a managed dashboard, follow the instructions above or click the **Managed** badge in the toolbar. Then click **Duplicate** in the dialog that appears.
```
**explore-analyze/dashboards/open-dashboard.md** (+1 −1)

```diff
@@ -14,7 +14,7 @@ products:
 2. Locate the dashboard you want to edit.
 
 ::::{tip}
-When looking for a specific dashboard, you can filter them by tag or by creator, or search the list based on their name and description. Note that the creator information is only available for dashboards created on or after version 8.14.
+When looking for a specific dashboard, you can filter them by tag or by creator, or search the list based on their name and description. The creator information is only available for dashboards created on or after version 8.14.
```
**explore-analyze/transforms/ecommerce-transforms.md** (+2 −2)

```diff
@@ -94,7 +94,7 @@ products:
 
 4. When you are satisfied with what you see in the preview, create the {{transform}}.
 1. Supply a {{transform}} ID, the name of the destination index and optionally a description. If the destination index does not exist, it will be created automatically when you start the {{transform}}.
-2. Decide whether you want the {{transform}} to run once or continuously. Since this sample data index is unchanging, let’s use the default behavior and just run the {{transform}} once. If you want to try it out, however, go ahead and click on **Continuous mode**. You must choose a field that the {{transform}} can use to check which entities have changed. In general, it’s a good idea to use the ingest timestamp field. In this example, however, you can use the `order_date` field.
+2. Decide whether you want the {{transform}} to run once or continuously. Since this sample data index is unchanging, let's use the default behavior and run the {{transform}} once. If you want to try it out, however, go ahead and click on **Continuous mode**. You must choose a field that the {{transform}} can use to check which entities have changed. In general, it's a good idea to use the ingest timestamp field. In this example, however, you can use the `order_date` field.
 3. Optionally, you can configure a retention policy that applies to your {{transform}}. Select a date field that is used to identify old documents in the destination index and provide a maximum age. Documents that are older than the configured value are removed from the destination index.
 :alt: Adding transfrom ID and retention policy to a {{transform}} in {{kib}}
@@ -303,7 +303,7 @@ products:
 
 Alternatively, you can use the [start {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform), [stop {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) and [reset {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-reset-transform) APIs.
 
-If you reset a {{transform}}, all checkpoints, states, and the destination index (if it was created by the {{transform}}) are deleted. The {{transform}} is ready to start again as if it had just been created.
+If you reset a {{transform}}, all checkpoints, states, and the destination index (if it was created by the {{transform}}) are deleted. The {{transform}} is ready to start again as if it were newly created.
```
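
The UI steps described in the first hunk above map onto the create {{transform}} API. The following sketch shows where the continuous-mode (`sync`) and retention-policy settings would go; the transform ID, destination index name, and the particular grouping and aggregation are illustrative assumptions, not the tutorial's exact configuration:

```console
// Illustrative sketch only: the transform ID, dest index, group_by, and
// aggregation are assumed, not taken from the tutorial.
PUT _transform/ecommerce-customer-sales
{
  "source": { "index": "kibana_sample_data_ecommerce" },
  "pivot": {
    "group_by": {
      "customer_id": { "terms": { "field": "customer_id" } }
    },
    "aggregations": {
      "total_quantity.sum": { "sum": { "field": "total_quantity" } }
    }
  },
  "dest": { "index": "ecommerce-customer-sales" },
  "sync": {
    "time": { "field": "order_date", "delay": "60s" }
  },
  "retention_policy": {
    "time": { "field": "order_date", "max_age": "30d" }
  }
}
```

Omitting `sync` gives the default batch behavior (the transform runs once); including it makes the transform continuous.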
**explore-analyze/transforms/transform-alerts.md** (+1 −1)

```diff
@@ -71,7 +71,7 @@ There is a set of variables that you can use to customize the notification messa
 
 After you save the configurations, the rule appears in the **{{rules-ui}}** list where you can check its status and see the overview of its configuration information.
 
-The name of an alert is always the same as the {{transform}} ID of the associated {{transform}} that triggered it. You can mute the notifications for a particular {{transform}} on the page of the rule that lists the individual alerts. You can open it via **{{rules-ui}}** by selecting the rule name.
+The name of an alert is always the same as the {{transform}} ID of the associated {{transform}} that triggered it. You can mute the notifications for a particular {{transform}} on the page of the rule that lists the individual alerts. You can open it through **{{rules-ui}}** by selecting the rule name.
```
**explore-analyze/transforms/transform-checkpoints.md** (+2 −2)

```diff
@@ -27,7 +27,7 @@ To create a checkpoint, the {{ctransform}}:
 
 If changes are found a checkpoint is created.
 
-2. Identifies which entities and/or time buckets have changed.
+2. Identifies which entities or time buckets have changed.
 
 The {{transform}} searches to see which entities or time buckets have changed between the last and the new checkpoint. The {{transform}} uses the values to synchronize the source and destination indices with fewer operations than a full re-run.
 
@@ -45,7 +45,7 @@ If the cluster experiences unsuitable performance degradation due to the {{trans
 
 In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](ecs://reference/index.md), you might already have an [`event.ingested`](ecs://reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}.
 
-If you don’t have a `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}}'s **Ingest Pipelines** management page. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.
+If you don't have a `event.ingested` field or it isn't populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or through {{kib}}'s **Ingest Pipelines** management page. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.
```
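
The hunk above refers to an example ingest pipeline that populates `event.ingested`. A minimal sketch of such a pipeline, using a `set` processor to copy the ingest timestamp into the field (the pipeline ID is an illustrative assumption):

```console
// Pipeline ID is illustrative; the set processor pattern follows the
// Elasticsearch ingest documentation for populating event.ingested.
PUT _ingest/pipeline/set-ingest-timestamp
{
  "description": "Set event.ingested to the ingest timestamp",
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{{_ingest.timestamp}}}"
      }
    }
  ]
}
```

Documents indexed through this pipeline then carry an `event.ingested` value suitable for use as the transform's `sync.time.field`.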
```diff
-A {{ctransform}} periodically checks for changes to source data. The functionality of the scheduler is currently limited to a basic periodic timer which can be within the `frequency` range from 1s to 1h. The default is 1m. This is designed to run little and often. When choosing a `frequency` for this timer consider your ingest rate along with the impact that the {{transform}} search/index operations has other users in your cluster. Also note that retries occur at `frequency` interval.
+A {{ctransform}} periodically checks for changes to source data. The functionality of the scheduler is currently limited to a basic periodic timer which can be within the `frequency` range from 1s to 1h. The default is 1m. This is designed to run little and often. When choosing a `frequency` for this timer consider your ingest rate along with the impact that the {{transform}} search/index operations has other users in your cluster. Also, retries occur at `frequency` interval.
```
```diff
@@ -103,11 +103,11 @@ When using the API to delete a failed {{transform}}, first stop it using `_stop?
 
 ### {{ctransforms-cap}} may give incorrect results if documents are not yet available to search [transform-availability-limitations]
 
-After a document is indexed, there is a very small delay until it is available to search.
+After a document is indexed, there is a small delay until it is available to search.
 
 A {{ctransform}} periodically checks for changed entities between the time since it last checked and `now` minus `sync.time.delay`. This time window moves without overlapping. If the timestamp of a recently indexed document falls within this time window but this document is not yet available to search then this entity will not be updated.
 
-If using a `sync.time.field` that represents the data ingest time and using a zero second or very small `sync.time.delay`, then it is more likely that this issue will occur.
+If using a `sync.time.field` that represents the data ingest time and using a zero second or small `sync.time.delay`, then it is more likely that this issue will occur.
 
 ### Support for date nanoseconds data type [transform-date-nanos]
```
```diff
@@ -184,4 +184,4 @@ The {{transforms}} management page in {{kib}} lists up to 1000 {{transforms}}.
 
 ### {{kib}} might not support every {{transform}} configuration option [transform-ui-support]
 
-There might be configuration options available via the {{transform}} APIs that are not supported in {{kib}}. For an exhaustive list of configuration options, refer to the [documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
+There might be configuration options available through the {{transform}} APIs that are not supported in {{kib}}. For an exhaustive list of configuration options, refer to the [documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
```
**explore-analyze/transforms/transform-overview.md** (+2 −2)

```diff
@@ -38,9 +38,9 @@ As an optional step, you can also add a query to further limit the scope of the
 
 The {{transform}} performs a composite aggregation that paginates through all the data defined by the source index query. The output of the aggregation is stored in a *destination index*. Each time the {{transform}} queries the source index, it creates a *checkpoint*. You can decide whether you want the {{transform}} to run once or continuously. A *batch {{transform}}* is a single operation that has a single checkpoint. *{{ctransforms-cap}}* continually increment and process checkpoints as new source data is ingested.
 
-Imagine that you run a webshop that sells clothes. Every order creates a document that contains a unique order ID, the name and the category of the ordered product, its price, the ordered quantity, the exact date of the order, and some customer information (name, gender, location, etc). Your data set contains all the transactions from last year.
+Imagine that you run a webshop that sells clothes. Every order creates a document that contains a unique order ID, the name and the category of the ordered product, its price, the ordered quantity, the exact date of the order, and some customer information (name, gender, location, and so on). Your data set contains all the transactions from last year.
 
-If you want to check the sales in the different categories in your last fiscal year, define a {{transform}} that groups the data by the product categories (women’s shoes, men’s clothing, etc.) and the order date. Use the last year as the interval for the order date. Then add a sum aggregation on the ordered quantity. The result is an entity-centric index that shows the number of sold items in every product category in the last year.
+If you want to check the sales in the different categories in your last fiscal year, define a {{transform}} that groups the data by the product categories (women's shoes, men's clothing, and so on) and the order date. Use the last year as the interval for the order date. Then add a sum aggregation on the ordered quantity. The result is an entity-centric index that shows the number of sold items in every product category in the last year.
```
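
The webshop scenario in the hunk above can be sketched as a pivot configuration. Everything here is a hypothetical illustration of the described grouping (all index, field, and transform names are assumptions for the fictional webshop):

```console
// All names here are hypothetical, matching the webshop example in the text.
PUT _transform/webshop-category-sales
{
  "source": { "index": "webshop-orders" },
  "pivot": {
    "group_by": {
      "category": { "terms": { "field": "category" } },
      "order_year": {
        "date_histogram": { "field": "order_date", "calendar_interval": "1y" }
      }
    },
    "aggregations": {
      "quantity.sum": { "sum": { "field": "quantity" } }
    }
  },
  "dest": { "index": "webshop-category-sales" }
}
```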
**explore-analyze/transforms/transform-scale.md** (+2 −2)

```diff
@@ -55,7 +55,7 @@ If you have defined a {{transform}} source index `query`, ensure it is as effici
 
 Imagine your {{ctransform}} is configured to group by `IP` and calculate the sum of `bytes_sent`. For each checkpoint, a {{ctransform}} detects changes in the source data since the previous checkpoint, identifying the IPs for which new data has been ingested. Then it performs a second search, filtered for this group of IPs, in order to calculate the total `bytes_sent`. If this second search matches many shards, then this could be resource intensive. Consider limiting the scope that the source index pattern and query will match.
 
-To limit which historical indices are accessed, exclude certain tiers (for example `"must_not": { "terms": { "_tier": [ "data_frozen", "data_cold" ] } }`and/or use an absolute time value as a date range filter in your source query (for example, greater than 2024-01-01T00:00:00). If you use a relative time value (for example, gte now-30d/d) then ensure date rounding is applied to take advantage of query caching and ensure that the relative time is much larger than the largest of `frequency` or `time.sync.delay` or the date histogram bucket, otherwise data may be missed. Do not use date filters which are less than a date value (for example, `lt`: less than or `lte`: less than or equal to) as this conflicts with the logic applied at each checkpoint execution and data may be missed.
+To limit which historical indices are accessed, exclude certain tiers (for example `"must_not": { "terms": { "_tier": [ "data_frozen", "data_cold" ] } }` or use an absolute time value as a date range filter in your source query (for example, greater than 2024-01-01T00:00:00). If you use a relative time value (for example, gte now-30d/d) then ensure date rounding is applied to take advantage of query caching and ensure that the relative time is much larger than the largest of `frequency` or `time.sync.delay` or the date histogram bucket, otherwise data may be missed. Do not use date filters which are less than a date value (for example, `lt`: less than or `lte`: less than or equal to) as this conflicts with the logic applied at each checkpoint execution and data may be missed.
 
 Consider using [date math](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#api-date-math-index-names) in your index names to reduce the number of indices to resolve in your queries. Add a date pattern - for example, `yyyy-MM-dd` - to your index names and use it to limit your query to a specific date. The example below queries indices only from yesterday and today:
```
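
The tier-exclusion and date-filter advice in the hunk above can be sketched as a source query. The index pattern, timestamp field, and date values are illustrative assumptions; only the `_tier` exclusion clause comes from the text itself:

```console
// Index pattern, timestamp field, and date values are illustrative.
GET my-logs-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "2024-01-01T00:00:00" } } }
      ],
      "must_not": [
        { "terms": { "_tier": [ "data_frozen", "data_cold" ] } }
      ]
    }
  }
}

// Date math in index names can restrict a search to yesterday and today,
// assuming a daily date pattern in the index names.
GET /<my-logs-{now/d-1d}>,<my-logs-{now/d}>/_search
```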
```diff
@@ -90,7 +90,7 @@ Index sorting enables you to store documents on disk in a specific order which c
 
 ## 9. Disable the `_source` field on the destination index (storage) [disable-source-dest]
 
-The [`_source` field](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) contains the original JSON document body that was passed at index time. The `_source` field itself is not indexed (and thus is not searchable), but it is still stored in the index and incurs a storage overhead. Consider disabling `_source` to save storage space if you have a large destination index. Disabling `_source` is only possible during index creation.
+The [`_source` field](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) contains the original JSON document body that was passed at index time. The `_source` field itself is not indexed (and therefore is not searchable), but it is still stored in the index and incurs a storage overhead. Consider disabling `_source` to save storage space if you have a large destination index. Disabling `_source` is only possible during index creation.
 
 ::::{note}
 When the `_source` field is disabled, a number of features are not supported. Consult [Disabling the `_source` field](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#disable-source-field) to understand the consequences before disabling it.
```
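
Because disabling `_source` is only possible during index creation, the destination index has to be created manually before the {{transform}} first writes to it. A minimal sketch (the index name is an assumption):

```console
// Index name is illustrative; create the destination index with _source
// disabled before starting the transform, since this cannot be changed later.
PUT my-transform-dest
{
  "mappings": {
    "_source": {
      "enabled": false
    }
  }
}
```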