Merged
2 changes: 1 addition & 1 deletion explore-analyze/dashboards/arrange-panels.md
Original file line number Diff line number Diff line change
@@ -29,7 +29,7 @@ To add a collapsible section:
:::{tip}
The section must be expanded in order to place panels into it.
:::
5. Just like any other panel, you can drag and drop the collapsible section to a different position in the dashboard.
5. Like any other panel, you can drag and drop the collapsible section to a different position in the dashboard.
6. Save the dashboard.

Users viewing the dashboard will find the section in the same state as when you saved the dashboard. If you saved it with the section collapsed, then it will also be collapsed by default for users.
2 changes: 1 addition & 1 deletion explore-analyze/dashboards/building.md
@@ -20,7 +20,7 @@ products:
$$$dashboard-minimum-requirements$$$
To create or edit dashboards, you first need to:

* have [data indexed into {{es}}](/manage-data/ingest.md) and a [data view](../find-and-organize/data-views.md). A data view is a subset of your {{es}} data, and allows you to load just the right data when building a visualization or exploring it.
* have [data indexed into {{es}}](/manage-data/ingest.md) and a [data view](../find-and-organize/data-views.md). A data view is a subset of your {{es}} data, and allows you to load the right data when building a visualization or exploring it.

::::{tip}
If you don’t have data at hand and still want to explore dashboards, you can import one of the available [sample data sets](../../manage-data/ingest/sample-data.md).
::::
2 changes: 1 addition & 1 deletion explore-analyze/dashboards/duplicate-dashboards.md
@@ -17,7 +17,7 @@ products:

You will be redirected to the duplicated dashboard.

To duplicate a managed dashboard, follow the instructions above or click the **Managed** badge in the toolbar. Then click **Duplicate** in the dialogue that appears.
To duplicate a managed dashboard, follow the instructions above or click the **Managed** badge in the toolbar. Then click **Duplicate** in the dialog that appears.

:::{image} /explore-analyze/images/kibana-managed-dashboard-popover-8.16.0.png
:alt: Managed badge dialog with Duplicate button
:::
2 changes: 1 addition & 1 deletion explore-analyze/dashboards/open-dashboard.md
@@ -14,7 +14,7 @@ products:
2. Locate the dashboard you want to edit.

::::{tip}
When looking for a specific dashboard, you can filter them by tag or by creator, or search the list based on their name and description. Note that the creator information is only available for dashboards created on or after version 8.14.
When looking for a specific dashboard, you can filter them by tag or by creator, or search the list based on their name and description. The creator information is only available for dashboards created on or after version 8.14.
::::

3. Click the dashboard name you want to open.
4 changes: 2 additions & 2 deletions explore-analyze/transforms/ecommerce-transforms.md
@@ -94,7 +94,7 @@ products:

4. When you are satisfied with what you see in the preview, create the {{transform}}.
1. Supply a {{transform}} ID, the name of the destination index and optionally a description. If the destination index does not exist, it will be created automatically when you start the {{transform}}.
2. Decide whether you want the {{transform}} to run once or continuously. Since this sample data index is unchanging, lets use the default behavior and just run the {{transform}} once. If you want to try it out, however, go ahead and click on **Continuous mode**. You must choose a field that the {{transform}} can use to check which entities have changed. In general, its a good idea to use the ingest timestamp field. In this example, however, you can use the `order_date` field.
2. Decide whether you want the {{transform}} to run once or continuously. Since this sample data index is unchanging, let's use the default behavior and run the {{transform}} once. If you want to try it out, however, go ahead and click on **Continuous mode**. You must choose a field that the {{transform}} can use to check which entities have changed. In general, it's a good idea to use the ingest timestamp field. In this example, however, you can use the `order_date` field.
3. Optionally, you can configure a retention policy that applies to your {{transform}}. Select a date field that is used to identify old documents in the destination index and provide a maximum age. Documents that are older than the configured value are removed from the destination index.
:::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-pivot3.png
:alt: Adding transform ID and retention policy to a {{transform}} in {{kib}}
:::
@@ -303,7 +303,7 @@ products:

Alternatively, you can use the [start {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform), [stop {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) and [reset {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-reset-transform) APIs.

If you reset a {{transform}}, all checkpoints, states, and the destination index (if it was created by the {{transform}}) are deleted. The {{transform}} is ready to start again as if it had just been created.
If you reset a {{transform}}, all checkpoints, states, and the destination index (if it was created by the {{transform}}) are deleted. The {{transform}} is ready to start again as if it were newly created.

::::{dropdown} API example
```console
```
::::
2 changes: 1 addition & 1 deletion explore-analyze/transforms/transform-alerts.md
@@ -71,7 +71,7 @@ There is a set of variables that you can use to customize the notification messages

After you save the configurations, the rule appears in the **{{rules-ui}}** list where you can check its status and see the overview of its configuration information.

The name of an alert is always the same as the {{transform}} ID of the associated {{transform}} that triggered it. You can mute the notifications for a particular {{transform}} on the page of the rule that lists the individual alerts. You can open it via **{{rules-ui}}** by selecting the rule name.
The name of an alert is always the same as the {{transform}} ID of the associated {{transform}} that triggered it. You can mute the notifications for a particular {{transform}} on the page of the rule that lists the individual alerts. You can open it through **{{rules-ui}}** by selecting the rule name.

## Action variables [transform-action-variables]

4 changes: 2 additions & 2 deletions explore-analyze/transforms/transform-checkpoints.md
@@ -27,7 +27,7 @@ To create a checkpoint, the {{ctransform}}:

If changes are found, a checkpoint is created.

2. Identifies which entities and/or time buckets have changed.
2. Identifies which entities or time buckets have changed.

The {{transform}} searches to see which entities or time buckets have changed between the last and the new checkpoint. The {{transform}} uses the values to synchronize the source and destination indices with fewer operations than a full re-run.

@@ -45,7 +45,7 @@ If the cluster experiences unsuitable performance degradation due to the {{transform}}

In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the optimal way for {{transforms}} to identify new changes. If your data source follows the [ECS standard](ecs://reference/index.md), you might already have an [`event.ingested`](ecs://reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}.

If you dont have a `event.ingested` field or it isnt populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}}'s **Ingest Pipelines** management page. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.
If you don't have an `event.ingested` field or it isn't populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or through {{kib}}'s **Ingest Pipelines** management page. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.

```console
PUT _ingest/pipeline/set_ingest_time
```
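The request body is cut off in this diff view. A minimal sketch of what the rest of such a pipeline typically looks like, using the same pipeline name as the request above (the description text and processor body are assumptions based on standard `set` processor usage):

```console
PUT _ingest/pipeline/set_ingest_time
{
  "description": "Sets event.ingested to the ingest timestamp",
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{{_ingest.timestamp}}}"
      }
    }
  ]
}
```

With a pipeline like this in place, route new documents through it (for example, by setting it as the index's `default_pipeline`) so that `event.ingested` is populated and can serve as the `sync.time.field`.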
10 changes: 5 additions & 5 deletions explore-analyze/transforms/transform-limitations.md
@@ -15,7 +15,7 @@ The following limitations and known problems apply to the {{version.stack}} release

* [Configuration limitations](#transform-config-limitations) apply to the configuration process of the {{transforms}}.
* [Operational limitations](#transform-operational-limitations) affect the behavior of the {{transforms}} that are running.
* [Limitations in {{kib}}](#transform-ui-limitations) only apply to {{transforms}} managed via the user interface.
* [Limitations in {{kib}}](#transform-ui-limitations) only apply to {{transforms}} managed through the user interface.

## Configuration limitations [transform-config-limitations]

@@ -45,7 +45,7 @@ If a {{transform}} contains Painless scripts that use deprecated syntax

### {{ctransform-cap}} scheduling limitations [transform-scheduling-limitations]

A {{ctransform}} periodically checks for changes to source data. The functionality of the scheduler is currently limited to a basic periodic timer which can be within the `frequency` range from 1s to 1h. The default is 1m. This is designed to run little and often. When choosing a `frequency` for this timer consider your ingest rate along with the impact that the {{transform}} search/index operations has other users in your cluster. Also note that retries occur at `frequency` interval.
A {{ctransform}} periodically checks for changes to source data. The functionality of the scheduler is currently limited to a basic periodic timer with a `frequency` range of 1s to 1h; the default is 1m. It is designed to run little and often. When choosing a `frequency` for this timer, consider your ingest rate along with the impact that the {{transform}} search and index operations have on other users in your cluster. Also, retries occur at the `frequency` interval.
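To illustrate where `frequency` fits, a continuous {{transform}} might be configured like this sketch (the {{transform}}, index, and field names are invented for the example):

```console
PUT _transform/example-continuous-transform
{
  "source": { "index": "my-source-index" },
  "dest": { "index": "my-dest-index" },
  "frequency": "5m",
  "sync": {
    "time": {
      "field": "event.ingested",
      "delay": "60s"
    }
  },
  "pivot": {
    "group_by": {
      "ip": { "terms": { "field": "ip" } }
    },
    "aggregations": {
      "bytes_total": { "sum": { "field": "bytes_sent" } }
    }
  }
}
```

Here the scheduler checks for new changes every 5 minutes, and failed checkpoints are retried at that same interval.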

## Operational limitations [transform-operational-limitations]

@@ -103,11 +103,11 @@ When using the API to delete a failed {{transform}}, first stop it using `_stop?

### {{ctransforms-cap}} may give incorrect results if documents are not yet available to search [transform-availability-limitations]

After a document is indexed, there is a very small delay until it is available to search.
After a document is indexed, there is a small delay until it is available to search.

A {{ctransform}} periodically checks for entities that changed between the time of its last check and `now` minus `sync.time.delay`. This time window moves without overlapping. If the timestamp of a recently indexed document falls within this time window but the document is not yet available to search, then this entity will not be updated.

If using a `sync.time.field` that represents the data ingest time and using a zero second or very small `sync.time.delay`, then it is more likely that this issue will occur.
If you use a `sync.time.field` that represents the data ingest time with a zero-second or small `sync.time.delay`, this issue is more likely to occur.

### Support for date nanoseconds data type [transform-date-nanos]

Expand Down Expand Up @@ -184,4 +184,4 @@ The {{transforms}} management page in {{kib}} lists up to 1000 {{transforms}}.

### {{kib}} might not support every {{transform}} configuration option [transform-ui-support]

There might be configuration options available via the {{transform}} APIs that are not supported in {{kib}}. For an exhaustive list of configuration options, refer to the [documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
There might be configuration options available through the {{transform}} APIs that are not supported in {{kib}}. For an exhaustive list of configuration options, refer to the [documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-transform).
4 changes: 2 additions & 2 deletions explore-analyze/transforms/transform-overview.md
@@ -38,9 +38,9 @@ As an optional step, you can also add a query to further limit the scope of the

The {{transform}} performs a composite aggregation that paginates through all the data defined by the source index query. The output of the aggregation is stored in a *destination index*. Each time the {{transform}} queries the source index, it creates a *checkpoint*. You can decide whether you want the {{transform}} to run once or continuously. A *batch {{transform}}* is a single operation that has a single checkpoint. *{{ctransforms-cap}}* continually increment and process checkpoints as new source data is ingested.

Imagine that you run a webshop that sells clothes. Every order creates a document that contains a unique order ID, the name and the category of the ordered product, its price, the ordered quantity, the exact date of the order, and some customer information (name, gender, location, etc). Your data set contains all the transactions from last year.
Imagine that you run a webshop that sells clothes. Every order creates a document that contains a unique order ID, the name and the category of the ordered product, its price, the ordered quantity, the exact date of the order, and some customer information (name, gender, location, and so on). Your data set contains all the transactions from last year.

If you want to check the sales in the different categories in your last fiscal year, define a {{transform}} that groups the data by the product categories (womens shoes, mens clothing, etc.) and the order date. Use the last year as the interval for the order date. Then add a sum aggregation on the ordered quantity. The result is an entity-centric index that shows the number of sold items in every product category in the last year.
If you want to check the sales in the different categories in your last fiscal year, define a {{transform}} that groups the data by the product categories (women's shoes, men's clothing, and so on) and the order date. Use the last year as the interval for the order date. Then add a sum aggregation on the ordered quantity. The result is an entity-centric index that shows the number of sold items in every product category in the last year.
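A sketch of such a pivot as an API call, using the field names of the {{kib}} ecommerce sample data set (assumed here purely for illustration):

```console
POST _transform/_preview
{
  "source": { "index": "kibana_sample_data_ecommerce" },
  "pivot": {
    "group_by": {
      "category": { "terms": { "field": "category.keyword" } },
      "order_year": {
        "date_histogram": { "field": "order_date", "calendar_interval": "1y" }
      }
    },
    "aggregations": {
      "total_quantity": { "sum": { "field": "total_quantity" } }
    }
  }
}
```

The preview returns the entity-centric documents that would be written to the destination index, one per category and year bucket.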

:::{image} /explore-analyze/images/elasticsearch-reference-pivot-preview.png
:alt: Example of a pivot {{transform}} preview in {{kib}}
:::
4 changes: 2 additions & 2 deletions explore-analyze/transforms/transform-scale.md
@@ -55,7 +55,7 @@ If you have defined a {{transform}} source index `query`, ensure it is as efficient

Imagine your {{ctransform}} is configured to group by `IP` and calculate the sum of `bytes_sent`. For each checkpoint, a {{ctransform}} detects changes in the source data since the previous checkpoint, identifying the IPs for which new data has been ingested. Then it performs a second search, filtered for this group of IPs, in order to calculate the total `bytes_sent`. If this second search matches many shards, then this could be resource intensive. Consider limiting the scope that the source index pattern and query will match.

To limit which historical indices are accessed, exclude certain tiers (for example `"must_not": { "terms": { "_tier": [ "data_frozen", "data_cold" ] } }` and/or use an absolute time value as a date range filter in your source query (for example, greater than 2024-01-01T00:00:00). If you use a relative time value (for example, gte now-30d/d) then ensure date rounding is applied to take advantage of query caching and ensure that the relative time is much larger than the largest of `frequency` or `time.sync.delay` or the date histogram bucket, otherwise data may be missed. Do not use date filters which are less than a date value (for example, `lt`: less than or `lte`: less than or equal to) as this conflicts with the logic applied at each checkpoint execution and data may be missed.
To limit which historical indices are accessed, exclude certain tiers (for example, `"must_not": { "terms": { "_tier": [ "data_frozen", "data_cold" ] } }`) or use an absolute time value as a date range filter in your source query (for example, greater than `2024-01-01T00:00:00`). If you use a relative time value (for example, `gte: now-30d/d`), ensure that date rounding is applied to take advantage of query caching, and that the relative time is much larger than the largest of `frequency`, `time.sync.delay`, or the date histogram bucket; otherwise data may be missed. Do not use date filters that are less than a date value (for example, `lt`: less than, or `lte`: less than or equal to), as this conflicts with the logic applied at each checkpoint execution and data may be missed.
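Putting those recommendations together, a source query that skips cold and frozen tiers and uses an absolute lower bound might look like this sketch (the {{transform}}, index, and field names are assumptions for the example):

```console
PUT _transform/example-transform
{
  "source": {
    "index": "my-index-pattern*",
    "query": {
      "bool": {
        "filter": [
          { "range": { "timestamp": { "gte": "2024-01-01T00:00:00" } } }
        ],
        "must_not": [
          { "terms": { "_tier": [ "data_frozen", "data_cold" ] } }
        ]
      }
    }
  },
  "dest": { "index": "example-dest" },
  "pivot": {
    "group_by": { "ip": { "terms": { "field": "ip" } } },
    "aggregations": { "bytes_total": { "sum": { "field": "bytes_sent" } } }
  }
}
```

Because the range filter uses a fixed, rounded timestamp, the filtered portion of the query is cacheable and never matches documents older than the bound.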

Consider using [date math](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#api-date-math-index-names) in your index names to reduce the number of indices to resolve in your queries. Add a date pattern - for example, `yyyy-MM-dd` - to your index names and use it to limit your query to a specific date. The example below queries indices only from yesterday and today:

@@ -90,7 +90,7 @@ Index sorting enables you to store documents on disk in a specific order which can

## 9. Disable the `_source` field on the destination index (storage) [disable-source-dest]

The [`_source` field](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) contains the original JSON document body that was passed at index time. The `_source` field itself is not indexed (and thus is not searchable), but it is still stored in the index and incurs a storage overhead. Consider disabling `_source` to save storage space if you have a large destination index. Disabling `_source` is only possible during index creation.
The [`_source` field](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) contains the original JSON document body that was passed at index time. The `_source` field itself is not indexed (and therefore is not searchable), but it is still stored in the index and incurs a storage overhead. Consider disabling `_source` to save storage space if you have a large destination index. Disabling `_source` is only possible during index creation.
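Since `_source` can only be disabled at creation time, create the destination index yourself before starting the {{transform}}. A minimal sketch (the index name is illustrative):

```console
PUT my-transform-dest
{
  "mappings": {
    "_source": {
      "enabled": false
    }
  }
}
```

If the {{transform}} then writes to `my-transform-dest`, the aggregated documents are stored without the `_source` overhead.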

::::{note}
When the `_source` field is disabled, a number of features are not supported. Consult [Disabling the `_source` field](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#disable-source-field) to understand the consequences before disabling it.
::::
2 changes: 1 addition & 1 deletion explore-analyze/transforms/transform-setup.md
@@ -25,7 +25,7 @@ To use {{transforms}}, you must have:

Assigning security privileges affects how users access {{transforms}}. Consider the two main categories:

* **[{{es}} API user](#transform-es-security-privileges)**: uses an {{es}} client, cURL, or {{kib}} **{{dev-tools-app}}** to access {{transforms}} via {{es}} APIs. This scenario requires {{es}} security privileges.
* **[{{es}} API user](#transform-es-security-privileges)**: uses an {{es}} client, cURL, or {{kib}} **{{dev-tools-app}}** to access {{transforms}} through {{es}} APIs. This scenario requires {{es}} security privileges.
* **[{{kib}} user](#transform-kib-security-privileges)**: uses {{transforms}} in {{kib}}. This scenario requires {{kib}} feature privileges *and* {{es}} security privileges.

### {{es}} API user [transform-es-security-privileges]