- The {{ecloud}} Terraform provider allows you to provision and manage {{ech}} and {{ece}} deployments as code, and introduce DevOps-driven methodologies to manage and deploy the {{stack}} and solutions.
+ The {{ecloud}} Terraform provider allows you to provision and manage {{serverless-full}} projects, {{ech}} and {{ece}} deployments as code, and introduce DevOps-driven methodologies to manage and deploy the {{stack}} and solutions.

- To get started, see the [{{ecloud}} Terraform provider documentation](https://registry.terraform.io/providers/elastic/ec/latest/docs).
+ To get started, review the [{{ecloud}} Terraform provider documentation](https://registry.terraform.io/providers/elastic/ec/latest/docs) and [{{ecloud}} Terraform GitHub repository](https://github.com/elastic/terraform-provider-ec) for more guidance.
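As a sketch of what "deployments as code" looks like with the `ec` provider — the deployment name, region, version, and template ID below are illustrative, and the `ec_deployment` schema varies between provider versions:

```terraform
terraform {
  required_providers {
    ec = {
      source  = "elastic/ec"
      version = ">= 0.9.0" # illustrative version constraint
    }
  }
}

provider "ec" {
  # Authenticates via an API key, for example from the EC_API_KEY
  # environment variable.
}

resource "ec_deployment" "example" {
  name                   = "my-deployment"          # illustrative name
  region                 = "us-east-1"              # illustrative region
  version                = "8.17.0"                 # illustrative stack version
  deployment_template_id = "aws-storage-optimized"  # illustrative template

  elasticsearch = {
    hot = {
      autoscaling = {}
    }
  }

  kibana = {}
}
```

Running `terraform apply` against a configuration like this creates the deployment; check the provider documentation for the exact schema supported by your provider version.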
`explore-analyze/find-and-organize/data-views.md` (8 additions, 19 deletions)
@@ -13,17 +13,6 @@ products:
# Data views [data-views]

- $$$field-formatters-numeric$$$
-
- $$$managing-fields$$$
-
- $$$runtime-fields$$$
-
- $$$management-cross-cluster-search$$$
-
- $$$data-views-read-only-access$$$
-
By default, analytics features such as Discover require a {{data-source}} to access the {{es}} data that you want to explore. A {{data-source}} can point to one or more indices, [data streams](../../manage-data/data-store/data-streams.md), or [index aliases](/manage-data/data-store/aliases.md). For example, a {{data-source}} can point to your log data from yesterday, or all indices that contain your data.
::::{note}
@@ -176,15 +165,15 @@ Deleting a {{data-source}} breaks all visualizations, saved Discover sessions, a
2. Find the {{data-source}} that you want to delete, and then click  in the **Actions** column.

- ## {{data-source}} field cache [data-view-field-cache]
+ ## Data view field cache [data-view-field-cache]

The browser caches {{data-source}} field lists for increased performance. This is particularly impactful for {{data-sources}} with a high field count that span a large number of indices and clusters. The field list is updated every couple of minutes in typical {{kib}} usage. Alternatively, use the refresh button on the {{data-source}} management detail page to get an updated field list. A force reload of {{kib}} has the same effect.

The field list may be impacted by changes in indices and user permissions.

## Manage data views [managing-data-views]

- To customize the data fields in your data view, you can add runtime fields to the existing documents, add scripted fields to compute data on the fly, and change how {{kib}} displays the data fields.
+ To customize the fields in your data view, you can add runtime fields to the existing documents, add scripted fields to compute data on the fly, and change how {{kib}} displays the data view fields.

### Explore your data with runtime fields [runtime-fields]
@@ -347,9 +336,9 @@ doc['field_name'].value
For more information on scripted fields and additional examples, refer to [Using Painless in {{kib}} scripted fields](https://www.elastic.co/blog/using-painless-kibana-scripted-fields)

- #### Migrate to runtime fields or ES|QL queries [migrate-off-scripted-fields]
+ #### Migrate to runtime fields or {{esql}} queries [migrate-off-scripted-fields]

- The following code snippets demonstrate how an example scripted field called `computed_values` on the Kibana Sample Data Logs data view could be migrated to either a runtime field or an ES|QL query, highlighting the differences between each approach.
+ The following code snippets demonstrate how an example scripted field called `computed_values` on the Kibana Sample Data Logs data view could be migrated to either a runtime field or an {{esql}} query, highlighting the differences between each approach.

##### Scripted field [scripted-field-example]
@@ -463,9 +452,9 @@ Built-in validation is unsupported for scripted fields. When your scripts contai
- ### Format data fields [managing-fields]
+ ### Format data view fields [managing-fields]

- {{kib}} uses the same field types as {{es}}, however, some {{es}} field types are unsupported in {{kib}}. To customize how {{kib}} displays data fields, use the formatting options.
+ {{kib}} uses the same field types as {{es}}, however, some {{es}} field types are unsupported in {{kib}}. To customize how {{kib}} displays data view fields, use the formatting options.

1. Go to the **Data Views** management page using the navigation menu or the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Click the data view that contains the field you want to change.
@@ -474,7 +463,7 @@ Built-in validation is unsupported for scripted fields. When your scripts contai
5. Select **Set format**, then enter the **Format** for the field.

::::{note}
- For numeric fields the default field formatters are based on the `meta.unit` field. The unit is associated with a [time unit](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units), percent, or byte. The convention for percents is to use value 1 to mean 100%.
+ For numeric fields, the default field formatters are based on the `meta.unit` field. The unit is associated with a [time unit](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units), percent, or byte. The convention for percents is to use value 1 to mean 100%.
::::
@@ -699,4 +688,4 @@ Some data views are exclusively configured and **managed** by Elastic. You can v
4. Select **Duplicate**. A Similar flyout opens where you can adjust the settings of the new copy of the managed data view.

- 5. Finalize your edits, then select **Save data view to Kibana** or **Use without saving**, depending on your needs. By saving it to {{kib}}, you can retrieve it and use it again later.
+ 5. Finalize your edits, then select **Save data view to Kibana** or **Use without saving**, depending on your needs. By saving it to {{kib}}, you can retrieve it and use it again later.
: The name of the pattern that will match your text. For example, `NUMBER` and `IP` are both patterns that are provided within the default patterns set. The `NUMBER` pattern matches data like `3.44`, and the `IP` pattern matches data like `55.3.244.1`.
@@ -62,14 +62,14 @@ If you need help building grok patterns to match your data, use the [Grok Debugg
- For example, if you’re working with Apache log data, you can use the `%{{COMMONAPACHELOG}}` syntax, which understands the structure of Apache logs. A sample document might look like this:
+ For example, if you’re working with Apache log data, you can use the `%{COMMONAPACHELOG}` syntax, which understands the structure of Apache logs. A sample document might look like this:

- To extract the IP address from the `message` field, you can write a Painless script that incorporates the `%{{COMMONAPACHELOG}}` syntax. You can test this script using the [`ip` field context](elasticsearch://reference/scripting-languages/painless/painless-api-examples.md#painless-runtime-ip) of the Painless execute API, but let’s use a runtime field instead.
+ To extract the IP address from the `message` field, you can write a Painless script that incorporates the `%{COMMONAPACHELOG}` syntax. You can test this script using the [`ip` field context](elasticsearch://reference/scripting-languages/painless/painless-api-examples.md#painless-runtime-ip) of the Painless execute API, but let’s use a runtime field instead.

Based on the sample document, index the `@timestamp` and `message` fields. To remain flexible, use `wildcard` as the field type for `message`:
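A minimal sketch of such a mapping, following the grok runtime-field pattern from the {{es}} documentation — the index name `my-index` and runtime field name `http.clientip` are illustrative:

```console
PUT my-index/
{
  "mappings": {
    "runtime": {
      "http.clientip": {
        "type": "ip",
        "script": """
          String clientip = grok('%{COMMONAPACHELOG}').extract(doc["message"].value)?.clientip;
          if (clientip != null) emit(clientip);
        """
      }
    },
    "properties": {
      "@timestamp": { "type": "date" },
      "message": { "type": "wildcard" }
    }
  }
}
```

The runtime field evaluates the grok pattern at query time, emitting an `ip` value only when `message` matches the Apache log structure.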
`manage-data/_snippets/ilm-status.md` (1 addition, 1 deletion)
@@ -1,4 +1,4 @@
- To see the current status of the {{ilm-init}} service, use the [{{ilm-init}} status API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-get-status):
+ To view the current status of the {{ilm-init}} service, use the [{{ilm-init}} status API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-get-status):
`manage-data/_snippets/ilm-stop.md` (3 additions, 3 deletions)
@@ -3,7 +3,7 @@ By default, the {{ilm}} service is in the `RUNNING` state and manages all indice
You can stop {{ilm-init}} to suspend management operations for all indices. For example, you might stop {{ilm}} when performing scheduled maintenance or making changes to the cluster that could impact the execution of {{ilm-init}} actions.

::::{important}
- When you stop {{ilm-init}}, [{{slm-init}}](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm) operations are also suspended. No snapshots will be taken as scheduled until you restart {{ilm-init}}. In-progress snapshots are not affected.
+ When you stop {{ilm-init}}, [{{slm-init}}](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm) operations are also suspended. {{slm-init}} will not take snapshots as scheduled until you restart {{ilm-init}}. In-progress snapshots are not affected.
::::

To stop the {{ilm-init}} service and pause execution of all lifecycle policies, use the [{{ilm-init}} stop API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop):
@@ -12,7 +12,7 @@ To stop the {{ilm-init}} service and pause execution of all lifecycle policies,
```console
POST _ilm/stop
```

- The response will look like this:
+ The response looks like this:

```console-result
{
  "acknowledged": true
}
```
@@ -28,7 +28,7 @@ While the {{ilm-init}} service is shutting down, run the status API to verify th
`manage-data/data-store.md` (1 addition, 1 deletion)
@@ -18,4 +18,4 @@ Then, learn how these documents and the fields they contain are stored and index
You can also read more about working with {{es}} as a data store including how to use [index templates](/manage-data/data-store/templates.md) to tell {{es}} how to configure an index when it is created, how to use [aliases](/manage-data/data-store/aliases.md) to point to multiple indices, and how to use the [command line to manage data](/manage-data/data-store/manage-data-from-the-command-line.md) stored in {{es}}.

- If your use case involves working with continuous streams of time series data, you may consider using a [data stream](./data-store/data-streams.md). These are optimally suited for storing append-only data. The data can be accessed through a single, named resource, while it is stored in a series of hidden, auto-generated backing indices.
+ If your use case involves working with continuous streams of time series data, you can consider using a [data stream](./data-store/data-streams.md). These are optimally suited for storing append-only data. You can access the data through a single, named resource, while {{es}} stores it in a series of hidden, auto-generated backing indices.
`manage-data/data-store/data-streams.md` (1 addition, 1 deletion)
@@ -106,7 +106,7 @@ When a backing index is created, the index is named using the following conventi
Some operations, such as a [shrink](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-shrink) or [restore](../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md), can change a backing index’s name. These name changes do not remove a backing index from its data stream.

- The generation of the data stream can change without a new index being added to the data stream (e.g. when an existing backing index is shrunk). This means the backing indices for some generations will never exist. You should not derive any intelligence from the backing indices names.
+ The generation of the data stream can change without a new index being added to the data stream (for example, when an existing backing index is shrunk). This means the backing indices for some generations will never exist. You should not derive any intelligence from the backing indices names.
`manage-data/data-store/data-streams/failure-store-recipes.md` (6 additions, 6 deletions)
@@ -307,7 +307,7 @@ Without tags in place it would not be as clear where in the pipeline the indexin
## Alerting on failed ingestion [failure-store-examples-alerting]

- Since failure stores can be searched just like a normal data stream, we can use them as inputs to [alerting rules](../../../explore-analyze/alerts-cases/alerts.md) in
+ Since failure stores can be searched like a normal data stream, we can use them as inputs to [alerting rules](../../../explore-analyze/alerts-cases/alerts.md) in
{{kib}}. Here is a simple alerting example that is triggered when more than ten indexing failures have occurred in the last five minutes for a data stream:

:::::{stepper}
@@ -382,7 +382,7 @@ We recommend a few best practices for remediating failure data.
**Use an ingest pipeline to convert failure documents back into their original document.** Failure documents store failure information along with the document that failed ingestion. The first step for remediating documents should be to use an ingest pipeline to extract the original source from the failure document and then discard any other information about the failure.

- **Simulate first to avoid repeat failures.** If you must run a pipeline as part of your remediation process, it is best to simulate the pipeline against the failure first. This will catch any unforeseen issues that may fail the document a second time. Remember, ingest pipeline failures will capture the document before an ingest pipeline is applied to it, which can further complicate remediation when a failure document becomes nested inside a new failure. The easiest way to simulate these changes is via the [pipeline simulate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) or the [simulate ingest API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-simulate-ingest).
+ **Simulate first to avoid repeat failures.** If you must run a pipeline as part of your remediation process, it is best to simulate the pipeline against the failure first. This will catch any unforeseen issues that may fail the document a second time. Remember, ingest pipeline failures will capture the document before an ingest pipeline is applied to it, which can further complicate remediation when a failure document becomes nested inside a new failure. The easiest way to simulate these changes is using the [pipeline simulate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) or the [simulate ingest API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-simulate-ingest).
@@ -511,7 +511,7 @@ Because ingest pipeline failures need to be reprocessed by their original pipeli
```
1. The `data.id` field is expected to be present. If it isn't present this pipeline will fail.

- Fixing a failure's root cause is a often a bespoke process. In this example, instead of discarding the data, we will make this identifier field optional.
+ Fixing a failure's root cause is often a bespoke process. In this example, instead of discarding the data, we will make this identifier field optional.

```console
PUT _ingest/pipeline/my-datastream-default-pipeline
@@ -658,7 +658,7 @@ POST _ingest/pipeline/_simulate
]
}
```

- 1. The index has been updated via the reroute processor.
+ 1. The index has been updated through the reroute processor.
2. The document ID has stayed the same.
3. The source should cleanly match the contents of the original document.
@@ -995,7 +995,7 @@ PUT _ingest/pipeline/my-datastream-remediation-pipeline
2. Capture the source of the original document.
3. Discard the `error` field since it wont be needed for the remediation.
4. Also discard the `document` field.

- 5. We extract all the fields from the original document's source back to the root of the document. The `@timestamp` field is not overwritten and thus will be present in the final document.
+ 5. We extract all the fields from the original document's source back to the root of the document. The `@timestamp` field is not overwritten and will be present in the final document.

:::{important}
Remember that a document that has failed during indexing has already been processed by the ingest processor! It shouldn't need to be processed again unless you made changes to your pipeline to fix the original problem. Make sure that any fixes applied to the ingest pipeline are reflected in the pipeline logic here.
@@ -1088,7 +1088,7 @@ Caused by: j.l.IllegalArgumentException: data stream timestamp field [@timestamp
]
}
```

- 1. The index has been updated via the script processor.
+ 1. The index has been updated through the script processor.
2. The source should reflect any fixes and match the expected document shape for the final index.
3. In this example case, we find that the failure timestamp has stayed in the source.