Commit ff5d819

Merge branch 'main' into enhance-and-restructure-autoops-section
2 parents 90ee3ba + 1b6803d commit ff5d819

67 files changed (+247, -237)

Lines changed: 2 additions & 2 deletions
@@ -1,3 +1,3 @@
-The {{ecloud}} Terraform provider allows you to provision and manage {{ech}} and {{ece}} deployments as code, and introduce DevOps-driven methodologies to manage and deploy the {{stack}} and solutions.
+The {{ecloud}} Terraform provider allows you to provision and manage {{serverless-full}} projects, {{ech}} and {{ece}} deployments as code, and introduce DevOps-driven methodologies to manage and deploy the {{stack}} and solutions.
 
-To get started, see the [{{ecloud}} Terraform provider documentation](https://registry.terraform.io/providers/elastic/ec/latest/docs).
+To get started, review the [{{ecloud}} Terraform provider documentation](https://registry.terraform.io/providers/elastic/ec/latest/docs) and the [{{ecloud}} Terraform GitHub repository](https://github.com/elastic/terraform-provider-ec) for more guidance.
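As a rough sketch of what provisioning as code looks like with this provider — the attribute names and values below are illustrative and vary across provider versions, so treat this as a starting point rather than the provider's exact schema:

```terraform
terraform {
  required_providers {
    ec = {
      source = "elastic/ec"
    }
  }
}

provider "ec" {
  # An Elastic Cloud API key (illustrative variable name).
  apikey = var.ec_api_key
}

# Hypothetical hosted deployment; the region, version, and template ID
# are example values — check the provider docs for valid options.
resource "ec_deployment" "example" {
  name                   = "my-deployment"
  region                 = "us-east-1"
  version                = "9.0.0"
  deployment_template_id = "aws-general-purpose"

  elasticsearch = {
    hot = {
      autoscaling = {}
    }
  }

  kibana = {}
}
```

Running `terraform apply` against a configuration of this shape creates the deployment; the provider documentation linked above covers the full resource schema.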

deploy-manage/deploy/elastic-cloud/tools-apis.md

Lines changed: 1 addition & 6 deletions
@@ -96,12 +96,7 @@ serverless: unavailable
 :::
 
 
-## Provision deployments with Terraform
-```{applies_to}
-deployment:
-  ess: ga
-  serverless: unavailable
-```
+## Provision projects and deployments with Terraform
 
 :::{include} /deploy-manage/deploy/_snippets/tpec.md
 :::

explore-analyze/find-and-organize/data-views.md

Lines changed: 8 additions & 19 deletions
@@ -13,17 +13,6 @@ products:
 
 # Data views [data-views]
 
-$$$field-formatters-numeric$$$
-
-$$$managing-fields$$$
-
-$$$runtime-fields$$$
-
-$$$management-cross-cluster-search$$$
-
-$$$data-views-read-only-access$$$
-
-
 By default, analytics features such as Discover require a {{data-source}} to access the {{es}} data that you want to explore. A {{data-source}} can point to one or more indices, [data streams](../../manage-data/data-store/data-streams.md), or [index aliases](/manage-data/data-store/aliases.md). For example, a {{data-source}} can point to your log data from yesterday, or all indices that contain your data.
 
 ::::{note}
@@ -176,15 +165,15 @@ Deleting a {{data-source}} breaks all visualizations, saved Discover sessions, a
 2. Find the {{data-source}} that you want to delete, and then click ![Delete icon](/explore-analyze/images/kibana-delete.png "") in the **Actions** column.
 
 
-## {{data-source}} field cache [data-view-field-cache]
+## Data view field cache [data-view-field-cache]
 
 The browser caches {{data-source}} field lists for increased performance. This is particularly impactful for {{data-sources}} with a high field count that span a large number of indices and clusters. The field list is updated every couple of minutes in typical {{kib}} usage. Alternatively, use the refresh button on the {{data-source}} management detail page to get an updated field list. A force reload of {{kib}} has the same effect.
 
 The field list may be impacted by changes in indices and user permissions.
 
 ## Manage data views [managing-data-views]
 
-To customize the data fields in your data view, you can add runtime fields to the existing documents, add scripted fields to compute data on the fly, and change how {{kib}} displays the data fields.
+To customize the fields in your data view, you can add runtime fields to the existing documents, add scripted fields to compute data on the fly, and change how {{kib}} displays the data view fields.
 
 
 ### Explore your data with runtime fields [runtime-fields]
@@ -347,9 +336,9 @@ doc['field_name'].value
 For more information on scripted fields and additional examples, refer to [Using Painless in {{kib}} scripted fields](https://www.elastic.co/blog/using-painless-kibana-scripted-fields).
 
 
-#### Migrate to runtime fields or ES|QL queries [migrate-off-scripted-fields]
+#### Migrate to runtime fields or {{esql}} queries [migrate-off-scripted-fields]
 
-The following code snippets demonstrate how an example scripted field called `computed_values` on the Kibana Sample Data Logs data view could be migrated to either a runtime field or an ES|QL query, highlighting the differences between each approach.
+The following code snippets demonstrate how an example scripted field called `computed_values` on the Kibana Sample Data Logs data view could be migrated to either a runtime field or an {{esql}} query, highlighting the differences between each approach.
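As a generic illustration of the {{esql}} side of such a migration — this computes a hypothetical `hour_of_day` value, not the actual `computed_values` logic:

```console
POST /_query
{
  "query": """
    FROM kibana_sample_data_logs
    | EVAL hour_of_day = DATE_EXTRACT("hour_of_day", @timestamp)
    | KEEP @timestamp, hour_of_day
    | LIMIT 5
  """
}
```

Unlike a scripted field, the computation here lives in the query itself rather than in the data view.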
 
 
 ##### Scripted field [scripted-field-example]
@@ -463,9 +452,9 @@ Built-in validation is unsupported for scripted fields. When your scripts contai
 
 
 
-### Format data fields [managing-fields]
+### Format data view fields [managing-fields]
 
-{{kib}} uses the same field types as {{es}}, however, some {{es}} field types are unsupported in {{kib}}. To customize how {{kib}} displays data fields, use the formatting options.
+{{kib}} uses the same field types as {{es}}; however, some {{es}} field types are unsupported in {{kib}}. To customize how {{kib}} displays data view fields, use the formatting options.
 
 1. Go to the **Data Views** management page using the navigation menu or the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md).
 2. Click the data view that contains the field you want to change.
@@ -474,7 +463,7 @@ Built-in validation is unsupported for scripted fields. When your scripts contai
 5. Select **Set format**, then enter the **Format** for the field.
 
 ::::{note}
-For numeric fields the default field formatters are based on the `meta.unit` field. The unit is associated with a [time unit](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units), percent, or byte. The convention for percents is to use value 1 to mean 100%.
+For numeric fields, the default field formatters are based on the `meta.unit` field. The unit is associated with a [time unit](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units), percent, or byte. The convention for percents is to use the value 1 to mean 100%.
 ::::
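For illustration, `meta.unit` is set in the {{es}} field mapping; the index and field names below are hypothetical:

```console
PUT my-index
{
  "mappings": {
    "properties": {
      "cpu_usage": {
        "type": "double",
        "meta": {
          "unit": "percent"
        }
      }
    }
  }
}
```

With this mapping and the value-1-means-100% convention, a stored value of `0.85` renders as 85% by default.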
 
 
@@ -699,4 +688,4 @@ Some data views are exclusively configured and **managed** by Elastic. You can v
 
 4. Select **Duplicate**. A similar flyout opens where you can adjust the settings of the new copy of the managed data view.
 
-5. Finalize your edits, then select **Save data view to Kibana** or **Use without saving**, depending on your needs. By saving it to {{kib}}, you can retrieve it and use it again later.
+5. Finalize your edits, then select **Save data view to Kibana** or **Use without saving**, depending on your needs. By saving it to {{kib}}, you can retrieve it and use it again later.

explore-analyze/scripting/grok.md

Lines changed: 3 additions & 3 deletions
@@ -20,7 +20,7 @@ The {{stack}} ships with numerous [predefined grok patterns](https://github.com/
 
 | | | |
 | --- | --- | --- |
-| `%{{SYNTAX}}` | `%{SYNTAX:ID}` | `%{SYNTAX:ID:TYPE}` |
+| `%{SYNTAX}` | `%{SYNTAX:ID}` | `%{SYNTAX:ID:TYPE}` |
 
 `SYNTAX`
 : The name of the pattern that will match your text. For example, `NUMBER` and `IP` are both patterns that are provided within the default patterns set. The `NUMBER` pattern matches data like `3.44`, and the `IP` pattern matches data like `55.3.244.1`.
@@ -62,14 +62,14 @@ If you need help building grok patterns to match your data, use the [Grok Debugg
 ::::
 
 
-For example, if you’re working with Apache log data, you can use the `%{{COMMONAPACHELOG}}` syntax, which understands the structure of Apache logs. A sample document might look like this:
+For example, if you’re working with Apache log data, you can use the `%{COMMONAPACHELOG}` syntax, which understands the structure of Apache logs. A sample document might look like this:
 
 ```js
 "timestamp":"2020-04-30T14:30:17-05:00","message":"40.135.0.0 - -
 [30/Apr/2020:14:30:17 -0500] \"GET /images/hm_bg.jpg HTTP/1.0\" 200 24736"
 ```
 
-To extract the IP address from the `message` field, you can write a Painless script that incorporates the `%{{COMMONAPACHELOG}}` syntax. You can test this script using the [`ip` field context](elasticsearch://reference/scripting-languages/painless/painless-api-examples.md#painless-runtime-ip) of the Painless execute API, but let’s use a runtime field instead.
+To extract the IP address from the `message` field, you can write a Painless script that incorporates the `%{COMMONAPACHELOG}` syntax. You can test this script using the [`ip` field context](elasticsearch://reference/scripting-languages/painless/painless-api-examples.md#painless-runtime-ip) of the Painless execute API, but let’s use a runtime field instead.
 
 Based on the sample document, index the `@timestamp` and `message` fields. To remain flexible, use `wildcard` as the field type for `message`:

manage-data/_snippets/ilm-start.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ To restart {{ilm-init}} and resume executing policies, use the [{{ilm-init}} sta
 POST _ilm/start
 ```
 
-The response will look like this:
+The response looks like this:
 
 ```console-result
 {
@@ -20,7 +20,7 @@ Verify that {{ilm}} is now running:
 GET _ilm/status
 ```
 
-The response will look like this:
+The response looks like this:
 
 ```console-result
 {

manage-data/_snippets/ilm-status.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-To see the current status of the {{ilm-init}} service, use the [{{ilm-init}} status API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-get-status):
+To view the current status of the {{ilm-init}} service, use the [{{ilm-init}} status API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-get-status):
 
 ```console
 GET _ilm/status
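For reference, when the service is running normally this call reports the `RUNNING` operation mode, along these lines:

```console-result
{
  "operation_mode": "RUNNING"
}
```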

manage-data/_snippets/ilm-stop.md

Lines changed: 3 additions & 3 deletions
@@ -3,7 +3,7 @@ By default, the {{ilm}} service is in the `RUNNING` state and manages all indice
 You can stop {{ilm-init}} to suspend management operations for all indices. For example, you might stop {{ilm}} when performing scheduled maintenance or making changes to the cluster that could impact the execution of {{ilm-init}} actions.
 
 ::::{important}
-When you stop {{ilm-init}}, [{{slm-init}}](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm) operations are also suspended. No snapshots will be taken as scheduled until you restart {{ilm-init}}. In-progress snapshots are not affected.
+When you stop {{ilm-init}}, [{{slm-init}}](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm) operations are also suspended. {{slm-init}} will not take snapshots as scheduled until you restart {{ilm-init}}. In-progress snapshots are not affected.
 ::::
 
 To stop the {{ilm-init}} service and pause execution of all lifecycle policies, use the [{{ilm-init}} stop API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop):
@@ -12,7 +12,7 @@ To stop the {{ilm-init}} service and pause execution of all lifecycle policies,
 POST _ilm/stop
 ```
 
-The response will look like this:
+The response looks like this:
 
 ```console-result
 {
@@ -28,7 +28,7 @@ While the {{ilm-init}} service is shutting down, run the status API to verify th
 GET _ilm/status
 ```
 
-The response will look like this:
+The response looks like this:
 
 ```console-result
 {

manage-data/data-store.md

Lines changed: 1 addition & 1 deletion
@@ -18,4 +18,4 @@ Then, learn how these documents and the fields they contain are stored and index
 
 You can also read more about working with {{es}} as a data store, including how to use [index templates](/manage-data/data-store/templates.md) to tell {{es}} how to configure an index when it is created, how to use [aliases](/manage-data/data-store/aliases.md) to point to multiple indices, and how to use the [command line to manage data](/manage-data/data-store/manage-data-from-the-command-line.md) stored in {{es}}.
 
-If your use case involves working with continuous streams of time series data, you may consider using a [data stream](./data-store/data-streams.md). These are optimally suited for storing append-only data. The data can be accessed through a single, named resource, while it is stored in a series of hidden, auto-generated backing indices.
+If your use case involves working with continuous streams of time series data, consider using a [data stream](./data-store/data-streams.md). Data streams are optimally suited for storing append-only data. You can access the data through a single, named resource, while {{es}} stores it in a series of hidden, auto-generated backing indices.

manage-data/data-store/data-streams.md

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ When a backing index is created, the index is named using the following conventi
 
 Some operations, such as a [shrink](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-shrink) or [restore](../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md), can change a backing index’s name. These name changes do not remove a backing index from its data stream.
 
-The generation of the data stream can change without a new index being added to the data stream (e.g. when an existing backing index is shrunk). This means the backing indices for some generations will never exist. You should not derive any intelligence from the backing indices names.
+The generation of the data stream can change without a new index being added to the data stream (for example, when an existing backing index is shrunk). This means the backing indices for some generations will never exist. You should not derive any intelligence from the backing index names.
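For context, backing index names follow a `.ds-<data-stream>-<yyyy.MM.dd>-<generation>` pattern, with the generation zero-padded to six digits (for example, `.ds-my-data-stream-2099.03.08-000002`). You can list a stream's current backing indices with the get data stream API (hypothetical stream name):

```console
GET _data_stream/my-data-stream
```

Per the caution above, treat the names in the response as opaque identifiers rather than a reliable record of the stream's history.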
 
 
 ## Append-only (mostly) [data-streams-append-only]

manage-data/data-store/data-streams/failure-store-recipes.md

Lines changed: 6 additions & 6 deletions
@@ -307,7 +307,7 @@ Without tags in place it would not be as clear where in the pipeline the indexin
 
 ## Alerting on failed ingestion [failure-store-examples-alerting]
 
-Since failure stores can be searched just like a normal data stream, we can use them as inputs to [alerting rules](../../../explore-analyze/alerts-cases/alerts.md) in
+Since failure stores can be searched like a normal data stream, we can use them as inputs to [alerting rules](../../../explore-analyze/alerts-cases/alerts.md) in
 {{kib}}. Here is a simple alerting example that is triggered when more than ten indexing failures have occurred in the last five minutes for a data stream:
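The query behind such a rule can be sketched as a count over the failure store for the trailing five minutes — the data stream name is hypothetical, and the `::failures` selector assumes a recent {{es}} version with the failure store enabled:

```console
GET my-datastream::failures/_count
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-5m"
      }
    }
  }
}
```

A count above ten would then trip the rule's threshold.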
 
 :::::{stepper}
@@ -382,7 +382,7 @@ We recommend a few best practices for remediating failure data.
 
 **Use an ingest pipeline to convert failure documents back into their original document.** Failure documents store failure information along with the document that failed ingestion. The first step for remediating documents should be to use an ingest pipeline to extract the original source from the failure document and then discard any other information about the failure.
 
-**Simulate first to avoid repeat failures.** If you must run a pipeline as part of your remediation process, it is best to simulate the pipeline against the failure first. This will catch any unforeseen issues that may fail the document a second time. Remember, ingest pipeline failures will capture the document before an ingest pipeline is applied to it, which can further complicate remediation when a failure document becomes nested inside a new failure. The easiest way to simulate these changes is via the [pipeline simulate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) or the [simulate ingest API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-simulate-ingest).
+**Simulate first to avoid repeat failures.** If you must run a pipeline as part of your remediation process, it is best to simulate the pipeline against the failure first. This catches any unforeseen issues that may fail the document a second time. Remember, ingest pipeline failures capture the document before an ingest pipeline is applied to it, which can further complicate remediation when a failure document becomes nested inside a new failure. The easiest way to simulate these changes is using the [pipeline simulate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) or the [simulate ingest API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-simulate-ingest).
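A dry run against the pipeline simulate API follows this shape — the pipeline name and the document contents below are placeholders:

```console
POST _ingest/pipeline/my-datastream-remediation-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "@timestamp": "2025-01-01T00:00:00.000Z",
        "message": "a captured failure document to test against"
      }
    }
  ]
}
```

The response shows each document as the pipeline would transform it, without writing anything, so a second failure surfaces here instead of in the data stream.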
 
 ### Remediating ingest node failures [failure-store-examples-remediation-ingest]
 
@@ -511,7 +511,7 @@ Because ingest pipeline failures need to be reprocessed by their original pipeli
 ```
 1. The `data.id` field is expected to be present. If it isn't present, this pipeline will fail.
 
-Fixing a failure's root cause is a often a bespoke process. In this example, instead of discarding the data, we will make this identifier field optional.
+Fixing a failure's root cause is often a bespoke process. In this example, instead of discarding the data, we will make this identifier field optional.
 
 ```console
 PUT _ingest/pipeline/my-datastream-default-pipeline
@@ -658,7 +658,7 @@ POST _ingest/pipeline/_simulate
 ]
 }
 ```
-1. The index has been updated via the reroute processor.
+1. The index has been updated through the reroute processor.
 2. The document ID has stayed the same.
 3. The source should cleanly match the contents of the original document.
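For reference, a reroute step of the kind referred to in callout 1 looks like this in isolation — the pipeline name and destination are illustrative:

```console
PUT _ingest/pipeline/my-datastream-reroute-sketch
{
  "processors": [
    {
      "reroute": {
        "destination": "my-datastream"
      }
    }
  ]
}
```

The reroute processor rewrites the document's target index, which is why the simulated output above shows an updated index while the document ID and source are unchanged.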
 
@@ -995,7 +995,7 @@ PUT _ingest/pipeline/my-datastream-remediation-pipeline
 2. Capture the source of the original document.
 3. Discard the `error` field since it won't be needed for the remediation.
 4. Also discard the `document` field.
-5. We extract all the fields from the original document's source back to the root of the document. The `@timestamp` field is not overwritten and thus will be present in the final document.
+5. We extract all the fields from the original document's source back to the root of the document. The `@timestamp` field is not overwritten and will be present in the final document.
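The pipeline body these callouts annotate is elided from this hunk; a rough sketch of the same shape — not the document's exact pipeline — might look like this, assuming the failure document keeps the original source under `document.source`:

```console
PUT _ingest/pipeline/my-datastream-remediation-sketch
{
  "processors": [
    {
      "rename": {
        "description": "Capture the original document's source",
        "field": "document.source",
        "target_field": "_original"
      }
    },
    {
      "remove": {
        "description": "Discard the failure metadata",
        "field": ["error", "document"]
      }
    },
    {
      "script": {
        "description": "Copy original fields back to the root, keeping @timestamp",
        "source": """
          for (entry in ctx['_original'].entrySet()) {
            if (entry.getKey() != '@timestamp') {
              ctx[entry.getKey()] = entry.getValue();
            }
          }
        """
      }
    },
    {
      "remove": {
        "field": "_original"
      }
    }
  ]
}
```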
 
 :::{important}
 Remember that a document that has failed during indexing has already been processed by the ingest processor! It shouldn't need to be processed again unless you made changes to your pipeline to fix the original problem. Make sure that any fixes applied to the ingest pipeline are reflected in the pipeline logic here.
@@ -1088,7 +1088,7 @@ Caused by: j.l.IllegalArgumentException: data stream timestamp field [@timestamp
 ]
 }
 ```
-1. The index has been updated via the script processor.
+1. The index has been updated through the script processor.
 2. The source should reflect any fixes and match the expected document shape for the final index.
 3. In this example, we find that the failure timestamp has stayed in the source.
