Commit 439171b

[OnWeek] Fix Vale rule warnings in manage-data/data-store
Parent: 6046343

17 files changed: +34 −34 lines changed

manage-data/data-store/data-streams.md

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ When a backing index is created, the index is named using the following conventi

Some operations, such as a [shrink](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-shrink) or [restore](../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md), can change a backing index’s name. These name changes do not remove a backing index from its data stream.

-The generation of the data stream can change without a new index being added to the data stream (e.g. when an existing backing index is shrunk). This means the backing indices for some generations will never exist. You should not derive any intelligence from the backing indices names.
+The generation of the data stream can change without a new index being added to the data stream (for example, when an existing backing index is shrunk). This means the backing indices for some generations will never exist. You should not derive any intelligence from the backing indices names.


## Append-only (mostly) [data-streams-append-only]
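As the changed line notes, a stream's generation can advance without a new backing index appearing. A hedged way to check a stream's current generation (a sketch; `my-data-stream` is a hypothetical stream name, and `filter_path` merely trims the response):

```console
GET _data_stream/my-data-stream?filter_path=data_streams.name,data_streams.generation
```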

manage-data/data-store/data-streams/failure-store-recipes.md

Lines changed: 6 additions & 6 deletions
@@ -307,7 +307,7 @@ Without tags in place it would not be as clear where in the pipeline the indexin

## Alerting on failed ingestion [failure-store-examples-alerting]

-Since failure stores can be searched just like a normal data stream, we can use them as inputs to [alerting rules](../../../explore-analyze/alerts-cases/alerts.md) in
+Since failure stores can be searched like a normal data stream, we can use them as inputs to [alerting rules](../../../explore-analyze/alerts-cases/alerts.md) in
{{kib}}. Here is a simple alerting example that is triggered when more than ten indexing failures have occurred in the last five minutes for a data stream:

:::::{stepper}
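A hedged sketch of the kind of query such a rule can evaluate, reusing the `my-datastream::failures` selector that appears elsewhere in this commit; the greater-than-ten threshold check itself would live in the rule, not the query:

```console
POST _query?format=txt
{
  "query": """
    FROM my-datastream::failures
    | WHERE @timestamp > NOW() - 5 minutes
    | STATS failures = COUNT(*)
  """
}
```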
@@ -382,7 +382,7 @@ We recommend a few best practices for remediating failure data.

**Use an ingest pipeline to convert failure documents back into their original document.** Failure documents store failure information along with the document that failed ingestion. The first step for remediating documents should be to use an ingest pipeline to extract the original source from the failure document and then discard any other information about the failure.

-**Simulate first to avoid repeat failures.** If you must run a pipeline as part of your remediation process, it is best to simulate the pipeline against the failure first. This will catch any unforeseen issues that may fail the document a second time. Remember, ingest pipeline failures will capture the document before an ingest pipeline is applied to it, which can further complicate remediation when a failure document becomes nested inside a new failure. The easiest way to simulate these changes is via the [pipeline simulate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) or the [simulate ingest API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-simulate-ingest).
+**Simulate first to avoid repeat failures.** If you must run a pipeline as part of your remediation process, it is best to simulate the pipeline against the failure first. This will catch any unforeseen issues that may fail the document a second time. Remember, ingest pipeline failures will capture the document before an ingest pipeline is applied to it, which can further complicate remediation when a failure document becomes nested inside a new failure. The easiest way to simulate these changes is using the [pipeline simulate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) or the [simulate ingest API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-simulate-ingest).

### Remediating ingest node failures [failure-store-examples-remediation-ingest]
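The simulate-first advice can be sketched against the `my-datastream-default-pipeline` pipeline named later in this diff; `data.id` is the document field its callout says must be present, and the value `abc123` is hypothetical:

```console
POST _ingest/pipeline/my-datastream-default-pipeline/_simulate
{
  "docs": [
    { "_source": { "data": { "id": "abc123" } } }
  ]
}
```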

@@ -511,7 +511,7 @@ Because ingest pipeline failures need to be reprocessed by their original pipeli
```
1. The `data.id` field is expected to be present. If it isn't present this pipeline will fail.

-Fixing a failure's root cause is a often a bespoke process. In this example, instead of discarding the data, we will make this identifier field optional.
+Fixing a failure's root cause is often a bespoke process. In this example, instead of discarding the data, we will make this identifier field optional.

```console
PUT _ingest/pipeline/my-datastream-default-pipeline
@@ -658,7 +658,7 @@ POST _ingest/pipeline/_simulate
  ]
}
```
-1. The index has been updated via the reroute processor.
+1. The index has been updated through the reroute processor.
2. The document ID has stayed the same.
3. The source should cleanly match the contents of the original document.
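The callouts above refer to a simulation whose request body sits outside this hunk; a minimal hedged sketch of simulating a `reroute` processor (the pipeline body and document here are illustrative, not the commit's exact example):

```console
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      { "reroute": { "destination": "my-datastream" } }
    ]
  },
  "docs": [
    { "_index": "some-other-index", "_id": "1", "_source": { "message": "hello" } }
  ]
}
```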

@@ -995,7 +995,7 @@ PUT _ingest/pipeline/my-datastream-remediation-pipeline
2. Capture the source of the original document.
3. Discard the `error` field since it wont be needed for the remediation.
4. Also discard the `document` field.
-5. We extract all the fields from the original document's source back to the root of the document. The `@timestamp` field is not overwritten and thus will be present in the final document.
+5. We extract all the fields from the original document's source back to the root of the document. The `@timestamp` field is not overwritten and will be present in the final document.

:::{important}
Remember that a document that has failed during indexing has already been processed by the ingest processor! It shouldn't need to be processed again unless you made changes to your pipeline to fix the original problem. Make sure that any fixes applied to the ingest pipeline are reflected in the pipeline logic here.
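The numbered callouts describe the remediation pipeline's shape; a condensed, hedged sketch of steps 2-5 follows, assuming the failure document keeps the original body under `document.source` (the intermediate `original` field name and the Painless script are illustrative, not the page's exact pipeline):

```console
PUT _ingest/pipeline/my-datastream-remediation-pipeline
{
  "processors": [
    { "set": { "field": "original", "copy_from": "document.source" } },
    { "remove": { "field": "error" } },
    { "remove": { "field": "document" } },
    { "script": { "source": "ctx.original.forEach((k, v) -> { if (ctx[k] == null) { ctx[k] = v } }); ctx.remove('original')" } }
  ]
}
```

The null check in the script is what keeps an existing `@timestamp` from being overwritten, matching callout 5.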
@@ -1088,7 +1088,7 @@ Caused by: j.l.IllegalArgumentException: data stream timestamp field [@timestamp
  ]
}
```
-1. The index has been updated via the script processor.
+1. The index has been updated through the script processor.
2. The source should reflect any fixes and match the expected document shape for the final index.
3. In this example case, we find that the failure timestamp has stayed in the source.

manage-data/data-store/data-streams/failure-store.md

Lines changed: 6 additions & 6 deletions
@@ -62,7 +62,7 @@ After a matching data stream is created, its failure store will be enabled.

### Set up for existing data streams [set-up-failure-store-existing]

-Enabling the failure store via [index templates](../templates.md) can only affect data streams that are newly created. Existing data streams that use a template are not affected by changes to the template's `data_stream_options` field.
+Enabling the failure store using [index templates](../templates.md) can only affect data streams that are newly created. Existing data streams that use a template are not affected by changes to the template's `data_stream_options` field.
To modify an existing data stream's options, use the [put data stream options](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options) API:

```console
@@ -96,7 +96,7 @@ PUT _data_stream/my-datastream-existing/_options
You can also enable the data stream failure store in {{kib}}. Locate the data stream on the **Streams** page, where a stream maps directly to a data stream. Select a stream to view its details and go to the **Retention** tab where you can find the **Enable failure store** option.
:::

-### Enable failure store via cluster setting [set-up-failure-store-cluster-setting]
+### Enable failure store using cluster setting [set-up-failure-store-cluster-setting]

If you have a large number of existing data streams you may want to enable their failure stores in one place. Instead of updating each of their options individually, set `data_streams.failure_store.enabled` to a list of index patterns in the [cluster settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Any data streams that match one of these patterns will operate with their failure store enabled.

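The cluster-level toggle described above can be sketched as follows; the setting name comes from the text, while the index patterns are hypothetical:

```console
PUT _cluster/settings
{
  "persistent": {
    "data_streams.failure_store.enabled": ["logs-*", "metrics-*"]
  }
}
```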
@@ -257,7 +257,7 @@ If the document could have been redirected to a data stream's failure store but
3. The response status is `400 Bad Request` due to the mapping problem.


-If the document was redirected to a data stream's failure store but that failed document could not be stored (e.g. due to shard unavailability or a similar problem), then the `failure_store` field on the response will be `failed`, and the response will display the error for the original failure, as well as a suppressed error detailing why the failure could not be stored:
+If the document was redirected to a data stream's failure store but that failed document could not be stored (for example, due to shard unavailability or a similar problem), then the `failure_store` field on the response will be `failed`, and the response will display the error for the original failure, as well as a suppressed error detailing why the failure could not be stored:

```console-result
{
@@ -306,7 +306,7 @@ Once you have accumulated some failures, the failure store can be searched much
:::{warning}
Documents redirected to the failure store in the event of a failed ingest pipeline will be stored in their original, unprocessed form. If an ingest pipeline normally redacts sensitive information from a document, then failed documents in their original, unprocessed form may contain sensitive information.

-Furthermore, failed documents are likely to be structured differently than normal data in a data stream, and thus special care should be taken when making use of [document level security](../../../deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md#document-level-security) or [field level security](../../../deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md#field-level-security). Any security policies that expect to utilize these features for both regular documents and failure documents should account for any differences in document structure between the two document types.
+Furthermore, failed documents are likely to be structured differently than normal data in a data stream, and special care should be taken when making use of [document level security](../../../deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md#document-level-security) or [field level security](../../../deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md#field-level-security). Any security policies that expect to utilize these features for both regular documents and failure documents should account for any differences in document structure between the two document types.

To limit visibility on potentially sensitive data, users require the [`read_failure_store`](elasticsearch://reference/elasticsearch/security-privileges.md#privileges-list-indices) index privilege for a data stream in order to search that data stream's failure store data.
:::
@@ -324,7 +324,7 @@ POST _query?format=txt
"query": """FROM my-datastream::failures | DROP error.stack_trace | LIMIT 1""" <1>
}
```
-1. We drop the `error.stack_trace` field here just to keep the example free of newlines.
+1. We drop the `error.stack_trace` field here to keep the example free of newlines.

An example of a search result with the failed document present:

@@ -820,7 +820,7 @@ PUT _cluster/settings
}
```

-You can also specify the failure store retention period for a data stream on its data stream options. These can be specified via the index template for new data streams, or via the [put data stream options](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options) API for existing data streams.
+You can also specify the failure store retention period for a data stream on its data stream options. These can be specified using the index template for new data streams, or using the [put data stream options](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options) API for existing data streams.

```console
PUT _data_stream/my-datastream/_options

manage-data/data-store/data-streams/run-downsampling.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ stack: ga
serverless: ga
```

-To downsample a time series via a [data stream lifecycle](/manage-data/lifecycle/data-stream.md), add a [downsampling](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle) section to the data stream lifecycle (for existing data streams) or the index template (for new data streams).
+To downsample a time series using a [data stream lifecycle](/manage-data/lifecycle/data-stream.md), add a [downsampling](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle) section to the data stream lifecycle (for existing data streams) or the index template (for new data streams).

* Set `fixed_interval` to your preferred level of granularity. The original time series data will be aggregated at this interval.
* Set `after` to the minimum time to wait after an index rollover, before running downsampling.
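Putting the two settings together, a hedged sketch of a lifecycle update for an existing data stream (the stream name, the `30d` retention, and the interval values are hypothetical):

```console
PUT _data_stream/my-time-series-stream/_lifecycle
{
  "data_retention": "30d",
  "downsampling": [
    { "after": "1d", "fixed_interval": "1h" }
  ]
}
```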

manage-data/data-store/mapping.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ products:
% - [x] ./raw-migrated-files/elasticsearch/elasticsearch-reference/index-modules-mapper.md
% Notes: redirect only

-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
+% Internal links rely on the following IDs being on this page (for example, as a heading ID, paragraph ID, and so on):

$$$mapping-limit-settings$$$

manage-data/data-store/mapping/define-runtime-fields-in-search-request.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ products:

You can specify a `runtime_mappings` section in a search request to create runtime fields that exist only as part of the query. You specify a script as part of the `runtime_mappings` section, just as you would if [adding a runtime field to the mappings](map-runtime-field.md).

-Defining a runtime field in a search request uses the same format as defining a runtime field in the index mapping. Just copy the field definition from the `runtime` in the index mapping to the `runtime_mappings` section of the search request.
+Defining a runtime field in a search request uses the same format as defining a runtime field in the index mapping. Copy the field definition from the `runtime` in the index mapping to the `runtime_mappings` section of the search request.

The following search request adds a `day_of_week` field to the `runtime_mappings` section. The field values will be calculated dynamically, and only within the context of this search request:
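A hedged sketch of a `runtime_mappings` search request of that shape (the `@timestamp` source field and the Painless details are assumptions, not necessarily the page's exact example):

```console
GET my-index-000001/_search
{
  "runtime_mappings": {
    "day_of_week": {
      "type": "keyword",
      "script": {
        "source": "emit(doc['@timestamp'].value.dayOfWeekEnum.getDisplayName(TextStyle.FULL, Locale.ENGLISH))"
      }
    }
  },
  "aggs": {
    "day_of_week": { "terms": { "field": "day_of_week" } }
  }
}
```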

manage-data/data-store/mapping/dynamic-templates.md

Lines changed: 3 additions & 3 deletions
@@ -193,7 +193,7 @@ The `match_pattern` parameter adjusts the behavior of the `match` parameter to s
"match": "^profit_\d+$"
```

-The following example matches all `string` fields whose name starts with `long_` (except for those which end with `_text`) and maps them as `long` fields:
+The following example matches all `string` fields whose name starts with `long_` (except for those that end with `_text`) and maps them as `long` fields:

```console
PUT my-index-000001
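A hedged sketch of a dynamic template matching that description (the template name `longs_as_longs` is hypothetical):

```console
PUT my-index-000001
{
  "mappings": {
    "dynamic_templates": [
      {
        "longs_as_longs": {
          "match_mapping_type": "string",
          "match": "long_*",
          "unmatch": "*_text",
          "mapping": { "type": "long" }
        }
      }
    ]
  }
}
```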
@@ -265,7 +265,7 @@ PUT my-index-000001/_doc/1

## `path_match` and `path_unmatch` [path-match-unmatch]

-The `path_match` and `path_unmatch` parameters work in the same way as `match` and `unmatch`, but operate on the full dotted path to the field, not just the final name, e.g. `some_object.*.some_field`.
+The `path_match` and `path_unmatch` parameters work in the same way as `match` and `unmatch`, but operate on the full dotted path to the field, not just the final name, for example, `some_object.*.some_field`.

This example copies the values of any fields in the `name` object to the top-level `full_name` field, except for the `middle` field:

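A hedged sketch of such a template using `copy_to` (the template name and the use of `copy_to` are assumptions about the page's example):

```console
PUT my-index-000001
{
  "mappings": {
    "dynamic_templates": [
      {
        "full_name": {
          "path_match": "name.*",
          "path_unmatch": "*.middle",
          "mapping": { "type": "text", "copy_to": "full_name" }
        }
      }
    ]
  }
}
```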
@@ -342,7 +342,7 @@ PUT my-index-000001/_doc/2
}
```

-Note that the `path_match` and `path_unmatch` parameters match on object paths in addition to leaf fields. As an example, indexing the following document will result in an error because the `path_match` setting also matches the object field `name.title`, which cant be mapped as text:
+The `path_match` and `path_unmatch` parameters match on object paths in addition to leaf fields. As an example, indexing the following document will result in an error because the `path_match` setting also matches the object field `name.title`, which can't be mapped as text:

```console
PUT my-index-000001/_doc/2

manage-data/data-store/mapping/explore-data-with-runtime-fields.md

Lines changed: 2 additions & 2 deletions
@@ -96,7 +96,7 @@ The mapping contains two fields: `@timestamp` and `message`.

If you want to retrieve results that include `clientip`, you can add that field as a runtime field in the mapping. The following runtime script defines a [grok pattern](../../../explore-analyze/scripting/grok.md) that extracts structured fields out of a single text field within a document. A grok pattern is like a regular expression that supports aliased expressions that you can reuse.

-The script matches on the `%{{COMMONAPACHELOG}}` log pattern, which understands the structure of Apache logs. If the pattern matches (`clientip != null`), the script emits the value of the matching IP address. If the pattern doesnt match, the script just returns the field value without crashing.
+The script matches on the `%{{COMMONAPACHELOG}}` log pattern, which understands the structure of Apache logs. If the pattern matches (`clientip != null`), the script emits the value of the matching IP address. If the pattern doesn't match, the script returns the field value without crashing.

```console
PUT my-index-000001/_mappings
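A hedged sketch of such a grok-based runtime field (the `http.clientip` name comes from this diff's surrounding text; the Painless body is an assumption):

```console
PUT my-index-000001/_mappings
{
  "runtime": {
    "http.clientip": {
      "type": "ip",
      "script": {
        "source": """
          String clientip = grok('%{COMMONAPACHELOG}').extract(doc["message"].value)?.clientip;
          if (clientip != null) emit(clientip);
        """
      }
    }
  }
}
```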
@@ -116,7 +116,7 @@ PUT my-index-000001/_mappings
1. This condition ensures that the script doesn’t crash even if the pattern of the message doesn’t match.


-Alternatively, you can define the same runtime field but in the context of a search request. The runtime definition and the script are exactly the same as the one defined previously in the index mapping. Just copy that definition into the search request under the `runtime_mappings` section and include a query that matches on the runtime field. This query returns the same results as if you defined a search query for the `http.clientip` runtime field in your index mappings, but only in the context of this specific search:
+Alternatively, you can define the same runtime field but in the context of a search request. The runtime definition and the script are exactly the same as the one defined previously in the index mapping. Copy that definition into the search request under the `runtime_mappings` section and include a query that matches on the runtime field. This query returns the same results as if you defined a search query for the `http.clientip` runtime field in your index mappings, but only in the context of this specific search:

```console
GET my-index-000001/_search

manage-data/data-store/mapping/index-runtime-field.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ products:

# Index a runtime field [runtime-indexed]

-Runtime fields are defined by the context where they run. For example, you can define runtime fields in the [context of a search query](define-runtime-fields-in-search-request.md) or within the [`runtime` section](map-runtime-field.md) of an index mapping. If you decide to index a runtime field for greater performance, just move the full runtime field definition (including the script) to the context of an index mapping. {{es}} automatically uses these indexed fields to drive queries, resulting in a fast response time. This capability means you can write a script only once, and apply it to any context that supports runtime fields.
+Runtime fields are defined by the context where they run. For example, you can define runtime fields in the [context of a search query](define-runtime-fields-in-search-request.md) or within the [`runtime` section](map-runtime-field.md) of an index mapping. If you decide to index a runtime field for greater performance, move the full runtime field definition (including the script) to the context of an index mapping. {{es}} automatically uses these indexed fields to drive queries, resulting in a fast response time. This capability means you can write a script only once, and apply it to any context that supports runtime fields.

::::{note}
Indexing a `composite` runtime field is currently not supported.
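Moving a runtime field into the index mapping can be sketched as follows (hedged; the field names and script are hypothetical, not the page's example):

```console
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "duration_s": { "type": "long" },
      "duration_ms": {
        "type": "long",
        "script": {
          "source": "emit(doc['duration_s'].value * 1000)"
        }
      }
    }
  }
}
```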
