deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md (5 additions & 0 deletions)
@@ -352,6 +352,9 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro
 `manage_data_stream_lifecycle`
 : All [Data stream lifecycle](../../../manage-data/lifecycle/data-stream.md) operations relating to reading and managing the built-in lifecycle of a data stream. This includes operations such as adding and removing a lifecycle from a data stream.
 
+`manage_failure_store`
+: All `monitor` privileges plus index and data stream administration limited to failure stores only.
+
 `manage_follow_index`
 : All actions that are required to manage the lifecycle of a follower index, which includes creating a follower index, closing it, and converting it to a regular index. This privilege is necessary only on clusters that contain follower indices.
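For illustration, a role that grants the new privilege could be defined like this (a minimal sketch; the role name and index pattern are hypothetical):

```console
POST _security/role/failure-store-manager
{
  "indices": [
    {
      "names": [ "my-datastream" ],
      "privileges": [ "manage_failure_store" ]
    }
  ]
}
```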
@@ -381,6 +384,8 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro
 
 This privilege is not available in {{serverless-full}}.
 
+`read_failure_store`
+: Read-only access to actions performed on a data stream's failure store. Required for access to failure store data (count, explain, get, mget, get indexed scripts, more like this, multi percolate/search/termvector, percolate, scroll, clear_scroll, search, suggest, tv).
 
 `view_index_metadata`
 : Read-only access to index and data stream metadata (aliases, exists, field capabilities, field mappings, get index, get data stream, ilm explain, mappings, search shards, settings, validate query). This privilege is available for use primarily by {{kib}} users.
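A user holding `read_failure_store` could then query failure data directly, along these lines (a sketch that assumes the `::failures` index selector; the data stream name is hypothetical):

```console
GET my-datastream::failures/_search
{
  "query": {
    "match_all": {}
  }
}
```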
-Ingest processors can be labeled with [tags](./failure-store.md). These tags are user-provided information that names or describes the processor's purpose in the pipeline. When documents are redirected to the failure store due to a processor issue, they capture the tag from the processor in which the failure occurred, if it exists. Because of this behavior, it is a good practice to tag the processors in your pipeline so that the location of a failure can be identified quickly.
+Ingest processors can be labeled with tags. These tags are user-provided information that names or describes the processor's purpose in the pipeline. When documents are redirected to the failure store due to a processor issue, they capture the tag from the processor in which the failure occurred, if it exists. Because of this behavior, it is a good practice to tag the processors in your pipeline so that the location of a failure can be identified quickly.
 
 Here we have a needlessly complicated pipeline. It is made up of several set and remove processors. Beneficially, they are all tagged with descriptive names.
 ```console
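The pipeline definition itself is truncated in this hunk; as a minimal sketch of the practice (the pipeline name and fields are hypothetical), tagged processors look like this:

```console
PUT _ingest/pipeline/my-datastream-pipeline
{
  "processors": [
    {
      "set": {
        "tag": "set-event-category",
        "field": "event.category",
        "value": "web"
      }
    },
    {
      "remove": {
        "tag": "remove-raw-message",
        "field": "raw_message",
        "ignore_missing": true
      }
    }
  ]
}
```

If either processor fails, the failure document records that processor's `tag`, pointing directly at the failing step.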
@@ -281,7 +281,8 @@ Without tags in place it would not be as clear where in the pipeline the indexin
 
 ## Alerting on failed ingestion [failure-store-recipes-alerting]
 
-Since failure stores can be searched just like a normal data stream, we can use them as inputs to [alerting rules](./failure-store.md) in Kibana. Here is a simple alerting example that is triggered when more than ten indexing failures have occurred in the last five minutes for a data stream:
+Since failure stores can be searched just like a normal data stream, we can use them as inputs to [alerting rules](../../../explore-analyze/alerts-cases/alerts.md) in
+{{kib}}. Here is a simple alerting example that is triggered when more than ten indexing failures have occurred in the last five minutes for a data stream:
 
 :::::{stepper}
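As a sketch of the kind of query such a rule might run (assuming the `::failures` index selector and a hypothetical data stream name), this counts failures recorded in the last five minutes:

```console
GET my-datastream::failures/_count
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-5m" }
    }
  }
}
```

The rule would then fire whenever the returned count exceeds ten.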
@@ -349,7 +350,7 @@ Care should be taken when replaying data into a data stream from a failure store
 
 We recommend a few best practices for remediating failure data.
 
-**Separate your failures beforehand.** As described in the [failure document source](#use-failure-store-document-source) section above, failure documents are structured differently depending on when the document failed during ingestion. We recommend to separate documents by ingest pipeline failures and indexing failures at minimum. Ingest pipeline failures often need to have the original pipeline re-run, while index failures should skip any pipelines. Further separating failures by index or specific failure type may also be beneficial.
+**Separate your failures beforehand.** As described in the previous [failure document source](./failure-store.md#use-failure-store-document-source) section, failure documents are structured differently depending on when the document failed during ingestion. We recommend separating documents by ingest pipeline failures and indexing failures at minimum. Ingest pipeline failures often need to have the original pipeline re-run, while index failures should skip any pipelines. Further separating failures by index or specific failure type may also be beneficial.
 
 **Perform a failure store rollover.** Consider rolling over the failure store before attempting to remediate failures. This will create a new failure index that will collect any new failures during the remediation process.
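For example, assuming the rollover API accepts the `::failures` selector (the data stream name is hypothetical), the rollover could be as simple as:

```console
POST my-datastream::failures/_rollover
```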
@@ -544,7 +545,7 @@ PUT _ingest/pipeline/my-datastream-remediation-pipeline
 ::::
 
 ::::{step} Test your pipelines
-Before sending data off to be reindexed, be sure to test the pipelines in question with an example document to make sure they work. First, test to make sure the resulting document from the remediation pipeline is shaped how you expect. We can use the [simulate pipeline API](./failure-store.md) for this.
+Before sending data off to be reindexed, test the pipelines in question with an example document to make sure they work. First, confirm that the resulting document from the remediation pipeline is shaped how you expect. We can use the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) for this.
 
 ```console
 POST _ingest/pipeline/_simulate
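The request body is truncated in this hunk; a minimal simulate call, using the named-pipeline form of the API with a hypothetical test document, might look like:

```console
POST _ingest/pipeline/my-datastream-remediation-pipeline/_simulate
{
  "docs": [
    {
      "_index": "my-datastream",
      "_id": "example-id",
      "_source": {
        "message": "example failed event"
      }
    }
  ]
}
```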
@@ -635,7 +636,7 @@ POST _ingest/pipeline/_simulate
 2. The document ID has stayed the same.
 3. The source should cleanly match the contents of the original document.
 
-Now that the remediation pipeline has been tested, be sure to test the end-to-end ingestion to verify that no further problems will arise. To do this, we will use the [simulate ingestion API](./failure-store.md) to test multiple pipeline executions.
+Now that the remediation pipeline has been tested, be sure to test the end-to-end ingestion to verify that no further problems will arise. To do this, we will use the [simulate ingestion API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-simulate-ingest) to test multiple pipeline executions.
 
 ```console
 POST _ingest/_simulate?pipeline=my-datastream-remediation-pipeline <1>
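A minimal sketch of such a request (the test document is hypothetical):

```console
POST _ingest/_simulate?pipeline=my-datastream-remediation-pipeline
{
  "docs": [
    {
      "_index": "my-datastream",
      "_source": {
        "message": "example failed event"
      }
    }
  ]
}
```

This exercises the remediation pipeline together with the pipelines the target data stream would normally apply, so problems surface before a real reindex.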
@@ -733,7 +734,7 @@ POST _ingest/_simulate?pipeline=my-datastream-remediation-pipeline <1>
 ::::
 
 ::::{step} Reindex the failure documents
-Combine the remediation pipeline with the failure store query together in a [reindex operation](./failure-store.md) to replay the failures.
+Combine the remediation pipeline with the failure store query in a [reindex operation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) to replay the failures.
 
 ```console
 POST _reindex
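The full request is truncated in this hunk; a minimal sketch (assuming the `::failures` selector, with the pipeline name taken from the earlier steps) might be:

```console
POST _reindex
{
  "source": {
    "index": "my-datastream::failures"
  },
  "dest": {
    "index": "my-datastream",
    "op_type": "create",
    "pipeline": "my-datastream-remediation-pipeline"
  }
}
```

Note that `op_type` must be `create` when writing into a data stream.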
@@ -816,7 +817,7 @@ Since the failure store is enabled on this data stream, it would be wise to chec
 
 ### Remediating mapping and shard failures [failure-store-recipes-remediation-mapping]
 
-As described in the previous [failure document source](#use-failure-store-document-source) section, failures that occur due to a mapping or indexing issue will be stored as they were after any pipelines had executed. This means that to replay the document into the data stream we will need to make sure to skip any pipelines that have already run.
+As described in the previous [failure document source](./failure-store.md#use-failure-store-document-source) section, failures that occur due to a mapping or indexing issue will be stored as they were after any pipelines had executed. This means that to replay the document into the data stream we will need to skip any pipelines that have already run.
 
 :::{tip}
 You can greatly simplify this remediation process by writing your ingest pipelines to be idempotent. In that case, any document that has already been processed and passes through a pipeline again would be unchanged.
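For instance, guarding a processor with a condition is one way to make a pipeline idempotent (a sketch; the pipeline and field names are hypothetical):

```console
PUT _ingest/pipeline/my-idempotent-pipeline
{
  "processors": [
    {
      "rename": {
        "tag": "rename-msg",
        "field": "msg",
        "target_field": "message",
        "if": "ctx.containsKey('msg')"
      }
    }
  ]
}
```

Re-running this pipeline on an already-processed document is a no-op rather than a failure on the missing `msg` field.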
@@ -976,7 +977,7 @@ Remember that a document that has failed during indexing has already been proces
 ::::
 
 ::::{step} Test your pipeline
-Before sending data off to be reindexed, be sure to test the remedial pipeline with an example document to make sure it works. Most importantly, make sure the resulting document from the remediation pipeline is shaped how you expect. We can use the [simulate pipeline API](./failure-store.md) for this.
+Before sending data off to be reindexed, test the remediation pipeline with an example document to make sure it works. Most importantly, confirm that the resulting document is shaped how you expect. We can use the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) for this.
 
 ```console
 POST _ingest/pipeline/_simulate
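As an illustrative sketch matching the missing `@timestamp` failure shown in the following hunk header (the document body is hypothetical, and `override: false` keeps the fix idempotent):

```console
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "set": {
          "tag": "restore-timestamp",
          "field": "@timestamp",
          "value": "{{{_ingest.timestamp}}}",
          "override": false
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "example event missing a timestamp"
      }
    }
  ]
}
```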
@@ -1067,7 +1068,7 @@ Caused by: j.l.IllegalArgumentException: data stream timestamp field [@timestamp
 ::::
 
 ::::{step} Reindex the failure documents
-Combine the remediation pipeline with the failure store query together in a [reindex operation](./failure-store.md) to replay the failures.
+Combine the remediation pipeline with the failure store query in a [reindex operation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) to replay the failures.
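A minimal sketch of that reindex (assuming the `::failures` selector; the `error.type` value is illustrative, and explicitly naming a pipeline prevents the data stream's default pipeline from running again):

```console
POST _reindex
{
  "source": {
    "index": "my-datastream::failures",
    "query": {
      "term": {
        "error.type": "document_parsing_exception"
      }
    }
  },
  "dest": {
    "index": "my-datastream",
    "op_type": "create",
    "pipeline": "my-datastream-remediation-pipeline"
  }
}
```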