Commit ccda059

Update links and add privilege info
1 parent fd8939e commit ccda059

File tree

3 files changed (+30 −25 lines)


deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md

Lines changed: 5 additions & 0 deletions
````diff
@@ -352,6 +352,9 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro
 `manage_data_stream_lifecycle`
 : All [Data stream lifecycle](../../../manage-data/lifecycle/data-stream.md) operations relating to reading and managing the built-in lifecycle of a data stream. This includes operations such as adding and removing a lifecycle from a data stream.
 
+`manage_failure_store`
+: All `monitor` privileges plus index and data stream administration limited to failure stores only.
+
 `manage_follow_index`
 : All actions that are required to manage the lifecycle of a follower index, which includes creating a follower index, closing it, and converting it to a regular index. This privilege is necessary only on clusters that contain follower indices.
 
@@ -381,6 +384,8 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro
 
 This privilege is not available in {{serverless-full}}.
 
+`read_failure_store`
+: Read-only access to actions performed on a data stream's failure store. Required for access to failure store data (count, explain, get, mget, get indexed scripts, more like this, multi percolate/search/termvector, percolate, scroll, clear_scroll, search, suggest, tv).
 
 `view_index_metadata`
 : Read-only access to index and data stream metadata (aliases, exists, field capabilities, field mappings, get index, get data stream, ilm explain, mappings, search shards, settings, validate query). This privilege is available for use primarily by {{kib}} users.
````
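A role granting the new privilege can be defined with the create or update roles API. A minimal sketch, assuming a hypothetical role name and data stream pattern:

```console
POST _security/role/failure_store_reader
{
  "indices": [
    {
      "names": [ "my-datastream*" ],
      "privileges": [ "read_failure_store" ]
    }
  ]
}
```

Users assigned such a role could search and count documents in matching failure stores, but could not manage or write to them.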

manage-data/data-store/data-streams/failure-store-recipes.md

Lines changed: 10 additions & 9 deletions
````diff
@@ -143,7 +143,7 @@ We find a remove processor in the first pipeline that is the root cause of the p
 
 ## Troubleshooting complicated ingest pipelines [failure-store-recipes-complicated-ingest-troubleshoot]
 
-Ingest processors can be labeled with [tags](./failure-store.md). These tags are user-provided information that names or describes the processor's purpose in the pipeline. When documents are redirected to the failure store due to a processor issue, they capture the tag from the processor in which the failure occurred, if it exists. Because of this behavior, it is a good practice to tag the processors in your pipeline so that the location of a failure can be identified quickly.
+Ingest processors can be labeled with tags. These tags are user-provided information that names or describes the processor's purpose in the pipeline. When documents are redirected to the failure store due to a processor issue, they capture the tag from the processor in which the failure occurred, if it exists. Because of this behavior, it is a good practice to tag the processors in your pipeline so that the location of a failure can be identified quickly.
 
 Here we have a needlessly complicated pipeline. It is made up of several set and remove processors. Beneficially, they are all tagged with descriptive names.
 ```console
````
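For reference, a tag is a per-processor string set alongside the processor's other options. A minimal sketch of a single tagged processor, with invented pipeline and field names:

```console
PUT _ingest/pipeline/my-tagged-pipeline
{
  "processors": [
    {
      "set": {
        "tag": "set-event-kind",
        "field": "event.kind",
        "value": "event"
      }
    }
  ]
}
```

If this processor fails, documents redirected to the failure store record the `set-event-kind` tag, pointing directly at the failing step.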
````diff
@@ -281,7 +281,8 @@ Without tags in place it would not be as clear where in the pipeline the indexin
 
 ## Alerting on failed ingestion [failure-store-recipes-alerting]
 
-Since failure stores can be searched just like a normal data stream, we can use them as inputs to [alerting rules](./failure-store.md) in Kibana. Here is a simple alerting example that is triggered when more than ten indexing failures have occurred in the last five minutes for a data stream:
+Since failure stores can be searched just like a normal data stream, we can use them as inputs to [alerting rules](../../../explore-analyze/alerts-cases/alerts.md) in
+{{kib}}. Here is a simple alerting example that is triggered when more than ten indexing failures have occurred in the last five minutes for a data stream:
 
 :::::{stepper}
 
````
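A rule like the one described needs a query that counts recent failure documents. One possible shape for such a query, assuming the `::failures` selector and an invented data stream name:

```console
GET my-datastream::failures/_count
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-5m"
      }
    }
  }
}
```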
````diff
@@ -349,7 +350,7 @@ Care should be taken when replaying data into a data stream from a failure store
 
 We recommend a few best practices for remediating failure data.
 
-**Separate your failures beforehand.** As described in the [failure document source](#use-failure-store-document-source) section above, failure documents are structured differently depending on when the document failed during ingestion. We recommend to separate documents by ingest pipeline failures and indexing failures at minimum. Ingest pipeline failures often need to have the original pipeline re-run, while index failures should skip any pipelines. Further separating failures by index or specific failure type may also be beneficial.
+**Separate your failures beforehand.** As described in the previous [failure document source](./failure-store.md#use-failure-store-document-source) section, failure documents are structured differently depending on when the document failed during ingestion. We recommend separating documents by ingest pipeline failures and indexing failures at minimum. Ingest pipeline failures often need to have the original pipeline re-run, while index failures should skip any pipelines. Further separating failures by index or specific failure type may also be beneficial.
 
 **Perform a failure store rollover.** Consider rolling over the failure store before attempting to remediate failures. This will create a new failure index that will collect any new failures during the remediation process.
 
````
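The rollover step described above can be performed against the failure store directly. A sketch, assuming the `::failures` selector and an invented data stream name:

```console
POST my-datastream::failures/_rollover
```

New failures arriving during remediation are then collected in the fresh failure index, keeping them separate from the backlog being replayed.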
````diff
@@ -544,7 +545,7 @@ PUT _ingest/pipeline/my-datastream-remediation-pipeline
 ::::
 
 ::::{step} Test your pipelines
-Before sending data off to be reindexed, be sure to test the pipelines in question with an example document to make sure they work. First, test to make sure the resulting document from the remediation pipeline is shaped how you expect. We can use the [simulate pipeline API](./failure-store.md) for this.
+Before sending data off to be reindexed, be sure to test the pipelines in question with an example document to make sure they work. First, test to make sure the resulting document from the remediation pipeline is shaped how you expect. We can use the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) for this.
 
 ```console
 POST _ingest/pipeline/_simulate
````
````diff
@@ -635,7 +636,7 @@ POST _ingest/pipeline/_simulate
 2. The document ID has stayed the same.
 3. The source should cleanly match the contents of the original document.
 
-Now that the remediation pipeline has been tested, be sure to test the end-to-end ingestion to verify that no further problems will arise. To do this, we will use the [simulate ingestion API](./failure-store.md) to test multiple pipeline executions.
+Now that the remediation pipeline has been tested, be sure to test the end-to-end ingestion to verify that no further problems will arise. To do this, we will use the [simulate ingestion API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-simulate-ingest) to test multiple pipeline executions.
 
 ```console
 POST _ingest/_simulate?pipeline=my-datastream-remediation-pipeline <1>
````
````diff
@@ -733,7 +734,7 @@ POST _ingest/_simulate?pipeline=my-datastream-remediation-pipeline <1>
 ::::
 
 ::::{step} Reindex the failure documents
-Combine the remediation pipeline with the failure store query together in a [reindex operation](./failure-store.md) to replay the failures.
+Combine the remediation pipeline with the failure store query in a [reindex operation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) to replay the failures.
 
 ```console
 POST _reindex
````
````diff
@@ -816,7 +817,7 @@ Since the failure store is enabled on this data stream, it would be wise to chec
 
 ### Remediating mapping and shard failures [failure-store-recipes-remediation-mapping]
 
-As described in the previous [failure document source](#use-failure-store-document-source) section, failures that occur due to a mapping or indexing issue will be stored as they were after any pipelines had executed. This means that to replay the document into the data stream we will need to make sure to skip any pipelines that have already run.
+As described in the previous [failure document source](./failure-store.md#use-failure-store-document-source) section, failures that occur due to a mapping or indexing issue will be stored as they were after any pipelines had executed. This means that to replay the document into the data stream we will need to make sure to skip any pipelines that have already run.
 
 :::{tip}
 You can greatly simplify this remediation process by writing any ingest pipelines to be idempotent. In that case, any document that has already been processed that passes through a pipeline again would be unchanged.
````
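One way to write an idempotent step is the `set` processor's `override` option, which leaves an existing non-null value untouched on a second pass. A sketch with invented pipeline and field names:

```console
PUT _ingest/pipeline/my-idempotent-pipeline
{
  "processors": [
    {
      "set": {
        "field": "event.pipeline",
        "value": "my-idempotent-pipeline",
        "override": false
      }
    }
  ]
}
```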
````diff
@@ -976,7 +977,7 @@ Remember that a document that has failed during indexing has already been proces
 ::::
 
 ::::{step} Test your pipeline
-Before sending data off to be reindexed, be sure to test the remedial pipeline with an example document to make sure it works. Most importantly, make sure the resulting document from the remediation pipeline is shaped how you expect. We can use the [simulate pipeline API](./failure-store.md) for this.
+Before sending data off to be reindexed, be sure to test the remedial pipeline with an example document to make sure it works. Most importantly, make sure the resulting document from the remediation pipeline is shaped how you expect. We can use the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) for this.
 
 ```console
 POST _ingest/pipeline/_simulate
````
````diff
@@ -1067,7 +1068,7 @@ Caused by: j.l.IllegalArgumentException: data stream timestamp field [@timestamp
 ::::
 
 ::::{step} Reindex the failure documents
-Combine the remediation pipeline with the failure store query together in a [reindex operation](./failure-store.md) to replay the failures.
+Combine the remediation pipeline with the failure store query in a [reindex operation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) to replay the failures.
 
 ```console
 POST _reindex
````
