docs/reference/data-streams/failure-store.asciidoc (+26 −12)
@@ -15,12 +15,14 @@ On this page, you'll learn how to set up, use, and manage a failure store, as we
 For examples of how to use failure stores to identify and fix errors in ingest pipelines and your data, refer to <<failure-store-recipes,Using failure stores to address ingestion issues>>.
 
 [discrete]
-=== Set up a data stream failure store [[set-up-failure-store]]
+[[set-up-failure-store]]
+=== Set up a data stream failure store
 
 Each data stream has its own failure store that can be enabled to accept failed documents. By default, this failure store is disabled and any ingestion problems are raised in the response to write operations.
 
 [discrete]
-==== Set up for new data streams [[set-up-failure-store-new]]
+[[set-up-failure-store-new]]
+==== Set up for new data streams
 
 You can specify in a data stream's <<index-templates,index templates>> whether it should enable the failure store when it is first created.
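The template body itself is elided in this diff (the next hunk's header shows it lives at `PUT _index_template/my-index-template`). As a minimal sketch, assuming the `data_stream_options.failure_store.enabled` flag mentioned later in this file takes a boolean:

[source,console]
----
PUT _index_template/my-index-template
{
  "index_patterns": ["my-datastream*"],
  "data_stream": {},
  "template": {
    "data_stream_options": {
      "failure_store": {
        "enabled": true <1>
      }
    }
  }
}
----
<1> Data streams created from this template start with their failure store enabled.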
@@ -52,7 +54,8 @@ PUT _index_template/my-index-template
 After a matching data stream is created, its failure store will be enabled.
 
 [discrete]
-==== Set up for existing data streams [[set-up-failure-store-existing]]
+[[set-up-failure-store-existing]]
+==== Set up for existing data streams
 
 Enabling the failure store via <<index-templates,index templates>> can only affect data streams that are newly created. Existing data streams that use a template are not affected by changes to the template's `data_stream_options` field.
 
 To modify an existing data stream's options, use the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options[Update data stream options API]:
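The request body is not shown in this diff; a minimal sketch, assuming the options document mirrors the same `failure_store.enabled` shape used in templates:

[source,console]
----
PUT _data_stream/my-datastream-existing/_options
{
  "failure_store": {
    "enabled": true
  }
}
----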
@@ -82,7 +85,8 @@ PUT _data_stream/my-datastream-existing/_options
 <1> Redirecting failed documents into the failure store will now be disabled.
 
 [discrete]
-==== Enable failure store via cluster setting [[set-up-failure-store-cluster-setting]]
+[[set-up-failure-store-cluster-setting]]
+==== Enable failure store via cluster setting
 
 If you have a large number of existing data streams, you may want to enable their failure stores in one place. Instead of updating each of their options individually, set `data_streams.failure_store.enabled` to a list of index patterns in the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings[cluster settings]. Any data streams that match one of these patterns will operate with their failure store enabled.
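A sketch of that cluster settings update; the exact request is not shown in this hunk, and `my-datastream-*` is the illustrative pattern used elsewhere in this file:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "data_streams.failure_store.enabled": ["my-datastream-*"]
  }
}
----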
@@ -111,14 +115,16 @@ PUT _data_stream/my-datastream-1/_options
 <1> The failure store for `my-datastream-1` is disabled even though it matches `my-datastream-*`. The data stream options override the cluster setting.
 
 [discrete]
-=== Using a failure store [[use-failure-store]]
+[[use-failure-store]]
+=== Using a failure store
 
 The failure store is meant to ease the burden of detecting and handling failures when ingesting data to {es}. Clients are less likely to encounter unrecoverable failures when writing documents, and developers are more easily able to troubleshoot faulty pipelines and mappings.
 
 For examples of how to use failure stores to identify and fix errors in ingest pipelines and your data, refer to <<failure-store-recipes,Using failure stores to address ingestion issues>>.
 
 Once a failure store is enabled for a data stream, it will begin redirecting documents that fail due to common ingestion problems instead of returning errors in write operations. Clients are notified in a non-intrusive way when a document is redirected to the failure store.
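Redirected documents can then be inspected by searching the failure store directly, using the same `::failures` selector that the rollover example later in this file uses. A sketch, assuming the selector is accepted by the search API:

[source,console]
----
POST my-datastream::failures/_search
{
  "query": { "match_all": {} }
}
----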
@@ -277,7 +283,8 @@ If the document was redirected to a data stream's failure store but that failed
 <5> The response status is `400 Bad Request` due to the original mapping problem.

@@ … @@
 Failure documents have a uniform structure that is handled internally by {es}.
 
 `@timestamp`:: (`date`) The timestamp at which the document encountered a failure in {es}.
 
 `document`:: (`object`) The document at time of failure. If the document failed in an ingest pipeline, then the document will be the unprocessed version of the document as it arrived in the original indexing request. If the document failed due to a mapping issue, then the document will be as it was after any ingest pipelines were applied to it.
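Putting those fields together, a failure document might look like the following sketch. Only `@timestamp` and `document` come from the field list above; the nested field names and values are illustrative assumptions, not confirmed by this diff:

[source,console-result]
----
{
  "@timestamp": "2025-03-20T14:00:00.000Z", <1>
  "document": { <2>
    "index": "my-datastream",
    "source": { "count": "this field is invalid" }
  },
  "error": { ... } <3>
}
----
<1> When the failure was captured.
<2> The document as it entered the failing step (here, a string sent to a numeric field).
<3> Error details; the exact shape is not shown in this diff.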
@@ -681,10 +689,14 @@ Caused by: j.l.IllegalArgumentException: For input string: "this field is invali
 The `document` field attempts to show the effective input to whichever process led to the failure occurring. This gives you all the information you need to reproduce the problem.
 
 [discrete]
-=== Manage a data stream's failure store [[manage-failure-store]]
+[[manage-failure-store]]
+=== Manage a data stream's failure store
 
 Failure data can accumulate in a data stream over time. To help manage this accumulation, most administrative operations that can be done on a data stream can be applied to the data stream's failure store.
 
-==== Failure store rollover [[manage-failure-store-rollover]]
+[discrete]
+[[manage-failure-store-rollover]]
+==== Failure store rollover
+
 A data stream treats its failure store much like a secondary set of <<backing-indices,backing indices>>. Multiple dedicated hidden indices serve search requests for the failure store, while one index acts as the current write index. You can use the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover[rollover] API to roll over the failure store. Much like the regular indices in a data stream, a new write index will be created in the failure store to accept new failure documents.
 
 [source,console]
@@ -707,7 +719,8 @@ POST my-datastream::failures/_rollover
 ----
 
 [discrete]
-==== Failure store lifecycle [[manage-failure-store-lifecycle]]
+[[manage-failure-store-lifecycle]]
+==== Failure store lifecycle
 
 Failure stores have their retention managed using an internal <<data-stream-lifecycle,data stream lifecycle>>. A thirty-day (30d) retention is applied to failure store data. You can view the active lifecycle for a failure store index by calling the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream[get data stream API]:
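The call itself is simple; the response (elided in this diff) contains the failure-store lifecycle section the text refers to:

[source,console]
----
GET _data_stream/my-datastream
----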
@@ -808,7 +821,8 @@ PUT _data_stream/my-datastream/_options
 <2> Set only this data stream's failure store retention to ten days.
 
 [discrete]
-==== Add and remove from failure store [[manage-failure-store-indices]]
+[[manage-failure-store-indices]]
+==== Add and remove from failure store
 
 Failure stores support adding and removing indices from them using the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-modify-data-stream[Update data streams] API.
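A hedged sketch of such a request, assuming the modify actions accept a flag targeting the failure store; the flag name and the index name below are illustrative assumptions, not confirmed by this diff:

[source,console]
----
POST _data_stream/_modify
{
  "actions": [
    {
      "remove_backing_index": {
        "data_stream": "my-datastream",
        "index": ".fs-my-datastream-000001", <1>
        "failure_store": true <2>
      }
    }
  ]
}
----
<1> Illustrative failure store index name.
<2> Assumed flag marking the action as targeting the failure store rather than the backing indices.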