Commit 3699091: "images"
1 parent 62e74d5

File tree

2 files changed: 34 additions, 12 deletions


docs/reference/data-streams/failure-store-recipes.asciidoc

Lines changed: 8 additions & 0 deletions
```diff
@@ -310,23 +310,31 @@ Since failure stores can be searched just like a normal data stream, we can use
 If you want to use KQL or Lucene query types, you should first create a data view for your failure store data.
 If you plan to use {esql} or the Query DSL query types, this step is not required.
 Navigate to the data view page in Kibana and add a new data view. Set the index pattern to your failure store using the selector syntax.
+
 image::images/data-streams/failure_store_alerting_create_data_view.png[create a data view using the failure store syntax in the index name]
 
 ===== Step 2: Create new rule
 Navigate to Management / Alerts and Insights / Rules. Create a new rule. Choose the {es} query option.
+
 image::images/data-streams/failure_store_alerting_create_rule.png[create a new alerting rule and select the elasticsearch query option]
 
 ===== Step 3: Pick your query type
 Choose which query type you wish to use
 For KQL/Lucene queries, reference the data view that contains your failure store.
+
 image::images/data-streams/failure_store_alerting_kql.png[use the data view created in the previous step as the input to the kql query]
+
 For Query DSL queries, use the `::failures` suffix on your data stream name.
+
 image::images/data-streams/failure_store_alerting_dsl.png[use the ::failures suffix in the data stream name in the query dsl]
+
 For {esql} queries, use the `::failures` suffix on your data stream name in the `FROM` command.
+
 image::images/data-streams/failure_store_alerting_esql.png[use the ::failures suffix in the data stream name in the from command]
 
 ===== Step 4: Test
 Configure schedule, actions, and details of the alert before saving the rule.
+
 image::images/data-streams/failure_store_alerting_finish.png[complete the rule configuration and save it]
 
 [[data-remediation]]
```
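For context on the `::failures` selector these steps rely on, requests against the failure store might look like the following sketch. The first request searches the failure store with the Query DSL; the second reads it from an ES|QL `FROM` command via the `_query` endpoint. The data stream name `my-datastream` is a placeholder, not taken from this commit:

```console
POST my-datastream::failures/_search
{
  "query": { "match_all": {} }
}

POST _query
{
  "query": "FROM my-datastream::failures | LIMIT 10"
}
```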

docs/reference/data-streams/failure-store.asciidoc

Lines changed: 26 additions & 12 deletions
```diff
@@ -15,12 +15,14 @@ On this page, you'll learn how to set up, use, and manage a failure store, as we
 For examples of how to use failure stores to identify and fix errors in ingest pipelines and your data, refer to <<failure-store-recipes,Using failure stores to address ingestion issues>>.
 
 [discrete]
-=== Set up a data stream failure store [[set-up-failure-store]]
+[[set-up-failure-store]]
+=== Set up a data stream failure store
 
 Each data stream has its own failure store that can be enabled to accept failed documents. By default, this failure store is disabled and any ingestion problems are raised in the response to write operations.
 
 [discrete]
-==== Set up for new data streams [[set-up-failure-store-new]]
+[[set-up-failure-store-new]]
+==== Set up for new data streams
 
 You can specify in a data stream's <<index-templates,index templates>> if it should enable the failure store when it is first created.
 
@@ -52,7 +54,8 @@ PUT _index_template/my-index-template
 After a matching data stream is created, its failure store will be enabled.
 
 [discrete]
-==== Set up for existing data streams [[set-up-failure-store-existing]]
+[[set-up-failure-store-existing]]
+==== Set up for existing data streams
 
 Enabling the failure store via <<index-templates,index templates>> can only affect data streams that are newly created. Existing data streams that use a template are not affected by changes to the template's `data_stream_options` field.
 To modify an existing data stream's options, use the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options[Update data stream options API]:
@@ -82,7 +85,8 @@ PUT _data_stream/my-datastream-existing/_options
 <1> Redirecting failed documents into the failure store will now be disabled.
 
 [discrete]
-==== Enable failure store via cluster setting [[set-up-failure-store-cluster-setting]]
+[[set-up-failure-store-cluster-setting]]
+==== Enable failure store via cluster setting
 
 If you have a large number of existing data streams you may want to enable their failure stores in one place. Instead of updating each of their options individually, set `data_streams.failure_store.enabled` to a list of index patterns in the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings[cluster settings]. Any data streams that match one of these patterns will operate with their failure store enabled.
 
@@ -111,14 +115,16 @@ PUT _data_stream/my-datastream-1/_options
 <1> The failure store for `my-datastream-1` is disabled even though it matches `my-datastream-*`. The data stream options override the cluster setting.
 
 [discrete]
-=== Using a failure store [[use-failure-store]]
+[[use-failure-store]]
+=== Using a failure store
 
 The failure store is meant to ease the burden of detecting and handling failures when ingesting data to {es}. Clients are less likely to encounter unrecoverable failures when writing documents, and developers are more easily able to troubleshoot faulty pipelines and mappings.
 
 For examples of how to use failure stores to identify and fix errors in ingest pipelines and your data, refer to <<failure-store-recipes,Using failure stores to address ingestion issues>>.
 
 [discrete]
-==== Failure redirection [[use-failure-store-redirect]]
+[[use-failure-store-redirect]]
+==== Failure redirection
 
 Once a failure store is enabled for a data stream it will begin redirecting documents that fail due to common ingestion problems instead of returning errors in write operations. Clients are notified in a non-intrusive way when a document is redirected to the failure store.
 
@@ -277,7 +283,8 @@ If the document was redirected to a data stream's failure store but that failed
 <5> The response status is `400 Bad Request` due to the original mapping problem.
 
 [discrete]
-==== Searching failures [[use-failure-store-searching]]
+[[use-failure-store-searching]]
+==== Searching failures
 
 Once you have accumulated some failures, the failure store can be searched much like a regular data stream.
 
@@ -415,7 +422,8 @@ Because the `document.source` field is unmapped, it is absent from the ES|SQL an
 ====
 
 [discrete]
-==== Failure document structure [[use-failure-store-document]]
+[[use-failure-store-document]]
+==== Failure document structure
 Failure documents have a uniform structure that is handled internally by {es}.
 `@timestamp`:: (`date`) The timestamp at which the document encountered a failure in {es}.
 `document`:: (`object`) The document at time of failure. If the document failed in an ingest pipeline, then the document will be the unprocessed version of the document as it arrived in the original indexing request. If the document failed due to a mapping issue, then the document will be as it was after any ingest pipelines were applied to it.
@@ -681,10 +689,14 @@ Caused by: j.l.IllegalArgumentException: For input string: "this field is invali
 The `document` field attempts to show the effective input to whichever process led to the failure occurring. This gives you all the information you need to reproduce the problem.
 
 [discrete]
-=== Manage a data stream's failure store [[manage-failure-store]]
+[[manage-failure-store]]
+=== Manage a data stream's failure store
 Failure data can accumulate in a data stream over time. To help manage this accumulation, most administrative operations that can be done on a data stream can be applied to the data stream's failure store.
 
-==== Failure store rollover [[manage-failure-store-rollover]]
+[discrete]
+[[manage-failure-store-rollover]]
+==== Failure store rollover
+
 A data stream treats its failure store much like a secondary set of <<backing-indices,backing indices>>. Multiple dedicated hidden indices serve search requests for the failure store, while one index acts as the current write index. You can use the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover[rollover] API to rollover the failure store. Much like the regular indices in a data stream, a new write index will be created in the failure store to accept new failure documents.
 
 [source,console]
@@ -707,7 +719,8 @@ POST my-datastream::failures/_rollover
 ----
 
 [discrete]
-==== Failure store lifecycle [[manage-failure-store-lifecycle]]
+[[manage-failure-store-lifecycle]]
+==== Failure store lifecycle
 
 Failure stores have their retention managed using an internal <<data-stream-lifecycle,data stream lifecycle>>. A thirty day (30d) retention is applied to failure store data. You can view the active lifecycle for a failure store index by calling the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream[get data stream API]:
 
@@ -808,7 +821,8 @@ PUT _data_stream/my-datastream/_options
 <2> Set only this data stream's failure store retention to ten days.
 
 [discrete]
-==== Add and remove from failure store [[manage-failure-store-indices]]
+[[manage-failure-store-indices]]
+==== Add and remove from failure store
 
 Failure stores support adding and removing indices from them using the https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-modify-data-stream[Update data streams] API.
 
```
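The `data_streams.failure_store.enabled` cluster setting mentioned in the changed section takes a list of index patterns. As a sketch of how it is applied (the `logs-*` and `metrics-*` patterns are illustrative, not from this commit):

```console
PUT _cluster/settings
{
  "persistent": {
    "data_streams.failure_store.enabled": ["logs-*", "metrics-*"]
  }
}
```

Data streams matching these patterns then redirect failed documents to their failure stores instead of returning errors on writes; a data stream's own `_options` still override the cluster setting.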
0 commit comments