
Commit 2d66f90

more spacing
1 parent 3699091 commit 2d66f90

1 file changed

docs/reference/data-streams/failure-store-recipes.asciidoc

Lines changed: 12 additions & 0 deletions
@@ -8,6 +8,7 @@ When something goes wrong during ingestion it is often not an isolated event. In
When a document fails in an ingest pipeline it can be difficult to figure out exactly what went wrong and where. When these failures are captured by the failure store during this part of the ingestion process, they will contain additional debugging information. Failed documents will note the type of processor and which pipeline was executing when the failure occurred. Failed documents will also contain a pipeline trace which keeps track of any nested pipeline calls that the document was in at time of failure.

To demonstrate this, we will follow a failed document through an unfamiliar data stream and ingest pipeline:
+
[source,console]
----
POST my-datastream-ingest/_doc
@@ -18,6 +19,7 @@ POST my-datastream-ingest/_doc
}
}
----
+
[source,console-result]
----
{
@@ -38,10 +40,12 @@ POST my-datastream-ingest/_doc
<1> The document was sent to the failure store.

Now we search the failure store to check the failure document to see what went wrong.
+
[source,console]
----
GET my-datastream-ingest::failures/_search
----
+
[source,console-result]
----
{
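
The interesting part of the response is the failure document itself, which pairs the original `document` with an `error` object holding the debugging metadata described above (`error.pipeline`, `error.pipeline_trace`, `error.processor_type`). The diff shows only the opening brace of the response, so the abridged hit below is an illustrative sketch: the index name, message text, and source values are invented for this example rather than taken from the commit.

[source,console-result]
----
{
  "_source": {
    "@timestamp": "2025-01-01T12:00:00.000Z",
    "document": {
      "index": "my-datastream-ingest",
      "source": {
        "counter_field": 42
      }
    },
    "error": {
      "type": "illegal_argument_exception",
      "message": "field [counter] not present as part of path [counter]",
      "pipeline": "ingest-step-2",
      "pipeline_trace": [
        "ingest-step-1",
        "ingest-step-2"
      ],
      "processor_type": "set"
    }
  }
}
----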
@@ -129,10 +133,12 @@ GET _ingest/pipeline/ingest-step-2
<2> This field was missing from the document at this point.

There is only a set processor in the `ingest-step-2` pipeline so this is likely not where the root problem is. Remembering the `pipeline_trace` field on the failure we find that `ingest-step-1` was the original pipeline called for this document. It is likely the data stream's default pipeline. Pulling its definition we find the following:
+
[source,console]
----
GET _ingest/pipeline/ingest-step-1
----
+
[source,console-result]
----
{
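
The `pipeline_trace` carries two entries here because ingest pipelines can call one another through the `pipeline` processor, and the trace records each pipeline the document passed through before failing. The real definitions of `ingest-step-1` and `ingest-step-2` are not visible in this diff; the sketch below is only a minimal, hypothetical illustration of such a chain.

[source,console]
----
PUT _ingest/pipeline/ingest-step-1
{
  "description": "Hypothetical default pipeline that hands off to ingest-step-2",
  "processors": [
    {
      "rename": {
        "field": "counter_field",
        "target_field": "counter",
        "ignore_missing": true
      }
    },
    {
      "pipeline": {
        "name": "ingest-step-2"
      }
    }
  ]
}
----

A failure raised inside `ingest-step-2` would then list `ingest-step-1` first in its `pipeline_trace`, which is exactly the breadcrumb used above to walk back to the root cause.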
@@ -214,6 +220,7 @@ PUT _ingest/pipeline/complicated-processor
----

We ingest some data and find that it was sent to the failure store.
+
[source,console]
----
POST my-datastream-ingest/_doc?pipeline=complicated-processor
@@ -222,6 +229,7 @@ POST my-datastream-ingest/_doc?pipeline=complicated-processor
  "counter_name": "test"
}
----
+
[source,console-result]
----
{
@@ -240,10 +248,12 @@ POST my-datastream-ingest/_doc?pipeline=complicated-processor
}
----
On checking the failure, we can quickly identify the tagged processor that caused the problem.
+
[source,console]
----
GET my-datastream-ingest::failures/_search
----
+
[source,console-result]
----
{
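
The `tag` option that every ingest processor accepts is what makes this lookup quick: when a tagged processor fails, the tag is recorded on the failure document alongside the processor type, so similar-looking processors can be told apart. The actual `complicated-processor` definition is truncated in this diff; the pipeline below is a hypothetical sketch of the pattern, with an invented pipeline name and tags.

[source,console]
----
PUT _ingest/pipeline/tagging-example
{
  "processors": [
    {
      "set": {
        "tag": "initialize-counter",
        "field": "counter",
        "value": 0,
        "override": false
      }
    },
    {
      "script": {
        "tag": "increment-counter",
        "source": "ctx.counter += 1"
      }
    }
  ]
}
----

With distinct tags in place, the failure can name `increment-counter` directly instead of reporting only a generic `script` processor failure.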
@@ -357,6 +367,7 @@ Failures that occurred during ingest processing will be stored as they were befo

====== Step 1: Separate out which failures to replay
Start off by constructing a query that can be used to consistently identify which failures will be remediated.
+
[source,console]
----
POST my-datastream-ingest-example::failures/_search
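
The query body is truncated in this diff. The shape of such a query is ordinary search DSL run against the `::failures` selector; the sketch below is one hypothetical way to pin down a batch of failures by error type and a time cutoff (the `error.type` field name and all values are assumptions for illustration).

[source,console]
----
POST my-datastream-ingest-example::failures/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "error.type": "illegal_argument_exception" } },
        { "range": { "@timestamp": { "lte": "2025-06-01T00:00:00Z" } } }
      ]
    }
  }
}
----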
@@ -459,6 +470,7 @@ Take note of the documents that are returned. We can use these to simulate that

====== Step 2: Fix the original problem
Because ingest pipeline failures need to be reprocessed by their original pipelines, any problems with those pipelines should be fixed before remediating failures. Investigating the pipeline mentioned in the example above shows that there is a processor that expects a field to be present that is not always present.
+
[source,console-result]
----
{
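
A common remedy for a field that is not always present is to make the offending processor tolerant before replaying anything, either with `ignore_missing` where the processor supports it or with an `if` condition that guards execution. The real pipeline, processors, and field names are not visible in this diff; the sketch below is a hypothetical illustration of both options.

[source,console]
----
PUT _ingest/pipeline/example-fixed-pipeline
{
  "processors": [
    {
      "rename": {
        "field": "counter_field",
        "target_field": "counter",
        "ignore_missing": true
      }
    },
    {
      "set": {
        "if": "ctx.counter == null",
        "field": "counter",
        "value": 0
      }
    }
  ]
}
----

Once the pipeline no longer fails on the missing field, the failures selected in Step 1 can be replayed through it.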

0 commit comments