
Commit 1ac9c09

jbaiera and kilfoyle authored
Apply suggestions from code review
Co-authored-by: David Kilfoyle <[email protected]>
1 parent 2f4d026 commit 1ac9c09

File tree

1 file changed (+8, −8 lines)

manage-data/data-store/data-streams/failure-store.md

Lines changed: 8 additions & 8 deletions
@@ -177,7 +177,7 @@ POST my-datastream/_bulk
 }
 ```
 
-1. The response code is 200 OK, and the response body does not report any errors encountered.
+1. The response code is `200 OK`, and the response body does not report any errors encountered.
 2. The first document is accepted into the data stream's write index.
 3. The second document encountered a problem during ingest and was redirected to the data stream's failure store.
 4. The response is annotated with a field indicating that the failure store was used to persist the second document.
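The annotation described in this hunk can be read programmatically from a `_bulk` response. The sketch below is illustrative, not Elastic's tooling: the sample response and index names are hypothetical, and it assumes the per-item annotation is a `failure_store` field as described on this page.

```python
# Sketch: scan a _bulk response for items that were redirected to the
# failure store. The sample response below is hypothetical.

def failure_store_items(bulk_response):
    """Return (position, status) pairs for items annotated with `failure_store`."""
    results = []
    for i, item in enumerate(bulk_response.get("items", [])):
        # Each bulk item wraps its result under the action name ("create", "index", ...).
        for action_result in item.values():
            status = action_result.get("failure_store")
            if status is not None:
                results.append((i, status))
    return results

# Hypothetical two-document bulk response: the first document indexed
# normally, the second was redirected to the failure store.
sample = {
    "errors": False,
    "items": [
        {"create": {"_index": ".ds-my-datastream-2025.01.01-000001", "status": 201}},
        {"create": {"_index": ".fs-my-datastream-2025.01.01-000001", "status": 201,
                    "failure_store": "used"}},
    ],
}

print(failure_store_items(sample))  # [(1, 'used')]
```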
@@ -231,7 +231,7 @@ If the document could have been redirected to a data stream's failure store but
 
 1. The failure is returned to the client as normal when the failure store is not enabled.
 2. The response is annotated with a flag indicating the failure store could have accepted the document, but it was not enabled.
-3. Status of 400 Bad Request due to the mapping problem.
+3. The response status is `400 Bad Request` due to the mapping problem.
 
 
 If the document was redirected to a data stream's failure store but that failed document could not be stored (e.g. due to shard unavailability or a similar problem), then the `failure_store` field on the response will be `failed`, and the response will display the error for the original failure, as well as a suppressed error detailing why the failure could not be stored:
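Taken together, the hunks above describe three possible `failure_store` annotation values. A client-side sketch of how they might be handled is below; it assumes the literal values `used`, `not_enabled`, and `failed`, and the handling strings are purely illustrative.

```python
# Sketch: interpret the `failure_store` annotation on a bulk item result,
# following the three cases described in the surrounding docs. Assumes the
# annotation values are the literals "used", "not_enabled", and "failed".

def interpret_failure_store(item_result):
    status = item_result.get("failure_store")
    if status == "used":
        return "document persisted to the failure store"
    if status == "not_enabled":
        return "failure returned to client; enable the failure store to capture it"
    if status == "failed":
        return "redirect attempted but the failure document could not be stored"
    return "no failure store involvement"

print(interpret_failure_store({"failure_store": "failed"}))
```

In the `failed` case, the response also carries the original error plus a suppressed error explaining why the failure document itself could not be stored, so a client would typically log both.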
@@ -273,7 +273,7 @@ If the document was redirected to a data stream's failure store but that failed
 2. The document could not be redirected because the failure store was not able to accept writes at this time due to an unforeseeable issue.
 3. The complete exception tree is present on the response.
 4. The response is annotated with a flag indicating the failure store would have accepted the document, but it was not able to.
-5. Status of 400 Bad Request due to the original mapping problem.
+5. The response status is `400 Bad Request` due to the original mapping problem.
 
 
 ### Searching failures [use-failure-store-searching]
@@ -377,7 +377,7 @@ Caused by: j.l.IllegalArgumentException: For input string: "invalid_text"
 
 1. The document belongs to a failure store index on the data stream.
 2. The failure document timestamp is when the failure occurred in {{es}}.
-3. The document that was sent is captured inside the failure document. Failure documents capture the id of the document at time of failure, along with which data stream the document was being written to, and the contents of the document. The `document.source` fields are unmapped to ensure failures are always captured.
+3. The document that was sent is captured inside the failure document. Failure documents capture the ID of the document at time of failure, along with which data stream the document was being written to, and the contents of the document. The `document.source` fields are unmapped to ensure failures are always captured.
 4. The failure document captures information about the error encountered, like the type of error, the error message, and a compressed stack trace.
 ::::
 
@@ -422,7 +422,7 @@ Failure documents have a uniform structure that is handled internally by {{es}}.
 : (`object`) The document at time of failure. If the document failed in an ingest pipeline, then the document will be the unprocessed version of the document as it arrived in the original indexing request. If the document failed due to a mapping issue, then the document will be as it was after any ingest pipelines were applied to it.
 
 `document.id`
-: (`keyword`) The id of the original document at the time of failure.
+: (`keyword`) The ID of the original document at the time of failure.
 
 `document.routing`
 : (`keyword`, optional) The routing of the original document at the time of failure if it was specified.
@@ -443,7 +443,7 @@ Failure documents have a uniform structure that is handled internally by {{es}}.
 : (`text`) A compressed stack trace from {{es}} for the failure.
 
 `error.type`
-: (`keyword`) The type classification of failure. Values are the same type returned within failed indexing API responses.
+: (`keyword`) The type classification of the failure. Values are the same type returned within failed indexing API responses.
 
 `error.pipeline`
 : (`keyword`, optional) If the failure occurred in an ingest pipeline, this will contain the name of the pipeline.
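Assembled from the field reference edited in these hunks, a minimal failure document might look like the sketch below. All values are hypothetical (the `count`/`invalid_text` pair echoes the parse failure shown earlier on the page); it is not output from a real cluster.

```python
# Sketch: a minimal failure document shaped after the documented fields.
# Every value here is illustrative, not real cluster output.

failure_doc = {
    "@timestamp": "2025-01-01T00:00:00.000Z",  # when the failure occurred in Elasticsearch
    "document": {
        "id": "1",                              # document.id: ID of the original document
        "source": {"count": "invalid_text"},    # document.source: unmapped original contents
    },
    "error": {
        "type": "document_parsing_exception",   # error.type: same types as failed indexing responses
        "message": "failed to parse field [count]",  # hypothetical message
        "stack_trace": "...",                   # error.stack_trace: compressed stack trace (text)
        # error.pipeline appears only when the failure happened in an ingest pipeline
    },
}

# Optional fields (document.routing, error.pipeline) are simply absent
# when they do not apply.
print("routing" in failure_doc["document"])  # False
```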
@@ -601,7 +601,7 @@ GET my-datastream-ingest::failures/_search
 
 We can see that the document failed on the second processor in the pipeline. The first processor would have added a `@timestamp` field. Since the pipeline failed, we find that it has no `@timestamp` field added because it did not save any changes from before the pipeline failed.
 
-The second place failures can occur is during indexing. After the documents have been processed by any applicable pipelines, they are parsed using the index mappings before being indexed into the shard. If a document is sent to the failure store due to a failure in this process, then it will be stored as it was after any ingestion had occurred. This is because the original document is overwritten by the ingest pipeline changes by this point. This has the benefit of being able to see what the document looked like during the mapping and indexing phase of the write operation.
+The second time when failures can occur is during indexing. After the documents have been processed by any applicable pipelines, they are parsed using the index mappings before being indexed into the shard. If a document is sent to the failure store due to a failure in this process, then it will be stored as it was after any ingestion had occurred. This is because, by this point, the original document has already been overwritten by the ingest pipeline changes. This has the benefit of allowing you to see what the document looked like during the mapping and indexing phase of the write operation.
 
 Building on the example above, we send a document that has a text value where we expect a numeric value:
 
@@ -1146,7 +1146,7 @@ Navigate to the data view page in Kibana and add a new data view. Set the index
 ::::
 
 ::::{step} Create new rule
-Navigate to Management / Alerts and Insights / Rules. Create a new rule. Choose the Elasticsearch query option.
+Navigate to Management / Alerts and Insights / Rules. Create a new rule. Choose the {{es}} query option.
 
 :::{image} /manage-data/images/elasticsearch-reference-management_failure_store_alerting_create_rule.png
 :alt: create a new alerting rule and select the elasticsearch query option
