
Commit 727692a

Doc: Add failure store info to data resiliency section
1 parent bd4cec0 commit 727692a

File tree: 1 file changed (+22, −6 lines)


docs/reference/queues-data-resiliency.md

Lines changed: 22 additions & 6 deletions
````diff
@@ -5,17 +5,33 @@ mapped_pages:
 
 # Queues and data resiliency [resiliency]
 
-By default, Logstash uses [in-memory bounded queues](/reference/memory-queue.md) between pipeline stages (inputs → pipeline workers) to buffer events.
+As data flows through the event processing pipeline, {{ls}} may encounter situations that prevent it from delivering events to the configured output. For example, the data might contain unexpected data types, or {{ls}} might terminate abnormally.
 
-As data flows through the event processing pipeline, Logstash may encounter situations that prevent it from delivering events to the configured output. For example, the data might contain unexpected data types, or Logstash might terminate abnormally.
+**Memory queue (MQ)**
+: By default, {{ls}} uses [in-memory bounded queues](/reference/memory-queue.md) between pipeline stages (inputs → pipeline workers) to buffer events.
+Memory queues have [limitations](/reference/memory-queue.md#limitations-of-memory-queues-mem-queue-limitations), but also offer [benefits](/reference/memory-queue.md#benefits-of-memory-queues-mem-queue-benefits) that make them a good choice for many users.
+If memory queues don't offer the resiliency you need, {{ls}} provides more options.
 
-To guard against data loss and ensure that events flow through the pipeline without interruption, Logstash provides data resiliency features.
+## {{ls}} data resiliency options [ls-queues]
 
-* [Persistent queues (PQ)](/reference/persistent-queues.md) protect against data loss by storing events in an internal queue on disk.
-* [Dead letter queues (DLQ)](/reference/dead-letter-queues.md) provide on-disk storage for events that Logstash is unable to process so that you can evaluate them. You can easily reprocess events in the dead letter queue by using the `dead_letter_queue` input plugin.
+To guard against data loss and ensure that events flow through the pipeline without interruption, {{ls}} provides additional data resiliency features.
+These features are disabled by default. To turn on these features, you must explicitly enable them in the {{ls}} [settings file](/reference/logstash-settings-file.md).
 
-These resiliency features are disabled by default. To turn on these features, you must explicitly enable them in the Logstash [settings file](/reference/logstash-settings-file.md).
+**Persistent queues (PQ)**
+: [Persistent queues (PQ)](/reference/persistent-queues.md) protect against data loss by storing events in an internal queue on disk.
 
+**Dead letter queues (DLQ)**
+: [Dead letter queues (DLQ)](/reference/dead-letter-queues.md) provide on-disk storage for events that {{ls}} is unable to process so that you can evaluate them. You can easily reprocess events in the dead letter queue by using the `dead_letter_queue` input plugin.
 
+## {{es}} failure store [es-failure-store]
+```{applies_to}
+serverless: ga
+stack: ga 9.1+
+```
 
+When you use {{ls}} to send data streams to {{es}}, you have an additional option for data resiliency--the {{es}} [failure store](docs-content://manage-data/data-store/data-streams/failure-store.md).
 
+A failure store is a secondary set of indices inside a data stream that is dedicated to storing failed documents.
+When a data stream's failure store is enabled, failures are captured in a separate index and persisted to be analyzed later.
+
+Check out [Failure store](docs-content://manage-data/data-store/data-streams/failure-store.md) for details.
````
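For context on the "explicitly enable them in the settings file" line this commit adds: enabling the persistent queue and dead letter queue is done in `logstash.yml`. A minimal sketch (the setting names `queue.type`, `queue.max_bytes`, `dead_letter_queue.enable`, and `path.dead_letter_queue` are standard Logstash settings; the values shown are illustrative, not recommendations):

```yaml
# logstash.yml -- illustrative values only

# Persistent queue (PQ): buffer events on disk instead of in memory.
queue.type: persisted
queue.max_bytes: 1gb          # cap on-disk queue size per pipeline

# Dead letter queue (DLQ): keep events Logstash could not process.
dead_letter_queue.enable: true
path.dead_letter_queue: /var/lib/logstash/dead_letter_queue
```

Both features default to off (`queue.type: memory`, `dead_letter_queue.enable: false`), matching the doc's statement that they must be explicitly enabled.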
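The doc also mentions reprocessing DLQ events with the `dead_letter_queue` input plugin. A minimal pipeline sketch (the `path` value and output destination are placeholders; `commit_offsets` is a real plugin option that tracks which events have already been replayed):

```
# dlq-replay.conf -- illustrative sketch
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"  # must match path.dead_letter_queue
    pipeline_id => "main"        # which pipeline's DLQ to read
    commit_offsets => true       # remember progress between restarts
  }
}
output {
  # Re-send repaired events, or route them somewhere for inspection.
  stdout { codec => rubydebug }
}
```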
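For the new failure-store section: a data stream's failure store is toggled on the {{es}} side, not in Logstash. A hedged sketch, assuming the data stream options endpoint available in the versions this section applies to (`my-data-stream` is a placeholder name; check the linked failure-store docs for the exact request shape in your version):

```
PUT _data_stream/my-data-stream/_options
{
  "failure_store": {
    "enabled": true
  }
}
```

Once enabled, documents that fail ingestion into `my-data-stream` are captured in the stream's failure indices instead of being rejected, so they can be analyzed later.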
