Doc: Add topic and expand info for in-memory queue #13246
[[memory-queue]]
=== Memory queue

By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events.
If Logstash experiences a temporary machine failure, the contents of the memory queue are lost.
Temporary machine failures are scenarios in which Logstash or its host machine is terminated abnormally but can be restarted.

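For example, the queue type is selected with the `queue.type` setting in `logstash.yml`.
A minimal sketch that makes the default explicit:

[source,yaml]
----
# logstash.yml
# "memory" is the default; set to "persisted" to use persistent queues instead.
queue.type: memory
----
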
[[mem-queue-benefits]]
==== Benefits of memory queues

The memory queue might be a good choice if you value throughput over data resiliency.

* Easier configuration
* Easier management and administration
* Faster throughput

[[mem-queue-limitations]]
==== Limitations of memory queues

* Can lose data in abnormal termination
* Don't handle sudden bursts of data well, where extra capacity is needed for {ls} to catch up
* Not a good choice for data you can't afford to lose

TIP: Consider using <<persistent-queues,persistent queues>> to avoid these limitations.

[[sizing-mem-queue]]
==== Memory queue size

Memory queue size is not configured directly.
Multiply the `pipeline.batch.size` and `pipeline.workers` values to get the size of the memory queue.
This value is called the "inflight count."

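For example, a sketch of the relevant `logstash.yml` settings with illustrative values
(not recommendations):

[source,yaml]
----
# logstash.yml -- example values only
pipeline.workers: 4        # worker threads; defaults to the number of CPU cores
pipeline.batch.size: 125   # events per worker batch; 125 is the default
# Inflight count (memory queue size) = 4 workers * 125 events = 500 events
----
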
[[backpressure-mem-queue]]
==== Back pressure

When the queue is full, Logstash puts back pressure on the inputs to stall data
flowing into Logstash.
This mechanism helps Logstash control the rate of data flow at the input stage
without overwhelming outputs like Elasticsearch.

ToDo: Is the next paragraph accurate for MQ?

Each input handles back pressure independently.
For example, when the
<<plugins-inputs-beats,beats input>> encounters back pressure, it no longer
accepts new connections.
It waits until the queue has space to accept more events.
After the filter and output stages finish processing existing
events in the queue and acknowledge (ACK) them, Logstash automatically starts
accepting new events.

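As an illustrative sketch (the port and hosts values below are placeholders), consider a
pipeline with a beats input feeding Elasticsearch. If the output slows down and the memory
queue fills, the beats input stops accepting new connections until the queue drains:

[source,ruby]
----
input {
  beats {
    port => 5044                          # placeholder port
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]   # placeholder host
  }
}
----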