Shared buffer #18380
-
We have Vector deployed in Kubernetes as a Deployment and we want to buffer to disk. Is it possible/advisable to have shared buffer storage between the Vector pods? Any suggestions on how we can keep running them as a Deployment and still buffer to a persistent disk?
-
Hi @camilisette! The current buffer implementation is not designed to be shared between running processes. You could share a volume, but each Vector process would still need its own directory. The typical recommendation for using disk buffers in a Kubernetes deployment is to use a StatefulSet. Alternatively, you could avoid disk buffers and use a queue like Kafka instead (Vector would read and/or write data to/from Kafka, which would act as the durable store).
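To make the StatefulSet suggestion concrete, here is a minimal sketch. The sink and source names (`my_sink`, `my_source`), the `http` sink type, the endpoint, the image tag, the mount paths, and the volume size are all illustrative placeholders, and unrelated options are trimmed; the parts that matter are the `buffer` block and the `volumeClaimTemplates` section.

```yaml
# vector.yaml (abridged) -- enable a disk buffer on a sink.
# Sink/source names, sink type, and endpoint below are placeholders.
data_dir: /vector-data-dir        # disk buffers are written under this directory
sinks:
  my_sink:
    type: http
    inputs: [my_source]
    uri: https://logs.example.com/ingest
    encoding:
      codec: json
    buffer:
      type: disk                  # persist buffered events to data_dir
      max_size: 268435488         # bytes; disk buffers enforce a minimum size
      when_full: block            # apply back-pressure rather than dropping events
```

```yaml
# statefulset.yaml (abridged) -- one PersistentVolumeClaim per Vector pod.
# Image tag, mount paths, and storage size are illustrative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vector
spec:
  serviceName: vector
  replicas: 4
  selector:
    matchLabels:
      app: vector
  template:
    metadata:
      labels:
        app: vector
    spec:
      containers:
        - name: vector
          image: timberio/vector:0.34.0-debian
          args: ["--config", "/etc/vector/vector.yaml"]
          volumeMounts:
            - name: config
              mountPath: /etc/vector
            - name: data
              mountPath: /vector-data-dir        # must match data_dir above
      volumes:
        - name: config
          configMap:
            name: vector-config                  # ConfigMap holding vector.yaml (not shown)
  volumeClaimTemplates:
    - metadata:
        name: data                               # becomes data-vector-0, data-vector-1, ...
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Because `volumeClaimTemplates` gives each replica its own stable PVC (e.g. an EBS volume), every Vector process ends up with a private buffer directory, which is the isolation the current buffer implementation expects.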
-
Hi @jszwedko, just wanted to add to this discussion. If, for example, we were to deploy Vector as a StatefulSet with disk buffering on an EBS volume, would it be advisable to enable horizontal scaling? If so, and Vector scales up from 4 pods to 5 and then back down to 4, what happens to the data that was in the disk buffer of pod 5? Will Vector wait until the buffer is completely empty before it terminates?