Buffer in multi-tenant scenario #4176
kaiohenricunha asked this question in Q&A
What are the best practices for setting up the Fluentd buffer in a multi-tenant scenario?
I have used the fluent-operator to set up a multi-tenant Fluent Bit and Fluentd logging solution, where Fluent Bit collects and enriches the logs and Fluentd aggregates and ships them to AWS OpenSearch.
The operator uses a label router to separate logs from different tenants.
In my cluster, every time a new application is deployed via its Helm chart, the chart applies the following resources:
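Roughly like this, as a simplified sketch of a namespaced FluentdConfig plus Output pair; the names, labels, and OpenSearch endpoint below are illustrative, and the exact CRD fields may differ between fluent-operator versions:

```yaml
# Simplified sketch of the per-application resources (illustrative values)
apiVersion: fluentd.fluent.io/v1alpha1
kind: FluentdConfig
metadata:
  name: my-app-fluentd-config            # illustrative name
  namespace: my-app
  labels:
    config.fluentd.fluent.io/enabled: "true"
spec:
  outputSelector:
    matchLabels:
      output.fluentd.fluent.io/enabled: "my-app"
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: Output
metadata:
  name: my-app-opensearch                # illustrative name
  namespace: my-app
  labels:
    output.fluentd.fluent.io/enabled: "my-app"
spec:
  outputs:
    - opensearch:                        # OpenSearch-compatible output; exact plugin key depends on operator version
        host: my-domain.us-east-1.es.amazonaws.com   # illustrative endpoint
        port: 443
        scheme: https
        logstashFormat: true
```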
So, for every new application a new `<match>` section is created, and with it a new buffer configuration for that application (see the sketch below). To sum it up, I'll have a separate buffer for every pod that enables log collection in its Helm chart.
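The rendered Fluentd config ends up with one routing label and one match section per application, something like this (the label name, endpoint, and buffer path are illustrative, not my exact generated config):

```
# Illustrative sketch of what gets rendered per application
<label @my-app>                        # one routing label per tenant/application
  <match **>
    @type opensearch
    host my-domain.us-east-1.es.amazonaws.com
    port 443
    scheme https
    logstash_format true
    logstash_prefix my-app
    <buffer>
      @type file
      path /buffers/my-app             # a separate buffer directory per application
      flush_interval 5s
      flush_thread_count 2
      chunk_limit_size 8MB
      total_limit_size 512MB
    </buffer>
  </match>
</label>
```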
If I had to configure a single buffer for the whole cluster, I would use something like this:
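Something along these lines, as a sketch (the buffer path is just an example):

```
<buffer>
  @type file
  path /fluentd/buffers/opensearch     # example path
  chunk_limit_size 256MB               # file-buffer default
  total_limit_size 64GB                # file-buffer default
  flush_interval 60s                   # default flush interval
  flush_thread_count 1                 # default number of flush threads
  overflow_action throw_exception      # default overflow behaviour
  retry_type exponential_backoff       # default retry strategy
  retry_wait 1s
  retry_timeout 72h
</buffer>
```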
This buffer is based on the default values from Fluentd's documentation.
But this is obviously not scalable. I cannot have dozens or maybe even hundreds of applications/pods, each with the above buffer configuration, because it would exhaust Fluentd's resources.
How can I define a base "micro-buffer" that would be enough for most pods/applications?
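To make the question concrete, this is the direction I am thinking of, with deliberately small per-application limits (the numbers are placeholders, not values I have tested):

```
<buffer>
  @type file
  path /buffers/my-app                 # still one buffer per application, but much smaller
  chunk_limit_size 1MB
  total_limit_size 64MB                # hard cap on disk usage per application
  flush_interval 10s
  flush_thread_count 1
  overflow_action drop_oldest_chunk    # prefer dropping old chunks over blocking the pipeline
</buffer>
```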