I think the Fluent Bit community should work towards a higher bar for releases, to ensure stability and improve user confidence.
The most common use case for Fluent Bit users is collecting k8s log files. It would be really cool if we had automated testing prior to releases that did the following:
- deploy the release candidate to a k8s node and collect logs
- use kubernetes filter to decorate with metadata
- some of the logs should be multiline
- testing custom parsers would be ideal as well
- as time goes on, we can add other common use cases
- send the logs via some open source, non-vendor output plugin, like forward or http. The destination receiving the logs should validate that all logs emitted by the k8s applications arrived, that they carry k8s metadata, and that they are in the right format.
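As a sketch of the destination-side validation described above (the record shape mirrors what the kubernetes filter typically attaches, but the function name and required-key set are illustrative, not part of any existing Fluent Bit test harness):

```python
# Keys the kubernetes filter is expected to attach to each record
# (illustrative subset; adjust to the filter's actual output).
REQUIRED_K8S_KEYS = {"pod_name", "namespace_name", "container_name"}

def validate_received(records, expected_count):
    """Check that no logs were lost and every record carries k8s metadata.

    `records` is a list of dicts as decoded from the forward/http payload.
    Returns a list of human-readable problems; an empty list means the
    batch passed the test.
    """
    problems = []
    if len(records) != expected_count:
        problems.append(
            f"expected {expected_count} records, received {len(records)}"
        )
    for i, rec in enumerate(records):
        kube = rec.get("kubernetes")
        if not isinstance(kube, dict):
            problems.append(f"record {i} has no kubernetes metadata")
            continue
        missing = REQUIRED_K8S_KEYS - kube.keys()
        if missing:
            problems.append(f"record {i} missing {sorted(missing)}")
    return problems
```

A release-candidate run would feed this with every record the destination received and fail the test if the returned list is non-empty.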
This way, we test each release candidate against real-world use cases before releasing it.
We could have two types of tests:
- Performance tests: Send logs at a reasonably high rate for a short period of time and check that they all end up at the destination. We should set a minimum performance bar for each release. Over time, this could be expanded into automated benchmarking for releases: we measure the max throughput of each release in some common use case, require it to meet the minimum bar, and publish the final result (which should be above the bar) in the release notes.
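The pass/fail rule for a performance run could be as simple as the following sketch (the function name and the example numbers are hypothetical, not official project thresholds):

```python
def meets_performance_bar(sent, received, duration_s, min_rate):
    """Pass only if no logs were dropped and sustained throughput
    (records per second) is at or above the release's minimum bar.

    `sent`/`received` are record counts, `duration_s` is the test
    duration in seconds, `min_rate` is the minimum records/sec bar.
    """
    if received != sent:
        return False  # any dropped or duplicated logs fail the run
    return sent / duration_s >= min_rate
```

For example, 600,000 records delivered intact over 60 seconds clears a 5,000 records/sec bar; the measured rate (here 10,000 rec/s) is what would go into the release notes.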
- Stability tests: Run Fluent Bit in the k8s cluster for some non-trivial period of time. The test fails if it crashes or restarts. For patch/bug releases, we can set a small time frame, so that these tests can be run overnight. For minor version releases with new features, we would set a higher bar, e.g. Fluent Bit must run without restarts for 3-5 days.
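The "no restarts" check could key off container restart counts reported by the cluster. A minimal sketch, assuming the Fluent Bit pods can be listed with `kubectl get pods -o json` (the function name is hypothetical):

```python
import json

def total_restarts(pod_list_json):
    """Sum container restart counts from `kubectl get pods -o json` output.

    A non-zero total at the end of the soak period fails the stability test.
    """
    pods = json.loads(pod_list_json)["items"]
    return sum(
        cs.get("restartCount", 0)
        for pod in pods
        for cs in pod.get("status", {}).get("containerStatuses", [])
    )
```

In practice this would be fed from something like `kubectl get pods -n logging -l app=fluent-bit -o json` (the namespace and label selector here are assumptions about how the DaemonSet is deployed).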