AWS for Fluent Bit 2.31.0
This release includes:
- Fluent Bit 1.9.10
- Amazon CloudWatch Logs for Fluent Bit 1.9.1
- Amazon Kinesis Streams for Fluent Bit 1.10.1
- Amazon Kinesis Firehose for Fluent Bit 1.7.1
Compared to 2.30.0, this release adds the following fixes and features that we are working on getting accepted upstream:
- Feature - Add `kinesis_firehose` and `kinesis_streams` support for `time_key_format` milliseconds with the `%3N` option, and nanoseconds with the `%9N` and `%L` options fluent-bit:2831
- Bug - Format the S3 filename with the timestamp from the first log in the uploaded file, rather than the time the first log was buffered by the S3 output aws-for-fluent-bit:459
- Enhancement - Transition S3 to fully synchronous file uploads to improve plugin stability fluent-bit:6573
- Bug - Resolve S3 logic to display the `log_key` missing warning message if the configured `log_key` field is missing from the log payload fluent-bit:6557
- Bug - ECS Metadata filter gracefully handles task metadata query errors and caches metadata processing state to improve performance aws-for-fluent-bit:505
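As a sketch of the new `time_key_format` sub-second support, a Kinesis Streams output section could look like the following. The stream name and region are placeholder assumptions for illustration, not values from the release notes:

```ini
# Hypothetical sketch: add a millisecond-precision timestamp key to each
# record sent to Kinesis Data Streams. Stream name and region are placeholders.
[OUTPUT]
    Name             kinesis_streams
    Match            *
    region           us-east-1
    stream           my-example-stream
    time_key         time
    # %3N appends milliseconds; %9N or %L may be used for nanoseconds instead
    time_key_format  %Y-%m-%dT%H:%M:%S.%3N
```

The same `time_key_format` specifiers apply to the `kinesis_firehose` output.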
Same as 2.30.0, this release includes the following fixes and features that we are working on getting accepted upstream:
- Feature - Support OpenSearch Serverless data ingestion via OpenSearch plugin fluent-bit:6448
- Bug - Mitigate Datadog output plugin issue by reverting recent PR aws-for-fluent-bit:491
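For the OpenSearch Serverless feature, an OpenSearch output section might look like the sketch below. The collection endpoint, index name, and region are assumptions for illustration; the serverless service name is typically selected via the plugin's AWS service-name option:

```ini
# Hypothetical sketch: send logs to an Amazon OpenSearch Serverless collection.
# Host, Index, and region are placeholders, not values from the release notes.
[OUTPUT]
    Name              opensearch
    Match             *
    Host              my-collection-id.us-east-1.aoss.amazonaws.com
    Port              443
    Index             my_index
    AWS_Auth          On
    AWS_Region        us-east-1
    AWS_Service_Name  aoss
    tls               On
```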
We ran the newly released image through our ECS load testing framework; the results are below. They provide benchmarks of aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| kinesis_firehose | stdstream | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅ |
| kinesis_firehose | tcp | Log Duplication | ✅ | ✅ | 0%(946) |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | 0%(83093) |
| kinesis_streams | stdstream | Log Duplication | ✅ | 0%(13294) | 2%(495013) |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| kinesis_streams | tcp | Log Duplication | ✅ | ✅ | 0%(26270) |
| s3 | stdstream | Log Loss | ✅ | 3%(541520) | 29%(5305665) |
| s3 | stdstream | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | 9%(1657116) |
| s3 | tcp | Log Duplication | ✅ | ✅ | ✅ |
| plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | 16%(198844) | ✅ |
| cloudwatch_logs | stdstream | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | 7%(43686) | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Duplication | ✅ | ✅ | ✅ |
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- A number in parentheses is the count of affected records out of the total records sent. For example, 0%(1064) under 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, a log duplication percentage that rounds to 0%.
- Log loss is the percentage of data lost and log duplication is the percentage of duplicate logs received at the destination. Your results may differ because they are influenced by many factors, such as different configurations and environment settings. Log duplication is caused exclusively by retries of partially succeeded batches, which means it is random.