pipeline/outputs/s3.md
54 additions & 1 deletion
@@ -45,7 +45,8 @@ The [Prometheus success/retry/error metrics values](../../administration/monitor
 |`sts_endpoint`| Custom endpoint for the STS API. |_none_|
 |`profile`| Option to specify an AWS Profile for credentials. |`default`|
 |`canned_acl`|[Predefined Canned ACL policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) for S3 objects. |_none_|
-|`compression`| Compression type for S3 objects. `gzip` is currently the only supported value by default. If Apache Arrow support was enabled at compile time, you can use `arrow`. For gzip compression, the Content-Encoding HTTP Header will be set to `gzip`. Gzip compression can be enabled when `use_put_object` is `on` or `off` (`PutObject` and Multipart). Arrow compression can only be enabled with `use_put_object On`. |_none_|
+|`compression`| Compression/format for S3 objects. Supported values: `gzip` (always available) and `parquet` (requires an Arrow-enabled build). For `gzip`, the `Content-Encoding` HTTP header is set to `gzip`. `parquet` is available **only when Fluent Bit is built with `-DFLB_ARROW=On`** and Arrow GLib and Parquet GLib are installed. Parquet is typically used with `use_put_object On`. |_none_|
+
 |`content_type`| A standard MIME type for the S3 object, set as the Content-Type HTTP header. |_none_|
 |`send_content_md5`| Send the Content-MD5 header with `PutObject` and UploadPart requests, as is required when Object Lock is enabled. |`false`|
 |`auto_retry_requests`| Immediately retry failed requests to AWS services once. This option doesn't affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which can help improve throughput during transient network issues. |`true`|
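
For context (not part of the diff itself), a minimal `[OUTPUT]` section exercising several of the options in this table might look like the following sketch; the bucket name, region, and size/timeout values are placeholders, not recommendations.

```text
[OUTPUT]
    name                  s3
    match                 *
    # Placeholder bucket and region
    bucket                my-example-bucket
    region                us-east-1
    total_file_size       50M
    upload_timeout        10m
    # gzip works with both PutObject and multipart uploads;
    # parquet (Arrow-enabled builds) is typically paired with use_put_object On
    use_put_object        On
    compression           gzip
    content_type          application/json
    send_content_md5      true
    auto_retry_requests   true
```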
@@ -649,3 +650,55 @@ The following example uses `pyarrow` to analyze the uploaded data:
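
The 52 lines added by this second hunk aren't reproduced in this view. Purely as an illustration of the kind of analysis the surrounding text describes, a minimal snippet for inspecting an uploaded Parquet object with `pyarrow` could look like the sketch below; the bucket name and object key are placeholders, and `boto3` and `pyarrow` are assumed to be installed.

```python
import io

import boto3
import pyarrow.parquet as pq

# Placeholder bucket/key: substitute the bucket and object key that the
# s3 output plugin actually wrote to.
s3 = boto3.client("s3")
obj = s3.get_object(
    Bucket="my-example-bucket",
    Key="fluent-bit-logs/example.parquet",
)

# Load the Parquet object into an Arrow table and inspect it.
table = pq.read_table(io.BytesIO(obj["Body"].read()))
print(table.schema)
print(f"rows: {table.num_rows}")
```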