
Commit bdce112

outputs: s3: fix 404 and vale issues
Signed-off-by: Lynette Miles <[email protected]>
1 parent 9312bbe commit bdce112

File tree

1 file changed: +3 -4 lines changed


pipeline/outputs/s3.md

Lines changed: 3 additions & 4 deletions
@@ -19,7 +19,7 @@ for details about fetching AWS credentials.
 
 {% hint style="warning" %}
 
-The [Prometheus success/retry/error metrics values](administration/monitoring.md) output by the built-in HTTP server in Fluent Bit are meaningless for S3 output. S3 has its own buffering and retry mechanisms. The Fluent Bit AWS S3 maintainers acknowlege this feature gap, and you can [track issue progress on GitHub](https://github.com/fluent/fluent-bit/issues/6141).
+The [Prometheus success/retry/error metrics values](../../administration/monitoring.md) output by the built-in HTTP server in Fluent Bit are meaningless for S3 output. S3 has its own buffering and retry mechanisms. The Fluent Bit AWS S3 maintainers acknowledge this feature gap, and you can [track issue progress on GitHub](https://github.com/fluent/fluent-bit/issues/6141).
 
 {% endhint %}

@@ -36,7 +36,7 @@ The [Prometheus success/retry/error metrics values](administration/monitoring.md
 | `upload_timeout` | When this amount of time elapses, Fluent Bit uploads and creates a new file in S3. Set to `60m` to upload a new file every hour. | `10m` |
 | `store_dir` | Directory to locally buffer data before sending. When using multipart uploads, data buffers until reaching the `upload_chunk_size`. S3 stores metadata about in-progress multipart uploads in this directory, allowing pending uploads to be completed if Fluent Bit stops and restarts. It stores the current `$INDEX` value if enabled in the S3 key format so the `$INDEX` keeps incrementing from its previous value after Fluent Bit restarts. | `/tmp/fluent-bit/s3` |
 | `store_dir_limit_size` | Size limit for disk usage in S3. Limit the S3 buffers in the `store_dir` to limit disk usage. Use `store_dir_limit_size` instead of `storage.total_limit_size`, which can be used for other plugins. | `0` (unlimited) |
-| `s3_key_format` | Format string for keys in S3. This option supports a UUID, strftime time formatters, and a syntax for selecting parts of the Fluent log tag inspired by the `rewrite_tag` filter. Add `$UUID` in the format string to insert a random string. Add `$INDEX` in the format string to insert an integer that increments each upload. The `$INDEX` value saves in the `store_dir`. Add `$TAG` in the format string to insert the full log tag. Add `$TAG[0]` to insert the first part of the tag in the S3 key. The tag is split into parts using the characters specified with the `s3_key_format_tag_delimiters` option. Add the extension directly after the last piece of the format string to insert a key suffix. To specify a key suffix in `use_put_object` mode, you must specify `$UUID`. See [S3 Key Format](#allowing-a-file-extension-in-the-amazon-s3-key-format-with-usduuid). Time in `s3_key` is the timestamp of the first record in the S3 file. | `/fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S` |
+| `s3_key_format` | Format string for keys in S3. This option supports a UUID, strftime time formatters, and a syntax for selecting parts of the Fluent log tag inspired by the `rewrite_tag` filter. Add `$UUID` in the format string to insert a random string. Add `$INDEX` in the format string to insert an integer that increments each upload. The `$INDEX` value saves in the `store_dir`. Add `$TAG` in the format string to insert the full log tag. Add `$TAG[0]` to insert the first part of the tag in the S3 key. The tag is split into parts using the characters specified with the `s3_key_format_tag_delimiters` option. Add the extension directly after the last piece of the format string to insert a key suffix. To specify a key suffix in `use_put_object` mode, you must specify `$UUID`. See [S3 Key Format](#s3-key-format-and-tag-delimiters). Time in `s3_key` is the timestamp of the first record in the S3 file. | `/fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S` |
 | `s3_key_format_tag_delimiters` | A series of characters used to split the tag into parts for use with the `s3_key_format` option. | `.` |
 | `static_file_path` | Disables behavior where a UUID string appends to the end of the S3 key name when `$UUID` isn't provided in `s3_key_format`. `$UUID`, time formatters, `$TAG`, and other dynamic key formatters all work as expected while this feature is set to `true`. | `false` |
 | `use_put_object` | Use the S3 `PutObject` API instead of the multipart upload API. When enabled, the key extension is only available when `$UUID` is specified in `s3_key_format`. If `$UUID` isn't included, a random string appends to the format string and the key extension can't be customized. | `false` |
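
For reference, the parameters in the table above map onto an `[OUTPUT]` section of a Fluent Bit classic configuration. The following sketch isn't part of this commit, and the bucket name, region, size limit, and key format are illustrative values only:

```text
[OUTPUT]
    Name                         s3
    Match                        *
    # Illustrative bucket and region; replace with your own.
    bucket                       your-bucket
    region                       us-east-1
    # Buffer locally before uploading; cap local disk usage.
    store_dir                    /tmp/fluent-bit/s3
    store_dir_limit_size         500M
    # Roll a new S3 object at least once an hour.
    upload_timeout               60m
    # $TAG[0] is the first tag part (split on the delimiters below);
    # $UUID makes each key unique and allows a custom suffix.
    s3_key_format                /fluent-bit-logs/$TAG[0]/%Y/%m/%d/%H/%M/%S/$UUID
    s3_key_format_tag_delimiters .-
    use_put_object               false
```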
@@ -370,8 +370,7 @@ On shutdown, S3 output attempts to complete all pending uploads. If an upload fa
 
 [MinIO](https://min.io/) is a high-performance, S3 compatible object storage and you can build your app with S3 capability without S3.
 
-The following example runs [a MinIO server](https://docs.min.io/docs/minio-quickstart-guide.html)
-at `localhost:9000`, and create a bucket of `your-bucket`.
+The following example runs a MinIO server at `localhost:9000`, and creates a bucket of `your-bucket`.
 
 Example:
 
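The example itself falls outside this hunk. As an illustration only (the commands, the default root credentials, and the `endpoint` parameter are assumptions here, not taken from the commit), a local MinIO setup might look like:

```text
# Start a MinIO server on port 9000 with explicit root credentials.
docker run -p 9000:9000 \
  -e MINIO_ROOT_USER=minioadmin -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data

# Create the bucket with the MinIO client.
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/your-bucket
```

Fluent Bit would then point the S3 output at MinIO instead of AWS, for example by adding `endpoint http://localhost:9000` to the `[OUTPUT]` section and exporting the same credentials as `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.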