diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index c2286a470..8949ac40d 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -5,7 +5,7 @@
First of all, thanks for taking the time to read this guide. The fact that you're here means you're interested in contributing to Fluent Bit, and we greatly appreciate your time.
-This repository contains the files for the [Fluent Bit documentation library](https://docs.fluentbit.io/). Keeping these docs separate from the [main Fluent Bit repository](https://github.com/fluent/fluent-bit) helps reduce the number of commits to the Fluent Bit source code and makes it easier to maintain both projects.
+This repository contains the files for the [Fluent Bit documentation library](https://docs.fluentbit.io/manual/). Keeping these docs separate from the [main Fluent Bit repository](https://github.com/fluent/fluent-bit) helps reduce the number of commits to the Fluent Bit source code and makes it easier to maintain both projects.
Fluent Bit has a group of dedicated maintainers who oversee this repository, including several technical writers. These writers will review any pull requests you open, so don't be afraid to contribute, even if you're not a writer by trade. Your suggestions are valuable, and we'll help you wrangle any stray commas.
@@ -66,7 +66,7 @@ The active [linters](#linters) in this repository flag certain style errors and,
## Formatting guidelines
-The Fluent Bit docs library is built and hosted through [GitBook](https://docs.gitbook.com/). Unfortunately, GitBook doesn't support local previews for contributors, but a Fluent Bit maintainer with a dedicated GitBook account can verify that things are formatted correctly after you open a new pull request.
+The Fluent Bit docs library is built and hosted through [GitBook](https://gitbook.com/docs/). Unfortunately, GitBook doesn't support local previews for contributors, but a Fluent Bit maintainer with a dedicated GitBook account can verify that things are formatted correctly after you open a new pull request.
### Line wraps
diff --git a/README.md b/README.md
index c0164d7bf..285aa0041 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ description: High Performance Telemetry Agent for Logs, Metrics and Traces
-[Fluent Bit](http://fluentbit.io) is a fast and lightweight telemetry agent for logs, metrics, and traces for Linux, macOS, Windows, and BSD family operating systems. Fluent Bit has been made with a strong focus on performance to allow the collection and processing of telemetry data from different sources without complexity.
+[Fluent Bit](https://fluentbit.io) is a fast and lightweight telemetry agent for logs, metrics, and traces for Linux, macOS, Windows, and BSD family operating systems. Fluent Bit has been made with a strong focus on performance to allow the collection and processing of telemetry data from different sources without complexity.
## Features
@@ -39,7 +39,7 @@ For more details about changes in each release, refer to the [official release n
## Fluent Bit, Fluentd, and CNCF
-Fluent Bit is a [CNCF](https://cncf.io) graduated sub-project under the umbrella of [Fluentd](http://fluentd.org).
+Fluent Bit is a [CNCF](https://cncf.io) graduated sub-project under the umbrella of [Fluentd](https://www.fluentd.org).
Fluent Bit was originally created by [Eduardo Silva](https://www.linkedin.com/in/edsiper/) and is now sponsored by [Chronosphere](https://chronosphere.io/). As a CNCF-hosted project, it's a fully vendor-neutral and community-driven project.
diff --git a/about/fluentd-and-fluent-bit.md b/about/fluentd-and-fluent-bit.md
index 388e23d03..bceff35cb 100644
--- a/about/fluentd-and-fluent-bit.md
+++ b/about/fluentd-and-fluent-bit.md
@@ -26,7 +26,7 @@ The following table describes a comparison of different areas of the projects:
| Performance | Medium Performance | High Performance |
| Dependencies | Built as a Ruby Gem, depends on other gems. | Zero dependencies, unless required by a plugin. |
| Plugins | Over 1,000 external plugins available. | Over 100 built-in plugins available. |
-| License | [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0) | [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0) |
+| License | [Apache License v2.0](https://apache.org/licenses/LICENSE-2.0) | [Apache License v2.0](https://apache.org/licenses/LICENSE-2.0) |
Both [Fluentd](https://www.fluentd.org) and [Fluent Bit](https://fluentbit.io) can work as Aggregators or Forwarders, and can complement each other or be used as standalone solutions.
diff --git a/administration/aws-credentials.md b/administration/aws-credentials.md
index 9ab87f568..ea93f65e2 100644
--- a/administration/aws-credentials.md
+++ b/administration/aws-credentials.md
@@ -21,7 +21,7 @@ Plugins read the shared `config` file at `$AWS_CONFIG_FILE` (or `$HOME/.aws/conf
The shared settings evaluate in the following order:
| Setting | File | Description |
-|---|---|---|
+| --- | --- | --- |
| `credential_process` | `config` | Linux only. See [Sourcing credentials with an external process in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html). |
| `aws_access_key_id`, `aws_secret_access_key`, `aws_session_token` | `credentials` | Access key ID and secret key to use to authenticate. The session token must be set for temporary credentials. |
@@ -33,7 +33,7 @@ Credentials are fetched using a signed web identity token for a Kubernetes servi
## ECS HTTP credentials endpoint
-Credentials are fetched for the ECS task's role. See [Amazon ECS task IAM role](https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html).
+Credentials are fetched for the ECS task's role. See [Amazon ECS task IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html).
## EKS Pod Identity credentials
diff --git a/administration/configuring-fluent-bit.md b/administration/configuring-fluent-bit.md
index 150b0059b..a4fd8c6db 100644
--- a/administration/configuring-fluent-bit.md
+++ b/administration/configuring-fluent-bit.md
@@ -7,7 +7,7 @@ Fluent Bit uses configuration files to store information about your specified [i
## Unit sizes
-Some configuration settings in Fluent Bit use standardized unit sizes to define data and storage limits. For example, the `buffer_chunk_size` and `buffer_max_size` parameters for the [Tail](../data-pipeline/inputs/tail.md) input plugin use unit sizes.
+Some configuration settings in Fluent Bit use standardized unit sizes to define data and storage limits. For example, the `buffer_chunk_size` and `buffer_max_size` parameters for the [Tail](../pipeline/inputs/tail.md) input plugin use unit sizes.
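+
+As a minimal sketch (the tag and path values are illustrative), a `tail` input could set both parameters using unit sizes in a YAML configuration file:
+
+```yaml
+pipeline:
+  inputs:
+    # Read log data in 32 KB chunks and cap the buffer per monitored file at 5 MB.
+    - name: tail
+      tag: example.logs
+      path: /var/log/example/*.log
+      buffer_chunk_size: 32k
+      buffer_max_size: 5M
+```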
The following table describes the unit sizes you can use and what they mean.
diff --git a/administration/configuring-fluent-bit/classic-mode/configuration-file.md b/administration/configuring-fluent-bit/classic-mode/configuration-file.md
index 117d5bf51..50dbb3053 100644
--- a/administration/configuring-fluent-bit/classic-mode/configuration-file.md
+++ b/administration/configuring-fluent-bit/classic-mode/configuration-file.md
@@ -52,7 +52,7 @@ For scheduler and retry details, see [scheduling and retries](../../scheduling-a
## Config input
-The `INPUT` section defines a source (related to an input plugin). Each [input plugin](https://docs.fluentbit.io/manual/pipeline/inputs) can add its own configuration keys:
+The `INPUT` section defines a source (related to an input plugin). Each [input plugin](../../../pipeline/inputs.md) can add its own configuration keys:
| Key | Description |
| ----------- | ------------|
@@ -160,4 +160,4 @@ Wildcard character (`*`) supports including multiple files. For example:
Files matching the wildcard character are included unsorted. If plugin ordering between files needs to be preserved, the files should be included explicitly.
-Environment variables aren't supported in includes section. The file path must be specified as a literal string.
\ No newline at end of file
+Environment variables aren't supported in the `includes` section. The path to the file must be specified as a literal string.
diff --git a/administration/configuring-fluent-bit/classic-mode/upstream-servers.md b/administration/configuring-fluent-bit/classic-mode/upstream-servers.md
index 8f3e3d3d9..5f590af39 100644
--- a/administration/configuring-fluent-bit/classic-mode/upstream-servers.md
+++ b/administration/configuring-fluent-bit/classic-mode/upstream-servers.md
@@ -1,6 +1,6 @@
# Upstream servers
-Fluent Bit [output plugins](../../../pipeline/outputs/) aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for more of the use cases, but there are other scenarios where balancing across different nodes is required. The `Upstream` feature provides this capability.
+Fluent Bit [output plugins](../../../pipeline/outputs.md) aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for more of the use cases, but there are other scenarios where balancing across different nodes is required. The `Upstream` feature provides this capability.
An `Upstream` defines a set of nodes that will be targeted by an output plugin, by the nature of the implementation an output plugin must support the `Upstream` feature. The following plugin has `Upstream` support:
@@ -60,4 +60,6 @@ The following example defines an `Upstream` called forward-balancing which aims
shared_key secret
```
-Every `Upstream` definition must exists in its own configuration file in the file system. Adding multiple `Upstream` configurations in the same file or different files isn't allowed.
\ No newline at end of file
+Every `Upstream` definition must exist in its own configuration file in the file
+system. Adding multiple `Upstream` configurations in the same file or different files
+isn't allowed.
diff --git a/administration/configuring-fluent-bit/yaml/multiline-parsers-section.md b/administration/configuring-fluent-bit/yaml/multiline-parsers-section.md
index 427cdb398..b5d493815 100644
--- a/administration/configuring-fluent-bit/yaml/multiline-parsers-section.md
+++ b/administration/configuring-fluent-bit/yaml/multiline-parsers-section.md
@@ -1,6 +1,6 @@
# Multiline parsers
-You can define custom [multiline parsers](../../pipeline/parsers/multiline-parsing.md) in the `multiline_parsers` section of YAML configuration files.
+You can define custom [multiline parsers](../../../pipeline/parsers/multiline-parsing.md) in the `multiline_parsers` section of YAML configuration files.
{% hint style="info" %}
@@ -34,4 +34,4 @@ multiline_parsers:
This example defines a multiline parser named `multiline-regex-test` that uses regular expressions to handle multi-event logs. The parser contains two rules: the first rule transitions from `start_state` to cont when a matching log entry is detected, and the second rule continues to match subsequent lines.
-For information about supported configuration options for custom multiline parsers, see [configuring multiline parsers](../../pipeline/parsers/multiline-parsing.md#configuring-multiline-parsers).
+For information about supported configuration options for custom multiline parsers, see [configuring multiline parsers](../../../pipeline/parsers/multiline-parsing.md#configuring-multiline-parsers).
diff --git a/administration/configuring-fluent-bit/yaml/parsers-section.md b/administration/configuring-fluent-bit/yaml/parsers-section.md
index bc78d8d7a..4aec29143 100644
--- a/administration/configuring-fluent-bit/yaml/parsers-section.md
+++ b/administration/configuring-fluent-bit/yaml/parsers-section.md
@@ -1,6 +1,6 @@
# Parsers
-You can define customer [parsers](../pipeline/parsers.md) in the `parsers` section of YAML configuration files.
+You can define custom [parsers](../../../pipeline/parsers.md) in the `parsers` section of YAML configuration files.
{% hint style="info" %}
@@ -35,7 +35,7 @@ parsers:
{% endtab %}
{% endtabs %}
-For information about supported configuration options for custom parsers, see [configuring parsers](../../pipeline/parsers/configuring-parser.md).
+For information about supported configuration options for custom parsers, see [configuring custom parsers](../../../pipeline/parsers/configuring-parser.md).
## Standalone parsers files
@@ -47,10 +47,10 @@ To add a standalone parsers file to Fluent Bit, use the `parsers_file` parameter
To add a standalone parsers file to Fluent Bit, follow these steps.
-1. Define custom parsers in a standalone YAML file. For example, `my-parsers.yaml` defines two custom parsers:
+1. Define custom parsers in a standalone YAML file. For example, `custom-parsers.yaml` defines two custom parsers:
{% tabs %}
-{% tab title="my-parsers.yaml" %}
+{% tab title="custom-parsers.yaml" %}
```yaml
parsers:
diff --git a/administration/configuring-fluent-bit/yaml/service-section.md b/administration/configuring-fluent-bit/yaml/service-section.md
index 7a4e98cb6..3f024a730 100644
--- a/administration/configuring-fluent-bit/yaml/service-section.md
+++ b/administration/configuring-fluent-bit/yaml/service-section.md
@@ -10,9 +10,9 @@ The `service` section of YAML configuration files defines global properties of t
| `dns.mode` | Sets the primary transport layer protocol used by the asynchronous DNS resolver. Can be overridden on a per-plugin basis. | `UDP` |
| `log_file` | Absolute path for an optional log file. By default, all logs are redirected to the standard error interface (`stderr`). | _none_ |
| `log_level` | Sets the logging verbosity level. Possible values: `off`, `error`, `warn`, `info`, `debug`, and `trace`. Values are cumulative. For example, if `debug` is set, it will include `error`, `warning`, `info`, and `debug`. The `trace` mode is only available if Fluent Bit was built with the `WITH_TRACE` option enabled. | `info` |
-| `parsers_file` | Path for [standalone parsers configuration files](../administration/configuring-fluent-bit/yaml/parsers-section.md#standalone-parsers-files). You can include one or more files. | _none_ |
-| `plugins_file` | Path for a `plugins` configuration file. This file specifies the paths to external plugins (.so files) that Fluent Bit can load at runtime. Plugins can also be declared directly in the [`plugins` section](../administration/configuring-fluent-bit/yaml/plugins-section.md) of YAML configuration files. | _none_ |
-| `streams_file` | Path for the [stream processor](../stream-processing/overview.md) configuration file. This file defines the rules and operations for stream processing in Fluent Bit. Stream processor configurations can also be defined directly in the `streams` section of YAML configuration files. | _none_ |
+| `parsers_file` | Path for [standalone parsers configuration files](../yaml/parsers-section.md#standalone-parsers-files). You can include one or more files. | _none_ |
+| `plugins_file` | Path for a `plugins` configuration file. This file specifies the paths to external plugins (.so files) that Fluent Bit can load at runtime. Plugins can also be declared directly in the [`plugins` section](../yaml/plugins-section.md) of YAML configuration files. | _none_ |
+| `streams_file` | Path for the [stream processor](../../../stream-processing/overview.md) configuration file. This file defines the rules and operations for stream processing in Fluent Bit. Stream processor configurations can also be defined directly in the `streams` section of YAML configuration files. | _none_ |
| `http_server` | Enables the built-in HTTP server. | `off` |
| `http_listen` | Sets the listening interface for the HTTP Server when it's enabled. | `0.0.0.0` |
| `http_port` | Sets the TCP port for the HTTP server. | `2020` |
diff --git a/administration/configuring-fluent-bit/yaml/upstream-servers-section.md b/administration/configuring-fluent-bit/yaml/upstream-servers-section.md
index 357f38edb..6082e6dbb 100644
--- a/administration/configuring-fluent-bit/yaml/upstream-servers-section.md
+++ b/administration/configuring-fluent-bit/yaml/upstream-servers-section.md
@@ -1,6 +1,6 @@
# Upstream servers
-The `upstream_servers` section of YAML configuration files defines a group of endpoints, referred to as nodes. Nodes are used by output plugins to distribute data in a round-robin fashion. Use this section for plugins that require load balancing when sending data. Examples of plugins that support this capability include [Forward](https://docs.fluentbit.io/manual/pipeline/outputs/forward) and [Elasticsearch](https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch).
+The `upstream_servers` section of YAML configuration files defines a group of endpoints, referred to as nodes. Nodes are used by output plugins to distribute data in a round-robin fashion. Use this section for plugins that require load balancing when sending data. Examples of plugins that support this capability include [Forward](../../../pipeline/outputs/forward.md) and [Elasticsearch](../../../pipeline/outputs/elasticsearch.md).
The `upstream_servers` section require specifying a `name` for the group and a list
of `nodes`. The following example defines two upstream server groups, `forward-balancing` and `forward-balancing-2`:
diff --git a/administration/http-proxy.md b/administration/http-proxy.md
index 7928cc56b..607d62fe0 100644
--- a/administration/http-proxy.md
+++ b/administration/http-proxy.md
@@ -29,7 +29,7 @@ The `HTTP_PROXY` environment variable is a [standard way](https://docs.docker.co
{% hint style="info" %}
-The [HTTP output plugin](https://docs.fluentbit.io/manual/pipeline/outputs/http) also supports configuring an HTTP proxy. This configuration works, but shouldn't be used with the `HTTP_PROXY` or `http_proxy` environment variable. The environment variable-based proxy configuration is implemented by creating a TCP connection tunnel using [HTTP CONNECT](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT). Unlike the plugin's implementation, this supports both HTTP and HTTPS egress traffic.
+The [HTTP output plugin](../pipeline/outputs/http.md) also supports configuring an HTTP proxy. This configuration works, but shouldn't be used with the `HTTP_PROXY` or `http_proxy` environment variable. The environment variable-based proxy configuration is implemented by creating a TCP connection tunnel using [HTTP CONNECT](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Methods/CONNECT). Unlike the plugin's implementation, this supports both HTTP and HTTPS egress traffic.
{% endhint %}
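+
+As a hedged sketch of the plugin-level alternative mentioned in the hint above (the target host, port, and proxy address are placeholders), the HTTP output's `proxy` setting can be declared in YAML instead of relying on the environment variables:
+
+```yaml
+pipeline:
+  outputs:
+    # Route this output's egress through a forward proxy.
+    # Don't combine this setting with the HTTP_PROXY or http_proxy environment variables.
+    - name: http
+      match: '*'
+      host: collector.example.com
+      port: 80
+      proxy: http://proxy.example.com:8080
+```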
diff --git a/administration/memory-management.md b/administration/memory-management.md
index 14857ea1b..690ee8f9b 100644
--- a/administration/memory-management.md
+++ b/administration/memory-management.md
@@ -18,7 +18,7 @@ When imposing a limit of `10MB` for the input plugins, and a worst case scenario
In intensive environments where memory allocations happen in the orders of magnitude, the default memory allocator provided by Glibc could lead to high fragmentation, reporting a high memory usage by the service.
-It's strongly suggested that in any production environment, Fluent Bit should be built with [jemalloc](http://jemalloc.net/) enabled (`-DFLB_JEMALLOC=On`). The jemalloc implementation of `malloc` is an alternative memory allocator that can reduce fragmentation, resulting in better performance.
+It's strongly suggested that in any production environment, Fluent Bit should be built with [jemalloc](https://jemalloc.net/) enabled (`-DFLB_JEMALLOC=On`). The jemalloc implementation of `malloc` is an alternative memory allocator that can reduce fragmentation, resulting in better performance.
Use the following command to determine if Fluent Bit has been built with jemalloc:
@@ -35,4 +35,4 @@ FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY
```
-If the `FLB_HAVE_JEMALLOC` option is listed in `Build Flags`, jemalloc is enabled.
\ No newline at end of file
+If the `FLB_HAVE_JEMALLOC` option is listed in `Build Flags`, jemalloc is enabled.
diff --git a/administration/monitoring.md b/administration/monitoring.md
index 2d49cf39e..ef990070b 100644
--- a/administration/monitoring.md
+++ b/administration/monitoring.md
@@ -458,7 +458,7 @@ When querying the related metrics, the aliases are returned instead of the plugi
You can create Grafana dashboards and alerts using Fluent Bit exposed Prometheus style metrics.
-The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json) is heavily inspired by [Banzai Cloud](https://github.com/banzaicloud)'s [logging operator dashboard](https://grafana.com/grafana/dashboards/7752) with a few key differences, such as the use of the `instance` label, stacked graphs, and a focus on Fluent Bit metrics. See [this blog post](https://www.robustperception.io/controlling-the-instance-label) for more information.
+The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json) is heavily inspired by [Banzai Cloud](https://github.com/banzaicloud)'s [logging operator dashboard](https://grafana.com/grafana/dashboards/7752-logging-dashboard) with a few key differences, such as the use of the `instance` label, stacked graphs, and a focus on Fluent Bit metrics. See [this blog post](https://www.robustperception.io/controlling-the-instance-label/) for more information.
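+
+As a minimal sketch (the listener address and port match the documented defaults), the metrics these dashboards consume can be exposed by enabling the built-in HTTP server in the `service` section:
+
+```yaml
+service:
+  # Expose Prometheus-format metrics so a Prometheus scrape job backing Grafana
+  # can collect them, for example from the /api/v2/metrics/prometheus endpoint.
+  http_server: on
+  http_listen: 0.0.0.0
+  http_port: 2020
+```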

diff --git a/administration/troubleshooting.md b/administration/troubleshooting.md
index 25aa57ccb..1c7a0bc1e 100644
--- a/administration/troubleshooting.md
+++ b/administration/troubleshooting.md
@@ -283,7 +283,7 @@ In the first terminal, you should see the output similar to the following:
This parameter shows stdout in JSON format.
-See [output plugins](https://docs.fluentbit.io/manual/pipeline/outputs) for additional information.
+See [output plugins](../pipeline/outputs.md) for additional information.
### Analyze a single Tap record
@@ -392,9 +392,9 @@ Overall ingestion status of the plugin.
| Entry | Sub-entry | Description |
| :--- | :--- | :--- |
-| `overlimit` | | If the plugin has been configured with [`Mem_Buf_Limit`](backpressure.md), this entry will report if the plugin is over the limit or not at the moment of the dump. Over the limit prints `yes`, otherwise `no`. |
-| | `mem_size` | Current memory size in use by the input plugin in-memory. |
-| | `mem_limit` | Limit set by `Mem_Buf_Limit`. |
+| `overlimit` | _none_ | If the plugin has been configured with [`Mem_Buf_Limit`](backpressure.md), this entry will report if the plugin is over the limit or not at the moment of the dump. Over the limit prints `yes`, otherwise `no`. |
+| | `mem_size` | Current memory size in use by the input plugin in-memory. |
+| | `mem_limit` | Limit set by `Mem_Buf_Limit`. |
### Tasks
@@ -417,12 +417,12 @@ Depending on the buffering strategy and limits imposed by configuration, some Ch
| Entry | Sub-entry | Description |
| :--- | :--- | :--- |
-| `total_chunks` | | Total number of Chunks generated by the input plugin that are still being processed by the engine. |
-| `up_chunks` | | Total number of Chunks loaded in memory. |
-| `down_chunks` | | Total number of Chunks stored in the filesystem but not loaded in memory yet. |
-| `busy_chunks` | | Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and likely are ready to be or are being processed. |
-| | `size` | Amount of bytes used by the Chunk. |
-| | `size err` | Number of Chunks in an error state where its size couldn't be retrieved. |
+| `total_chunks` | | Total number of Chunks generated by the input plugin that are still being processed by the engine. |
+| `up_chunks` | | Total number of Chunks loaded in memory. |
+| `down_chunks` | | Total number of Chunks stored in the filesystem but not loaded in memory yet. |
+| `busy_chunks` | | Chunks marked as busy (being flushed) or locked. Busy Chunks are immutable and likely are ready to be or are being processed. |
+| | `size` | Amount of bytes used by the Chunk. |
+| | `size err` | Number of Chunks in an error state where its size couldn't be retrieved. |
### Storage layer
@@ -430,8 +430,8 @@ Fluent Bit relies on a custom storage layer interface designed for hybrid buffer
| Entry | Sub-Entry | Description |
| :--- | :--- | :--- |
-| `total chunks` | | Total number of Chunks. |
-| `mem chunks` | | Total number of Chunks memory-based. |
-| `fs chunks` | | Total number of Chunks filesystem based. |
-| | `up` | Total number of filesystem chunks up in memory. |
-| | `down` | Total number of filesystem chunks down (not loaded in memory). |
+| `total chunks` | | Total number of Chunks. |
+| `mem chunks` | | Total number of Chunks memory-based. |
+| `fs chunks` | | Total number of Chunks filesystem based. |
+| | `up` | Total number of filesystem chunks up in memory. |
+| | `down` | Total number of filesystem chunks down (not loaded in memory). |
diff --git a/installation/downloads/aws-container.md b/installation/downloads/aws-container.md
index 5fe227ebe..162e9999f 100644
--- a/installation/downloads/aws-container.md
+++ b/installation/downloads/aws-container.md
@@ -7,20 +7,20 @@ AWS maintains a distribution of Fluent Bit that combines the latest official rel
The [AWS for Fluent Bit](https://github.com/aws/aws-for-fluent-bit) image contains Go Plugins for:
- Amazon CloudWatch as `cloudwatch_logs`. See the
- [Fluent Bit docs](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch) or the
+  [Fluent Bit docs](../../pipeline/outputs/cloudwatch.md) or the
[Plugin repository](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit).
- Amazon Kinesis Data Firehose as `kinesis_firehose`. See the
- [Fluent Bit docs](https://docs.fluentbit.io/manual/pipeline/outputs/firehose) or the
+  [Fluent Bit docs](../../pipeline/outputs/firehose.md) or the
[Plugin repository](https://github.com/aws/amazon-kinesis-firehose-for-fluent-bit).
- Amazon Kinesis Data Streams as `kinesis_streams`. See the
- [Fluent Bit docs](https://docs.fluentbit.io/manual/pipeline/outputs/kinesis) or the
+  [Fluent Bit docs](../../pipeline/outputs/kinesis.md) or the
[Plugin repository](https://github.com/aws/amazon-kinesis-streams-for-fluent-bit).
These plugins are higher performance than Go plugins.
Also, Fluent Bit includes an S3 output plugin named `s3`.
-- [Amazon S3](https://docs.fluentbit.io/manual/pipeline/outputs/s3)
+- [Amazon S3](../../pipeline/outputs/s3.md)
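+
+As an illustrative sketch only (the region, log group name, and stream prefix below are placeholder values), one of the outputs listed above, `cloudwatch_logs`, can be configured like any other Fluent Bit output:
+
+```yaml
+pipeline:
+  outputs:
+    # Ship all records to CloudWatch Logs.
+    - name: cloudwatch_logs
+      match: '*'
+      region: us-east-1
+      log_group_name: fluent-bit-example
+      log_stream_prefix: from-fluent-bit-
+      auto_create_group: true
+```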
## Versions and regional repositories
@@ -52,4 +52,4 @@ Parameters:
Description: Fluent Bit image for the FireLens Container
Type: AWS::SSM::Parameter::Value
Default: /aws/service/aws-for-fluent-bit/latest
-```
\ No newline at end of file
+```
diff --git a/installation/downloads/docker.md b/installation/downloads/docker.md
index 5c5f2fb28..d1bd90385 100644
--- a/installation/downloads/docker.md
+++ b/installation/downloads/docker.md
@@ -49,7 +49,7 @@ The following table describes the Linux container tags that are available on Doc
| 4.2.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.2.0 | x86_64, arm64v8, arm32v7, s390x | Release [v4.2.0](https://fluentbit.io/announcements/v4.2.0/) |
| 4.1.2-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
-| 4.1.2 | x86_64, arm64v8, arm32v7, s390x | Release [v4.1.2](https://fluentbit.io/announcements/v4.1.2/) |
+| 4.1.2 | x86_64, arm64v8, arm32v7, s390x | Release v4.1.2 |
| 4.1.1-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.1.1 | x86_64, arm64v8, arm32v7, s390x | Release [v4.1.1](https://fluentbit.io/announcements/v4.1.1/) |
| 4.1.0-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
diff --git a/installation/downloads/kubernetes.md b/installation/downloads/kubernetes.md
index 92d3c954e..be31bf1ad 100644
--- a/installation/downloads/kubernetes.md
+++ b/installation/downloads/kubernetes.md
@@ -4,7 +4,7 @@ description: Kubernetes Production Grade Log Processor
# Kubernetes
-[Fluent Bit](http://fluentbit.io) is a lightweight and extensible log processor with full support for Kubernetes:
+[Fluent Bit](https://fluentbit.io) is a lightweight and extensible log processor with full support for Kubernetes:
- Process Kubernetes containers logs from the file system or Systemd/Journald.
- Enrich logs with Kubernetes Metadata.
@@ -14,7 +14,7 @@ description: Kubernetes Production Grade Log Processor
Before getting started it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes. The Fluent Bit log agent tool needs to run on every node to collect logs from every pod. Fluent Bit is deployed as a DaemonSet, which is a pod that runs on every node of the cluster.
-When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) filter plugin.
+When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](../../pipeline/filters/kubernetes.md) filter plugin.
The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the `pod_id`, `labels`, and `annotations`. Other fields, such as `pod_name`, `container_id`, and `container_name`, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect.
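+
+As a hedged sketch of that flow (the path and tag prefix follow the common container-log layout and should be adjusted for your cluster), a `tail` input feeding the `kubernetes` filter could look like this:
+
+```yaml
+pipeline:
+  inputs:
+    # Collect container logs written by the container runtime on each node.
+    - name: tail
+      path: /var/log/containers/*.log
+      tag: kube.*
+  filters:
+    # Enrich each record with pod metadata retrieved from the Kubernetes API server.
+    - name: kubernetes
+      match: kube.*
+      kube_tag_prefix: kube.var.log.containers.
+      merge_log: on
+```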
@@ -46,7 +46,7 @@ helm upgrade --install fluent-bit fluent/fluent-bit
### Default values
-The default chart values include configuration to read container logs. With Docker parsing, Systemd logs apply Kubernetes metadata enrichment, and output to an Elasticsearch cluster. You can modify the [included values file](https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml) to specify additional outputs, health checks, monitoring endpoints, or other configuration options.
+The default chart values include configuration to read container logs. With Docker parsing, Systemd logs apply Kubernetes metadata enrichment, and output to an Elasticsearch cluster. You can modify the [included values file](https://github.com/fluent/helm-charts/blob/main/charts/fluent-bit/values.yaml) to specify additional outputs, health checks, monitoring endpoints, or other configuration options.
## Details
diff --git a/installation/downloads/macos.md b/installation/downloads/macos.md
index f4aead5c9..a682ef90c 100644
--- a/installation/downloads/macos.md
+++ b/installation/downloads/macos.md
@@ -36,45 +36,49 @@ brew install git cmake openssl bison libyaml
1. Download a copy of the Fluent Bit source code (upstream):
- ```shell
- $ git clone https://github.com/fluent/fluent-bit
-
- $ cd fluent-bit
- ```
+```shell
+git clone https://github.com/fluent/fluent-bit
+```
- If you want to use a specific version, checkout to the proper tag. For example, to use `v4.0.4`, use the command:
+1. Go to the Fluent Bit directory:
- ```shell
- git checkout v4.0.4
- ```
+```shell
+cd fluent-bit
+```
-1. To prepare the build system, you must expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:
+If you want to use a specific version, check out the corresponding tag. For example, to use `v4.0.4`, use the command:
- ```shell
- $ export OPENSSL_ROOT_DIR=`brew --prefix openssl`
-
- $ export PATH=`brew --prefix bison`/bin:$PATH
- ```
+```shell
+git checkout v4.0.4
+```
+
+1. To prepare the build system, you must export certain environment variables so Fluent Bit CMake build rules can pick the right libraries:
+
+```shell
+export OPENSSL_ROOT_DIR=`brew --prefix openssl`
+
+export PATH=`brew --prefix bison`/bin:$PATH
+```
1. Change to the `build/` directory inside the Fluent Bit sources:
- ```shell
- cd build/
- ```
+```shell
+cd build/
+```
1. Build Fluent Bit. This example indicates to the build system the location the final binaries and `config` files should be installed:
- ```shell
- $ cmake -DFLB_DEV=on -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
-
- $ make -j 16
- ```
+```shell
+cmake -DFLB_DEV=on -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
+
+make -j 16
+```
1. Install Fluent Bit to the previously specified directory. Writing to this directory requires root privileges.
- ```shell
- sudo make install
- ```
+```shell
+sudo make install
+```
The binaries and configuration examples can be located at `/opt/fluent-bit/`.
@@ -82,48 +86,51 @@ The binaries and configuration examples can be located at `/opt/fluent-bit/`.
1. Clone the Fluent Bit source code (upstream):
- ```shell
- $ git clone https://github.com/fluent/fluent-bit
-
- $ cd fluent-bit
- ```
+```shell
+git clone https://github.com/fluent/fluent-bit
+```
+
+1. Change to the Fluent Bit directory:
+
+```shell
+cd fluent-bit
+```
- If you want to use a specific version, checkout to the proper tag. For example,
- to use `v4.0.4` do:
+To use a specific version, check out the corresponding tag. For example, to use `v4.0.4`, run:
- ```shell
- git checkout v4.0.4
- ```
+```shell
+git checkout v4.0.4
+```
1. To prepare the build system, you must expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries:
- ```shell
- $ export OPENSSL_ROOT_DIR=`brew --prefix openssl`
-
- $ export PATH=`brew --prefix bison`/bin:$PATH
- ```
+```shell
+export OPENSSL_ROOT_DIR=`brew --prefix openssl`
+
+export PATH=`brew --prefix bison`/bin:$PATH
+```
1. Create the specific macOS SDK target. For example, to specify macOS Big Sur (11.3) SDK environment:
- ```shell
- export MACOSX_DEPLOYMENT_TARGET=11.3
- ```
+```shell
+export MACOSX_DEPLOYMENT_TARGET=11.3
+```
1. Change to the `build/` directory inside the Fluent Bit sources:
- ```shell
- cd build/
- ```
+```shell
+cd build/
+```
1. Build the Fluent Bit macOS installer:
- ```shell
- $ cmake -DCPACK_GENERATOR=productbuild -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
-
- $ make -j 16
-
- $ cpack -G productbuild
- ```
+```shell
+cmake -DCPACK_GENERATOR=productbuild -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../
+
+make -j 16
+
+cpack -G productbuild
+```
The macOS installer will be generated as:
@@ -156,7 +163,7 @@ To make the access path easier to Fluent Bit binary, extend the `PATH` variable:
export PATH=/opt/fluent-bit/bin:$PATH
```
-To test, try Fluent Bit by generating a test message using the [Dummy input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/dummy) which prints to the standard output interface every one second:
+To test Fluent Bit, generate a test message using the [Dummy input plugin](../../pipeline/inputs/dummy.md), which prints a message to the standard output interface every second:
```shell
fluent-bit -i dummy -o stdout -f 1
@@ -170,4 +177,4 @@ You will see an output similar to this:
[0] dummy.0: [1644362034.676914000, {"message"=>"dummy"}]
```
-To halt the process, press `ctrl-c` in the terminal.
\ No newline at end of file
+To halt the process, press `ctrl-c` in the terminal.
diff --git a/installation/downloads/source/build-and-install.md b/installation/downloads/source/build-and-install.md
index 2d645ab59..62a0e3bf6 100644
--- a/installation/downloads/source/build-and-install.md
+++ b/installation/downloads/source/build-and-install.md
@@ -1,6 +1,6 @@
# Build and install
-[Fluent Bit](http://fluentbit.io) uses [CMake](http://cmake.org) as its build system.
+[Fluent Bit](https://fluentbit.io) uses [CMake](https://cmake.org) as its build system.
## Requirements
@@ -28,7 +28,7 @@ The following steps explain how to build and install the project with the defaul
cd build/
```
-1. Let [CMake](http://cmake.org) configure the project specifying where the root path is located:
+1. Let [CMake](https://cmake.org) configure the project specifying where the root path is located:
```shell
cmake ../
@@ -144,7 +144,7 @@ Fluent Bit provides configurable options to CMake that can be enabled or disable
| Option | Description | Default |
|:--------------------------|:-----------------------------|:--------|
-| `FLB_BACKTRACE` | Enable stack trace support | `Yes` |
+| `FLB_BACKTRACE` | Enable stack trace support | `Yes` |
| `FLB_DEBUG` | Build with debug mode (`-g`) | `No` |
| `FLB_SMALL` | Optimize for small size | `No` |
| `FLB_TESTS_INTERNAL` | Enable internal tests | `No` |
@@ -156,10 +156,10 @@ Fluent Bit provides configurable options to CMake that can be enabled or disable
### Optimization options
-| Option | Description | Default |
-|:------------------------------------------|:---------------------------------------------------------------------------------------------------------|:--------|
-| `FLB_MSGPACK_TO_JSON_INIT_BUFFER_SIZE` | Determine initial buffer size for `msgpack` to `json` conversion in terms of memory used by payload. | `2.0` |
-| `FLB_MSGPACK_TO_JSON_REALLOC_BUFFER_SIZE` | Determine percentage of reallocation size when `msgpack` to `json` conversion buffer runs out of memory. | `0.1` |
+| Option | Description | Default |
+| ------ | ----------- | ------- |
+| `FLB_MSGPACK_TO_JSON_INIT_BUFFER_SIZE` | Determine initial buffer size for `msgpack` to `json` conversion in terms of memory used by payload. | `2.0` |
+| `FLB_MSGPACK_TO_JSON_REALLOC_BUFFER_SIZE` | Determine percentage of reallocation size when `msgpack` to `json` conversion buffer runs out of memory. | `0.1` |
### Input plugins
@@ -240,30 +240,30 @@ Filter plugins let you modify, enrich or drop records.
The following table describes the filters available on this version:
-| Option | Description | Default |
-|:-----------------------------------------------------------------------------|:-----------------------------------|:--------|
-| [`FLB_FILTER_AWS`](../../../pipeline/filters/aws-metadata.md) | Enable AWS metadata filter | `On` |
-| [`FLB_FILTER_CHECKLIST`](../../../pipeline/filters/checklist.md) | Enable Checklist filter | `On` |
-| [`FLB_FILTER_ECS`](../../../pipeline/filters/ecs-metadata.md) | Enable AWS ECS metadata filter | `On` |
-| [`FLB_FILTER_EXPECT`](../../../pipeline/filters/expect.md) | Enable Expect data test filter | `On` |
-| [`FLB_FILTER_GIOIP2`](../../../pipeline/filters/geoip2-filter.md) | Enable Geoip2 filter | `On` |
-| [`FLB_FILTER_GREP`](../../../pipeline/filters/grep.md) | Enable Grep filter | `On` |
-| [`FLB_FILTER_KUBERNETES`](../../../pipeline/filters/kubernetes.md) | Enable Kubernetes metadata filter | `On` |
-| [`FLB_FILTER_LOG_TO_METRICS`](../../../pipeline/filters/log_to_metrics.md) | Enable Log derived metrics filter | `On` |
-| [`FLB_FILTER_LUA`](../../../pipeline/filters/lua.md) | Enable Lua scripting filter | `On` |
-| [`FLB_FILTER_MODIFY`](../../../pipeline/filters/modify.md) | Enable Modify filter | `On` |
+| Option | Description | Default |
+| ---------------------------------------------------------------------------- | ---------------------------------- | ------- |
+| [`FLB_FILTER_AWS`](../../../pipeline/filters/aws-metadata.md) | Enable AWS metadata filter | `On` |
+| [`FLB_FILTER_CHECKLIST`](../../../pipeline/filters/checklist.md) | Enable Checklist filter | `On` |
+| [`FLB_FILTER_ECS`](../../../pipeline/filters/ecs-metadata.md) | Enable AWS ECS metadata filter | `On` |
+| [`FLB_FILTER_EXPECT`](../../../pipeline/filters/expect.md) | Enable Expect data test filter | `On` |
+| [`FLB_FILTER_GIOIP2`](../../../pipeline/filters/geoip2-filter.md) | Enable Geoip2 filter | `On` |
+| [`FLB_FILTER_GREP`](../../../pipeline/filters/grep.md) | Enable Grep filter | `On` |
+| [`FLB_FILTER_KUBERNETES`](../../../pipeline/filters/kubernetes.md) | Enable Kubernetes metadata filter | `On` |
+| [`FLB_FILTER_LOG_TO_METRICS`](../../../pipeline/filters/log_to_metrics.md) | Enable Log derived metrics filter | `On` |
+| [`FLB_FILTER_LUA`](../../../pipeline/filters/lua.md) | Enable Lua scripting filter | `On` |
+| [`FLB_FILTER_MODIFY`](../../../pipeline/filters/modify.md) | Enable Modify filter | `On` |
| [`FLB_FILTER_MULTILINE`](../../../pipeline/filters/multiline-stacktrace.md) | Enable Multiline stack trace filter | `On` |
-| [`FLB_FILTER_NEST`](../../../pipeline/filters/nest.md) | Enable Nest filter | `On` |
-| [`FLB_FILTER_NIGHTFALL`](../../../pipeline/filters/nightfall.md) | Enable Nightfall filter | `On` |
-| [`FLB_FILTER_PARSER`](../../../pipeline/filters/parser.md) | Enable Parser filter | `On` |
-| [`FLB_FILTER_RECORD_MODIFIER`](../../../pipeline/filters/record-modifier.md) | Enable Record Modifier filter | `On` |
-| [`FLB_FILTER_REWRITE_TAG`](../../../pipeline/filters/rewrite-tag.md) | Enable Rewrite Tag filter | `On` |
-| [`FLB_FILTER_STDOUT`](../../../pipeline/filters/standard-output.md) | Enable Stdout filter | `On` |
-| [`FLB_FILTER_SYSINFO`](../../../pipeline/filters/sysinfo.md) | Enable Sysinfo filter | `On` |
-| [`FLB_FILTER_TENSORFLOW`](../../../pipeline/filters/tensorflow.md) | Enable Tensorflow filter | `Off` |
-| [`FLB_FILTER_THROTTLE`](../../../pipeline/filters/throttle.md) | Enable Throttle filter | `On` |
-| [`FLB_FILTER_TYPE_CONVERTER`](../../../pipeline/filters/type-converter.md) | Enable Type Converter filter | `On` |
-| [`FLB_FILTER_WASM`](../../../pipeline/filters/wasm.md) | Enable Wasm filter | `On` |
+| [`FLB_FILTER_NEST`](../../../pipeline/filters/nest.md) | Enable Nest filter | `On` |
+| [`FLB_FILTER_NIGHTFALL`](../../../pipeline/filters/nightfall.md) | Enable Nightfall filter | `On` |
+| [`FLB_FILTER_PARSER`](../../../pipeline/filters/parser.md) | Enable Parser filter | `On` |
+| [`FLB_FILTER_RECORD_MODIFIER`](../../../pipeline/filters/record-modifier.md) | Enable Record Modifier filter | `On` |
+| [`FLB_FILTER_REWRITE_TAG`](../../../pipeline/filters/rewrite-tag.md) | Enable Rewrite Tag filter | `On` |
+| [`FLB_FILTER_STDOUT`](../../../pipeline/filters/standard-output.md) | Enable Stdout filter | `On` |
+| [`FLB_FILTER_SYSINFO`](../../../pipeline/filters/sysinfo.md) | Enable Sysinfo filter | `On` |
+| [`FLB_FILTER_TENSORFLOW`](../../../pipeline/filters/tensorflow.md) | Enable Tensorflow filter | `Off` |
+| [`FLB_FILTER_THROTTLE`](../../../pipeline/filters/throttle.md) | Enable Throttle filter | `On` |
+| [`FLB_FILTER_TYPE_CONVERTER`](../../../pipeline/filters/type-converter.md) | Enable Type Converter filter | `On` |
+| [`FLB_FILTER_WASM`](../../../pipeline/filters/wasm.md) | Enable Wasm filter | `On` |
### Output plugins
@@ -286,7 +286,7 @@ The following table describes the output plugins available:
| [`FLB_OUT_EXIT`](../../../pipeline/outputs/exit.md) | Enable Exit output plugin | `On` |
| [`FLB_OUT_FILE`](../../../pipeline/outputs/file.md) | Enable File output plugin | `On` |
| [`FLB_OUT_FLOWCOUNTER`](../../../pipeline/outputs/flowcounter.md) | Enable Flow counter output plugin | `On` |
-| [`FLB_OUT_FORWARD`](../../../pipeline/outputs/forward.md) | Enable [Fluentd](http://www.fluentd.org) output plugin | `On` |
+| [`FLB_OUT_FORWARD`](../../../pipeline/outputs/forward.md) | Enable [Fluentd](https://www.fluentd.org) output plugin | `On` |
| [`FLB_OUT_GELF`](../../../pipeline/outputs/gelf.md) | Enable GELF output plugin | `On` |
| [`FLB_OUT_HTTP`](../../../pipeline/outputs/http.md) | Enable HTTP output plugin | `On` |
| [`FLB_OUT_INFLUXDB`](../../../pipeline/outputs/influxdb.md) | Enable InfluxDB output plugin | `On` |
@@ -305,7 +305,7 @@ The following table describes the output plugins available:
| [`FLB_OUT_ORACLE_LOG_ANALYTICS`](../../../pipeline/outputs/oci-logging-analytics.md) | Enable Oracle Cloud Infrastructure Logging output plugin | `On` |
| [`FLB_OUT_PGSQL`](../../../pipeline/outputs/postgresql.md) | Enable PostgreSQL output plugin | `Off` |
| [`FLB_OUT_PLOT`](../../../pipeline/outputs/plot.md) | Enable Plot output plugin | `On` |
-| [`FLB_OUT_PROMETHEUS_EXPORTER`](../../../pipeline/outputs/prometheus-exporter.md) | Enable Prometheus exporter output plugin | `On` |
+| [`FLB_OUT_PROMETHEUS_EXPORTER`](../../../pipeline/outputs/prometheus-exporter.md) | Enable Prometheus exporter output plugin | `On` |
| [`FLB_OUT_PROMETHEUS_REMOTE_WRITE`](../../../pipeline/inputs/prometheus-remote-write.md) | Enable Prometheus remote write output plugin | `On` |
| [`FLB_OUT_S3`](../../../pipeline/outputs/s3.md) | Enable Amazon S3 output plugin | `On` |
| [`FLB_OUT_SKYWALKING`](../../../pipeline/outputs/skywalking.md) | Enable Apache Skywalking output plugin | `On` |
diff --git a/installation/upgrade-notes.md b/installation/upgrade-notes.md
index e650715f2..c4cedc78b 100644
--- a/installation/upgrade-notes.md
+++ b/installation/upgrade-notes.md
@@ -24,7 +24,7 @@ To keep the old behavior, set the option `read_from_head` to `true`.
### Stackdriver output plugin
-The `project_id` of [resource](https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource) in [LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) sent to Google Cloud Logging would be set to the `project_id` rather than the project number. To learn the difference between Project ID and project number, see [Creating and managing projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects#before_you_begin).
+The `project_id` of [resource](https://docs.cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource) in [LogEntry](https://docs.cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) sent to Google Cloud Logging would be set to the `project_id` rather than the project number. To learn the difference between Project ID and project number, see [Creating and managing projects](https://docs.cloud.google.com/resource-manager/docs/creating-managing-projects#before_you_begin).
If you have existing queries based on the resource's `project_id,` update your query accordingly.
@@ -51,12 +51,12 @@ Fluent Bit v1.2 fixed many issues associated with JSON encoding and decoding.
For example, when parsing Docker logs, it's no longer necessary to use decoders. The new Docker parser looks like this:
```text
-[PARSER]
- Name docker
- Format json
- Time_Key time
- Time_Format %Y-%m-%dT%H:%M:%S.%L
- Time_Keep On
+[PARSER]
+ Name docker
+ Format json
+ Time_Key time
+ Time_Format %Y-%m-%dT%H:%M:%S.%L
+ Time_Keep On
```
### Kubernetes filter
@@ -65,25 +65,25 @@ Fluent Bit made improvements to Kubernetes Filter handling of stringified `log`
In addition, fixes and improvements were made to the `Merge_Log_Key` option. If a merge log succeed, all new keys will be packaged under the key specified by this option. A suggested configuration is as follows:
-```text
-[FILTER]
- Name Kubernetes
- Match kube.*
- Kube_Tag_Prefix kube.var.log.containers.
- Merge_Log On
- Merge_Log_Key log_processed
+```text
+[FILTER]
+ Name Kubernetes
+ Match kube.*
+ Kube_Tag_Prefix kube.var.log.containers.
+ Merge_Log On
+ Merge_Log_Key log_processed
```
As an example, if the original log content is the following map:
```json
-{"key1": "val1", "key2": "val2"}
+{"key1": "val1", "key2": "val2"}
```
the final record will be composed as follows:
-```json
-{"log": "{\"key1\": \"val1\", \"key2\": \"val2\"}", "log_processed": { "key1": "val1", "key2": "val2" } }
+```json
+{"log": "{\"key1\": \"val1\", \"key2\": \"val2\"}", "log_processed": { "key1": "val1", "key2": "val2" } }
```
## Fluent Bit v1.1
@@ -96,11 +96,11 @@ Fluent Bit introduced a new configuration property called `Kube_Tag_Prefix` to h
During the `1.0.x` release cycle, a commit in the Tail input plugin changed the default behavior on how the Tag was composed when using the wildcard for expansion generating breaking compatibility with other services. Consider the following configuration example:
-```text
-[INPUT]
- Name tail
- Path /var/log/containers/*.log
- Tag kube.*
+```text
+[INPUT]
+ Name tail
+ Path /var/log/containers/*.log
+ Tag kube.*
```
The expected behavior is that Tag will be expanded to:
@@ -117,16 +117,17 @@ Having absolute path in the Tag is relevant for routing and flexible configurati
This behavior switch in Tail input plugin affects how Filter Kubernetes operates. When the filter is used it needs to perform local metadata lookup that comes from the file names when using Tail as a source. With the new `Kube_Tag_Prefix` option you can specify the prefix used in the Tail input plugin. For the previous configuration example the new configuration will look like:
-```text
-[INPUT]
- Name tail
- Path /var/log/containers/*.log
+```text
+[INPUT]
+ Name tail
+ Path /var/log/containers/*.log
Tag kube.*
-[FILTER]
- Name kubernetes
- Match *
- Kube_Tag_Prefix kube.var.log.containers.
+[FILTER]
+ Name kubernetes
+ Match *
+ Kube_Tag_Prefix kube.var.log.containers.
```
-The proper value for `Kube_Tag_Prefix` must be composed by Tag prefix set in Tail input plugin plus the converted monitored directory replacing slashes with dots.
\ No newline at end of file
+The proper value for `Kube_Tag_Prefix` is the Tag prefix set in the Tail input
+plugin, followed by the monitored directory path with slashes replaced by dots.
diff --git a/pipeline/filters/aws-metadata.md b/pipeline/filters/aws-metadata.md
index 9be8488a3..c93e0152f 100644
--- a/pipeline/filters/aws-metadata.md
+++ b/pipeline/filters/aws-metadata.md
@@ -17,10 +17,10 @@ The plugin supports the following configuration parameters:
| `account_id` | The account ID for the current EC2 instance. | `false` |
| `hostname` | The hostname for the current EC2 instance. | `false` |
| `vpc_id` | The VPC ID for the current EC2 instance. | `false` |
-| `tags_enabled` | Specifies whether to attach EC2 instance tags. The EC2 instance must have the [`instance-metadata-tags`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/modify-instance-metadata-options.html) option enabled, which is disabled by default. | `false` |
+| `tags_enabled` | Specifies whether to attach EC2 instance tags. The EC2 instance must have the [`instance-metadata-tags`](https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-instance-metadata-options.html) option enabled, which is disabled by default. | `false` |
| `tags_include` | Defines a list of specific EC2 tag keys to inject into the logs. Tag keys must be comma-separated (`,`). Tags not included in this list will be ignored. Example: `Name,tag1,tag2`. | _none_ |
| `tags_exclude` | Defines a list of specific EC2 tag keys not to inject into the logs. Tag keys must be comma-separated (`,`). Tags not included in this list will be injected into the logs. If both `tags_include` and `tags_exclude` are specified, the configuration is invalid and the plugin fails. Example: `Name,tag1,tag2` | _none_ |
-| `retry_interval_s` |Defines minimum duration between retries for fetching EC2 instance tags. | `300` |
+| `retry_interval_s` | Defines minimum duration between retries for fetching EC2 instance tags. | `300` |
If you run Fluent Bit in a container, you might need to use instance metadata v1. The plugin behaves the same regardless of which version is used.
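+
+As a minimal sketch (the tag keys listed under `tags_include` are examples, not required names), enabling instance tags alongside the default metadata could look like this:
+
+```yaml
+pipeline:
+  filters:
+    # Attach EC2 metadata and the selected instance tags to every record.
+    - name: aws
+      match: '*'
+      tags_enabled: true
+      tags_include: Name,Environment
+```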
diff --git a/pipeline/filters/ecs-metadata.md b/pipeline/filters/ecs-metadata.md
index 3d10e1903..ebfca5407 100644
--- a/pipeline/filters/ecs-metadata.md
+++ b/pipeline/filters/ecs-metadata.md
@@ -10,8 +10,8 @@ The plugin supports the following configuration parameters:
| Key | Description | Default |
| :--- | :--- | :--- |
-| `Add` | Similar to the `ADD` option in the [modify filter](https://docs.fluentbit.io/manual/pipeline/filters/modify). You can specify it multiple times. It takes two arguments: a `KEY` name and `VALUE`. The value uses Fluent Bit [`record_accessor`](https://docs.fluentbit.io/manual/v/1.5/administration/configuring-fluent-bit/record-accessor) syntax to create a template that uses ECS Metadata values. See the list of supported metadata templating keys. This option lets you control both the key names for metadata and the format for metadata values. | _none_ |
-| `ECS_Tag_Prefix` | Similar to the `Kube_Tag_Prefix` option in the [Kubernetes filter](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) and performs the same function. The full log tag should be prefixed with this string and after the prefix the filter must find the next characters in the tag to be the Docker Container Short ID (the first 12 characters of the full container ID). The filter uses this to identify which container the log came from so it can find which task it's a part of. See the design section for more information. If not specified, it defaults to empty string, meaning that the tag must be prefixed with the 12 character container short ID. If you want to attach cluster metadata to system or OS logs from processes that don't run as part of containers or ECS Tasks, don't set this parameter and enable the `Cluster_Metadata_Only` option | empty string |
+| `Add` | Similar to the `ADD` option in the [modify filter](../filters/modify.md). You can specify it multiple times. It takes two arguments: a `KEY` name and `VALUE`. The value uses Fluent Bit [`record_accessor`](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) syntax to create a template that uses ECS Metadata values. See the list of supported metadata templating keys. This option lets you control both the key names for metadata and the format for metadata values. | _none_ |
+| `ECS_Tag_Prefix` | Similar to the `Kube_Tag_Prefix` option in the [Kubernetes filter](../filters/kubernetes.md) and performs the same function. The full log tag should be prefixed with this string, and after the prefix the filter must find the next characters in the tag to be the Docker container short ID (the first 12 characters of the full container ID). The filter uses this to identify which container the log came from so it can find which task it's a part of. See the design section for more information. If not specified, it defaults to an empty string, meaning that the tag must be prefixed with the 12-character container short ID. If you want to attach cluster metadata to system or OS logs from processes that don't run as part of containers or ECS tasks, don't set this parameter and enable the `Cluster_Metadata_Only` option. | empty string |
| `Cluster_Metadata_Only` | When enabled, the plugin will only attempt to attach cluster metadata values. Use to attach cluster metadata to system or OS logs from processes that don't run as part of containers or ECS Tasks. | `Off` |
| `ECS_Meta_Cache_TTL` | The filter builds a hash table in memory mapping each unique container short ID to its metadata. This option sets a max `TTL` for objects in the hash table. You should set this if you have frequent container or task restarts. For example, if your cluster runs short running batch jobs that complete in less than 10 minutes, there is no reason to keep any stored metadata longer than 10 minutes. You would therefore set this parameter to `10m`. | `1h` |
@@ -26,7 +26,7 @@ The following template variables can be used for values with the `ADD` option. S
| `$ContainerInstanceID` | The ID of the ECS EC2 Container Instance. | `Yes` |
| `$ECSAgentVersion` | The version string of the ECS Agent running on the container instance. | `Yes` |
| `$ECSContainerName` | The name of the container from which the log originated. This is the name in your ECS Task Definition. | `No` |
-| `$DockerContainerName` | The name of the container from which the log originated. This is the name obtained from Docker and is the name shown if you run `docker ps` on the instance. | `No` |
+| `$DockerContainerName` | The name of the container from which the log originated. This is the name obtained from Docker and is the name shown if you run `docker ps` on the instance. | `No` |
| `$ContainerID` | The ID of the container from which the log originated. This is the full 64-character-long container ID. | `No` |
| `$TaskDefinitionFamily` | The family name of the task definition for the task from which the log originated. | `No` |
| `$TaskDefinitionVersion` | The version or revision of the task definition for the task from which the log originated. | `No` |
diff --git a/pipeline/filters/geoip2-filter.md b/pipeline/filters/geoip2-filter.md
index 0eba3a6f2..b284e5141 100644
--- a/pipeline/filters/geoip2-filter.md
+++ b/pipeline/filters/geoip2-filter.md
@@ -6,7 +6,7 @@ description: Look up Geo data from IP.
The GeoIP2 filter lets you enrich the incoming data stream with location data from the GeoIP2 database.
-The `GeoLite2-City.mmdb` database is available from [MaxMind's official site](https://dev.maxmind.com/geoip/geoip2/geolite2/).
+The `GeoLite2-City.mmdb` database is available from [MaxMind's official site](https://dev.maxmind.com/geoip/geolite2-free-geolocation-data/).
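+
+As a hedged sketch (the database path, lookup key, and `record` mappings are illustrative and would normally follow the parameters described below), a GeoIP2 filter entry in YAML could look like this:
+
+```yaml
+pipeline:
+  filters:
+    # Resolve the remote address into country and city names using the local database.
+    - name: geoip2
+      match: '*'
+      database: /etc/fluent-bit/GeoLite2-City.mmdb
+      lookup_key: remote_addr
+      record:
+        - country remote_addr %{country.names.en}
+        - city remote_addr %{city.names.en}
+```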
## Configuration parameters
diff --git a/pipeline/filters/kubernetes.md b/pipeline/filters/kubernetes.md
index 9997bde4e..bf92a47b0 100644
--- a/pipeline/filters/kubernetes.md
+++ b/pipeline/filters/kubernetes.md
@@ -54,7 +54,7 @@ The plugin supports the following configuration parameters:
| `kube_tag_prefix` | When the source records come from the `tail` input plugin, this option specifies the prefix used in `tail` configuration. | `kube.var.log.containers.` |
| `kube_token_command` | Command to get the Kubernetes authorization token. Defaults to `NULL`, which uses the token file to get the token. To manually choose a command to get it, set the command here. For example, run `aws-iam-authenticator -i your-cluster-name token --token-only` to set the token. This option is currently Linux-only. | `NULL` |
| `kube_token_file` | Token file | `/var/run/secrets/kubernetes.io/serviceaccount/token` |
-| `kube_token_ttl` | Configurable time-to-live for the Kubernetes token. After this time, the token is reloaded from `kube_token_file` or the `kube_token_command`.| `600` |
+| `kube_token_ttl` | Configurable time-to-live for the Kubernetes token. After this time, the token is reloaded from `kube_token_file` or the `kube_token_command`. | `600` |
| `kube_url` | API Server endpoint | `https://kubernetes.default.svc:443` |
| `kubelet_host` | Kubelet host to use for HTTP requests. This only works when `use_kubelet` is set to `On`. | `127.0.0.1` |
| `kubelet_port` | Kubelet port to use for HTTP requests. This only works when `use_kubelet` is set to `On`. | `10250` |
@@ -63,9 +63,9 @@ The plugin supports the following configuration parameters:
| `merge_log_key` | When `merge_log` is enabled, the filter assumes the `log` field from the incoming message is a JSON string message and attempts to create a structured representation of it at the same level of the `log` field in the map. If `merge_log_key` is set (a string name), all the new structured fields taken from the original `log` content are inserted under the new key. | _none_ |
| `merge_log_trim` | When `merge_log` is enabled, trim (remove possible `\n` or `\r`) field values. | `On` |
| `merge_parser` | Optional parser name to specify how to parse the data contained in the `log` key. Recommended for developers or testing only. | _none_ |
-| `namespace_annotations` | Include Kubernetes namespace resource annotations in the extra metadata. See [Kubernetes Namespace Meta](#kubernetes-namespace-meta)| `Off` |
-| `namespace_labels` | Include Kubernetes namespace resource labels in the extra metadata. See [Kubernetes Namespace Meta](#kubernetes-namespace-meta)| `Off` |
-| `namespace_metadata_only` | Include Kubernetes namespace metadata and no pod metadata. When set, the values of `labels` and `annotations` are ignored. See [Kubernetes Namespace Meta](#kubernetes-namespace-meta)| `Off` |
+| `namespace_annotations` | Include Kubernetes namespace resource annotations in the extra metadata. See [Kubernetes Namespace Meta](#kubernetes-namespace-meta). | `Off` |
+| `namespace_labels` | Include Kubernetes namespace resource labels in the extra metadata. See [Kubernetes Namespace Meta](#kubernetes-namespace-meta). | `Off` |
+| `namespace_metadata_only` | Include Kubernetes namespace metadata and no pod metadata. When set, the values of `labels` and `annotations` are ignored. See [Kubernetes Namespace Meta](#kubernetes-namespace-meta). | `Off` |
| `owner_references` | Include Kubernetes owner references in the extra metadata. | `Off` |
| `regex_parser` | Set an alternative Parser to process record tags and extract `pod_name`, `namespace_name`, `container_name`, and `docker_id`. The parser must be registered in a [parsers file](https://github.com/fluent/fluent-bit/blob/master/conf/parsers.conf) (refer to parser `filter-kube-test` as an example). | _none_ |
| `set_platform` | Manually set the Kubernetes platform type. Possible values are `k8s` (native Kubernetes) and `eks` (Amazon EKS). When set, this completely overrides automatic detection based on the service account token issuer; automatic detection is skipped entirely. Intended for testing or environments where token-based detection isn't available. | Auto-detected |
@@ -74,7 +74,7 @@ The plugin supports the following configuration parameters:
| `tls.verify_hostname` | When enabled, turns on hostname validation for certificates. | `Off` |
| `tls.vhost` | Set an optional TLS virtual host for the Kubernetes API server connection. | _none_ |
| `use_journal` | When enabled, the filter reads logs in `Journald` format. | `Off` |
-| `use_kubelet` | Optional feature flag to get metadata information from Kubelet instead of calling Kube Server API to enhance the log. This could mitigate the [Kube API heavy traffic issue for large cluster](kubernetes.md#optional-feature-using-kubelet-to-get-metadata). If used when any [Kubernetes Namespace Meta](#kubernetes-namespace-meta) fields are enabled, Kubelet will be used to fetch pod data, but namespace meta will still be fetched using the `kube_url` settings.| `Off` |
+| `use_kubelet` | Optional feature flag to get metadata information from the Kubelet instead of calling the Kubernetes API server to enhance the log. This can mitigate the [Kube API heavy traffic issue for large clusters](kubernetes.md#optional-feature-using-kubelet-to-get-metadata). If used when any [Kubernetes Namespace Meta](#kubernetes-namespace-meta) fields are enabled, the Kubelet will be used to fetch pod data, but namespace metadata will still be fetched using the `kube_url` settings. | `Off` |
| `use_pod_association` | Deprecated alias for `aws_use_pod_association`. Kept for backward compatibility with AWS Observability users. | `Off` |
| `use_tag_for_meta` | When enabled, Kubernetes metadata (for example, `pod_name`, `container_name`, and `namespace_name`) will be extracted from the tag itself. Connection to Kubernetes API Server won't get established and API calls for metadata won't be made. See [Workflow of Tail and Kubernetes Filter](#workflow-of-tail-and-kubernetes-filter) and [Custom tag For enhanced filtering](#custom-tags-for-enhanced-filtering) to better understand metadata extraction from tags. | `Off` |
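+
+As an illustrative sketch of how several of these parameters fit together (the values shown are placeholders for a typical `tail`-based setup, not a recommended configuration):
+
+```yaml
+pipeline:
+  filters:
+    - name: kubernetes
+      match: kube.*
+      # Must match the tag prefix used by the tail input.
+      kube_tag_prefix: kube.var.log.containers.
+      # Parse the `log` field as JSON and lift its keys into the record.
+      merge_log: on
+      # Query the local kubelet instead of the API server for pod metadata.
+      use_kubelet: on
+      kubelet_port: 10250
+```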
@@ -592,6 +592,7 @@ When platform detection is active, an `aws_entity_platform` field is added to th
```
Possible values:
+
- `eks`: Running on Amazon EKS
- `k8s`: Running on native Kubernetes
diff --git a/pipeline/filters/multiline-stacktrace.md b/pipeline/filters/multiline-stacktrace.md
index efafd0286..3c570f218 100644
--- a/pipeline/filters/multiline-stacktrace.md
+++ b/pipeline/filters/multiline-stacktrace.md
@@ -15,7 +15,7 @@ Along with multiline filters, you can enable one of the following built-in Fluen
When using this filter:
-- The usage of this filter depends on a previous configuration of a [multiline parser](../pipeline/parsers/multiline-parsing.md) definition.
+- Using this filter requires a previously configured [multiline parser](../../pipeline/parsers/multiline-parsing.md) definition.
- To concatenate messages read from a log file, it's highly recommended to use the multiline support in the [Tail plugin](https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-support) itself. This is because performing concatenation while reading the log file is more performant. Concatenating messages that were originally one line, but split by Docker or CRI container engines because of their size, is supported in the [Tail plugin](https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-support) in combination with the `docker` or `cri` parser. To concatenate application logs like stacktraces on top of that, you can use this multiline filter.
{% hint style="warning" %}
@@ -315,7 +315,7 @@ Lines that don't match a pattern aren't considered as part of the multiline mess
## Docker partial message use case
When Fluent Bit is consuming logs from a container runtime, such as Docker, these logs will be split when larger than a certain limit, usually 16 KB.
-If your application emits a 100K log line, it will be split into seven partial messages. The docker parser will merge these back to one line. If instead you are using the [Fluentd Docker Log Driver](https://docs.docker.com/config/containers/logging/fluentd/) to send the logs to Fluent Bit, they might look like this:
+If your application emits a 100 KB log line, it will be split into seven partial messages. The `docker` parser will merge these back into one line. If you're instead using the [Fluentd Docker Log Driver](https://docs.docker.com/engine/logging/drivers/fluentd/) to send the logs to Fluent Bit, they might look like this:
```text
{"source": "stdout", "log": "... omitted for brevity...", "partial_message": "true", "partial_id": "dc37eb08b4242c41757d4cd995d983d1cdda4589193755a22fcf47a638317da0", "partial_ordinal": "1", "partial_last": "false", "container_id": "a96998303938eab6087a7f8487ca40350f2c252559bc6047569a0b11b936f0f2", "container_name": "/hopeful_taussig"}]
diff --git a/pipeline/filters/tensorflow.md b/pipeline/filters/tensorflow.md
index 581105c70..5113871ee 100644
--- a/pipeline/filters/tensorflow.md
+++ b/pipeline/filters/tensorflow.md
@@ -1,8 +1,8 @@
# Tensorflow
-The _Tensorflow_ filter plugin allows running machine learning inference tasks on the records of data coming from input plugins or stream processors. This filter uses [Tensorflow Lite](https://www.tensorflow.org/lite/) as the inference engine, and requires Tensorflow Lite shared library to be present during build and at runtime.
+The _Tensorflow_ filter plugin allows running machine learning inference tasks on records coming from input plugins or stream processors. This filter uses [Tensorflow Lite](https://ai.google.dev/edge/litert) as the inference engine, and requires the Tensorflow Lite shared library to be present during build and at runtime.
-Tensorflow Lite is a lightweight open source deep learning framework used for mobile and IoT applications. Tensorflow Lite only handles inference, not training. It loads pre-trained models (`.tflite` files) that are converted into Tensorflow Lite format (`FlatBuffer`). You can read more on converting [Tensorflow models](https://www.tensorflow.org/lite/convert).
+Tensorflow Lite is a lightweight open source deep learning framework used for mobile and IoT applications. Tensorflow Lite only handles inference, not training. It loads pre-trained models (`.tflite` files) that are converted into Tensorflow Lite format (`FlatBuffer`). You can read more about converting [Tensorflow models](https://ai.google.dev/edge/litert/conversion/tensorflow/overview).
The Tensorflow plugin for Fluent Bit has the following limitations:
@@ -33,7 +33,7 @@ To create a Tensorflow Lite shared library:
```
The script creates the shared library `bazel-bin/tensorflow/lite/c/libtensorflowlite_c.so`.
-
+
1. Copy the library to a location such as `/usr/lib` that can be used by Fluent Bit.
## Building Fluent Bit with Tensorflow filter plugin
diff --git a/pipeline/inputs/cpu-metrics.md b/pipeline/inputs/cpu-metrics.md
index 4b20d6596..4077e3d5d 100644
--- a/pipeline/inputs/cpu-metrics.md
+++ b/pipeline/inputs/cpu-metrics.md
@@ -54,7 +54,7 @@ The command returns results similar to the following:
...
```
-As described previously, the CPU input plugin gathers the overall usage every one second and flushed the information to the output on the fifth second. This example uses the `stdout` plugin to demonstrate the output records. In a real use-case you might want to flush this information to some central aggregator such as [Fluentd](http://fluentd.org) or [Elasticsearch](http://elastic.co).
+As described previously, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. This example uses the `stdout` plugin to demonstrate the output records. In a real use case, you might want to flush this information to a central aggregator such as [Fluentd](https://fluentd.org) or [Elasticsearch](https://elastic.co).
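+
+For example, a minimal sketch that ships the same CPU metrics to a Fluentd aggregator over the Forward protocol (the host name and tag are placeholders):
+
+```yaml
+pipeline:
+  inputs:
+    - name: cpu
+      tag: cpu_usage
+  outputs:
+    - name: forward
+      match: cpu_usage
+      # Placeholder address of a Fluentd (or Fluent Bit) aggregator.
+      host: fluentd.example.com
+      port: 24224
+```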
### Configuration file
@@ -89,4 +89,4 @@ pipeline:
```
{% endtab %}
-{% endtabs %}
\ No newline at end of file
+{% endtabs %}
diff --git a/pipeline/inputs/forward.md b/pipeline/inputs/forward.md
index 7a6e0ca1b..fc5608f38 100644
--- a/pipeline/inputs/forward.md
+++ b/pipeline/inputs/forward.md
@@ -1,13 +1,13 @@
# Forward
-_Forward_ is the protocol used by [Fluent Bit](http://fluentbit.io) and [Fluentd](http://www.fluentd.org) to route messages between peers. This plugin implements the input service to listen for Forward messages.
+_Forward_ is the protocol used by [Fluent Bit](https://fluentbit.io) and [Fluentd](https://www.fluentd.org) to route messages between peers. This plugin implements the input service to listen for Forward messages.
## Configuration parameters
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-|:--------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------|
+| Key | Description | Default |
+|:--------------------|:-------------|:----------|
| `buffer_chunk_size` | By default, the buffer used to store incoming Forward messages doesn't allocate the maximum memory allowed; instead, it allocates memory as required. The rounds of allocation are set by `buffer_chunk_size`. The value must conform to the [Unit Size](../../administration/configuring-fluent-bit.md#unit-sizes) specification. | `1024000` |
| `buffer_max_size` | Specify the maximum buffer memory size used to receive a Forward message. The value must conform to the [Unit Size](../../administration/configuring-fluent-bit.md#unit-sizes) specification. | `6144000` |
| `empty_shared_key` | Enable secure forward protocol with a zero-length shared key. Use this to enable user authentication without requiring a shared key, or to connect to Fluentd with a zero-length shared key. | `false` |
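+
+As an illustrative sketch (the listening address is a placeholder, and `listen`/`port` are assumed here in addition to the buffer options above):
+
+```yaml
+pipeline:
+  inputs:
+    - name: forward
+      # Accept Forward messages from any peer on the standard port.
+      listen: 0.0.0.0
+      port: 24224
+      # Grow the receive buffer in 1 MB steps, up to 6 MB per message.
+      buffer_chunk_size: 1M
+      buffer_max_size: 6M
+```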
@@ -188,7 +188,7 @@ pipeline:
## Testing
-After Fluent Bit is running, you can send some messages using the `fluent-cat` tool, provided by [Fluentd](http://www.fluentd.org):
+After Fluent Bit is running, you can send some messages using the `fluent-cat` tool, provided by [Fluentd](https://www.fluentd.org):
```shell
echo '{"key 1": 123456789, "key 2": "abcdefg"}' | fluent-cat my_tag
@@ -200,7 +200,7 @@ When you run the plugin with the following command:
fluent-bit -i forward -o stdout
```
-In [Fluent Bit](http://fluentbit.io) you should see the following output:
+In [Fluent Bit](https://fluentbit.io) you should see the following output:
```text
...
diff --git a/pipeline/inputs/kafka.md b/pipeline/inputs/kafka.md
index 54bcb444f..4abe2bc9d 100644
--- a/pipeline/inputs/kafka.md
+++ b/pipeline/inputs/kafka.md
@@ -18,7 +18,7 @@ This plugin uses the official [librdkafka C library](https://github.com/edenhill
| `group_id` | Group id passed to librdkafka. | `fluent-bit` |
| `poll_ms` | Kafka brokers polling interval in milliseconds. | `500` |
| `poll_timeout_ms` | Timeout in milliseconds for Kafka consumer poll operations. Only effective when `threaded` is enabled. | `1` |
-| `rdkafka.{property}` | `{property}` can be any [librdkafka properties](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md). | _none_ |
+| `rdkafka.{property}` | `{property}` can be any [librdkafka properties](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md). | _none_ |
| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
| `topics` | Single entry or list of comma-separated topics (`,`) that Fluent Bit will subscribe to. | _none_ |
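+
+As an illustrative sketch (the broker address and topic are placeholders, and the `brokers` parameter is assumed in addition to the options above):
+
+```yaml
+pipeline:
+  inputs:
+    - name: kafka
+      # Placeholder broker list and topic.
+      brokers: kafka-broker:9092
+      topics: my_topic
+      group_id: fluent-bit
+      poll_ms: 500
+      # Any librdkafka property can be passed through with the rdkafka. prefix.
+      rdkafka.enable.auto.commit: false
+```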
diff --git a/pipeline/inputs/nginx.md b/pipeline/inputs/nginx.md
index c01913c49..5ce09fe71 100644
--- a/pipeline/inputs/nginx.md
+++ b/pipeline/inputs/nginx.md
@@ -188,4 +188,4 @@ This returns output similar to the following:
## Exported metrics
-For a list of available metrics, see the [NGINX Prometheus Exporter metrics documentation](https://github.com/nginxinc/nginx-prometheus-exporter/blob/main/README.md) on GitHub.
+For a list of available metrics, see the [NGINX Prometheus Exporter metrics documentation](https://github.com/nginx/nginx-prometheus-exporter/blob/main/README.md) on GitHub.
diff --git a/pipeline/inputs/standard-input.md b/pipeline/inputs/standard-input.md
index 0f7f3a2d9..07a45b520 100644
--- a/pipeline/inputs/standard-input.md
+++ b/pipeline/inputs/standard-input.md
@@ -47,7 +47,7 @@ The Fluent Bit event timestamp will be set from the input record if the two-elem
### JSON input
-To demonstrate how the plugin works, you can use a `bash` script that generates messages and writes them to [Fluent Bit](http://fluentbit.io).
+To demonstrate how the plugin works, you can use a `bash` script that generates messages and writes them to [Fluent Bit](https://fluentbit.io).
1. Write the following content in a file named `test.sh`:
@@ -60,7 +60,7 @@ To demonstrate how the plugin works, you can use a `bash` script that generates
done
```
-2. Start the script and [Fluent Bit](http://fluentbit.io):
+2. Start the script and [Fluent Bit](https://fluentbit.io):
```shell
bash test.sh | fluent-bit -q -i stdin -o stdout
diff --git a/pipeline/outputs/azure.md b/pipeline/outputs/azure.md
index 24af7f304..d5521cc8c 100644
--- a/pipeline/outputs/azure.md
+++ b/pipeline/outputs/azure.md
@@ -4,9 +4,9 @@ description: Send logs, metrics to Azure Log Analytics
# Azure Log Analytics
-The Azure output plugin lets you ingest your records into [Azure Log Analytics](https://azure.microsoft.com/en-us/services/log-analytics/) service.
+The Azure output plugin lets you ingest your records into [Azure Log Analytics](https://azure.microsoft.com/en-us/products/monitor/) service.
-For details about how to setup Azure Log Analytics, see the [Azure Log Analytics](https://docs.microsoft.com/en-us/azure/log-analytics/) documentation.
+For details about how to set up Azure Log Analytics, see the [Azure Log Analytics](https://learn.microsoft.com/en-us/azure/azure-monitor/fundamentals/overview) documentation.
## Configuration parameters
diff --git a/pipeline/outputs/azure_blob.md b/pipeline/outputs/azure_blob.md
index 36796e7e2..cd632a89b 100644
--- a/pipeline/outputs/azure_blob.md
+++ b/pipeline/outputs/azure_blob.md
@@ -4,7 +4,7 @@ description: Microsoft certified Azure Storage Blob connector
# Azure Blob
-The Azure Blob output plugin allows ingesting your records into [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) service. This connector is designed to use the Append Blob and Block Blob API.
+The Azure Blob output plugin allows ingesting your records into the [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs/) service. This connector is designed to use the Append Blob and Block Blob APIs.
The Fluent Bit plugin works with the official Azure Service and can be configured to be used with a service emulator such as [Azurite](https://github.com/Azure/Azurite).
diff --git a/pipeline/outputs/azure_kusto.md b/pipeline/outputs/azure_kusto.md
index b9fb5c9aa..00e3a7b9a 100644
--- a/pipeline/outputs/azure_kusto.md
+++ b/pipeline/outputs/azure_kusto.md
@@ -4,14 +4,14 @@ description: Send logs to Azure Data Explorer (Kusto)
# Azure Data Explorer
-The _Kusto_ output plugin lets you ingest your logs into an [Azure Data Explorer](https://azure.microsoft.com/en-us/services/data-explorer/) cluster, using the [Queued Ingestion](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/api/netfx/about-kusto-ingest#queued-ingestion) mechanism. This output plugin can also be used to ingest logs into an [Eventhouse](https://blog.fabric.microsoft.com/en-us/blog/eventhouse-overview-handling-real-time-data-with-microsoft-fabric/) cluster in Microsoft Fabric Real Time Analytics.
+The _Kusto_ output plugin lets you ingest your logs into an [Azure Data Explorer](https://azure.microsoft.com/en-us/products/data-explorer/) cluster, using the [Queued Ingestion](https://learn.microsoft.com/en-us/kusto/api/netfx/about-kusto-ingest?view=azure-data-explorer&preserve-view=true&tabs=csharp#queued-ingestion) mechanism. This output plugin can also be used to ingest logs into an [Eventhouse](https://blog.fabric.microsoft.com/en-us/blog/eventhouse-overview-handling-real-time-data-with-microsoft-fabric/) cluster in Microsoft Fabric Real Time Analytics.
## Ingest into Azure Data Explorer: create a Kusto cluster and database
Create an Azure Data Explorer cluster in one of the following ways:
- [Create a free-tier cluster](https://dataexplorer.azure.com/freecluster)
-- [Create a fully featured cluster](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal)
+- [Create a fully featured cluster](https://learn.microsoft.com/en-us/azure/data-explorer/create-cluster-and-database?tabs=free)
## Ingest into Microsoft Fabric real time analytics: create an Eventhouse cluster and KQL database
diff --git a/pipeline/outputs/azure_logs_ingestion.md b/pipeline/outputs/azure_logs_ingestion.md
index 9354c9bf4..5dc01024a 100644
--- a/pipeline/outputs/azure_logs_ingestion.md
+++ b/pipeline/outputs/azure_logs_ingestion.md
@@ -18,22 +18,22 @@ To visualize the basic logs ingestion operation, see the following image:
To get more details about how to set up these components, refer to the following documentation:
-- [Azure Logs Ingestion API](https://docs.microsoft.com/en-us/azure/log-analytics/)
+- [Azure Logs Ingestion API](https://learn.microsoft.com/en-us/azure/azure-monitor/fundamentals/overview)
- [Send data to Azure Monitor Logs with Logs ingestion API (setup DCE, DCR and Log Analytics)](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/tutorial-logs-ingestion-portal)
## Configuration parameters
-| Key | Description | Default |
+| Key | Description | Default |
| :------------ | :------------------------- | :------ |
-| `tenant_id` | The tenant ID of the Azure Active Directory (AAD) application. | _none_ |
-| `client_id` | The client ID of the AAD application. | _none_ |
-| `client_secret`| The client secret of the AAD application ([App Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret)). | _none_ |
-| `dce_url` | Data Collection Endpoint(DCE) URL. | _none_ |
-| `dcr_id` | Data Collection Rule (DCR) [immutable ID](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/tutorial-logs-ingestion-portal#collect-information-from-the-dcr). | _none_ |
-| `table_name` | The name of the custom log table (include the `_CL` suffix as well if applicable) | _none_ |
-| `time_key` | Optional. Specify the key name where the timestamp will be stored. | `@timestamp` |
+| `tenant_id` | The tenant ID of the Azure Active Directory (AAD) application. | _none_ |
+| `client_id` | The client ID of the AAD application. | _none_ |
+| `client_secret` | The client secret of the AAD application ([App Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret)). | _none_ |
+| `dce_url` | Data Collection Endpoint (DCE) URL. | _none_ |
+| `dcr_id` | Data Collection Rule (DCR) [immutable ID](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/tutorial-logs-ingestion-portal#collect-information-from-the-dcr). | _none_ |
+| `table_name` | The name of the custom log table (include the `_CL` suffix if applicable). | _none_ |
+| `time_key` | Optional. Specify the key name where the timestamp will be stored. | `@timestamp` |
| `time_generated` | Optional. If enabled, will generate a timestamp and append it to JSON. The key name is set by the `time_key` parameter. | `true` |
-| `compress` | Optional. Enable HTTP payload gzip compression. | `true` |
+| `compress` | Optional. Enable HTTP payload gzip compression. | `true` |
| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
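+
+As an illustrative sketch (all IDs, secrets, and URLs below are placeholders, and the plugin name is assumed from the page title):
+
+```yaml
+pipeline:
+  outputs:
+    - name: azure_logs_ingestion
+      match: '*'
+      tenant_id: <tenant_id>
+      client_id: <client_id>
+      client_secret: <client_secret>
+      # Placeholder Data Collection Endpoint and Rule.
+      dce_url: https://my-dce-abc1.eastus-1.ingest.monitor.azure.com
+      dcr_id: dcr-00000000000000000000000000000000
+      table_name: Custom-MyTable_CL
+      time_key: '@timestamp'
+      compress: true
+```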
## Get started
diff --git a/pipeline/outputs/elasticsearch.md b/pipeline/outputs/elasticsearch.md
index 96ee4a9bd..0c21f31f1 100644
--- a/pipeline/outputs/elasticsearch.md
+++ b/pipeline/outputs/elasticsearch.md
@@ -4,7 +4,7 @@ description: Send logs to Elasticsearch (including Amazon OpenSearch Service)
# Elasticsearch
-The _Elasticsearch_ (`es`) output plugin lets you ingest your records into an [Elasticsearch](http://www.elastic.co) database. To use this plugin, you must have an operational Elasticsearch service running in your environment.
+The _Elasticsearch_ (`es`) output plugin lets you ingest your records into an [Elasticsearch](https://www.elastic.co) database. To use this plugin, you must have an operational Elasticsearch service running in your environment.
## Configuration parameters
@@ -36,7 +36,7 @@ This plugin has the following configuration parameters:
| `Logstash_Prefix` | When `Logstash_Format` is enabled, the Index name is composed using a prefix and the date, e.g: If `Logstash_Prefix` is equal to `mydata` your index will become `mydata-YYYY.MM.DD`. The last string appended belongs to the date when the data is being generated. | `logstash` |
| `Logstash_Prefix_Key` | When included: the value of the key in the record will be evaluated as key reference and overrides `Logstash_Prefix` for index generation. If the key/value isn't found in the record then the `Logstash_Prefix` option will act as a fallback. The parameter is expected to be a [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). | _none_ |
| `Logstash_Prefix_Separator` | Set a separator between `Logstash_Prefix` and date. | `-` |
-| `Logstash_DateFormat` | Time format based on [strftime](http://man7.org/linux/man-pages/man3/strftime.3.html) to generate the second part of the Index name. | `%Y.%m.%d` |
+| `Logstash_DateFormat` | Time format based on [strftime](https://man7.org/linux/man-pages/man3/strftime.3.html) to generate the second part of the Index name. | `%Y.%m.%d` |
| `Time_Key` | When `Logstash_Format` is enabled, each record will get a new timestamp field. The `Time_Key` property defines the name of that field. | `@timestamp` |
| `Time_Key_Format` | When `Logstash_Format` is enabled, this property defines the format of the timestamp. | `%Y-%m-%dT%H:%M:%S` |
| `Time_Key_Nanos` | When `Logstash_Format` is enabled, enabling this property sends nanosecond precision timestamps. | `Off` |
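+
+As an illustrative sketch of the `Logstash_*` options above (the host, port, and prefix values are placeholders):
+
+```yaml
+pipeline:
+  outputs:
+    - name: es
+      match: '*'
+      # Placeholder Elasticsearch endpoint.
+      host: elasticsearch.example.com
+      port: 9200
+      # Produces daily indexes such as mydata-2024.01.31.
+      logstash_format: on
+      logstash_prefix: mydata
+      logstash_dateformat: '%Y.%m.%d'
+      time_key: '@timestamp'
+```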
diff --git a/pipeline/outputs/forward.md b/pipeline/outputs/forward.md
index 8e6bd67c9..bb36b716c 100644
--- a/pipeline/outputs/forward.md
+++ b/pipeline/outputs/forward.md
@@ -1,8 +1,8 @@
# Forward
-_Forward_ is the protocol used by [Fluentd](http://www.fluentd.org) to route messages between peers. The `forward` output plugin provides interoperability between [Fluent Bit](http://fluentbit.io) and [Fluentd](http://fluentd.org).
+_Forward_ is the protocol used by [Fluentd](https://www.fluentd.org) to route messages between peers. The `forward` output plugin provides interoperability between [Fluent Bit](https://fluentbit.io) and [Fluentd](https://fluentd.org).
-There are no configuration steps required besides specifying where [Fluentd](http://fluentd.org) is located, which can be a local or a remote destination.
+There are no configuration steps required besides specifying where [Fluentd](https://fluentd.org) is located, which can be a local or a remote destination.
This plugin offers the following transports and modes:
@@ -49,7 +49,7 @@ When using Secure Forward mode, the [TLS](../../administration/transport-securit
## Forward setup
-Before proceeding, ensure that [Fluentd](http://fluentd.org) is installed. If it's not, refer to the [Fluentd Installation](http://docs.fluentd.org/installation) document.
+Before proceeding, ensure that [Fluentd](https://fluentd.org) is installed. If it's not, refer to the [Fluentd Installation](https://docs.fluentd.org/installation) document.
After installing Fluentd, create the following example configuration file, which lets you stream data into it:
diff --git a/pipeline/outputs/http.md b/pipeline/outputs/http.md
index d5f4bd5c9..c21cf7103 100644
--- a/pipeline/outputs/http.md
+++ b/pipeline/outputs/http.md
@@ -1,6 +1,6 @@
# HTTP
-The _HTTP_ output plugin lets you flush your records into an HTTP endpoint. It issues `POST` or `PUT` requests with the data records in [MessagePack](http://msgpack.org) (or JSON) format.
+The _HTTP_ output plugin lets you flush your records into an HTTP endpoint. It issues `POST` or `PUT` requests with the data records in [MessagePack](https://msgpack.org) (or JSON) format.
## Configuration parameters
diff --git a/pipeline/outputs/kafka-rest-proxy.md b/pipeline/outputs/kafka-rest-proxy.md
index eca41e1b5..685508b2b 100644
--- a/pipeline/outputs/kafka-rest-proxy.md
+++ b/pipeline/outputs/kafka-rest-proxy.md
@@ -1,6 +1,6 @@
# Kafka REST proxy
-The _Kafka rest_ (`kafka-rest`) output plugin lets you flush your records into a [Kafka REST Proxy](http://docs.confluent.io/current/kafka-rest/docs/index.html) server. The following instructions assume you have an operational Kafka REST Proxy and Kafka services running in your environment.
+The _Kafka rest_ (`kafka-rest`) output plugin lets you flush your records into a [Kafka REST Proxy](https://docs.confluent.io/platform/current/kafka-rest/index.html) server. The following instructions assume you have an operational Kafka REST Proxy and Kafka services running in your environment.
## Configuration parameters
diff --git a/pipeline/outputs/observe.md b/pipeline/outputs/observe.md
index ebc32b532..b162add17 100644
--- a/pipeline/outputs/observe.md
+++ b/pipeline/outputs/observe.md
@@ -1,6 +1,6 @@
# Observe
-Use the [HTTP output plugin](./http.md) to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html). It issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format.
+Use the [HTTP output plugin](./http.md) to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html). It issues a POST request with the data records in [MessagePack](https://msgpack.org) (or JSON) format.
## Configuration parameters
diff --git a/pipeline/outputs/opensearch.md b/pipeline/outputs/opensearch.md
index a19d58d32..e1879d4f6 100644
--- a/pipeline/outputs/opensearch.md
+++ b/pipeline/outputs/opensearch.md
@@ -32,7 +32,7 @@ This plugin supports the following parameters:
| `Logstash_Prefix` | When `Logstash_Format` is enabled, the Index name is composed using a prefix and the date. For example, if `Logstash_Prefix` is equal to `mydata` your index will become `mydata-YYYY.MM.DD`. The last string appended belongs to the date when the data is being generated. | `logstash` |
| `Logstash_Prefix_Key` | When included, the value of the key in the record will be evaluated as key reference and overrides `Logstash_Prefix` for index generation. If the key/value isn't found in the record then the `Logstash_Prefix` option will act as a fallback. The parameter is expected to be a [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). | _none_ |
| `Logstash_Prefix_Separator` | Set a separator between `Logstash_Prefix` and `Date`. | `-` |
-| `Logstash_DateFormat` | Time format, based on [strftime](http://man7.org/linux/man-pages/man3/strftime.3.html), to generate the second part of the `Index` name. | `%Y.%m.%d` |
+| `Logstash_DateFormat` | Time format, based on [strftime](https://man7.org/linux/man-pages/man3/strftime.3.html), to generate the second part of the `Index` name. | `%Y.%m.%d` |
| `Time_Key` | When `Logstash_Format` is enabled, each record will get a new timestamp field. The `Time_Key` property defines the name of that field. | `@timestamp` |
| `Time_Key_Format` | When `Logstash_Format` is enabled, this property defines the format of the timestamp. | `%Y-%m-%dT%H:%M:%S` |
| `Time_Key_Nanos` | When `Logstash_Format` is enabled, enabling this property sends nanosecond precision timestamps. | `Off` |
diff --git a/pipeline/outputs/plot.md b/pipeline/outputs/plot.md
index 5afe326e3..6885dc75a 100644
--- a/pipeline/outputs/plot.md
+++ b/pipeline/outputs/plot.md
@@ -4,7 +4,7 @@ description: Generate data file for GNU Plot
# Plot
-The _Plot_ output plugin generates data files in a format compatible with [GNU Plot](http://www.gnuplot.info/) (`gnuplot`), a command-line graphing tool. This plugin lets you export your telemetry data for visualization and analysis using `gnuplot`.
+The _Plot_ output plugin generates data files in a format compatible with [GNU Plot](https://www.gnuplot.info/) (`gnuplot`), a command-line graphing tool. This plugin lets you export your telemetry data for visualization and analysis using `gnuplot`.
## Configuration parameters
diff --git a/pipeline/outputs/splunk.md b/pipeline/outputs/splunk.md
index f905392ad..5adba02a8 100644
--- a/pipeline/outputs/splunk.md
+++ b/pipeline/outputs/splunk.md
@@ -6,7 +6,7 @@ description: Send logs to Splunk HTTP Event Collector
The _Splunk_ output plugin lets you ingest your records into a [Splunk Enterprise](https://www.splunk.com/en_us/products/splunk-enterprise.html) service through the HTTP Event Collector (HEC) interface.
-To learn how to set up the HEC in Splunk, refer to [Splunk / Use the HTTP Event Collector](http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector).
+To learn how to set up the HEC in Splunk, refer to [Splunk / Use the HTTP Event Collector](https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector).
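+
+As an illustrative sketch (the host and token are placeholders; `splunk_token` and the TLS options are assumed from the parameters described below):
+
+```yaml
+pipeline:
+  outputs:
+    - name: splunk
+      match: '*'
+      # Placeholder HEC endpoint and token.
+      host: splunk.example.com
+      port: 8088
+      splunk_token: <your-hec-token>
+      tls: on
+      tls.verify: off
+```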
## Configuration parameters
diff --git a/pipeline/outputs/treasure-data.md b/pipeline/outputs/treasure-data.md
index 8799fbf12..c017942f0 100644
--- a/pipeline/outputs/treasure-data.md
+++ b/pipeline/outputs/treasure-data.md
@@ -1,6 +1,6 @@
# Treasure Data
-The _Treasure Data_ (TD) output plugin lets you flush your records into the [Treasure Data](http://treasuredata.com) cloud service.
+The _Treasure Data_ (TD) output plugin lets you flush your records into the [Treasure Data](https://treasuredata.com) cloud service.
## Configuration parameters
diff --git a/pipeline/outputs/websocket.md b/pipeline/outputs/websocket.md
index 7be6bc72a..128622ef9 100644
--- a/pipeline/outputs/websocket.md
+++ b/pipeline/outputs/websocket.md
@@ -1,6 +1,6 @@
# WebSocket
-The _WebSocket_ output plugin lets you flush your records into a WebSocket endpoint. It issues an HTTP `GET` request to do the handshake, and then uses TCP connections to send the data records in either JSON or [MessagePack](http://msgpack.org) format.
+The _WebSocket_ output plugin lets you flush your records into a WebSocket endpoint. It issues an HTTP `GET` request to do the handshake, and then uses TCP connections to send the data records in either JSON or [MessagePack](https://msgpack.org) format.
## Configuration parameters
@@ -137,7 +137,7 @@ When Fluent Bit is running, you can send some messages using `netcat`:
echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170; sleep 35; echo '{"key 1": 123456789, "key 2": "abcdefg"}' | nc 127.0.0.1 5170
```
-In [Fluent Bit](http://fluentbit.io) you should see the following output:
+In [Fluent Bit](https://fluentbit.io) you should see the following output:
```shell
fluent-bit -c ../conf/out_ws.conf
diff --git a/pipeline/parsers/configuring-parser.md b/pipeline/parsers/configuring-parser.md
index dc06ba9be..03f7cd3f2 100644
--- a/pipeline/parsers/configuring-parser.md
+++ b/pipeline/parsers/configuring-parser.md
@@ -4,7 +4,7 @@ Use the information on this page to configure custom [parsers](../parsers.md).
{% hint style="info" %}
-To define a custom parser, add an entry to the [`parsers` section](../administration/configuring-fluent-bit/yaml/parsers-section.md) of your YAML configuration file, or create a [standalone parser file](../administration/configuring-fluent-bit/yaml/parsers-section.md#standalone-parsers-files).
+To define a custom parser, add an entry to the [`parsers` section](../../administration/configuring-fluent-bit/yaml/parsers-section.md) of your YAML configuration file, or create a [standalone parser file](../../administration/configuring-fluent-bit/yaml/parsers-section.md#standalone-parsers-files).
{% endhint %}
@@ -30,7 +30,7 @@ Custom parsers support the following configuration parameters:
### Time resolution and fractional seconds
-Time resolution and its format supported are handled by using the [strftime\(3\)](http://man7.org/linux/man-pages/man3/strftime.3.html) `libc` system function.
+Time resolution and the supported formats are handled by the [strftime\(3\)](https://man7.org/linux/man-pages/man3/strftime.3.html) `libc` system function.
In addition, Fluent Bit extends its time resolution to support fractional seconds like `2017-05-17T15:44:31.187512963Z`. The `%L` format option for `Time_Format` is provided as a way to indicate that content must be interpreted as fractional seconds.
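+
+As an illustrative sketch, a parser that accepts such fractional-second timestamps might be declared like this (the parser name and `time_key` are placeholders):
+
+```yaml
+parsers:
+  - name: json_fractional_time
+    format: json
+    time_key: time
+    # %L captures the fractional-seconds portion of the timestamp.
+    time_format: '%Y-%m-%dT%H:%M:%S.%L%z'
+```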
diff --git a/pipeline/parsers/multiline-parsing.md b/pipeline/parsers/multiline-parsing.md
index e288a04be..2345f3f3d 100644
--- a/pipeline/parsers/multiline-parsing.md
+++ b/pipeline/parsers/multiline-parsing.md
@@ -32,7 +32,7 @@ Custom multiline parsers support the following configuration parameters.
{% hint style="info" %}
-To define a custom multiline parser, add an entry to the [`multiline_parsers` section](../configuring-fluent-bit/yaml/multiline-parsers-section.md) of your YAML configuration file.
+To define a custom multiline parser, add an entry to the [`multiline_parsers` section](../../administration/configuring-fluent-bit/yaml/multiline-parsers-section.md) of your YAML configuration file.
{% endhint %}