diff --git a/SUMMARY.md b/SUMMARY.md index 2d6f3b6fd..49322acaf 100644 --- a/SUMMARY.md +++ b/SUMMARY.md @@ -24,63 +24,63 @@ ## Installation -* [Getting Started with Fluent Bit](installation/getting-started-with-fluent-bit.md) -* [Upgrade Notes](installation/upgrade-notes.md) -* [Supported Platforms](installation/supported-platforms.md) +* [Get started with Fluent Bit](installation/getting-started-with-fluent-bit.md) +* [Upgrade notes](installation/upgrade-notes.md) +* [Supported platforms](installation/supported-platforms.md) * [Requirements](installation/requirements.md) * [Sources](installation/sources/README.md) - * [Download Source Code](installation/sources/download-source-code.md) - * [Build and Install](installation/sources/build-and-install.md) - * [Build with Static Configuration](installation/sources/build-with-static-configuration.md) -* [Linux Packages](installation/linux/README.md) + * [Download source code](installation/sources/download-source-code.md) + * [Build and install](installation/sources/build-and-install.md) + * [Build with static configuration](installation/sources/build-with-static-configuration.md) +* [Linux packages](installation/linux/README.md) * [Amazon Linux](installation/linux/amazon-linux.md) - * [Alma / Rocky Linux](installation/linux/alma-rocky.md) - * [Redhat / CentOS](installation/linux/redhat-centos.md) + * [Rocky Linux and Alma Linux ](installation/linux/alma-rocky.md) + * [Red Hat and CentOS](installation/linux/redhat-centos.md) * [Debian](installation/linux/debian.md) * [Ubuntu](installation/linux/ubuntu.md) - * [Raspbian / Raspberry Pi](installation/linux/raspbian-raspberry-pi.md) + * [Raspbian and Raspberry Pi](installation/linux/raspbian-raspberry-pi.md) * [Docker](installation/docker.md) * [Containers on AWS](installation/aws-container.md) * [Amazon EC2](installation/amazon-ec2.md) * [Kubernetes](installation/kubernetes.md) * [macOS](installation/macos.md) * [Windows](installation/windows.md) -* [Yocto / Embedded Linux](installation/yocto-embedded-linux.md) -* [Buildroot / Embedded Linux](installation/buildroot-embedded-linux.md) +* [Yocto embedded Linux](installation/yocto-embedded-linux.md) +* [Buildroot embedded Linux](installation/buildroot-embedded-linux.md) ## Administration -* [Configuring Fluent Bit](administration/configuring-fluent-bit/README.md) - * [YAML Configuration](administration/configuring-fluent-bit/yaml/README.md) +* [Configure Fluent Bit](administration/configuring-fluent-bit/README.md) + * [YAML configuration](administration/configuring-fluent-bit/yaml/README.md) * [Service](administration/configuring-fluent-bit/yaml/service-section.md) * [Parsers](administration/configuring-fluent-bit/yaml/parsers-section.md) - * [Multiline Parsers](administration/configuring-fluent-bit/yaml/multiline-parsers-section.md) + * [Multiline parsers](administration/configuring-fluent-bit/yaml/multiline-parsers-section.md) * [Pipeline](administration/configuring-fluent-bit/yaml/pipeline-section.md) * [Plugins](administration/configuring-fluent-bit/yaml/plugins-section.md) - * [Upstream Servers](administration/configuring-fluent-bit/yaml/upstream-servers-section.md) - * [Environment Variables](administration/configuring-fluent-bit/yaml/environment-variables-section.md) + * [Upstream servers](administration/configuring-fluent-bit/yaml/upstream-servers-section.md) + * [Environment variables](administration/configuring-fluent-bit/yaml/environment-variables-section.md) * [Includes](administration/configuring-fluent-bit/yaml/includes-section.md) * 
[Classic mode](administration/configuring-fluent-bit/classic-mode/README.md) - * [Format and Schema](administration/configuring-fluent-bit/classic-mode/format-schema.md) - * [Configuration File](administration/configuring-fluent-bit/classic-mode/configuration-file.md) + * [Format and schema](administration/configuring-fluent-bit/classic-mode/format-schema.md) + * [Configuration file](administration/configuring-fluent-bit/classic-mode/configuration-file.md) * [Variables](administration/configuring-fluent-bit/classic-mode/variables.md) * [Commands](administration/configuring-fluent-bit/classic-mode/commands.md) - * [Upstream Servers](administration/configuring-fluent-bit/classic-mode/upstream-servers.md) - * [Record Accessor](administration/configuring-fluent-bit/classic-mode/record-accessor.md) + * [Upstream servers](administration/configuring-fluent-bit/classic-mode/upstream-servers.md) + * [Record accessor syntax](administration/configuring-fluent-bit/classic-mode/record-accessor.md) * [Unit Sizes](administration/configuring-fluent-bit/unit-sizes.md) - * [Multiline Parsing](administration/configuring-fluent-bit/multiline-parsing.md) -* [Transport Security](administration/transport-security.md) -* [Buffering and Storage](administration/buffering-and-storage.md) + * [Multiline parsing](administration/configuring-fluent-bit/multiline-parsing.md) +* [TLS](administration/transport-security.md) +* [Buffering and storage](administration/buffering-and-storage.md) * [Backpressure](administration/backpressure.md) -* [Scheduling and Retries](administration/scheduling-and-retries.md) +* [Scheduling and retries](administration/scheduling-and-retries.md) * [Networking](administration/networking.md) -* [Memory Management](administration/memory-management.md) +* [Memory management](administration/memory-management.md) * [Monitoring](administration/monitoring.md) * [Multithreading](administration/multithreading.md) -* [HTTP Proxy](administration/http-proxy.md) -* [Hot Reload](administration/hot-reload.md) +* [HTTP proxy](administration/http-proxy.md) +* [Hot reload](administration/hot-reload.md) * [Troubleshooting](administration/troubleshooting.md) -* [Performance Tips](administration/performance.md) +* [Performance tips](administration/performance.md) * [AWS credentials](administration/aws-credentials.md) ## Local Testing diff --git a/administration/aws-credentials.md b/administration/aws-credentials.md index c36fcc661..9fbd5ea68 100644 --- a/administration/aws-credentials.md +++ b/administration/aws-credentials.md @@ -1,7 +1,6 @@ -# AWS Credentials +# AWS credentials -Plugins that interact with AWS services fetch credentials from the following providers -in order. Only the first provider that provides credentials is used. +Plugins that interact with AWS services fetch credentials from the following providers in order. Only the first provider that provides credentials is used. - [Environment variables](#environment-variables) - [Shared configuration and credentials files](#shared-configuration-and-credentials-files) @@ -9,22 +8,15 @@ in order. Only the first provider that provides credentials is used. - [ECS HTTP credentials endpoint](#ecs-http-credentials-endpoint) - [EC2 Instance Profile Credentials (IMDS)](#ec2-instance-profile-credentials-imds) -All AWS plugins additionally support a `role_arn` (or `AWS_ROLE_ARN`, for -[Elasticsearch](../pipeline/outputs/elasticsearch.md)) configuration parameter. If -specified, the fetched credentials are used to assume the given role. 
+All AWS plugins additionally support a `role_arn` (or `AWS_ROLE_ARN`, for [Elasticsearch](../pipeline/outputs/elasticsearch.md)) configuration parameter. If specified, the fetched credentials are used to assume the given role. ## Environment variables -Plugins use the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (and optionally -`AWS_SESSION_TOKEN`) environment variables if set. +Plugins use the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (and optionally `AWS_SESSION_TOKEN`) environment variables if set. ## Shared configuration and credentials files -Plugins read the shared `config` file at `$AWS_CONFIG_FILE` (or `$HOME/.aws/config`), -and the shared credentials file at `$AWS_SHARED_CREDENTIALS_FILE` (or -`$HOME/.aws/credentials`) to fetch the credentials for the profile named -`$AWS_PROFILE` or `$AWS_DEFAULT_PROFILE` (or "default"). See -[Configuration and credential file settings in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). +Plugins read the shared `config` file at `$AWS_CONFIG_FILE` (or `$HOME/.aws/config`), and the shared credentials file at `$AWS_SHARED_CREDENTIALS_FILE` (or `$HOME/.aws/credentials`) to fetch the credentials for the profile named `$AWS_PROFILE` or `$AWS_DEFAULT_PROFILE` (or "default"). See [Configuration and credential file settings in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). The shared settings evaluate in the following order: @@ -37,22 +29,16 @@ No other settings are supported. ## EKS Web Identity Token (OIDC) -Credentials are fetched using a signed web identity token for a Kubernetes service account. -See [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). +Credentials are fetched using a signed web identity token for a Kubernetes service account. See [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). ## ECS HTTP credentials endpoint -Credentials are fetched for the ECS task's role. See -[Amazon ECS task IAM role](https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html). +Credentials are fetched for the ECS task's role. See [Amazon ECS task IAM role](https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html). ## EKS Pod Identity credentials -Credentials are fetched using a pod identity endpoint. See -[Learn how EKS Pod Identity grants pods access to AWS services](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html). +Credentials are fetched using a pod identity endpoint. See [Learn how EKS Pod Identity grants pods access to AWS services](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html). ## EC2 instance profile credentials (IMDS) -Fetches credentials for the EC2 instance profile's role. See -[IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html). -As of Fluent Bit version 1.8.8, IMDSv2 is used by default and IMDSv1 might be disabled. -Prior versions of Fluent Bit require enabling IMDSv1 on EC2. +Fetches credentials for the EC2 instance profile's role. See [IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html). As of Fluent Bit version 1.8.8, IMDSv2 is used by default and IMDSv1 might be disabled. Prior versions of Fluent Bit require enabling IMDSv1 on EC2. 
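The page above describes the provider chain but doesn't show how `role_arn` fits into a plugin configuration. The following is a minimal sketch in YAML format; the choice of the `cloudwatch_logs` output, the account ID, and the log group and stream names are illustrative placeholders rather than part of the original page.

```yaml
# Sketch only: plugin choice and values are illustrative.
pipeline:
  outputs:
    - name: cloudwatch_logs
      match: '*'
      region: us-east-1
      log_group_name: fluent-bit-logs
      log_stream_prefix: from-fluent-bit-
      # Base credentials come from the provider chain described above
      # (environment variables, shared files, EKS, ECS, or IMDS). When
      # role_arn is set, those credentials are used to assume this role.
      role_arn: arn:aws:iam::123456789012:role/fluent-bit-delivery
```

If `role_arn` is omitted, the plugin uses the fetched credentials directly.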
diff --git a/administration/backpressure.md b/administration/backpressure.md index 8cfa03723..90f0ac9ee 100644 --- a/administration/backpressure.md +++ b/administration/backpressure.md @@ -2,47 +2,21 @@ -It's possible for logs or data to be ingested or created faster than the ability to -flush it to some destinations. A common scenario is when reading from big log files, -especially with a large backlog, and dispatching the logs to a backend over the -network, which takes time to respond. This generates _backpressure_, leading to high -memory consumption in the service. - -To avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts -the amount of data an input plugin can ingest. Restriction is done through the -configuration parameters `Mem_Buf_Limit` and `storage.Max_Chunks_Up`. - -As described in the [Buffering](../concepts/buffering.md) concepts section, Fluent -Bit offers two modes for data handling: in-memory only (default) and in-memory and -filesystem (optional). - -The default `storage.type memory` buffer can be restricted with `Mem_Buf_Limit`. If -memory reaches this limit and you reach a backpressure scenario, you won't be able -to ingest more data until the data chunks that are in memory can be flushed. The -input pauses and Fluent Bit -[emits](https://github.com/fluent/fluent-bit/blob/v2.0.0/src/flb_input_chunk.c#L1334) -a `[warn] [input] {input name or alias} paused (mem buf overlimit)` log message. - -Depending on the input plugin in use, this might cause incoming data to be discarded -(for example, TCP input plugin). The tail plugin can handle pauses without data -ingloss, storing its current file offset and resuming reading later. When buffer -memory is available, the input resumes accepting logs. Fluent Bit -[emits](https://github.com/fluent/fluent-bit/blob/v2.0.0/src/flb_input_chunk.c#L1277) -a `[info] [input] {input name or alias} resume (mem buf overlimit)` message. - -Mitigate the risk of data loss by configuring secondary storage on the filesystem -using the `storage.type` of `filesystem` (as described in [Buffering & -Storage](buffering-and-storage.md)). Initially, logs will be buffered to both memory -and the filesystem. When the `storage.max_chunks_up` limit is reached, all new data -will be stored in the filesystem. Fluent Bit stops queueing new data in memory and -buffers only to the filesystem. When `storage.type filesystem` is set, the -`Mem_Buf_Limit` setting no longer has any effect. Instead, the `[SERVICE]` level -`storage.max_chunks_up` setting controls the size of the memory buffer. +It's possible for logs or data to be ingested or created faster than the ability to flush it to some destinations. A common scenario is when reading from big log files, especially with a large backlog, and dispatching the logs to a backend over the network, which takes time to respond. This generates _backpressure_, leading to high memory consumption in the service. + +To avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data an input plugin can ingest. Restriction is done through the configuration parameters `Mem_Buf_Limit` and `storage.Max_Chunks_Up`. + +As described in the [Buffering](../concepts/buffering.md) concepts section, Fluent Bit offers two modes for data handling: in-memory only (default) and in-memory and filesystem (optional). + +The default `storage.type memory` buffer can be restricted with `Mem_Buf_Limit`.
If memory reaches this limit and you reach a backpressure scenario, you won't be able to ingest more data until the data chunks that are in memory can be flushed. The input pauses and Fluent Bit [emits](https://github.com/fluent/fluent-bit/blob/v2.0.0/src/flb_input_chunk.c#L1334) a `[warn] [input] {input name or alias} paused (mem buf overlimit)` log message. + +Depending on the input plugin in use, this might cause incoming data to be discarded (for example, TCP input plugin). The tail plugin can handle pauses without data loss, storing its current file offset and resuming reading later. When buffer memory is available, the input resumes accepting logs. Fluent Bit [emits](https://github.com/fluent/fluent-bit/blob/v2.0.0/src/flb_input_chunk.c#L1277) a `[info] [input] {input name or alias} resume (mem buf overlimit)` message. + +Mitigate the risk of data loss by configuring secondary storage on the filesystem using the `storage.type` of `filesystem` (as described in [Buffering & Storage](buffering-and-storage.md)). Initially, logs will be buffered to both memory and the filesystem. When the `storage.max_chunks_up` limit is reached, all new data will be stored in the filesystem. Fluent Bit stops queueing new data in memory and buffers only to the filesystem. When `storage.type filesystem` is set, the `Mem_Buf_Limit` setting no longer has any effect. Instead, the `[SERVICE]` level `storage.max_chunks_up` setting controls the size of the memory buffer. ## `Mem_Buf_Limit` -`Mem_Buf_Limit` applies only with the default `storage.type memory`. This option is -disabled by default and can be applied to all input plugins. +`Mem_Buf_Limit` applies only with the default `storage.type memory`. This option is disabled by default and can be applied to all input plugins. As an example situation: @@ -53,21 +27,14 @@ As an example situation: - Engine scheduler retries the flush after 10 seconds. - The input plugin tries to append 500 KB. -In this situation, the engine allows appending those 500 KB of data into the memory, -with a total of 1.2 MB of data buffered. The limit is permissive and will -allow a single write past the limit. When the limit is exceeded, the following -actions are taken: +In this situation, the engine allows appending those 500 KB of data into the memory, with a total of 1.2 MB of data buffered. The limit is permissive and will allow a single write past the limit. When the limit is exceeded, the following actions are taken: - Block local buffers for the input plugin (can't append more data). - Notify the input plugin, invoking a `pause` callback. -The engine protects itself and won't append more data coming from the input plugin in -question. It's the responsibility of the plugin to keep state and decide what to do -in a `paused` state. +The engine protects itself and won't append more data coming from the input plugin in question. It's the responsibility of the plugin to keep state and decide what to do in a `paused` state. -In a few seconds, if the scheduler was able to flush the initial 700 KB of data or it -has given up after retrying, that amount of memory is released and the following -actions occur: +In a few seconds, if the scheduler was able to flush the initial 700 KB of data or it has given up after retrying, that amount of memory is released and the following actions occur: - Upon data buffer release (700 KB), the internal counters get updated. - Counters now are set at 500 KB.
@@ -77,42 +44,28 @@ actions occur: ## `storage.max_chunks_up` -The `[SERVICE]` level `storage.max_chunks_up` setting controls the size of the memory -buffer. When `storage.type filesystem` is set, the `Mem_Buf_Limit` setting no longer -has an effect. +The `[SERVICE]` level `storage.max_chunks_up` setting controls the size of the memory buffer. When `storage.type filesystem` is set, the `Mem_Buf_Limit` setting no longer has an effect. -The setting behaves similar to the `Mem_Buf_Limit` scenario when the non-default -`storage.pause_on_chunks_overlimit` is enabled. +The setting behaves similarly to the `Mem_Buf_Limit` scenario when the non-default `storage.pause_on_chunks_overlimit` is enabled. -When (default) `storage.pause_on_chunks_overlimit` is disabled, the input won't pause -when the memory limit is reached. Instead, it switches to buffering logs only in -the filesystem. Limit the disk spaced used for filesystem buffering with -`storage.total_limit_size`. +When (default) `storage.pause_on_chunks_overlimit` is disabled, the input won't pause when the memory limit is reached. Instead, it switches to buffering logs only in the filesystem. Limit the disk space used for filesystem buffering with `storage.total_limit_size`. See [Buffering & Storage](buffering-and-storage.md) docs for more information. ## About pause and resume callbacks -Each plugin is independent and not all of them implement `pause` and `resume` -callbacks. These callbacks are a notification mechanism for the plugin. +Each plugin is independent and not all of them implement `pause` and `resume` callbacks. These callbacks are a notification mechanism for the plugin. -One example of a plugin that implements these callbacks and keeps state correctly is -the [Tail Input](../pipeline/inputs/tail.md) plugin. When the `pause` callback -triggers, it pauses its collectors and stops appending data. Upon `resume`, it -resumes the collectors and continues ingesting data. Tail tracks the current file -offset when it pauses, and resumes at the same position. If the file hasn't been -deleted or moved, it can still be read. +One example of a plugin that implements these callbacks and keeps state correctly is the [Tail Input](../pipeline/inputs/tail.md) plugin. When the `pause` callback triggers, it pauses its collectors and stops appending data. Upon `resume`, it resumes the collectors and continues ingesting data. Tail tracks the current file offset when it pauses, and resumes at the same position. If the file hasn't been deleted or moved, it can still be read.
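To make the two pausing behaviors easier to compare, here is a minimal sketch in YAML format; the paths, tags, and limit values are illustrative. The first input relies on `Mem_Buf_Limit` with the default memory storage, while the second uses filesystem storage, where `storage.max_chunks_up` and `storage.pause_on_chunks_overlimit` govern pausing instead.

```yaml
# Sketch only: paths, tags, and sizes are illustrative.
service:
  storage.path: /var/log/flb-storage/
  storage.max_chunks_up: 128

pipeline:
  inputs:
    # Memory buffering: the input pauses when mem_buf_limit is exceeded.
    - name: tail
      tag: app.mem
      path: /var/log/app/*.log
      mem_buf_limit: 5MB

    # Filesystem buffering: Mem_Buf_Limit no longer applies. The input pauses
    # only if storage.pause_on_chunks_overlimit is enabled; otherwise new data
    # keeps flowing into chunks that are down in the filesystem.
    - name: tail
      tag: app.fs
      path: /var/log/other/*.log
      storage.type: filesystem
      storage.pause_on_chunks_overlimit: on
```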
-With the default `storage.type memory` and `Mem_Buf_Limit`, the following log -messages emit for `pause` and `resume`: +With the default `storage.type memory` and `Mem_Buf_Limit`, the following log messages are emitted for `pause` and `resume`: ```text [warn] [input] {input name or alias} paused (mem buf overlimit) [info] [input] {input name or alias} resume (mem buf overlimit) ``` -With `storage.type filesystem` and `storage.max_chunks_up`, the following log -messages emit for `pause` and `resume`: +With `storage.type filesystem` and `storage.max_chunks_up`, the following log messages are emitted for `pause` and `resume`: ```text [input] {input name or alias} paused (storage buf overlimit) diff --git a/administration/buffering-and-storage.md b/administration/buffering-and-storage.md index beb191fe2..42d77f64b 100644 --- a/administration/buffering-and-storage.md +++ b/administration/buffering-and-storage.md @@ -1,73 +1,42 @@ -# Buffering and Storage +# Buffering and storage -[Fluent Bit](https://fluentbit.io) collects, parses, filters, and ships logs to a -central place. A critical piece of this workflow is the ability to do _buffering_: a -mechanism to place processed data into a temporary location until is ready to be -shipped. +[Fluent Bit](https://fluentbit.io) collects, parses, filters, and ships logs to a central place. A critical piece of this workflow is the ability to do _buffering_: a mechanism to place processed data into a temporary location until it's ready to be shipped. -By default when Fluent Bit processes data, it uses Memory as a primary and temporary -place to store the records. There are scenarios where it would be ideal -to have a persistent buffering mechanism based in the filesystem to provide -aggregation and data safety capabilities. +By default, when Fluent Bit processes data, it uses memory as a primary and temporary place to store the records. There are scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities. -Choosing the right configuration is critical and the behavior of the service can be -conditioned based in the backpressure settings. Before jumping into the configuration -it helps to understand the relationship between _chunks_, _memory_, -_filesystem_, and _backpressure_. +Choosing the right configuration is critical and the behavior of the service can be conditioned by the backpressure settings. Before jumping into the configuration, it helps to understand the relationship between _chunks_, _memory_, _filesystem_, and _backpressure_. ## Chunks, memory, filesystem, and backpressure -Understanding chunks, buffering, and backpressure is critical for a proper -configuration. +Understanding chunks, buffering, and backpressure is critical for a proper configuration. ### Backpressure -See [Backpressure](https://docs.fluentbit.io/manual/administration/backpressure) -for a full explanation. +See [Backpressure](https://docs.fluentbit.io/manual/administration/backpressure) for a full explanation. ### Chunks -When an input plugin source emits records, the engine groups the records together -in a _chunk_. A chunk's size usually is around 2 MB. By configuration, the engine -decides where to place this chunk. By default, all chunks are created only in -memory. +When an input plugin source emits records, the engine groups the records together in a _chunk_. A chunk's size usually is around 2 MB. By configuration, the engine decides where to place this chunk.
By default, all chunks are created only in memory. ### Irrecoverable chunks There are two scenarios where Fluent Bit marks chunks as irrecoverable: -- When Fluent Bit encounters a bad layout in a chunk. A bad layout is a chunk that - doesn't conform to the expected format. - [Chunk definition](https://github.com/fluent/fluent-bit/blob/master/CHUNKS.md) +- When Fluent Bit encounters a bad layout in a chunk. A bad layout is a chunk that doesn't conform to the expected format. [Chunk definition](https://github.com/fluent/fluent-bit/blob/master/CHUNKS.md) - When Fluent Bit encounters an incorrect or invalid chunk header size. -In both scenarios Fluent Bit logs an error message and then discards the -irrecoverable chunks. +In both scenarios Fluent Bit logs an error message and then discards the irrecoverable chunks. #### Buffering and memory -As mentioned previously, chunks generated by the engine are placed in memory by -default, but this is configurable. - -If memory is the only mechanism set for the input plugin, it will store as much data -as possible in memory. This is the fastest mechanism with the least system -overhead. However, if the service isn't able to deliver the records fast enough, -Fluent Bit memory usage increases as it accumulates more data than it can deliver. - -In a high load environment with backpressure, having high memory usage risks getting -killed by the kernel's OOM Killer. To work around this backpressure scenario, -limit the amount of memory in records that an input plugin can register using the -`mem_buf_limit` property. If a -plugin has queued more than the `mem_buf_limit`, it won't be able to ingest more -until that data can be delivered or flushed properly. In this scenario the input -plugin in question is paused. When the input is paused, records won't be ingested -until the plugin resumes. For some inputs, such as TCP and tail, pausing the input will -almost certainly lead to log loss. For the tail input, Fluent Bit can save its -current offset in the current file it's reading, and pick back up when the input -resumes. +As mentioned previously, chunks generated by the engine are placed in memory by default, but this is configurable. + +If memory is the only mechanism set for the input plugin, it will store as much data as possible in memory. This is the fastest mechanism with the least system overhead. However, if the service isn't able to deliver the records fast enough, Fluent Bit memory usage increases as it accumulates more data than it can deliver. + +In a high load environment with backpressure, having high memory usage risks getting killed by the kernel's OOM Killer. To work around this backpressure scenario, limit the amount of memory in records that an input plugin can register using the `mem_buf_limit` property. If a plugin has queued more than the `mem_buf_limit`, it won't be able to ingest more until that data can be delivered or flushed properly. In this scenario the input plugin in question is paused. When the input is paused, records won't be ingested until the plugin resumes. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the current file it's reading, and pick back up when the input resumes. Look for messages in the Fluent Bit log output like: @@ -76,11 +45,7 @@ Look for messages in the Fluent Bit log output like: [input] tail.1 resume (mem buf overlimit) ``` -Using `mem_buf_limit` is good for certain scenarios and environments. 
It -helps to control the memory usage of the service. However, if a file rotates while -the plugin is paused, data can be lost since it won't be able to -register new records. This can happen with any input source plugin. The goal of -`mem_buf_limit` is memory control and survival of the service. +Using `mem_buf_limit` is good for certain scenarios and environments. It helps to control the memory usage of the service. However, if a file rotates while the plugin is paused, data can be lost since it won't be able to register new records. This can happen with any input source plugin. The goal of `mem_buf_limit` is memory control and survival of the service. For a full data safety guarantee, use filesystem buffering. @@ -117,8 +82,7 @@ pipeline: {% endtab %} {% endtabs %} -If this input uses more than 50 MB memory to buffer logs, you will get a warning like -this in the Fluent Bit logs: +If this input uses more than 50 MB memory to buffer logs, you will get a warning like this in the Fluent Bit logs: ```text [input] tcp.1 paused (mem buf overlimit) @@ -131,49 +95,19 @@ this in the Fluent Bit logs: #### Filesystem buffering -Filesystem buffering helps with backpressure and overall memory control. Enable it -using `storage.type filesystem`. - -Memory and filesystem buffering mechanisms aren't mutually exclusive. Enabling -filesystem buffering for your input plugin source can improve both performance and -data safety. - -Enabling filesystem buffering changes the behavior of the engine. Upon chunk -creation, the engine stores the content in memory and also maps a copy on disk -through [mmap(2)](https://man7.org/linux/man-pages/man2/mmap.2.html). The newly -created chunk is active in memory, backed up on disk, and called to be -`up`, which means the chunk content is up in memory. - -Fluent Bit controls the number of chunks that are `up` in memory by using the -filesystem buffering mechanism to deal with high memory usage and -backpressure. - -By default, the engine allows a total of 128 chunks `up` in memory in total, -considering all chunks. This value is controlled by the service property -`storage.max_chunks_up`. The active chunks that are `up` are ready for delivery -and are still receiving records. Any other remaining chunk is in a `down` -state, which means that it's only in the filesystem and won't be `up` in memory -unless it's ready to be delivered. Chunks are never much larger than 2 MB, -so with the default `storage.max_chunks_up` value of 128, each input is limited to -roughly 256 MB of memory. - -If the input plugin has enabled `storage.type` as `filesystem`, when reaching the -`storage.max_chunks_up` threshold, instead of the plugin being paused, all new data -will go to chunks that are `down` in the filesystem. This lets you control -memory usage by the service and also provides a guarantee that the service won't lose -any data. By default, the enforcement of the `storage.max_chunks_up` limit is -best-effort. Fluent Bit can only append new data to chunks that are `up`. When the -limit is reached chunks will be temporarily brought `up` in memory to ingest new -data, and then put to a `down` state afterwards. In general, Fluent Bit works to -keep the total number of `up` chunks at or below `storage.max_chunks_up`. - -If `storage.pause_on_chunks_overlimit` is enabled (default is off), the input plugin -pauses upon exceeding `storage.max_chunks_up`. With this option, -`storage.max_chunks_up` becomes a hard limit for the input. 
When the input is paused, -records won't be ingested until the plugin resumes. For some inputs, such as TCP and -tail, pausing the input will almost certainly lead to log loss. For the tail input, -Fluent Bit can save its current offset in the current file it's reading, and pick -back up when the input is resumed. +Filesystem buffering helps with backpressure and overall memory control. Enable it using `storage.type filesystem`. + +Memory and filesystem buffering mechanisms aren't mutually exclusive. Enabling filesystem buffering for your input plugin source can improve both performance and data safety. + +Enabling filesystem buffering changes the behavior of the engine. Upon chunk creation, the engine stores the content in memory and also maps a copy on disk through [mmap(2)](https://man7.org/linux/man-pages/man2/mmap.2.html). The newly created chunk is active in memory, backed up on disk, and called to be `up`, which means the chunk content is up in memory. + +Fluent Bit controls the number of chunks that are `up` in memory by using the filesystem buffering mechanism to deal with high memory usage and backpressure. + +By default, the engine allows a total of 128 chunks `up` in memory in total, considering all chunks. This value is controlled by the service property `storage.max_chunks_up`. The active chunks that are `up` are ready for delivery and are still receiving records. Any other remaining chunk is in a `down` state, which means that it's only in the filesystem and won't be `up` in memory unless it's ready to be delivered. Chunks are never much larger than 2 MB, so with the default `storage.max_chunks_up` value of 128, each input is limited to roughly 256 MB of memory. + +If the input plugin has enabled `storage.type` as `filesystem`, when reaching the `storage.max_chunks_up` threshold, instead of the plugin being paused, all new data will go to chunks that are `down` in the filesystem. This lets you control memory usage by the service and also provides a guarantee that the service won't lose any data. By default, the enforcement of the `storage.max_chunks_up` limit is best-effort. Fluent Bit can only append new data to chunks that are `up`. When the limit is reached chunks will be temporarily brought `up` in memory to ingest new data, and then put to a `down` state afterwards. In general, Fluent Bit works to keep the total number of `up` chunks at or below `storage.max_chunks_up`. + +If `storage.pause_on_chunks_overlimit` is enabled (default is off), the input plugin pauses upon exceeding `storage.max_chunks_up`. With this option, `storage.max_chunks_up` becomes a hard limit for the input. When the input is paused, records won't be ingested until the plugin resumes. For some inputs, such as TCP and tail, pausing the input will almost certainly lead to log loss. For the tail input, Fluent Bit can save its current offset in the current file it's reading, and pick back up when the input is resumed. Look for messages in the Fluent Bit log output like: @@ -184,19 +118,11 @@ Look for messages in the Fluent Bit log output like: ##### Limiting filesystem space for chunks -Fluent Bit implements the concept of logical queues. Based on its tag, a chunk can be -routed to multiple destinations. Fluent Bit keeps an internal reference from where a -chunk was created and where it needs to go. +Fluent Bit implements the concept of logical queues. Based on its tag, a chunk can be routed to multiple destinations. 
Fluent Bit keeps an internal reference from where a chunk was created and where it needs to go. -It's common to find cases where multiple destinations with different response times -exist for a chunk, or one of the destinations is generating backpressure. +It's common to find cases where multiple destinations with different response times exist for a chunk, or one of the destinations is generating backpressure. -To limit the amount of filesystem chunks logically queueing, Fluent Bit v1.6 and -later includes the `storage.total_limit_size` configuration property for output -This property limits the total size in bytes of chunks that can exist in the -filesystem for a certain logical output destination. If one of the destinations -reaches the configured `storage.total_limit_size`, the oldest chunk from its queue -for that logical output destination will be discarded to make room for new data. +To limit the amount of filesystem chunks logically queueing, Fluent Bit v1.6 and later includes the `storage.total_limit_size` configuration property for output plugins. This property limits the total size in bytes of chunks that can exist in the filesystem for a certain logical output destination. If one of the destinations reaches the configured `storage.total_limit_size`, the oldest chunk from its queue for that logical output destination will be discarded to make room for new data. ## Configuration @@ -206,14 +132,11 @@ The storage layer configuration takes place in three sections: - Input - Output -The known Service section configures a global environment for the storage layer, the -Input sections define which buffering mechanism to use, and the Output defines limits for -the logical filesystem queues. +The known Service section configures a global environment for the storage layer, the Input sections define which buffering mechanism to use, and the Output defines limits for the logical filesystem queues. ### Service section configuration -The Service section refers to the section defined in the main -[configuration file](configuring-fluent-bit/classic-mode/configuration-file.md): +The Service section refers to the section defined in the main [configuration file](configuring-fluent-bit/classic-mode/configuration-file.md): | Key | Description | Default | | :--- | :--- | :--- | @@ -260,23 +183,18 @@ service: {% endtab %} {% endtabs %} -This configuration sets an optional buffering mechanism where the route to the data -is `/var/log/flb-storage/`. It uses `normal` synchronization mode, without -running a checksum and up to a maximum of 5 MB of memory when processing backlog data. +This configuration sets an optional buffering mechanism where the route to the data is `/var/log/flb-storage/`. It uses `normal` synchronization mode, without running a checksum and up to a maximum of 5 MB of memory when processing backlog data. ### Input Section Configuration -Optionally, any Input plugin can configure their storage preference. The following -table describes the options available: +Optionally, any Input plugin can configure its storage preference. The following table describes the options available: | Key | Description | Default | | :--- | :--- | :--- | | `storage.type` | Specifies the buffering mechanism to use. Accepted values: `memory`, `filesystem`. | `memory` | | `storage.pause_on_chunks_overlimit` | Specifies if the input plugin should pause (stop ingesting new data) when the `storage.max_chunks_up` value is reached.
|`off` | -The following example configures a service offering filesystem buffering -capabilities and two input plugins being the first based in filesystem and the second -with memory only. +The following example configures a service offering filesystem buffering capabilities and two input plugins being the first based in filesystem and the second with memory only. {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -328,17 +246,13 @@ pipeline: ### Output Section Configuration -If certain chunks are filesystem `storage.type` based, it's possible to control the -size of the logical queue for an output plugin. The following table describes the -options available: +If certain chunks are filesystem `storage.type` based, it's possible to control the size of the logical queue for an output plugin. The following table describes the options available: | Key | Description | Default | | :--- | :--- | :--- | | `storage.total_limit_size` | Limit the maximum disk space size in bytes for buffering chunks in the filesystem for the current output logical destination. | _none_ | -The following example creates records with CPU usage samples in the filesystem which -are delivered to Google Stackdriver service while limiting the logical queue -(buffering) to `5M`: +The following example creates records with CPU usage samples in the filesystem which are delivered to Google Stackdriver service while limiting the logical queue (buffering) to `5M`: {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -391,5 +305,4 @@ pipeline: {% endtab %} {% endtabs %} -If Fluent Bit is offline because of a network issue, it will continue buffering CPU -samples, keeping a maximum of 5 MB of the newest data. \ No newline at end of file +If Fluent Bit is offline because of a network issue, it will continue buffering CPU samples, keeping a maximum of 5 MB of the newest data. diff --git a/administration/configuring-fluent-bit/README.md b/administration/configuring-fluent-bit/README.md index d851c91c9..0065fd259 100644 --- a/administration/configuring-fluent-bit/README.md +++ b/administration/configuring-fluent-bit/README.md @@ -1,4 +1,4 @@ -# Configuring Fluent Bit +# Configure Fluent Bit Fluent Bit supports two configuration formats: diff --git a/administration/configuring-fluent-bit/classic-mode/README.md b/administration/configuring-fluent-bit/classic-mode/README.md index 4d30e8a3b..e4db3b567 100644 --- a/administration/configuring-fluent-bit/classic-mode/README.md +++ b/administration/configuring-fluent-bit/classic-mode/README.md @@ -1 +1 @@ -# Fluent Bit classic mode \ No newline at end of file +# Classic mode diff --git a/administration/configuring-fluent-bit/classic-mode/commands.md b/administration/configuring-fluent-bit/classic-mode/commands.md index 06aecad8c..1f1cad60f 100644 --- a/administration/configuring-fluent-bit/classic-mode/commands.md +++ b/administration/configuring-fluent-bit/classic-mode/commands.md @@ -2,8 +2,7 @@ Configuration files must be flexible enough for any deployment need, but they must keep a clean and readable format. -Fluent Bit `Commands` extends a configuration file with specific built-in features. -The following commands are available: +Fluent Bit `Commands` extends a configuration file with specific built-in features. 
The following commands are available: | Command | Prototype | Description | | :--- | :--- | :--- | @@ -35,8 +34,7 @@ Fluent Bit will respects the following order when including: ### `inputs.conf` -The following is an example of an `inputs.conf` file, like the one called in the -previous example. +The following is an example of an `inputs.conf` file, like the one called in the previous example. ```text [INPUT] @@ -51,8 +49,7 @@ previous example. ### outputs.conf -The following is an example of an `outputs.conf` file, like the one called in the -previous example. +The following is an example of an `outputs.conf` file, like the one called in the previous example. ```text [OUTPUT] diff --git a/administration/configuring-fluent-bit/classic-mode/configuration-file.md b/administration/configuring-fluent-bit/classic-mode/configuration-file.md index 431866276..9f0287a3e 100644 --- a/administration/configuring-fluent-bit/classic-mode/configuration-file.md +++ b/administration/configuring-fluent-bit/classic-mode/configuration-file.md @@ -6,9 +6,7 @@ description: This page describes the main configuration file used by Fluent Bit. -One of the ways to configure Fluent Bit is using a main configuration file. Fluent -Bit allows the use one configuration file that works at a global scope and uses the -defined [Format and Schema](format-schema.md). +One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows the use one configuration file that works at a global scope and uses the defined [Format and Schema](format-schema.md). The main configuration file supports four sections: @@ -17,13 +15,11 @@ The main configuration file supports four sections: - Filter - Output -It's also possible to split the main configuration file into multiple files using -the Include File feature to include external files. +It's also possible to split the main configuration file into multiple files using the Include File feature to include external files. ## Service -The `Service` section defines global properties of the service. The following keys -are: +The `Service` section defines global properties of the service. The following keys are: | Key | Description | Default Value | | --------------- | ------------- | ------------- | @@ -58,9 +54,7 @@ For scheduler and retry details, see [scheduling and retries](../../scheduling-a ## Config input -The `INPUT` section defines a source (related to an input plugin). Each -[input plugin](https://docs.fluentbit.io/manual/pipeline/inputs) can add its own -configuration keys: +The `INPUT` section defines a source (related to an input plugin). Each [input plugin](https://docs.fluentbit.io/manual/pipeline/inputs) can add its own configuration keys: | Key | Description | | ----------- | ------------| @@ -68,9 +62,7 @@ configuration keys: | `Tag` | Tag name associated to all records coming from this plugin. | | `Log_Level` | Set the plugin's logging verbosity level. Allowed values are: `off`, `error`, `warn`, `info`, `debug`, and `trace`. Defaults to the `SERVICE` section's `Log_Level`. | -`Name` is mandatory and tells Fluent Bit which input plugin to load. `Tag` is -mandatory for all plugins except for the `input forward` plugin, which provides -dynamic tags. +`Name` is mandatory and tells Fluent Bit which input plugin to load. `Tag` is mandatory for all plugins except for the `input forward` plugin, which provides dynamic tags. 
### Example @@ -84,9 +76,7 @@ The following is an example of an `INPUT` section: ## Config filter -The `FILTER` section defines a filter (related to an filter plugin). Each filter -plugin can add it own configuration keys. The base configuration for each -`FILTER` section contains: +The `FILTER` section defines a filter (related to a filter plugin). Each filter plugin can add its own configuration keys. The base configuration for each `FILTER` section contains: | Key | Description | | ----------- | ------------ | @@ -95,9 +85,7 @@ plugin can add it own configuration keys. The base configuration for each | `Match_Regex` | A regular expression to match against the tags of incoming records. Use this option if you want to use the full regular expression syntax. | | `Log_Level` | Set the plugin's logging verbosity level. Allowed values are: `off`, `error`, `warn`, `info`, `debug`, and `trace`. Defaults to the `SERVICE` section's `Log_Level`. | -`Name` is mandatory and lets Fluent Bit know which filter plugin should be loaded. -`Match` or `Match_Regex` is mandatory for all plugins. If both are specified, -`Match_Regex` takes precedence. +`Name` is mandatory and lets Fluent Bit know which filter plugin should be loaded. `Match` or `Match_Regex` is mandatory for all plugins. If both are specified, `Match_Regex` takes precedence. ### Filter example @@ -112,9 +100,7 @@ The following is an example of a `FILTER` section: ## Config output -The `OUTPUT` section specifies a destination that certain records should go to -after a `Tag` match. Fluent Bit can route up to 256 `OUTPUT` plugins. The -configuration supports the following keys: +The `OUTPUT` section specifies a destination that certain records should go to after a `Tag` match. Fluent Bit can route up to 256 `OUTPUT` plugins. The configuration supports the following keys: | Key | Description | | ----------- | -------------- | @@ -135,8 +121,7 @@ The following is an example of an `OUTPUT` section: ### Example: collecting CPU metrics -The following configuration file example demonstrates how to collect CPU metrics and -flush the results every five seconds to the standard output: +The following configuration file example demonstrates how to collect CPU metrics and flush the results every five seconds to the standard output: ```python [SERVICE] @@ -155,24 +140,19 @@ flush the results every five seconds to the standard output: ## Config Include File -To avoid complicated long configuration files is better to split specific parts in -different files and call them (include) from one main file. The `@INCLUDE` can be used -in the following way: +To avoid complicated long configuration files, it's better to split specific parts into different files and call them (include) from one main file. The `@INCLUDE` command can be used in the following way: ```text @INCLUDE somefile.conf ``` -The configuration reader will try to open the path `somefile.conf`. If not found, the -reader assumes the file is on a relative path based on the path of the base -configuration file: +The configuration reader will try to open the path `somefile.conf`. If not found, the reader assumes the file is on a relative path based on the path of the base configuration file: - Main configuration path: `/tmp/main.conf` - Included file: `somefile.conf` - Fluent Bit will try to open `somefile.conf`, if it fails it will try `/tmp/somefile.conf`. -The `@INCLUDE` command only works at top-left level of the configuration line, and -can't be used inside sections.
+The `@INCLUDE` command only works at top-left level of the configuration line, and can't be used inside sections. Wildcard character (`*`) supports including multiple files. For example: @@ -180,5 +160,4 @@ Wildcard character (`*`) supports including multiple files. For example: @INCLUDE input_*.conf ``` -Files matching the wildcard character are included unsorted. If plugin ordering -between files needs to be preserved, the files should be included explicitly. +Files matching the wildcard character are included unsorted. If plugin ordering between files needs to be preserved, the files should be included explicitly. diff --git a/administration/configuring-fluent-bit/classic-mode/record-accessor.md b/administration/configuring-fluent-bit/classic-mode/record-accessor.md index 4be0486fa..753e9a743 100644 --- a/administration/configuring-fluent-bit/classic-mode/record-accessor.md +++ b/administration/configuring-fluent-bit/classic-mode/record-accessor.md @@ -2,7 +2,7 @@ description: A full feature set to access content of your records. --- -# Record accessor +# Record accessor syntax Fluent Bit works internally with structured records and it can be composed of an unlimited number of keys and values. Values can be anything like a number, string, array, or a map. diff --git a/administration/configuring-fluent-bit/multiline-parsing.md b/administration/configuring-fluent-bit/multiline-parsing.md index a26485f85..f4d8a4ba4 100644 --- a/administration/configuring-fluent-bit/multiline-parsing.md +++ b/administration/configuring-fluent-bit/multiline-parsing.md @@ -1,9 +1,6 @@ # Multiline parsing -In an ideal world, applications might log their messages within a single line, but in -reality applications generate multiple log messages that sometimes belong to the same -context. Processing this information can be complex, like in application stack traces, -which always have multiple log lines. +In an ideal world, applications might log their messages within a single line, but in reality applications generate multiple log messages that sometimes belong to the same context. Processing this information can be complex, like in application stack traces, which always have multiple log lines. Fluent Bit v1.8 implemented a unified Multiline core capability to solve corner cases. @@ -16,8 +13,7 @@ The Multiline parser engine exposes two ways to configure and use the feature: ### Built-in multiline parsers -Fluent Bit exposes certain pre-configured parsers (built-in) to solve specific -multiline parser cases. For example: +Fluent Bit exposes certain pre-configured parsers (built-in) to solve specific multiline parser cases. For example: | Parser | Description | | ------ | ----------- | @@ -29,17 +25,11 @@ multiline parser cases. For example: ### Configurable multiline parsers -You can define your own Multiline parsers with their own rules, using a configuration -file. +You can define your own Multiline parsers with their own rules, using a configuration file. -A multiline parser is defined in a `parsers configuration file` by using a -`[MULTILINE_PARSER]` section definition. The multiline parser must have a unique name -and a type, plus other configured properties associated with each type. +A multiline parser is defined in a `parsers configuration file` by using a `[MULTILINE_PARSER]` section definition. The multiline parser must have a unique name and a type, plus other configured properties associated with each type. 
-To understand which multiline parser type is required for your use case you have to -know the conditions in the content that determine the beginning of a multiline -message, and the continuation of subsequent lines. Fluent Bit provides a regular expression-based -configuration that supports states to handle from the most cases. +To understand which multiline parser type is required for your use case you have to know the conditions in the content that determine the beginning of a multiline message, and the continuation of subsequent lines. Fluent Bit provides a regular expression-based configuration that supports states to handle from the most cases. | Property | Description | Default | | -------- | ----------- | ------- | @@ -59,8 +49,7 @@ Before configuring your parser you need to know the answer to the following ques When matching a regular expression, you must to define `states`. Some states define the start of a multiline message while others are states for the continuation of multiline messages. You can have multiple `continuation states` definitions to solve complex cases. -The first regular expression that matches the start of a multiline message is called -`start_state`. Other regular expression continuation lines can have different state names. +The first regular expression that matches the start of a multiline message is called `start_state`. Other regular expression continuation lines can have different state names. #### Rules definition @@ -70,8 +59,7 @@ A rule specifies how to match a multiline pattern and perform the concatenation. - regular expression pattern - next state -A rule might be defined as follows (comments added to simplify the definition) in corresponding YAML and classic -configuration examples below: +A rule might be defined as follows (comments added to simplify the definition) in corresponding YAML and classic configuration examples below: {% tabs %} {% tab title="parsers_multiline.yaml" %} @@ -112,15 +100,13 @@ To simplify the configuration of regular expressions, you can use the [Rubular]( #### Configuration example -The following example provides a full Fluent Bit configuration file for multiline parsing by using the definition -explained previously. It is provided in corresponding YAML and classic configuration examples below: +The following example provides a full Fluent Bit configuration file for multiline parsing by using the definition explained previously. It is provided in corresponding YAML and classic configuration examples below: {% tabs %} {% tab title="fluent-bit.yaml" %} -This is the primary Fluent Bit YAML configuration file. It includes the `parsers_multiline.yaml` and tails the file `test.log` -by applying the multiline parser `multiline-regex-test`. Then it sends the processing to the standard output. +This is the primary Fluent Bit YAML configuration file. It includes the `parsers_multiline.yaml` and tails the file `test.log` by applying the multiline parser `multiline-regex-test`. Then it sends the processing to the standard output. ```yaml service: @@ -144,8 +130,7 @@ pipeline: {% tab title="fluent-bit.conf" %} -This is the primary Fluent Bit classic configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log` -by applying the multiline parser `multiline-regex-test`. Then it sends the processing to the standard output. +This is the primary Fluent Bit classic configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log` by applying the multiline parser `multiline-regex-test`. 
Then it sends the processing to the standard output. ```text [SERVICE] @@ -287,9 +272,7 @@ Example files content: {% tab title="fluent-bit.yaml" %} -This is the primary Fluent Bit YAML configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log` -by applying the multiline parser `multiline-regex-test`. It also parses concatenated log by applying parser `named-capture-test`. -Then it sends the processing to the standard output. +This is the primary Fluent Bit YAML configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log` by applying the multiline parser `multiline-regex-test`. It also parses concatenated log by applying parser `named-capture-test`. Then it sends the processing to the standard output. ```yaml service: @@ -319,9 +302,7 @@ pipeline: {% tab title="fluent-bit.conf" %} -This is the primary Fluent Bit classic configuration file. It includes the `parsers_multiline.conf` and tails the file -`test.log` by applying the multiline parser `multiline-regex-test`. It also parses concatenated log by applying parser -`named-capture-test`. Then it sends the processing to the standard output. +This is the primary Fluent Bit classic configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log` by applying the multiline parser `multiline-regex-test`. It also parses concatenated log by applying parser `named-capture-test`. Then it sends the processing to the standard output. ```text [SERVICE] @@ -454,4 +435,4 @@ $ ./fluent-bit --config fluent-bit.conf "}] [2] tail.0: [[1750333602.460998000, {}], {"log"=>"another line... "}] -``` \ No newline at end of file +``` diff --git a/administration/configuring-fluent-bit/yaml/README.md b/administration/configuring-fluent-bit/yaml/README.md index a62354cf3..0d2a33855 100644 --- a/administration/configuring-fluent-bit/yaml/README.md +++ b/administration/configuring-fluent-bit/yaml/README.md @@ -1,4 +1,4 @@ -# Fluent Bit YAML Configuration +# YAML configuration ## Before You Get Started diff --git a/administration/configuring-fluent-bit/yaml/environment-variables-section.md b/administration/configuring-fluent-bit/yaml/environment-variables-section.md index 0c35839a0..d1fa23715 100644 --- a/administration/configuring-fluent-bit/yaml/environment-variables-section.md +++ b/administration/configuring-fluent-bit/yaml/environment-variables-section.md @@ -1,4 +1,4 @@ -# Environment variables section +# Environment variables The `env` section lets you define environment variables directly within the configuration file. These variables can then be used to dynamically replace values throughout your configuration using the `${VARIABLE_NAME}` syntax. diff --git a/administration/configuring-fluent-bit/yaml/includes-section.md b/administration/configuring-fluent-bit/yaml/includes-section.md index cfdc54663..216f89fed 100644 --- a/administration/configuring-fluent-bit/yaml/includes-section.md +++ b/administration/configuring-fluent-bit/yaml/includes-section.md @@ -1,4 +1,4 @@ -# Includes section +# Includes The `includes` section lets you specify additional YAML configuration files to be merged into the current configuration. These files are identified as a list of filenames and can include relative or absolute paths. If no absolute path is provided, the file is assumed to be located in a directory relative to the file that references it. 
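A short sketch of what such an `includes` section can look like in YAML format (the file names below are placeholders):

```yaml
# Sketch only: file names are placeholders.
includes:
  # Relative paths are resolved against the directory of the file that lists them.
  - inputs.yaml
  - parsers/json-parsers.yaml
  # Absolute paths are used as-is.
  - /etc/fluent-bit/outputs.yaml
```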
diff --git a/administration/configuring-fluent-bit/yaml/parsers-section.md b/administration/configuring-fluent-bit/yaml/parsers-section.md index 92836f3d8..bb946758a 100644 --- a/administration/configuring-fluent-bit/yaml/parsers-section.md +++ b/administration/configuring-fluent-bit/yaml/parsers-section.md @@ -1,4 +1,4 @@ -# Parsers section +# Parsers Parsers enable Fluent Bit components to transform unstructured data into a structured internal representation. You can define parsers either directly in the main configuration file or in separate external files for better organization. diff --git a/administration/configuring-fluent-bit/yaml/pipeline-section.md b/administration/configuring-fluent-bit/yaml/pipeline-section.md index 2ff016d50..7dcf5a656 100644 --- a/administration/configuring-fluent-bit/yaml/pipeline-section.md +++ b/administration/configuring-fluent-bit/yaml/pipeline-section.md @@ -1,4 +1,4 @@ -# Pipeline section +# Pipeline The `pipeline` section defines the flow of how data is collected, processed, and sent to its final destination. It encompasses the following core concepts: @@ -13,8 +13,7 @@ The `pipeline` section defines the flow of how data is collected, processed, and {% hint style="info" %} -**Note:** Processors can be enabled only by using the YAML configuration format. Classic mode configuration format -doesn't support processors. +**Note:** Processors can be enabled only by using the YAML configuration format. Classic mode configuration format doesn't support processors. {% endhint %} @@ -33,7 +32,7 @@ pipeline: processors: logs: - name: record_modifier - + filters: - name: grep match: '*' @@ -75,7 +74,7 @@ pipeline: action: upsert key: my_new_key value: 123 - + filters: - name: grep match: '*' @@ -106,12 +105,12 @@ pipeline: - name: random tag: test-tag interval_sec: 1 - + processors: logs: - name: modify add: hostname monox - + - name: lua call: append_tag code: | @@ -124,7 +123,7 @@ pipeline: outputs: - name: stdout match: '*' - + processors: logs: - name: lua @@ -182,4 +181,4 @@ pipeline: ``` {% endtab %} -{% endtabs %} \ No newline at end of file +{% endtabs %} diff --git a/administration/configuring-fluent-bit/yaml/plugins-section.md b/administration/configuring-fluent-bit/yaml/plugins-section.md index 77271f660..9e2aca8f3 100644 --- a/administration/configuring-fluent-bit/yaml/plugins-section.md +++ b/administration/configuring-fluent-bit/yaml/plugins-section.md @@ -1,4 +1,4 @@ -# Plugins section +# Plugins Fluent Bit comes with a variety of built-in plugins, and also supports loading external plugins at runtime. This feature is especially useful for loading Go or WebAssembly (Wasm) plugins that are built as shared object files (.so). Fluent Bit YAML configuration provides the following ways to load these external plugins: diff --git a/administration/configuring-fluent-bit/yaml/service-section.md b/administration/configuring-fluent-bit/yaml/service-section.md index 44158e4b3..ffa6581db 100644 --- a/administration/configuring-fluent-bit/yaml/service-section.md +++ b/administration/configuring-fluent-bit/yaml/service-section.md @@ -1,4 +1,4 @@ -# Service section +# Service The `service` section defines global properties of the service. 
The available configuration keys are: @@ -43,4 +43,4 @@ pipeline: outputs: - name: stdout match: '*' -``` \ No newline at end of file +``` diff --git a/administration/configuring-fluent-bit/yaml/upstream-servers-section.md b/administration/configuring-fluent-bit/yaml/upstream-servers-section.md index 2d9f1618d..527b9fa0c 100644 --- a/administration/configuring-fluent-bit/yaml/upstream-servers-section.md +++ b/administration/configuring-fluent-bit/yaml/upstream-servers-section.md @@ -35,8 +35,6 @@ upstream_servers: port: 51000 ``` -Each node in the `upstream_servers` group must specify a `name`, `host`, and `port`. -Additional settings like `tls`, `tls_verify`, and `shared_key` can be configured for -secure communication. +Each node in the `upstream_servers` group must specify a `name`, `host`, and `port`. Additional settings like `tls`, `tls_verify`, and `shared_key` can be configured for secure communication. While the `upstream_servers` section can be defined globally, some output plugins might require the configuration to be specified in a separate YAML file. Consult the documentation for each specific output plugin to understand its requirements. diff --git a/administration/hot-reload.md b/administration/hot-reload.md index 71af3f6f5..a6f83051d 100644 --- a/administration/hot-reload.md +++ b/administration/hot-reload.md @@ -4,15 +4,13 @@ description: Enable hot reload through SIGHUP signal or an HTTP endpoint # Hot reload -Fluent Bit supports the reloading feature when enabled in the configuration file -or on the command line with `-Y` or `--enable-hot-reload` option. +Fluent Bit supports the reloading feature when enabled in the configuration file or on the command line with `-Y` or `--enable-hot-reload` option. Hot reloading is supported on Linux, macOS, and Windows operating systems. ## Update the configuration -To get started with reloading over HTTP, enable the HTTP Server -in the configuration file: +To get started with reloading over HTTP, enable the HTTP Server in the configuration file: {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -41,8 +39,7 @@ service: ## How to reload -After updating the configuration, use one of the following methods to perform a -hot reload: +After updating the configuration, use one of the following methods to perform a hot reload: ### HTTP @@ -79,4 +76,4 @@ The endpoint returns `hot_reload_count` as follows: {"hot_reload_count":3} ``` -The default value of the counter is `0`. \ No newline at end of file +The default value of the counter is `0`. diff --git a/administration/http-proxy.md b/administration/http-proxy.md index ee28e6d51..7928cc56b 100644 --- a/administration/http-proxy.md +++ b/administration/http-proxy.md @@ -2,10 +2,9 @@ description: Enable traffic through a proxy server using the HTTP_PROXY environment variable. --- -# HTTP Proxy +# HTTP proxy -Fluent Bit supports configuring an HTTP proxy for all egress HTTP/HTTPS traffic -using the `HTTP_PROXY` or `http_proxy` environment variable. +Fluent Bit supports configuring an HTTP proxy for all egress HTTP/HTTPS traffic using the `HTTP_PROXY` or `http_proxy` environment variable. 
The format for the HTTP proxy environment variable is `http://USER:PASS@HOST:PORT`, where:

@@ -26,50 +25,33 @@ When no authentication is required, omit the username and password:

HTTP_PROXY='http://proxy.example.com:8080'
```

-The `HTTP_PROXY` environment variable is a [standard
-way](https://docs.docker.com/network/proxy/#use-environment-variables) of setting a
-HTTP proxy in a containerized environment, and it's also natively supported by any
-application written in Go. Fluent Bit implements the same convention. The
-`http_proxy` environment variable is also supported. When both the `HTTP_PROXY` and
-`http_proxy` environment variables are provided, `HTTP_PROXY` will be preferred.
+The `HTTP_PROXY` environment variable is a [standard way](https://docs.docker.com/network/proxy/#use-environment-variables) of setting an HTTP proxy in a containerized environment, and it's also natively supported by any application written in Go. Fluent Bit implements the same convention. The `http_proxy` environment variable is also supported. When both the `HTTP_PROXY` and `http_proxy` environment variables are provided, `HTTP_PROXY` is preferred.

{% hint style="info" %}

-The [HTTP output plugin](https://docs.fluentbit.io/manual/pipeline/outputs/http) also
-supports configuring an HTTP proxy. This configuration works, but shouldn't be used
-with the `HTTP_PROXY` or `http_proxy` environment variable. The environment
-variable-based proxy configuration is implemented by creating a TCP connection tunnel
-using
-[HTTP CONNECT](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT). Unlike
-the plugin's implementation, this supports both HTTP and HTTPS egress traffic.
+The [HTTP output plugin](https://docs.fluentbit.io/manual/pipeline/outputs/http) also supports configuring an HTTP proxy. This configuration works, but shouldn't be used with the `HTTP_PROXY` or `http_proxy` environment variable. The environment variable-based proxy configuration is implemented by creating a TCP connection tunnel using [HTTP CONNECT](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT). Unlike the plugin's implementation, this supports both HTTP and HTTPS egress traffic.

{% endhint %}

## `NO_PROXY`

-Use the `NO_PROXY` environment variable when traffic shouldn't flow through the HTTP
-proxy. The `no_proxy` environment variable is also supported. When both `NO_PROXY`
-and `no_proxy` environment variables are provided, `NO_PROXY` takes precedence.
+Use the `NO_PROXY` environment variable when traffic shouldn't flow through the HTTP proxy. The `no_proxy` environment variable is also supported. When both `NO_PROXY` and `no_proxy` environment variables are provided, `NO_PROXY` takes precedence.

-The format for the `no_proxy` environment variable is a comma-separated list of
-host names or IP addresses.
+The format for the `no_proxy` environment variable is a comma-separated list of host names or IP addresses.

-A domain name matches itself and all of its subdomains (for example, `example.com`
-matches both `example.com` and `test.example.com`):
+A domain name matches itself and all of its subdomains (for example, `example.com` matches both `example.com` and `test.example.com`):

```text
NO_PROXY='foo.com,127.0.0.1,localhost'
```

-A domain with a leading dot (`.`) matches only its subdomains (for example,
-`.example.com` matches `test.example.com` but not `example.com`):
+A domain with a leading dot (`.`) matches only its subdomains (for example, `.example.com` matches `test.example.com` but not `example.com`):

```text
NO_PROXY='.example.com,127.0.0.1,localhost'
```

-As an example, you might use `NO_PROXY` when running Fluent Bit in a Kubernetes
-environment, where and you want:
+As an example, you might use `NO_PROXY` when running Fluent Bit in a Kubernetes environment, where you want:

- All real egress traffic to flow through an HTTP proxy.
- All local Kubernetes traffic to not flow through the HTTP proxy.
diff --git a/administration/memory-management.md b/administration/memory-management.md
index 5289bb6e1..4bc0464ec 100644
--- a/administration/memory-management.md
+++ b/administration/memory-management.md
@@ -2,40 +2,23 @@

-You might need to estimate how much memory Fluent Bit could be using in scenarios
-like containerized environments where memory limits are essential.
+You might need to estimate how much memory Fluent Bit could be using in scenarios like containerized environments where memory limits are essential.

-To make an estimate, in-use input plugins must set the `Mem_Buf_Limit`option.
-Learn more about it in [Backpressure](backpressure.md).
+To make an estimate, in-use input plugins must set the `Mem_Buf_Limit` option. Learn more about it in [Backpressure](backpressure.md).

## Estimating

-Input plugins append data independently. To make an estimation, impose a limit with
-the `Mem_Buf_Limit` option. If the limit was set to `10MB`, you can estimate that in
-the worst case, the output plugin likely could use `20MB`.
+Input plugins append data independently. To make an estimate, impose a limit with the `Mem_Buf_Limit` option. If the limit is set to `10MB`, you can estimate that in the worst case, the output plugin likely could use `20MB`.

-Fluent Bit has an internal binary representation for the data being processed. When
-this data reaches an output plugin, it can create its own representation in a new
-memory buffer for processing. The best examples are the
-[InfluxDB](../pipeline/outputs/influxdb.md) and
-[Elasticsearch](../pipeline/outputs/elasticsearch.md) output plugins, which need to
-convert the binary representation to their respective custom JSON formats before
-sending data to the backend servers.
+Fluent Bit has an internal binary representation for the data being processed. When this data reaches an output plugin, it can create its own representation in a new memory buffer for processing. The best examples are the [InfluxDB](../pipeline/outputs/influxdb.md) and [Elasticsearch](../pipeline/outputs/elasticsearch.md) output plugins, which need to convert the binary representation to their respective custom JSON formats before sending data to the backend servers.

-When imposing a limit of `10MB` for the input plugins, and a worst case scenario of
-the output plugin consuming `20MB`, you need to allocate a minimum (`30MB` x 1.2) =
-`36MB`.
+When imposing a limit of `10MB` for the input plugins, and a worst case scenario of the output plugin consuming `20MB`, you need to allocate a minimum (`30MB` x 1.2) = `36MB`. ## Glibc and memory fragmentation -In intensive environments where memory allocations happen in the orders of magnitude, -the default memory allocator provided by Glibc could lead to high fragmentation, -reporting a high memory usage by the service. +In intensive environments where memory allocations happen in the orders of magnitude, the default memory allocator provided by Glibc could lead to high fragmentation, reporting a high memory usage by the service. -It's strongly suggested that in any production environment, Fluent Bit should be -built with [jemalloc](http://jemalloc.net/) enabled (`-DFLB_JEMALLOC=On`). -The jemalloc implementation of malloc is an alternative memory allocator that can -reduce fragmentation, resulting in better performance. +It's strongly suggested that in any production environment, Fluent Bit should be built with [jemalloc](http://jemalloc.net/) enabled (`-DFLB_JEMALLOC=On`). The jemalloc implementation of malloc is an alternative memory allocator that can reduce fragmentation, resulting in better performance. Use the following command to determine if Fluent Bit has been built with jemalloc: diff --git a/administration/monitoring.md b/administration/monitoring.md index 701a717b9..dde8b59e9 100644 --- a/administration/monitoring.md +++ b/administration/monitoring.md @@ -6,9 +6,7 @@ description: Learn how to monitor your Fluent Bit data pipelines -Fluent Bit includes features for monitoring the internals of your pipeline, in -addition to connecting to Prometheus and Grafana, Health checks, and connectors to -use external services: +Fluent Bit includes features for monitoring the internals of your pipeline, in addition to connecting to Prometheus and Grafana, Health checks, and connectors to use external services: - [HTTP Server: JSON and Prometheus Exporter-style metrics](monitoring.md#http-server) - [Grafana Dashboards and Alerts](monitoring.md#grafana-dashboard-and-alerts) @@ -17,16 +15,13 @@ use external services: ## HTTP server -Fluent Bit includes an HTTP server for querying internal information and monitoring -metrics of each running plugin. +Fluent Bit includes an HTTP server for querying internal information and monitoring metrics of each running plugin. You can integrate the monitoring interface with Prometheus. ### Get started -To get started, enable the HTTP server from the configuration file. The following -configuration instructs Fluent Bit to start an HTTP server on TCP port `2020` and -listen on all network interfaces: +To get started, enable the HTTP server from the configuration file. The following configuration instructs Fluent Bit to start an HTTP server on TCP port `2020` and listen on all network interfaces: {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -36,12 +31,12 @@ service: http_server: on http_listen: 0.0.0.0 http_port: 2020 - + pipeline: inputs: - name: cpu - - outputs: + + outputs: - name: stdout match: '*' ``` @@ -90,9 +85,7 @@ Fluent Bit v1.4.0 [2020/03/10 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020 ``` -Use `curl` to gather information about the HTTP server. The following command sends -the command output to the `jq` program, which outputs human-readable JSON data to the -terminal. +Use `curl` to gather information about the HTTP server. 
The following command sends the command output to the `jq` program, which outputs human-readable JSON data to the terminal. ```shell $ curl -s http://127.0.0.1:2020 | jq @@ -143,21 +136,14 @@ The following descriptions apply to v1 metric endpoints. #### `/api/v1/metrics/prometheus` endpoint -The following descriptions apply to metrics outputted in Prometheus format by the -`/api/v1/metrics/prometheus` endpoint. +The following descriptions apply to metrics outputted in Prometheus format by the `/api/v1/metrics/prometheus` endpoint. The following terms are key to understanding how Fluent Bit processes metrics: -- **Record**: a single message collected from a source, such as a single long line in - a file. -- **Chunk**: log records ingested and stored by Fluent Bit input plugin instances. A - batch of records in a chunk are tracked together as a single unit. +- **Record**: a single message collected from a source, such as a single long line in a file. +- **Chunk**: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit. - The Fluent Bit engine attempts to fit records into chunks of at most `2 MB`, but - the size can vary at runtime. Chunks are then sent to an output. An output plugin - instance can either successfully send the full chunk to the destination and mark it - as successful, or it can fail the chunk entirely if an unrecoverable error is - encountered, or it can ask for the chunk to be retried. + The Fluent Bit engine attempts to fit records into chunks of at most `2 MB`, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can either successfully send the full chunk to the destination and mark it as successful, or it can fail the chunk entirely if an unrecoverable error is encountered, or it can ask for the chunk to be retried. | Metric name | Labels | Description | Type | Unit | | ----------- | ------ | ----------- | ---- | ---- | @@ -175,8 +161,7 @@ The following terms are key to understanding how Fluent Bit processes metrics: #### `/api/v1/storage` endpoint -The following descriptions apply to metrics outputted in JSON format by the -`/api/v1/storage` endpoint. +The following descriptions apply to metrics outputted in JSON format by the `/api/v1/storage` endpoint. | Metric Key | Description | Unit | |-----------------------------------------------|---------------|---------| @@ -200,21 +185,14 @@ The following descriptions apply to v2 metric endpoints. #### `/api/v2/metrics/prometheus` or `/api/v2/metrics` endpoint -The following descriptions apply to metrics outputted in Prometheus format by the -`/api/v2/metrics/prometheus` or `/api/v2/metrics` endpoints. +The following descriptions apply to metrics outputted in Prometheus format by the `/api/v2/metrics/prometheus` or `/api/v2/metrics` endpoints. The following terms are key to understanding how Fluent Bit processes metrics: -- **Record**: a single message collected from a source, such as a single long line in - a file. -- **Chunk**: log records ingested and stored by Fluent Bit input plugin instances. A - batch of records in a chunk are tracked together as a single unit. +- **Record**: a single message collected from a source, such as a single long line in a file. +- **Chunk**: log records ingested and stored by Fluent Bit input plugin instances. A batch of records in a chunk are tracked together as a single unit. 
- The Fluent Bit engine attempts to fit records into chunks of at most `2 MB`, but - the size can vary at runtime. Chunks are then sent to an output. An output plugin - instance can either successfully send the full chunk to the destination and mark it - as successful, or it can fail the chunk entirely if an unrecoverable error is - encountered, or it can ask for the chunk to be retried. + The Fluent Bit engine attempts to fit records into chunks of at most `2 MB`, but the size can vary at runtime. Chunks are then sent to an output. An output plugin instance can either successfully send the full chunk to the destination and mark it as successful, or it can fail the chunk entirely if an unrecoverable error is encountered, or it can ask for the chunk to be retried. | Metric Name | Labels | Description | Type | Unit | |--------------------------------------------|-------------------------------------------------------------------------|-------------|---------|---------| @@ -238,8 +216,7 @@ The following terms are key to understanding how Fluent Bit processes metrics: #### Storage layer -The following are detailed descriptions for the metrics collected by the storage -layer. +The following are detailed descriptions for the metrics collected by the storage layer. | Metric Name | Labels | Description | Type | Unit | |---------------------------------------------|------------------------------|---------------|---------|---------| @@ -329,13 +306,9 @@ fluentbit_output_retries_failed_total{name="stdout.0"} 0 1509150350542 ### Configure aliases -By default, configured plugins on runtime get an internal name in the format -`_plugin_name.ID_`. For monitoring purposes, this can be confusing if many plugins of -the same type were configured. To make a distinction each configured input or output -section can get an _alias_ that will be used as the parent name for the metric. +By default, configured plugins on runtime get an internal name in the format `_plugin_name.ID_`. For monitoring purposes, this can be confusing if many plugins of the same type were configured. To make a distinction each configured input or output section can get an _alias_ that will be used as the parent name for the metric. -The following example sets an alias to the `INPUT` section of the configuration file, -which is using the [CPU](../pipeline/inputs/cpu-metrics.md) input plugin: +The following example sets an alias to the `INPUT` section of the configuration file, which is using the [CPU](../pipeline/inputs/cpu-metrics.md) input plugin: {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -345,13 +318,13 @@ service: http_server: on http_listen: 0.0.0.0 http_port: 2020 - + pipeline: inputs: - name: cpu alias: server1_cpu - - outputs: + + outputs: - name: stdout alias: raw_output match: '*' @@ -380,8 +353,7 @@ pipeline: {% endtab %} {% endtabs %} -When querying the related metrics, the aliases are returned instead of the plugin -name: +When querying the related metrics, the aliases are returned instead of the plugin name: ```javascript { @@ -407,16 +379,9 @@ name: -You can create Grafana dashboards and alerts using Fluent Bit's exposed Prometheus -style metrics. +You can create Grafana dashboards and alerts using Fluent Bit's exposed Prometheus style metrics. 
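+
+For the dashboards to have data, Prometheus must scrape the Fluent Bit metrics endpoint described earlier. As a minimal sketch (the target host, port, and scrape interval are placeholders for your own environment), a Prometheus scrape job might look like this:
+
+```yaml
+scrape_configs:
+  - job_name: fluent-bit
+    # Endpoint exposed by the Fluent Bit built-in HTTP server.
+    metrics_path: /api/v2/metrics/prometheus
+    scrape_interval: 10s
+    static_configs:
+      - targets: ['fluent-bit-host:2020']
+```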
-The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json)
-is heavily inspired by [Banzai Cloud](https://banzaicloud.com)'s
-[logging operator dashboard](https://grafana.com/grafana/dashboards/7752) with a few
-key differences, such as the use of the `instance` label, stacked graphs, and a focus
-on Fluent Bit metrics. See
-[this blog post](https://www.robustperception.io/controlling-the-instance-label)
-for more information.
+The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json) is heavily inspired by [Banzai Cloud](https://banzaicloud.com)'s [logging operator dashboard](https://grafana.com/grafana/dashboards/7752) with a few key differences, such as the use of the `instance` label, stacked graphs, and a focus on Fluent Bit metrics. See [this blog post](https://www.robustperception.io/controlling-the-instance-label) for more information.

![dashboard](/.gitbook/assets/dashboard.png)

@@ -435,13 +400,9 @@ Fluent bit supports the following configurations to set up the health check.

| `HC_Retry_Failure_Count` | the retry failure count to meet the unhealthy requirement, this is a sum for all output plugins in a defined `HC_Period`, example for retry failure: `[2022/02/16 20:11:36] [ warn] [engine] chunk '1-1645042288.260516436.flb' cannot be retried: task_id=0, input=tcp.3 > output=cloudwatch_logs.1` | `5` |
| `HC_Period` | The time period by second to count the error and retry failure data point | `60` |

-Not every error log means an error to be counted. The error retry failures count only
-on specific errors, which is the example in configuration table description.
+Not every error log is counted toward the health check. Retry failures are counted only for specific errors, like the example shown in the configuration table description.

-Based on the `HC_Period` setting, if the real error number is over `HC_Errors_Count`,
-or retry failure is over `HC_Retry_Failure_Count`, Fluent Bit is considered
-unhealthy. The health endpoint returns an HTTP status `500` and an `error` message.
-Otherwise, the endpoint returns HTTP status `200` and an `ok` message.
+Within the configured `HC_Period`, if the number of errors exceeds `HC_Errors_Count` or the number of retry failures exceeds `HC_Retry_Failure_Count`, Fluent Bit is considered unhealthy. The health endpoint returns an HTTP status `500` and an `error` message. Otherwise, the endpoint returns an HTTP status `200` and an `ok` message.

The equation to calculate this behavior is:

@@ -451,8 +412,7 @@ health status = (HC_Errors_Count > HC_Errors_Count config value) OR
the HC_Period interval

-The `HC_Errors_Count` and `HC_Retry_Failure_Count` only count for output plugins and
-count a sum for errors and retry failures from all running output plugins.
+The `HC_Errors_Count` and `HC_Retry_Failure_Count` thresholds apply only to output plugins, and each counts a sum of errors and retry failures across all running output plugins.

The following configuration examples show how to define these settings: @@ -468,12 +428,12 @@ service: hc_errors_count: 5 hc_retry_failure_count: 5 hc_period: 5 - + pipeline: inputs: - name: cpu - - outputs: + + outputs: - name: stdout match: '*' ``` @@ -520,6 +480,4 @@ Health status = (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 secon ## Telemetry Pipeline -[Telemetry Pipeline](https://chronosphere.io/platform/telemetry-pipeline/) is a -hosted service that lets you monitor your Fluent Bit agents including data flow, -metrics, and configurations. \ No newline at end of file +[Telemetry Pipeline](https://chronosphere.io/platform/telemetry-pipeline/) is a hosted service that lets you monitor your Fluent Bit agents including data flow, metrics, and configurations. diff --git a/administration/multithreading.md b/administration/multithreading.md index 8656317ef..d31d24775 100644 --- a/administration/multithreading.md +++ b/administration/multithreading.md @@ -33,18 +33,10 @@ run in threaded mode regardless of configuration. These always-threaded inputs a - [Process Exporter Metrics](../pipeline/inputs/process-exporter-metrics.md) - [Windows Exporter Metrics](../pipeline/inputs/windows-exporter-metrics.md) -Inputs aren't internally aware of multithreading. If an input runs in threaded -mode, Fluent Bit manages the logistics of that input's thread. +Inputs aren't internally aware of multithreading. If an input runs in threaded mode, Fluent Bit manages the logistics of that input's thread. ## Outputs -When outputs flush data, they can either perform this operation inside Fluent Bit's -main thread or inside a separate dedicated thread called a _worker_. Each output -can have one or more workers running in parallel, and each worker can handle multiple -concurrent flushes. You can configure this behavior by changing the value of the -`workers` setting. +When outputs flush data, they can either perform this operation inside Fluent Bit's main thread or inside a separate dedicated thread called a _worker_. Each output can have one or more workers running in parallel, and each worker can handle multiple concurrent flushes. You can configure this behavior by changing the value of the `workers` setting. -All outputs are capable of running in multiple workers, and each output has -a default value of `0`, `1`, or `2` workers. However, even if an output uses -workers by default, you can safely reduce the number of workers below the default -or disable workers entirely. +All outputs are capable of running in multiple workers, and each output has a default value of `0`, `1`, or `2` workers. However, even if an output uses workers by default, you can safely reduce the number of workers below the default or disable workers entirely. diff --git a/administration/networking.md b/administration/networking.md index 19e2c45a5..8539d5d8c 100644 --- a/administration/networking.md +++ b/administration/networking.md @@ -1,14 +1,8 @@ # Networking -[Fluent Bit](https://fluentbit.io) implements a unified networking interface that's -exposed to components like plugins. This interface abstracts the complexity of -general I/O and is fully configurable. +[Fluent Bit](https://fluentbit.io) implements a unified networking interface that's exposed to components like plugins. This interface abstracts the complexity of general I/O and is fully configurable. -A common use case is when a component or plugin needs to connect with a service to send -and receive data. 
There are many challenges to handle like unresponsive services,
-networking latency, or any kind of connectivity error. The networking interface aims
-to abstract and simplify the network I/O handling, minimize risks, and optimize
-performance.
+A common use case is when a component or plugin needs to connect with a service to send and receive data. There are many challenges to handle like unresponsive services, networking latency, or any kind of connectivity error. The networking interface aims to abstract and simplify the network I/O handling, minimize risks, and optimize performance.

## Networking concepts

@@ -16,60 +10,37 @@ Fluent Bit uses the following networking concepts:

### TCP connect timeout

-Typically, creating a new TCP connection to a remote server is straightforward
-and takes a few milliseconds. However, there are cases where DNS resolving, a slow
-network, or incomplete TLS handshakes might create long delays, or incomplete
-connection statuses.
+Typically, creating a new TCP connection to a remote server is straightforward and takes a few milliseconds. However, there are cases where DNS resolving, a slow network, or incomplete TLS handshakes might create long delays or incomplete connection statuses.

-- `net.connect_timeout` lets you configure the maximum time to wait for a connection
-  to be established. This value already considers the TLS handshake process.
+- `net.connect_timeout` lets you configure the maximum time to wait for a connection to be established. This value already considers the TLS handshake process.

-- `net.connect_timeout_log_error` indicates if an error should be logged in case of
-  connect timeout. If disabled, the timeout is logged as a debug level message.
+- `net.connect_timeout_log_error` indicates if an error should be logged in case of connect timeout. If disabled, the timeout is logged as a debug-level message.

### TCP source address

-On environments with multiple network interfaces, you can choose which
-interface to use for Fluent Bit data that will flow through the network.
+In environments with multiple network interfaces, you can choose which interface to use for Fluent Bit data that will flow through the network.

-Use `net.source_address` to specify which network address to use for a TCP connection
-and data flow.
+Use `net.source_address` to specify which network address to use for a TCP connection and data flow.

### Connection keepalive

-A connection keepalive refers to the ability of a client to keep the TCP connection
-open in a persistent way. This feature offers many benefits in terms
-of performance because communication channels are always established beforehand.
+A connection keepalive refers to the ability of a client to keep the TCP connection open in a persistent way. This feature offers many benefits in terms of performance because communication channels are always established beforehand.

-Any component that uses TCP channels like HTTP or [TLS](transport-security.md), can
-take use feature. For configuration purposes use the `net.keepalive`
-property.
+Any component that uses TCP channels like HTTP or [TLS](transport-security.md) can make use of this feature. For configuration purposes, use the `net.keepalive` property.

### Connection keepalive idle timeout

-If a connection keepalive is enabled, there might be scenarios where the connection
-can be unused for long periods of time. Unused connections can be removed.
To control -how long a keepalive connection can be idle, Fluent Bit uses a configuration property -called `net.keepalive_idle_timeout`. +If a connection keepalive is enabled, there might be scenarios where the connection can be unused for long periods of time. Unused connections can be removed. To control how long a keepalive connection can be idle, Fluent Bit uses a configuration property called `net.keepalive_idle_timeout`. ### DNS mode -The global `dns.mode` value issues DNS requests using the specified protocol, either -TCP or UDP. If a transport layer protocol is specified, plugins that configure the -`net.dns.mode` setting override the global setting. +The global `dns.mode` value issues DNS requests using the specified protocol, either TCP or UDP. If a transport layer protocol is specified, plugins that configure the `net.dns.mode` setting override the global setting. ### Maximum connections per worker -For optimal performance, Fluent Bit tries to deliver data quickly and create -TCP connections on-demand and in keepalive mode. In highly scalable -environments, you might limit how many connections are created in -parallel. +For optimal performance, Fluent Bit tries to deliver data quickly and create TCP connections on-demand and in keepalive mode. In highly scalable environments, you might limit how many connections are created in parallel. -Use the `net.max_worker_connections` property in the output plugin section to set -the maximum number of allowed connections. This property acts at the worker level. -For example, if you have five workers and `net.max_worker_connections` is set -to 10, a maximum of 50 connections is allowed. If the limit is reached, the output -plugin issues a retry. +Use the `net.max_worker_connections` property in the output plugin section to set the maximum number of allowed connections. This property acts at the worker level. For example, if you have five workers and `net.max_worker_connections` is set to 10, a maximum of 50 connections is allowed. If the limit is reached, the output plugin issues a retry. ### Listener backlog @@ -83,9 +54,7 @@ On Linux, the effective backlog value might be capped by the kernel parameter `n ## Configuration options -The following table describes the network configuration properties available and -their usage in optimizing performance or adjusting configuration needs for plugins -that rely on networking I/O: +The following table describes the network configuration properties available and their usage in optimizing performance or adjusting configuration needs for plugins that rely on networking I/O: | Property | Description | Default | | :------- |:------------|:--------| @@ -103,8 +72,7 @@ that rely on networking I/O: ## Example -This example sends five random messages through a TCP output connection. The remote -side uses the `nc` (netcat) utility to see the data. +This example sends five random messages through a TCP output connection. The remote side uses the `nc` (netcat) utility to see the data. 
Use the following configuration snippet of your choice in a corresponding file named `fluent-bit.yaml` or `fluent-bit.conf`: @@ -171,8 +139,7 @@ In another terminal, start `nc` and make it listen for messages on TCP port 9090 nc -l 9090 ``` -Start Fluent Bit with the configuration file you defined previously to see -data flowing to netcat: +Start Fluent Bit with the configuration file you defined previously to see data flowing to netcat: ```text $ nc -l 9090 @@ -183,8 +150,6 @@ $ nc -l 9090 {"date":1587769736.572277,"rand_value":527581343064950185} ``` -If the `net.keepalive` option isn't enabled, Fluent Bit closes the TCP connection -and netcat quits. +If the `net.keepalive` option isn't enabled, Fluent Bit closes the TCP connection and netcat quits. -After the five records arrive, the connection idles. After 10 seconds, the connection -closes due to `net.keepalive_idle_timeout`. +After the five records arrive, the connection idles. After 10 seconds, the connection closes due to `net.keepalive_idle_timeout`. diff --git a/administration/performance.md b/administration/performance.md index 9bc3117c0..93e94f7af 100644 --- a/administration/performance.md +++ b/administration/performance.md @@ -1,4 +1,4 @@ -# Performance Tips +# Performance tips Fluent Bit is designed for high performance and minimal resource usage. Depending on your use case, you can optimize further using specific configuration options to achieve faster performance or reduce resource consumption. diff --git a/administration/scheduling-and-retries.md b/administration/scheduling-and-retries.md index 05debe5a9..3c04e2611 100644 --- a/administration/scheduling-and-retries.md +++ b/administration/scheduling-and-retries.md @@ -1,38 +1,27 @@ -# Scheduling and Retries +# Scheduling and retries -[Fluent Bit](https://fluentbit.io) has an engine that helps to coordinate the data -ingestion from input plugins. The engine calls the _scheduler_ to decide when it's time to -flush the data through one or multiple output plugins. The scheduler flushes new data -at a fixed number of seconds, and retries when asked. +[Fluent Bit](https://fluentbit.io) has an engine that helps to coordinate the data ingestion from input plugins. The engine calls the _scheduler_ to decide when it's time to flush the data through one or multiple output plugins. The scheduler flushes new data at a fixed number of seconds, and retries when asked. -When an output plugin gets called to flush some data, after processing that data it -can notify the engine using these possible return statuses: +When an output plugin gets called to flush some data, after processing that data it can notify the engine using these possible return statuses: - `OK`: Data successfully processed and flushed. -- `Retry`: If a retry is requested, the engine asks the scheduler to retry flushing - that data. The scheduler decides how many seconds to wait before retry. +- `Retry`: If a retry is requested, the engine asks the scheduler to retry flushing that data. The scheduler decides how many seconds to wait before retry. - `Error`: An unrecoverable error occurred and the engine shouldn't try to flush that data again. ## Configure wait time for retry -The scheduler provides two configuration options, called `scheduler.cap` and -`scheduler.base`, which can be set in the Service section. These determine the waiting -time before a retry happens. +The scheduler provides two configuration options, called `scheduler.cap` and `scheduler.base`, which can be set in the Service section. 
These determine the waiting time before a retry happens. | Key | Description | Default | | --- | ------------| --------------| | `scheduler.cap` | Set a maximum retry time in seconds. Supported in v1.8.7 or later. | `2000` | | `scheduler.base` | Set a base of exponential backoff. Supported in v1.8.7 or later. | `5` | -The `scheduler.base` determines the lower bound of time and the `scheduler.cap` -determines the upper bound for each retry. +The `scheduler.base` determines the lower bound of time and the `scheduler.cap` determines the upper bound for each retry. -Fluent Bit uses an exponential backoff and jitter algorithm to determine the waiting -time before a retry. The waiting time is a random number between a configurable upper -and lower bound. For a detailed explanation of the exponential backoff and jitter algorithm, see -[Exponential Backoff And Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/). +Fluent Bit uses an exponential backoff and jitter algorithm to determine the waiting time before a retry. The waiting time is a random number between a configurable upper and lower bound. For a detailed explanation of the exponential backoff and jitter algorithm, see [Exponential Backoff And Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/). For example: @@ -48,22 +37,17 @@ For example: When `base` is set to 3 and `cap` is set to 30: -First retry: The lower bound will be 3. The upper bound will be `3 * 2 = 6`. -The waiting time will be a random number between (3, 6). +First retry: The lower bound will be 3. The upper bound will be `3 * 2 = 6`. The waiting time will be a random number between (3, 6). -Second retry: The lower bound will be 3. The upper bound will be `3 * (2 * 2) = 12`. -The waiting time will be a random number between (3, 12). +Second retry: The lower bound will be 3. The upper bound will be `3 * (2 * 2) = 12`. The waiting time will be a random number between (3, 12). -Third retry: The lower bound will be 3. The upper bound will be `3 * (2 * 2 * 2) =24`. -The waiting time will be a random number between (3, 24). +Third retry: The lower bound will be 3. The upper bound will be `3 * (2 * 2 * 2) =24`. The waiting time will be a random number between (3, 24). -Fourth retry: The lower bound will be 3, because `3 * (2 * 2 * 2 * 2) = 48` > `30`. -The upper bound will be 30. The waiting time will be a random number between (3, 30). +Fourth retry: The lower bound will be 3, because `3 * (2 * 2 * 2 * 2) = 48` > `30`. The upper bound will be 30. The waiting time will be a random number between (3, 30). ### Wait time example -The following example configures the `scheduler.base` as `3` seconds and -`scheduler.cap` as `30` seconds. +The following example configures the `scheduler.base` as `3` seconds and `scheduler.cap` as `30` seconds. {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -104,9 +88,7 @@ The waiting time will be: ## Configure retries -The scheduler provides a configuration option called `Retry_Limit`, which can be set -independently for each output section. This option lets you disable retries or -impose a limit to try N times and then discard the data after reaching that limit: +The scheduler provides a configuration option called `Retry_Limit`, which can be set independently for each output section. 
This option lets you disable retries or impose a limit to try N times and then discard the data after reaching that limit: | | Value | Description | | :--- | :--- | :--- | @@ -116,8 +98,7 @@ impose a limit to try N times and then discard the data after reaching that limi ### Retry example -The following example configures two outputs, where the HTTP plugin has an unlimited -number of retries, and the Elasticsearch plugin have a limit of `5` retries: +The following example configures two outputs, where the HTTP plugin has an unlimited number of retries, and the Elasticsearch plugin have a limit of `5` retries: {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -126,7 +107,7 @@ number of retries, and the Elasticsearch plugin have a limit of `5` retries: pipeline: inputs: ... - + outputs: - name: http host: 192.168.5.6 @@ -160,4 +141,4 @@ pipeline: ``` {% endtab %} -{% endtabs %} \ No newline at end of file +{% endtabs %} diff --git a/administration/transport-security.md b/administration/transport-security.md index e082bf30d..413e39b6b 100644 --- a/administration/transport-security.md +++ b/administration/transport-security.md @@ -1,12 +1,9 @@ -# Transport Security +# TLS -Fluent Bit provides integrated support for Transport Layer Security (TLS) and -its predecessor Secure Sockets Layer (SSL). This section refers only -to TLS for both implementations. +Fluent Bit provides integrated support for Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL). This section refers only to TLS for both implementations. -Both input and output plugins that perform Network I/O can optionally enable TLS and -configure the behavior. The following table describes the properties available: +Both input and output plugins that perform Network I/O can optionally enable TLS and configure the behavior. The following table describes the properties available: | Property | Description | Default | | :--- | :--- | :--- | @@ -21,11 +18,9 @@ configure the behavior. The following table describes the properties available: | `tls.key_passwd` | Optional password for `tls.key_file` file. | _none_ | | `tls.vhost` | Hostname to be used for TLS SNI extension. | _none_ | -To use TLS on input plugins, you must provide both a certificate and a -private key. +To use TLS on input plugins, you must provide both a certificate and a private key. -The listed properties can be enabled in the configuration file, specifically in each -output plugin section or directly through the command line. +The listed properties can be enabled in the configuration file, specifically in each output plugin section or directly through the command line. The following **output** plugins can take advantage of the TLS feature: @@ -77,15 +72,13 @@ The following **input** plugins can take advantage of the TLS feature: - [Syslog](../pipeline/inputs/syslog.md) - [TCP](../pipeline/inputs/tcp.md) -In addition, other plugins implement a subset of TLS support, with -restricted configuration: +In addition, other plugins implement a subset of TLS support, with restricted configuration: - [Kubernetes Filter](../pipeline/filters/kubernetes.md) ## Example: enable TLS on HTTP input -By default, the HTTP input plugin uses plain TCP. Run the following command to enable -TLS: +By default, the HTTP input plugin uses plain TCP. 
Run the following command to enable TLS: ```bash ./bin/fluent-bit -i http \ @@ -99,12 +92,10 @@ TLS: ``` {% hint style="info" %} -See Tips & Trick section below for details on generating `self_signed.crt` and `self_signed.key` files shown in these -examples. +See Tips & Trick section below for details on generating `self_signed.crt` and `self_signed.key` files shown in these examples. {% endhint %} -In the previous command, the two properties `tls` and `tls.verify` are set -for demonstration purposes. Always enable verification in production environments. +In the previous command, the two properties `tls` and `tls.verify` are set for demonstration purposes. Always enable verification in production environments. The same behavior can be accomplished using a configuration file: @@ -150,8 +141,7 @@ pipeline: ## Example: enable TLS on HTTP output -By default, the HTTP output plugin uses plain TCP. Run the following command to enable -TLS: +By default, the HTTP output plugin uses plain TCP. Run the following command to enable TLS: ```bash fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \ @@ -160,8 +150,7 @@ fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \ -m '*' ``` -In the previous command, the properties `tls` and `tls.verify` are enabled -for demonstration purposes. Always enable verification in production environments. +In the previous command, the properties `tls` and `tls.verify` are enabled for demonstration purposes. Always enable verification in production environments. The same behavior can be accomplished using a configuration file: @@ -211,10 +200,7 @@ pipeline: ### Generate a self signed certificates for testing purposes -The following command generates a 4096 bit RSA key pair and a certificate that's signed -using `SHA-256` with the expiration date set to 30 days in the future. In this example, -`test.host.net` is set as the common name. This example opts out of `DES`, so the -private key is stored in plain text. +The following command generates a 4096 bit RSA key pair and a certificate that's signed using `SHA-256` with the expiration date set to 30 days in the future. In this example, `test.host.net` is set as the common name. This example opts out of `DES`, so the private key is stored in plain text. ```bash openssl req -x509 \ @@ -228,10 +214,7 @@ openssl req -x509 \ ### Connect to virtual servers using TLS -Fluent Bit supports -[TLS server name indication](https://en.wikipedia.org/wiki/Server_Name_Indication). -If you are serving multiple host names on a single IP address (for example, using -virtual hosting), you can make use of `tls.vhost` to connect to a specific hostname. +Fluent Bit supports [TLS server name indication](https://en.wikipedia.org/wiki/Server_Name_Indication). If you are serving multiple host names on a single IP address (for example, using virtual hosting), you can make use of `tls.vhost` to connect to a specific hostname. {% tabs %} @@ -279,19 +262,16 @@ pipeline: ### Verify `subjectAltName` -By default, TLS verification of host names isn't done automatically. -As an example, you can extract the X509v3 Subject Alternative Name from a certificate: +By default, TLS verification of host names isn't done automatically. As an example, you can extract the X509v3 Subject Alternative Name from a certificate: ```text X509v3 Subject Alternative Name: DNS:my.fluent-aggregator.net ``` -This certificate covers only `my.fluent-aggregator.net` so if you use a different -hostname it should fail. 
+This certificate covers only `my.fluent-aggregator.net` so if you use a different hostname it should fail. -To fully verify the alternative name and demonstrate the failure, enable -`tls.verify_hostname`: +To fully verify the alternative name and demonstrate the failure, enable `tls.verify_hostname`: {% tabs %} @@ -343,4 +323,4 @@ This outgoing connect will fail and disconnect: [2024/06/17 16:51:31] [error] [tls] error: unexpected EOF with reason: certificate verify failed [2024/06/17 16:51:31] [debug] [upstream] connection #50 failed to other.fluent-aggregator.net:24224 [2024/06/17 16:51:31] [error] [output:forward:forward.0] no upstream connections available -``` \ No newline at end of file +``` diff --git a/administration/troubleshooting.md b/administration/troubleshooting.md index a0b0c93d5..404deadb1 100644 --- a/administration/troubleshooting.md +++ b/administration/troubleshooting.md @@ -7,8 +7,7 @@ ## Tap -Tap can be used to generate events or records detailing what messages -pass through Fluent Bit, at what time and what filters affect them. +Tap can be used to generate events or records detailing what messages pass through Fluent Bit, at what time and what filters affect them. ### Basic Tap example @@ -23,11 +22,9 @@ $ docker run --rm -ti fluent/fluent-bit:latest --help | grep trace --trace setup a trace pipeline on startup. Uses a single line, ie: "input=dummy.0 output=stdout output.format='json'" ``` -If the `--enable-chunk-trace` option is present, your Fluent Bit version supports -Fluent Bit Tap, but it's disabled by default. Use this option to enable it. +If the `--enable-chunk-trace` option is present, your Fluent Bit version supports Fluent Bit Tap, but it's disabled by default. Use this option to enable it. -You can start Fluent Bit with tracing activated from the beginning by using the -`trace-input` and `trace-output` properties: +You can start Fluent Bit with tracing activated from the beginning by using the `trace-input` and `trace-output` properties: ```bash $ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout @@ -156,16 +153,14 @@ Fluent Bit v2.0.0 ``` -In another terminal, activate Tap by either using the instance id of the input -(`dummy.0`) or its alias. The alias is more predictable, and is used here: +In another terminal, activate Tap by either using the instance id of the input (`dummy.0`) or its alias. The alias is more predictable, and is used here: ```shell $ curl 127.0.0.1:2020/api/v1/trace/input_dummy {"status":"ok"} ``` -This response means Tap is active. The terminal with Fluent Bit running should now -look like this: +This response means Tap is active. The terminal with Fluent Bit running should now look like this: ```shell [0] dummy.0: [1666346615.203253156, {"message"=>"dummy"}] @@ -190,11 +185,9 @@ All the records that display are those emitted by the activities of the dummy pl ### Complex Tap example -This example takes the same steps but demonstrates how the mechanism works with more -complicated configurations. +This example takes the same steps but demonstrates how the mechanism works with more complicated configurations. -This example follows a single input, out of many, and which passes through several -filters. +This example follows a single input, out of many, and which passes through several filters. 
```shell $ docker run --rm -ti -p 2020:2020 \ @@ -211,8 +204,7 @@ $ docker run --rm -ti -p 2020:2020 \ -o null -m '*' -f 1 ``` -To ensure the window isn't cluttered by the records generated by the input plugins, -send all of it to `null`. +To ensure the window isn't cluttered by the records generated by the input plugins, send all of it to `null`. Activate with the following `curl` command: @@ -259,12 +251,9 @@ You should start seeing output similar to the following: ### Parameters for the output in Tap -When activating Tap, any plugin parameter can be given. These parameters can be used -to modify the output format, the name of the time key, the format of the date, and -other details. +When activating Tap, any plugin parameter can be given. These parameters can be used to modify the output format, the name of the time key, the format of the date, and other details. -The following example uses the parameter `"format": "json"` to demonstrate how -to show `stdout` in JSON format. +The following example uses the parameter `"format": "json"` to demonstrate how to show `stdout` in JSON format. First, run Fluent Bit enabling Tap: @@ -289,8 +278,7 @@ Fluent Bit v2.0.8 ... ``` -In another terminal, activate Tap including the output (`stdout`), and the -parameters wanted (`"format": "json"`): +In another terminal, activate Tap including the output (`stdout`), and the parameters wanted (`"format": "json"`): ```shell $ curl 127.0.0.1:2020/api/v1/trace/input_dummy -d '{"output":"stdout", "params": {"format": "json"}}' @@ -308,8 +296,7 @@ In the first terminal, you should see the output similar to the following: This parameter shows stdout in JSON format. -See [output plugins](https://docs.fluentbit.io/manual/pipeline/outputs) for -additional information. +See [output plugins](https://docs.fluentbit.io/manual/pipeline/outputs) for additional information. ### Analyze a single Tap record @@ -338,35 +325,24 @@ This filter record is an example to explain the details of a Tap record: - `type`: Defines the stage the event is generated: - `1`: Input record. This is the unadulterated input record. - - `2`: Filtered record. This is a record after it was filtered. One record is - generated per filter. + - `2`: Filtered record. This is a record after it was filtered. One record is generated per filter. - `3`: Pre-output record. This is the record right before it's sent for output. - This example is a record generated by the manipulation of a record by a filter so - it has the type `2`. -- `start_time` and `end_time`: Records the start and end of an event, and is - different for each event type: + This example is a record generated by the manipulation of a record by a filter so it has the type `2`. +- `start_time` and `end_time`: Records the start and end of an event, and is different for each event type: - type 1: When the input is received, both the start and end time. - type 2: The time when filtering is matched until it has finished processing. - type 3: The time when the input is received and when it's finally slated for output. -- `trace_id`: A string composed of a prefix and a number which is incremented with - each record received by the input during the Tap session. +- `trace_id`: A string composed of a prefix and a number which is incremented with each record received by the input during the Tap session. - `plugin_instance`: The plugin instance name as generated by Fluent Bit at runtime. - `plugin_alias`: If an alias is set this field will contain the alias set for a plugin. 
-- `records`: An array of all the records being sent. Fluent Bit handles records in
-  chunks of multiple records and chunks are indivisible, the same is done in the Tap
-  output. Each record consists of its timestamp followed by the actual data which is
-  a composite type of keys and values.
+- `records`: An array of all the records being sent. Fluent Bit handles records in chunks of multiple records, and chunks are indivisible; the same applies to the Tap output. Each record consists of its timestamp followed by the actual data, which is a composite type of keys and values.

## Dump Internals / Signal

-When the service is running, you can export [metrics](monitoring.md) to see the
-overall status of the data flow of the service. There are other use cases where
-you might need to know the current status of the service internals, like the current
-status of the internal buffers. Dump Internals can help provide this information.
+When the service is running, you can export [metrics](monitoring.md) to see the overall status of the data flow of the service. There are other use cases where you might need to know the current status of the service internals, like the current status of the internal buffers. Dump Internals can help provide this information.

-Fluent Bit v1.4 introduced the Dump Internals feature, which can be triggered from
-the command line triggering the `CONT` Unix signal.
+Fluent Bit v1.4 introduced the Dump Internals feature, which can be triggered from the command line by sending the `CONT` Unix signal.

{% hint style="info" %}
This feature is only available on Linux and BSD operating systems.
@@ -382,8 +358,7 @@ kill -CONT `pidof fluent-bit`

The command `pidof` aims to identify the Process ID of Fluent Bit.

-Fluent Bit will dump the following information to the standard output interface
-(`stdout`):
+Fluent Bit will dump the following information to the standard output interface (`stdout`):

```text
[engine] caught signal (SIGCONT)
@@ -435,9 +410,7 @@ Overall ingestion status of the plugin.

### Tasks

-When an input plugin ingests data into the engine, a Chunk is created. A Chunk can
-contains multiple records. At flush time, the engine creates a Task that contains the
-routes for the Chunk associated in question.
+When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the associated Chunk.

The Task dump describes the tasks associated to the input plugin:

@@ -450,11 +423,9 @@ The Task dump describes the tasks associated to the input plugin:

### Chunks

-The Chunks dump tells more details about all the chunks that the input plugin has
-generated and are still being processed.
+The Chunks dump provides more details about all the chunks that the input plugin has generated and that are still being processed.

-Depending of the buffering strategy and limits imposed by configuration, some Chunks
-might be `up` (in memory) or `down` (filesystem).
+Depending on the buffering strategy and limits imposed by configuration, some Chunks might be `up` (in memory) or `down` (filesystem).

| Entry | Sub-entry | Description |
| :--- | :--- | :--- |
| | `size` | Amount of bytes used by the Chunk. |
| | `size err` | Number of Chunks in an error state where its size couldn't be retrieved. 
| -### Storage Layer +### Storage Layer -Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. -The `Storage Layer` entry contains a total summary of Chunks registered by Fluent -Bit: +Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The `Storage Layer` entry contains a total summary of Chunks registered by Fluent Bit: | Entry | Sub-Entry | Description | | :--- | :--- | :--- | diff --git a/installation/amazon-ec2.md b/installation/amazon-ec2.md index 25c5a70ee..c92d02217 100644 --- a/installation/amazon-ec2.md +++ b/installation/amazon-ec2.md @@ -1,4 +1,3 @@ # Amazon EC2 -Learn how to install Fluent Bit and the AWS output plugins on Amazon Linux 2 using -[AWS Systems Manager](https://github.com/aws/aws-for-fluent-bit/tree/master/examples/fluent-bit/systems-manager-ec2). +Learn how to install Fluent Bit and the AWS output plugins on Amazon Linux 2 using [AWS Systems Manager](https://github.com/aws/aws-for-fluent-bit/tree/master/examples/fluent-bit/systems-manager-ec2). diff --git a/installation/aws-container.md b/installation/aws-container.md index 29b9a9363..f6d12b124 100644 --- a/installation/aws-container.md +++ b/installation/aws-container.md @@ -1,14 +1,10 @@ # Containers on AWS -AWS maintains a distribution of Fluent Bit that combines the latest official release with -a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working -together to rewrite their plugins for inclusion in the official Fluent Bit -distribution. +AWS maintains a distribution of Fluent Bit that combines the latest official release with a set of Go Plugins for sending logs to AWS services. AWS and Fluent Bit are working together to rewrite their plugins for inclusion in the official Fluent Bit distribution. ## Plugins -The [AWS for Fluent Bit](https://github.com/aws/aws-for-fluent-bit) image contains Go -Plugins for: +The [AWS for Fluent Bit](https://github.com/aws/aws-for-fluent-bit) image contains Go Plugins for: - Amazon CloudWatch as `cloudwatch_logs`. See the [Fluent Bit docs](https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch) or the @@ -28,19 +24,13 @@ Also, Fluent Bit includes an S3 output plugin named `s3`. ## Versions and Regional Repositories -AWS vends their container image using -[Docker Hub](https://hub.docker.com/r/amazon/aws-for-fluent-bit), and a set of highly -available regional Amazon ECR repositories. For more information, see the -[AWS for Fluent Bit GitHub repository](https://github.com/aws/aws-for-fluent-bit#public-images). +AWS vends their container image using [Docker Hub](https://hub.docker.com/r/amazon/aws-for-fluent-bit), and a set of highly available regional Amazon ECR repositories. For more information, see the [AWS for Fluent Bit GitHub repository](https://github.com/aws/aws-for-fluent-bit#public-images). -The AWS for Fluent Bit image uses a custom versioning scheme because it contains -multiple projects. To see what each release contains, see the [release notes on -GitHub](https://github.com/aws/aws-for-fluent-bit/releases). +The AWS for Fluent Bit image uses a custom versioning scheme because it contains multiple projects. To see what each release contains, see the [release notes on GitHub](https://github.com/aws/aws-for-fluent-bit/releases). ## SSM Public Parameters -AWS vends SSM public parameters with the regional repository link for each image. -These parameters can be queried by any AWS account. +AWS vends SSM public parameters with the regional repository link for each image. 
These parameters can be queried by any AWS account. To see a list of available version tags in a given region, run the following command: diff --git a/installation/buildroot-embedded-linux.md b/installation/buildroot-embedded-linux.md index a457f1c63..5badc164a 100644 --- a/installation/buildroot-embedded-linux.md +++ b/installation/buildroot-embedded-linux.md @@ -1,11 +1,10 @@ -# Buildroot / Embedded Linux +# Buildroot embedded Linux Install Fluent Bit in your embedded Linux system. ## Install -To install, select Fluent Bit in your `defconfig`. -See the `Config.in` file for all configuration options. +To install, select Fluent Bit in your `defconfig`. See the `Config.in` file for all configuration options. ```text BR2_PACKAGE_FLUENT_BIT=y @@ -23,5 +22,4 @@ Fluent Bit is started by the `S99fluent-bit` script. ## Support -All configurations with a toolchain that supports threads and dynamic library -linking are supported. +All configurations with a toolchain that supports threads and dynamic library linking are supported. diff --git a/installation/docker.md b/installation/docker.md index 918170950..4f54726a5 100644 --- a/installation/docker.md +++ b/installation/docker.md @@ -1,7 +1,6 @@ # Docker -Fluent Bit container images are available on Docker Hub ready for production usage. -Current available images can be deployed in multiple architectures. +Fluent Bit container images are available on Docker Hub ready for production usage. Current available images can be deployed in multiple architectures. ## Start Docker @@ -35,8 +34,7 @@ docker run -ti -v ./fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml \ ## Tags and versions -The following table describes the Linux container tags that are available on Docker -Hub [fluent/fluent-bit](https://hub.docker.com/r/fluent/fluent-bit/) repository: +The following table describes the Linux container tags that are available on Docker Hub [fluent/fluent-bit](https://hub.docker.com/r/fluent/fluent-bit/) repository: | Tag(s) | Manifest Architectures | Description | | ------------ | ------------------------- | -------------------------------------------------------------- | @@ -179,28 +177,19 @@ Hub [fluent/fluent-bit](https://hub.docker.com/r/fluent/fluent-bit/) repository: It's strongly suggested that you always use the latest image of Fluent Bit. -Container images for Windows Server 2019 and Windows Server 2022 are provided for -v2.0.6 and later. These can be found as tags on the same Docker Hub registry. +Container images for Windows Server 2019 and Windows Server 2022 are provided for v2.0.6 and later. These can be found as tags on the same Docker Hub registry. ## Multi-architecture images -Fluent Bit production stable images are based on -[Distroless](https://github.com/GoogleContainerTools/distroless). Focusing on -security, these images contain only the Fluent Bit binary and minimal system -libraries and basic configuration. +Fluent Bit production stable images are based on [Distroless](https://github.com/GoogleContainerTools/distroless). Focusing on security, these images contain only the Fluent Bit binary and minimal system libraries and basic configuration. -Debug images are available for all architectures (for 1.9.0 and later), and contain -a full Debian shell and package manager that can be used to troubleshoot or for -testing purposes. +Debug images are available for all architectures (for 1.9.0 and later), and contain a full Debian shell and package manager that can be used to troubleshoot or for testing purposes. 
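+
+For example, one way to get an interactive shell inside a debug image for troubleshooting is the following; the tag shown here is only an illustration, so pick an available `-debug` tag from the table above:
+
+```shell
+# Override the entrypoint to get a shell instead of starting Fluent Bit
+docker run --rm -it --entrypoint /bin/sh fluent/fluent-bit:2.0.8-debug
+```
+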
-From a deployment perspective, there's no need to specify an architecture. The -container client tool that pulls the image gets the proper layer for the running -architecture. +From a deployment perspective, there's no need to specify an architecture. The container client tool that pulls the image gets the proper layer for the running architecture. ## Verify signed container images -Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. -Verify these signatures using `cosign` ([install guide](https://docs.sigstore.dev/cosign/installation/)): +Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using `cosign` ([install guide](https://docs.sigstore.dev/cosign/installation/)): ```shell $ cosign verify --key "https://packages.fluentbit.io/fluentbit-cosign.pub" fluent/fluent-bit:2.0.6 @@ -213,8 +202,7 @@ The following checks were performed on each of these signatures: [{"critical":{"identity":{"docker-reference":"index.docker.io/fluent/fluent-bit"},"image":{"docker-manifest-digest":"sha256:c740f90b07f42823d4ecf4d5e168f32ffb4b8bcd87bc41df8f5e3d14e8272903"},"type":"cosign container image signature"},"optional":{"release":"2.0.6","repo":"fluent/fluent-bit","workflow":"Release from staging"}}] ``` -Replace `cosign` with the binary installed if it has a different name -(for example, `cosign-linux-amd64`). +Replace `cosign` with the binary installed if it has a different name (for example, `cosign-linux-amd64`). Keyless signing is also provided but is still experimental: @@ -222,10 +210,7 @@ Keyless signing is also provided but is still experimental: COSIGN_EXPERIMENTAL=1 cosign verify fluent/fluent-bit:2.0.6 ``` -`COSIGN_EXPERIMENTAL=1` is used to allow verification of images signed in keyless -mode. To learn more about keyless signing, see the -[Sigstore keyless signature](https://docs.sigstore.dev/cosign/signing/overview/) -documentation. +`COSIGN_EXPERIMENTAL=1` is used to allow verification of images signed in keyless mode. To learn more about keyless signing, see the [Sigstore keyless signature](https://docs.sigstore.dev/cosign/signing/overview/) documentation. ## Get started @@ -235,8 +220,7 @@ documentation. docker pull cr.fluentbit.io/fluent/fluent-bit:2.0 ``` -1. After the image is in place, run the following test which makes Fluent Bit - measure CPU usage by the container: +1. After the image is in place, run the following test which makes Fluent Bit measure CPU usage by the container: ```shell docker run -ti cr.fluentbit.io/fluent/fluent-bit:2.0 \ @@ -255,18 +239,12 @@ to the standard output. For example: ### Why there is no Fluent Bit Docker image based on Alpine Linux? -Alpine Linux uses Musl C library instead of Glibc. Musl isn't fully compatible with -Glibc, which generated many issues in the following areas when used with Fluent Bit: +Alpine Linux uses Musl C library instead of Glibc. Musl isn't fully compatible with Glibc, which generated many issues in the following areas when used with Fluent Bit: -- Memory Allocator: To run properly in high-load environments, Fluent Bit uses - Jemalloc as a default memory allocator which reduces fragmentation and provides - better performance. Jemalloc can't run smoothly with Musl and requires extra work. -- Alpine Linux Musl functions bootstrap have a compatibility issue when loading - Golang shared libraries. This causes problems when trying to load Golang output - plugins in Fluent Bit. 
+- Memory Allocator: To run properly in high-load environments, Fluent Bit uses Jemalloc as a default memory allocator which reduces fragmentation and provides better performance. Jemalloc can't run smoothly with Musl and requires extra work. +- Alpine Linux Musl functions bootstrap have a compatibility issue when loading Golang shared libraries. This causes problems when trying to load Golang output plugins in Fluent Bit. - Alpine Linux Musl Time format parser doesn't support Glibc extensions. -- The Fluent Bit maintainers' preference for base images are Distroless and - Debian for security and maintenance reasons. +- The Fluent Bit maintainers' preference for base images are Distroless and Debian for security and maintenance reasons. ### Why use Distroless containers? @@ -284,29 +262,19 @@ The reasons for using Distroless are well covered in With any choice, there are downsides: - No shell or package manager to update or add things. - - Generally, dynamic updating is a bad idea in containers as the time it's done - affects the outcome: two containers started at different times using the same - base image can perform differently or get different dependencies. - - A better approach is to rebuild a new image version. You can do this with - Distroless, but it's harder and requires multistage builds or similar to provide - the new dependencies. + - Generally, dynamic updating is a bad idea in containers as the time it's done affects the outcome: two containers started at different times using the same base image can perform differently or get different dependencies. + - A better approach is to rebuild a new image version. You can do this with Distroless, but it's harder and requires multistage builds or similar to provide the new dependencies. - Debugging can be harder. - - More specifically you need applications set up to properly expose information for - debugging rather than rely on traditional debug approaches of connecting to - processes or dumping memory. This can be an upfront cost versus a runtime cost but - does shift left in the development process so hopefully is a reduction overall. -- Assumption that Distroless is secure: nothing is secure and there are still - exploits so it doesn't remove the need for securing your system. -- Sometimes you need to use a common base image, such as with audits, security, - health, and so on. + - More specifically you need applications set up to properly expose information for debugging rather than rely on traditional debug approaches of connecting to processes or dumping memory. This can be an upfront cost versus a runtime cost but does shift left in the development process so hopefully is a reduction overall. +- Assumption that Distroless is secure: nothing is secure and there are still exploits so it doesn't remove the need for securing your system. +- Sometimes you need to use a common base image, such as with audits, security, health, and so on. Using `exec` to access a container will potentially impact resource limits. For debugging, debug containers are available now in K8S: -- This can be a significantly different container from the one you want to - investigate, with lots of extra tools or even a different base. +- This can be a significantly different container from the one you want to investigate, with lots of extra tools or even a different base. - No resource limits applied to this container, which can be good or bad. - Runs in pod namespaces. It's another container that can access everything the others can. 
- Might need architecture of the pod to share volumes or other information. diff --git a/installation/getting-started-with-fluent-bit.md b/installation/getting-started-with-fluent-bit.md index 2f50d20f1..cf8bff880 100644 --- a/installation/getting-started-with-fluent-bit.md +++ b/installation/getting-started-with-fluent-bit.md @@ -56,6 +56,4 @@ Fluent Bit Sandbox Environment ## Enterprise Packages -Fluent Bit packages are also provided by [enterprise -providers](https://fluentbit.io/enterprise) for older end of life versions, Unix -systems, and additional support and features including aspects like CVE backporting. +Fluent Bit packages are also provided by [enterprise providers](https://fluentbit.io/enterprise) for older end of life versions, Unix systems, and additional support and features including aspects like CVE backporting. diff --git a/installation/kubernetes.md b/installation/kubernetes.md index 86afcf370..90f70570b 100644 --- a/installation/kubernetes.md +++ b/installation/kubernetes.md @@ -6,49 +6,33 @@ description: Kubernetes Production Grade Log Processor ![](<../.gitbook/assets/fluentbit\_kube\_logging (1).png>) -[Fluent Bit](http://fluentbit.io) is a lightweight and extensible log processor -with full support for Kubernetes: +[Fluent Bit](http://fluentbit.io) is a lightweight and extensible log processor with full support for Kubernetes: - Process Kubernetes containers logs from the file system or Systemd/Journald. - Enrich logs with Kubernetes Metadata. -- Centralize your logs in third party storage services like Elasticsearch, InfluxDB, - HTTP, and so on. +- Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, and so on. ## Concepts -Before getting started it's important to understand how Fluent Bit will be deployed. -Kubernetes manages a cluster of nodes. The Fluent Bit log agent tool needs to run -on every node to collect logs from every pod. Fluent Bit is deployed as a -DaemonSet, which is a pod that runs on every node of the cluster. +Before getting started it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes. The Fluent Bit log agent tool needs to run on every node to collect logs from every pod. Fluent Bit is deployed as a DaemonSet, which is a pod that runs on every node of the cluster. -When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In -addition, Fluent Bit adds metadata to each entry using the -[Kubernetes](../pipeline/filters/kubernetes) filter plugin. +When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](../pipeline/filters/kubernetes) filter plugin. -The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant -information such as the `pod_id`, `labels`, and `annotations`. Other fields, such as -`pod_name`, `container_id`, and `container_name`, are retrieved locally from the log -file names. All of this is handled automatically, and no intervention is required from a -configuration aspect. +The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the `pod_id`, `labels`, and `annotations`. Other fields, such as `pod_name`, `container_id`, and `container_name`, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect. 
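+
+As a minimal illustrative sketch (not the configuration shipped by the Helm chart described below), a pipeline that tails container logs and enriches them with the Kubernetes filter might look like the following in YAML format. The path, tag, and filter options are assumptions to adapt to your cluster:
+
+```yaml
+pipeline:
+  inputs:
+    - name: tail
+      # Container log files written on each node
+      path: /var/log/containers/*.log
+      tag: kube.*
+      multiline.parser: docker, cri
+  filters:
+    - name: kubernetes
+      # Enrich records with pod metadata from the API server
+      match: 'kube.*'
+      merge_log: on
+  outputs:
+    - name: stdout
+      match: '*'
+```
+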
## Installation

-[Fluent Bit](http://fluentbit.io) should be deployed as a DaemonSet, so it will
-be available on every node of your Kubernetes cluster.
+[Fluent Bit](http://fluentbit.io) should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.

-The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm
-Chart at .
+The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm chart, described in the following sections.

### Note for OpenShift

-If you are using Red Hat OpenShift you must set up Security Context Constraints (SCC)
-using the relevant option in the helm chart.
+If you are using Red Hat OpenShift, you must set up Security Context Constraints (SCC) using the relevant option in the Helm chart.

### Installing with Helm Chart

-[Helm](https://helm.sh) is a package manager for Kubernetes and lets you deploy
-application packages into your running cluster. Fluent Bit is distributed using a Helm
-chart found in the [Fluent Helm Charts repository](https://github.com/fluent/helm-charts).
+[Helm](https://helm.sh) is a package manager for Kubernetes and lets you deploy application packages into your running cluster. Fluent Bit is distributed using a Helm chart found in the [Fluent Helm Charts repository](https://github.com/fluent/helm-charts).

Use the following command to add the Fluent Helm charts repository

@@ -56,9 +40,7 @@ Use the following command to add the Fluent Helm charts repository
helm repo add fluent https://fluent.github.io/helm-charts
```

-To validate that the repository was added, run `helm search repo fluent` to
-ensure the charts were added. The default chart can then be installed by running the
-following command:
+To validate that the repository was added, run `helm search repo fluent` and confirm the Fluent Bit chart is listed. The default chart can then be installed by running the following command:

```shell
helm upgrade --install fluent-bit fluent/fluent-bit
@@ -66,31 +48,17 @@ helm upgrade --install fluent-bit fluent/fluent-bit

### Default Values

-The default chart values include configuration to read container logs. With Docker
-parsing, Systemd logs apply Kubernetes metadata enrichment, and output to an
-Elasticsearch cluster. You can modify the
-[included values file](https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml)
-to specify additional outputs, health checks, monitoring endpoints, or other
-configuration options.
+The default chart values include configuration to read container logs (with Docker parsing), read systemd logs, apply Kubernetes metadata enrichment, and send the output to an Elasticsearch cluster. You can modify the [included values file](https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml) to specify additional outputs, health checks, monitoring endpoints, or other configuration options.

## Details

The default configuration of Fluent Bit ensures the following:

-- Consume all containers logs from the running node and parse them with either
-  the `docker` or `cri` multi-line parser.
-- Persist how far it got into each file it's tailing so if a pod is restarted it
-  picks up from where it left off.
-- The Kubernetes filter adds Kubernetes metadata, specifically `labels` and
-  `annotations`. The filter only contacts the API Server when it can't find the
-  cached information, otherwise it uses the cache.
-- The default backend in the configuration is Elasticsearch set by the
-  [Elasticsearch Output Plugin](../pipeline/outputs/elasticsearch.md).
- It uses the Logstash format to ingest the logs. If you need a different `Index` - and `Type`, refer to the plugin option and update as needed. -- There is an option called `Retry_Limit`, which is set to `False`. If Fluent Bit - can't flush the records to Elasticsearch, it will retry indefinitely until it - succeeds. +- Consume all containers logs from the running node and parse them with either the `docker` or `cri` multi-line parser. +- Persist how far it got into each file it's tailing so if a pod is restarted it picks up from where it left off. +- The Kubernetes filter adds Kubernetes metadata, specifically `labels` and `annotations`. The filter only contacts the API Server when it can't find the cached information, otherwise it uses the cache. +- The default backend in the configuration is Elasticsearch set by the [Elasticsearch Output Plugin](../pipeline/outputs/elasticsearch.md). It uses the Logstash format to ingest the logs. If you need a different `Index` and `Type`, refer to the plugin option and update as needed. +- There is an option called `Retry_Limit`, which is set to `False`. If Fluent Bit can't flush the records to Elasticsearch, it will retry indefinitely until it succeeds. ## Windows deployment @@ -102,19 +70,15 @@ When deploying Fluent Bit to Kubernetes, there are three log files that you need - `C:\k\kubelet.err.log` - This is the error log file from kubelet daemon running on host. Retain this file - for future troubleshooting, including debugging deployment failures. + This is the error log file from kubelet daemon running on host. Retain this file for future troubleshooting, including debugging deployment failures. - `C:\var\log\containers\__-.log` - This is the main log file you need to watch. Configure Fluent Bit to follow this - file. It's a symlink to the Docker log file in `C:\ProgramData\`, with some - additional metadata on the file's name. + This is the main log file you need to watch. Configure Fluent Bit to follow this file. It's a symlink to the Docker log file in `C:\ProgramData\`, with some additional metadata on the file's name. - `C:\ProgramData\Docker\containers\\.log` - This is the log file produced by Docker. Normally you don't directly read from this - file, but you need to make sure that this file is visible from Fluent Bit. + This is the log file produced by Docker. Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit. Typically, your deployment YAML contains the following volume configuration. @@ -156,7 +120,7 @@ parsers: time_key: time time_format: '%Y-%m-%dT%H:%M:%S.%L' time_keep: true - + pipeline: inputs: - name: tail @@ -166,17 +130,17 @@ pipeline: db: 'C:\\fluent-bit\\tail_docker.db' mem_buf_limit: 7MB refresh_interval: 10 - + - name: tail tag: kube.error path: 'C:\\k\\kubelet.err.log' db: 'C:\\fluent-bit\\tail_kubelet.db' - + filters: - name: kubernetes match: kube.* kube_url: 'https://kubernetes.default.svc.cluster.local:443' - + outputs: - name: stdout match: '*' @@ -229,16 +193,12 @@ parsers.conf: | ### Mitigate unstable network on Windows pods -Windows pods often lack working DNS immediately after boot -([#78479](https://github.com/kubernetes/kubernetes/issues/78479)). To mitigate this -issue, `filter_kubernetes` provides a built-in mechanism to wait until the network -starts up: +Windows pods often lack working DNS immediately after boot ([#78479](https://github.com/kubernetes/kubernetes/issues/78479)). 
To mitigate this issue, `filter_kubernetes` provides a built-in mechanism to wait until the network starts up: - `DNS_Retries`: Retries N times until the network start working (6) - `DNS_Wait_Time`: Lookup interval between network status checks (30) -By default, Fluent Bit waits for three minutes (30 seconds x 6 times). If it's not enough -for you, update the configuration as follows: +By default, Fluent Bit waits for three minutes (30 seconds x 6 times). If it's not enough for you, update the configuration as follows: {% tabs %} {% tab title="fluent-bit.yaml" %} @@ -264,4 +224,4 @@ for you, update the configuration as follows: ``` % endtab %} -{% endtabs %} \ No newline at end of file +{% endtabs %} diff --git a/installation/linux/README.md b/installation/linux/README.md index 700424251..7aac3e7e0 100644 --- a/installation/linux/README.md +++ b/installation/linux/README.md @@ -1,10 +1,8 @@ # Linux packages -The most secure option is to create the repositories according to the instructions -for your specific OS. +The most secure option is to create the repositories according to the instructions for your specific OS. -An installation script is provided for use with most Linux targets. -This will by default install the most recent version released. +An installation script is provided for use with most Linux targets. This will by default install the most recent version released. ```bash curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh @@ -14,9 +12,7 @@ This is a helper and should always be validated prior to use. ## GPG key updates -For the 1.9.0 and 1.8.15 releases and later, the GPG key -[has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure the new -key is added. +For the 1.9.0 and 1.8.15 releases and later, the GPG key [has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure the new key is added. The GPG Key fingerprint of the new key is: @@ -25,8 +21,7 @@ C3C0 A285 34B9 293E AF51 FABD 9F9D DC08 3888 C1CD Fluentbit releases (Releases signing key) ``` -The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) -and might be required to install previous versions. +The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) and might be required to install previous versions. The GPG Key fingerprint of the old key is: @@ -34,10 +29,8 @@ The GPG Key fingerprint of the old key is: F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A ``` -Refer to the [supported platform documentation](./../supported-platforms.md) to see -which platforms are supported in each release. +Refer to the [supported platform documentation](./../supported-platforms.md) to see which platforms are supported in each release. ## Migration to Fluent Bit -For version 1.9 and later, `td-agent-bit` is a deprecated package and is removed -after 1.9.9. The correct package name to use now is `fluent-bit`. +For version 1.9 and later, `td-agent-bit` is a deprecated package and is removed after 1.9.9. The correct package name to use now is `fluent-bit`. diff --git a/installation/linux/alma-rocky.md b/installation/linux/alma-rocky.md index 1f6fee917..05bbbfe1d 100644 --- a/installation/linux/alma-rocky.md +++ b/installation/linux/alma-rocky.md @@ -1,7 +1,6 @@ -# Rocky Linux and Alma Linux +# Rocky Linux and Alma Linux -Fluent Bit is distributed as the `fluent-bit` package and is available for the latest -versions of Rocky or Alma Linux now that CentOS Stream is tracking more recent dependencies. 
+Fluent Bit is distributed as the `fluent-bit` package and is available for the latest versions of Rocky or Alma Linux now that CentOS Stream is tracking more recent dependencies.

Fluent Bit supports the following architectures:

@@ -11,29 +10,21 @@ Fluent Bit supports the following architectures:

## Single line install

-Fluent Bit provides an installation script to use for most Linux targets.
-This will always install the most recently released version.
+Fluent Bit provides an installation script to use for most Linux targets. This will always install the most recently released version.

```bash
curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh
```

-This is a convenience helper and should always be validated prior to use.
-Older versions of this install script will not support auto-detecting Rocky or Alma Linux.
-The recommended secure deployment approach is to use the following instructions:
+This is a convenience helper and should always be validated prior to use. Older versions of this install script will not support auto-detecting Rocky or Alma Linux. The recommended secure deployment approach is to use the following instructions:

## RHEL 9

-From CentOS 9 Stream onwards, the CentOS dependencies will update more often than downstream usage.
-This may mean that incompatible (more recent) versions are provided of certain dependencies (e.g. OpenSSL).
-For OSS, we also provide RockyLinux and AlmaLinux repositories.
-This may be required for RHEL 9 as well which will no longer track equivalent CentOS 9 stream dependencies.
-No RHEL 9 build is provided, it is expected to use one of the OSS variants listed.
+From CentOS 9 Stream onwards, the CentOS dependencies will update more often than downstream usage. This can mean that incompatible (more recent) versions of certain dependencies (for example, OpenSSL) are provided. For OSS, Rocky Linux and Alma Linux repositories are also provided. These repositories might also be required for RHEL 9, which will no longer track equivalent CentOS 9 Stream dependencies. No RHEL 9 build is provided; use one of the OSS variants listed.

## Configure Yum

-The `fluent-bit` is provided through a Yum repository.
-To add the repository reference to your system:
+The `fluent-bit` package is provided through a Yum repository. To add the repository reference to your system:

1. In `/etc/yum.repos.d/`, add a new file called `fluent-bit.repo`.
1. Add the following content to the file - replace `almalinux` with `rockylinux` if required:

@@ -48,8 +39,7 @@ To add the repository reference to your system:
   enabled=1
   ```

-1. As a best practice, enable `gpgcheck` and `repo_gpgcheck` for security reasons.
-   Fluent Bit signs its repository metadata and all Fluent Bit packages.
+1. As a best practice, enable `gpgcheck` and `repo_gpgcheck` for security reasons. Fluent Bit signs its repository metadata and all Fluent Bit packages.

## Install

@@ -78,7 +68,4 @@ $ systemctl status fluent-bit
...
```

-The default Fluent Bit configuration collect metrics of CPU usage and sends the
-records to the standard output. You can see the outgoing data in your
-`/var/log/messages` file.
-
+The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your `/var/log/messages` file.
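+
+For example, to watch those records as they arrive (a quick check, assuming your syslog daemon writes to the `/var/log/messages` path mentioned above):
+
+```shell
+sudo tail -f /var/log/messages
+```
+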
diff --git a/installation/linux/amazon-linux.md b/installation/linux/amazon-linux.md index 64d4519bc..331129aa6 100644 --- a/installation/linux/amazon-linux.md +++ b/installation/linux/amazon-linux.md @@ -2,8 +2,7 @@ ## Install on Amazon Linux -Fluent Bit is distributed as the `fluent-bit` package and is available for the latest -Amazon Linux 2 and Amazon Linux 2023. The following architectures are supported +Fluent Bit is distributed as the `fluent-bit` package and is available for the latest Amazon Linux 2 and Amazon Linux 2023. The following architectures are supported - x86_64 - aarch64 / arm64v8 @@ -12,21 +11,17 @@ Amazon Linux 2022 is no longer supported. ## Single line install -Fluent Bit provides an installation script to use for most Linux targets. -This will always install the most recently released version. +Fluent Bit provides an installation script to use for most Linux targets. This will always install the most recently released version. ```bash copy curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh ``` -This is a convenience helper and should always be validated prior to use. -The recommended secure deployment approach is to use the following instructions: +This is a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to use the following instructions: ## Configure Yum -The `fluent-bit` is provided through a Yum repository. To add the repository -reference to your system, add a new file called `fluent-bit.repo` in -`/etc/yum.repos.d/` with the following content: +The `fluent-bit` is provided through a Yum repository. To add the repository reference to your system, add a new file called `fluent-bit.repo` in `/etc/yum.repos.d/` with the following content: ### Amazon Linux 2 @@ -50,14 +45,11 @@ gpgkey=https://packages.fluentbit.io/fluentbit.key enabled=1 ``` -You should always enable `gpgcheck` for security reasons. All Fluent Bit packages -are signed. +You should always enable `gpgcheck` for security reasons. All Fluent Bit packages are signed. ### Updated key from March 2022 -For the 1.9.0 and 1.8.15 and later releases, the -[GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure -this new one is added. +For the 1.9.0 and 1.8.15 and later releases, the [GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure this new one is added. The GPG Key fingerprint of the new key is: @@ -66,8 +58,7 @@ C3C0 A285 34B9 293E AF51 FABD 9F9D DC08 3888 C1CD Fluentbit releases (Releases signing key) ``` -The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) -and might be required to install previous versions. +The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) and might be required to install previous versions. The GPG Key fingerprint of the old key is: @@ -75,8 +66,7 @@ The GPG Key fingerprint of the old key is: F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A ``` -Refer to the [supported platform documentation](../supported-platforms.md) to see -which platforms are supported in each release. +Refer to the [supported platform documentation](../supported-platforms.md) to see which platforms are supported in each release. ### Install @@ -105,6 +95,4 @@ $ systemctl status fluent-bit ... ``` -The default Fluent Bit configuration collect metrics of CPU usage and sends the -records to the standard output. You can see the outgoing data in your -`/var/log/messages` file. 
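+
+On systemd-based hosts you can also follow the service output directly with standard `journalctl` usage (an optional alternative to reading the syslog file):
+
+```shell
+sudo journalctl -u fluent-bit -f
+```
+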
+The default Fluent Bit configuration collect metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your `/var/log/messages` file. diff --git a/installation/linux/debian.md b/installation/linux/debian.md index 95f56c3d7..6e566d61f 100644 --- a/installation/linux/debian.md +++ b/installation/linux/debian.md @@ -1,7 +1,6 @@ # Debian -Fluent Bit is distributed as the `fluent-bit` package and is available for the latest -stable CentOS system. +Fluent Bit is distributed as the `fluent-bit` package and is available for the latest stable CentOS system. The following architectures are supported @@ -11,23 +10,19 @@ The following architectures are supported ## Single line install -Fluent Bit provides an installation script to use for most Linux targets. -This will always install the most recently released version. +Fluent Bit provides an installation script to use for most Linux targets. This will always install the most recently released version. ```bash copy curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh ``` -This is a convenience helper and should always be validated prior to use. -The recommended secure deployment approach is to use the following instructions: +This is a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to use the following instructions: ## Server GPG key -The first step is to add the Fluent Bit server GPG key to your keyring to ensure -you can get the correct signed packages. +The first step is to add the Fluent Bit server GPG key to your keyring to ensure you can get the correct signed packages. -Follow the official -[Debian wiki guidance](https://wiki.debian.org/DebianRepository/UseThirdParty#OpenPGP_Key_distribution). +Follow the official [Debian wiki guidance](https://wiki.debian.org/DebianRepository/UseThirdParty#OpenPGP_Key_distribution). ```bash copy sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg' @@ -35,9 +30,7 @@ sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > / ### Updated key from March 2022 -For the 1.9.0 and 1.8.15 and later releases, the -[GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure -this new one is added. +For the 1.9.0 and 1.8.15 and later releases, the [GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure this new one is added. The GPG Key fingerprint of the new key is: @@ -46,8 +39,7 @@ C3C0 A285 34B9 293E AF51 FABD 9F9D DC08 3888 C1CD Fluentbit releases (Releases signing key) ``` -The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) -and might be required to install previous versions. +The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) and might be required to install previous versions. The GPG Key fingerprint of the old key is: @@ -55,21 +47,17 @@ The GPG Key fingerprint of the old key is: F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A ``` -Refer to the [supported platform documentation](../supported-platforms.md) to see -which platforms are supported in each release. +Refer to the [supported platform documentation](../supported-platforms.md) to see which platforms are supported in each release. ## Update your sources lists For Debian, you must add the Fluent Bit APT server entry to your sources lists. 
-
```bash copy
echo "deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/ubuntu/${CODENAME} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list
```

-Replace _`CODENAME`_ with your specific
-[Debian release name](https://wiki.debian.org/DebianReleases#Production\_Releases)
-(for example: `bookworm` for Debian 12)
+Replace _`CODENAME`_ with your specific [Debian release name](https://wiki.debian.org/DebianReleases#Production\_Releases) (for example: `bookworm` for Debian 12).

## Update your repositories database

@@ -80,8 +68,7 @@ sudo apt-get update
```

{% hint style="info" %}
-Fluent Bit recommends upgrading your system (`sudo apt-get upgrade`). This could
-avoid potential issues with expired certificates.
+Fluent Bit recommends upgrading your system (`sudo apt-get upgrade`). This could avoid potential issues with expired certificates.
{% endhint %}

## Install Fluent Bit

@@ -114,6 +101,4 @@ sudo service fluent-bit status
...
```

-The default Fluent Bit configuration collect metrics of CPU usage and sends the
-records to the standard output. You can see the outgoing data in your
-`/var/log/messages` file.
+The default Fluent Bit configuration collects CPU usage metrics and sends the records to the standard output. You can see the outgoing data in your `/var/log/messages` file.
diff --git a/installation/linux/raspbian-raspberry-pi.md b/installation/linux/raspbian-raspberry-pi.md
index 9df2c5f44..773c05947 100644
--- a/installation/linux/raspbian-raspberry-pi.md
+++ b/installation/linux/raspbian-raspberry-pi.md
@@ -1,8 +1,6 @@
# Raspbian and Raspberry Pi

-Fluent Bit is distributed as the `fluent-bit` package and is available for the
-Raspberry, specifically for [Raspbian](http://raspbian.org) distribution. The
-following versions are supported:
+Fluent Bit is distributed as the `fluent-bit` package and is available for Raspberry Pi devices, specifically for the [Raspbian](http://raspbian.org) distribution. The following versions are supported:

* Raspbian Bookworm (12)
* Raspbian Bullseye (11)
@@ -10,8 +8,7 @@ following versions are supported:

## Server GPG key

-The first step is to add the Fluent Bit server GPG key to your keyring so you
-can get FLuent Bit signed packages:
+The first step is to add the Fluent Bit server GPG key to your keyring so you can get signed Fluent Bit packages:

```shell
sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | sudo apt-key add - '
@@ -19,9 +16,7 @@ sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | sudo apt-key add

### Updated key from March 2022

-For the 1.9.0 and 1.8.15 and later releases, the
-[GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure
-this new one is added.
+For the 1.9.0 and 1.8.15 and later releases, the [GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure the new key is added.

The GPG Key fingerprint of the new key is:

@@ -30,8 +25,7 @@ C3C0 A285 34B9 293E AF51 FABD 9F9D DC08 3888 C1CD
Fluentbit releases (Releases signing key)
```

-The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key)
-and might be required to install previous versions.
+The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) and might be required to install previous versions.
The GPG Key fingerprint of the old key is: @@ -39,13 +33,11 @@ The GPG Key fingerprint of the old key is: F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A ``` -Refer to the [supported platform documentation](./../supported-platforms.md) to see -which platforms are supported in each release. +Refer to the [supported platform documentation](./../supported-platforms.md) to see which platforms are supported in each release. ## Update your sources lists -On Debian and derivative systems such as Raspbian, you need to add the Fluent Bit -APT server entry to your sources lists. +On Debian and derivative systems such as Raspbian, you need to add the Fluent Bit APT server entry to your sources lists. Add the following content at bottom of your `/etc/apt/sources.list` file. @@ -76,8 +68,7 @@ sudo apt-get update ``` {% hint style="info" %} -Fluent Bit recommends upgrading your system (`sudo apt-get upgrade`) to avoid -potential issues with expired certificates. +Fluent Bit recommends upgrading your system (`sudo apt-get upgrade`) to avoid potential issues with expired certificates. {% endhint %} ## Install Fluent Bit @@ -110,6 +101,4 @@ sudo service fluent-bit status ... ``` -The default configuration of Fluent Bit collects metrics for CPU usage and -sends the records to the standard output. You can see the outgoing data in your -`/var/log/syslog` file. +The default configuration of Fluent Bit collects metrics for CPU usage and sends the records to the standard output. You can see the outgoing data in your `/var/log/syslog` file. diff --git a/installation/linux/redhat-centos.md b/installation/linux/redhat-centos.md index d1ebdd71f..627f001f2 100644 --- a/installation/linux/redhat-centos.md +++ b/installation/linux/redhat-centos.md @@ -1,7 +1,6 @@ # Red Hat and CentOS -Fluent Bit is distributed as the `fluent-bit` package and is available for the latest -stable CentOS system. +Fluent Bit is distributed as the `fluent-bit` package and is available for the latest stable CentOS system. Fluent Bit supports the following architectures: @@ -9,20 +8,17 @@ Fluent Bit supports the following architectures: - `aarch64` - `arm64v8` -For CentOS 9 and later, Fluent Bit uses [CentOS Stream](https://www.centos.org/centos-stream/) -as the canonical base system. +For CentOS 9 and later, Fluent Bit uses [CentOS Stream](https://www.centos.org/centos-stream/) as the canonical base system. ## Single line install -Fluent Bit provides an installation script to use for most Linux targets. -This will always install the most recently released version. +Fluent Bit provides an installation script to use for most Linux targets. This will always install the most recently released version. ```bash curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh ``` -This is a convenience helper and should always be validated prior to use. -The recommended secure deployment approach is to use the following instructions: +This is a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to use the following instructions: ## CentOS 8 @@ -39,18 +35,13 @@ An alternative is to use Rocky or Alma Linux, which should be equivalent. ## RHEL/AlmaLinux/RockyLinux and CentOS 9 Stream -From CentOS 9 Stream onwards, the CentOS dependencies will update more often than downstream usage. -This may mean that incompatible (more recent) versions are provided of certain dependencies (e.g. OpenSSL). -For OSS, we also provide RockyLinux and AlmaLinux repositories. 
+From CentOS 9 Stream onwards, the CentOS dependencies will update more often than downstream usage. This may mean that incompatible (more recent) versions are provided of certain dependencies (e.g. OpenSSL). For OSS, we also provide RockyLinux and AlmaLinux repositories. -Replace the `centos` string in Yum configuration below with `almalinux` or `rockylinux` to use those repositories instead. -This may be required for RHEL 9 as well which will no longer track equivalent CentOS 9 stream dependencies. -No RHEL 9 build is provided, it is expected to use one of the OSS variants listed. +Replace the `centos` string in Yum configuration below with `almalinux` or `rockylinux` to use those repositories instead. This may be required for RHEL 9 as well which will no longer track equivalent CentOS 9 stream dependencies. No RHEL 9 build is provided, it is expected to use one of the OSS variants listed. ## Configure Yum -The `fluent-bit` is provided through a Yum repository. To add the repository -reference to your system: +The `fluent-bit` is provided through a Yum repository. To add the repository reference to your system: 1. In `/etc/yum.repos.d/`, add a new file called `fluent-bit.repo`. 1. Add the following content to the file: @@ -65,14 +56,11 @@ reference to your system: enabled=1 ``` -1. As a best practice, enable `gpgcheck` and `repo_gpgcheck` for security reasons. - Fluent Bit signs its repository metadata and all Fluent Bit packages. +1. As a best practice, enable `gpgcheck` and `repo_gpgcheck` for security reasons. Fluent Bit signs its repository metadata and all Fluent Bit packages. ### Updated key from March 2022 -For the 1.9.0 and 1.8.15 and later releases, the -[GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure -this new one is added. +For the 1.9.0 and 1.8.15 and later releases, the [GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure this new one is added. The GPG Key fingerprint of the new key is: @@ -81,8 +69,7 @@ C3C0 A285 34B9 293E AF51 FABD 9F9D DC08 3888 C1CD Fluentbit releases (Releases signing key) ``` -The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) -and might be required to install previous versions. +The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) and might be required to install previous versions. The GPG Key fingerprint of the old key is: @@ -90,8 +77,7 @@ The GPG Key fingerprint of the old key is: F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A ``` -Refer to the [supported platform documentation](../supported-platforms.md) to see -which platforms are supported in each release. +Refer to the [supported platform documentation](../supported-platforms.md) to see which platforms are supported in each release. ### Install @@ -120,17 +106,13 @@ $ systemctl status fluent-bit ... ``` -The default Fluent Bit configuration collect metrics of CPU usage and sends the -records to the standard output. You can see the outgoing data in your -`/var/log/messages` file. +The default Fluent Bit configuration collect metrics of CPU usage and sends the records to the standard output. You can see the outgoing data in your `/var/log/messages` file. 
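+
+If you also want the service to start automatically at boot (not covered above; standard systemd usage):
+
+```shell
+sudo systemctl enable fluent-bit
+```
+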
## FAQ ### Yum install fails with a "404 - Page not found" error for the package mirror -The `fluent-bit.repo` file for the latest installations of Fluent Bit uses a -`$releasever` variable to determine the correct version of the package to install to -your system: +The `fluent-bit.repo` file for the latest installations of Fluent Bit uses a `$releasever` variable to determine the correct version of the package to install to your system: ```text [fluent-bit] @@ -139,13 +121,9 @@ baseurl = https://packages.fluentbit.io/centos/$releasever/$basearch/ ... ``` -Depending on your Red Hat distribution version, this variable can return a value -other than the OS major release version (for example, RHEL7 Server distributions return -`7Server` instead of `7`). The Fluent Bit package URL uses the major OS -release version, so any other value here will cause a 404. +Depending on your Red Hat distribution version, this variable can return a value other than the OS major release version (for example, RHEL7 Server distributions return `7Server` instead of `7`). The Fluent Bit package URL uses the major OS release version, so any other value here will cause a 404. -To resolve this issue, replace the `$releasever` variable with your system's OS major -release version. For example: +To resolve this issue, replace the `$releasever` variable with your system's OS major release version. For example: ```text [fluent-bit] @@ -159,7 +137,6 @@ enabled=1 ### Yum install fails with incompatible dependencies using CentOS 9+ -CentOS 9 onwards will no longer be compatible with RHEL 9 as it may track more recent dependencies. -Alternative AlmaLinux and RockyLinux repositories are available. +CentOS 9 onwards will no longer be compatible with RHEL 9 as it may track more recent dependencies. Alternative AlmaLinux and RockyLinux repositories are available. See the guidance above. diff --git a/installation/linux/ubuntu.md b/installation/linux/ubuntu.md index 16b709364..e98719acb 100644 --- a/installation/linux/ubuntu.md +++ b/installation/linux/ubuntu.md @@ -1,28 +1,22 @@ # Ubuntu -Fluent Bit is distributed as the `fluent-bit` package and is available for long-term -support releases of Ubuntu. The latest officially supported version is Noble Numbat -(24.04). +Fluent Bit is distributed as the `fluent-bit` package and is available for long-term support releases of Ubuntu. The latest officially supported version is Noble Numbat (24.04). ## Single line install -An installation script is provided for most Linux targets. -This will always install the most recent version released. +An installation script is provided for most Linux targets. This will always install the most recent version released. ```bash curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh ``` -This is purely a convenience helper and should always be validated prior to use. -The recommended secure deployment approach is to use the following instructions. +This is purely a convenience helper and should always be validated prior to use. The recommended secure deployment approach is to use the following instructions. ## Server GPG key -The first step is to add the Fluent Bit server GPG key to your keyring to ensure -you can get the correct signed packages. +The first step is to add the Fluent Bit server GPG key to your keyring to ensure you can get the correct signed packages. -Follow the official -[Debian wiki guidance](https://wiki.debian.org/DebianRepository/UseThirdParty#OpenPGP_Key_distribution). 
+Follow the official [Debian wiki guidance](https://wiki.debian.org/DebianRepository/UseThirdParty#OpenPGP_Key_distribution). ```bash sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg' @@ -30,9 +24,7 @@ sudo sh -c 'curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > / ### Updated key from March 2022 -For releases 1.9.0 and 1.8.15 and later, the -[GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure -the new key is added. +For releases 1.9.0 and 1.8.15 and later, the [GPG key has been updated](https://packages.fluentbit.io/fluentbit.key). Ensure the new key is added. The GPG Key fingerprint of the new key is: @@ -41,8 +33,7 @@ C3C0 A285 34B9 293E AF51 FABD 9F9D DC08 3888 C1CD Fluentbit releases (Releases signing key) ``` -The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) -and might be required to install previous versions. +The previous key is [still available](https://packages.fluentbit.io/fluentbit-legacy.key) and might be required to install previous versions. The GPG Key fingerprint of the old key is: @@ -50,14 +41,11 @@ The GPG Key fingerprint of the old key is: F209 D876 2A60 CD49 E680 633B 4FF8 368B 6EA0 722A ``` -Refer to the [supported platform documentation](../supported-platforms.md) to see -which platforms are supported in each release. +Refer to the [supported platform documentation](../supported-platforms.md) to see which platforms are supported in each release. ## Update your sources lists -On Ubuntu, you need to add the Fluent Bit APT server entry to your sources lists. -Ensure `CODENAME` is set to your specific [Ubuntu release name](https://wiki.ubuntu.com/Releases). -For example, `focal` for Ubuntu 20.04. +On Ubuntu, you need to add the Fluent Bit APT server entry to your sources lists. Ensure `CODENAME` is set to your specific [Ubuntu release name](https://wiki.ubuntu.com/Releases). For example, `focal` for Ubuntu 20.04. ```bash echo "deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/ubuntu/${CODENAME} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/fluent-bit.list @@ -72,14 +60,12 @@ sudo apt-get update ``` {% hint style="info" %} -Fluent Bit recommends upgrading your system to avoid potential issues -with expired certificates: +Fluent Bit recommends upgrading your system to avoid potential issues with expired certificates: `sudo apt-get upgrade` -If you receive the error `Certificate verification failed`, check if the package -`ca-certificates` is properly installed: +If you receive the error `Certificate verification failed`, check if the package `ca-certificates` is properly installed: `sudo apt-get install ca-certificates` {% endhint %} @@ -114,6 +100,4 @@ systemctl status fluent-bit ... ``` -The default configuration of `fluent-bit` is collecting metrics of CPU usage and -sending the records to the standard output. You can see the outgoing data in your -`/var/log/syslog` file. +The default configuration of `fluent-bit` is collecting metrics of CPU usage and sending the records to the standard output. You can see the outgoing data in your `/var/log/syslog` file. diff --git a/installation/macos.md b/installation/macos.md index 76dd015ab..34e75a7e8 100644 --- a/installation/macos.md +++ b/installation/macos.md @@ -1,7 +1,6 @@ # macOS -Fluent Bit is compatible with the latest Apple macOS software for x86_64 and -Apple Silicon architectures. 
+Fluent Bit is compatible with the latest Apple macOS software for x86_64 and Apple Silicon architectures. ## Installation packages @@ -9,8 +8,7 @@ Installation packages can be found [here](https://packages.fluentbit.io/macos/). ## Requirements -You must have [Homebrew](https://brew.sh/) installed in your system. -If it isn't present, install it with the following command: +You must have [Homebrew](https://brew.sh/) installed in your system. If it isn't present, install it with the following command: ```bash copy /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" @@ -18,8 +16,7 @@ If it isn't present, install it with the following command: ## Installing from Homebrew -The Fluent Bit package on Homebrew isn't officially supported, but should work for -basic use cases and testing. It can be installed using: +The Fluent Bit package on Homebrew isn't officially supported, but should work for basic use cases and testing. It can be installed using: ```bash copy brew install fluent-bit @@ -44,15 +41,13 @@ brew install git cmake openssl bison libyaml cd fluent-bit ``` - If you want to use a specific version, checkout to the proper tag. - For example, to use `v1.8.13`, use the command: + If you want to use a specific version, checkout to the proper tag. For example, to use `v1.8.13`, use the command: ```bash copy git checkout v1.8.13 ``` -1. To prepare the build system, you must expose certain environment variables so - Fluent Bit CMake build rules can pick the right libraries: +1. To prepare the build system, you must expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries: ```bash copy export OPENSSL_ROOT_DIR=`brew --prefix openssl` @@ -65,16 +60,14 @@ brew install git cmake openssl bison libyaml cd build/ ``` -1. Build Fluent Bit. This example indicates to the build system the location - the final binaries and `config` files should be installed: +1. Build Fluent Bit. This example indicates to the build system the location the final binaries and `config` files should be installed: ```bash cmake -DFLB_DEV=on -DCMAKE_INSTALL_PREFIX=/opt/fluent-bit ../ make -j 16 ``` -1. Install Fluent Bit to the previously specified directory. - Writing to this directory requires root privileges. +1. Install Fluent Bit to the previously specified directory. Writing to this directory requires root privileges. ```bash sudo make install @@ -98,16 +91,14 @@ The binaries and configuration examples can be located at `/opt/fluent-bit/`. git checkout v1.9.2 ``` -1. To prepare the build system, you must expose certain environment variables so - Fluent Bit CMake build rules can pick the right libraries: +1. To prepare the build system, you must expose certain environment variables so Fluent Bit CMake build rules can pick the right libraries: ```bash copy export OPENSSL_ROOT_DIR=`brew --prefix openssl` export PATH=`brew --prefix bison`/bin:$PATH ``` -1. Create the specific macOS SDK target. For example, to specify macOS Big Sur - (11.3) SDK environment: +1. Create the specific macOS SDK target. 
 
  ```bash copy
  export MACOSX_DEPLOYMENT_TARGET=11.3
  ```
@@ -158,9 +149,7 @@ To make the access path easier to Fluent Bit binary, extend the `PATH` variable:
 export PATH=/opt/fluent-bit/bin:$PATH
 ```
 
-To test, try Fluent Bit by generating a test message using the
-[Dummy input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/dummy)
-which prints to the standard output interface every one second:
+To test, try Fluent Bit by generating a test message using the [Dummy input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/dummy), which prints to the standard output interface every second:
 
 ```bash copy
 fluent-bit -i dummy -o stdout -f 1
diff --git a/installation/requirements.md b/installation/requirements.md
index fe4399b27..f71f5365d 100644
--- a/installation/requirements.md
+++ b/installation/requirements.md
@@ -1,20 +1,14 @@
 # Requirements
 
-[Fluent Bit](http://fluentbit.io) has very low CPU and memory consumption. It's
-compatible with most x86-, x86_64-, arm32v7-, and arm64v8-based platforms.
+[Fluent Bit](http://fluentbit.io) has very low CPU and memory consumption. It's compatible with most x86-, x86_64-, arm32v7-, and arm64v8-based platforms.
 
 The build process requires the following components:
 
 - Compiler: GCC or clang
 - CMake
-- Flex and Bison: Required for
-  [Stream Processor](https://docs.fluentbit.io/manual/stream-processing/introduction)
-  or [Record Accessor](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/record-accessor)
+- Flex and Bison: Required for [Stream Processor](https://docs.fluentbit.io/manual/stream-processing/introduction) or [Record Accessor](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/record-accessor)
 - Libyaml development headers and libraries
 
-Core has no other dependencies. Some features depend on third-party components.
-For example, output plugins with special backend libraries like Kafka include those
-libraries in the main source code repository.
+Core has no other dependencies. Some features depend on third-party components. For example, output plugins with special backend libraries like Kafka include those libraries in the main source code repository.
 
-Fluent Bit is supported on Linux on IBM Z(s390x), but the WASM and LUA filter
-plugins aren't.
+Fluent Bit is supported on Linux on IBM Z (s390x), but the Wasm and Lua filter plugins aren't.
diff --git a/installation/sources/build-and-install.md b/installation/sources/build-and-install.md
index c30e2c115..a3cd98a2b 100644
--- a/installation/sources/build-and-install.md
+++ b/installation/sources/build-and-install.md
@@ -12,11 +12,9 @@
 
 ## Prepare environment
 
-If you already know how CMake works, you can skip this section and review the
-available [build options](#general-options).
+If you already know how CMake works, you can skip this section and review the available [build options](#general-options).
 
-The following steps explain how to build and install the project with the default
-options.
+The following steps explain how to build and install the project with the default options.
 
 1. Change to the `build/` directory inside the Fluent Bit sources:
 
  ```bash
  cd build/
  ```
 
-1. Let [CMake](http://cmake.org) configure the project specifying where the root
-   path is located:
+1.
Let [CMake](http://cmake.org) configure the project specifying where the root path is located: ```bash cmake ../ @@ -136,9 +133,7 @@ Fluent Bit provides configurable options to CMake that can be enabled or disable ### Input plugins -Input plugins gather information from a specific source type like network interfaces, -some built-in metrics, or through a specific input device. The following input plugins -are available: +Input plugins gather information from a specific source type like network interfaces, some built-in metrics, or through a specific input device. The following input plugins are available: | Option | Description | Default | | :--- | :--- | :--- | @@ -172,8 +167,7 @@ are available: ### Filter plugins -Filter plugins let you modify, enrich or drop records. The following table describes -the filters available on this version: +Filter plugins let you modify, enrich or drop records. The following table describes the filters available on this version: | Option | Description | Default | | :--- | :--- | :--- | @@ -196,8 +190,7 @@ the filters available on this version: ### Output plugins -Output plugins let you flush the information to some external interface, service, or -terminal. The following table describes the output plugins available: +Output plugins let you flush the information to some external interface, service, or terminal. The following table describes the output plugins available: | Option | Description | Default | | :--- | :--- | :--- | @@ -233,8 +226,7 @@ terminal. The following table describes the output plugins available: ### Processor plugins -Processor plugins handle the events within the processor pipelines to allow -modifying, enriching, or dropping events. +Processor plugins handle the events within the processor pipelines to allow modifying, enriching, or dropping events. The following table describes the processors available: diff --git a/installation/sources/build-with-static-configuration.md b/installation/sources/build-with-static-configuration.md index bbd831d96..f078ac139 100644 --- a/installation/sources/build-with-static-configuration.md +++ b/installation/sources/build-with-static-configuration.md @@ -1,31 +1,18 @@ # Build with static configuration -[Fluent Bit](https://fluentbit.io) in normal operation mode is configurable through -[text files](/installation/configuration/file.md) -or using specific arguments in the command line. Although this is the ideal deployment -case, there are scenarios where a more restricted configuration is required. Static -configuration mode restricts configuration ability. +[Fluent Bit](https://fluentbit.io) in normal operation mode is configurable through [text files](/installation/configuration/file.md) or using specific arguments in the command line. Although this is the ideal deployment case, there are scenarios where a more restricted configuration is required. Static configuration mode restricts configuration ability. -Static configuration mode includes a built-in configuration in the final binary of -Fluent Bit, disabling the usage of external files or flags at runtime. +Static configuration mode includes a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime. ## Get started ### Requirements -The following steps assume you are familiar with configuring Fluent Bit using text -files and you have experience building it from scratch as described in -[Build and Install](build-and-install.md). 
+The following steps assume you are familiar with configuring Fluent Bit using text files and you have experience building it from scratch as described in [Build and Install](build-and-install.md). #### Configuration Directory -In your file system, prepare a specific directory that will be used as an entry -point for the build system to lookup and parse the configuration files. This -directory must contain a minimum of one configuration file called -`fluent-bit.conf` containing the required -[SERVICE](/administration/configuring-fluent-bit/yaml/service-section.md), -[INPUT](/concepts/data-pipeline/input.md), and [OUTPUT](/concepts/data-pipeline/output.md) -sections. +In your file system, prepare a specific directory that will be used as an entry point for the build system to lookup and parse the configuration files. This directory must contain a minimum of one configuration file called `fluent-bit.conf` containing the required [SERVICE](/administration/configuring-fluent-bit/yaml/service-section.md), [INPUT](/concepts/data-pipeline/input.md), and [OUTPUT](/concepts/data-pipeline/output.md) sections. As an example, create a new `fluent-bit.yaml` file or `fluent-bit.conf` file with the corresponding content below: @@ -41,7 +28,7 @@ service: pipeline: inputs: - name: cpu - + outputs: - name: stdout match: '*' @@ -68,8 +55,7 @@ pipeline: {% endtab %} {% endtabs %} -This configuration calculates CPU metrics from the running system and prints them -to the standard output interface. +This configuration calculates CPU metrics from the running system and prints them to the standard output interface. #### Build with custom configuration @@ -101,4 +87,4 @@ Copyright (C) Treasure Data [2018/10/19 15:32:31] [ info] [engine] started (pid=15186) [0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}] -``` \ No newline at end of file +``` diff --git a/installation/sources/download-source-code.md b/installation/sources/download-source-code.md index 729f3144c..579a04aac 100644 --- a/installation/sources/download-source-code.md +++ b/installation/sources/download-source-code.md @@ -4,9 +4,7 @@ You can download the most recent stable or development source code. ## Stable -For production systems, it's strongly suggested that you get the latest stable release -of the source code in either zip file or tarball file format from GitHub using the -following link pattern: +For production systems, it's strongly suggested that you get the latest stable release of the source code in either zip file or tarball file format from GitHub using the following link pattern: ```text https://github.com/fluent/fluent-bit/archive/refs/tags/v<release version>.tar.gz @@ -17,15 +15,12 @@ For example, for version 1.8.12 the link is: [https://github.com/fluent/fluent-b ## Development -If you want to contribute to Fluent Bit, you should use the most recent code. You can -get the development version from the Git repository: +If you want to contribute to Fluent Bit, you should use the most recent code. You can get the development version from the Git repository: ```bash git clone https://github.com/fluent/fluent-bit ``` -The `master` branch is where the development of Fluent Bit happens. 
-Development version users should expect issues when compiling or at run time. +The `master` branch is where the development of Fluent Bit happens. Development version users should expect issues when compiling or at run time. -Fluent Bit users are encouraged to help test every development version to ensure a -stable release. +Fluent Bit users are encouraged to help test every development version to ensure a stable release. diff --git a/installation/supported-platforms.md b/installation/supported-platforms.md index c5d2b7081..260759f1f 100644 --- a/installation/supported-platforms.md +++ b/installation/supported-platforms.md @@ -23,15 +23,10 @@ Fluent Bit supports the following operating systems and architectures: | Windows | [Windows Server 2019](windows.md) | x86_64, x86 | | | [Windows 10 1903](windows.md) | x86_64, x86 | -From an architecture support perspective, Fluent Bit is fully functional on x86_64, -Arm64v8, and Arm32v7 based processors. +From an architecture support perspective, Fluent Bit is fully functional on x86_64, Arm64v8, and Arm32v7 based processors. -Fluent Bit can work also on macOS and Berkeley Software Distribution (BSD) systems, -but not all plugins will be available on all platforms. +Fluent Bit can work also on macOS and Berkeley Software Distribution (BSD) systems, but not all plugins will be available on all platforms. -Official support is based on community demand. Fluent Bit might run on older operating -systems, but must be built from source, or using custom packages from -[enterprise providers](https://fluentbit.io/enterprise). +Official support is based on community demand. Fluent Bit might run on older operating systems, but must be built from source, or using custom packages from [enterprise providers](https://fluentbit.io/enterprise). -Fluent Bit is supported for Linux on IBM Z (s390x) environments with some -restrictions, but only container images are provided for these targets officially. +Fluent Bit is supported for Linux on IBM Z (s390x) environments with some restrictions, but only container images are provided for these targets officially. diff --git a/installation/upgrade-notes.md b/installation/upgrade-notes.md index 82be690a7..d897759a2 100644 --- a/installation/upgrade-notes.md +++ b/installation/upgrade-notes.md @@ -1,4 +1,4 @@ -# Upgrade Notes +# Upgrade notes The following article covers the relevant compatibility changes for users upgrading from previous Fluent Bit versions. diff --git a/installation/windows.md b/installation/windows.md index b61f9473f..a01494ddb 100644 --- a/installation/windows.md +++ b/installation/windows.md @@ -1,12 +1,8 @@ # Windows -Fluent Bit is distributed as the `fluent-bit` package for Windows and as a -[Windows container on Docker Hub](docker.md). Fluent Bit provides two Windows -installers: a `ZIP` archive and an `EXE` installer. +Fluent Bit is distributed as the `fluent-bit` package for Windows and as a [Windows container on Docker Hub](docker.md). Fluent Bit provides two Windows installers: a `ZIP` archive and an `EXE` installer. -Not all plugins are supported on Windows. The -[CMake configuration](https://github.com/fluent/fluent-bit/blob/master/cmake/windows-setup.cmake) -shows the default set of supported plugins. +Not all plugins are supported on Windows. The [CMake configuration](https://github.com/fluent/fluent-bit/blob/master/cmake/windows-setup.cmake) shows the default set of supported plugins. 
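+
+After installation, if you're unsure whether a plugin you need was compiled into your Windows build, one quick check (a suggestion, not an official verification step) is to review the help output of the installed binary; recent versions list the available input, filter, and output plugins at the end of that output. The path below assumes the default EXE installer location:
+
+```shell
+& "C:\Program Files\fluent-bit\bin\fluent-bit.exe" --help
+```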
## Configuration @@ -81,13 +77,11 @@ The following configuration is an example: ## Migration to Fluent Bit -For version 1.9 and later, `td-agent-bit` is a deprecated package and was removed -after 1.9.9. The correct package name to use now is `fluent-bit`. +For version 1.9 and later, `td-agent-bit` is a deprecated package and was removed after 1.9.9. The correct package name to use now is `fluent-bit`. ## Installation packages -The latest stable version is 4.0.4. -Each version is available from the following download URLs. +The latest stable version is 4.0.4. Each version is available from the following download URLs. | INSTALLERS | SHA256 CHECKSUMS | |----------- | ---------------- | @@ -98,8 +92,7 @@ Each version is available from the following download URLs. | [fluent-bit-4.0.4-winarm64.exe](https://packages.fluentbit.io/windows/fluent-bit-4.0.4-winarm64.exe) | [c70efad14418d7c5fb361581260cb82a1475b8196b35c3554aa5497eafb7e3ef](https://packages.fluentbit.io/windows/fluent-bit-4.0.4-winarm64.exe.sha256) | | [fluent-bit-4.0.4-winarm64.zip](https://packages.fluentbit.io/windows/fluent-bit-4.0.4-winarm64.zip) | [d6819f25005b4e0148ac06802e299d16991f65155b164b7a25b0a0ae0a8b5228](https://packages.fluentbit.io/windows/fluent-bit-4.0.4-winarm64.zip.sha256) | -These are now using the Github Actions built versions. Legacy AppVeyor builds are -still available (AMD 32/64 only) at releases.fluentbit.io but are deprecated. +These are now using the Github Actions built versions. Legacy AppVeyor builds are still available (AMD 32/64 only) at releases.fluentbit.io but are deprecated. MSI installers are also available: @@ -115,11 +108,9 @@ Get-FileHash fluent-bit-4.0.4-win32.exe ## Installing from a ZIP archive -1. Download a ZIP archive. Choose the suitable installers for your 32-bit or 64-bit - environments. +1. Download a ZIP archive. Choose the suitable installers for your 32-bit or 64-bit environments. -1. Expand the ZIP archive. You can do this by clicking **Extract All** in Explorer - or `Expand-Archive` in PowerShell. +1. Expand the ZIP archive. You can do this by clicking **Extract All** in Explorer or `Expand-Archive` in PowerShell. ```shell Expand-Archive fluent-bit-4.0.4-win64.zip @@ -178,8 +169,7 @@ To halt the process, press `Control+C` in the terminal. 1. Download an EXE installer for the appropriate 32-bit or 64-bit build. 1. Double-click the EXE installer you've downloaded. The installation wizard starts. -1. Click **Next** and finish the installation. By default, Fluent Bit is installed - in `C:\Program Files\fluent-bit\`. +1. Click **Next** and finish the installation. By default, Fluent Bit is installed in `C:\Program Files\fluent-bit\`. ```shell & "C:\Program Files\fluent-bit\bin\fluent-bit.exe" -i dummy -o stdout @@ -187,10 +177,7 @@ To halt the process, press `Control+C` in the terminal. ### Installer options -The Windows installer is built by -[`CPack` using NSIS](https://cmake.org/cmake/help/latest/cpack_gen/nsis.html) -and supports the [default NSIS options](https://nsis.sourceforge.io/Docs/Chapter3.html#3.2.1) -for silent installation and install directory. +The Windows installer is built by [`CPack` using NSIS](https://cmake.org/cmake/help/latest/cpack_gen/nsis.html) and supports the [default NSIS options](https://nsis.sourceforge.io/Docs/Chapter3.html#3.2.1) for silent installation and install directory. 
-To silently install to `C:\fluent-bit` directory here is an example:
+To install silently to the `C:\fluent-bit` directory:
 
@@ -198,14 +185,11 @@
 /S /D=C:\fluent-bit
 ```
 
-The uninstaller also supports a silent uninstall using the same `/S` flag.
-This can be used for provisioning with automation like Ansible, Puppet, and so on.
+The uninstaller also supports a silent uninstall using the same `/S` flag. This can be used for provisioning with automation like Ansible, Puppet, and so on.
 
 ## Windows service support
 
-Windows services are equivalent to daemons in UNIX (long-running background
-processes).
-For v1.5.0 and later, Fluent Bit has native support for Windows services.
+Windows services are equivalent to daemons in UNIX (long-running background processes). For v1.5.0 and later, Fluent Bit has native support for Windows services.
 
 For example, you have the following installation layout:
 
@@ -221,8 +205,7 @@ C:\fluent-bit\
 └── fluent-bit.pdb
 ```
 
-To register Fluent Bit as a Windows service, execute the following command on
-at a command prompt. A single space is required after `binpath=`.
+To register Fluent Bit as a Windows service, execute the following command at a command prompt. A single space is required after `binpath=`.
 
 ```shell
 sc.exe create fluent-bit binpath= "\fluent-bit\bin\fluent-bit.exe -c \fluent-bit\conf\fluent-bit.conf"
@@ -298,13 +281,11 @@ Remove-Service fluent-bit
 
 ## Compile from Source
 
-If you need to create a custom executable, use the following procedure to
-compile Fluent Bit by yourself.
+If you need to create a custom executable, use the following procedure to compile Fluent Bit yourself.
 
 ### Preparation
 
-1. Install Microsoft Visual C++ to compile Fluent Bit. You can install the minimum
-   toolkit using the following command:
+1. Install Microsoft Visual C++ to compile Fluent Bit. You can install the minimum toolkit using the following command:
 
  ```shell
  wget -o vs.exe https://aka.ms/vs/16/release/vs_buildtools.exe
 start vs.exe
 
 1. Choose `C++ Build Tools` and `C++ CMake tools for Windows` and wait until the process finishes.
 
-1. Install flex and bison. One way to install them on Windows is to use
-   [winflexbison](https://github.com/lexxmark/winflexbison).
+1. Install flex and bison. One way to install them on Windows is to use [winflexbison](https://github.com/lexxmark/winflexbison).
 
  ```shell
  wget -o winflexbison.zip https://github.com/lexxmark/winflexbison/releases/download/v2.5.22/win_flex_bison-2.5.22.zip
  Expand-Archive winflexbison.zip -Destination C:\WinFlexBison
  cp -Path C:\WinFlexBison\win_flex.exe C:\WinFlexBison\flex.exe
  ```
 
-1. Add the path `C:\WinFlexBison` to your systems environment variable `Path`.
-   [Here's how to do that](https://www.architectryan.com/2018/03/17/add-to-the-path-on-windows-10/).
+1. Add the path `C:\WinFlexBison` to your system's `Path` environment variable. [Here's how to do that](https://www.architectryan.com/2018/03/17/add-to-the-path-on-windows-10/).
 
 1. Install OpenSSL binaries, at least the library files and headers.
 
@@ -337,12 +316,9 @@ start vs.exe
 
 ### Compilation
 
-1. Open the **Start menu** on Windows and type `command Prompt for VS`. From the result
-   list, select the one that corresponds to your target system ( `x86` or `x64`).
+1. Open the **Start menu** on Windows and type `Command Prompt for VS`. From the result list, select the one that corresponds to your target system (`x86` or `x64`).
 
-1. Verify the installed OpenSSL library files match the selected target. You can
-   examine the library files by using the `dumpbin` command with the `/headers`
-   option .
+1. Verify the installed OpenSSL library files match the selected target. You can examine the library files by using the `dumpbin` command with the `/headers` option.
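+
+   As a minimal sketch (the library file name and path are assumptions that depend on where and how OpenSSL was installed), you can filter the `dumpbin` output for the `machine` field, which reports the architecture each library was built for:
+
+   ```shell
+   rem Adjust the path to match your OpenSSL installation
+   dumpbin /headers C:\OpenSSL-Win64\lib\libssl.lib | findstr machine
+   ```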
 
 1. Clone the source code of Fluent Bit.
 
diff --git a/installation/yocto-embedded-linux.md b/installation/yocto-embedded-linux.md
index 9de20762c..8c242543e 100644
--- a/installation/yocto-embedded-linux.md
+++ b/installation/yocto-embedded-linux.md
@@ -1,20 +1,15 @@
 # Yocto embedded Linux
 
-[Fluent Bit](https://fluentbit.io) source code provides BitBake recipes to configure,
-build, and package the software for a Yocto-based image. Specific steps in the
-usage of these recipes in your Yocto environment (Poky) is out of the scope of this
-documentation.
+[Fluent Bit](https://fluentbit.io) source code provides BitBake recipes to configure, build, and package the software for a Yocto-based image. The specific steps for using these recipes in your Yocto environment (Poky) are out of the scope of this documentation.
 
-Fluent Bit distributes two main recipes, one for testing/dev purposes and
-one with the latest stable release.
+Fluent Bit distributes two main recipes: one for testing and development purposes, and one for the latest stable release.
 
 | Version | Recipe | Description |
 | :--- | :--- | :--- |
 | `devel` | [fluent-bit\_git.bb](https://github.com/fluent/fluent-bit/blob/master/fluent-bit_git.bb) | Build Fluent Bit from Git master. Use for development and testing purposes only. |
 | `v1.8.11` | [fluent-bit\_1.8.11.bb](https://github.com/fluent/fluent-bit/blob/v1.8.11/fluent-bit_1.8.11.bb) | Build latest stable version of Fluent Bit. |
 
-It's strongly recommended to always use the stable release of the Fluent Bit recipe
-and not the one from Git master for production deployments.
+It's strongly recommended to always use the stable release of the Fluent Bit recipe, and not the one from Git master, for production deployments.
 
 ## Fluent Bit and other architectures