diff --git a/administration/configuring-fluent-bit/classic-mode/upstream-servers.md b/administration/configuring-fluent-bit/classic-mode/upstream-servers.md
index ae5dc49cd..8626da2d0 100644
--- a/administration/configuring-fluent-bit/classic-mode/upstream-servers.md
+++ b/administration/configuring-fluent-bit/classic-mode/upstream-servers.md
@@ -1,6 +1,6 @@
# Upstream servers
-Fluent Bit [output plugins](../../pipeline/outputs/) aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for more of the use cases, but there are other scenarios where balancing across different nodes is required. The `Upstream` feature provides this capability.
+Fluent Bit [output plugins](../../../pipeline/outputs/) aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is enough for most use cases, but there are other scenarios where balancing across different nodes is required. The `Upstream` feature provides this capability.
An `Upstream` defines a set of nodes that will be targeted by an output plugin, by the nature of the implementation an output plugin must support the `Upstream` feature. The following plugin has `Upstream` support:
diff --git a/administration/configuring-fluent-bit/multiline-parsing.md b/administration/configuring-fluent-bit/multiline-parsing.md
index f4d8a4ba4..613353443 100644
--- a/administration/configuring-fluent-bit/multiline-parsing.md
+++ b/administration/configuring-fluent-bit/multiline-parsing.md
@@ -18,7 +18,7 @@ Fluent Bit exposes certain pre-configured parsers (built-in) to solve specific m
| Parser | Description |
| ------ | ----------- |
| `docker` | Process a log entry generated by a Docker container engine. This parser supports the concatenation of log entries split by Docker. |
-| `cri` | Process a log entry generated by CRI-O container engine. Like the `docker` parser, it supports concatenation of log entries |
+| `cri` | Process a log entry generated by CRI-O container engine. Like the `docker` parser, it supports concatenation of log entries. |
| `go` | Process log entries generated by a Go based language application and perform concatenation if multiline messages are detected. |
| `python` | Process log entries generated by a Python based language application and perform concatenation if multiline messages are detected. |
| `java` | Process log entries generated by a Google Cloud Java language application and perform concatenation if multiline messages are detected. |
@@ -35,7 +35,7 @@ To understand which multiline parser type is required for your use case you have
| -------- | ----------- | ------- |
| `name` | Specify a unique name for the multiline parser definition. A good practice is to prefix the name with the word `multiline_` to avoid confusion with normal parser definitions. | _none_ |
| `type` | Set the multiline mode. Fluent Bit supports the type `regex`.| _none_ |
-| `parser` | Name of a pre-defined parser that must be applied to the incoming content before applying the regular expression rule. If no parser is defined, it's assumed that's a raw text and not a structured message.
-When a parser is applied to a raw text, the regular expression is applied against a specific key of the structured message by using the `key_content` configuration property. | _none_ |
+| `parser` | Name of a pre-defined parser that must be applied to the incoming content before applying the regular expression rule. If no parser is defined, it's assumed that the content is raw text and not a structured message. When a parser is applied to raw text, the regular expression is applied against a specific key of the structured message by using the `key_content` configuration property. | _none_ |
| `key_content` | For an incoming structured message, specify the key that contains the data that should be processed by the regular expression and possibly concatenated. | _none_ |
| `flush_timeout` | Timeout in milliseconds to flush a non-terminated multiline buffer. | `5s` |
| `rule` | Configure a rule to match a multiline pattern. The rule has a [specific format](#rules-definition). Multiple rules can be defined. | _none_|
@@ -59,7 +59,7 @@ A rule specifies how to match a multiline pattern and perform the concatenation.
- regular expression pattern
- next state
-A rule might be defined as follows (comments added to simplify the definition) in corresponding YAML and classic configuration examples below:
+A rule might be defined as follows (comments added to simplify the definition), as shown in the following YAML and classic configuration examples:
{% tabs %}
{% tab title="parsers_multiline.yaml" %}
@@ -90,12 +90,12 @@ rule "cont" "/^\s+at.*/" "cont"
{% endtab %}
{% endtabs %}
-This example defines two rules. Each rule has its own state name, regex patterns, and the next state name. Every field that composes a rule must be inside double quotes.
+This example defines two rules. Each rule has its own state name, regular expression patterns, and the next state name. Every field that composes a rule must be inside double quotes.
-The first rule of a state name must be `start_state`. The regex pattern must match the first line of a multiline message, and a next state must be set to specify what the possible continuation lines look like.
+The first rule of a state name must be `start_state`. The regular expression pattern must match the first line of a multiline message, and a next state must be set to specify what the possible continuation lines look like.
{% hint style="info" %}
-To simplify the configuration of regular expressions, you can use the [Rubular]((https://rubular.com/r/NDuyKwlTGOvq2g)) web site. This link uses the regex described in the previous example, plus a log line that matches the pattern:
+To simplify the configuration of regular expressions, you can use the [Rubular](https://rubular.com/r/NDuyKwlTGOvq2g) website. This link uses the regular expression described in the previous example, plus a log line that matches the pattern:
{% endhint %}
#### Configuration example
diff --git a/administration/configuring-fluent-bit/unit-sizes.md b/administration/configuring-fluent-bit/unit-sizes.md
index 367f5d937..8f168cf37 100644
--- a/administration/configuring-fluent-bit/unit-sizes.md
+++ b/administration/configuring-fluent-bit/unit-sizes.md
@@ -1,12 +1,12 @@
# Unit sizes
-Some configuration directives in Fluent Bit refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](https://github.com/fluent/fluent-bit-docs/tree/5f926fd1330690179b8c1edab90d672699599ec7/administration/input/tail.md), [Forward Input](https://github.com/fluent/fluent-bit-docs/tree/5f926fd1330690179b8c1edab90d672699599ec7/administration/input/forward.md) or generic properties like [Mem\_Buf\_Limit](https://github.com/fluent/fluent-bit-docs/tree/5f926fd1330690179b8c1edab90d672699599ec7/administration/configuring-fluent-bit/backpressure.md) use unit sizes.
+Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes, such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail.md), [Forward Input](../../pipeline/inputs/forward.md), or generic properties like [`Mem_Buf_Limit`](../backpressure.md) use unit sizes.
-[Fluent Bit](http://fluentbit.io) v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:
+Fluent Bit v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:
| Suffix | Description | Example |
| :--- | :--- | :--- |
-| | When a suffix isn't specified, assume that the value given is a bytes representation. | Specifying a value of 32000 means 32000 bytes. |
-| `k`, `K`, `KB`, `kb` | Kilobyte: a unit of memory equal to 1,000 bytes. | 32k means 32000 bytes. |
-| `m`, `M`, `MB`, `mb` | Megabyte: a unit of memory equal to 1,000,000 bytes. | 1M means 1000000 bytes. |
-| `g`, `G`, `GB`, `gb` | Gigabyte: a unit of memory equal to 1,000,000,000 bytes. | 1G means 1000000000 bytes. |
+| | When a suffix isn't specified, assume that the value given is a bytes representation. | Specifying a value of `32000` means 32000 bytes. |
+| `k`, `K`, `KB`, `kb` | Kilobyte: a unit of memory equal to 1,000 bytes. | `32k` means 32000 bytes. |
+| `m`, `M`, `MB`, `mb` | Megabyte: a unit of memory equal to 1,000,000 bytes. | `1M` means 1000000 bytes. |
+| `g`, `G`, `GB`, `gb` | Gigabyte: a unit of memory equal to 1,000,000,000 bytes. | `1G` means 1000000000 bytes. |
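+
+These suffixes can be used anywhere Fluent Bit expects a size value. The following sketch applies them to buffer-related properties of a `tail` input; the property names come from the Tail input documentation, while the file path is only an illustrative assumption:
+
+```yaml
+pipeline:
+  inputs:
+    - name: tail
+      path: /var/log/app.log      # hypothetical log file
+      buffer_chunk_size: 32k      # 32,000 bytes
+      buffer_max_size: 2M         # 2,000,000 bytes
+      mem_buf_limit: 64MB         # 64,000,000 bytes
+```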
diff --git a/administration/monitoring.md b/administration/monitoring.md
index dde8b59e9..bee5d0f83 100644
--- a/administration/monitoring.md
+++ b/administration/monitoring.md
@@ -62,7 +62,7 @@ pipeline:
{% endtab %}
{% endtabs %}
-Start Fluent bit with the corresponding configuration chosen above:
+Start Fluent Bit with the corresponding configuration chosen previously:
```shell
# For YAML configuration.
@@ -381,15 +381,15 @@ When querying the related metrics, the aliases are returned instead of the plugi
You can create Grafana dashboards and alerts using Fluent Bit's exposed Prometheus style metrics.
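+
+For these metrics to reach Grafana, Prometheus needs to scrape the Fluent Bit HTTP server. A minimal scrape job might look like the following sketch, which assumes the built-in HTTP server is enabled and listening on the default port `2020`:
+
+```yaml
+# Prometheus configuration fragment: scrape Fluent Bit's Prometheus-format metrics.
+scrape_configs:
+  - job_name: fluent-bit
+    metrics_path: /api/v1/metrics/prometheus
+    static_configs:
+      - targets: ['localhost:2020']
+```
+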
-The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json) is heavily inspired by [Banzai Cloud](https://banzaicloud.com)'s [logging operator dashboard](https://grafana.com/grafana/dashboards/7752) with a few key differences, such as the use of the `instance` label, stacked graphs, and a focus on Fluent Bit metrics. See [this blog post](https://www.robustperception.io/controlling-the-instance-label) for more information.
+The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json) is heavily inspired by [Banzai Cloud](https://github.com/banzaicloud)'s [logging operator dashboard](https://grafana.com/grafana/dashboards/7752) with a few key differences, such as the use of the `instance` label, stacked graphs, and a focus on Fluent Bit metrics. See [this blog post](https://www.robustperception.io/controlling-the-instance-label) for more information.

### Alerts
-Sample alerts are available [here](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/alerts.yaml).
+Sample alerts [are available](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/alerts.yaml).
-## Health Check for Fluent Bit
+## Health check for Fluent Bit
Fluent bit supports the following configurations to set up the health check.
diff --git a/administration/troubleshooting.md b/administration/troubleshooting.md
index 404deadb1..15600a5a7 100644
--- a/administration/troubleshooting.md
+++ b/administration/troubleshooting.md
@@ -2,19 +2,19 @@
-- [Tap: generate events or records](troubleshooting.md#tap)
-- [Dump internals signal](troubleshooting#dump-internals-signal)
+- [Tap: generate events or records](#tap)
+- [Dump Internals and signal](#dump-internals-and-signal)
## Tap
Tap can be used to generate events or records detailing what messages pass through Fluent Bit, at what time and what filters affect them.
-### Basic Tap example
+### Tap example
Ensure that the container image supports Fluent Bit Tap (available in Fluent Bit 2.0+):
```shell
-$ docker run --rm -ti fluent/fluent-bit:latest --help | grep trace
+docker run --rm -ti fluent/fluent-bit:latest --help | grep trace
-Z, --enable-chunk-traceenable chunk tracing, it can be activated either through the http api or the command line
--trace-input input to start tracing on startup.
--trace-output output to use for tracing on startup.
@@ -27,7 +27,7 @@ If the `--enable-chunk-trace` option is present, your Fluent Bit version support
You can start Fluent Bit with tracing activated from the beginning by using the `trace-input` and `trace-output` properties:
```bash
-$ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
+fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
Fluent Bit v2.1.8
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
@@ -338,7 +338,7 @@ This filter record is an example to explain the details of a Tap record:
- `plugin_alias`: If an alias is set this field will contain the alias set for a plugin.
- `records`: An array of all the records being sent. Fluent Bit handles records in chunks of multiple records and chunks are indivisible, the same is done in the Tap output. Each record consists of its timestamp followed by the actual data which is a composite type of keys and values.
-## Dump Internals / Signal
+## Dump Internals and signal
When the service is running, you can export [metrics](monitoring.md) to see the overall status of the data flow of the service. There are other use cases where you might need to know the current status of the service internals, like the current status of the internal buffers. Dump Internals can help provide this information.
@@ -436,7 +436,7 @@ Depending of the buffering strategy and limits imposed by configuration, some Ch
| | `size` | Amount of bytes used by the Chunk. |
| | `size err` | Number of Chunks in an error state where its size couldn't be retrieved. |
-### Storage Layer
+### Storage layer
Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The `Storage Layer` entry contains a total summary of Chunks registered by Fluent Bit:
diff --git a/concepts/data-pipeline/parser.md b/concepts/data-pipeline/parser.md
index 4e5cdbc03..6465c848f 100644
--- a/concepts/data-pipeline/parser.md
+++ b/concepts/data-pipeline/parser.md
@@ -26,7 +26,7 @@ The parser converts unstructured data to structured data. As an example, conside
192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
```
-This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](pipeline/parsers/regular-expression) is used, the log entry could be converted to:
+This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression) is used, the log entry could be converted to:
```javascript
{
diff --git a/installation/docker.md b/installation/docker.md
index 4f54726a5..77b296240 100644
--- a/installation/docker.md
+++ b/installation/docker.md
@@ -16,27 +16,31 @@ Use the following command to start Fluent Bit while using a configuration file:
{% tabs %}
{% tab title="fluent-bit.conf" %}
+
```shell
docker run -ti -v ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf \
cr.fluentbit.io/fluent/fluent-bit
```
+
{% endtab %}
{% tab title="fluent-bit.yaml" %}
+
```shell
docker run -ti -v ./fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml \
cr.fluentbit.io/fluent/fluent-bit \
-c /fluent-bit/etc/fluent-bit.yaml
```
+
{% endtab %}
{% endtabs %}
-
## Tags and versions
The following table describes the Linux container tags that are available on Docker Hub [fluent/fluent-bit](https://hub.docker.com/r/fluent/fluent-bit/) repository:
-| Tag(s) | Manifest Architectures | Description |
+| Tags | Manifest architectures | Description |
| ------------ | ------------------------- | -------------------------------------------------------------- |
| 4.0.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.4 | x86_64, arm64v8, arm32v7, s390x | Release [v4.0.4](https://fluentbit.io/announcements/v4.0.4/) |
@@ -122,8 +126,6 @@ The following table describes the Linux container tags that are available on Doc
| 2.1.6 | x86_64, arm64v8, arm32v7 | Release [v2.1.6](https://fluentbit.io/announcements/v2.1.6/) |
| 2.1.5 | x86_64, arm64v8, arm32v7 | Release [v2.1.5](https://fluentbit.io/announcements/v2.1.5/) |
| 2.1.5-debug | x86_64, arm64v8, arm32v7 | Debug images |
-| 2.1.4 | x86_64, arm64v8, arm32v7 | Release [v2.1.4](https://fluentbit.io/announcements/v2.1.4/) |
-| 2.1.4-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.3 | x86_64, arm64v8, arm32v7 | Release [v2.1.3](https://fluentbit.io/announcements/v2.1.3/) |
| 2.1.3-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.2 | x86_64, arm64v8, arm32v7 | Release [v2.1.2](https://fluentbit.io/announcements/v2.1.2/) |
@@ -189,7 +191,7 @@ From a deployment perspective, there's no need to specify an architecture. The c
## Verify signed container images
-Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using `cosign` ([install guide](https://docs.sigstore.dev/cosign/installation/)):
+Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using `cosign` ([install guide](https://docs.sigstore.dev/quickstart/quickstart-cosign/)):
```shell
$ cosign verify --key "https://packages.fluentbit.io/fluentbit-cosign.pub" fluent/fluent-bit:2.0.6
diff --git a/installation/kubernetes.md b/installation/kubernetes.md
index 90f70570b..7fab6573c 100644
--- a/installation/kubernetes.md
+++ b/installation/kubernetes.md
@@ -4,8 +4,6 @@ description: Kubernetes Production Grade Log Processor
# Kubernetes
-.png>)
-
[Fluent Bit](http://fluentbit.io) is a lightweight and extensible log processor with full support for Kubernetes:
- Process Kubernetes containers logs from the file system or Systemd/Journald.
@@ -16,17 +14,17 @@ description: Kubernetes Production Grade Log Processor
Before getting started it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes. The Fluent Bit log agent tool needs to run on every node to collect logs from every pod. Fluent Bit is deployed as a DaemonSet, which is a pod that runs on every node of the cluster.
-When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](../pipeline/filters/kubernetes) filter plugin.
+When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) filter plugin.
The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the `pod_id`, `labels`, and `annotations`. Other fields, such as `pod_name`, `container_id`, and `container_name`, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect.
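+
+When configuring Fluent Bit manually (outside the Helm chart described in the following section), the pairing of the Tail input and the Kubernetes filter usually looks like this sketch. The log path and the `kube.*` tag are illustrative assumptions and depend on your container runtime and deployment:
+
+```yaml
+pipeline:
+  inputs:
+    - name: tail
+      path: /var/log/containers/*.log   # assumed container log location
+      multiline.parser: docker, cri
+      tag: kube.*
+
+  filters:
+    # Enrich every record with pod metadata from the Kubernetes API server.
+    - name: kubernetes
+      match: 'kube.*'
+      merge_log: on
+```
+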
## Installation
-[Fluent Bit](http://fluentbit.io) should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
+Fluent Bit should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
-The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm Chart at .
+The recommended way to deploy Fluent Bit for Kubernetes is with the official [Helm Chart](https://github.com/fluent/helm-charts).
-### Note for OpenShift
+### OpenShift
If you are using Red Hat OpenShift you must set up Security Context Constraints (SCC) using the relevant option in the helm chart.
@@ -46,7 +44,7 @@ To validate that the repository was added, run `helm search repo fluent` to ensu
helm upgrade --install fluent-bit fluent/fluent-bit
```
-### Default Values
+### Default values
The default chart values include configuration to read container logs. With Docker parsing, Systemd logs apply Kubernetes metadata enrichment, and output to an Elasticsearch cluster. You can modify the [included values file](https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml) to specify additional outputs, health checks, monitoring endpoints, or other configuration options.
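+
+For example, a hypothetical override file could replace the default output with `stdout` for local testing. The `config.outputs` block follows the chart's convention of embedding classic-mode configuration as a string; the values shown are illustrative only:
+
+```yaml
+# values-override.yaml (hypothetical), applied with:
+#   helm upgrade --install fluent-bit fluent/fluent-bit -f values-override.yaml
+config:
+  outputs: |
+    [OUTPUT]
+        Name   stdout
+        Match  *
+```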
diff --git a/installation/linux/raspbian-raspberry-pi.md b/installation/linux/raspbian-raspberry-pi.md
index 773c05947..bc474d562 100644
--- a/installation/linux/raspbian-raspberry-pi.md
+++ b/installation/linux/raspbian-raspberry-pi.md
@@ -1,10 +1,10 @@
-# Raspbian and Raspberry Pi
+# Raspberry Pi
-Fluent Bit is distributed as the `fluent-bit` package and is available for the Raspberry, specifically for [Raspbian](http://raspbian.org) distribution. The following versions are supported:
+Fluent Bit is distributed as the `fluent-bit` package and is available for [Raspberry Pi](https://www.raspberrypi.com/software/operating-systems/). The following versions are supported:
-* Raspbian Bookworm (12)
-* Raspbian Bullseye (11)
-* Raspbian Buster (10)
+- Raspbian Bookworm (12)
+- Raspbian Bullseye (11)
+- Raspbian Buster (10)
## Server GPG key
diff --git a/installation/sources/build-with-static-configuration.md b/installation/sources/build-with-static-configuration.md
index f078ac139..fd3bd4894 100644
--- a/installation/sources/build-with-static-configuration.md
+++ b/installation/sources/build-with-static-configuration.md
@@ -1,6 +1,6 @@
# Build with static configuration
-[Fluent Bit](https://fluentbit.io) in normal operation mode is configurable through [text files](/installation/configuration/file.md) or using specific arguments in the command line. Although this is the ideal deployment case, there are scenarios where a more restricted configuration is required. Static configuration mode restricts configuration ability.
+[Fluent Bit](https://fluentbit.io) in normal operation mode is configurable through text files or using specific arguments in the command line. Although this is the ideal deployment case, there are scenarios where a more restricted configuration is required. Static configuration mode restricts configuration ability.
Static configuration mode includes a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.
@@ -10,11 +10,11 @@ Static configuration mode includes a built-in configuration in the final binary
The following steps assume you are familiar with configuring Fluent Bit using text files and you have experience building it from scratch as described in [Build and Install](build-and-install.md).
-#### Configuration Directory
+#### Configuration directory
In your file system, prepare a specific directory that will be used as an entry point for the build system to lookup and parse the configuration files. This directory must contain a minimum of one configuration file called `fluent-bit.conf` containing the required [SERVICE](/administration/configuring-fluent-bit/yaml/service-section.md), [INPUT](/concepts/data-pipeline/input.md), and [OUTPUT](/concepts/data-pipeline/output.md) sections.
-As an example, create a new `fluent-bit.yaml` file or `fluent-bit.conf` file with the corresponding content below:
+As an example, create a new `fluent-bit.yaml` file or `fluent-bit.conf` file:
{% tabs %}
{% tab title="fluent-bit.yaml" %}
diff --git a/local-testing/validating-your-data-and-structure.md b/local-testing/validating-your-data-and-structure.md
index e9c9b4921..f129cf477 100644
--- a/local-testing/validating-your-data-and-structure.md
+++ b/local-testing/validating-your-data-and-structure.md
@@ -4,7 +4,7 @@ Fluent Bit supports multiple sources and formats. In addition, it provides filte
Fluent Bit users are encouraged to integrate data validation in their continuous integration (CI) systems.
-In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect.md) filter, which you can use to validate keys and values from your records and take action when an exception is found.
+In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect) filter, which you can use to validate keys and values from your records and take action when an exception is found.
A simplified view of the data processing pipeline is as follows:
@@ -20,8 +20,8 @@ IS --> Fil --> OD
Consider the following pipeline, which uses a JSON file as its data source and has two filters:
-- [Grep](../pipeline/filters/grep.md) to exclude certain records.
-- [Record Modifier](../pipeline/filters/record-modifier.md) to alter records' content by adding and removing specific keys.
+- [Grep](../pipeline/filters/grep) to exclude certain records.
+- [Record Modifier](../pipeline/filters/record-modifier) to alter records' content by adding and removing specific keys.
```mermaid
flowchart LR
diff --git a/pipeline/filters/checklist.md b/pipeline/filters/checklist.md
index 3e3a20aa4..0422115b3 100644
--- a/pipeline/filters/checklist.md
+++ b/pipeline/filters/checklist.md
@@ -1,6 +1,6 @@
# CheckList
-The CheckList plugin (introduced in version 1.8.4) looks up a value in a specified list to see if it exists. The plugin then allows the addition of a record to indicate if the value was found.
+The _CheckList_ plugin (introduced in version 1.8.4) looks up a value in a specified list to see if it exists. The plugin then allows the addition of a record to indicate if the value was found.
## Configuration parameters
diff --git a/pipeline/filters/grep.md b/pipeline/filters/grep.md
index 9942a7ff5..6db4b106f 100644
--- a/pipeline/filters/grep.md
+++ b/pipeline/filters/grep.md
@@ -16,7 +16,7 @@ The plugin supports the following configuration parameters:
| `Exclude` | `KEY REGEX` | Exclude records where the content of `KEY` matches the regular expression. |
| `Logical_Op` | `Operation` | Specify a logical operator: `AND`, `OR` or `legacy` (default). In `legacy` mode the behaviour is either `AND` or `OR` depending on whether the `grep` is including (uses `AND`) or excluding (uses OR). Available from 2.1 or higher. |
-### Record Accessor Enabled
+### Record Accessor enabled
Enable the [Record Accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) feature to specify the `KEY`. Use the record accessor to match values against nested values.
@@ -53,18 +53,18 @@ The following command loads the [tail](../../pipeline/inputs/tail) plugin and re
```yaml
service:
parsers_file: /path/to/parsers.conf
-
+
pipeline:
inputs:
- name: tail
path: lines.txt
parser: json
-
+
filters:
- name: grep
match: '*'
regex: log aa
-
+
outputs:
- name: stdout
match: '*'
@@ -95,7 +95,7 @@ pipeline:
{% endtab %}
{% endtabs %}
-The filter allows to use multiple rules which are applied in order, you can have many `Regex` and `Exclude` entries as required ([more information](#multiple-conditions).
+The filter lets you use multiple rules, which are applied in order. You can have as many `Regex` and `Exclude` entries as required ([more information](#multiple-conditions)).
### Nested fields example
@@ -126,8 +126,8 @@ For example, to exclude records that match the nested field `kubernetes.labels.a
{% tab title="fluent-bit.yaml" %}
```yaml
-pipeline:
-
+pipeline:
+
filters:
- name: grep
match: '*'
@@ -161,7 +161,7 @@ The following example checks for a specific valid value for the key:
```yaml
pipeline:
-
+
filters:
# Use Grep to verify the contents of the iot_timestamp value.
# If the iot_timestamp key does not exist, this will fail
@@ -196,10 +196,10 @@ or is missing or empty, then it will be excluded.
If you want to set multiple `Regex` or `Exclude`, you must use the `legacy` mode. In this case, the `Exclude` must be first and you can have only one `Regex`.
If `Exclude` match, the string is blocked. You can have multiple `Exclude` entry.
-After, if there is no `Regex`, the ligne is send to the output.
-if there is a `Regex` and it match, the ligne is send to the output, else, it's blocked.
+After that, if there is no `Regex`, the line is sent to the output.
+If there is a `Regex` and it matches, the line is sent to the output. Otherwise, it's blocked.
-If you want to set multiple `Regex` or `Exclude`, you can use `Logical_Op` property to use logical conjuction or disjunction.
+If you want to set multiple `Regex` or `Exclude` entries, you can use the `Logical_Op` property to apply logical conjunction or disjunction.
If `Logical_Op` is set, setting both `Regex` and `Exclude` results in an error.
@@ -212,7 +212,7 @@ pipeline:
- name: dummy
dummy: '{"endpoint":"localhost", "value":"something"}'
tag: dummy
-
+
filters:
- name: grep
match: '*'
@@ -277,4 +277,4 @@ ______ _ _ ______ _ _ ___ _____
[2025/07/03 16:15:34] [ info] [sp] stream processor started
[0] dummy: [1674348410.558341857, {"endpoint"=>"localhost", "value"=>"something"}]
[0] dummy: [1674348411.546425499, {"endpoint"=>"localhost", "value"=>"something"}]
-```
\ No newline at end of file
+```
diff --git a/pipeline/filters/kubernetes.md b/pipeline/filters/kubernetes.md
index 845b0a994..e7b3a209e 100644
--- a/pipeline/filters/kubernetes.md
+++ b/pipeline/filters/kubernetes.md
@@ -26,7 +26,7 @@ The plugin supports the following configuration parameters:
| Key | Description | Default |
| :--- | :--- | :--- |
| `Buffer_Size` | Set the buffer size for HTTP client when reading responses from Kubernetes API server. The value must conform to the [unit size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. A value of `0` results in no limit, and the buffer will expand as-needed. If pod specifications exceed the buffer limit, the API response is discarded when retrieving metadata, and some Kubernetes metadata will fail to be injected to the logs. | `32k` |
-| `Kube_URL` | API Server end-point | [https://kubernetes.default.svc:443](https://kubernetes.default.svc:443) |
+| `Kube_URL` | API Server end-point | `https://kubernetes.default.svc:443` |
| `Kube_CA_File` | CA certificate file | `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` |
| `Kube_CA_Path` | Absolute path to scan for certificate files | _none_ |
| `Kube_Token_File` | Token file | `/var/run/secrets/kubernetes.io/serviceaccount/token` |
@@ -51,7 +51,7 @@ The plugin supports the following configuration parameters:
| `DNS_Retries` | Number of DNS lookup retries until the network starts working. | `6` |
| `DNS_Wait_Time` | DNS lookup interval between network status checks. | `30` |
| `Use_Kubelet` | Optional feature flag to get metadata information from Kubelet instead of calling Kube Server API to enhance the log. This could mitigate the [Kube API heavy traffic issue for large cluster](kubernetes.md#optional-feature-using-kubelet-to-get-metadata). If used when any [Kubernetes Namespace Meta](#kubernetes-namespace-meta) fields are enabled, Kubelet will be used to fetch pod data, but namespace meta will still be fetched using the `Kube_URL` settings.| `Off` |
-| `Use_Tag_For_Meta` | When enabled, Kubernetes metadata (for example, `pod_name`, `container_name`, and `namespace_name`) will be extracted from the tag itself. Connection to Kubernetes API Server won't get established and API calls for metadata won't be made. See [Workflow of Tail + Kubernetes Filter](#workflow-of-tail--kubernetes-filter) and [Custom tag For enhanced filtering](#custom-tag-for-enhanced-filtering) to better understand metadata extraction from tags. | `Off` |
+| `Use_Tag_For_Meta` | When enabled, Kubernetes metadata (for example, `pod_name`, `container_name`, and `namespace_name`) will be extracted from the tag itself. Connection to Kubernetes API Server won't get established and API calls for metadata won't be made. See [Workflow of Tail + Kubernetes Filter](#workflow-of-tail-and-kubernetes-filter) and [Custom tag For enhanced filtering](#custom-tags-for-enhanced-filtering) to better understand metadata extraction from tags. | `Off` |
| `Kubelet_Port` | Kubelet port to use for HTTP requests. This only works when `Use_Kubelet` is set to `On`. | `10250` |
| `Kubelet_Host` | Kubelet host to use for HTTP requests. This only works when `Use_Kubelet` is set to `On`. | `127.0.0.1` |
| `Kube_Meta_Cache_TTL` | Configurable time-to-live for Kubernetes cached pod metadata. By default, it's set to `0` which means `TTL` for cache entries is disabled and cache entries are evicted at random when capacity is reached. To enable this option, set the number to a time interval. For example, set the value to `60` or `60s` and cache entries which have been created more than 60 seconds ago will be evicted. | `0` |
@@ -273,9 +273,7 @@ apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1
Rather than modify the original tag, the transformation creates a new representation for the filter to perform metadata lookup.
-Suggested change
-The new value is used by the filter to lookup the pod name and namespace, for that purpose it uses an internal regular expression:
-The new value is used by the filter to lookup the pod name and namespace. For that purpose, it uses an internal regular expression:
+The new value is used by the filter to look up the pod name and namespace. For that purpose, it uses an internal regular expression:
```text
(?[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?[^_]+)_(?.+)-(?[a-z0-9]{64})\.log$
@@ -304,7 +302,7 @@ parsers:
- name: custom-tag
format: regex
regex: '^(?[^_]+)\.(?[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)\.(?.+)\.(?[a-z0-9]{64})'
-
+
pipeline:
inputs:
- name: tail
@@ -538,7 +536,7 @@ If you are in debug mode, you can see more:
## Troubleshooting
-Learn how to solve them to ensure that the Fluent Bit Kubernetes filter is operating properly. The following section describes specific log messages you might receive.
+The following sections describe common problems, the log messages you might receive, and how to solve them to ensure that the Fluent Bit Kubernetes filter is operating properly:
- You can't see metadata appended to your pods or other Kubernetes objects
@@ -590,4 +588,4 @@ Learn how to solve them to ensure that the Fluent Bit Kubernetes filter is opera
## Credit
-The Kubernetes Filter plugin is fully inspired by the [Fluentd Kubernetes Metadata Filter](https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter) written by [Jimmi Dyson](https://github.com/jimmidyson).
\ No newline at end of file
+The Kubernetes Filter plugin is fully inspired by the [Fluentd Kubernetes Metadata Filter](https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter) written by [Jimmi Dyson](https://github.com/jimmidyson).
diff --git a/pipeline/filters/log_to_metrics.md b/pipeline/filters/log_to_metrics.md
index 7a4d0c41e..236c1c464 100644
--- a/pipeline/filters/log_to_metrics.md
+++ b/pipeline/filters/log_to_metrics.md
@@ -6,13 +6,13 @@ description: Generate metrics from logs

-The _log to metrics_ filter lets you generate log-derived metrics. It supports modes to count records, provide a guage for field values, or create a histogram. You can also match or exclude specific records based on regular expression patterns for values or nested values.
+The _Log to metrics_ filter lets you generate log-derived metrics. It supports modes to count records, provide a gauge for field values, or create a histogram. You can also match or exclude specific records based on regular expression patterns for values or nested values.
-This filter does not actually act as a record filter and therefore does not change or drop records. All records will pass through this filter untouched, and any generated metrics will be emitted into a separate metric pipeline.
+This filter doesn't actually act as a record filter and therefore doesn't change or drop records. All records will pass through this filter untouched, and any generated metrics will be emitted into a separate metric pipeline.
{% hint style="warning" %}
-This filter is an experimental feature and is not recommended for production use. Configuration parameters and other capabilities are subject to change without notice.
+This filter is an experimental feature and isn't recommended for production use. Configuration parameters and other capabilities are subject to change without notice.
{% endhint %}
@@ -28,9 +28,9 @@ The plugin supports the following configuration parameters:
| `metric_description` | Required. Sets a description for the metric. | |
| `bucket` | Required for mode `histogram`. Defines a bucket for histograms. | For example, `0.75` |
| `add_label` | Adds a custom label `NAME` and set the value to the value of `KEY`. | |
-| `label_field` | Includes a record field as label dimension in the metric. | Name of record key. Supports [record accessor](../administration/configuring-fluent-bit/classic-mode/record-accessor.md) notation for nested fields. |
-| `value_field` | Required for modes `gauge` and `histogram`. Specifies the record field that holds a numerical value. | Name of record key. Supports [record accessor](../administration/configuring-fluent-bit/classic-mode/record-accessor.md) notation for nested fields. |
-| `kubernetes_mode` | If enabled, adds `pod_id`, `pod_name`, `namespace_name`, `docker_id` and `container_name` to the metric as labels. This option is intended to be used in combination with the [kubernetes](../pipeline/filters/kubernetes.md) filter plugin, which fills those fields. | |
+| `label_field` | Includes a record field as label dimension in the metric. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor) notation for nested fields. |
+| `value_field` | Required for modes `gauge` and `histogram`. Specifies the record field that holds a numerical value. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor) notation for nested fields. |
+| `kubernetes_mode` | If enabled, adds `pod_id`, `pod_name`, `namespace_name`, `docker_id` and `container_name` to the metric as labels. This option is intended to be used in combination with the [Kubernetes](./kubernetes) filter plugin, which fills those fields. | |
| `Regex` | Includes records in which the content of `KEY` matches the regular expression. | `KEY REGEX` |
| `Exclude` | Excludes records in which the content of `KEY` matches the regular expression. | `KEY REGEX` |
| `Flush_Interval_Sec` | The interval for metrics emission, in seconds. If `Flush_Interval_Sec` and `Flush_Interval_Nsec` are either both unset or both set to `0`, the filter emits metrics immediately after each filter match. Otherwise, if either parameter is set to a non-zero value, the filter emits metrics at the specified interval. Longer intervals help lower resource consumption in high-load situations. Default value: `0`. | |
@@ -40,7 +40,7 @@ The plugin supports the following configuration parameters:
{% hint style="info" %}
-All examples below assume Prometheus is running on the local machine as shown in the Fluent Bit configurations.
+The following examples assume Prometheus is running on the local machine as shown in the Fluent Bit configurations.
{% endhint %}
@@ -263,7 +263,7 @@ log_metric_gauge_current_duration{namespace_name="default",pod_name="pod1",conta
In the resulting output, only one line is printed. Records from the first input plugin are ignored because they don't match the regular expression.
-This filter also lets you use multiple rules, which are applied in order. You can have as many `regex` and `exclude` entries as required (see [Grep](../pipeline/filters/grep.md) filter plugin).
+This filter also lets you use multiple rules, which are applied in order. You can have as many `regex` and `exclude` entries as required (see [Grep](./grep.md) filter plugin).
If you execute the example curl command multiple times, the example metric value remains at `60` because the messages generated by the Dummy plugin don't change. In a real-world scenario, the values would change and return to the last processed value.
diff --git a/pipeline/filters/multiline-stacktrace.md b/pipeline/filters/multiline-stacktrace.md
index 1f937f41a..a3017fdbc 100644
--- a/pipeline/filters/multiline-stacktrace.md
+++ b/pipeline/filters/multiline-stacktrace.md
@@ -1,6 +1,5 @@
---
-description: Concatenate multiline or stack trace log messages. Available on Fluent Bit >=
- v1.8.2.
+description: Concatenate multiline or stack trace log messages.
---
# Multiline
@@ -23,7 +22,7 @@ When using this filter:
This filter only performs buffering that persists across different Chunks when `Buffer` is enabled. Otherwise, the filter processes one chunk at a time and isn't suitable for most inputs which might send multiline messages in separate chunks.
-When buffering is enabled, the filter doesn't immediately emit messages it receives. It uses the `in_emitter` plugin, similar to the [Rewrite Tag filter](pipeline/filters/rewrite-tag.md), and emits messages once they're fully concatenated, or a timeout is reached.
+When buffering is enabled, the filter doesn't immediately emit messages it receives. It uses the `in_emitter` plugin, similar to the [Rewrite Tag filter](./rewrite-tag.md), and emits messages once they're fully concatenated, or a timeout is reached.
{% endhint %}
@@ -48,7 +47,7 @@ The plugin supports the following configuration parameters:
| `flush_ms` | Flush time for pending multiline records. Default: `2000`. |
| `emitter_name` | Name for the emitter input instance which re-emits the completed records at the beginning of the pipeline. |
| `emitter_storage.type` | The storage type for the emitter input instance. This option supports the values `memory` (default) and `filesystem`. |
-| `emitter_mem_buf_limit` | Set a limit on the amount of memory the emitter can consume if the outputs provide backpressure. The default for this limit is `10M`. The pipeline will pause once the buffer exceeds the value of this setting. or example, if the value is set to `10M` then the pipeline pauses if the buffer exceeds `10M`. The pipeline will remain paused until the output drains the buffer below the `10M` limit. |
+| `emitter_mem_buf_limit` | Set a limit on the amount of memory the emitter can consume if the outputs provide backpressure. The default for this limit is `10M`. The pipeline will pause once the buffer exceeds the value of this setting. For example, if the value is set to `10M`, the pipeline pauses if the buffer exceeds `10M`. The pipeline will remain paused until the output drains the buffer under the `10M` limit. |
## Configuration example
@@ -69,19 +68,19 @@ service:
flush: 1
log_level: info
parsers_file: parsers_multiline.yaml
-
+
pipeline:
inputs:
- name: tail
path: test.log
read_from_head: true
-
+
filters:
- name: multiline
match: '*'
multiline.key_content: log
multiline.parser: go,multiline-regex-test
-
+
outputs:
- name: stdout
match: '*'
@@ -319,7 +318,7 @@ Lines that don't match a pattern aren't considered as part of the multiline mess
## Docker partial message use case
-When Fluent Bit is consuming logs from a container runtime, such as Docker, these logs will be split when larger than a certain limit, usually 16KB. If your application emits a 100K log line, it will be split into seven partial messages. If you are using the [Fluentd Docker Log Driver](https://docs.docker.com/config/containers/logging/fluentd/) to send the logs to Fluent Bit, they might look like this:
+When Fluent Bit is consuming logs from a container runtime, such as Docker, these logs will be split when larger than a certain limit, usually 16 KB. If your application emits a 100K log line, it will be split into seven partial messages. If you are using the [Fluentd Docker Log Driver](https://docs.docker.com/config/containers/logging/fluentd/) to send the logs to Fluent Bit, they might look like this:
```text
{"source": "stdout", "log": "... omitted for brevity...", "partial_message": "true", "partial_id": "dc37eb08b4242c41757d4cd995d983d1cdda4589193755a22fcf47a638317da0", "partial_ordinal": "1", "partial_last": "false", "container_id": "a96998303938eab6087a7f8487ca40350f2c252559bc6047569a0b11b936f0f2", "container_name": "/hopeful_taussig"}]
@@ -354,4 +353,5 @@ pipeline:
{% endtab %}
{% endtabs %}
-The two options for `mode` are mutually exclusive in the filter. If you set the `mode` to `partial_message` then the `multiline.parser` option isn't allowed.
\ No newline at end of file
+The two options for `mode` are mutually exclusive in the filter. If you set the
+`mode` to `partial_message` then the `multiline.parser` option isn't allowed.
diff --git a/pipeline/filters/record-modifier.md b/pipeline/filters/record-modifier.md
index 925364467..ec5629db7 100644
--- a/pipeline/filters/record-modifier.md
+++ b/pipeline/filters/record-modifier.md
@@ -1,6 +1,6 @@
# Record modifier
-The _Record Modifier_ [filter](pipeline/filters.md) lets you append fields to a record, or exclude specific fields.
+The _Record modifier_ filter lets you append fields to a record, or exclude specific fields.
## Configuration parameters
diff --git a/pipeline/inputs/process.md b/pipeline/inputs/process.md
index 0fc790b2e..f8d685987 100644
--- a/pipeline/inputs/process.md
+++ b/pipeline/inputs/process.md
@@ -16,7 +16,7 @@ The plugin supports the following configuration parameters:
| `Alert` | If enabled, the plugin will only generate messages if the target process is down. | `false` |
| `Fd` | If enabled, a number of `fd` is appended to each record. | `true` |
| `Mem` | If enabled, memory usage of the process is appended to each record. | `true` |
-| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading#inputs). | `false` |
## Get started
@@ -30,7 +30,7 @@ fluent-bit -i proc -p proc_name=crond -o stdout
### Configuration file
-In your main configuration file, append the following `Input` & `Output` sections:
+In your main configuration file, append the following sections:
{% tabs %}
{% tab title="fluent-bit.yaml" %}
@@ -83,4 +83,4 @@ $ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
[2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
...
-```
\ No newline at end of file
+```
diff --git a/pipeline/outputs/azure_kusto.md b/pipeline/outputs/azure_kusto.md
index ddc2284c2..deae7de6a 100644
--- a/pipeline/outputs/azure_kusto.md
+++ b/pipeline/outputs/azure_kusto.md
@@ -13,12 +13,12 @@ Create an Azure Data Explorer cluster in one of the following ways:
- [Create a free-tier cluster](https://dataexplorer.azure.com/freecluster)
- [Create a fully featured cluster](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal)
-## Ingest into Microsoft Fabric real time analytics: Creating an Eventhouse cluster and KQL database
+## Ingest into Microsoft Fabric Real-Time Analytics: create an Eventhouse cluster and KQL database
Create an Eventhouse cluster and a KQL database using the following steps:
-- [Create an Eventhouse cluster](https://docs.microsoft.com/en-us/azure/data-explorer/eventhouse/create-eventhouse-cluster)
-- [Create a KQL database](https://docs.microsoft.com/en-us/azure/data-explorer/eventhouse/create-database)
+- [Create an Eventhouse cluster](https://learn.microsoft.com/en-us/training/modules/query-data-kql-database-microsoft-fabric/)
+- [Create a KQL database](https://learn.microsoft.com/en-us/training/modules/query-data-kql-database-microsoft-fabric/)
## Create an Azure registered application
@@ -42,47 +42,47 @@ A table with the expected schema must exist in order for data to be ingested pro
.create table FluentBit (log:dynamic, tag:string, timestamp:datetime)
```
-## Optional - Creating an Ingestion Mapping
-
-By default, Kusto will insert incoming ingestions into a table by inferring the mapped table columns, from the payload properties. However, this mapping can be customized by creating a [JSON ingestion mapping](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/mappings#json-mapping). The plugin can be configured to use an ingestion mapping via the `ingestion_mapping_reference` configuration key.
-
-## Configuration Parameters
-
-| Key | Description | Default |
-| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
-| tenant_id | _Required if `managed_identity_client_id` is not set_ - The tenant/domain ID of the AAD registered application. | |
-| client_id | _Required if `managed_identity_client_id` is not set_ - The client ID of the AAD registered application. | |
-| client_secret | _Required if `managed_identity_client_id` is not set_ - The client secret of the AAD registered application ([App Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret)). |
-| managed_identity_client_id | _Required if `tenant_id`, `client_id`, and `client_secret` are not set_ - The managed identity ID to authenticate with. Set to `SYSTEM` for system-assigned managed identity, or set to the MI client ID (GUID) for user-assigned managed identity. | |
-| ingestion_endpoint | _Required_ - The cluster's ingestion endpoint, usually in the form `https://ingest-cluster_name.region.kusto.windows.net |
-| database_name | _Required_ - The database name. | |
-| table_name | _Required_ - The table name. | |
-| ingestion_mapping_reference | _Optional_ - The name of a [JSON ingestion mapping](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/mappings#json-mapping) that will be used to map the ingested payload into the table columns. | |
-| log_key | Key name of the log content. | `log` |
-| include_tag_key | If enabled, a tag is appended to output. The key name is used `tag_key` property. | `On` |
-| tag_key | The key name of tag. If `include_tag_key` is false, This property is ignored. | `tag` |
-| include_time_key | If enabled, a timestamp is appended to output. The key name is used `time_key` property. | `On` |
-| time_key | The key name of time. If `include_time_key` is false, This property is ignored. | `timestamp` |
-| ingestion_endpoint_connect_timeout | The connection timeout of various Kusto endpoints in seconds. | `60` |
-| compression_enabled | If enabled, sends compressed HTTP payload (gzip) to Kusto. | `true` |
-| ingestion_resources_refresh_interval | The ingestion resources refresh interval of Kusto endpoint in seconds.
-| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
-| buffering_enabled | _Optional_ - Enable buffering into disk before ingesting into Azure Kusto. | `Off` |
-| buffer_dir | _Optional_ - When buffering is `On`, specifies the location of directory where the buffered data will be stored. | `/tmp/fluent-bit/azure-kusto/` |
-| upload_timeout | _Optional_ - When buffering is `On`, specifies a timeout for uploads. Fluent Bit will start ingesting buffer files which have been created more than x minutes and haven't reached `upload_file_size` limit. | `30m` |
-| upload_file_size | _Optional_ - When buffering is `On`, specifies the size of files to be uploaded in MBs. | `200MB` |
-| azure_kusto_buffer_key | _Optional_ - When buffering is `On`, set the Azure Kusto buffer key which must be specified when using multiple instances of Azure Kusto output plugin and buffering is enabled. | `key` |
-| store_dir_limit_size | _Optional_ - When buffering is `On`, set the max size of the buffer directory. | `8GB` |
-| buffer_file_delete_early | _Optional_ - When buffering is `On`, whether to delete the buffered file early after successful blob creation. | `Off` |
-| unify_tag | _Optional_ - This creates a single buffer file when the buffering mode is `On`. | `On` |
-| blob_uri_length | _Optional_ - Set the length of generated blob URI before ingesting to Kusto. | `64` |
-| scheduler_max_retries | _Optional_ - When buffering is `On`, set the maximum number of retries for ingestion using the scheduler. | `3` |
-| delete_on_max_upload_error | _Optional_ - When buffering is `On`, whether to delete the buffer file on maximum upload errors. | `Off` |
-| io_timeout | _Optional_ - Configure the HTTP IO timeout for uploads. | `60s` |
-
-### Configuration File
-
-Get started quickly with this configuration file:
+## Optional - create an ingestion mapping
+
+By default, Kusto will insert incoming ingestion data into a table by inferring the mapped table columns from the payload properties. However, this mapping can be customized by creating a [JSON ingestion mapping](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/mappings#json-mapping). The plugin can be configured to use an ingestion mapping using the `ingestion_mapping_reference` configuration key.
+
+## Configuration parameters
+
+| Key | Description | Default |
+| --- | ----------- | ----------- |
+| `tenant_id` | Required if `managed_identity_client_id` isn't set. The tenant/domain ID of the AAD registered application. | _none_ |
+| `client_id` | Required if `managed_identity_client_id` isn't set. The client ID of the AAD registered application. | _none_ |
+| `client_secret` | Required if `managed_identity_client_id` isn't set. The client secret of the AAD registered application ([App Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret)). | _none_ |
+| `managed_identity_client_id` | Required if `tenant_id`, `client_id`, and `client_secret` aren't set. The managed identity ID to authenticate with. Set to `SYSTEM` for system-assigned managed identity, or set to the MI client ID (`GUID`) for user-assigned managed identity. | _none_ |
+| `ingestion_endpoint` | Required. The cluster's ingestion endpoint, usually in the form `https://ingest-cluster_name.region.kusto.windows.net`. | _none_ |
+| `database_name` | Required. The database name. | _none_ |
+| `table_name` | Required. The table name. | _none_ |
+| `ingestion_mapping_reference` | Optional. The name of a [JSON ingestion mapping](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/mappings#json-mapping) that will be used to map the ingested payload into the table columns. | _none_ |
+| `log_key` | Key name of the log content. | `log` |
+| `include_tag_key` | If enabled, a tag is appended to the output. The key name is set by the `tag_key` property. | `On` |
+| `tag_key` | The key name of the tag. If `include_tag_key` is false, this property is ignored. | `tag` |
+| `include_time_key` | If enabled, a timestamp is appended to the output. The key name is set by the `time_key` property. | `On` |
+| `time_key` | The key name of the timestamp. If `include_time_key` is false, this property is ignored. | `timestamp` |
+| `ingestion_endpoint_connect_timeout` | The connection timeout of various Kusto endpoints in seconds. | `60` |
+| `compression_enabled` | If enabled, sends compressed HTTP payload (gzip) to Kusto. | `true` |
+| `ingestion_resources_refresh_interval` | The ingestion resources refresh interval of the Kusto endpoint, in seconds. | _none_ |
+| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
+| `buffering_enabled` | Optional. Enable buffering into disk before ingesting into Azure Kusto. | `Off` |
+| `buffer_dir` | Optional. When buffering is `On`, specifies the location of the directory where the buffered data will be stored. | `/tmp/fluent-bit/azure-kusto/` |
+| `upload_timeout` | Optional. When buffering is `On`, specifies a timeout for uploads. Fluent Bit starts ingesting buffer files that are older than this timeout and haven't reached the `upload_file_size` limit. | `30m` |
+| `upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in MBs. | `200MB` |
+| `azure_kusto_buffer_key` | Optional. When buffering is `On`, set the Azure Kusto buffer key which must be specified when using multiple instances of Azure Kusto output plugin and buffering is enabled. | `key` |
+| `store_dir_limit_size` | Optional. When buffering is `On`, set the max size of the buffer directory. | `8GB` |
+| `buffer_file_delete_early` | Optional. When buffering is `On`, whether to delete the buffered file early after successful blob creation. | `Off` |
+| `unify_tag` | Optional. This creates a single buffer file when the buffering mode is `On`. | `On` |
+| `blob_uri_length` | Optional. Set the length of the generated blob URI before ingesting to Kusto. | `64` |
+| `scheduler_max_retries` | Optional. When buffering is `On`, set the maximum number of retries for ingestion using the scheduler. | `3` |
+| `delete_on_max_upload_error` | Optional. When buffering is `On`, whether to delete the buffer file when the maximum number of upload errors is reached. | `Off` |
+| `io_timeout` | Optional. Configure the HTTP IO timeout for uploads. | `60s` |
+
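+As a minimal sketch of how the buffering keys fit together (the paths, sizes, and buffer key shown here are illustrative, and authentication settings are omitted for brevity), disk buffering could be enabled like this:
+
+```yaml
+pipeline:
+  outputs:
+    - name: azure_kusto
+      match: '*'
+      # Authentication settings (tenant_id, client_id, client_secret) omitted for brevity
+      ingestion_endpoint: https://ingest-mycluster.eastus.kusto.windows.net
+      database_name: telemetry
+      table_name: logs
+      # Buffer records on disk before ingestion
+      buffering_enabled: on
+      buffer_dir: /var/lib/fluent-bit/azure-kusto/
+      upload_timeout: 30m
+      upload_file_size: 200MB
+      # Required when running multiple azure_kusto instances with buffering enabled
+      azure_kusto_buffer_key: kusto_buffer_1
+      store_dir_limit_size: 8GB
+      scheduler_max_retries: 3
+```
+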
+### Configuration file
+
+Get started with this configuration file:
{% tabs %}
{% tab title="fluent-bit.yaml" %}
@@ -162,7 +162,7 @@ pipeline:
## Troubleshooting
-### 403 Forbidden
+### `403 Forbidden`
If you get a `403 Forbidden` error response, make sure that:
diff --git a/pipeline/outputs/bigquery.md b/pipeline/outputs/bigquery.md
index 47eb3be57..78b225798 100644
--- a/pipeline/outputs/bigquery.md
+++ b/pipeline/outputs/bigquery.md
@@ -14,7 +14,7 @@ Fluent Bit streams data into an existing BigQuery table using a service account
1. To stream data into BigQuery, you must create a [Google Cloud service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) for Fluent Bit.
1. Create a BigQuery dataset.
- Fluent Bit doesn't create datasets for your data, so you must [create the dataset]((https://cloud.google.com/bigquery/docs/datasets)) ahead of time. You must also grant the service account `WRITER` permission on the dataset.
+ Fluent Bit doesn't create datasets for your data, so you must [create the dataset](https://cloud.google.com/bigquery/docs/datasets) ahead of time. You must also grant the service account `WRITER` permission on the dataset.
Within the dataset you must create a table for the data to reside in. Use the following instructions for creating your table. Pay close attention to the schema, as it must match the schema of your output JSON. Unfortunately, because BigQuery doesn't allow dots in field names, you must use a filter to change the fields for many of the standard inputs (for example, `mem` or `cpu`).
1. [Create a BigQuery table](https://cloud.google.com/bigquery/docs/tables).
diff --git a/pipeline/outputs/cloudwatch.md b/pipeline/outputs/cloudwatch.md
index a384bb5d9..3075479d5 100644
--- a/pipeline/outputs/cloudwatch.md
+++ b/pipeline/outputs/cloudwatch.md
@@ -29,7 +29,7 @@ See [AWS credentials](https://docs.fluentbit.io/manual/administration/aws-creden
| `log_group_class` | Optional. Specifies the log storage class for new log groups when `auto_create_group` is set to `true`. You can't modify the storage class of existing log groups. Allowed values: `STANDARD`, `INFREQUENT_ACCESS`. Default: `STANDARD`. |
| `log_retention_days` | If set to a number greater than zero, the newly created log group's retention policy is set to this many days. Allowed values: [`1`, `3`, `5`, `7`, `14`, `30`, `60`, `90`, `120`, `150`, `180`, `365`, `400`, `545`, `731`, `1827`, `3653`]. |
| `endpoint` | Specify a custom endpoint for the CloudWatch Logs API. |
-| `metric_namespace` | An optional string representing the CloudWatch namespace for the metrics. See the [Metrics tutorial](metrics-tutorial) section for a full configuration. |
+| `metric_namespace` | An optional string representing the CloudWatch namespace for the metrics. See the [Metrics tutorial](#metrics-tutorial) section for a full configuration. |
| `metric_dimensions` | A list of lists containing the dimension keys that will be applied to all metrics. The values within a dimension set must be members on the root-node. For more information about dimensions, see [Dimension](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_Dimension.html) and [Dimensions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Dimension). In the Fluent Bit configurations, `metric_dimensions` is a comma and semicolon separated string. If you have only one list of dimensions, put the values as a comma separated string. If you want to put list of lists, use the list as semicolon separated strings. For example, if you set the value as `dimension_1,dimension_2;dimension_3`, Fluent Bit converts it as `[[dimension_1, dimension_2],[dimension_3]]`. |
| `sts_endpoint` | Specify a custom STS endpoint for the AWS STS API. |
| `profile` | Option to specify an AWS Profile for credentials. Defaults to `default`. |
diff --git a/pipeline/outputs/kinesis.md b/pipeline/outputs/kinesis.md
index 807ef98b2..af9db2495 100644
--- a/pipeline/outputs/kinesis.md
+++ b/pipeline/outputs/kinesis.md
@@ -10,7 +10,7 @@ This is the documentation for the core Fluent Bit Kinesis plugin written in C. I
This `kinesis_streams` plugin always uses a random partition key when uploading records to Kinesis through the [PutRecords API](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html).
-For information about how AWS credentials are fetched, see [AWS credentials](../administration/aws-credentials).
+For information about how AWS credentials are fetched, see [AWS credentials](../../administration/aws-credentials).
## Configuration parameters
diff --git a/pipeline/outputs/logdna.md b/pipeline/outputs/logdna.md
index 8304bf131..6d18fb061 100644
--- a/pipeline/outputs/logdna.md
+++ b/pipeline/outputs/logdna.md
@@ -18,7 +18,7 @@ This plugin uses the following configuration parameters:
| `tags` | A list of comma-separated strings to group records in LogDNA and simplify the query with filters. | _none_ |
| `file` | Optional name of a file being monitored. This value is only set if the record doesn't contain a reference to it. | _none_ |
| `app` | Name of the application. This value is automatically discovered on each record. If no value is found, the default value is used. | `Fluent Bit` |
-| `workers` | The number of [workers](../administration/multithreading#outputs) to perform flush operations for this output. | `0` |
+| `workers` | The number of [workers](../../administration/multithreading#outputs) to perform flush operations for this output. | `0` |
## Data discovery and enrichment
diff --git a/pipeline/outputs/nats.md b/pipeline/outputs/nats.md
index fbf2a9038..65d865f6d 100644
--- a/pipeline/outputs/nats.md
+++ b/pipeline/outputs/nats.md
@@ -8,7 +8,7 @@ The _NATS_ output plugin lets you flush your records into a [NATS Server](https:
| --- | ----------- | ------- |
| `host` | The IP address or hostname of the NATS server. | `127.0.0.1` |
| `port` | The TCP port of the target NATS server. | `4222` |
-| `workers` | The number of [workers](../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
+| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
{% hint style="info" %}
diff --git a/pipeline/outputs/observe.md b/pipeline/outputs/observe.md
index c04c6aa29..25f16b804 100644
--- a/pipeline/outputs/observe.md
+++ b/pipeline/outputs/observe.md
@@ -1,6 +1,6 @@
# Observe
-Use the [HTTP output plugin](../pipeline/outputs/http) to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html). It issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format.
+Use the [HTTP output plugin](./http) to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html). It issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format.
## Configuration parameters
diff --git a/pipeline/outputs/prometheus-remote-write.md b/pipeline/outputs/prometheus-remote-write.md
index a2925bd25..ceba7ab04 100644
--- a/pipeline/outputs/prometheus-remote-write.md
+++ b/pipeline/outputs/prometheus-remote-write.md
@@ -196,7 +196,7 @@ pipeline:
### Coralogix
-With [Coralogix Metrics](https://coralogix.com/platform/metrics/) you might need to customize the URI. Additionally, you will make use of the header key with Coralogix private key.
+With [Coralogix Metrics](https://coralogix.com/platform/) you might need to customize the URI. Additionally, you'll use the header key with your Coralogix private key.
{% tabs %}
{% tab title="fluent-bit.yaml" %}
diff --git a/pipeline/outputs/s3.md b/pipeline/outputs/s3.md
index 9aca877a4..b4d8ec4dd 100644
--- a/pipeline/outputs/s3.md
+++ b/pipeline/outputs/s3.md
@@ -19,7 +19,7 @@ for details about fetching AWS credentials.
{% hint style="warning" %}
-The [Prometheus success/retry/error metrics values](administration/monitoring.md) output by the built-in HTTP server in Fluent Bit are meaningless for S3 output. S3 has its own buffering and retry mechanisms. The Fluent Bit AWS S3 maintainers acknowlege this feature gap, and you can [track issue progress on GitHub](https://github.com/fluent/fluent-bit/issues/6141).
+The [Prometheus success/retry/error metrics values](../../administration/monitoring.md) output by the built-in HTTP server in Fluent Bit are meaningless for S3 output. S3 has its own buffering and retry mechanisms. The Fluent Bit AWS S3 maintainers acknowledge this feature gap, and you can [track issue progress on GitHub](https://github.com/fluent/fluent-bit/issues/6141).
{% endhint %}
@@ -36,7 +36,7 @@ The [Prometheus success/retry/error metrics values](administration/monitoring.md
| `upload_timeout` | When this amount of time elapses, Fluent Bit uploads and creates a new file in S3. Set to `60m` to upload a new file every hour. | `10m`|
| `store_dir` | Directory to locally buffer data before sending. When using multipart uploads, data buffers until reaching the `upload_chunk_size`. S3 stores metadata about in progress multipart uploads in this directory, allowing pending uploads to be completed if Fluent Bit stops and restarts. It stores the current `$INDEX` value if enabled in the S3 key format so the `$INDEX` keeps incrementing from its previous value after Fluent Bit restarts. | `/tmp/fluent-bit/s3` |
| `store_dir_limit_size` | Size limit for disk usage in S3. Limit the S3 buffers in the `store_dir` to limit disk usage. Use `store_dir_limit_size` instead of `storage.total_limit_size`, which can be used for other plugins. | `0` (unlimited) |
-| `s3_key_format` | Format string for keys in S3. This option supports a UUID, strftime time formatters, a syntax for selecting parts of the Fluent log tag using a syntax inspired by the `rewrite_tag` filter. Add `$UUID` in the format string to insert a random string. Add `$INDEX` in the format string to insert an integer that increments each upload. The `$INDEX` value saves in the `store_dir`. Add `$TAG` in the format string to insert the full log tag. Add `$TAG[0]` to insert the first part of the tag in theS3 key. The tag is split into parts using the characters specified with the `s3_key_format_tag_delimiters` option. Add the extension directly after the last piece of the format string to insert a key suffix. To specify a key suffix in `use_put_object` mode, you must specify `$UUID`. See [S3 Key Format](#allowing-a-file-extension-in-the-amazon-s3-key-format-with-usduuid). Time in `s3_key` is the timestamp of the first record in the S3 file. | `/fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S` |
+| `s3_key_format` | Format string for keys in S3. This option supports a UUID, strftime time formatters, and a syntax for selecting parts of the Fluent log tag, inspired by the `rewrite_tag` filter. Add `$UUID` in the format string to insert a random string. Add `$INDEX` in the format string to insert an integer that increments each upload. The `$INDEX` value saves in the `store_dir`. Add `$TAG` in the format string to insert the full log tag. Add `$TAG[0]` to insert the first part of the tag in the S3 key. The tag is split into parts using the characters specified with the `s3_key_format_tag_delimiters` option. Add the extension directly after the last piece of the format string to insert a key suffix. To specify a key suffix in `use_put_object` mode, you must specify `$UUID`. See [S3 Key Format](#s3-key-format-and-tag-delimiters). Time in `s3_key` is the timestamp of the first record in the S3 file. | `/fluent-bit-logs/$TAG/%Y/%m/%d/%H/%M/%S` |
| `s3_key_format_tag_delimiters` | A series of characters used to split the tag into parts for use with the `s3_key_format` option. | `.` |
| `static_file_path` | Disables behavior where UUID string appends to the end of the S3 key name when `$UUID` isn't provided in `s3_key_format`. `$UUID`, time formatters, `$TAG`, and other dynamic key formatters all work as expected while this feature is set to true. | `false` |
| `use_put_object` | Use the S3 `PutObject` API instead of the multipart upload API. When enabled, the key extension is only available when `$UUID` is specified in `s3_key_format`. If `$UUID` isn't included, a random string appends format string and the key extension can't be customized. | `false` |
@@ -370,8 +370,7 @@ On shutdown, S3 output attempts to complete all pending uploads. If an upload fa
[MinIO](https://min.io/) is a high-performance, S3-compatible object store that lets you build applications with S3 capability without using S3 itself.
-The following example runs [a MinIO server](https://docs.min.io/docs/minio-quickstart-guide.html)
-at `localhost:9000`, and create a bucket of `your-bucket`.
+The following example runs a MinIO server at `localhost:9000`, and creates a bucket named `your-bucket`.
Example:
diff --git a/pipeline/outputs/splunk.md b/pipeline/outputs/splunk.md
index 8a9873bb5..f905392ad 100644
--- a/pipeline/outputs/splunk.md
+++ b/pipeline/outputs/splunk.md
@@ -29,7 +29,7 @@ Content and Splunk metadata (fields) handling configuration properties:
| Key | Description | Default |
|:--- |:----------- |:------- |
-| `splunk_send_raw` | When enabled, the record keys and values are set in the top level of the map instead of under the event key. See [Sending Raw Events](sending-raw-events) to configure this option. | `off` |
+| `splunk_send_raw` | When enabled, the record keys and values are set in the top level of the map instead of under the event key. See [Sending Raw Events](#sending-raw-events) to configure this option. | `off` |
| `event_key` | Specify the key name that will be used to send a single value as part of the record. | _none_ |
| `event_host` | Specify the key name that contains the host value. This option allows a record accessors pattern. | _none_ |
| `event_source` | Set the source value to assign to the event data. | _none_ |
diff --git a/pipeline/outputs/stackdriver.md b/pipeline/outputs/stackdriver.md
index 20a070f97..34bd744b0 100644
--- a/pipeline/outputs/stackdriver.md
+++ b/pipeline/outputs/stackdriver.md
@@ -267,10 +267,7 @@ An upstream connection error means Fluent Bit wasn't able to reach Google servic
[2019/01/07 23:24:09] [error] [oauth2] could not get an upstream connection
```
-This is due to a network issue in the environment where Fluent Bit is running. Make sure that the Host, Container or Pod can reach the following Google end-points:
-
-- [https://www.googleapis.com](https://www.googleapis.com)
-- [https://logging.googleapis.com](https://logging.googleapis.com)
+This is due to a network issue in the environment where Fluent Bit is running. Ensure that the host, container, or pod can reach the Google API endpoints.
{% hint style="warning" %}
diff --git a/pipeline/outputs/standard-output.md b/pipeline/outputs/standard-output.md
index 8bb9b111f..4535d1714 100644
--- a/pipeline/outputs/standard-output.md
+++ b/pipeline/outputs/standard-output.md
@@ -1,10 +1,6 @@
# Standard output
-
-
-The _standard output_ output plugin prints ingested data to standard output.
-
-
+The _standard output_ plugin prints ingested data to standard output.
## Configuration parameters
@@ -21,10 +17,10 @@ The _standard output_ output plugin prints ingested data to standard output.
fluent-bit -i cpu -o stdout -v
```
-This example directs the plugin to gather [CPU](https://github.com/fluent/fluent-bit-docs/tree/ddc1cf3d996966b9db39f8784596c8b7132b4d5b/pipeline/input/cpu.md) usage metrics and print them out to the standard output in a human-readable way:
+This example directs the plugin to gather [CPU](../inputs/cpu-metrics) usage metrics and print them out to the standard output in a human-readable way:
```shell
-$ fluent-bit -i cpu -o stdout -p format=msgpack -v
+fluent-bit -i cpu -o stdout -p format=msgpack -v
...
[0] cpu.0: [1475898721, {"cpu_p"=>0.500000, "user_p"=>0.250000, "system_p"=>0.250000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>0.000000, "cpu3.p_system"=>1.000000}]
diff --git a/pipeline/parsers/configuring-parser.md b/pipeline/parsers/configuring-parser.md
index 0231a40b6..c6673d964 100644
--- a/pipeline/parsers/configuring-parser.md
+++ b/pipeline/parsers/configuring-parser.md
@@ -33,7 +33,7 @@ Multiple parsers can be defined and each section has it own properties. The foll
| `Format` | Specifies the format of the parser. Possible options: [`json`](json.md), [`regex`](regular-expression.md), [`ltsv`](ltsv.md), or [`logfmt`](logfmt.md). |
| `Regex` | Required for parsers with the `regex` format. Specifies the Ruby regular expression for parsing and composing the structured message. |
| `Time_Key` | If the log entry provides a field with a timestamp, this option specifies the name of that field. |
-| `Time_Format` | Specifies the format of the time field so it can be recognized and analyzed properly. Fluent Bit uses `strptime(3)` to parse time. See the [strptime documentation](https://linux.die.net/man/3/strptime) for available modifiers. The `%L` field descriptor is supported for fractional seconds. |
+| `Time_Format` | Specifies the format of the time field so it can be recognized and analyzed properly. Fluent Bit uses `strptime(3)` to parse time. See the [`strptime` documentation](https://linux.die.net/man/3/strptime) for available modifiers. The `%L` field descriptor is supported for fractional seconds. |
| `Time_Offset` | Specifies a fixed UTC time offset (such as `-0600` or `+0200`) for local dates. |
| `Time_Keep` | If enabled, when a time key is recognized and parsed, the parser will keep the original time key. If disabled, the parser will drop the original time field. |
| `Time_System_timezone` | If there is no time zone (`%z`) specified in the given `Time_Format`, enabling this option will make the parser detect and use the system's configured time zone. The configured time zone is detected from the [`TZ` environment variable](https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html). |
@@ -45,7 +45,7 @@ Multiple parsers can be defined and each section has it own properties. The foll
## Parsers configuration file
-All parsers must be defined in a parsers file (see below for examples), not in the Fluent Bit global configuration file. The parsers file exposes all parsers available that can be used by the input plugins that are aware of this feature. A parsers file can have multiple entries, like so:
+All parsers must be defined in a parsers file, not in the Fluent Bit global configuration file. The parsers file exposes all parsers available that can be used by the input plugins that are aware of this feature. A parsers file can have multiple entries, like so:
{% tabs %}
{% tab title="parsers.yaml" %}
@@ -95,7 +95,7 @@ For more information about the parsers available, refer to the [default parsers
## Time resolution and fractional seconds
-Time resolution and its format supported are handled by using the [strftime\(3\)](http://man7.org/linux/man-pages/man3/strftime.3.html) libc system function.
+Time resolution and its format supported are handled by using the [strftime\(3\)](http://man7.org/linux/man-pages/man3/strftime.3.html) `libc` system function.
In addition, Fluent Bit extends its time resolution to support fractional seconds like `017-05-17T15:44:31**.187512963**Z`. The `%L` format option for `Time_Format` is provided as a way to indicate that content must be interpreted as fractional seconds.
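+
+As an illustration only (the parser name and field layout are hypothetical, not taken from the default parsers file), a JSON parser that keeps fractional-second precision could declare `%L` in its `Time_Format`:
+
+```yaml
+parsers:
+  - name: json_nano
+    format: json
+    # Assumes each record carries an ISO 8601 "time" field with fractional seconds
+    time_key: time
+    time_format: '%Y-%m-%dT%H:%M:%S.%L%z'
+```
+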
@@ -203,7 +203,7 @@ The following time zone abbreviations are supported.
{% hint style="info" %}
-These are single-letter UTC offset designators. `J` (Juliet) represents local time and is not included. `Z` represents Zulu Time, as listed in the [Universal time zones](#universal-time-zones) list.
+These are single-letter UTC offset designators. `J` (Juliet) represents local time and isn't included. `Z` represents Zulu Time, as listed in the [Universal time zones](#universal-time-zones) list.
{% endhint %}
@@ -232,4 +232,4 @@ These are single-letter UTC offset designators. `J` (Juliet) represents local ti
| `V` | `-09:00` | `-32400` | no | Victor Time Zone |
| `W` | `-10:00` | `-36000` | no | Whiskey Time Zone |
| `X` | `-11:00` | `-43200` | no | X-ray Time Zone |
-| `Y` | `-12:00` | `-46800` | no | Yankee Time Zone |
\ No newline at end of file
+| `Y` | `-12:00` | `-46800` | no | Yankee Time Zone |
diff --git a/pipeline/processors/README.md b/pipeline/processors/README.md
index 388b55880..94b5e546a 100644
--- a/pipeline/processors/README.md
+++ b/pipeline/processors/README.md
@@ -6,7 +6,7 @@ Additionally, filters can be implemented in a way that mimics the behavior of pr
{% hint style="info" %}
-Only [YAML configuration files](../administration/configuring-fluent-bit/yaml/README.md) support processors.
+Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/configuration-file) support processors.
{% endhint %}
diff --git a/vale-styles/FluentBit/Acronyms.yml b/vale-styles/FluentBit/Acronyms.yml
index cceba0895..0d24b68b7 100644
--- a/vale-styles/FluentBit/Acronyms.yml
+++ b/vale-styles/FluentBit/Acronyms.yml
@@ -8,6 +8,7 @@ first: '\b([A-Z]{3,5})\b'
second: '(?:\b[A-Z][a-z]+ )+\(([A-Z]{3,5})\)'
# ... with the exception of these:
exceptions:
+ - AAD
- ACL
- ANSI
- API
@@ -30,6 +31,7 @@ exceptions:
- DNS
- DPI
- DPPS
+ - DST
- ECR
- ECS
- EKS
@@ -109,6 +111,7 @@ exceptions:
- TLS
- TRE
- TODO
+ - UAE
- UDP
- URI
- URL
diff --git a/vale-styles/FluentBit/Headings.yml b/vale-styles/FluentBit/Headings.yml
index d9e78a0d4..a6ebec4ea 100644
--- a/vale-styles/FluentBit/Headings.yml
+++ b/vale-styles/FluentBit/Headings.yml
@@ -25,9 +25,11 @@ exceptions:
- AWS MSK IAM
- AWS IAM
- Azure
+ - Azure Data Explorer
- Azure Logs Ingestion API
- Azure Log Analytics
- BuildKite
+ - CheckList
- CircleCI
- CLI
- CloudWatch
@@ -55,6 +57,7 @@ exceptions:
- Google Cloud
- Google Cloud BigQuery
- Google Cloud Platform
+ - GPG
- Grafana
- Grafana Cloud
- Graylog Extended Log Format
@@ -67,7 +70,9 @@ exceptions:
- Kafka Producer
- Kafka REST
- Kinesis
+ - KQL
- Kubernetes
+ - Kusto
- LaunchDarkly
- Libpq
- Linux
@@ -76,6 +81,7 @@ exceptions:
- LTSV
- macOS
- Marketplace
+ - Microsoft Fabric
- MongoDB
- NATS
- New Relic
@@ -91,6 +97,7 @@ exceptions:
- PostgreSQL
- Prometheus
- PromQL
+ - Raspberry Pi
- REPL
- ServiceMonitor
- SignalFx