Merged

36 commits
d320db1
Parser: fix 404:
esmerel Jul 24, 2025
d1217c2
static config: fix 404 and vale issues
esmerel Jul 24, 2025
6f90896
raspberry: fix 404 and vale issues
esmerel Jul 24, 2025
855dc7f
install: docker: fix 404 and vale issues
esmerel Jul 24, 2025
f09e345
install: kubernetes: fix 404 and vale issues
esmerel Jul 24, 2025
8d5290f
install: record-accessor: fix 404
esmerel Jul 24, 2025
5ca09d1
configure: unit sizes:: fix 404
esmerel Jul 24, 2025
00c70cf
configure: multiline:: fix 404
esmerel Jul 24, 2025
c002e9e
install: docker: fix 404
esmerel Jul 24, 2025
9d87969
admin: monitoring: fix 404
esmerel Jul 24, 2025
4363e52
admin: troubleshooting: fix 404
esmerel Jul 24, 2025
a6a1886
testing:validate : fix 404
esmerel Jul 24, 2025
783f23a
pipeline: input: fix 404
esmerel Jul 24, 2025
f95e4c1
parsers: configuring: fix 404
esmerel Jul 24, 2025
3936a04
: fix 404
esmerel Jul 24, 2025
6ce8391
: fix 404
esmerel Jul 24, 2025
e672a1c
filters: kubernetes: fix 404 and vale issues
esmerel Jul 24, 2025
59dc654
filters: kubernetes: fix 404 and vale issues
esmerel Jul 24, 2025
90b1fd5
filters: log to metrics: fix 404 and vale issues
esmerel Jul 24, 2025
efd395f
filters: multiline statrace: fix 404 and vale issues
esmerel Jul 24, 2025
e240d1f
outputs: metrics tutorial: fix 404 and vale issues
esmerel Jul 24, 2025
9312bbe
outputs: kinesis: fix 404 and vale issues
esmerel Jul 24, 2025
bdce112
outputs: s3: fix 404 and vale issues
esmerel Jul 24, 2025
11afd34
outputs: bigquery: fix 404 and vale issues
esmerel Jul 24, 2025
311977f
pipeline: output: azure kusto: style and 404
esmerel Jul 24, 2025
fa3fc94
pipeline: output: logdna: 404
esmerel Jul 24, 2025
a96cdce
pipeline: output: nats: 404
esmerel Jul 24, 2025
52d5660
pipeline: output: observe: 404
esmerel Jul 24, 2025
d454f12
pipeline: output: prom remote write: 404
esmerel Jul 24, 2025
beb9847
pipeline: output: splunk: 404
esmerel Jul 24, 2025
3e311df
pipeline: output: stackdriver: removing google api links because they…
esmerel Jul 24, 2025
5f0b7cb
pipeline: output: stackdriver: removing google api links because they…
esmerel Jul 24, 2025
7187b08
pipeline: output: stdout: 404
esmerel Jul 24, 2025
e00aa2a
install: kube: 404
esmerel Jul 24, 2025
a46b445
Update administration/troubleshooting.md
esmerel Jul 24, 2025
2451e96
Merge branch 'master' into lynettemiles/sc-143126/check-for-links-to-…
esmerel Jul 24, 2025
@@ -1,6 +1,6 @@
# Upstream servers

Fluent Bit [output plugins](../../pipeline/outputs/) aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for more of the use cases, but there are other scenarios where balancing across different nodes is required. The `Upstream` feature provides this capability.
Fluent Bit [output plugins](../../../pipeline/outputs/) aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for most use cases, but there are other scenarios where balancing across different nodes is required. The `Upstream` feature provides this capability.

An `Upstream` defines a set of nodes that will be targeted by an output plugin. By the nature of the implementation, an output plugin must support the `Upstream` feature. The following plugin has `Upstream` support:

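For illustration only, a minimal sketch of what an `Upstream` definition might look like in the classic configuration file format. The node names, hosts, and ports are hypothetical:

```text
[UPSTREAM]
    name       forward-balancing

[NODE]
    name       node-1
    host       127.0.0.1
    port       43000

[NODE]
    name       node-2
    host       127.0.0.1
    port       43001
```

An output plugin that supports the feature can then reference this definition so records are balanced across the listed nodes.
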
12 changes: 6 additions & 6 deletions administration/configuring-fluent-bit/multiline-parsing.md
@@ -18,7 +18,7 @@ Fluent Bit exposes certain pre-configured parsers (built-in) to solve specific m
| Parser | Description |
| ------ | ----------- |
| `docker` | Process a log entry generated by a Docker container engine. This parser supports the concatenation of log entries split by Docker. |
| `cri` | Process a log entry generated by CRI-O container engine. Like the `docker` parser, it supports concatenation of log entries |
| `cri` | Process a log entry generated by CRI-O container engine. Like the `docker` parser, it supports concatenation of log entries. |
| `go` | Process log entries generated by a Go-based language application and perform concatenation if multiline messages are detected. |
| `python` | Process log entries generated by a Python-based language application and perform concatenation if multiline messages are detected. |
| `java` | Process log entries generated by a Google Cloud Java language application and perform concatenation if multiline messages are detected. |
@@ -35,7 +35,7 @@ To understand which multiline parser type is required for your use case you have
| -------- | ----------- | ------- |
| `name` | Specify a unique name for the multiline parser definition. A good practice is to prefix the name with the word `multiline_` to avoid confusion with normal parser definitions. | _none_ |
| `type` | Set the multiline mode. Fluent Bit supports the type `regex`.| _none_ |
| `parser` | Name of a pre-defined parser that must be applied to the incoming content before applying the regular expression rule. If no parser is defined, it's assumed that's a raw text and not a structured message. <br /> When a parser is applied to a raw text, the regular expression is applied against a specific key of the structured message by using the `key_content` configuration property. | _none_ |
| `parser` | Name of a pre-defined parser that must be applied to the incoming content before applying the regular expression rule. If no parser is defined, the content is assumed to be raw text and not a structured message. When a parser is applied to raw text, the regular expression is applied against a specific key of the structured message by using the `key_content` configuration property. | _none_ |
| `key_content` | For an incoming structured message, specify the key that contains the data that should be processed by the regular expression and possibly concatenated. | _none_ |
| `flush_timeout` | Timeout in milliseconds to flush a non-terminated multiline buffer. | `5s` |
| `rule` | Configure a rule to match a multiline pattern. The rule has a [specific format](#rules-definition). Multiple rules can be defined. | _none_|
@@ -59,7 +59,7 @@ A rule specifies how to match a multiline pattern and perform the concatenation.
- regular expression pattern
- next state

A rule might be defined as follows (comments added to simplify the definition) in corresponding YAML and classic configuration examples below:
A rule might be defined as follows (comments added to simplify the definition) in corresponding YAML and classic configuration examples below:

{% tabs %}
{% tab title="parsers_multiline.yaml" %}
@@ -90,12 +90,12 @@ rule "cont" "/^\s+at.*/" "cont"
{% endtab %}
{% endtabs %}

This example defines two rules. Each rule has its own state name, regex patterns, and the next state name. Every field that composes a rule must be inside double quotes.
This example defines two rules. Each rule has its own state name, regular expression patterns, and the next state name. Every field that composes a rule must be inside double quotes.

The first rule of a state name must be `start_state`. The regex pattern must match the first line of a multiline message, and a next state must be set to specify what the possible continuation lines look like.
The first rule of a state name must be `start_state`. The regular expression pattern must match the first line of a multiline message, and a next state must be set to specify what the possible continuation lines look like.

{% hint style="info" %}
To simplify the configuration of regular expressions, you can use the [Rubular]((https://rubular.com/r/NDuyKwlTGOvq2g)) web site. This link uses the regex described in the previous example, plus a log line that matches the pattern:
To simplify the configuration of regular expressions, you can use the [Rubular](https://rubular.com/r/NDuyKwlTGOvq2g) web site. This link uses the regular expression described in the previous example, plus a log line that matches the pattern:
{% endhint %}
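
As a sketch only, the two rules described previously could be combined into a classic-mode multiline parser definition like the following. The parser name and `flush_timeout` value are illustrative:

```text
[MULTILINE_PARSER]
    name          multiline-regex-test
    type          regex
    flush_timeout 1000
    # rules  |  state name   | regex pattern                          | next state
    rule       "start_state"   "/([A-Za-z]+ \d+ \d+\:\d+\:\d+)(.*)/"    "cont"
    rule       "cont"          "/^\s+at.*/"                             "cont"
```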

#### Configuration example
12 changes: 6 additions & 6 deletions administration/configuring-fluent-bit/unit-sizes.md
@@ -1,12 +1,12 @@
# Unit sizes

Some configuration directives in Fluent Bit refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](https://github.com/fluent/fluent-bit-docs/tree/5f926fd1330690179b8c1edab90d672699599ec7/administration/input/tail.md), [Forward Input](https://github.com/fluent/fluent-bit-docs/tree/5f926fd1330690179b8c1edab90d672699599ec7/administration/input/forward.md) or generic properties like [Mem\_Buf\_Limit](https://github.com/fluent/fluent-bit-docs/tree/5f926fd1330690179b8c1edab90d672699599ec7/administration/configuring-fluent-bit/backpressure.md) use unit sizes.
Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail), [Forward Input](../../pipeline/inputs/forward.md) or generic properties like [`Mem_Buf_Limit`](../backpressure) use unit sizes.

[Fluent Bit](http://fluentbit.io) v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:
Fluent Bit v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:

| Suffix | Description | Example |
| :--- | :--- | :--- |
| | When a suffix isn't specified, assume that the value given is a bytes representation. | Specifying a value of 32000 means 32000 bytes. |
| `k`, `K`, `KB`, `kb` | Kilobyte: a unit of memory equal to 1,000 bytes. | 32k means 32000 bytes. |
| `m`, `M`, `MB`, `mb` | Megabyte: a unit of memory equal to 1,000,000 bytes. | 1M means 1000000 bytes. |
| `g`, `G`, `GB`, `gb` | Gigabyte: a unit of memory equal to 1,000,000,000 bytes. | 1G means 1000000000 bytes. |
| | When a suffix isn't specified, assume that the value given is a bytes representation. | Specifying a value of `32000` means 32000 bytes. |
| `k`, `K`, `KB`, `kb` | Kilobyte: a unit of memory equal to 1,000 bytes. | `32k` means 32000 bytes. |
| `m`, `M`, `MB`, `mb` | Megabyte: a unit of memory equal to 1,000,000 bytes. | `1M` means 1000000 bytes. |
| `g`, `G`, `GB`, `gb` | Gigabyte: a unit of memory equal to 1,000,000,000 bytes. | `1G` means 1000000000 bytes. |
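
As a brief sketch of how these suffixes appear in practice, a classic-mode `tail` input might combine them as follows. The path and values are hypothetical:

```text
[INPUT]
    name            tail
    path            /var/log/syslog
    buffer_max_size 32k
    mem_buf_limit   64MB
```
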
8 changes: 4 additions & 4 deletions administration/monitoring.md
@@ -62,7 +62,7 @@ pipeline:
{% endtab %}
{% endtabs %}

Start Fluent bit with the corresponding configuration chosen above:
Start Fluent Bit with the corresponding configuration chosen previously:

```shell
# For YAML configuration.
@@ -381,15 +381,15 @@ When querying the related metrics, the aliases are returned instead of the plugi
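
For example, assuming the built-in HTTP server is enabled on its default address and port, the Prometheus-format metrics, including any configured aliases, can be inspected with:

```shell
curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus
```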

You can create Grafana dashboards and alerts using Fluent Bit's exposed Prometheus style metrics.

The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json) is heavily inspired by [Banzai Cloud](https://banzaicloud.com)'s [logging operator dashboard](https://grafana.com/grafana/dashboards/7752) with a few key differences, such as the use of the `instance` label, stacked graphs, and a focus on Fluent Bit metrics. See [this blog post](https://www.robustperception.io/controlling-the-instance-label) for more information.
The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json) is heavily inspired by [Banzai Cloud](https://github.com/banzaicloud)'s [logging operator dashboard](https://grafana.com/grafana/dashboards/7752) with a few key differences, such as the use of the `instance` label, stacked graphs, and a focus on Fluent Bit metrics. See [this blog post](https://www.robustperception.io/controlling-the-instance-label) for more information.

![dashboard](/.gitbook/assets/dashboard.png)

### Alerts

Sample alerts are available [here](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/alerts.yaml).
Sample alerts [are available](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/alerts.yaml).

## Health Check for Fluent Bit
## Health check for Fluent Bit

Fluent Bit supports the following configuration options to set up the health check.
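
A minimal sketch of enabling the health check in the YAML `service` section is shown below. The option names mirror the documented `HC_*` properties, and the thresholds are illustrative; the status can then be queried from the HTTP server's `/api/v1/health` endpoint.

```yaml
service:
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  health_check: on
  hc_errors_count: 5
  hc_retry_failure_count: 5
  hc_period: 5
```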

14 changes: 7 additions & 7 deletions administration/troubleshooting.md
@@ -2,19 +2,19 @@

<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=759ddb3d-b363-4ee6-91fa-21025259767a" />

- [Tap: generate events or records](troubleshooting.md#tap)
- [Dump internals signal](troubleshooting#dump-internals-signal)
- [Tap: generate events or records](#tap)
- [Dump Internals and signal](#dump-internals-and-signal)

## Tap

Tap can be used to generate events or records detailing what messages pass through Fluent Bit, at what time and what filters affect them.

### Basic Tap example
### Tap example

Ensure that the container image supports Fluent Bit Tap (available in Fluent Bit 2.0+):

```shell
$ docker run --rm -ti fluent/fluent-bit:latest --help | grep trace
docker run --rm -ti fluent/fluent-bit:latest --help | grep trace
-Z, --enable-chunk-trace    enable chunk tracing, it can be activated either through the http api or the command line
--trace-input input to start tracing on startup.
--trace-output output to use for tracing on startup.
@@ -27,7 +27,7 @@ If the `--enable-chunk-trace` option is present, your Fluent Bit version support
You can start Fluent Bit with tracing activated from the beginning by using the `trace-input` and `trace-output` properties:

```bash
$ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
Fluent Bit v2.1.8
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
@@ -338,7 +338,7 @@ This filter record is an example to explain the details of a Tap record:
- `plugin_alias`: If an alias is set, this field will contain the alias set for the plugin.
- `records`: An array of all the records being sent. Fluent Bit handles records in chunks of multiple records, and chunks are indivisible; the same is done in the Tap output. Each record consists of its timestamp followed by the actual data, which is a composite type of keys and values.

## Dump Internals / Signal
## Dump Internals and signal

When the service is running, you can export [metrics](monitoring.md) to see the overall status of the data flow of the service. There are other use cases where you might need to know the current status of the service internals, like the current status of the internal buffers. Dump Internals can help provide this information.
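
As a quick sketch, assuming a single running `fluent-bit` process on Linux, the dump can be triggered by sending the `SIGCONT` signal:

```shell
kill -CONT $(pidof fluent-bit)
```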

@@ -436,7 +436,7 @@ Depending on the buffering strategy and limits imposed by configuration, some Ch
| | `size` | Amount of bytes used by the Chunk. |
| | `size err` | Number of Chunks in an error state where its size couldn't be retrieved. |

### Storage Layer
### Storage layer

Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The `Storage Layer` entry contains a total summary of Chunks registered by Fluent Bit:

2 changes: 1 addition & 1 deletion concepts/data-pipeline/parser.md
@@ -26,7 +26,7 @@ The parser converts unstructured data to structured data. As an example, conside
192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
```

This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](pipeline/parsers/regular-expression) is used, the log entry could be converted to:
This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression) is used, the log entry could be converted to:

```javascript
{
12 changes: 7 additions & 5 deletions installation/docker.md
@@ -16,27 +16,31 @@ Use the following command to start Fluent Bit while using a configuration file:

{% tabs %}
{% tab title="fluent-bit.conf" %}

```shell
docker run -ti -v ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf \
cr.fluentbit.io/fluent/fluent-bit
```

{% endtab %}

{% tab title="fluent-bit.yaml" %}

```shell
docker run -ti -v ./fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml \
cr.fluentbit.io/fluent/fluent-bit \
-c /fluent-bit/etc/fluent-bit.yaml

```

{% endtab %}
{% endtabs %}
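
The mounted configuration file can be as small as the following sketch, a hypothetical `dummy` input printed to standard output in the YAML format:

```yaml
service:
  flush: 1

pipeline:
  inputs:
    - name: dummy

  outputs:
    - name: stdout
      match: '*'
```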


## Tags and versions

The following table describes the Linux container tags that are available on Docker Hub [fluent/fluent-bit](https://hub.docker.com/r/fluent/fluent-bit/) repository:

| Tag(s) | Manifest Architectures | Description |
| Tags | Manifest Architectures | Description |
| ------------ | ------------------------- | -------------------------------------------------------------- |
| 4.0.4-debug | x86_64, arm64v8, arm32v7, s390x | Debug images |
| 4.0.4 | x86_64, arm64v8, arm32v7, s390x | Release [v4.0.4](https://fluentbit.io/announcements/v4.0.4/) |
@@ -122,8 +126,6 @@ The following table describes the Linux container tags that are available on Doc
| 2.1.6 | x86_64, arm64v8, arm32v7 | Release [v2.1.6](https://fluentbit.io/announcements/v2.1.6/) |
| 2.1.5 | x86_64, arm64v8, arm32v7 | Release [v2.1.5](https://fluentbit.io/announcements/v2.1.5/) |
| 2.1.5-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.4 | x86_64, arm64v8, arm32v7 | Release [v2.1.4](https://fluentbit.io/announcements/v2.1.4/) |
| 2.1.4-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.3 | x86_64, arm64v8, arm32v7 | Release [v2.1.3](https://fluentbit.io/announcements/v2.1.3/) |
| 2.1.3-debug | x86_64, arm64v8, arm32v7 | Debug images |
| 2.1.2 | x86_64, arm64v8, arm32v7 | Release [v2.1.2](https://fluentbit.io/announcements/v2.1.2/) |
@@ -189,7 +191,7 @@ From a deployment perspective, there's no need to specify an architecture. The c

## Verify signed container images

Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using `cosign` ([install guide](https://docs.sigstore.dev/cosign/installation/)):
Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using `cosign` ([install guide](https://docs.sigstore.dev/quickstart/quickstart-cosign/)):

```shell
$ cosign verify --key "https://packages.fluentbit.io/fluentbit-cosign.pub" fluent/fluent-bit:2.0.6
12 changes: 5 additions & 7 deletions installation/kubernetes.md
@@ -4,8 +4,6 @@ description: Kubernetes Production Grade Log Processor

# Kubernetes

![](<../.gitbook/assets/fluentbit\_kube\_logging (1).png>)

[Fluent Bit](http://fluentbit.io) is a lightweight and extensible log processor with full support for Kubernetes:

- Process Kubernetes container logs from the file system or Systemd/Journald.
@@ -16,17 +14,17 @@ description: Kubernetes Production Grade Log Processor

Before getting started, it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes. The Fluent Bit log agent tool needs to run on every node to collect logs from every pod. Fluent Bit is deployed as a DaemonSet, which ensures that a copy of its pod runs on every node of the cluster.

When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](../pipeline/filters/kubernetes) filter plugin.
When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) filter plugin.

The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the `pod_id`, `labels`, and `annotations`. Other fields, such as `pod_name`, `container_id`, and `container_name`, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect.

## Installation

[Fluent Bit](http://fluentbit.io) should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
Fluent Bit should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.

The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm Chart at <https://github.com/fluent/helm-charts>.
The recommended way to deploy Fluent Bit for Kubernetes is with the official [Helm Chart](https://github.com/fluent/helm-charts).

### Note for OpenShift
### OpenShift

If you are using Red Hat OpenShift, you must set up Security Context Constraints (SCC) using the relevant option in the Helm chart.
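
As a sketch of the usual flow before installing, add and refresh the official chart repository; the repository name `fluent` is the conventional choice:

```shell
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```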

@@ -46,7 +44,7 @@ To validate that the repository was added, run `helm search repo fluent` to ensu
helm upgrade --install fluent-bit fluent/fluent-bit
```

### Default Values
### Default values

The default chart values include configuration to read container logs and systemd logs, apply Docker parsing and Kubernetes metadata enrichment, and send output to an Elasticsearch cluster. You can modify the [included values file](https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml) to specify additional outputs, health checks, monitoring endpoints, or other configuration options.

10 changes: 5 additions & 5 deletions installation/linux/raspbian-raspberry-pi.md
@@ -1,10 +1,10 @@
# Raspbian and Raspberry Pi
# Raspberry Pi

Fluent Bit is distributed as the `fluent-bit` package and is available for the Raspberry, specifically for [Raspbian](http://raspbian.org) distribution. The following versions are supported:
Fluent Bit is distributed as the `fluent-bit` package and is available for [Raspberry Pi](https://www.raspberrypi.com/software/operating-systems/). The following versions are supported:

* Raspbian Bookworm (12)
* Raspbian Bullseye (11)
* Raspbian Buster (10)
- Raspbian Bookworm (12)
- Raspbian Bullseye (11)
- Raspbian Buster (10)

## Server GPG key
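
A sketch of the usual key-import step, assuming the upstream packages key published at `packages.fluentbit.io` and a Debian-style keyring path:

```shell
curl https://packages.fluentbit.io/fluentbit.key | gpg --dearmor > /usr/share/keyrings/fluentbit-keyring.gpg
```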
