diff --git a/pipeline/inputs/collectd.md b/pipeline/inputs/collectd.md
index 0ddfd82ea..534207e53 100644
--- a/pipeline/inputs/collectd.md
+++ b/pipeline/inputs/collectd.md
@@ -6,12 +6,12 @@ The _Collectd_ input plugin lets you receive datagrams from the `collectd` servi
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Listen` | Set the address to listen to. | `0.0.0.0` |
-| `Port` | Set the port to listen to. | `25826` |
-| `TypesDB` | Set the data specification file. | `/usr/share/collectd/types.db` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:-----------|:--------------------------------------------------------------------------------------------------------|:-------------------------------|
+| `Listen` | Set the address to listen to. | `0.0.0.0` |
+| `Port` | Set the port to listen to. | `25826` |
+| `TypesDB` | Set the data specification file. | `/usr/share/collectd/types.db` |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
## Configuration examples
@@ -22,15 +22,15 @@ Here is a basic configuration example:
```yaml
pipeline:
- inputs:
- - name: collectd
- listen: 0.0.0.0
- port: 25826
- typesdb: '/user/share/collectd/types.db,/etc/collectd/custom.db'
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: collectd
+ listen: 0.0.0.0
+ port: 25826
+      typesdb: '/usr/share/collectd/types.db,/etc/collectd/custom.db'
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -38,14 +38,14 @@ pipeline:
```text
[INPUT]
- Name collectd
- Listen 0.0.0.0
- Port 25826
- TypesDB /usr/share/collectd/types.db,/etc/collectd/custom.db
+ Name collectd
+ Listen 0.0.0.0
+ Port 25826
+ TypesDB /usr/share/collectd/types.db,/etc/collectd/custom.db
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/cpu-metrics.md b/pipeline/inputs/cpu-metrics.md
index 819f57f1a..2542dcac5 100644
--- a/pipeline/inputs/cpu-metrics.md
+++ b/pipeline/inputs/cpu-metrics.md
@@ -6,30 +6,30 @@ The following tables describe the information generated by the plugin. The follo
The CPU metrics plugin creates log-based metrics, such as JSON payloads. For Prometheus-based metrics, see the _Node Exporter Metrics_ input plugin.
-| Key | Description |
-| :--- | :--- |
-| `cpu_p` | CPU usage of the overall system, this value is the summation of time spent on user and kernel space. The result takes in consideration the numbers of CPU cores in the system. |
-| `user_p` | CPU usage in User mode, for short it means the CPU usage by user space programs. The result of this value takes in consideration the numbers of CPU cores in the system. |
-| `system_p` | CPU usage in Kernel mode, for short it means the CPU usage by the Kernel. The result of this value takes in consideration the numbers of CPU cores in the system. |
-| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). Default: `false`. |
+| Key | Description |
+|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `cpu_p`    | CPU usage of the overall system. This value is the sum of the time spent in user and kernel space. The result takes into account the number of CPU cores in the system. |
+| `user_p`   | CPU usage in User mode, meaning the CPU usage by user space programs. The result takes into account the number of CPU cores in the system. |
+| `system_p` | CPU usage in Kernel mode, meaning the CPU usage by the Kernel. The result takes into account the number of CPU cores in the system. |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). Default: `false`. |
In addition to the keys reported in the previous table, similar content is created per CPU core. The cores are listed from `0` to `N` as the kernel reports them:
-| Key | Description |
-| :--- | :--- |
-| `cpuN.p_cpu` | Represents the total CPU usage by core `N`. |
-| `cpuN.p_user` | Total CPU spent in user mode or user space programs associated to this core. |
-| `cpuN.p_system` | Total CPU spent in system or kernel mode associated to this core. |
+| Key | Description |
+|:----------------|:-----------------------------------------------------------------------------|
+| `cpuN.p_cpu` | Represents the total CPU usage by core `N`. |
+| `cpuN.p_user`   | Total CPU spent in user mode or user space programs associated with this core. |
+| `cpuN.p_system` | Total CPU spent in system or kernel mode associated with this core. |
## Configuration parameters
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Interval_Sec` | Polling interval in seconds. | `1` |
-| `Interval_NSec | Polling interval in nanoseconds` | `0` |
-| `PID` | Specify the `ID` (`PID`) of a running process in the system. By default, the plugin monitors the whole system but if this option is set, it will only monitor the given process ID. | _none_ |
+| Key | Description | Default |
+|:---------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------|
+| `Interval_Sec` | Polling interval in seconds. | `1` |
+| `Interval_NSec` | Polling interval in nanoseconds. | `0` |
+| `PID`           | Specify the process ID (`PID`) of a running process in the system. By default, the plugin monitors the whole system; if this option is set, it monitors only the given process ID. | _none_ |
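+
+As a sketch of per-process monitoring (the `PID` value `1234` here is hypothetical), the plugin can be pointed at a single process instead of the whole system:
+
+```yaml
+pipeline:
+  inputs:
+    - name: cpu
+      tag: my_cpu
+      # Hypothetical PID: replace with the ID of the process to monitor.
+      pid: 1234
+
+  outputs:
+    - name: stdout
+      match: '*'
+```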
## Get started
@@ -46,17 +46,12 @@ build/bin/fluent-bit -i cpu -t my_cpu -o stdout -m '*'
The command returns results similar to the following:
```text
-Fluent Bit v1.x.x
-* Copyright (C) 2019-2020 The Fluent Bit Authors
-* Copyright (C) 2015-2018 Treasure Data
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-[2019/09/02 10:46:29] [ info] starting engine
+...
[0] [1452185189, {"cpu_p"=>7.00, "user_p"=>5.00, "system_p"=>2.00, "cpu0.p_cpu"=>10.00, "cpu0.p_user"=>8.00, "cpu0.p_system"=>2.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>4.00, "cpu1.p_system"=>2.00}]
[1] [1452185190, {"cpu_p"=>6.50, "user_p"=>5.00, "system_p"=>1.50, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>7.00, "cpu1.p_user"=>5.00, "cpu1.p_system"=>2.00}]
[2] [1452185191, {"cpu_p"=>7.50, "user_p"=>5.00, "system_p"=>2.50, "cpu0.p_cpu"=>7.00, "cpu0.p_user"=>3.00, "cpu0.p_system"=>4.00, "cpu1.p_cpu"=>6.00, "cpu1.p_user"=>6.00, "cpu1.p_system"=>0.00}]
[3] [1452185192, {"cpu_p"=>4.50, "user_p"=>3.50, "system_p"=>1.00, "cpu0.p_cpu"=>6.00, "cpu0.p_user"=>5.00, "cpu0.p_system"=>1.00, "cpu1.p_cpu"=>5.00, "cpu1.p_user"=>3.00, "cpu1.p_system"=>2.00}]
+...
```
As described previously, the CPU input plugin gathers the overall usage every second and flushes the information to the output on the fifth second. This example uses the `stdout` plugin to demonstrate the output records. In a real use-case you might want to flush this information to some central aggregator such as [Fluentd](http://fluentd.org) or [Elasticsearch](http://elastic.co).
@@ -71,13 +66,13 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: cpu
- tag: my_cpu
+ inputs:
+ - name: cpu
+ tag: my_cpu
- outputs:
- - name: stdout
- match: '*'
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -85,12 +80,12 @@ pipeline:
```shell
[INPUT]
- Name cpu
- Tag my_cpu
+ Name cpu
+ Tag my_cpu
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/disk-io-metrics.md b/pipeline/inputs/disk-io-metrics.md
index 12ed82dbe..4ed5cb5c7 100644
--- a/pipeline/inputs/disk-io-metrics.md
+++ b/pipeline/inputs/disk-io-metrics.md
@@ -8,12 +8,12 @@ The _Disk I/O metrics_ plugin creates metrics that are log-based, such as JSON p
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Interval_Sec` | Polling interval (seconds). | `1` |
-| `Interval_NSec` | Polling interval (nanosecond). | `0` |
-| `Dev_Name` | Device name to limit the target (for example, `sda`). If not set, `in_disk` gathers information from all of disks and partitions. | all disks |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:----------------|:----------------------------------------------------------------------------------------------------------------------------------|:----------|
+| `Interval_Sec` | Polling interval (seconds). | `1` |
+| `Interval_NSec` | Polling interval (nanoseconds). | `0` |
+| `Dev_Name`      | Device name to limit the target (for example, `sda`). If not set, `in_disk` gathers information from all disks and partitions. | all disks |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
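+
+To limit collection to a single device, set `Dev_Name`. A minimal sketch, assuming a device named `sda`:
+
+```yaml
+pipeline:
+  inputs:
+    - name: disk
+      tag: disk
+      # Assumed device name; omit to gather from all disks and partitions.
+      dev_name: sda
+
+  outputs:
+    - name: stdout
+      match: '*'
+```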
## Get started
@@ -30,17 +30,12 @@ fluent-bit -i disk -o stdout
This returns information like the following:
```text
-Fluent Bit v1.x.x
-* Copyright (C) 2019-2020 The Fluent Bit Authors
-* Copyright (C) 2015-2018 Treasure Data
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-[2017/01/28 16:58:16] [ info] [engine] started
+...
[0] disk.0: [1485590297, {"read_size"=>0, "write_size"=>0}]
[1] disk.0: [1485590298, {"read_size"=>0, "write_size"=>0}]
[2] disk.0: [1485590299, {"read_size"=>0, "write_size"=>0}]
[3] disk.0: [1485590300, {"read_size"=>0, "write_size"=>11997184}]
+...
```
### Configuration file
@@ -52,15 +47,15 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: disk
- tag: disk
- interval_sec: 1
- interval_nsec: 0
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: disk
+ tag: disk
+ interval_sec: 1
+ interval_nsec: 0
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -68,14 +63,14 @@ pipeline:
```text
[INPUT]
- Name disk
- Tag disk
- Interval_Sec 1
- Interval_NSec 0
+ Name disk
+ Tag disk
+ Interval_Sec 1
+ Interval_NSec 0
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/docker-events.md b/pipeline/inputs/docker-events.md
index 8e7f45a80..e51ac4330 100644
--- a/pipeline/inputs/docker-events.md
+++ b/pipeline/inputs/docker-events.md
@@ -6,15 +6,15 @@ The _Docker events_ input plugin uses the Docker API to capture server events. A
This plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Unix_Path` | The docker socket Unix path. | `/var/run/docker.sock` |
-| `Buffer_Size` | The size of the buffer used to read docker events in bytes. | `8192` |
-| `Parser` | Specify the name of a parser to interpret the entry as a structured message. | _none_ |
-| `Key` | When a message is unstructured (no parser applied), it's appended as a string under the key name `message`. | `message` |
-| `Reconnect.Retry_limits`| The maximum number of retries allowed. The plugin tries to reconnect with docker socket when `EOF` is detected. | `5` |
-| `Reconnect.Retry_interval`| The retry interval in seconds. | `1` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:---------------------------|:----------------------------------------------------------------------------------------------------------------|:-----------------------|
+| `Unix_Path`                | The Docker socket Unix path. | `/var/run/docker.sock` |
+| `Buffer_Size`              | The size of the buffer used to read Docker events, in bytes. | `8192` |
+| `Parser` | Specify the name of a parser to interpret the entry as a structured message. | _none_ |
+| `Key` | When a message is unstructured (no parser applied), it's appended as a string under the key name `message`. | `message` |
+| `Reconnect.Retry_limits`   | The maximum number of retries allowed. The plugin tries to reconnect with the Docker socket when `EOF` is detected. | `5` |
+| `Reconnect.Retry_interval` | The retry interval in seconds. | `1` |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
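+
+As a sketch of how the buffer and reconnect parameters combine (the values shown are illustrative, not recommendations):
+
+```yaml
+pipeline:
+  inputs:
+    - name: docker_events
+      # Illustrative values: a larger read buffer and more reconnect attempts.
+      buffer_size: 16384
+      reconnect.retry_limits: 10
+      reconnect.retry_interval: 2
+
+  outputs:
+    - name: stdout
+      match: '*'
+```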
### Command line
@@ -33,12 +33,12 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: docker_events
+ inputs:
+ - name: docker_events
- outputs:
- - name: stdout
- match: '*'
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -46,11 +46,11 @@ pipeline:
```text
[INPUT]
- Name docker_events
+ Name docker_events
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/docker-metrics.md b/pipeline/inputs/docker-metrics.md
index aee5f9f6a..13fcdae98 100644
--- a/pipeline/inputs/docker-metrics.md
+++ b/pipeline/inputs/docker-metrics.md
@@ -6,13 +6,13 @@ The _Docker_ input plugin you collect Docker container metrics, including memory
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| ------------ | ----------------------------------------------- | ------- |
-| `Interval_Sec` | Polling interval in seconds | `1` |
-| `Include` | A space-separated list of containers to include. | _none_ |
-| `Exclude` | A space-separated list of containers to exclude. | _none_ |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
-| `path.containers` | Used to specify the container directory if Docker is configured with a custom `data-root` directory. | `/var/lib/docker/containers` |
+| Key | Description | Default |
+|-------------------|---------------------------------------------------------------------------------------------------------|------------------------------|
+| `Interval_Sec`    | Polling interval in seconds. | `1` |
+| `Include` | A space-separated list of containers to include. | _none_ |
+| `Exclude` | A space-separated list of containers to exclude. | _none_ |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| `path.containers` | Used to specify the container directory if Docker is configured with a custom `data-root` directory. | `/var/lib/docker/containers` |
If you set neither `Include` nor `Exclude`, the plugin will try to get metrics from all running containers.
@@ -25,13 +25,13 @@ The following example configuration collects metrics from two docker instances (
```yaml
pipeline:
- inputs:
- - name: docker
- include: 6bab19c3a0f9 14159be4ca2c
+ inputs:
+ - name: docker
+ include: 6bab19c3a0f9 14159be4ca2c
- outputs:
- - name: stdout
- match: '*'
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -39,11 +39,12 @@ pipeline:
```text
[INPUT]
- Name docker
- Include 6bab19c3a0f9 14159be4ca2c
+ Name docker
+ Include 6bab19c3a0f9 14159be4ca2c
+
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/dummy.md b/pipeline/inputs/dummy.md
index bbab31dd6..9ff3b7aec 100644
--- a/pipeline/inputs/dummy.md
+++ b/pipeline/inputs/dummy.md
@@ -6,19 +6,19 @@ The _Dummy_ input plugin, generates dummy events. Use this plugin for testing, d
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :----------------- | :---------- | :------ |
-| `Dummy` | Dummy JSON record. | `{"message":"dummy"}` |
-| `Metadata` | Dummy JSON metadata. | `{}` |
-| `Start_time_sec` | Dummy base timestamp, in seconds. | `0` |
-| `Start_time_nsec` | Dummy base timestamp, in nanoseconds. | `0` |
-| `Rate` | Rate at which messages are generated expressed in how many times per second. | `1` |
-| `Interval_sec` | Set time interval, in seconds, at which every message is generated. If set, `Rate` configuration is ignored. | `0` |
-| `Interval_nsec` | Set time interval, in nanoseconds, at which every message is generated. If set, `Rate` configuration is ignored. | `0` |
-| `Samples` | If set, the events number will be limited. For example, if Samples=3, the plugin generates only three events and stops. | _none_ |
-| `Copies` | Number of messages to generate each time messages generate. | `1` |
-| `Flush_on_startup` | If set to `true`, the first dummy event is generated at startup. | `false` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:-------------------|:------------------------------------------------------------------------------------------------------------------------|:----------------------|
+| `Dummy` | Dummy JSON record. | `{"message":"dummy"}` |
+| `Metadata` | Dummy JSON metadata. | `{}` |
+| `Start_time_sec` | Dummy base timestamp, in seconds. | `0` |
+| `Start_time_nsec` | Dummy base timestamp, in nanoseconds. | `0` |
+| `Rate`             | Rate at which messages are generated, expressed as the number of messages per second. | `1` |
+| `Interval_sec`     | Set the time interval, in seconds, at which every message is generated. If set, the `Rate` configuration is ignored. | `0` |
+| `Interval_nsec`    | Set the time interval, in nanoseconds, at which every message is generated. If set, the `Rate` configuration is ignored. | `0` |
+| `Samples`          | If set, limits the number of events generated. For example, if `Samples=3`, the plugin generates only three events and stops. | _none_ |
+| `Copies`           | Number of messages to generate each time messages are generated. | `1` |
+| `Flush_on_startup` | If set to `true`, the first dummy event is generated at startup. | `false` |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
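+
+As a sketch of how the generation controls combine (the values are illustrative), the following emits two copies per cycle, stops after three events, and generates the first event at startup:
+
+```yaml
+pipeline:
+  inputs:
+    - name: dummy
+      dummy: '{"message":"dummy"}'
+      # Illustrative values: three events total, two copies each cycle.
+      samples: 3
+      copies: 2
+      flush_on_startup: true
+
+  outputs:
+    - name: stdout
+      match: '*'
+```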
## Get started
@@ -35,13 +35,10 @@ fluent-bit -i dummy -o stdout
This returns results like the following:
```text
-Fluent Bit v2.x.x
-* Copyright (C) 2015-2022 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
+...
[0] dummy.0: [[1686451466.659962491, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1686451467.659679509, {}], {"message"=>"dummy"}]
+...
```
### Configuration file
@@ -53,13 +50,13 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: dummy
- dummy: '{"message": "custom dummy"}'
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: dummy
+ dummy: '{"message": "custom dummy"}'
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -67,12 +64,12 @@ pipeline:
```text
[INPUT]
- Name dummy
- Dummy {"message": "custom dummy"}
+ Name dummy
+ Dummy {"message": "custom dummy"}
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/ebpf.md b/pipeline/inputs/ebpf.md
index 68df61579..08a4644c9 100644
--- a/pipeline/inputs/ebpf.md
+++ b/pipeline/inputs/ebpf.md
@@ -20,7 +20,7 @@ To enable `in_ebpf`, ensure the following dependencies are installed on your sys
### Installing dependencies on Ubuntu
-```bash
+```shell
sudo apt update
sudo apt install libbpf-dev linux-tools-common cmake
```
@@ -31,7 +31,7 @@ To enable the `in_ebpf` plugin, follow these steps to build Fluent Bit from sour
1. Clone the Fluent Bit repository:
- ```bash
+ ```shell
git clone https://github.com/fluent/fluent-bit.git
cd fluent-bit
```
@@ -40,7 +40,7 @@ To enable the `in_ebpf` plugin, follow these steps to build Fluent Bit from sour
Create a build directory and run `cmake` with the `-DFLB_IN_EBPF=On` flag to enable the `in_ebpf` plugin:
- ```bash
+ ```shell
mkdir build
cd build
cmake .. -DFLB_IN_EBPF=On
@@ -48,7 +48,7 @@ To enable the `in_ebpf` plugin, follow these steps to build Fluent Bit from sour
1. Compile the source:
- ```bash
+ ```shell
make
```
@@ -56,12 +56,12 @@ To enable the `in_ebpf` plugin, follow these steps to build Fluent Bit from sour
Run Fluent Bit with elevated permissions (for example, `sudo`). Loading eBPF programs requires root access or appropriate privileges.
- ```bash
+ ```shell
# For YAML configuration.
- sudo ./bin/fluent-bit --config fluent-bit.yaml
+ sudo fluent-bit --config fluent-bit.yaml
# For classic configuration.
- sudo ./bin/fluent-bit --config fluent-bit.conf
+ sudo fluent-bit --config fluent-bit.conf
```
## Configuration example
@@ -73,12 +73,12 @@ Here's a basic example of how to configure the plugin:
```yaml
pipeline:
- inputs:
- - name: ebpf
- trace:
- - trace_signal
- - trace_malloc
- - trace_bind
+ inputs:
+ - name: ebpf
+ trace:
+ - trace_signal
+ - trace_malloc
+ - trace_bind
```
{% endtab %}
@@ -86,10 +86,10 @@ pipeline:
```text
[INPUT]
- Name ebpf
- Trace trace_signal
- Trace trace_malloc
- Trace trace_bind
+ Name ebpf
+ Trace trace_signal
+ Trace trace_malloc
+ Trace trace_bind
```
{% endtab %}
diff --git a/pipeline/inputs/elasticsearch.md b/pipeline/inputs/elasticsearch.md
index 79e38f960..3e15f0557 100644
--- a/pipeline/inputs/elasticsearch.md
+++ b/pipeline/inputs/elasticsearch.md
@@ -6,15 +6,15 @@ The _Elasticsearch_ input plugin handles both Elasticsearch and OpenSearch Bulk
The plugin supports the following configuration parameters:
-| Key | Description | Default value |
-| :--- | :--- | :--- |
-| `buffer_max_size` | Set the maximum size of buffer. | `4M` |
-| `buffer_chunk_size` | Set the buffer chunk size. | `512K` |
-| `tag_key` | Specify a key name for extracting as a tag. | `NULL` |
-| `meta_key` | Specify a key name for meta information. | "@meta" |
-| `hostname` | Specify hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. | "localhost" |
-| `version` | Specify Elasticsearch server version. This parameter is effective for checking a version of Elasticsearch/OpenSearch server version. | "8.0.0" |
-| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default value |
+|:--------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|:--------------|
+| `buffer_max_size`   | Set the maximum size of the buffer. | `4M` |
+| `buffer_chunk_size` | Set the buffer chunk size. | `512K` |
+| `tag_key`           | Specify the key name to extract as a tag. | `NULL` |
+| `meta_key`          | Specify a key name for meta information. | `@meta` |
+| `hostname`          | Specify the hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery) of cluster node information. | `localhost` |
+| `version`           | Specify the Elasticsearch server version. This parameter is used for checking the version of the Elasticsearch/OpenSearch server. | `8.0.0` |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
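+
+For example, a minimal sketch that tags incoming bulk records by a key in the payload (the key name `service` is hypothetical):
+
+```yaml
+pipeline:
+  inputs:
+    - name: elasticsearch
+      listen: 0.0.0.0
+      port: 9200
+      # Hypothetical key name: use the value of "service" in each record as the tag.
+      tag_key: service
+
+  outputs:
+    - name: stdout
+      match: '*'
+```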
The Elasticsearch cluster uses "sniffing" to optimize the connections between the cluster and its clients.
Elasticsearch can build its cluster and dynamically generate a connection list, which is called "sniffing".
@@ -41,14 +41,14 @@ In your configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: elasticsearch
- listen: 0.0.0.0
- port: 9200
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: elasticsearch
+ listen: 0.0.0.0
+ port: 9200
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -56,13 +56,13 @@ pipeline:
```text
[INPUT]
- name elasticsearch
- listen 0.0.0.0
- port 9200
+ name elasticsearch
+ listen 0.0.0.0
+ port 9200
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
@@ -76,16 +76,16 @@ For large bulk ingestion, you might have to increase buffer size using the `buff
```yaml
pipeline:
- inputs:
- - name: elasticsearch
- listen: 0.0.0.0
- port: 9200
- buffer_max_size: 20M
- buffer_chunk_size: 5M
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: elasticsearch
+ listen: 0.0.0.0
+ port: 9200
+ buffer_max_size: 20M
+ buffer_chunk_size: 5M
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -93,15 +93,15 @@ pipeline:
```text
[INPUT]
- name elasticsearch
- listen 0.0.0.0
- port 9200
- buffer_max_size 20M
- buffer_chunk_size 5M
+ name elasticsearch
+ listen 0.0.0.0
+ port 9200
+ buffer_max_size 20M
+ buffer_chunk_size 5M
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
diff --git a/pipeline/inputs/exec-wasi.md b/pipeline/inputs/exec-wasi.md
index 231e3a524..ab4e73bf4 100644
--- a/pipeline/inputs/exec-wasi.md
+++ b/pipeline/inputs/exec-wasi.md
@@ -6,18 +6,18 @@ The _Exec Wasi_ input plugin lets you execute Wasm programs that are WASI target
The plugin supports the following configuration parameters:
-| Key | Description |
-| :--- | :--- |
-| `WASI_Path` | The location of a Wasm program file. |
-| `Parser` | Specify the name of a parser to interpret the entry as a structured message. |
-| `Accessible_Paths` | Specify the allowed list of paths to be able to access paths from WASM programs. |
-| `Interval_Sec` | Polling interval (seconds). |
-| `Interval_NSec` | Polling interval (nanosecond). |
-| `Wasm_Heap_Size` | Size of the heap size of Wasm execution. Review [unit sizes](../../administration/configuring-fluent-bit/unit-sizes.md) for allowed values. |
-| `Wasm_Stack_Size` | Size of the stack size of Wasm execution. Review [unit sizes](../../administration/configuring-fluent-bit/unit-sizes.md) for allowed values. |
-| `Buf_Size` | Size of the buffer See [unit sizes](../../administration/configuring-fluent-bit/unit-sizes.md) for allowed values. |
-| `Oneshot` | Only run once at startup. This allows collection of data precedent to the Fluent Bit startup (Boolean, default: `false`). |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). Default: `false`. |
+| Key | Description |
+|:-------------------|:---------------------------------------------------------------------------------------------------------------------------------------------|
+| `WASI_Path` | The location of a Wasm program file. |
+| `Parser` | Specify the name of a parser to interpret the entry as a structured message. |
+| `Accessible_Paths` | Specify the list of paths that Wasm programs are allowed to access. |
+| `Interval_Sec`     | Polling interval (seconds). |
+| `Interval_NSec`    | Polling interval (nanoseconds). |
+| `Wasm_Heap_Size`   | Size of the heap for Wasm execution. Review [unit sizes](../../administration/configuring-fluent-bit/unit-sizes.md) for allowed values. |
+| `Wasm_Stack_Size`  | Size of the stack for Wasm execution. Review [unit sizes](../../administration/configuring-fluent-bit/unit-sizes.md) for allowed values. |
+| `Buf_Size`         | Size of the buffer. See [unit sizes](../../administration/configuring-fluent-bit/unit-sizes.md) for allowed values. |
+| `Oneshot`          | Only run once at startup. This allows collection of data prior to Fluent Bit startup (Boolean, default: `false`). |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). Default: `false`. |
## Configuration examples
diff --git a/pipeline/inputs/exec.md b/pipeline/inputs/exec.md
index 5b3a615fd..ad86b3b6e 100644
--- a/pipeline/inputs/exec.md
+++ b/pipeline/inputs/exec.md
@@ -18,17 +18,17 @@ The debug images use the same binaries so even though they have a shell, there i
The plugin supports the following configuration parameters:
-| Key | Description |
-| :--- | :--- |
-| `Command` | The command to execute, passed to [popen](https://man7.org/linux/man-pages/man3/popen.3.html) without any additional escaping or processing. Can include pipelines, redirection, command-substitution, or other information. |
-| `Parser` | Specify the name of a parser to interpret the entry as a structured message. |
-| `Interval_Sec` | Polling interval (seconds). |
-| `Interval_NSec` | Polling interval (nanosecond). |
-| `Buf_Size` | Size of the buffer. See [unit sizes](../../administration/configuring-fluent-bit/unit-sizes.md) for allowed values. |
-| `Oneshot` | Only run once at startup. This allows collection of data precedent to Fluent Bit startup (Boolean, default: `false`). |
-| `Exit_After_Oneshot` | Exit as soon as the one-shot command exits. This allows the `exec` plugin to be used as a wrapper for another command, sending the target command's output to any Fluent Bit sink, then exits. (Boolean, default: `false`). |
+| Key | Description |
+|:----------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `Command` | The command to execute, passed to [popen](https://man7.org/linux/man-pages/man3/popen.3.html) without any additional escaping or processing. Can include pipelines, redirection, command-substitution, or other information. |
+| `Parser` | Specify the name of a parser to interpret the entry as a structured message. |
+| `Interval_Sec` | Polling interval (seconds). |
+| `Interval_NSec` | Polling interval (nanosecond). |
+| `Buf_Size` | Size of the buffer. See [unit sizes](../../administration/configuring-fluent-bit/unit-sizes.md) for allowed values. |
+| `Oneshot`             | Only run once at startup. This allows collection of data prior to Fluent Bit startup (Boolean, default: `false`). |
+| `Exit_After_Oneshot`  | Exit as soon as the one-shot command exits. This allows the `exec` plugin to be used as a wrapper for another command, sending the target command's output to any Fluent Bit sink and then exiting (Boolean, default: `false`). |
| `Propagate_Exit_Code` | When exiting due to `Exit_After_Oneshot`, cause Fluent Bit to exit with the exit code of the command exited by this plugin. Follows [shell conventions for exit code propagation](https://www.gnu.org/software/bash/manual/html_node/Exit-Status.html). (Boolean, default: `false`). |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). Default: `false`. |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). Default: `false`. |
## Get started
@@ -45,13 +45,7 @@ fluent-bit -i exec -p 'command=ls /var/log' -o stdout
This should return something like the following:
```text
-Fluent Bit v1.x.x
-* Copyright (C) 2019-2020 The Fluent Bit Authors
-* Copyright (C) 2015-2018 Treasure Data
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-[2018/03/21 17:46:49] [ info] [engine] started
+...
[0] exec.0: [1521622010.013470159, {"exec"=>"ConsoleKit"}]
[1] exec.0: [1521622010.013490313, {"exec"=>"Xorg.0.log"}]
[2] exec.0: [1521622010.013492079, {"exec"=>"Xorg.0.log.old"}]
@@ -59,6 +53,7 @@ Fluent Bit v1.x.x
[4] exec.0: [1521622010.013494707, {"exec"=>"anaconda.log"}]
[5] exec.0: [1521622010.013496016, {"exec"=>"anaconda.program.log"}]
[6] exec.0: [1521622010.013497225, {"exec"=>"anaconda.storage.log"}]
+...
```
### Configuration file
@@ -70,18 +65,18 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: exec
- tag: exec_ls
- command: ls /var/log
- interval_sec: 1
- interval_nsec: 0
- buf_size: 8mb
- oneshot: false
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: exec
+ tag: exec_ls
+ command: ls /var/log
+ interval_sec: 1
+ interval_nsec: 0
+ buf_size: 8mb
+ oneshot: false
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -89,17 +84,17 @@ pipeline:
```text
[INPUT]
- Name exec
- Tag exec_ls
- Command ls /var/log
- Interval_Sec 1
- Interval_NSec 0
- Buf_Size 8mb
- Oneshot false
+ Name exec
+ Tag exec_ls
+ Command ls /var/log
+ Interval_Sec 1
+ Interval_NSec 0
+ Buf_Size 8mb
+ Oneshot false
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -114,17 +109,17 @@ To use Fluent Bit with the `exec` plugin to wrap another command, use the `Exit_
```yaml
pipeline:
- inputs:
- - name: exec
- tag: exec_oneshot_demo
- command: 'for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1'
- oneshot: true
- exit_after_oneshot: true
- propagate_exit_code: true
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: exec
+ tag: exec_oneshot_demo
+ command: 'for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1'
+ oneshot: true
+ exit_after_oneshot: true
+ propagate_exit_code: true
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -132,16 +127,16 @@ pipeline:
```text
[INPUT]
- Name exec
- Tag exec_oneshot_demo
- Command for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1
- Oneshot true
- Exit_After_Oneshot true
- Propagate_Exit_Code true
+ Name exec
+ Tag exec_oneshot_demo
+ Command for s in $(seq 1 10); do echo "count: $s"; sleep 1; done; exit 1
+ Oneshot true
+ Exit_After_Oneshot true
+ Propagate_Exit_Code true
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -150,6 +145,7 @@ pipeline:
Fluent Bit will output:
```text
+...
[0] exec_oneshot_demo: [[1681702172.950574027, {}], {"exec"=>"count: 1"}]
[1] exec_oneshot_demo: [[1681702173.951663666, {}], {"exec"=>"count: 2"}]
[2] exec_oneshot_demo: [[1681702174.953873724, {}], {"exec"=>"count: 3"}]
@@ -160,6 +156,7 @@ Fluent Bit will output:
[7] exec_oneshot_demo: [[1681702179.961715745, {}], {"exec"=>"count: 8"}]
[8] exec_oneshot_demo: [[1681702180.963924140, {}], {"exec"=>"count: 9"}]
[9] exec_oneshot_demo: [[1681702181.965852990, {}], {"exec"=>"count: 10"}]
+...
```
then exits with exit code 1.
@@ -168,7 +165,7 @@ Translation of command exit codes to Fluent Bit exit code follows [the usual she
### Parsing command output
-By default the `exec` plugin emits one message per command output line, with a single field `exec` containing the full message. Use the `Parser` directive to specify the name of a parser configuration to use to process the command input.
+By default, the `exec` plugin emits one message per command output line, with a single field `exec` containing the full message. Use the `Parser` directive to specify the name of a parser configuration to use to process the command input.
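+
+A minimal sketch, assuming a parser named `ls_parser` is already defined in your parsers file:
+
+```yaml
+pipeline:
+  inputs:
+    - name: exec
+      command: ls -l /var/log
+      # Hypothetical parser name: it must exist in your parsers configuration.
+      parser: ls_parser
+
+  outputs:
+    - name: stdout
+      match: '*'
+```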
### Security concerns
diff --git a/pipeline/inputs/fluentbit-metrics.md b/pipeline/inputs/fluentbit-metrics.md
index b3af62b5b..4424780c1 100644
--- a/pipeline/inputs/fluentbit-metrics.md
+++ b/pipeline/inputs/fluentbit-metrics.md
@@ -14,11 +14,11 @@ Metrics collected with Node Exporter Metrics flow through a separate pipeline fr
## Configuration
-| Key | Description | Default |
-| --------------- | --------------------------------------------------------------------------| --------- |
-| `scrape_interval` | The rate at which metrics are collected from the host operating system. | `2` seconds |
-| `scrape_on_start` | Scrape metrics upon start, use to avoid waiting for `scrape_interval` for the first round of metrics. | `false` |
-| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|-------------------|---------------------------------------------------------------------------------------------------------|-------------|
+| `scrape_interval` | The rate at which metrics are collected from the host operating system. | `2` seconds |
+| `scrape_on_start` | Scrape metrics upon start. Use this to avoid waiting for `scrape_interval` before the first round of metrics. | `false` |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
## Get started
@@ -31,20 +31,20 @@ In the following configuration file, the input plugin `node_exporter_metrics` co
```yaml
service:
- flush: 1
- log_level: info
+ flush: 1
+ log_level: info
pipeline:
- inputs:
- - name: fluentbit_metrics
- tag: internal_metrics
- scrape_interval: 2
-
- outputs:
- - name: prometheus_exporter
- match: internal_metrics
- host: 0.0.0.0
- port: 2021
+ inputs:
+ - name: fluentbit_metrics
+ tag: internal_metrics
+ scrape_interval: 2
+
+ outputs:
+ - name: prometheus_exporter
+ match: internal_metrics
+ host: 0.0.0.0
+ port: 2021
```
{% endtab %}
@@ -61,20 +61,19 @@ pipeline:
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
- flush 1
- log_level info
+ flush 1
+ log_level info
[INPUT]
- name fluentbit_metrics
- tag internal_metrics
- scrape_interval 2
+ name fluentbit_metrics
+ tag internal_metrics
+ scrape_interval 2
[OUTPUT]
- name prometheus_exporter
- match internal_metrics
- host 0.0.0.0
- port 2021
-
+ name prometheus_exporter
+ match internal_metrics
+ host 0.0.0.0
+ port 2021
```
{% endtab %}
diff --git a/pipeline/inputs/forward.md b/pipeline/inputs/forward.md
index d7b8ae721..3190bbdbf 100644
--- a/pipeline/inputs/forward.md
+++ b/pipeline/inputs/forward.md
@@ -7,21 +7,21 @@ This plugin implements the input service to listen for Forward messages.
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-|:----|:------------| :------ |
-| `Listen` | Listener network interface. | `0.0.0.0` |
-| `Port` | TCP port to listen for incoming connections. | `24224` |
-| `Unix_Path` | Specify the path to Unix socket to receive a Forward message. If set, `Listen` and `Port` are ignored. | _none_ |
-| `Unix_Perm` | Set the permission of the Unix socket file. If `Unix_Path` isn't set, this parameter is ignored. | _none_ |
-| `Buffer_Max_Size` | Specify the maximum buffer memory size used to receive a Forward message. The value must be according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | `6144000` |
+| Key | Description | Default |
+|:--------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------|
+| `Listen` | Listener network interface. | `0.0.0.0` |
+| `Port` | TCP port to listen for incoming connections. | `24224` |
+| `Unix_Path` | Specify the path to Unix socket to receive a Forward message. If set, `Listen` and `Port` are ignored. | _none_ |
+| `Unix_Perm` | Set the permission of the Unix socket file. If `Unix_Path` isn't set, this parameter is ignored. | _none_ |
+| `Buffer_Max_Size`   | Specify the maximum buffer memory size used to receive a Forward message. The value must conform to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | `6144000` |
| `Buffer_Chunk_Size` | By default, the buffer for incoming Forward messages doesn't allocate the maximum memory allowed; instead, it allocates memory as required. The rounds of allocations are set by `Buffer_Chunk_Size`. The value must conform to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | `1024000` |
-| `Tag_Prefix` | Prefix incoming tag with the defined value.| _none_ |
-| `Tag` | Override the tag of the forwarded events with the defined value.| _none_ |
-| `Shared_Key` | Shared key for secure forward authentication. | _none_ |
-| `Empty_Shared_Key` | Use this option to connect to Fluentd with a zero-length shared key. | `false` |
-| `Self_Hostname` | Hostname for secure forward authentication. | _none_ |
-| `Security.Users` | Specify the username and password pairs for secure forward authentication. | |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| `Tag_Prefix` | Prefix incoming tag with the defined value. | _none_ |
+| `Tag` | Override the tag of the forwarded events with the defined value. | _none_ |
+| `Shared_Key` | Shared key for secure forward authentication. | _none_ |
+| `Empty_Shared_Key` | Use this option to connect to Fluentd with a zero-length shared key. | `false` |
+| `Self_Hostname` | Hostname for secure forward authentication. | _none_ |
+| `Security.Users` | Specify the username and password pairs for secure forward authentication. | |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
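+
+For example, to receive Forward messages over a Unix socket instead of TCP, a minimal sketch (the socket path and permissions are illustrative):
+
+```yaml
+pipeline:
+  inputs:
+    - name: forward
+      # When unix_path is set, listen and port are ignored.
+      unix_path: /tmp/fluent.sock
+      unix_perm: 0600
+
+  outputs:
+    - name: stdout
+      match: '*'
+```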
## Get started
@@ -87,7 +87,7 @@ pipeline:
In Fluent Bit v3 or later, `in_forward` can handle secure forward protocol.
-For using user-password authentication, specify `security.users` at least an one-pair.
+To use user-password authentication, specify at least one pair in `security.users`.
To use a shared key, specify `shared_key` in both the forward output and the forward input.
`self_hostname` can't be set to the same hostname on the fluent servers and clients.
@@ -96,19 +96,19 @@ For using shared key, specify `shared_key` in both of forward output and forward
```yaml
pipeline:
- inputs:
- - name: forward
- listen: 0.0.0.0
- port: 24224
- buffer_chunk_size: 1M
- buffer_max_size: 6M
- security.users: fluentbit changeme
- shared_key: secret
- self_hostname: flb.server.local
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: forward
+ listen: 0.0.0.0
+ port: 24224
+ buffer_chunk_size: 1M
+ buffer_max_size: 6M
+ security.users: fluentbit changeme
+ shared_key: secret
+ self_hostname: flb.server.local
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -116,18 +116,18 @@ pipeline:
```text
[INPUT]
- Name forward
- Listen 0.0.0.0
- Port 24224
- Buffer_Chunk_Size 1M
- Buffer_Max_Size 6M
- Security.Users fluentbit changeme
- Shared_Key secret
- Self_Hostname flb.server.local
+ Name forward
+ Listen 0.0.0.0
+ Port 24224
+ Buffer_Chunk_Size 1M
+ Buffer_Max_Size 6M
+ Security.Users fluentbit changeme
+ Shared_Key secret
+ Self_Hostname flb.server.local
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -150,10 +150,7 @@ fluent-bit -i forward -o stdout
In [Fluent Bit](http://fluentbit.io) you should see the following output:
```text
-Fluent-Bit v0.9.0
-Copyright (C) Treasure Data
-
-[2016/10/07 21:49:40] [ info] [engine] started
-[2016/10/07 21:49:40] [ info] [in_fw] binding 0.0.0.0:24224
+...
[0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]
+...
```
\ No newline at end of file
diff --git a/pipeline/inputs/head.md b/pipeline/inputs/head.md
index 5e94b0deb..25cac72f1 100644
--- a/pipeline/inputs/head.md
+++ b/pipeline/inputs/head.md
@@ -6,17 +6,17 @@ The _Head_ input plugin reads events from the head of a file. Its behavior is si
The plugin supports the following configuration parameters:
-| Key | Description |
-| :-- | :---------- |
-| `File` | Absolute path to the target file. For example: `/proc/uptime`. |
-| `Buf_Size` | Buffer size to read the file. |
-| `Interval_Sec` | Polling interval (seconds). |
-| `Interval_NSec` | Polling interval (nanoseconds). |
-| `Add_Path` | If enabled, the path is appended to each records. Default: `false`. |
-| `Key` | Rename a key. Default: `head`. |
-| `Lines` | Line number to read. If the number N is set, `in_head` reads first N lines like `head(1) -n`. |
-| `Split_line` | If enabled, `in_head` generates key-value pair per line. |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). Default: `false`. |
+| Key | Description |
+|:----------------|:--------------------------------------------------------------------------------------------------------------------------|
+| `File` | Absolute path to the target file. For example: `/proc/uptime`. |
+| `Buf_Size` | Buffer size to read the file. |
+| `Interval_Sec` | Polling interval (seconds). |
+| `Interval_NSec` | Polling interval (nanoseconds). |
+| `Add_Path`      | If enabled, the path is appended to each record. Default: `false`. |
+| `Key`           | Rename a key. Default: `head`. |
+| `Lines`         | Number of lines to read. If the number `N` is set, `in_head` reads the first `N` lines, similar to `head(1) -n`. |
+| `Split_line`    | If enabled, `in_head` generates a key-value pair per line. |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). Default: `false`. |
### Split line mode
diff --git a/pipeline/inputs/health.md b/pipeline/inputs/health.md
index bf4ed8fb6..2501fe80c 100644
--- a/pipeline/inputs/health.md
+++ b/pipeline/inputs/health.md
@@ -6,16 +6,16 @@ The _Health_ input plugin lets you check how healthy a TCP server is. It checks
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Host` | Name of the target host or IP address. | _none_ |
-| `Port` | TCP port where to perform the connection request. | _none_ |
-| `Interval_Sec` | Interval in seconds between the service checks.| `1` |
-| `Internal_Nsec` | Specify a nanoseconds interval for service checks. Works in conjunction with the `Interval_Sec` configuration key. | `0` |
-| `Alert` | If enabled, it generates messages if the target TCP service is down. | `false` |
-| `Add_Host` | If enabled, hostname is appended to each records. | `false` |
-| `Add_Port` | If enabled, port number is appended to each records. | `false` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:----------------|:-------------------------------------------------------------------------------------------------------------------|:--------|
+| `Host` | Name of the target host or IP address. | _none_ |
+| `Port` | TCP port where to perform the connection request. | _none_ |
+| `Interval_Sec` | Interval in seconds between the service checks. | `1` |
+| `Interval_NSec` | Specify a nanosecond interval for service checks. Works in conjunction with the `Interval_Sec` configuration key. | `0` |
+| `Alert` | If enabled, it generates messages if the target TCP service is down. | `false` |
+| `Add_Host`      | If enabled, the hostname is appended to each record. | `false` |
+| `Add_Port`      | If enabled, the port number is appended to each record. | `false` |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
## Get started
@@ -38,16 +38,16 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: health
- host: 127.0.0.1
- port: 80
- interval_sec: 1
- interval_nsec: 0
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: health
+ host: 127.0.0.1
+ port: 80
+ interval_sec: 1
+ interval_nsec: 0
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -55,15 +55,15 @@ pipeline:
```text
[INPUT]
- Name health
- Host 127.0.0.1
- Port 80
- Interval_Sec 1
- Interval_NSec 0
+ Name health
+ Host 127.0.0.1
+ Port 80
+ Interval_Sec 1
+ Interval_NSec 0
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -76,30 +76,10 @@ Once Fluent Bit is running, you will see some random values in the output interf
```shell
$ fluent-bit -i health -p host=127.0.0.1 -p port=80 -o stdout
-Fluent Bit v4.0.0
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/06/30 16:12:06] [ info] [fluent bit] version=4.0.0, commit=3a91b155d6, pid=91577
-[2025/06/30 16:12:06] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/06/30 16:12:06] [ info] [simd ] disabled
-[2025/06/30 16:12:06] [ info] [cmetrics] version=0.9.9
-[2025/06/30 16:12:06] [ info] [ctraces ] version=0.6.2
-[2025/06/30 16:12:06] [ info] [input:health:health.0] initializing
-[2025/06/30 16:12:06] [ info] [input:health:health.0] storage_strategy='memory' (memory only)
-[2025/06/30 16:12:06] [ info] [sp] stream processor started
-[2025/06/30 16:12:06] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] health.0: [1624145988.305640385, {"alive"=>true}]
[1] health.0: [1624145989.305575360, {"alive"=>true}]
[2] health.0: [1624145990.306498573, {"alive"=>true}]
[3] health.0: [1624145991.305595498, {"alive"=>true}]
+...
```
\ No newline at end of file
diff --git a/pipeline/inputs/http.md b/pipeline/inputs/http.md
index 7dc7c1f20..7256c34c8 100644
--- a/pipeline/inputs/http.md
+++ b/pipeline/inputs/http.md
@@ -5,16 +5,16 @@ The _HTTP_ input plugin lets Fluent Bit open an HTTP port that you can then rout
## Configuration parameters
-| Key | Description | Default |
-| --- | ----------- | ------- |
-| `listen` | The address to listen on. | `0.0.0.0` |
-| `port` | The port for Fluent Bit to listen on. | `9880` |
-| `tag_key` | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. | _none_ |
-| `buffer_max_size` | Specify the maximum buffer size in KB to receive a JSON message. | `4M` |
-| `buffer_chunk_size` | This sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space available by `buffer_max_size`. | `512K` |
-| `successful_response_code` | Allows setting successful response code. Supported values: `200`, `201`, and `204` | `201` |
-| `success_header` | Add an HTTP header key/value pair on success. Multiple headers can be set. For example, `X-Custom custom-answer` | _none_ |
-| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|-----------|
+| `listen` | The address to listen on. | `0.0.0.0` |
+| `port` | The port for Fluent Bit to listen on. | `9880` |
+| `tag_key`                  | Specify the key name to overwrite a tag. If set, the tag will be overwritten by the value of the key. | _none_ |
+| `buffer_max_size`          | Specify the maximum buffer size in KB to receive a JSON message. | `4M` |
+| `buffer_chunk_size`        | This sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space available by `buffer_max_size`. | `512K` |
+| `successful_response_code` | Allows setting the successful response code. Supported values: `200`, `201`, and `204`. | `201` |
+| `success_header`           | Add an HTTP header key/value pair on success. Multiple headers can be set. For example, `X-Custom custom-answer`. | _none_ |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
### TLS / SSL
@@ -47,14 +47,14 @@ curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application
```yaml
pipeline:
- inputs:
- - name: http
- listen: 0.0.0.0
- port: 8888
-
- outputs:
- - name: stdout
- match: app.log
+ inputs:
+ - name: http
+ listen: 0.0.0.0
+ port: 8888
+
+ outputs:
+ - name: stdout
+ match: app.log
```
{% endtab %}
@@ -62,13 +62,13 @@ pipeline:
```text
[INPUT]
- name http
- listen 0.0.0.0
- port 8888
+ name http
+ listen 0.0.0.0
+ port 8888
[OUTPUT]
- name stdout
- match app.log
+ name stdout
+ match app.log
```
{% endtab %}
@@ -87,14 +87,14 @@ curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application
```yaml
pipeline:
- inputs:
- - name: http
- listen: 0.0.0.0
- port: 8888
-
- outputs:
- - name: stdout
- match: http.0
+ inputs:
+ - name: http
+ listen: 0.0.0.0
+ port: 8888
+
+ outputs:
+ - name: stdout
+ match: http.0
```
{% endtab %}
@@ -102,13 +102,13 @@ pipeline:
```text
[INPUT]
- name http
- listen 0.0.0.0
- port 8888
+ name http
+ listen 0.0.0.0
+ port 8888
[OUTPUT]
- name stdout
- match http.0
+ name stdout
+ match http.0
```
{% endtab %}
@@ -131,15 +131,15 @@ curl -d '{"key1":"value1","key2":"value2"}' -XPOST -H "content-type: application
```yaml
pipeline:
- inputs:
- - name: http
- listen: 0.0.0.0
- port: 8888
- tag_key: key1
-
- outputs:
- - name: stdout
- match: value1
+ inputs:
+ - name: http
+ listen: 0.0.0.0
+ port: 8888
+ tag_key: key1
+
+ outputs:
+ - name: stdout
+ match: value1
```
{% endtab %}
@@ -147,14 +147,14 @@ pipeline:
```text
[INPUT]
- name http
- listen 0.0.0.0
- port 8888
- tag_key key1
+ name http
+ listen 0.0.0.0
+ port 8888
+ tag_key key1
[OUTPUT]
- name stdout
- match value1
+ name stdout
+ match value1
```
{% endtab %}
@@ -169,11 +169,11 @@ The `success_header` parameter lets you set multiple HTTP headers on success. Th
```yaml
pipeline:
- inputs:
- - name: http
- success_header:
- - X-Custom custom-answer
- - X-Another another-answer
+ inputs:
+ - name: http
+ success_header:
+ - X-Custom custom-answer
+ - X-Another another-answer
```
{% endtab %}
@@ -181,9 +181,9 @@ pipeline:
```text
[INPUT]
- name http
- success_header X-Custom custom-answer
- success_header X-Another another-answer
+ name http
+ success_header X-Custom custom-answer
+ success_header X-Another another-answer
```
{% endtab %}
@@ -202,14 +202,14 @@ curl -d @app.log -XPOST -H "content-type: application/json" http://localhost:888
```yaml
pipeline:
- inputs:
- - name: http
- listen: 0.0.0.0
- port: 8888
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: http
+ listen: 0.0.0.0
+ port: 8888
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -217,13 +217,13 @@ pipeline:
```text
[INPUT]
- name http
- listen 0.0.0.0
- port 8888
+ name http
+ listen 0.0.0.0
+ port 8888
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
diff --git a/pipeline/inputs/kafka.md b/pipeline/inputs/kafka.md
index 0645f00ad..d59fa78fc 100644
--- a/pipeline/inputs/kafka.md
+++ b/pipeline/inputs/kafka.md
@@ -8,17 +8,17 @@ This plugin uses the official [librdkafka C library](https://github.com/edenhill
## Configuration parameters
-| Key | Description | default |
-| :--- | :--- | :--- |
-| `brokers` | Single or multiple list of Kafka Brokers. For example: `192.168.1.3:9092`, `192.168.1.4:9092`. | _none_ |
-| `topics` | Single entry or list of comma-separated topics (`,`) that Fluent Bit will subscribe to. | _none_ |
-| `format` | Serialization format of the messages. If set to `json`, the payload will be parsed as JSON. | _none_ |
-| `client_id` | Client id passed to librdkafka. | _none_ |
-| `group_id` | Group id passed to librdkafka. | `fluent-bit` |
-| `poll_ms` | Kafka brokers polling interval in milliseconds. | `500` |
-| `Buffer_Max_Size` | Specify the maximum size of buffer per cycle to poll Kafka messages from subscribed topics. To increase throughput, specify larger size. | `4M` |
-| `rdkafka.{property}` | `{property}` can be any [librdkafka properties](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md) | _none_ |
-| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key                  | Description                                                                                                                                 | Default      |
+|:---------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-------------|
+| `brokers`            | Single entry or list of Kafka brokers. For example: `192.168.1.3:9092`, `192.168.1.4:9092`. | _none_ |
+| `topics`             | Single entry or list of comma-separated topics (`,`) that Fluent Bit will subscribe to. | _none_ |
+| `format`             | Serialization format of the messages. If set to `json`, the payload will be parsed as JSON. | _none_ |
+| `client_id`          | Client ID passed to librdkafka. | _none_ |
+| `group_id`           | Group ID passed to librdkafka. | `fluent-bit` |
+| `poll_ms`            | Kafka brokers polling interval in milliseconds. | `500` |
+| `Buffer_Max_Size`    | Specify the maximum size of buffer per cycle to poll Kafka messages from subscribed topics. To increase throughput, specify a larger size. | `4M` |
+| `rdkafka.{property}` | `{property}` can be any [librdkafka properties](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md) | _none_ |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
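+
+Any [librdkafka property](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md) can be passed with the `rdkafka.` prefix. A minimal sketch (the broker address and property value are illustrative):
+
+```yaml
+pipeline:
+  inputs:
+    - name: kafka
+      brokers: 192.168.1.3:9092
+      topics: some-topic
+      # Any librdkafka property can be set using the "rdkafka." prefix.
+      rdkafka.enable.auto.commit: false
+
+  outputs:
+    - name: stdout
+      match: '*'
+```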
## Get started
@@ -41,15 +41,15 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: kafka
- brokers: 192.168.1.3:9092
- topics: some-topic
- poll_ms: 100
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: kafka
+ brokers: 192.168.1.3:9092
+ topics: some-topic
+ poll_ms: 100
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -57,14 +57,14 @@ pipeline:
```text
[INPUT]
- Name kafka
- Brokers 192.168.1.3:9092
- Topics some-topic
- poll_ms 100
+ Name kafka
+ Brokers 192.168.1.3:9092
+ Topics some-topic
+ poll_ms 100
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -79,23 +79,23 @@ The Fluent Bit source repository contains a full example of using Fluent Bit to
```yaml
pipeline:
- inputs:
- - name: kafka
- brokers: kafka-broker:9092
- topics: fb-source
- poll_ms: 100
- format: json
-
- filters:
- - name: lua
- match: '*'
- script: kafka.lua
- call: modify_kafka_message
-
- outputs:
- - name: kafka
- brokers: kafka-broker:9092
- topics: fb-sink
+ inputs:
+ - name: kafka
+ brokers: kafka-broker:9092
+ topics: fb-source
+ poll_ms: 100
+ format: json
+
+ filters:
+ - name: lua
+ match: '*'
+ script: kafka.lua
+ call: modify_kafka_message
+
+ outputs:
+ - name: kafka
+ brokers: kafka-broker:9092
+ topics: fb-sink
```
{% endtab %}
@@ -103,22 +103,22 @@ pipeline:
```text
[INPUT]
- Name kafka
- brokers kafka-broker:9092
- topics fb-source
- poll_ms 100
- format json
+ Name kafka
+ brokers kafka-broker:9092
+ topics fb-source
+ poll_ms 100
+ format json
[FILTER]
- Name lua
- Match *
- script kafka.lua
- call modify_kafka_message
+ Name lua
+ Match *
+ script kafka.lua
+ call modify_kafka_message
[OUTPUT]
- Name kafka
- brokers kafka-broker:9092
- topics fb-sink
+ Name kafka
+ brokers kafka-broker:9092
+ topics fb-sink
```
{% endtab %}
@@ -160,10 +160,10 @@ If you are compiling Fluent Bit from source, ensure the following requirements a
### Configuration Parameters
-| Property | Description | Type | Required |
-|---------------------------|-----------------------------------------------------|---------|-------------------------------|
-| `aws_msk_iam` | Enable AWS MSK IAM authentication | Boolean | No (default: false) |
-| `aws_msk_iam_cluster_arn` | Full ARN of the MSK cluster for region extraction | String | Yes (if `aws_msk_iam` is true)|
+| Property | Description | Type | Required |
+|---------------------------|---------------------------------------------------|---------|--------------------------------|
+| `aws_msk_iam` | Enable AWS MSK IAM authentication | Boolean | No (default: false) |
+| `aws_msk_iam_cluster_arn` | Full ARN of the MSK cluster for region extraction | String | Yes (if `aws_msk_iam` is true) |
### Configuration Example
@@ -190,20 +190,20 @@ The AWS credentials used by Fluent Bit must have permission to connect to your M
```json
{
- "Version": "2012-10-17",
- "Statement": [
- {
- "Sid": "VisualEditor0",
- "Effect": "Allow",
- "Action": [
- "kafka-cluster:*",
- "kafka-cluster:DescribeCluster",
- "kafka-cluster:ReadData",
- "kafka-cluster:DescribeTopic",
- "kafka-cluster:Connect"
- ],
- "Resource": "*"
- }
- ]
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "VisualEditor0",
+ "Effect": "Allow",
+ "Action": [
+ "kafka-cluster:*",
+ "kafka-cluster:DescribeCluster",
+ "kafka-cluster:ReadData",
+ "kafka-cluster:DescribeTopic",
+ "kafka-cluster:Connect"
+ ],
+ "Resource": "*"
+ }
+ ]
}
```
\ No newline at end of file
diff --git a/pipeline/inputs/kernel-logs.md b/pipeline/inputs/kernel-logs.md
index d9feeec89..aba622fe3 100644
--- a/pipeline/inputs/kernel-logs.md
+++ b/pipeline/inputs/kernel-logs.md
@@ -4,10 +4,10 @@ The _kmsg_ input plugin reads the Linux Kernel log buffer from the beginning. It
## Configuration parameters
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Prio_Level` | The log level to filter. The kernel log is dropped if its priority is more than `prio_level`. Allowed values are `0`-`8`. `8` means all logs are saved. | `8` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:-------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------|:--------|
+| `Prio_Level` | The log level to filter. A kernel log message is dropped if its priority is greater than `prio_level`. Allowed values are `0`-`8`, where `8` means all logs are saved. | `8`     |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
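+
+For example, a minimal sketch that keeps only kernel messages at priority `3` (error) or more severe; the chosen level is illustrative:
+
+```yaml
+pipeline:
+  inputs:
+    - name: kmsg
+      tag: kernel
+      # drop any message whose priority value is greater than 3
+      prio_level: 3
+
+  outputs:
+    - name: stdout
+      match: '*'
+```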
## Get started
@@ -22,28 +22,7 @@ fluent-bit -i kmsg -t kernel -o stdout -m '*'
Which returns output similar to:
```text
-Fluent Bit v4.0.0
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/06/30 16:12:06] [ info] [fluent bit] version=4.0.0, commit=3a91b155d6, pid=91577
-[2025/06/30 16:12:06] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/06/30 16:12:06] [ info] [simd ] disabled
-[2025/06/30 16:12:06] [ info] [cmetrics] version=0.9.9
-[2025/06/30 16:12:06] [ info] [ctraces ] version=0.6.2
-[2025/06/30 16:12:06] [ info] [input:health:health.0] initializing
-[2025/06/30 16:12:06] [ info] [input:health:health.0] storage_strategy='memory' (memory only)
-[2025/06/30 16:12:06] [ info] [sp] stream processor started
-[2025/06/30 16:12:06] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] kernel: [1463421823, {"priority"=>3, "sequence"=>1814, "sec"=>11706, "usec"=>732233, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[1] kernel: [1463421823, {"priority"=>3, "sequence"=>1815, "sec"=>11706, "usec"=>732300, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
[2] kernel: [1463421829, {"priority"=>3, "sequence"=>1816, "sec"=>11712, "usec"=>729728, "msg"=>"ERROR @wl_cfg80211_get_station : Wrong Mac address, mac = 34:a8:4e:d3:40:ec profile =20:3a:07:9e:4a:ac"}]
@@ -62,13 +41,13 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: kmsg
- tag: kernel
+ inputs:
+ - name: kmsg
+ tag: kernel
- outputs:
- - name: stdout
- match: '*'
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -76,12 +55,12 @@ pipeline:
```text
[INPUT]
- Name kmsg
- Tag kernel
+ Name kmsg
+ Tag kernel
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/kubernetes-events.md b/pipeline/inputs/kubernetes-events.md
index 976db1812..ad49ad2ba 100644
--- a/pipeline/inputs/kubernetes-events.md
+++ b/pipeline/inputs/kubernetes-events.md
@@ -9,23 +9,23 @@ Kubernetes exports events through the API server. This input plugin lets you ret
## Configuration
-| Key | Description | Default |
-| --- | ----------- | ------- |
-| `db` | Set a database file to keep track of recorded Kubernetes events. | _none_ |
-| `db.sync` | Set a database sync method. Accepted values: `extra`, `full`, `normal`, `off`. | `normal` |
-| `interval_sec` | Set the reconnect interval (seconds). | `0` |
-| `interval_nsec` | Set the reconnect interval (sub seconds: nanoseconds). | `500000000` |
-| `kube_url` | API Server endpoint. | `https://kubernetes.default.svc` |
-| `kube_ca_file` | Kubernetes TLS CA file. | `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` |
-| `kube_ca_path` | Kubernetes TLS ca path. | _none_ |
-| `kube_token_file` | Kubernetes authorization token file. | `/var/run/secrets/kubernetes.io/serviceaccount/token` |
-| `kube_token_ttl` | Kubernetes token time to live, until it's read again from the token file. | `10m` |
-| `kube_request_limit` | Kubernetes limit parameter for events query, no limit applied when set to `0`. | `0` |
-| `kube_retention_time` | Kubernetes retention time for events. | `1h` |
-| `kube_namespace` | Kubernetes namespace to query events from. | `all` |
-| `tls.debug` | Debug level between `0` (nothing) and `4` (every detail). | `0` |
-| `tls.verify` | Enable or disable verification of TLS peer certificate. | `On` |
-| `tls.vhost` | Set optional TLS virtual host. | _none_ |
+| Key | Description | Default |
+|-----------------------|--------------------------------------------------------------------------------|--------------------------------------------------------|
+| `db` | Set a database file to keep track of recorded Kubernetes events. | _none_ |
+| `db.sync` | Set a database sync method. Accepted values: `extra`, `full`, `normal`, `off`. | `normal` |
+| `interval_sec` | Set the reconnect interval (seconds). | `0` |
+| `interval_nsec`       | Set the subsecond component of the reconnect interval (nanoseconds).            | `500000000`                                             |
+| `kube_url` | API Server endpoint. | `https://kubernetes.default.svc` |
+| `kube_ca_file` | Kubernetes TLS CA file. | `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` |
+| `kube_ca_path`        | Kubernetes TLS CA path.                                                         | _none_                                                  |
+| `kube_token_file` | Kubernetes authorization token file. | `/var/run/secrets/kubernetes.io/serviceaccount/token` |
+| `kube_token_ttl` | Kubernetes token time to live, until it's read again from the token file. | `10m` |
+| `kube_request_limit`  | Kubernetes limit parameter for events query; no limit is applied when set to `0`. | `0`                                                     |
+| `kube_retention_time` | Kubernetes retention time for events. | `1h` |
+| `kube_namespace` | Kubernetes namespace to query events from. | `all` |
+| `tls.debug` | Debug level between `0` (nothing) and `4` (every detail). | `0` |
+| `tls.verify` | Enable or disable verification of TLS peer certificate. | `On` |
+| `tls.vhost` | Set optional TLS virtual host. | _none_ |
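+
+As an illustrative sketch (the database path, namespace, and retention value are assumptions, not defaults), recorded events can be tracked across restarts and limited to a single namespace:
+
+```yaml
+pipeline:
+  inputs:
+    - name: kubernetes_events
+      tag: k8s_events
+      # remember already-delivered events across restarts
+      db: /var/log/flb_k8s_events.db
+      kube_namespace: default
+      kube_retention_time: 2h
+
+  outputs:
+    - name: stdout
+      match: '*'
+```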
In Fluent Bit 3.1 or later, this plugin uses a Kubernetes watch stream instead of polling. In versions earlier than 3.1, the interval parameters are used for reconnecting the Kubernetes watch stream.
@@ -49,18 +49,18 @@ In the following configuration file, the Kubernetes events plugin collects event
```yaml
service:
- flush: 1
- log_level: info
+ flush: 1
+ log_level: info
pipeline:
- inputs:
- - name: kubernetes_events
- tag: k8s_events
- kube_url: https://kubernetes.default.svc
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: kubernetes_events
+ tag: k8s_events
+ kube_url: https://kubernetes.default.svc
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -68,17 +68,17 @@ pipeline:
```text
[SERVICE]
- flush 1
- log_level info
+ flush 1
+ log_level info
[INPUT]
- name kubernetes_events
- tag k8s_events
- kube_url https://kubernetes.default.svc
+ name kubernetes_events
+ tag k8s_events
+ kube_url https://kubernetes.default.svc
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
diff --git a/pipeline/inputs/memory-metrics.md b/pipeline/inputs/memory-metrics.md
index 1ece3cbfd..e59662622 100644
--- a/pipeline/inputs/memory-metrics.md
+++ b/pipeline/inputs/memory-metrics.md
@@ -17,33 +17,12 @@ fluent-bit -i mem -t memory -o stdout -m '*'
Which outputs information similar to:
```text
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
-[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/01 14:44:47] [ info] [simd ] disabled
-[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
-[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
-[2025/07/01 14:44:47] [ info] [sp] stream processor started
-[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] memory: [[1751381087.225589224, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
[0] memory: [[1751381088.228411537, {}], {"Mem.total"=>3986708, "Mem.used"=>560708, "Mem.free"=>3426000, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
[0] memory: [[1751381089.225600084, {}], {"Mem.total"=>3986708, "Mem.used"=>561480, "Mem.free"=>3425228, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
[0] memory: [[1751381090.228345064, {}], {"Mem.total"=>3986708, "Mem.used"=>561480, "Mem.free"=>3425228, "Swap.total"=>0, "Swap.used"=>0, "Swap.free"=>0}]
+...
```
## Threading
@@ -60,13 +39,13 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: mem
- tag: memory
+ inputs:
+ - name: mem
+ tag: memory
- outputs:
- - name: stdout
- match: '*'
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -74,12 +53,12 @@ pipeline:
```text
[INPUT]
- Name mem
- Tag memory
+ Name mem
+ Tag memory
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/mqtt.md b/pipeline/inputs/mqtt.md
index 8ec7acf9f..fd08f7384 100644
--- a/pipeline/inputs/mqtt.md
+++ b/pipeline/inputs/mqtt.md
@@ -6,12 +6,12 @@ The _MQTT_ input plugin retrieves messages and data from MQTT control packets ov
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :---------- | :------------------------------------------------------------- | :------ |
-| `Listen` | Listener network interface. | `0.0.0.0` |
-| `Port` | TCP port where listening for connections. | `1883` |
-| `Payload_Key` | Specify the key where the payload key/value will be preserved. | _none_ |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:--------------|:--------------------------------------------------------------------------------------------------------|:----------|
+| `Listen` | Listener network interface. | `0.0.0.0` |
+| `Port`        | TCP port to listen on for connections.                                                                    | `1883`    |
+| `Payload_Key` | Specify the key under which the payload key/value pairs will be preserved.                                | _none_    |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
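+
+For example, a minimal sketch that preserves the MQTT payload under a custom key (the key name is illustrative):
+
+```yaml
+pipeline:
+  inputs:
+    - name: mqtt
+      listen: 0.0.0.0
+      port: 1883
+      # keep the original message payload nested under the 'payload' key
+      payload_key: payload
+
+  outputs:
+    - name: stdout
+      match: '*'
+```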
## Get started
@@ -30,30 +30,9 @@ fluent-bit -i mqtt -t data -o stdout -m '*'
Returns a response like the following:
```text
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
-[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/01 14:44:47] [ info] [simd ] disabled
-[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
-[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
-[2025/07/01 14:44:47] [ info] [sp] stream processor started
-[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] data: [1463775773, {"topic"=>"some/topic", "key1"=>123, "key2"=>456}]
+...
```
The following command line will send a message to the MQTT input plugin:
@@ -71,15 +50,15 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: mqtt
- tag: data
- listen: 0.0.0.0
- port: 1883
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: mqtt
+ tag: data
+ listen: 0.0.0.0
+ port: 1883
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -87,14 +66,14 @@ pipeline:
```text
[INPUT]
- Name mqtt
- Tag data
- Listen 0.0.0.0
- Port 1883
+ Name mqtt
+ Tag data
+ Listen 0.0.0.0
+ Port 1883
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/network-io-metrics.md b/pipeline/inputs/network-io-metrics.md
index 1ccf19ecd..f36aad3ec 100644
--- a/pipeline/inputs/network-io-metrics.md
+++ b/pipeline/inputs/network-io-metrics.md
@@ -8,14 +8,14 @@ The Network I/O metrics plugin creates metrics that are log-based, such as JSON
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Interface` | Specify the network interface to monitor. For example, `eth0`. | _none_ |
-| `Interval_Sec` | Polling interval (seconds). | `1` |
-| `Interval_NSec` | Polling interval (nanosecond). | `0` |
-| `Verbose` | If true, gather metrics precisely. | `false` |
-| `Test_At_Init` | If true, testing if the network interface is valid at initialization. | `false` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:----------------|:--------------------------------------------------------------------------------------------------------|:--------|
+| `Interface` | Specify the network interface to monitor. For example, `eth0`. | _none_ |
+| `Interval_Sec` | Polling interval (seconds). | `1` |
+| `Interval_NSec` | Polling interval (nanosecond). | `0` |
+| `Verbose`       | If enabled, gathers metrics precisely.                                                                    | `false` |
+| `Test_At_Init`  | If enabled, tests whether the network interface is valid at initialization.                               | `false` |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
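+
+For example, a minimal sketch that validates the interface at startup (the interface name and interval are illustrative):
+
+```yaml
+pipeline:
+  inputs:
+    - name: netif
+      interface: eth0
+      interval_sec: 5
+      # fail at initialization if eth0 doesn't exist, instead of at runtime
+      test_at_init: true
+
+  outputs:
+    - name: stdout
+      match: '*'
+```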
## Get started
@@ -32,33 +32,12 @@ fluent-bit -i netif -p interface=eth0 -o stdout
Which returns something like the following:
```text
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
-[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/01 14:44:47] [ info] [simd ] disabled
-[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
-[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
-[2025/07/01 14:44:47] [ info] [sp] stream processor started
-[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] netif.0: [1499524459.001698260, {"eth0.rx.bytes"=>89769869, "eth0.rx.packets"=>73357, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>4256474, "eth0.tx.packets"=>24293, "eth0.tx.errors"=>0}]
[1] netif.0: [1499524460.002541885, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[2] netif.0: [1499524461.001142161, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
[3] netif.0: [1499524462.002612971, {"eth0.rx.bytes"=>98, "eth0.rx.packets"=>1, "eth0.rx.errors"=>0, "eth0.tx.bytes"=>98, "eth0.tx.packets"=>1, "eth0.tx.errors"=>0}]
+...
```
### Configuration file
@@ -70,16 +49,16 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: netif
- tag: netif
- interval_sec: 1
- interval_nsec: 0
- interface: eth0
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: netif
+ tag: netif
+ interval_sec: 1
+ interval_nsec: 0
+ interface: eth0
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -87,15 +66,15 @@ pipeline:
```text
[INPUT]
- Name netif
- Tag netif
- Interval_Sec 1
- Interval_NSec 0
- Interface eth0
+ Name netif
+ Tag netif
+ Interval_Sec 1
+ Interval_NSec 0
+ Interface eth0
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/nginx.md b/pipeline/inputs/nginx.md
index 1de72081a..3e8e6f0e8 100644
--- a/pipeline/inputs/nginx.md
+++ b/pipeline/inputs/nginx.md
@@ -6,13 +6,13 @@ The _NGINX Exporter metrics_ input plugin scrapes metrics from the NGINX stub st
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Host` | Name of the target host or IP address. | `localhost` |
-| `Port` | Port of the target Nginx service to connect to. | `80` |
-| `Status_URL` | The URL of the stub status Handler. | `/status` |
-| `Nginx_Plus` | Turn on NGINX plus mode. | `true` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:-------------|:--------------------------------------------------------------------------------------------------------|:------------|
+| `Host` | Name of the target host or IP address. | `localhost` |
+| `Port`       | Port of the target NGINX service to connect to.                                                           | `80`        |
+| `Status_URL` | The URL of the stub status handler.                                                                       | `/status`   |
+| `Nginx_Plus` | Turn on NGINX Plus mode.                                                                                  | `true`      |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
## Get started
@@ -20,17 +20,17 @@ NGINX must be configured with a location that invokes the stub status handler. H
```text
server {
- listen 80;
- listen [::]:80;
- server_name localhost;
- location / {
- root /usr/share/nginx/html;
- index index.html index.htm;
- }
- // configure the stub status handler.
- location /status {
- stub_status;
- }
+ listen 80;
+ listen [::]:80;
+ server_name localhost;
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ }
+    # Configure the stub status handler.
+ location /status {
+ stub_status;
+ }
}
```
@@ -41,19 +41,19 @@ NGINX Plus.
```text
server {
- listen 80;
- listen [::]:80;
- server_name localhost;
-
- # enable /api/ location with appropriate access control in order
- # to make use of NGINX Plus API
- #
- location /api/ {
- api write=on;
- # configure to allow requests from the server running fluent-bit
- allow 192.168.1.*;
- deny all;
- }
+ listen 80;
+ listen [::]:80;
+ server_name localhost;
+
+ # enable /api/ location with appropriate access control in order
+ # to make use of NGINX Plus API
+ #
+ location /api/ {
+ api write=on;
+ # configure to allow requests from the server running fluent-bit
+ allow 192.168.1.*;
+ deny all;
+ }
}
```
@@ -81,16 +81,16 @@ In your main configuration file append the following:
```yaml
pipeline:
- inputs:
- - name: nginx_metrics
- nginx_plus: off
- host: 127.0.0.1
- port: 80
- status_URL: /status
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: nginx_metrics
+ nginx_plus: off
+ host: 127.0.0.1
+ port: 80
+ status_URL: /status
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -98,15 +98,15 @@ pipeline:
```text
[INPUT]
- Name nginx_metrics
- Nginx_Plus off
- Host 127.0.0.1
- Port 80
- Status_URL /status
+ Name nginx_metrics
+ Nginx_Plus off
+ Host 127.0.0.1
+ Port 80
+ Status_URL /status
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -119,17 +119,17 @@ And for NGINX Plus API:
```yaml
pipeline:
- inputs:
- - name: nginx_metrics
- nginx_plus: on
- host: 127.0.0.1
- port: 80
- status_URL: /api
+ inputs:
+ - name: nginx_metrics
+ nginx_plus: on
+ host: 127.0.0.1
+ port: 80
+      status_URL: /api
+
-
- outputs:
- - name: stdout
- match: '*'
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -137,15 +136,15 @@ pipeline:
```text
[INPUT]
- Name nginx_metrics
- Nginx_Plus on
- Host 127.0.0.1
- Port 80
- Status_URL /api
+ Name nginx_metrics
+ Nginx_Plus on
+ Host 127.0.0.1
+ Port 80
+ Status_URL /api
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -162,29 +161,7 @@ fluent-bit -i nginx_metrics -p host=127.0.0.1 -p nginx_plus=off -o stdout -p mat
Which should return something like the following:
```text
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
-[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/01 14:44:47] [ info] [simd ] disabled
-[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
-[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
-[2025/07/01 14:44:47] [ info] [sp] stream processor started
-[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
+...
2021-10-14T19:37:37.228691854Z nginx_connections_accepted = 788253884
2021-10-14T19:37:37.228691854Z nginx_connections_handled = 788253884
2021-10-14T19:37:37.228691854Z nginx_http_requests_total = 42045501
@@ -193,6 +170,7 @@ ______ _ _ ______ _ _ ___ _____
2021-10-14T19:37:37.228691854Z nginx_connections_writing = 1
2021-10-14T19:37:37.228691854Z nginx_connections_waiting = 2008
2021-10-14T19:37:35.229919621Z nginx_up = 1
+...
```
## Exported metrics
diff --git a/pipeline/inputs/node-exporter-metrics.md b/pipeline/inputs/node-exporter-metrics.md
index 1c9fb56f5..c96f27f74 100644
--- a/pipeline/inputs/node-exporter-metrics.md
+++ b/pipeline/inputs/node-exporter-metrics.md
@@ -17,7 +17,7 @@ This plugin is generally supported on Linux-based operating systems, with macOS
## Configuration
-`scrape_interval` sets the default for all scrapes. To set granular scrape intervals, set the specific interval. For example, `collector.cpu.scrape_interval`. When using a granular scrape interval, if a value greater than `0` is used, it overrides the global default. Otherwise the global default is used.
+`scrape_interval` sets the default for all scrapes. To set granular scrape intervals, set the specific interval. For example, `collector.cpu.scrape_interval`. When using a granular scrape interval, if a value greater than `0` is used, it overrides the global default. Otherwise, the global default is used.
The plugin top-level `scrape_interval` setting is the global default. Any custom settings for individual `scrape_intervals` override that specific metric scraping interval.
@@ -27,36 +27,36 @@ Overridden intervals only change the collection interval, not the interval for p
For example, if the global interval is set to `5` and an override interval of `60` is used, the published metrics will be reported every five seconds. However, the specific collector will stay the same for 60 seconds until it's collected again.
-This helps with downsampling when collecting metrics.
-
-| Key | Description | Default |
-| --------------- | ---------------------------------------------------------------------- | --------- |
-| `scrape_interval` | The rate in seconds at which metrics are collected from the host operating system. | `5` |
-| `path.procfs` | The mount point used to collect process information and metrics. | `/proc/` |
-| `path.sysfs` | The path in the filesystem used to collect system metrics. | `/sys/` |
-| `collector.cpu.scrape_interval` | The rate in seconds at which `cpu` metrics are collected from the host operating system. | `0` |
-| `collector.cpufreq.scrape_interval` | The rate in seconds at which `cpufreq` metrics are collected from the host operating system. | `0` |
-| `collector.meminfo.scrape_interval` | The rate in seconds at which `meminfo` metrics are collected from the host operating system. | `0` |
-| `collector.diskstats.scrape_interval` | The rate in seconds at which `diskstats` metrics are collected from the host operating system. | `0` |
-| `collector.filesystem.scrape_interval` | The rate in seconds at which `filesystem` metrics are collected from the host operating system. | `0` |
-| `collector.uname.scrape_interval` | The rate in seconds at which `uname` metrics are collected from the host operating system. | `0` |
-| `collector.stat.scrape_interval` | The rate in seconds at which `stat` metrics are collected from the host operating system. | `0` |
-| `collector.time.scrape_interval` | The rate in seconds at which `time` metrics are collected from the host operating system. | `0` |
-| `collector.loadavg.scrape_interval` | The rate in seconds at which `loadavg` metrics are collected from the host operating system. | `0` |
-| `collector.vmstat.scrape_interval` | The rate in seconds at which `vmstat` metrics are collected from the host operating system. | `0` |
-| `collector.thermal_zone.scrape_interval` | The rate in seconds at which `thermal_zone` metrics are collected from the host operating system. | `0` |
-| `collector.filefd.scrape_interval` | The rate in seconds at which `filefd` metrics are collected from the host operating system. | `0` |
-| `collector.nvme.scrape_interval` | The rate in seconds at which `nvme` metrics are collected from the host operating system. | `0` |
-| `collector.processes.scrape_interval` | The rate in seconds at which system level `process` metrics are collected from the host operating system. | `0` |
-| `metrics` | Specify which metrics are collected from the host operating system. These metrics depend on `/procfs` or `/sysfs`. The actual values of metrics will be read from `/proc` or `/sys` when needed. `cpu`, `cpufreq`, `meminfo`, `diskstats`, `filesystem`, `stat`, `loadavg`, `vmstat`, `netdev`, and `filefd` depend on `procfs`. `cpufreq` metrics depend on `sysfs`. | `"cpu,cpufreq,meminfo,diskstats,filesystem,uname,stat,time,loadavg,vmstat,netdev,filefd"` |
-| `filesystem.ignore_mount_point_regex` | Specify the regular expression for the `mount` points to prevent collection of/ignore. | `^/(dev\|proc\|run/credentials/.+\|sys\|var/lib/docker/.+\|var/lib/containers/storage/.+)($\|/)` |
-| `filesystem.ignore_filesystem_type_regex` | Specify the regular expression for the `filesystem` types to prevent collection of or ignore. | `^(autofs\|binfmt_misc\|bpf\|cgroup2?\|configfs\|debugfs\|devpts\|devtmpfs\|fusectl\|hugetlbfs\|iso9660\|mqueue\|nsfs\|overlay\|proc\|procfs\|pstore\|rpc_pipefs\|securityfs\|selinuxfs\|squashfs\|sysfs\|tracefs)$` |
-| `diskstats.ignore_device_regex` | Specify the regular expression for the` diskstats` to prevent collection of/ignore. | `^(ram\|loop\|fd\|(h\|s\|v\|xv)d[a-z]\|nvme\\d+n\\d+p)\\d+$` |
-| `systemd_service_restart_metrics` | Determines if the collector will include service restart metrics. | false |
-| `systemd_unit_start_time_metrics` | Determines if the collector will include unit start time metrics. | false |
-| `systemd_include_service_task_metrics` | Determines if the collector will include service task metrics. | false |
-| `systemd_include_pattern` | Regular expression to determine which units are included in the metrics produced by the `systemd` collector. | Not applied unless explicitly set. |
-| `systemd_exclude_pattern` | Regular expression to determine which units are excluded in the metrics produced by the `systemd` collector. | `.+\\.(automount\|device\|mount\|scope\|slice)"` |
+This helps with down-sampling when collecting metrics.
+
+| Key | Description | Default |
+|-------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `scrape_interval` | The rate in seconds at which metrics are collected from the host operating system. | `5` |
+| `path.procfs` | The mount point used to collect process information and metrics. | `/proc/` |
+| `path.sysfs` | The path in the filesystem used to collect system metrics. | `/sys/` |
+| `collector.cpu.scrape_interval` | The rate in seconds at which `cpu` metrics are collected from the host operating system. | `0` |
+| `collector.cpufreq.scrape_interval` | The rate in seconds at which `cpufreq` metrics are collected from the host operating system. | `0` |
+| `collector.meminfo.scrape_interval` | The rate in seconds at which `meminfo` metrics are collected from the host operating system. | `0` |
+| `collector.diskstats.scrape_interval` | The rate in seconds at which `diskstats` metrics are collected from the host operating system. | `0` |
+| `collector.filesystem.scrape_interval` | The rate in seconds at which `filesystem` metrics are collected from the host operating system. | `0` |
+| `collector.uname.scrape_interval` | The rate in seconds at which `uname` metrics are collected from the host operating system. | `0` |
+| `collector.stat.scrape_interval` | The rate in seconds at which `stat` metrics are collected from the host operating system. | `0` |
+| `collector.time.scrape_interval` | The rate in seconds at which `time` metrics are collected from the host operating system. | `0` |
+| `collector.loadavg.scrape_interval` | The rate in seconds at which `loadavg` metrics are collected from the host operating system. | `0` |
+| `collector.vmstat.scrape_interval` | The rate in seconds at which `vmstat` metrics are collected from the host operating system. | `0` |
+| `collector.thermal_zone.scrape_interval` | The rate in seconds at which `thermal_zone` metrics are collected from the host operating system. | `0` |
+| `collector.filefd.scrape_interval` | The rate in seconds at which `filefd` metrics are collected from the host operating system. | `0` |
+| `collector.nvme.scrape_interval` | The rate in seconds at which `nvme` metrics are collected from the host operating system. | `0` |
+| `collector.processes.scrape_interval` | The rate in seconds at which system level `process` metrics are collected from the host operating system. | `0` |
+| `metrics` | Specify which metrics are collected from the host operating system. These metrics depend on `/procfs` or `/sysfs`. The actual values of metrics will be read from `/proc` or `/sys` when needed. `cpu`, `cpufreq`, `meminfo`, `diskstats`, `filesystem`, `stat`, `loadavg`, `vmstat`, `netdev`, and `filefd` depend on `procfs`. `cpufreq` metrics depend on `sysfs`. | `"cpu,cpufreq,meminfo,diskstats,filesystem,uname,stat,time,loadavg,vmstat,netdev,filefd"` |
+| `filesystem.ignore_mount_point_regex`     | Specify the regular expression for the mount points to ignore (exclude from collection).                                                                                                                                                                                                                                                                 | `^/(dev\|proc\|run/credentials/.+\|sys\|var/lib/docker/.+\|var/lib/containers/storage/.+)($\|/)`                                                                                                                      |
+| `filesystem.ignore_filesystem_type_regex` | Specify the regular expression for the filesystem types to ignore (exclude from collection).                                                                                                                                                                                                                                                             | `^(autofs\|binfmt_misc\|bpf\|cgroup2?\|configfs\|debugfs\|devpts\|devtmpfs\|fusectl\|hugetlbfs\|iso9660\|mqueue\|nsfs\|overlay\|proc\|procfs\|pstore\|rpc_pipefs\|securityfs\|selinuxfs\|squashfs\|sysfs\|tracefs)$` |
+| `diskstats.ignore_device_regex`           | Specify the regular expression for the `diskstats` devices to ignore (exclude from collection).                                                                                                                                                                                                                                                          | `^(ram\|loop\|fd\|(h\|s\|v\|xv)d[a-z]\|nvme\\d+n\\d+p)\\d+$`                                                                                                                                                          |
+| `systemd_service_restart_metrics`         | Determines if the collector will include service restart metrics.                                                                                                                                                                                                                                                                                        | `false`                                                                                                                                                                                                                |
+| `systemd_unit_start_time_metrics`         | Determines if the collector will include unit start time metrics.                                                                                                                                                                                                                                                                                        | `false`                                                                                                                                                                                                                |
+| `systemd_include_service_task_metrics`    | Determines if the collector will include service task metrics.                                                                                                                                                                                                                                                                                           | `false`                                                                                                                                                                                                                |
+| `systemd_include_pattern`                 | Regular expression to determine which units are included in the metrics produced by the `systemd` collector.                                                                                                                                                                                                                                             | Not applied unless explicitly set.                                                                                                                                                                                     |
+| `systemd_exclude_pattern`                 | Regular expression to determine which units are excluded in the metrics produced by the `systemd` collector.                                                                                                                                                                                                                                             | `.+\\.(automount\|device\|mount\|scope\|slice)`                                                                                                                                                                        |
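+
+For example, a sketch of the down-sampling behavior described above (values are illustrative): metrics are published every five seconds, while the `cpu` collector itself is only re-read once per minute:
+
+```yaml
+pipeline:
+  inputs:
+    - name: node_exporter_metrics
+      scrape_interval: 5
+      # cpu is collected every 60 seconds but still published every 5 seconds
+      collector.cpu.scrape_interval: 60
+
+  outputs:
+    - name: prometheus_exporter
+      match: '*'
+```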
## Collectors available
@@ -64,24 +64,24 @@ The following table describes the available collectors as part of this plugin. T
The Version column specifies the Fluent Bit version where the collector is available.
-| Name | Description | Operating system | Version |
-| ---- | ----------- | ---------------- | ------- |
-| `cpu` | Exposes CPU statistics. | Linux, macOS | 1.8 |
-| `cpufreq` | Exposes CPU frequency statistics. | Linux | 1.8 |
-| `diskstats` | Exposes disk I/O statistics. | Linux, macOS | 1.8 |
-| `filefd` | Exposes file descriptor statistics from `/proc/sys/fs/file-nr`. | Linux | 1.8.2 |
-| `filesystem` | Exposes filesystem statistics from `/proc/*/mounts`. | Linux | 2.0.9 |
-| `loadavg` | Exposes load average. | Linux, macOS | 1.8 |
-| `meminfo` | Exposes memory statistics. | Linux, macOS | 1.8 |
-| `netdev` | Exposes network interface statistics such as bytes transferred. | Linux, macOS | 1.8.2 |
-| `stat` | Exposes various statistics from `/proc/stat`. This includes boot time, forks, and interruptions. | Linux | 1.8 |
-| `time` | Exposes the current system time. | Linux | v1.8 |
-| `uname` | Exposes system information as provided by the `uname` system call. | Linux, macOS | 1.8 |
-| `vmstat` | Exposes statistics from `/proc/vmstat`. | Linux | 1.8.2 |
-| `systemd collector` | Exposes statistics from `systemd`.| Linux | 2.1.3 |
-| `thermal_zone` | Expose thermal statistics from `/sys/class/thermal/thermal_zone/*` | Linux | 2.2.1 |
-| `nvme` | Exposes `nvme` statistics from `/proc`. | Linux | 2.2.0 |
-| `processes` | Exposes processes statistics from `/proc`. | Linux | 2.2.0 |
+| Name | Description | Operating system | Version |
+|---------------------|--------------------------------------------------------------------------------------------------|------------------|---------|
+| `cpu` | Exposes CPU statistics. | Linux, macOS | 1.8 |
+| `cpufreq` | Exposes CPU frequency statistics. | Linux | 1.8 |
+| `diskstats` | Exposes disk I/O statistics. | Linux, macOS | 1.8 |
+| `filefd` | Exposes file descriptor statistics from `/proc/sys/fs/file-nr`. | Linux | 1.8.2 |
+| `filesystem` | Exposes filesystem statistics from `/proc/*/mounts`. | Linux | 2.0.9 |
+| `loadavg` | Exposes load average. | Linux, macOS | 1.8 |
+| `meminfo` | Exposes memory statistics. | Linux, macOS | 1.8 |
+| `netdev` | Exposes network interface statistics such as bytes transferred. | Linux, macOS | 1.8.2 |
+| `stat`              | Exposes various statistics from `/proc/stat`. This includes boot time, forks, and interrupts.     | Linux            | 1.8     |
+| `time`              | Exposes the current system time.                                                                   | Linux            | 1.8     |
+| `uname`             | Exposes system information as provided by the `uname` system call.                                 | Linux, macOS     | 1.8     |
+| `vmstat`            | Exposes statistics from `/proc/vmstat`.                                                            | Linux            | 1.8.2   |
+| `systemd collector` | Exposes statistics from `systemd`.                                                                 | Linux            | 2.1.3   |
+| `thermal_zone`      | Exposes thermal statistics from `/sys/class/thermal/thermal_zone/*`.                               | Linux            | 2.2.1   |
+| `nvme` | Exposes `nvme` statistics from `/proc`. | Linux | 2.2.0 |
+| `processes` | Exposes processes statistics from `/proc`. | Linux | 2.2.0 |
## Threading
@@ -107,20 +107,20 @@ In the following configuration file, the input plugin `node_exporter_metrics` co
# $ curl http://127.0.0.1:2021/metrics
#
service:
- flush: 1
- log_level: info
+ flush: 1
+ log_level: info
pipeline:
- inputs:
- - name: node_exporter_metrics
- tag: node_metrics
- scrape_interval: 2
-
- outputs:
- - name: prometheus_exporter
- match: node_metrics
- host: 0.0.0.0
- port: 2021
+ inputs:
+ - name: node_exporter_metrics
+ tag: node_metrics
+ scrape_interval: 2
+
+ outputs:
+ - name: prometheus_exporter
+ match: node_metrics
+ host: 0.0.0.0
+ port: 2021
```
{% endtab %}
@@ -138,19 +138,19 @@ pipeline:
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
- flush 1
- log_level info
+ flush 1
+ log_level info
[INPUT]
- name node_exporter_metrics
- tag node_metrics
- scrape_interval 2
+ name node_exporter_metrics
+ tag node_metrics
+ scrape_interval 2
[OUTPUT]
- name prometheus_exporter
- match node_metrics
- host 0.0.0.0
- port 2021
+ name prometheus_exporter
+ match node_metrics
+ host 0.0.0.0
+ port 2021
```
@@ -170,16 +170,16 @@ When deploying Fluent Bit in a container you will need to specify additional set
```shell
docker run -ti -v /proc:/host/proc \
- -v /sys:/host/sys \
- -p 2021:2021 \
- fluent/fluent-bit:1.8.0 \
- /fluent-bit/bin/fluent-bit \
- -i node_exporter_metrics \
- -p path.procfs=/host/proc \
- -p path.sysfs=/host/sys \
- -o prometheus_exporter \
- -p "add_label=host $HOSTNAME" \
- -f 1
+ -v /sys:/host/sys \
+ -p 2021:2021 \
+ fluent/fluent-bit:1.8.0 \
+ /fluent-bit/bin/fluent-bit \
+ -i node_exporter_metrics \
+ -p path.procfs=/host/proc \
+ -p path.sysfs=/host/sys \
+ -o prometheus_exporter \
+ -p "add_label=host $HOSTNAME" \
+ -f 1
```
### Fluent Bit with Prometheus and Grafana
diff --git a/pipeline/inputs/opentelemetry.md b/pipeline/inputs/opentelemetry.md
index a126d6c00..86677ef04 100644
--- a/pipeline/inputs/opentelemetry.md
+++ b/pipeline/inputs/opentelemetry.md
@@ -10,20 +10,20 @@ Fluent Bit has a compliant implementation which fully supports `OTLP/HTTP` and `
## Configuration
-| Key | Description | Default |
-| -------- | ------------| ------- |
-| `listen` | The network address to listen on. | `0.0.0.0` |
-| `port` | The port for Fluent Bit to listen for incoming connections. In Fluent Bit 3.0.2 or later, this port is used for both transport `OTLP/HTTP` and `OTLP/GRPC`. | `4318` |
-| `tag` | Tag for all data ingested by this plugin. This will only be used if `tag_from_uri` is set to `false`. Otherwise, the tag will be created from the URI. | _none_ |
-| `tag_key` | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. | _none_ |
-| `raw_traces` | Route trace data as a log. | `false` |
-| `buffer_max_size` | Specify the maximum buffer size in `KB`, `MB`, or `GB` to the HTTP payload. | `4M` |
-| `buffer_chunk_size` | Initial size and allocation strategy to store the payload (advanced users only)` | `512K` |
-| `successful_response_code` | Allows for setting a successful response code. Supported values: `200`, `201`, or `204`. | `201` |
-| `tag_from_uri` | By default, the tag will be created from the URI. For example, `v1_metrics` from `/v1/metrics`. This must be set to false if using `tag`. | `true` |
-| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
-
-Raw traces means that any data forwarded to the traces endpoint (`/v1/traces`) will be packed and forwarded as a log message, and won' be processed by Fluent Bit. The traces endpoint by default expects a valid `protobuf` encoded payload, but you can set the `raw_traces` option in case you want to get trace telemetry data to any of the Fluent Bit supported outputs.
+| Key | Description | Default |
+|----------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|
+| `listen` | The network address to listen on. | `0.0.0.0` |
+| `port`                     | The port for Fluent Bit to listen for incoming connections. In Fluent Bit 3.0.2 or later, this port is used for both the `OTLP/HTTP` and `OTLP/GRPC` transports. | `4318`    |
+| `tag` | Tag for all data ingested by this plugin. This will only be used if `tag_from_uri` is set to `false`. Otherwise, the tag will be created from the URI. | _none_ |
+| `tag_key`                  | Specify the key name to overwrite a tag. If set, the tag will be overwritten by the value of the key.                                                         | _none_    |
+| `raw_traces` | Route trace data as a log. | `false` |
+| `buffer_max_size` | Specify the maximum buffer size in `KB`, `MB`, or `GB` to the HTTP payload. | `4M` |
+| `buffer_chunk_size`        | Initial buffer size and allocation strategy to store the payload (advanced users only).                                                                       | `512K`    |
+| `successful_response_code` | Allows for setting a successful response code. Supported values: `200`, `201`, or `204`. | `201` |
+| `tag_from_uri`             | By default, the tag will be created from the URI. For example, `v1_metrics` from `/v1/metrics`. This must be set to `false` if using `tag`.                   | `true`    |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+
+Raw traces means that any data forwarded to the traces endpoint (`/v1/traces`) will be packed and forwarded as a log message, and won't be processed by Fluent Bit. The traces endpoint by default expects a valid `protobuf` encoded payload, but you can set the `raw_traces` option in case you want to get trace telemetry data to any of the Fluent Bit supported outputs.
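+
+For instance, a sketch that combines `raw_traces` with a fixed tag (the tag value is illustrative); `tag` requires `tag_from_uri` to be set to `false`:
+
+```yaml
+pipeline:
+  inputs:
+    - name: opentelemetry
+      listen: 0.0.0.0
+      port: 4318
+      tag_from_uri: false
+      tag: otel
+      # forward /v1/traces payloads as plain log records
+      raw_traces: true
+
+  outputs:
+    - name: stdout
+      match: '*'
+```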
### OpenTelemetry transport protocol endpoints
@@ -54,11 +54,11 @@ For `OTLP/GRPC`:
The OpenTelemetry input plugin supports the following telemetry data types:
-| Type | HTTP1/JSON | HTTP1/Protobuf | HTTP2/GRPC |
-| ------- | ---------- | -------------- | ---------- |
-| Logs | Stable | Stable | Stable |
-| Metrics | Unimplemented | Stable | Stable |
-| Traces | Unimplemented | Stable | Stable |
+| Type | HTTP1/JSON | HTTP1/Protobuf | HTTP2/GRPC |
+|---------|---------------|----------------|------------|
+| Logs | Stable | Stable | Stable |
+| Metrics | Unimplemented | Stable | Stable |
+| Traces | Unimplemented | Stable | Stable |
A sample configuration file to get started will look something like the following:
@@ -67,14 +67,14 @@ A sample configuration file to get started will look something like the followin
```yaml
pipeline:
- inputs:
- - name: opentelemetry
- listen: 127.0.0.1
- port: 4318
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: opentelemetry
+ listen: 127.0.0.1
+ port: 4318
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -82,13 +82,13 @@ pipeline:
```text
[INPUT]
- name opentelemetry
- listen 127.0.0.1
- port 4318
+ name opentelemetry
+ listen 127.0.0.1
+ port 4318
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
diff --git a/pipeline/inputs/podman-metrics.md b/pipeline/inputs/podman-metrics.md
index a9706705c..6fba9ae15 100644
--- a/pipeline/inputs/podman-metrics.md
+++ b/pipeline/inputs/podman-metrics.md
@@ -6,14 +6,14 @@ The metrics can be exposed later as, for example, Prometheus counters and gauges
## Configuration parameters
-| Key | Description | Default |
-| --- | ------------| ------- |
-| `scrape_interval` | Interval between each scrape of Podman data (in seconds). | `30` |
-| `scrape_on_start` | Sets whether this plugin scrapes Podman data on startup. | `false` |
-| `path.config` | Custom path to the Podman containers configuration file. | `/var/lib/containers/storage/overlay-containers/containers.json` |
-| `path.sysfs` | Custom path to the `sysfs` subsystem directory. | `/sys/fs/cgroup` |
-| `path.procfs` | Custom path to the `proc` subsystem directory. | `/proc` |
-| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|-------------------|---------------------------------------------------------------------------------------------------------|------------------------------------------------------------------|
+| `scrape_interval` | Interval between each scrape of Podman data (in seconds). | `30` |
+| `scrape_on_start` | Sets whether this plugin scrapes Podman data on startup. | `false` |
+| `path.config` | Custom path to the Podman containers configuration file. | `/var/lib/containers/storage/overlay-containers/containers.json` |
+| `path.sysfs` | Custom path to the `sysfs` subsystem directory. | `/sys/fs/cgroup` |
+| `path.procfs` | Custom path to the `proc` subsystem directory. | `/proc` |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
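+
+For example, when Fluent Bit runs in a container with the host filesystems mounted in, the custom paths can point at those mounts. A sketch (the mount locations are illustrative):
+
+```yaml
+pipeline:
+  inputs:
+    - name: podman_metrics
+      scrape_on_start: true
+      # host filesystems mounted into the container (illustrative paths)
+      path.procfs: /host/proc
+      path.sysfs: /host/sys/fs/cgroup
+
+  outputs:
+    - name: prometheus_exporter
+```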
## Get started
@@ -53,10 +53,10 @@ container_network_receive_bytes_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc
# HELP container_network_receive_errors_total Network received errors
# TYPE container_network_receive_errors_total counter
container_network_receive_errors_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 0
-# HELP container_network_transmit_bytes_total Network transmited bytes
+# HELP container_network_transmit_bytes_total Network transmitted bytes
# TYPE container_network_transmit_bytes_total counter
container_network_transmit_bytes_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 962
-# HELP container_network_transmit_errors_total Network transmitedd errors
+# HELP container_network_transmit_errors_total Network transmitted errors
# TYPE container_network_transmit_errors_total counter
container_network_transmit_errors_total{id="858319c39f3f52cd44aa91a520aafb84ded3bc4b4a1e04130ccf87043149bbbf",name="blissful_wescoff",image="docker.io/library/ubuntu:latest",interface="eth0"} 0
# HELP fluentbit_input_storage_overlimit Is the input memory usage overlimit ?.
@@ -89,27 +89,26 @@ fluentbit_input_storage_chunks_busy_bytes{name="podman_metrics.0"} 0
```yaml
pipeline:
- inputs:
- - name: podman_metrics
- scrape_interval: 10
- scrape_on_start: true
-
- outputs:
- - name: prometheus_exporter
+ inputs:
+ - name: podman_metrics
+ scrape_interval: 10
+ scrape_on_start: true
+
+ outputs:
+ - name: prometheus_exporter
```
{% endtab %}
{% tab title="fluent-bit.conf" %}
-
```text
[INPUT]
- name podman_metrics
- scrape_interval 10
- scrape_on_start true
+ name podman_metrics
+ scrape_interval 10
+ scrape_on_start true
[OUTPUT]
- name prometheus_exporter
+ name prometheus_exporter
```
{% endtab %}
diff --git a/pipeline/inputs/process-exporter-metrics.md b/pipeline/inputs/process-exporter-metrics.md
index 28b562ffa..ce6e26bcb 100644
--- a/pipeline/inputs/process-exporter-metrics.md
+++ b/pipeline/inputs/process-exporter-metrics.md
@@ -21,27 +21,27 @@ access the relevant metrics. MacOS doesn't have the `proc` filesystem so this pl
## Configuration
-| Key | Description | Default |
-| ----| ----------- | --------- |
-| `scrape_interval` | The rate, in seconds, at which metrics are collected. | `5` |
-| `path.procfs` | The mount point used to collect process information and metrics. Read-only permissions are enough. | `/proc/` |
-| `process_include_pattern` | Regular expression to determine which names of processes are included in the metrics produced by this plugin. It's applied for all process unless explicitly set. | `.+` |
-| `process_exclude_pattern` | Regular expression to determine which names of processes are excluded in the metrics produced by this plugin. It's not applied unless explicitly set. | `NULL` |
-| `metrics` | Specify which process level of metrics are collected from the host operating system. Actual values of metrics will be read from `/proc` when needed. `cpu`, `io`, `memory`, `state`, `context_switches`, `fd,` `start_time`, `thread_wchan`, and `thread` metrics depend on `procfs`. | `cpu,io,memory,state,context_switches,fd,start_time,thread_wchan,thread` |
+| Key | Description | Default |
+|---------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------|
+| `scrape_interval` | The rate, in seconds, at which metrics are collected. | `5` |
+| `path.procfs` | The mount point used to collect process information and metrics. Read-only permissions are enough. | `/proc/` |
+| `process_include_pattern` | Regular expression to determine which process names are included in the metrics produced by this plugin. It's applied to all processes unless explicitly set.                                                                                                                           | `.+`                                                                      |
+| `process_exclude_pattern` | Regular expression to determine which process names are excluded from the metrics produced by this plugin. It's not applied unless explicitly set.                                                                                                                                      | `NULL`                                                                    |
+| `metrics`                 | Specify which process-level metrics are collected from the host operating system. Actual values of metrics will be read from `/proc` when needed. `cpu`, `io`, `memory`, `state`, `context_switches`, `fd`, `start_time`, `thread_wchan`, and `thread` metrics depend on `procfs`.      | `cpu,io,memory,state,context_switches,fd,start_time,thread_wchan,thread` |
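+
+For example, a sketch that restricts collection to a few process names and metric groups (the pattern and metric list are illustrative):
+
+```yaml
+pipeline:
+  inputs:
+    - name: process_exporter_metrics
+      scrape_interval: 5
+      # only processes whose names match this pattern are reported
+      process_include_pattern: fluent-bit|nginx
+      metrics: cpu,memory,state
+
+  outputs:
+    - name: prometheus_exporter
+```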
## Available metrics
-| Name | Description |
-| ----------------- | -------------------------------------------------- |
-| `cpu` | Exposes CPU statistics from `/proc`. |
-| `io` | Exposes I/O statistics from `/proc`. |
-| `memory` | Exposes memory statistics from `/proc`. |
-| `state` | Exposes process state statistics from `/proc`. |
+| Name | Description |
+|--------------------|-----------------------------------------------------|
+| `cpu` | Exposes CPU statistics from `/proc`. |
+| `io` | Exposes I/O statistics from `/proc`. |
+| `memory` | Exposes memory statistics from `/proc`. |
+| `state` | Exposes process state statistics from `/proc`. |
| `context_switches` | Exposes `context_switches` statistics from `/proc`. |
-| `fd` | Exposes file descriptors statistics from `/proc`. |
+| `fd` | Exposes file descriptors statistics from `/proc`. |
| `start_time` | Exposes `start_time` statistics from `/proc`. |
| `thread_wchan` | Exposes `thread_wchan` from `/proc`. |
-| `thread` | Exposes thread statistics from `/proc`. |
+| `thread` | Exposes thread statistics from `/proc`. |
## Threading
@@ -67,20 +67,20 @@ In the following configuration file, the input plugin `process_exporter_metrics`
# $ curl http://127.0.0.1:2021/metrics
#
service:
- flush: 1
- log_level: info
+ flush: 1
+ log_level: info
pipeline:
- inputs:
- - name: process_exporter_metrics
- tag: process_metrics
- scrape_interval: 2
-
- outputs:
- - name: prometheus_exporter
- match: process_metrics
- host: 0.0.0.0
- port: 2021
+ inputs:
+ - name: process_exporter_metrics
+ tag: process_metrics
+ scrape_interval: 2
+
+ outputs:
+ - name: prometheus_exporter
+ match: process_metrics
+ host: 0.0.0.0
+ port: 2021
```
{% endtab %}
@@ -97,19 +97,19 @@ pipeline:
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
- flush 1
- log_level info
+ flush 1
+ log_level info
[INPUT]
- name process_exporter_metrics
- tag process_metrics
- scrape_interval 2
+ name process_exporter_metrics
+ tag process_metrics
+ scrape_interval 2
[OUTPUT]
- name prometheus_exporter
- match process_metrics
- host 0.0.0.0
- port 2021
+ name prometheus_exporter
+ match process_metrics
+ host 0.0.0.0
+ port 2021
```
{% endtab %}
@@ -131,13 +131,13 @@ These are then exposed over port 2021.
```shell
docker run -ti -v /proc:/host/proc:ro \
- -p 2021:2021 \
- fluent/fluent-bit:2.2 \
- /fluent-bit/bin/fluent-bit \
- -i process_exporter_metrics \
- -p path.procfs=/host/proc \
- -o prometheus_exporter \
- -f 1
+ -p 2021:2021 \
+ fluent/fluent-bit:2.2 \
+ /fluent-bit/bin/fluent-bit \
+ -i process_exporter_metrics \
+ -p path.procfs=/host/proc \
+ -o prometheus_exporter \
+ -f 1
```
## Enhancement requests
diff --git a/pipeline/inputs/process.md b/pipeline/inputs/process.md
index 1db1170a5..0fc790b2e 100644
--- a/pipeline/inputs/process.md
+++ b/pipeline/inputs/process.md
@@ -1,23 +1,22 @@
# Process metrics
-
The _Process metrics_ input plugin lets you check how healthy a process is. It does so by performing service checks at specified intervals.
-This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](../pipeline/inputs/node-exporter-metrics) input plugin.
+This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](./node-exporter-metrics) input plugin.
## Configuration parameters
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| --- | ----------- | ------- |
-| `Proc_Name` | The name of the target process to check. | _none_ |
-| `Interval_Sec` | Specifies the interval between service checks, in seconds. | `1` |
-| `Interval_Nsec` | Specifies the interval between service checks, in nanoseconds. This works in conjunction with `Interval_Sec`. | `0` |
-| `Alert` | If enabled, the plugin will only generate messages if the target process is down. | `false` |
-| `Fd` | If enabled, a number of `fd` is appended to each record. | `true` |
-| `Mem` | If enabled, memory usage of the process is appended to each record. | `true` |
-| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|-----------------|---------------------------------------------------------------------------------------------------------------|---------|
+| `Proc_Name` | The name of the target process to check. | _none_ |
+| `Interval_Sec` | Specifies the interval between service checks, in seconds. | `1` |
+| `Interval_Nsec` | Specifies the interval between service checks, in nanoseconds. This works in conjunction with `Interval_Sec`. | `0` |
+| `Alert` | If enabled, the plugin will only generate messages if the target process is down. | `false` |
+| `Fd` | If enabled, the number of file descriptors (`fd`) used by the process is appended to each record. | `true` |
+| `Mem` | If enabled, memory usage of the process is appended to each record. | `true` |
+| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
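+
+For example, a minimal sketch that uses the `Alert` parameter so that records are emitted only when `crond` is down:
+
+```shell
+fluent-bit -i proc -p proc_name=crond -p alert=true -o stdout
+```
+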
## Get started
@@ -38,17 +37,17 @@ In your main configuration file, append the following `Input` & `Output` section
```yaml
pipeline:
- inputs:
- - name: proc
- proc_name: crond
- interval_sec: 1
- interval_nsec: 0
- fd: true
- mem: true
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: proc
+ proc_name: crond
+ interval_sec: 1
+ interval_nsec: 0
+ fd: true
+ mem: true
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -56,16 +55,16 @@ pipeline:
```text
[INPUT]
- Name proc
- Proc_Name crond
- Interval_Sec 1
- Interval_NSec 0
- Fd true
- Mem true
+ Name proc
+ Proc_Name crond
+ Interval_Sec 1
+ Interval_NSec 0
+ Fd true
+ Mem true
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -78,31 +77,10 @@ After Fluent Bit starts running, it outputs the health of the process:
```shell
$ fluent-bit -i proc -p proc_name=fluent-bit -o stdout
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
-[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/01 14:44:47] [ info] [simd ] disabled
-[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
-[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
-[2025/07/01 14:44:47] [ info] [sp] stream processor started
-[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] proc.0: [1485780297, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1120000, "mem.VmRSS"=>1120000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[1] proc.0: [1485780298, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1148000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[2] proc.0: [1485780299, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
[3] proc.0: [1485780300, {"alive"=>true, "proc_name"=>"fluent-bit", "pid"=>10964, "mem.VmPeak"=>14740000, "mem.VmSize"=>14740000, "mem.VmLck"=>0, "mem.VmHWM"=>1152000, "mem.VmRSS"=>1148000, "mem.VmData"=>2276000, "mem.VmStk"=>88000, "mem.VmExe"=>1768000, "mem.VmLib"=>2328000, "mem.VmPTE"=>68000, "mem.VmSwap"=>0, "fd"=>18}]
-```
+...
+```
\ No newline at end of file
diff --git a/pipeline/inputs/prometheus-remote-write.md b/pipeline/inputs/prometheus-remote-write.md
index 4011670f1..3052f6901 100644
--- a/pipeline/inputs/prometheus-remote-write.md
+++ b/pipeline/inputs/prometheus-remote-write.md
@@ -8,16 +8,16 @@ The _Prometheus remote write_ input plugin lets you ingest a payload in the Prom
## Configuration parameters
-| Key | Description | Default |
-| --- | ----------- | ------- |
-| `listen` | The address to listen on. | `0.0.0.0` |
-| `port` | The port to listen on. | `8080` |
-| `buffer_max_size` | Specifies the maximum buffer size in KB to receive a JSON message. | `4M` |
-| `buffer_chunk_size` | Sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space specified by `buffer_max_size`. | `512K` |
-| `successful_response_code` | Specifies the success response code. Supported values are `200`, `201`, and `204`. | `201` |
-| `tag_from_uri` | If true, a tag will be created from the `uri` parameter (for example, `api_prom_push` from `/api/prom/push`), and any tag specified in the configuration will be ignored. If false, you must provide a tag in the configuration for this plugin. | `true` |
-| `uri` | Specifies an optional HTTP URI for the target web server listening for Prometheus remote write payloads (for example, `/api/prom/push`). | _none_ |
-| `threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|
+| `listen` | The address to listen on. | `0.0.0.0` |
+| `port` | The port to listen on. | `8080` |
+| `buffer_max_size` | Specifies the maximum buffer size in KB to receive a JSON message. | `4M` |
+| `buffer_chunk_size` | Sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space specified by `buffer_max_size`. | `512K` |
+| `successful_response_code` | Specifies the success response code. Supported values are `200`, `201`, and `204`. | `201` |
+| `tag_from_uri` | If true, a tag will be created from the `uri` parameter (for example, `api_prom_push` from `/api/prom/push`), and any tag specified in the configuration will be ignored. If false, you must provide a tag in the configuration for this plugin. | `true` |
+| `uri` | Specifies an optional HTTP URI for the target web server listening for Prometheus remote write payloads (for example, `/api/prom/push`). | _none_ |
+| `threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
## Configuration file
@@ -28,15 +28,15 @@ The following examples are sample configuration files for this input plugin:
```yaml
pipeline:
- inputs:
- - name: prometheus_remote_write
- listen: 127.0.0.1
- port: 8080
- uri: /api/prom/push
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: prometheus_remote_write
+ listen: 127.0.0.1
+ port: 8080
+ uri: /api/prom/push
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -44,14 +44,14 @@ pipeline:
```text
[INPUT]
- name prometheus_remote_write
- listen 127.0.0.1
- port 8080
- uri /api/prom/push
+ name prometheus_remote_write
+ listen 127.0.0.1
+ port 8080
+ uri /api/prom/push
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
@@ -72,14 +72,14 @@ To communicate with TLS, you must use these TLS-related parameters:
```yaml
pipeline:
- inputs:
- - name: prometheus_remote_write
- listen: 127.0.0.1
- port: 8080
- uri: /api/prom/push
- tls: on
- tls.crt_file: /path/to/certificate.crt
- tls.key_file: /path/to/certificate.key
+ inputs:
+ - name: prometheus_remote_write
+ listen: 127.0.0.1
+ port: 8080
+ uri: /api/prom/push
+ tls: on
+ tls.crt_file: /path/to/certificate.crt
+ tls.key_file: /path/to/certificate.key
```
{% endtab %}
@@ -87,16 +87,16 @@ pipeline:
```text
[INPUT]
- Name prometheus_remote_write
- Listen 127.0.0.1
- Port 8080
- Uri /api/prom/push
- Tls On
- tls.crt_file /path/to/certificate.crt
- tls.key_file /path/to/certificate.key
+ Name prometheus_remote_write
+ Listen 127.0.0.1
+ Port 8080
+ Uri /api/prom/push
+ Tls On
+ tls.crt_file /path/to/certificate.crt
+ tls.key_file /path/to/certificate.key
```
{% endtab %}
{% endtabs %}
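+
+On the sender side, a Prometheus server can then point its `remote_write` section at this listener. The following snippet is a sketch that assumes a self-signed certificate, which is why verification is disabled:
+
+```yaml
+remote_write:
+  - url: "https://127.0.0.1:8080/api/prom/push"
+    tls_config:
+      # Only for self-signed certificates; prefer a CA file in production.
+      insecure_skip_verify: true
+```
+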
-Now, you should be able to send data over TLS to the remote-write input.
+Now, you should be able to send data over TLS to the remote-write input.
\ No newline at end of file
diff --git a/pipeline/inputs/prometheus-scrape-metrics.md b/pipeline/inputs/prometheus-scrape-metrics.md
index 48d7c1c35..c3e2e8d62 100644
--- a/pipeline/inputs/prometheus-scrape-metrics.md
+++ b/pipeline/inputs/prometheus-scrape-metrics.md
@@ -4,13 +4,13 @@ Fluent Bit 1.9 and later includes additional metrics features to let you collect
## Configuration
-| Key | Description | Default |
-| --- | ----------- | -------- |
-| `host` | The host of the Prometheus metric endpoint to scrape. | _none_ |
-| `port` | The port of the Prometheus metric endpoint to scrape. | _none_ |
-| `scrape_interval` | The interval to scrape metrics. | `10s` |
-| `metrics_path` | The metrics URI endpoint, which must start with a forward slash (`/`). Parameters can be added to the path by using `?` | `/metrics` |
-| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|-------------------|-------------------------------------------------------------------------------------------------------------------------|------------|
+| `host` | The host of the Prometheus metric endpoint to scrape. | _none_ |
+| `port` | The port of the Prometheus metric endpoint to scrape. | _none_ |
+| `scrape_interval` | The interval to scrape metrics. | `10s` |
+| `metrics_path` | The metrics URI endpoint, which must start with a forward slash (`/`). Parameters can be added to the path by using `?`. | `/metrics` |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
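+
+As a quick command-line sketch, assuming a local exporter is already serving Prometheus metrics on port 9100:
+
+```shell
+fluent-bit -i prometheus_scrape -p host=127.0.0.1 -p port=9100 -p metrics_path=/metrics -o stdout
+```
+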
## Example
@@ -21,35 +21,34 @@ If an endpoint exposes Prometheus Metrics you can specify the configuration to s
```yaml
pipeline:
- inputs:
- - name: prometheus_scrape
- host: 0.0.0.0
- port: 8201
- tag: vault
- metrics_path: /v1/sys/metrics?format=prometheus
- scrape_interval: 10s
+ inputs:
+ - name: prometheus_scrape
+ host: 0.0.0.0
+ port: 8201
+ tag: vault
+ metrics_path: /v1/sys/metrics?format=prometheus
+ scrape_interval: 10s
- outputs:
- - name: stdout
- match: '*'
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[INPUT]
- name prometheus_scrape
- host 0.0.0.0
- port 8201
- tag vault
- metrics_path /v1/sys/metrics?format=prometheus
- scrape_interval 10s
+ name prometheus_scrape
+ host 0.0.0.0
+ port 8201
+ tag vault
+ metrics_path /v1/sys/metrics?format=prometheus
+ scrape_interval 10s
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
@@ -58,6 +57,7 @@ pipeline:
This returns output similar to:
```text
+...
2022-03-26T23:01:29.836663788Z go_memstats_alloc_bytes_total = 31891336
2022-03-26T23:01:29.836663788Z go_memstats_frees_total = 313264
2022-03-26T23:01:29.836663788Z go_memstats_lookups_total = 0
@@ -100,4 +100,5 @@ This returns output similar to:
2022-03-26T23:01:29.836663788Z vault_runtime_sys_bytes = 24724488
2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_pause_ns = 1917611
2022-03-26T23:01:29.836663788Z vault_runtime_total_gc_runs = 19
-```
+...
+```
\ No newline at end of file
diff --git a/pipeline/inputs/random.md b/pipeline/inputs/random.md
index df9ae3318..9ea0ada7e 100644
--- a/pipeline/inputs/random.md
+++ b/pipeline/inputs/random.md
@@ -6,12 +6,12 @@ The _Random_ input plugin generates random value samples using the device interf
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| --- | ----------- | ------- |
-| `Samples` | Specifies the number of samples to generate. The default value of `-1` generates unlimited samples. | `-1` |
-| `Interval_Sec` | Specifies the interval between generated samples, in seconds. | `1` |
-| `Interval_Nsec` | Specifies the interval between generated samples, in nanoseconds. This works in conjunction with `Interval_Sec`. | `0` |
-| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|-----------------|------------------------------------------------------------------------------------------------------------------|---------|
+| `Samples` | Specifies the number of samples to generate. The default value of `-1` generates unlimited samples. | `-1` |
+| `Interval_Sec` | Specifies the interval between generated samples, in seconds. | `1` |
+| `Interval_Nsec` | Specifies the interval between generated samples, in nanoseconds. This works in conjunction with `Interval_Sec`. | `0` |
+| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
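+
+`Interval_Sec` and `Interval_Nsec` are added together. For example, a sketch that emits ten samples at half-second intervals:
+
+```shell
+fluent-bit -i random -p samples=10 -p interval_sec=0 -p interval_nsec=500000000 -o stdout
+```
+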
## Get started
@@ -34,15 +34,15 @@ The following examples are sample configuration files for this input plugin:
```yaml
pipeline:
- inputs:
- - name: random
- samples: -1
- interval_sec: 1
- interval_nsec: 0
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: random
+ samples: -1
+ interval_sec: 1
+ interval_nsec: 0
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -50,14 +50,14 @@ pipeline:
```text
[INPUT]
- Name random
- Samples -1
- Interval_Sec 1
- Interval_NSec 0
+ Name random
+ Samples -1
+ Interval_Sec 1
+ Interval_NSec 0
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -70,32 +70,11 @@ After Fluent Bit starts running, it generates reports in the output interface:
```shell
$ fluent-bit -i random -o stdout
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
-[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/01 14:44:47] [ info] [simd ] disabled
-[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
-[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
-[2025/07/01 14:44:47] [ info] [sp] stream processor started
-[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] random.0: [1475893654, {"rand_value"=>1863375102915681408}]
[1] random.0: [1475893655, {"rand_value"=>425675645790600970}]
[2] random.0: [1475893656, {"rand_value"=>7580417447354808203}]
[3] random.0: [1475893657, {"rand_value"=>1501010137543905482}]
[4] random.0: [1475893658, {"rand_value"=>16238242822364375212}]
-```
+...
+```
\ No newline at end of file
diff --git a/pipeline/inputs/serial-interface.md b/pipeline/inputs/serial-interface.md
index 22c2df78d..2f30eeabb 100644
--- a/pipeline/inputs/serial-interface.md
+++ b/pipeline/inputs/serial-interface.md
@@ -6,14 +6,14 @@ The _Serial_ input plugin lets you retrieve messages and data from a serial inte
This plugin has the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | ---------|
-| `File` | Absolute path to the device entry. For example, `/dev/ttyS0`. | _none_ |
-| `Bitrate` | The bit rate for the communication. For example: `9600`, `38400`, `115200`. | _none_ |
-| `Min_Bytes` | The serial interface expects at least `Min_Bytes` to be available before processing the message. | `1` |
-| `Separator` | Specify a separator string that's used to determinate when a message ends. | _none_ |
-| `Format` | Specify the format of the incoming data stream. `Format` and `Separator` can't be used at the same time. | `json` (no other options available) |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:------------|:---------------------------------------------------------------------------------------------------------|-------------------------------------|
+| `File` | Absolute path to the device entry. For example, `/dev/ttyS0`. | _none_ |
+| `Bitrate` | The bit rate for the communication. For example: `9600`, `38400`, `115200`. | _none_ |
+| `Min_Bytes` | The serial interface expects at least `Min_Bytes` to be available before processing the message. | `1` |
+| `Separator` | Specify a separator string that's used to determine when a message ends. | _none_ |
+| `Format` | Specify the format of the incoming data stream. `Format` and `Separator` can't be used at the same time. | `json` (no other options available) |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
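+
+For example, a sketch that buffers input until at least 10 bytes are available before a message is processed:
+
+```shell
+fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -p Min_Bytes=10 -o stdout
+```
+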
## Get started
@@ -42,30 +42,9 @@ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -o stdout -m '*'
Which should produce output like:
```text
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
-[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/01 14:44:47] [ info] [simd ] disabled
-[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
-[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
-[2025/07/01 14:44:47] [ info] [sp] stream processor started
-[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] data: [1463780680, {"msg"=>"this is some message"}]
+...
```
Using the `Separator` configuration, you can send multiple messages at once.
@@ -85,33 +64,12 @@ fluent-bit -i serial -t data -p File=/dev/tnt0 -p BitRate=9600 -p Separator=X -o
This should produce results similar to the following:
```text
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
-[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/01 14:44:47] [ info] [simd ] disabled
-[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
-[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
-[2025/07/01 14:44:47] [ info] [sp] stream processor started
-[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] data: [1463781902, {"msg"=>"aa"}]
[1] data: [1463781902, {"msg"=>"bb"}]
[2] data: [1463781902, {"msg"=>"cc"}]
[3] data: [1463781902, {"msg"=>"dd"}]
+...
```
### Configuration file
@@ -123,16 +81,16 @@ In your main configuration file append the following sections:
```yaml
pipeline:
- inputs:
- - name: serial
- tag: data
- file: /dev/tnt0
- bitrate: 9600
- separator: X
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: serial
+ tag: data
+ file: /dev/tnt0
+ bitrate: 9600
+ separator: X
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -140,15 +98,15 @@ pipeline:
```text
[INPUT]
- Name serial
- Tag data
- File /dev/tnt0
- BitRate 9600
- Separator X
+ Name serial
+ Tag data
+ File /dev/tnt0
+ BitRate 9600
+ Separator X
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/splunk.md b/pipeline/inputs/splunk.md
index 8bd8ec540..ad1e66a19 100644
--- a/pipeline/inputs/splunk.md
+++ b/pipeline/inputs/splunk.md
@@ -6,18 +6,18 @@ The _Splunk_ input plugin handles [Splunk HTTP HEC](https://docs.splunk.com/Docu
This plugin uses the following configuration parameters:
-| Key | Description | Default |
-| --- | ----------- | ------- |
-| `listen` | The address to listen on. | `0.0.0.0` |
-| `port` | The port for Fluent Bit to listen on. | `9880` |
-| `tag_key` | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. | _none_ |
-| `buffer_max_size` | Specify the maximum buffer size in KB to receive a JSON message. | `4M` |
-| `buffer_chunk_size` | This sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space available by `buffer_max_size`. | `512K` |
-| `successful_response_code` | Set the successful response code. Allowed values: `200`, `201`, and `204` | `201` |
-| `splunk_token` | Specify a Splunk token for HTTP HEC authentication. If multiple tokens are specified (with commas and no spaces), usage will be divided across each of the tokens. | _none_ |
-| `store_token_in_metadata` | Store Splunk HEC tokens in the Fluent Bit metadata. If set to `false`, tokens will be stored as normal key-value pairs in the record data. | `true` |
-| `splunk_token_key` | Use the specified key for storing the Splunk token for HTTP HEC. Use only when `store_token_in_metadata` is `false`. | `@splunk_token` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|
+| `listen` | The address to listen on. | `0.0.0.0` |
+| `port` | The port for Fluent Bit to listen on. | `9880` |
+| `tag_key` | Specify the key name to overwrite a tag. If set, the tag will be overwritten by the value of the key. | _none_ |
+| `buffer_max_size` | Specify the maximum buffer size in KB to receive a JSON message. | `4M` |
+| `buffer_chunk_size` | This sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space available by `buffer_max_size`. | `512K` |
+| `successful_response_code` | Set the successful response code. Allowed values: `200`, `201`, and `204`. | `201` |
+| `splunk_token` | Specify a Splunk token for HTTP HEC authentication. If multiple tokens are specified (with commas and no spaces), usage will be divided across each of the tokens. | _none_ |
+| `store_token_in_metadata` | Store Splunk HEC tokens in the Fluent Bit metadata. If set to `false`, tokens will be stored as normal key-value pairs in the record data. | `true` |
+| `splunk_token_key` | Use the specified key for storing the Splunk token for HTTP HEC. Use only when `store_token_in_metadata` is `false`. | `@splunk_token` |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
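+
+Once a listener is running (see the following sections), you can send a test event with an HEC-style request. This is a sketch: `my-token` is a placeholder value, and the port matches the example configuration below:
+
+```shell
+curl -X POST http://127.0.0.1:8088/services/collector/event \
+  -H "Authorization: Splunk my-token" \
+  -d '{"event": "hello from Fluent Bit"}'
+```
+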
## Get started
@@ -54,14 +54,14 @@ In your main configuration file append the following sections:
```yaml
pipeline:
- inputs:
- - name: splunk
- listen: 0.0.0.0
- port: 8088
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: splunk
+ listen: 0.0.0.0
+ port: 8088
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -69,13 +69,13 @@ pipeline:
```text
[INPUT]
- name splunk
- listen 0.0.0.0
- port 8088
+ name splunk
+ listen 0.0.0.0
+ port 8088
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
diff --git a/pipeline/inputs/standard-input.md b/pipeline/inputs/standard-input.md
index 8592ed048..1946278ea 100644
--- a/pipeline/inputs/standard-input.md
+++ b/pipeline/inputs/standard-input.md
@@ -13,11 +13,11 @@ If the `stdin` stream is closed (`end-of-file`), the plugin instructs Fluent Bit
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Buffer_Size` | Set the buffer size to read data. This value is used to increase buffer size and must be set according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | `16k` |
-| `Parser` | The name of the parser to invoke instead of the default JSON input parser. | _none_ |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:--------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------|
+| `Buffer_Size` | Set the buffer size to read data. This value is used to increase buffer size and must be set according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | `16k` |
+| `Parser` | The name of the parser to invoke instead of the default JSON input parser. | _none_ |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
## Input formats
@@ -41,7 +41,7 @@ To handle inputs in other formats, a parser must be explicitly specified in the
## Log event timestamps
-The Fluent Bit event timestamp will be set from the input record if the two-element event input is used or a custom parser configuration supplies a timestamp. Otherwise the event timestamp will be set to the timestamp at which the record is read by the `stdin` plugin.
+The Fluent Bit event timestamp will be set from the input record if the two-element event input is used or a custom parser configuration supplies a timestamp. Otherwise, the event timestamp will be set to the timestamp at which the record is read by the `stdin` plugin.
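+
+For example, a sketch in which the record supplies its own timestamp using the two-element event form:
+
+```shell
+echo '[1660000000, {"message": "hello"}]' | fluent-bit -i stdin -o stdout
+```
+
+The emitted record carries the timestamp `1660000000` rather than the time at which the line was read.
+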
## Examples
@@ -170,10 +170,10 @@ For example, if you want to read raw messages line by line and forward them, you
```yaml
parsers:
- - name: stringify_message
- format: regex
- key_name: message
- regex: '^(?<message>.*)'
+ - name: stringify_message
+ format: regex
+ key_name: message
+ regex: '^(?<message>.*)'
```
{% endtab %}
@@ -181,10 +181,10 @@ parsers:
```text
[PARSER]
- name stringify_message
- format regex
- Key_Name message
- regex ^(?<message>.*)
+ name stringify_message
+ format regex
+ Key_Name message
+ regex ^(?<message>.*)
```
{% endtab %}
@@ -197,17 +197,17 @@ You can then use the parsers file in a `stdin` plugin in the main Fluent Bit con
```yaml
service:
- parsers_file: parsers.yaml
+ parsers_file: parsers.yaml
pipeline:
- inputs:
- - name: stdin
- tag: stdin
- parser: stringify_message
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: stdin
+ tag: stdin
+ parser: stringify_message
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -215,16 +215,16 @@ pipeline:
```text
[SERVICE]
- parsers_file parsers.conf
+ parsers_file parsers.conf
[INPUT]
- Name stdin
- Tag stdin
- Parser stringify_message
+ Name stdin
+ Tag stdin
+ Parser stringify_message
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -243,36 +243,13 @@ seq 1 5 | ./fluent-bit --config fluent-bit.conf
Which returns output similar to:
```text
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/03 14:32:54] [ info] [fluent bit] version=4.0.3, commit=3a91b155d6, pid=18569
-[2025/07/03 14:32:54] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/03 14:32:54] [ info] [simd ] disabled
-[2025/07/03 14:32:54] [ info] [cmetrics] version=1.0.3
-[2025/07/03 14:32:54] [ info] [ctraces ] version=0.6.6
-[2025/07/03 14:32:54] [ info] [input:stdin:stdin.0] initializing
-[2025/07/03 14:32:54] [ info] [input:stdin:stdin.0] storage_strategy='memory' (memory only)
-[2025/07/03 14:32:54] [ info] [sp] stream processor started
-[2025/07/03 14:32:54] [ info] [output:stdout:stdout.0] worker #0 started
-[2025/07/03 14:32:54] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/03 14:32:54] [ warn] [input:stdin:stdin.0] end of file (stdin closed by remote end)
-[2025/07/03 14:32:54] [ warn] [engine] service will shutdown in max 5 seconds
+...
[0] stdin: [[1751545974.960182000, {}], {"message"=>"1"}]
[1] stdin: [[1751545974.960246000, {}], {"message"=>"2"}]
[2] stdin: [[1751545974.960255000, {}], {"message"=>"3"}]
[3] stdin: [[1751545974.960262000, {}], {"message"=>"4"}]
[4] stdin: [[1751545974.960268000, {}], {"message"=>"5"}]
+...
```
In production deployments it's best to use a parser that splits messages into real fields and adds appropriate tags.
\ No newline at end of file
diff --git a/pipeline/inputs/statsd.md b/pipeline/inputs/statsd.md
index 99012e270..34af3ce80 100644
--- a/pipeline/inputs/statsd.md
+++ b/pipeline/inputs/statsd.md
@@ -6,12 +6,12 @@ The _StatsD_ input plugin lets you receive metrics using the StatsD protocol.
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Listen` | Listener network interface. | `0.0.0.0` |
-| `Port` | UDP port that listens for connections. | `8125` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
-| `Metrics` | Ingested record will be marked as a metric record rather than a log record. | `off` |
+| Key | Description | Default |
+|:-----------|:--------------------------------------------------------------------------------------------------------|:----------|
+| `Listen` | Listener network interface. | `0.0.0.0` |
+| `Port` | UDP port that listens for connections. | `8125` |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| `Metrics` | Ingested records will be marked as metric records rather than log records. | `off` |
When enabling `Metrics On`, Fluent Bit will also handle metrics from the DogStatsD protocol. The internal record in Fluent Bit will be handled as a metric type for downstream processing.
@@ -31,14 +31,14 @@ Here is a configuration example.
```yaml
pipeline:
- inputs:
- - name: statsd
- listen: 0.0.0.0
- port: 8125
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: statsd
+ listen: 0.0.0.0
+ port: 8125
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -46,13 +46,13 @@ pipeline:
```text
[INPUT]
- Name statsd
- Listen 0.0.0.0
- Port 8125
+ Name statsd
+ Listen 0.0.0.0
+ Port 8125
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -69,8 +69,10 @@ echo "active:99|g" | nc -q0 -u 127.0.0.1 8125
Fluent Bit will produce the following records:
```text
+...
[0] statsd.0: [1574905088.971380537, {"type"=>"counter", "bucket"=>"click", "value"=>10.000000, "sample_rate"=>0.100000}]
[0] statsd.0: [1574905141.863344517, {"type"=>"gauge", "bucket"=>"active", "value"=>99.000000, "incremental"=>0}]
+...
```
## Metrics setup
@@ -82,15 +84,15 @@ Here is a configuration example for metrics setup.
```yaml
pipeline:
- inputs:
- - name: statsd
- listen: 0.0.0.0
- port: 8125
- metrics: On
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: statsd
+ listen: 0.0.0.0
+ port: 8125
+ metrics: On
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -98,14 +100,14 @@ pipeline:
```text
[INPUT]
- Name statsd
- Listen 0.0.0.0
- Port 8125
- Metrics On
+ Name statsd
+ Listen 0.0.0.0
+ Port 8125
+ Metrics On
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -124,7 +126,9 @@ echo "inactive:29|g|@0.0125|#hi:from_fluent-bit" | nc -q0 -u 127.0.0.1 8125
Fluent Bit will produce the following metrics events:
```text
+...
2025-01-09T11:40:26.562424694Z click{incremental="true",hello="tag"} = 1000
2025-01-09T11:40:28.591477424Z active{incremental="true"} = 9900
2025-01-09T11:40:31.593118033Z inactive{hi="from_fluent-bit"} = 2320
+...
```
\ No newline at end of file
diff --git a/pipeline/inputs/syslog.md b/pipeline/inputs/syslog.md
index 5e3977e2e..8a42147fb 100644
--- a/pipeline/inputs/syslog.md
+++ b/pipeline/inputs/syslog.md
@@ -6,19 +6,19 @@ The _Syslog_ input plugin lets you collect `syslog` messages through a Unix sock
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Mode` | Defines transport protocol mode: UDP over Unix socket (`unix_udp`), TCP over Unix socket (`unix_tcp`), `tcp`, or `udp` | `unix_udp` |
-| `Listen` | If `Mode` is set to `tcp` or `udp`, specify the network interface to bind. | `0.0.0.0` |
-| `Port` | If `Mode` is set to `tcp` or `udp`, specify the TCP port to listen for incoming connections. | `5140` |
-| `Path` | If `Mode` is set to `unix_tcp` or `unix_udp`, set the absolute path to the Unix socket file. | _none_ |
-| `Unix_Perm` | If `Mode` is set to `unix_tcp` or `unix_udp`, set the permission of the Unix socket file. | `0644` |
-| `Parser` | Specify an alternative parser for the message. If `Mode` is set to `tcp` or `udp` then the default parser is `syslog-rfc5424`. Otherwise, `syslog-rfc3164-local` is used. If your syslog` messages have fractional seconds set this parser value to `syslog-rfc5424` instead. | _none_ |
-| `Buffer_Chunk_Size` | By default, the buffer to store the incoming `syslog` messages. Doesn't allocate the maximum memory allowed, instead it allocates memory when required. The rounds of allocations are set by `Buffer_Chunk_Size`. There are considerations when using `udp` or `unix_udp` mode. | `32KB` (set in code) |
-| `Buffer_Max_Size` | Specify the maximum buffer size to receive a `syslog` message. If not set, the default size is the value of `Buffer_Chunk_Size`. | _none_ |
-| `Receive_Buffer_Size` | Specify the maximum socket receive buffer size. If not set, the default value is OS-dependant, but generally too low to accept thousands of syslog messages per second without loss on `udp` or `unix_udp` sockets. For Linux, the value is capped by `sysctl net.core.rmem_max`. | _none_ |
-| `Source_Address_Key` | Specify the key where the source address will be injected. | _none_ |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:----------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------|
+| `Mode` | Defines transport protocol mode: UDP over Unix socket (`unix_udp`), TCP over Unix socket (`unix_tcp`), `tcp`, or `udp`. | `unix_udp` |
+| `Listen` | If `Mode` is set to `tcp` or `udp`, specify the network interface to bind. | `0.0.0.0` |
+| `Port` | If `Mode` is set to `tcp` or `udp`, specify the TCP port to listen for incoming connections. | `5140` |
+| `Path` | If `Mode` is set to `unix_tcp` or `unix_udp`, set the absolute path to the Unix socket file. | _none_ |
+| `Unix_Perm` | If `Mode` is set to `unix_tcp` or `unix_udp`, set the permission of the Unix socket file. | `0644` |
+| `Parser` | Specify an alternative parser for the message. If `Mode` is set to `tcp` or `udp`, the default parser is `syslog-rfc5424`. Otherwise, `syslog-rfc3164-local` is used. If your `syslog` messages have fractional seconds, set this parser value to `syslog-rfc5424` instead. | _none_ |
+| `Buffer_Chunk_Size` | The buffer used to store incoming `syslog` messages. The plugin doesn't allocate the maximum memory allowed up front; instead, it allocates memory in rounds of `Buffer_Chunk_Size` as required. There are considerations when using `udp` or `unix_udp` mode. | `32KB` (set in code) |
+| `Buffer_Max_Size` | Specify the maximum buffer size to receive a `syslog` message. If not set, the default size is the value of `Buffer_Chunk_Size`. | _none_ |
+| `Receive_Buffer_Size` | Specify the maximum socket receive buffer size. If not set, the default value is OS-dependent, but generally too low to accept thousands of syslog messages per second without loss on `udp` or `unix_udp` sockets. For Linux, the value is capped by `sysctl net.core.rmem_max`. | _none_ |
+| `Source_Address_Key` | Specify the key where the source address will be injected. | _none_ |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
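+
+For example, a sketch that listens on TCP and records each sender's address under a `source` key by way of `Source_Address_Key`:
+
+```yaml
+pipeline:
+  inputs:
+    - name: syslog
+      mode: tcp
+      listen: 0.0.0.0
+      port: 5140
+      # Each record gains a "source" key holding the sender's address.
+      source_address_key: source
+```
+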
### Considerations
@@ -41,7 +41,7 @@ From the command line you can let Fluent Bit listen for `Forward` messages with
./fluent-bit -R /path/to/parsers.conf -i syslog -p path=/tmp/in_syslog -o stdout
```
-By default the service will create and listen for Syslog messages on the Unix socket `/tmp/in_syslog`.
+By default, the service will create and listen for Syslog messages on the Unix socket `/tmp/in_syslog`.
### Configuration file
@@ -52,21 +52,21 @@ In your main configuration file append the following sections:
```yaml
service:
- flush: 1
- log_level: info
- parsers_file: parsers.yaml
+ flush: 1
+ log_level: info
+ parsers_file: parsers.yaml
pipeline:
- inputs:
- - name: syslog
- path: /tmp/in_syslog
- buffer_chunk_size: 32000
- buffer_max_size: 64000
- receive_buffer_size: 512000
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: syslog
+ path: /tmp/in_syslog
+ buffer_chunk_size: 32000
+ buffer_max_size: 64000
+ receive_buffer_size: 512000
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -74,20 +74,20 @@ pipeline:
```text
[SERVICE]
- Flush 1
- Log_Level info
- Parsers_File parsers.conf
+ Flush 1
+ Log_Level info
+ Parsers_File parsers.conf
[INPUT]
- Name syslog
- Path /tmp/in_syslog
- Buffer_Chunk_Size 32000
- Buffer_Max_Size 64000
- Receive_Buffer_Size 512000
+ Name syslog
+ Path /tmp/in_syslog
+ Buffer_Chunk_Size 32000
+ Buffer_Max_Size 64000
+ Receive_Buffer_Size 512000
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -104,7 +104,7 @@ logger -u /tmp/in_syslog my_ident my_message
Then run Fluent bit using the following command:
```shell
-# For YAML ocnfiguration.
+# For YAML configuration.
./fluent-bit -R ../conf/parsers.yaml -i syslog -p path=/tmp/in_syslog -o stdout
# For classic configuration.
@@ -114,30 +114,9 @@ Then run Fluent bit using the following command:
You should see the following output:
```text
-Fluent Bit v4.0.3
-* Copyright (C) 2015-2025 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-______ _ _ ______ _ _ ___ _____
-| ___| | | | | ___ (_) | / || _ |
-| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __/ /| || |/' |
-| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| || /| |
-| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /\___ |\ |_/ /
-\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ |_(_)___/
-
-
-[2025/07/01 14:44:47] [ info] [fluent bit] version=4.0.3, commit=f5f5f3c17d, pid=1
-[2025/07/01 14:44:47] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2025/07/01 14:44:47] [ info] [simd ] disabled
-[2025/07/01 14:44:47] [ info] [cmetrics] version=1.0.3
-[2025/07/01 14:44:47] [ info] [ctraces ] version=0.6.6
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] initializing
-[2025/07/01 14:44:47] [ info] [input:mem:mem.0] storage_strategy='memory' (memory only)
-[2025/07/01 14:44:47] [ info] [sp] stream processor started
-[2025/07/01 14:44:47] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
-[2025/07/01 14:44:47] [ info] [output:stdout:stdout.0] worker #0 started
+...
[0] syslog.0: [1489047822, {"pri"=>"13", "host"=>"edsiper:", "ident"=>"my_ident", "pid"=>"", "message"=>"my_message"}]
+...
```
## Examples
@@ -155,20 +134,20 @@ Put the following content in your configuration file:
```yaml
service:
- flush: 1
- parsers_file: parsers.yaml
+ flush: 1
+ parsers_file: parsers.yaml
pipeline:
- inputs:
- - name: syslog
- parser: syslog-rfc3164
- listen: 0.0.0.0
- port: 5140
- mode: tcp
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: syslog
+ parser: syslog-rfc3164
+ listen: 0.0.0.0
+ port: 5140
+ mode: tcp
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -176,19 +155,19 @@ pipeline:
```text
[SERVICE]
- Flush 1
- Parsers_File parsers.conf
+ Flush 1
+ Parsers_File parsers.conf
[INPUT]
- Name syslog
- Parser syslog-rfc3164
- Listen 0.0.0.0
- Port 5140
- Mode tcp
+ Name syslog
+ Parser syslog-rfc3164
+ Listen 0.0.0.0
+ Port 5140
+ Mode tcp
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -221,20 +200,20 @@ Put the following content in your Fluent Bit configuration:
```yaml
service:
- flush: 1
- parsers_file: parsers.yaml
+ flush: 1
+ parsers_file: parsers.yaml
pipeline:
- inputs:
- - name: syslog
- parser: syslog-rfc3164
- path: /tmp/fluent-bit.sock
- mode: unix_udp
- unix_perm: 0644
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: syslog
+ parser: syslog-rfc3164
+ path: /tmp/fluent-bit.sock
+ mode: unix_udp
+ unix_perm: 0644
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -242,19 +221,19 @@ pipeline:
```text
[SERVICE]
- Flush 1
- Parsers_File parsers.conf
+ Flush 1
+ Parsers_File parsers.conf
[INPUT]
- Name syslog
- Parser syslog-rfc3164
- Path /tmp/fluent-bit.sock
- Mode unix_udp
- Unix_Perm 0644
+ Name syslog
+ Parser syslog-rfc3164
+ Path /tmp/fluent-bit.sock
+ Mode unix_udp
+ Unix_Perm 0644
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
diff --git a/pipeline/inputs/systemd.md b/pipeline/inputs/systemd.md
index 27964437b..dda63e963 100644
--- a/pipeline/inputs/systemd.md
+++ b/pipeline/inputs/systemd.md
@@ -6,20 +6,20 @@ The _Systemd_ input plugin lets you collect log messages from the `journald` dae
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Path` | Optional path to the Systemd journal directory. If not set, the plugin uses default paths to read local-only logs. | _none_ |
-| `Max_Fields` | Set a maximum number of fields (keys) allowed per record. | `8000` |
-| `Max_Entries` | When Fluent Bit starts, the Journal might have a high number of logs in the queue. To avoid delays and reduce memory usage, use this option to specify the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once `journald` performs the notification. | `5000` |
-| `Systemd_Filter` | Perform a query over logs that contain specific `journald` key/value pairs. For example, `_SYSTEMD_UNIT=UNIT`. The `Systemd_Filter` option can be specified multiple times in the input section to apply multiple filters. | _none_ |
-| `Systemd_Filter_Type` | Define the filter type when `Systemd_Filter` is specified multiple times. Allowed values:`And`, `Or`. With `And` a record is matched only when all of the `Systemd_Filter` have a match. With `Or` a record is matched when any `Systemd_Filter` has a match. | `Or` |
-| `Tag` | The tag is used to route messages but on Systemd plugin there is an additional capability: if the tag includes a wildcard (`*`), it will be expanded with the Systemd Unit file (`_SYSTEMD_UNIT`, like `host.\* => host.UNIT_NAME`) or `unknown` (`host.unknown`) if `_SYSTEMD_UNIT` is missing. | _none_ |
-| `DB` | Specify the absolute path of a database file to keep track of the `journald` cursor. | _none_ |
-| `DB.Sync` | Set a default synchronization (I/O) method. Values: `Extra`, `Full`, `Normal`, and `Off`. This flag affects how the internal SQLite engine synchronizes to disk. For more details [SQL lite documentation](https://www.sqlite.org/pragma.html#pragma_synchronous). Available in Fluent Bit v1.4.6 and later. | `Full` |
-| `Read_From_Tail` | Start reading new entries. Skip entries already stored in`journald`. | `Off` |
-| `Lowercase` | Lowercase the `journald` field (key). | `Off` |
-| `Strip_Underscores` | Remove the leading underscore of the `journald` field (key). For example, the `journald` field `_PID` becomes the key `PID`. | `Off` |
-| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:----------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------|
+| `Path` | Optional path to the Systemd journal directory. If not set, the plugin uses default paths to read local-only logs. | _none_ |
+| `Max_Fields` | Set a maximum number of fields (keys) allowed per record. | `8000` |
+| `Max_Entries` | When Fluent Bit starts, the Journal might have a high number of logs in the queue. To avoid delays and reduce memory usage, use this option to specify the maximum number of log entries that can be processed per round. Once the limit is reached, Fluent Bit will continue processing the remaining log entries once `journald` performs the notification. | `5000` |
+| `Systemd_Filter` | Perform a query over logs that contain specific `journald` key/value pairs. For example, `_SYSTEMD_UNIT=UNIT`. The `Systemd_Filter` option can be specified multiple times in the input section to apply multiple filters. | _none_ |
+| `Systemd_Filter_Type` | Define the filter type when `Systemd_Filter` is specified multiple times. Allowed values: `And`, `Or`. With `And`, a record is matched only when all of the `Systemd_Filter` entries match. With `Or`, a record is matched when any `Systemd_Filter` matches. | `Or` |
+| `Tag` | The tag is used to route messages, but the Systemd plugin has an additional capability: if the tag includes a wildcard (`*`), it will be expanded with the Systemd unit file (`_SYSTEMD_UNIT`, like `host.\* => host.UNIT_NAME`) or `unknown` (`host.unknown`) if `_SYSTEMD_UNIT` is missing. | _none_ |
+| `DB` | Specify the absolute path of a database file to keep track of the `journald` cursor. | _none_ |
+| `DB.Sync` | Set a default synchronization (I/O) method. Values: `Extra`, `Full`, `Normal`, and `Off`. This flag affects how the internal SQLite engine synchronizes to disk. For more details, see the [SQLite documentation](https://www.sqlite.org/pragma.html#pragma_synchronous). Available in Fluent Bit v1.4.6 and later. | `Full` |
+| `Read_From_Tail` | Start reading new entries. Skip entries already stored in `journald`. | `Off` |
+| `Lowercase` | Lowercase the `journald` field (key). | `Off` |
+| `Strip_Underscores` | Remove the leading underscore of the `journald` field (key). For example, the `journald` field `_PID` becomes the key `PID`. | `Off` |
+| `Threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
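+
+Because `Systemd_Filter` can be specified multiple times, the classic configuration format expresses an `And` combination by repeating the key. The following sketch matches only error-priority messages (`PRIORITY=3` in `journald`) from the Docker unit:
+
+```text
+[INPUT]
+    Name                systemd
+    Tag                 host.*
+    Systemd_Filter_Type And
+    Systemd_Filter      _SYSTEMD_UNIT=docker.service
+    Systemd_Filter      PRIORITY=3
+```
+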
## Get started
@@ -31,8 +31,9 @@ From the command line you can let Fluent Bit listen for Systemd messages with th
```shell
fluent-bit -i systemd \
- -p systemd_filter=_SYSTEMD_UNIT=docker.service \
- -p tag='host.*' -o stdout
+ -p systemd_filter=_SYSTEMD_UNIT=docker.service \
+ -p tag='host.*' \
+ -o stdout
```
This example collects all messages coming from the Docker service.
@@ -46,18 +47,18 @@ In your main configuration file append the following sections:
```yaml
service:
- flush: 1
- log_level: info
- parsers_file: parsers.yaml
+ flush: 1
+ log_level: info
+ parsers_file: parsers.yaml
pipeline:
- inputs:
- - name: systemd
- tag: host.*
- systemd_filter: _SYSTEMD_UNIT=docker.service
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: systemd
+ tag: host.*
+ systemd_filter: _SYSTEMD_UNIT=docker.service
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
@@ -65,19 +66,19 @@ pipeline:
```text
[SERVICE]
- Flush 1
- Log_Level info
- Parsers_File parsers.conf
+ Flush 1
+ Log_Level info
+ Parsers_File parsers.conf
[INPUT]
- Name systemd
- Tag host.*
- Systemd_Filter _SYSTEMD_UNIT=docker.service
+ Name systemd
+ Tag host.*
+ Systemd_Filter _SYSTEMD_UNIT=docker.service
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
-{% endtabs %}
+{% endtabs %}
\ No newline at end of file
diff --git a/pipeline/inputs/tail.md b/pipeline/inputs/tail.md
index b75abd036..16f78a743 100644
--- a/pipeline/inputs/tail.md
+++ b/pipeline/inputs/tail.md
@@ -2,41 +2,41 @@
The _Tail_ input plugin lets you monitor text files. Its behavior is similar to the `tail -f` shell command.
-The plugin reads every matched file in the `Path` pattern. For every new line found (separated by a newline character (`\n`), it generates a new record. Optionally, you can use a database file so the plugin can have a history of tracked files and a state of offsets. This helps resume a state if the service is restarted.
+The plugin reads every matched file in the `Path` pattern. For every new line found (separated by a newline character `\n`), it generates a new record. Optionally, you can use a database file so the plugin can have a history of tracked files and a state of offsets. This helps resume a state if the service is restarted.
## Configuration parameters
The plugin supports the following configuration parameters:
-| Key | Description | Default |
-| :-- | :---------- | :------ |
-| `buffer_chunk_size` | Set the initial buffer size to read file data. This value is used to increase buffer size. The value must be according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | `32k` |
-| `buffer_max_size` | Set the limit of the buffer size per monitored file. When a buffer needs to be increased, this value is used to restrict the memory buffer growth. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must be according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | `32k` |
-| `path` | Pattern specifying a specific log file or multiple ones through the use of common wildcards. Allows multiple patterns separated by commas. | _none_ |
-| `path_key` | If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map. | _none_ |
-| `exclude_path` | Set one or multiple shell patterns separated by commas to exclude files matching certain criteria, For example, `exclude_path *.gz,*.zip`. | _none_ |
-| `offset_key` | If enabled, Fluent Bit appends the offset of the current monitored file as part of the record. The value assigned becomes the key in the map. | _none_ |
-| `read_from_head` | For new discovered files on start (without a database offset/position), read the content from the head of the file, not tail. | `false` |
-| `refresh_interval` | The interval of refreshing the list of watched files in seconds. | `60` |
-| `rotate_wait` | Specify the number of extra time in seconds to monitor a file once is rotated in case some pending data is flushed. | `5` |
-| `ignore_older` | Ignores files older than `ignore_older`. Supports `m`, `h`, `d` (minutes, hours, days) syntax. | Read all. |
-| `skip_long_lines` | When a monitored file reaches its buffer capacity due to a very long line (`buffer_max_size`), the default behavior is to stop monitoring that file. `skip_long_lines` alter that behavior and instruct Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size. | `off` |
-| `skip_empty_lines` | Skips empty lines in the log file from any further processing or output. | `off` |
-| `db` | Specify the database file to keep track of monitored files and offsets. | _none_ |
-| `db.sync` | Set a default synchronization (I/O) method. This flag affects how the internal SQLite engine do synchronization to disk, for more details about each option see [the SQLite documentation](https://www.sqlite.org/pragma.html#pragma_synchronous). Most scenarios will be fine with `normal` mode. If you need full synchronization after every write operation set `full` mode. `full` has a high I/O performance cost. Values: `extra`, `full`, `normal`, `off`. | `normal` |
-| `db.locking` | Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps increase performance when accessing the database but restricts externals tool from querying the content. | `false` |
-| `db.journal_mode` | Sets the journal mode for databases (`wal`). Enabling `wal` provides higher performance. `wal` isn't compatible with shared network file systems. | `wal` |
-| `db.compare_filename` | This option determines whether to review both `inode` and `filename` when retrieving stored file information from the database. `true` verifies both `inode` and `filename`, while `false` checks only the `inode`. To review the `inode` and `filename` in the database, refer [see `keep_state`](#tailing-files-keeping-state). | `false` |
-| `mem_buf_limit` | Set a memory limit that Tail plugin can use when appending data to the engine. If the limit is reached, it will be paused. When the data is flushed it resumes. | _none_ |
-| `exit_on_eof` | When reading a file will exit as soon as it reach the end of the file. Used for bulk load and tests. | `false` |
-| `parser` | Specify the name of a parser to interpret the entry as a structured message. | _none_ |
-| `key` | When a message is unstructured (no parser applied), it's appended as a string under the key name `log`. This option lets you define an alternative name for that key. | `log` |
-| `inotify_watcher` | Set to `false` to use file stat watcher instead of `inotify`. | `true` |
-| `tag` | Set a tag with `regexextract` fields that will be placed on lines read. For example, `kube....`. Tag expansion is supported: if the tag includes an asterisk (`*`), that asterisk will be replaced with the absolute path of the monitored file, with slashes replaced by dots. See [Workflow of Tail + Kubernetes Filter](../filters/kubernetes.md#workflow-of-tail--kubernetes-filter). | _none_ |
-| `tag_regex` | Set a regular expression to extract fields from the filename. For example: `(?[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?[^_]+)_(?.+)-(?[a-z0-9]{64})\.log$`. | _none_ |
-| `static_batch_size` | Set the maximum number of bytes to process per iteration for the monitored static files (files that already exist upon Fluent Bit start). | `50M` |
-| `file_cache_advise` | Set the `posix_fadvise` in `POSIX_FADV_DONTNEED` mode. This reduces the usage of the kernel file cache. This option is ignored if not running on Linux. | `on` |
-| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+| Key | Description | Default |
+|:----------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------|
+| `buffer_chunk_size`   | Set the initial buffer size to read file data. This value is also used to increase the buffer size. The value must conform to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | `32k` |
+| `buffer_max_size`     | Set the limit of the buffer size per monitored file. When a buffer needs to be increased, this value restricts how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must conform to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | `32k` |
+| `path` | Pattern specifying a specific log file or multiple ones through the use of common wildcards. Allows multiple patterns separated by commas. | _none_ |
+| `path_key` | If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map. | _none_ |
+| `exclude_path`        | Set one or multiple shell patterns separated by commas to exclude files matching certain criteria. For example, `exclude_path *.gz,*.zip`. | _none_ |
+| `offset_key` | If enabled, Fluent Bit appends the offset of the current monitored file as part of the record. The value assigned becomes the key in the map. | _none_ |
+| `read_from_head`      | For newly discovered files on start (without a database offset/position), read the content from the head of the file instead of the tail. | `false` |
+| `refresh_interval`    | The interval, in seconds, to refresh the list of watched files. | `60` |
+| `rotate_wait`         | Specify the amount of extra time, in seconds, to keep monitoring a file after it's rotated, in case pending data hasn't been flushed. | `5` |
+| `ignore_older` | Ignores files older than `ignore_older`. Supports `m`, `h`, `d` (minutes, hours, days) syntax. | Read all. |
+| `skip_long_lines`     | When a monitored file reaches its buffer capacity due to a very long line (`buffer_max_size`), the default behavior is to stop monitoring that file. `skip_long_lines` alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size. | `off` |
+| `skip_empty_lines` | Skips empty lines in the log file from any further processing or output. | `off` |
+| `db` | Specify the database file to keep track of monitored files and offsets. | _none_ |
+| `db.sync`             | Set a default synchronization (I/O) method. This flag affects how the internal SQLite engine synchronizes to disk. For details about each option, see [the SQLite documentation](https://www.sqlite.org/pragma.html#pragma_synchronous). Most scenarios work well with `normal` mode. If you need full synchronization after every write operation, set `full` mode, which has a high I/O performance cost. Values: `extra`, `full`, `normal`, `off`. | `normal` |
+| `db.locking`          | Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps increase performance when accessing the database, but restricts external tools from querying its content. | `false` |
+| `db.journal_mode`     | Sets the journal mode for databases. Enabling `wal` provides higher performance, but `wal` isn't compatible with shared network file systems. | `wal` |
+| `db.compare_filename` | This option determines whether to review both `inode` and `filename` when retrieving stored file information from the database. `true` verifies both `inode` and `filename`, while `false` checks only the `inode`. To review the `inode` and `filename` in the database, see [`keep_state`](#tailing-files-keeping-state). | `false` |
+| `mem_buf_limit`       | Set a memory limit that the Tail plugin can use when appending data to the engine. If the limit is reached, the plugin pauses; when the data is flushed, it resumes. | _none_ |
+| `exit_on_eof`         | When reading a file, exit as soon as the end of the file is reached. Used for bulk loading and tests. | `false` |
+| `parser` | Specify the name of a parser to interpret the entry as a structured message. | _none_ |
+| `key` | When a message is unstructured (no parser applied), it's appended as a string under the key name `log`. This option lets you define an alternative name for that key. | `log` |
+| `inotify_watcher` | Set to `false` to use file stat watcher instead of `inotify`. | `true` |
+| `tag`                 | Set a tag (with regex-extract fields) that will be placed on lines read. For example, `kube.<namespace_name>.<pod_name>.<container_name>`. Tag expansion is supported: if the tag includes an asterisk (`*`), that asterisk will be replaced with the absolute path of the monitored file, with slashes replaced by dots. See [Workflow of Tail + Kubernetes Filter](../filters/kubernetes.md#workflow-of-tail--kubernetes-filter). | _none_ |
+| `tag_regex`           | Set a regular expression to extract fields from the filename. For example: `(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$`. | _none_ |
+| `static_batch_size` | Set the maximum number of bytes to process per iteration for the monitored static files (files that already exist upon Fluent Bit start). | `50M` |
+| `file_cache_advise`   | Set `posix_fadvise` to `POSIX_FADV_DONTNEED` mode. This reduces usage of the kernel file cache. This option is ignored when not running on Linux. | `on` |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
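+
+For illustration, several of these parameters might be combined as follows (the paths and limits are placeholder values, not recommendations from the original docs):
+
+```yaml
+pipeline:
+  inputs:
+    - name: tail
+      path: /var/log/app/*.log
+      exclude_path: '*.gz,*.zip'
+      buffer_chunk_size: 32k
+      buffer_max_size: 64k
+      skip_long_lines: on
+      skip_empty_lines: on
+      mem_buf_limit: 5M
+```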
## Buffers and memory management
@@ -102,7 +102,7 @@ You can also provide a custom systemd configuration file that overrides the defa
```yaml
service:
- limitnofile: LIMIT
+ limitnofile: LIMIT
```
{% endtab %}
@@ -110,7 +110,7 @@ service:
```text
[Service]
-LimitNOFILE=LIMIT
+ LimitNOFILE=LIMIT
```
{% endtab %}
@@ -135,8 +135,8 @@ Fluent Bit 1.8 and later supports multiline core capabilities for the Tail input
The new multiline core is exposed by the following configuration:
-| Key | Description |
-| :--- | :--- |
+| Key | Description |
+|:-------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|
| `multiline.parser` | Specify one or multiple [Multiline Parser definitions](../../administration/configuring-fluent-bit/multiline-parsing.md) to apply to the content. |
[Multiline Parser](../../administration/configuring-fluent-bit/multiline-parsing.md) provides built-in configuration modes. When using a new `multiline.parser` definition, you must disable the old configuration from your tail section like:
@@ -157,10 +157,10 @@ If you are running Fluent Bit to process logs coming from containers like Docker
```yaml
pipeline:
- inputs:
- - name: tail
- path: /var/log/containers/*.log
- multiline.parser: docker, cri
+ inputs:
+ - name: tail
+ path: /var/log/containers/*.log
+ multiline.parser: docker, cri
```
{% endtab %}
@@ -168,9 +168,9 @@ pipeline:
```text
[INPUT]
- name tail
- path /var/log/containers/*.log
- multiline.parser docker, cri
+ name tail
+ path /var/log/containers/*.log
+ multiline.parser docker, cri
```
{% endtab %}
@@ -186,22 +186,22 @@ For example, it will first try `docker`, and if `docker` doesn't match, it will
For the old multiline configuration, the following options exist to configure the handling of multiline logs:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `multiline` | If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. When this option is enabled the Parser option isn't used. | `off` |
-| `multiline_flush` | Wait period time in seconds to process queued multiline messages. | `4` |
-| `parser_firstline` | Name of the parser that matches the beginning of a multiline message. The regular expression defined in the parser must include a group name (named `capture`), and the value of the last match group must be a string. | _none_ |
-| `parser_N` | Optional. Extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers. For example, `parser_1 ab1`, `parser_2 ab2`, `parser_N abN`. | _none_ |
+| Key | Description | Default |
+|:-------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------|
+| `multiline`        | If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. When this option is enabled, the `parser` option isn't used. | `off` |
+| `multiline_flush`  | Wait period, in seconds, to process queued multiline messages. | `4` |
+| `parser_firstline` | Name of the parser that matches the beginning of a multiline message. The regular expression defined in the parser must include a group name (named `capture`), and the value of the last match group must be a string. | _none_ |
+| `parser_N` | Optional. Extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers. For example, `parser_1 ab1`, `parser_2 ab2`, `parser_N abN`. | _none_ |
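+
+As a sketch of the old multiline mode, assuming parsers named `multiline_head` and `multiline_rest` are defined in your parsers file (both names are hypothetical):
+
+```yaml
+pipeline:
+  inputs:
+    - name: tail
+      path: /var/log/app/stacktrace.log
+      multiline: on
+      parser_firstline: multiline_head
+      parser_1: multiline_rest
+```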
### Old Docker mode configuration parameters
Docker mode exists to recombine JSON log lines split by the Docker daemon due to its line length limit. To use this feature, configure the tail plugin with the corresponding parser and then enable Docker mode:
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `docker_mode` | If enabled, the plugin will recombine split Docker log lines before passing them to any parser. This mode can't be used at the same time as Multiline. | `Off` |
-| `docker_mode_flush` | Wait period time in seconds to flush queued unfinished split lines. | `4` |
-| `docker_mode_parser` | Specify an optional parser for the first line of the Docker multiline mode. The parser name to be specified must be registered in the `parsers.conf` file. | _none_ |
+| Key | Description | Default |
+|:---------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------|:--------|
+| `docker_mode`         | If enabled, the plugin will recombine split Docker log lines before passing them to any parser. This mode can't be used at the same time as multiline. | `off` |
+| `docker_mode_flush`   | Wait period, in seconds, to flush queued unfinished split lines. | `4` |
+| `docker_mode_parser` | Specify an optional parser for the first line of the Docker multiline mode. The parser name to be specified must be registered in the `parsers.conf` file. | _none_ |
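+
+For example, a minimal sketch of Docker mode, assuming a parser named `docker` is registered in your parsers file:
+
+```yaml
+pipeline:
+  inputs:
+    - name: tail
+      path: /var/log/containers/*.log
+      parser: docker
+      docker_mode: on
+      docker_mode_flush: 4
+```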
## Get started
@@ -224,13 +224,13 @@ Append the following in your main configuration file:
```yaml
pipeline:
- inputs:
- - name: tail
- path: /var/log/syslog
+ inputs:
+ - name: tail
+ path: /var/log/syslog
- outputs:
- - stdout:
- match: *
+  outputs:
+    - name: stdout
+      match: '*'
```
{% endtab %}
@@ -238,12 +238,12 @@ pipeline:
```text
[INPUT]
- Name tail
- Path /var/log/syslog
+ Name tail
+ Path /var/log/syslog
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -269,27 +269,27 @@ Specify a `parser_firstline` parameter that matches the first line of a multilin
In this case you can use the following parser, which extracts the time as `time` and the remaining portion of the multiline as `log`.
{% tabs %}
-{% tab title="fluent-bit.yaml" %}
+{% tab title="parsers.yaml" %}
```yaml
parsers:
- - name: multiline
- format: regex
- regex: '/(?