diff --git a/administration/backpressure.md b/administration/backpressure.md
index b666d489c..09d92a8ff 100644
--- a/administration/backpressure.md
+++ b/administration/backpressure.md
@@ -1,7 +1,5 @@
# Backpressure
-
-
It's possible for logs or data to be ingested or created faster than the ability to flush it to some destinations. A common scenario is when reading from big log files, especially with a large backlog, and dispatching the logs to a backend over the network, which takes time to respond. This generates _backpressure_, leading to high memory consumption in the service.
To avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data an input plugin can ingest. Restriction is done through the configuration parameters `Mem_Buf_Limit` and `storage.Max_Chunks_Up`.
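+
+As a rough illustration (the `tail` inputs and the paths shown are placeholders, not part of the official example set), the following sketch shows where each restriction applies: `mem_buf_limit` caps an input that buffers in memory, while `storage.max_chunks_up` governs how many chunks a filesystem-buffered input keeps up in memory.
+
+```yaml
+service:
+  # Needed for filesystem buffering; chunks beyond the limit remain on disk.
+  storage.path: /var/log/flb-storage/
+  storage.max_chunks_up: 128
+
+pipeline:
+  inputs:
+    # Placeholder input using memory buffering: ingestion is paused once
+    # 50MB of chunks are held in memory.
+    - name: tail
+      path: /var/log/app/*.log
+      mem_buf_limit: 50MB
+
+    # Placeholder input using filesystem buffering: mem_buf_limit doesn't
+    # apply here; storage.max_chunks_up controls the in-memory chunk count.
+    - name: tail
+      path: /var/log/app/*.log
+      storage.type: filesystem
+```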
@@ -70,4 +68,4 @@ With `storage.type filesystem` and `storage.max_chunks_up`, the following log me
```text
[input] {input name or alias} paused (storage buf overlimit)
[input] {input name or alias} resume (storage buf overlimit)
-```
+```
\ No newline at end of file
diff --git a/administration/buffering-and-storage.md b/administration/buffering-and-storage.md
index 42d77f64b..4b0230775 100644
--- a/administration/buffering-and-storage.md
+++ b/administration/buffering-and-storage.md
@@ -56,27 +56,26 @@ Choose your preferred format for an example input definition:
```yaml
pipeline:
- inputs:
- - name: tcp
- listen: 0.0.0.0
- port: 5170
- format: none
- tag: tcp-logs
- mem_buf_limit: 50MB
+ inputs:
+ - name: tcp
+ listen: 0.0.0.0
+ port: 5170
+ format: none
+ tag: tcp-logs
+ mem_buf_limit: 50MB
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[INPUT]
- Name tcp
- Listen 0.0.0.0
- Port 5170
- Format none
- Tag tcp-logs
- Mem_Buf_Limit 50MB
+ Name tcp
+ Listen 0.0.0.0
+ Port 5170
+ Format none
+ Tag tcp-logs
+ Mem_Buf_Limit 50MB
```
{% endtab %}
@@ -89,8 +88,9 @@ If this input uses more than 50 MB memory to buffer logs, you will get a wa
```
{% hint style="info" %}
-`mem_buf_Limit` applies only when `storage.type` is set to the default value of
-`memory`.
+
+`mem_buf_limit` applies only when `storage.type` is set to the default value of `memory`.
+
{% endhint %}
#### Filesystem buffering
@@ -156,28 +156,27 @@ A Service section will look like this:
```yaml
service:
- flush: 1
- log_level: info
- storage.path: /var/log/flb-storage/
- storage.sync: normal
- storage.checksum: off
- storage.backlog.mem_limit: 5M
- storage.backlog.flush_on_shutdown: off
+ flush: 1
+ log_level: info
+ storage.path: /var/log/flb-storage/
+ storage.sync: normal
+ storage.checksum: off
+ storage.backlog.mem_limit: 5M
+ storage.backlog.flush_on_shutdown: off
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[SERVICE]
- flush 1
- log_Level info
- storage.path /var/log/flb-storage/
- storage.sync normal
- storage.checksum off
- storage.backlog.mem_limit 5M
- storage.backlog.flush_on_shutdown off
+ flush 1
+    log_level info
+ storage.path /var/log/flb-storage/
+ storage.sync normal
+ storage.checksum off
+ storage.backlog.mem_limit 5M
+ storage.backlog.flush_on_shutdown off
```
{% endtab %}
@@ -201,44 +200,43 @@ The following example configures a service offering filesystem buffering capabil
```yaml
service:
- flush: 1
- log_level: info
- storage.path: /var/log/flb-storage/
- storage.sync: normal
- storage.checksum: off
- storage.max_chunks_up: 128
- storage.backlog.mem_limit: 5M
+ flush: 1
+ log_level: info
+ storage.path: /var/log/flb-storage/
+ storage.sync: normal
+ storage.checksum: off
+ storage.max_chunks_up: 128
+ storage.backlog.mem_limit: 5M
pipeline:
- inputs:
- - name: cpu
- storage.type: filesystem
+ inputs:
+ - name: cpu
+ storage.type: filesystem
- - name: mem
- storage.type: memory
+ - name: mem
+ storage.type: memory
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[SERVICE]
- flush 1
- log_Level info
- storage.path /var/log/flb-storage/
- storage.sync normal
- storage.checksum off
- storage.max_chunks_up 128
- storage.backlog.mem_limit 5M
+ flush 1
+    log_level info
+ storage.path /var/log/flb-storage/
+ storage.sync normal
+ storage.checksum off
+ storage.max_chunks_up 128
+ storage.backlog.mem_limit 5M
[INPUT]
- name cpu
- storage.type filesystem
+ name cpu
+ storage.type filesystem
[INPUT]
- name mem
- storage.type memory
+ name mem
+ storage.type memory
```
{% endtab %}
@@ -259,50 +257,49 @@ The following example creates records with CPU usage samples in the filesystem w
```yaml
service:
- flush: 1
- log_level: info
- storage.path: /var/log/flb-storage/
- storage.sync: normal
- storage.checksum: off
- storage.max_chunks_up: 128
- storage.backlog.mem_limit: 5M
+ flush: 1
+ log_level: info
+ storage.path: /var/log/flb-storage/
+ storage.sync: normal
+ storage.checksum: off
+ storage.max_chunks_up: 128
+ storage.backlog.mem_limit: 5M
pipeline:
- inputs:
- - name: cpu
- storage.type: filesystem
-
- outputs:
- - name: stackdriver
- match: '*'
- storage.total_limit_size: 5M
+ inputs:
+ - name: cpu
+ storage.type: filesystem
+
+ outputs:
+ - name: stackdriver
+ match: '*'
+ storage.total_limit_size: 5M
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[SERVICE]
- flush 1
- log_Level info
- storage.path /var/log/flb-storage/
- storage.sync normal
- storage.checksum off
- storage.max_chunks_up 128
- storage.backlog.mem_limit 5M
+ flush 1
+    log_level info
+ storage.path /var/log/flb-storage/
+ storage.sync normal
+ storage.checksum off
+ storage.max_chunks_up 128
+ storage.backlog.mem_limit 5M
[INPUT]
- name cpu
- storage.type filesystem
+ name cpu
+ storage.type filesystem
[OUTPUT]
- name stackdriver
- match *
- storage.total_limit_size 5M
+ name stackdriver
+ match *
+ storage.total_limit_size 5M
```
{% endtab %}
{% endtabs %}
-If Fluent Bit is offline because of a network issue, it will continue buffering CPU samples, keeping a maximum of 5 MB of the newest data.
+If Fluent Bit is offline because of a network issue, it will continue buffering CPU samples, keeping a maximum of 5 MB of the newest data.
\ No newline at end of file
diff --git a/administration/configuring-fluent-bit/multiline-parsing.md b/administration/configuring-fluent-bit/multiline-parsing.md
index 613353443..7715a740e 100644
--- a/administration/configuring-fluent-bit/multiline-parsing.md
+++ b/administration/configuring-fluent-bit/multiline-parsing.md
@@ -77,7 +77,6 @@ rules:
```
{% endtab %}
-
{% tab title="parsers_multiline.conf" %}
```text
@@ -110,88 +109,54 @@ This is the primary Fluent Bit YAML configuration file. It includes the `parsers
```yaml
service:
- flush: 1
- log_level: info
- parsers_file: parsers_multiline.yaml
+ flush: 1
+ log_level: info
+ parsers_file: parsers_multiline.yaml
pipeline:
- inputs:
- - name: tail
- path: test.log
- read_from_head: true
- multiline.parser: multiline-regex-test
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: tail
+ path: test.log
+ read_from_head: true
+ multiline.parser: multiline-regex-test
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
This is the primary Fluent Bit classic configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log` by applying the multiline parser `multiline-regex-test`. Then it sends the processed records to the standard output.
```text
[SERVICE]
- flush 1
- log_level info
- parsers_file parsers_multiline.conf
+ flush 1
+ log_level info
+ parsers_file parsers_multiline.conf
[INPUT]
- name tail
- path test.log
- read_from_head true
- multiline.parser multiline-regex-test
+ name tail
+ path test.log
+ read_from_head true
+ multiline.parser multiline-regex-test
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
-
{% tab title="parsers_multiline.yaml" %}
This file defines a multiline parser for the YAML configuration example.
```yaml
multiline_parsers:
- - name: multiline-regex-test
- type: regex
- flush_timeout: 1000
- #
- # Regex rules for multiline parsing
- # ---------------------------------
- #
- # configuration hints:
- #
- # - first state always has the name: start_state
- # - every field in the rule must be inside double quotes
- #
- # rules | state name | regex pattern | next state
- # ------|---------------|--------------------------------------------
- rules:
- - state: start_state
- regex: '/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/'
- next_state: cont
- - state: cont
- regex: '/^\s+at.*/'
- next_state: cont
-```
-
-
-{% endtab %}
-
-{% tab title="parsers_multiline.conf" %}
-
-This second file defines a multiline parser for the classic configuration example.
-
-```text
-[MULTILINE_PARSER]
- name multiline-regex-test
- type regex
- flush_timeout 1000
+ - name: multiline-regex-test
+ type: regex
+ flush_timeout: 1000
#
# Regex rules for multiline parsing
# ---------------------------------
@@ -203,12 +168,41 @@ This second file defines a multiline parser for the classic configuration exampl
#
# rules | state name | regex pattern | next state
# ------|---------------|--------------------------------------------
- rule "start_state" "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/" "cont"
- rule "cont" "/^\s+at.*/" "cont"
+ rules:
+ - state: start_state
+ regex: '/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/'
+ next_state: cont
+ - state: cont
+ regex: '/^\s+at.*/'
+ next_state: cont
```
{% endtab %}
+{% tab title="parsers_multiline.conf" %}
+
+This second file defines a multiline parser for the classic configuration example.
+
+```text
+[MULTILINE_PARSER]
+ name multiline-regex-test
+ type regex
+ flush_timeout 1000
+ #
+ # Regex rules for multiline parsing
+ # ---------------------------------
+ #
+ # configuration hints:
+ #
+ # - first state always has the name: start_state
+ # - every field in the rule must be inside double quotes
+ #
+ # rules | state name | regex pattern | next state
+ # ------|---------------|--------------------------------------------
+ rule "start_state" "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/" "cont"
+ rule "cont" "/^\s+at.*/" "cont"
+```
+{% endtab %}
{% tab title="test.log" %}
The example log file with multiline content:
@@ -269,111 +263,72 @@ The following example retrieves `date` and `message` from concatenated logs.
Example files content:
{% tabs %}
-
{% tab title="fluent-bit.yaml" %}
This is the primary Fluent Bit YAML configuration file. It includes the `parsers_multiline.yaml` and tails the file `test.log` by applying the multiline parser `multiline-regex-test`. It also parses the concatenated log by applying the parser `named-capture-test`. Then it sends the processed records to the standard output.
```yaml
service:
- flush: 1
- log_level: info
- parsers_file: parsers_multiline.yaml
+ flush: 1
+ log_level: info
+ parsers_file: parsers_multiline.yaml
pipeline:
- inputs:
- - name: tail
- path: test.log
- read_from_head: true
- multiline.parser: multiline-regex-test
-
- filters:
- - name: parser
- match: '*'
- key_name: log
- parser: named-capture-test
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: tail
+ path: test.log
+ read_from_head: true
+ multiline.parser: multiline-regex-test
+
+ filters:
+ - name: parser
+ match: '*'
+ key_name: log
+ parser: named-capture-test
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
This is the primary Fluent Bit classic configuration file. It includes the `parsers_multiline.conf` and tails the file `test.log` by applying the multiline parser `multiline-regex-test`. It also parses the concatenated log by applying the parser `named-capture-test`. Then it sends the processed records to the standard output.
```text
[SERVICE]
- flush 1
- log_level info
- parsers_file parsers_multiline.conf
+ flush 1
+ log_level info
+ parsers_file parsers_multiline.conf
[INPUT]
- name tail
- path test.log
- read_from_head true
- multiline.parser multiline-regex-test
+ name tail
+ path test.log
+ read_from_head true
+ multiline.parser multiline-regex-test
[FILTER]
- name parser
- match *
- key_name log
- parser named-capture-test
+ name parser
+ match *
+ key_name log
+ parser named-capture-test
[OUTPUT]
- name stdout
- match *
+ name stdout
+ match *
```
{% endtab %}
-
{% tab title="parsers_multiline.yaml" %}
This file defines a multiline parser for the YAML example.
```yaml
multiline_parsers:
- - name: multiline-regex-test
- type: regex
- flush_timeout: 1000
- #
- # Regex rules for multiline parsing
- # ---------------------------------
- #
- # configuration hints:
- #
- # - first state always has the name: start_state
- # - every field in the rule must be inside double quotes
- #
- # rules | state name | regex pattern | next state
- # ------|---------------|--------------------------------------------
- rules:
- - state: start_state
- regex: '/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/'
- next_state: cont
- - state: cont
- regex: '/^\s+at.*/'
- next_state: cont
-
-parsers:
- - name: named-capture-test
- format: regex
- regex: '/^(?[a-zA-Z]+ \d+ \d+\:\d+\:\d+) (?.*)/m'
- ```
-
-{% endtab %}
-
-{% tab title="parsers_multiline.conf" %}
-
-This file defines a multiline parser for the classic example.
-
-```text
-[MULTILINE_PARSER]
- name multiline-regex-test
- type regex
- flush_timeout 1000
+ - name: multiline-regex-test
+ type: regex
+ flush_timeout: 1000
#
# Regex rules for multiline parsing
# ---------------------------------
@@ -385,17 +340,52 @@ This file defines a multiline parser for the classic example.
#
# rules | state name | regex pattern | next state
# ------|---------------|--------------------------------------------
- rule "start_state" "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/" "cont"
- rule "cont" "/^\s+at.*/" "cont"
+ rules:
+ - state: start_state
+ regex: '/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/'
+ next_state: cont
+
+ - state: cont
+ regex: '/^\s+at.*/'
+ next_state: cont
+
+parsers:
+ - name: named-capture-test
+ format: regex
+    regex: '/^(?<date>[a-zA-Z]+ \d+ \d+\:\d+\:\d+) (?<message>.*)/m'
+```
+
+{% endtab %}
+{% tab title="parsers_multiline.conf" %}
+
+This file defines a multiline parser for the classic example.
+
+```text
+[MULTILINE_PARSER]
+ name multiline-regex-test
+ type regex
+ flush_timeout 1000
+ #
+ # Regex rules for multiline parsing
+ # ---------------------------------
+ #
+ # configuration hints:
+ #
+ # - first state always has the name: start_state
+ # - every field in the rule must be inside double quotes
+ #
+ # rules | state name | regex pattern | next state
+ # ------|---------------|--------------------------------------------
+ rule "start_state" "/([a-zA-Z]+ \d+ \d+\:\d+\:\d+)(.*)/" "cont"
+ rule "cont" "/^\s+at.*/" "cont"
[PARSER]
- Name named-capture-test
- Format regex
- Regex /^(?[a-zA-Z]+ \d+ \d+\:\d+\:\d+) (?.*)/m
+ Name named-capture-test
+ Format regex
+    Regex /^(?<date>[a-zA-Z]+ \d+ \d+\:\d+\:\d+) (?<message>.*)/m
```
{% endtab %}
-
{% tab title="test.log" %}
The example log file with multiline content:
@@ -435,4 +425,4 @@ $ ./fluent-bit --config fluent-bit.conf
"}]
[2] tail.0: [[1750333602.460998000, {}], {"log"=>"another line...
"}]
-```
+```
\ No newline at end of file
diff --git a/administration/memory-management.md b/administration/memory-management.md
index 4bc0464ec..39951394e 100644
--- a/administration/memory-management.md
+++ b/administration/memory-management.md
@@ -1,7 +1,5 @@
# Memory management
-
-
You might need to estimate how much memory Fluent Bit could be using in scenarios like containerized environments where memory limits are essential.
To make an estimate, in-use input plugins must set the `Mem_Buf_Limit` option. Learn more about it in [Backpressure](backpressure.md).
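+
+As a rough worked example (the `tail` inputs and paths below are placeholders), the configured limits give an upper bound for input buffering: these two inputs can hold at most 10MB + 20MB = 30MB of chunk data in memory, plus whatever additional overhead the engine and output serialization add on top.
+
+```yaml
+pipeline:
+  inputs:
+    # Placeholder input: at most 10MB of chunks buffered in memory.
+    - name: tail
+      path: /var/log/app/*.log
+      mem_buf_limit: 10MB
+
+    # Placeholder input: at most 20MB of chunks buffered in memory.
+    - name: tail
+      path: /var/log/syslog
+      mem_buf_limit: 20MB
+```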
@@ -22,8 +20,8 @@ It's strongly suggested that in any production environment, Fluent Bit should be
Use the following command to determine if Fluent Bit has been built with jemalloc:
-```bash
-bin/fluent-bit -h | grep JEMALLOC
+```shell
+fluent-bit -h | grep JEMALLOC
```
The output should look like:
@@ -35,4 +33,4 @@ FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY
```
-If the `FLB_HAVE_JEMALLOC` option is listed in `Build Flags`, jemalloc is enabled.
+If the `FLB_HAVE_JEMALLOC` option is listed in `Build Flags`, jemalloc is enabled.
\ No newline at end of file
diff --git a/administration/monitoring.md b/administration/monitoring.md
index bee5d0f83..999dec9b1 100644
--- a/administration/monitoring.md
+++ b/administration/monitoring.md
@@ -28,35 +28,34 @@ To get started, enable the HTTP server from the configuration file. The followin
```yaml
service:
- http_server: on
- http_listen: 0.0.0.0
- http_port: 2020
+ http_server: on
+ http_listen: 0.0.0.0
+ http_port: 2020
pipeline:
- inputs:
- - name: cpu
+ inputs:
+ - name: cpu
- outputs:
- - name: stdout
- match: '*'
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[SERVICE]
- HTTP_Server On
- HTTP_Listen 0.0.0.0
- HTTP_PORT 2020
+ HTTP_Server On
+ HTTP_Listen 0.0.0.0
+ HTTP_PORT 2020
[INPUT]
- Name cpu
+ Name cpu
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -66,21 +65,16 @@ Start Fluent bit with the corresponding configuration chosen previously:
```shell
# For YAML configuration.
-./bin/fluent-bit --config fluent-bit.yaml
+$ fluent-bit --config fluent-bit.yaml
# For classic configuration.
-./bin/fluent-bit --config fluent-bit.conf
+$ fluent-bit --config fluent-bit.conf
```
Fluent Bit starts and generates output in your terminal:
```shell
-Fluent Bit v1.4.0
-* Copyright (C) 2019-2020 The Fluent Bit Authors
-* Copyright (C) 2015-2018 Treasure Data
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
+...
[2020/03/10 19:08:24] [ info] [engine] started
[2020/03/10 19:08:24] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
```
@@ -242,12 +236,12 @@ The following are detailed descriptions for the metrics collected by the storage
Query the service uptime with the following command:
```shell
-$ curl -s http://127.0.0.1:2020/api/v1/uptime | jq
+curl -s http://127.0.0.1:2020/api/v1/uptime | jq
```
The command prints output similar to the following:
-```javascript
+```json
{
"uptime_sec": 8950000,
"uptime_hr": "Fluent Bit has been running: 103 days, 14 hours, 6 minutes and 40 seconds"
@@ -264,7 +258,7 @@ curl -s http://127.0.0.1:2020/api/v1/metrics | jq
The command prints output similar to the following:
-```javascript
+```json
{
"input": {
"cpu.0": {
@@ -315,39 +309,38 @@ The following example sets an alias to the `INPUT` section of the configuration
```yaml
service:
- http_server: on
- http_listen: 0.0.0.0
- http_port: 2020
+ http_server: on
+ http_listen: 0.0.0.0
+ http_port: 2020
pipeline:
- inputs:
- - name: cpu
- alias: server1_cpu
-
- outputs:
- - name: stdout
- alias: raw_output
- match: '*'
+ inputs:
+ - name: cpu
+ alias: server1_cpu
+
+ outputs:
+ - name: stdout
+ alias: raw_output
+ match: '*'
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[SERVICE]
- HTTP_Server On
- HTTP_Listen 0.0.0.0
- HTTP_PORT 2020
+ HTTP_Server On
+ HTTP_Listen 0.0.0.0
+ HTTP_PORT 2020
[INPUT]
- Name cpu
- Alias server1_cpu
+ Name cpu
+ Alias server1_cpu
[OUTPUT]
- Name stdout
- Alias raw_output
- Match *
+ Name stdout
+ Alias raw_output
+ Match *
```
{% endtab %}
@@ -355,7 +348,7 @@ pipeline:
When querying the related metrics, the aliases are returned instead of the plugin name:
-```javascript
+```json
{
"input": {
"server1_cpu": {
@@ -421,43 +414,42 @@ The following configuration examples show how to define these settings:
```yaml
service:
- http_server: on
- http_listen: 0.0.0.0
- http_port: 2020
- health_check: on
- hc_errors_count: 5
- hc_retry_failure_count: 5
- hc_period: 5
+ http_server: on
+ http_listen: 0.0.0.0
+ http_port: 2020
+ health_check: on
+ hc_errors_count: 5
+ hc_retry_failure_count: 5
+ hc_period: 5
pipeline:
- inputs:
- - name: cpu
+ inputs:
+ - name: cpu
- outputs:
- - name: stdout
- match: '*'
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[SERVICE]
- HTTP_Server On
- HTTP_Listen 0.0.0.0
- HTTP_PORT 2020
- Health_Check On
- HC_Errors_Count 5
- HC_Retry_Failure_Count 5
- HC_Period 5
+ HTTP_Server On
+ HTTP_Listen 0.0.0.0
+ HTTP_PORT 2020
+ Health_Check On
+ HC_Errors_Count 5
+ HC_Retry_Failure_Count 5
+ HC_Period 5
[INPUT]
- Name cpu
+ Name cpu
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -480,4 +472,4 @@ Health status = (HC_Errors_Count > 5) OR (HC_Retry_Failure_Count > 5) IN 5 secon
## Telemetry Pipeline
-[Telemetry Pipeline](https://chronosphere.io/platform/telemetry-pipeline/) is a hosted service that lets you monitor your Fluent Bit agents including data flow, metrics, and configurations.
+[Telemetry Pipeline](https://chronosphere.io/platform/telemetry-pipeline/) is a hosted service that lets you monitor your Fluent Bit agents including data flow, metrics, and configurations.
\ No newline at end of file
diff --git a/administration/networking.md b/administration/networking.md
index 8539d5d8c..863b86df0 100644
--- a/administration/networking.md
+++ b/administration/networking.md
@@ -81,53 +81,52 @@ Use the following configuration snippet of your choice in a corresponding file n
```yaml
service:
- flush: 1
- log_level: info
+ flush: 1
+ log_level: info
pipeline:
- inputs:
- - name: random
- samples: 5
-
- outputs:
- - name: tcp
- match: '*'
- host: 127.0.0.1
- port: 9090
- format: json_lines
- # Networking Setup
- net.dns.mode: TCP
- net.connect_timeout: 5
- net.source_address: 127.0.0.1
- net.keepalive: on
- net.keepalive_idle_timeout: 10
+ inputs:
+ - name: random
+ samples: 5
+
+ outputs:
+ - name: tcp
+ match: '*'
+ host: 127.0.0.1
+ port: 9090
+ format: json_lines
+ # Networking Setup
+ net.dns.mode: TCP
+ net.connect_timeout: 5
+ net.source_address: 127.0.0.1
+ net.keepalive: on
+ net.keepalive_idle_timeout: 10
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[SERVICE]
- flush 1
- log_level info
+ flush 1
+ log_level info
[INPUT]
- name random
- samples 5
+ name random
+ samples 5
[OUTPUT]
- name tcp
- match *
- host 127.0.0.1
- port 9090
- format json_lines
- # Networking Setup
- net.dns.mode TCP
- net.connect_timeout 5
- net.source_address 127.0.0.1
- net.keepalive on
- net.keepalive_idle_timeout 10
+ name tcp
+ match *
+ host 127.0.0.1
+ port 9090
+ format json_lines
+ # Networking Setup
+ net.dns.mode TCP
+ net.connect_timeout 5
+ net.source_address 127.0.0.1
+ net.keepalive on
+ net.keepalive_idle_timeout 10
```
{% endtab %}
@@ -152,4 +151,4 @@ $ nc -l 9090
If the `net.keepalive` option isn't enabled, Fluent Bit closes the TCP connection and netcat quits.
-After the five records arrive, the connection idles. After 10 seconds, the connection closes due to `net.keepalive_idle_timeout`.
+After the five records arrive, the connection idles. After 10 seconds, the connection closes due to `net.keepalive_idle_timeout`.
\ No newline at end of file
diff --git a/administration/scheduling-and-retries.md b/administration/scheduling-and-retries.md
index 3c04e2611..649882fc3 100644
--- a/administration/scheduling-and-retries.md
+++ b/administration/scheduling-and-retries.md
@@ -54,24 +54,23 @@ The following example configures the `scheduler.base` as `3` seconds and `schedu
```yaml
service:
- flush: 5
- daemon: off
- log_level: debug
- scheduler.base: 3
- scheduler.cap: 30
+ flush: 5
+ daemon: off
+ log_level: debug
+ scheduler.base: 3
+ scheduler.cap: 30
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[SERVICE]
- Flush 5
- Daemon off
- Log_Level debug
- scheduler.base 3
- scheduler.cap 30
+ Flush 5
+ Daemon off
+ Log_Level debug
+ scheduler.base 3
+ scheduler.cap 30
```
{% endtab %}
@@ -105,40 +104,37 @@ The following example configures two outputs, where the HTTP plugin has an unlim
```yaml
pipeline:
- inputs:
- ...
-
- outputs:
- - name: http
- host: 192.168.5.6
- port: 8080
- retry_limit: false
-
- - name: es
- host: 192.168.5.20
- port: 9200
- logstash_format: on
- retry_limit: 5
+
+ outputs:
+ - name: http
+ host: 192.168.5.6
+ port: 8080
+ retry_limit: false
+
+ - name: es
+ host: 192.168.5.20
+ port: 9200
+ logstash_format: on
+ retry_limit: 5
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[OUTPUT]
- Name http
- Host 192.168.5.6
- Port 8080
- Retry_Limit False
+ Name http
+ Host 192.168.5.6
+ Port 8080
+ Retry_Limit False
[OUTPUT]
- Name es
- Host 192.168.5.20
- Port 9200
- Logstash_Format On
- Retry_Limit 5
+ Name es
+ Host 192.168.5.20
+ Port 9200
+ Logstash_Format On
+ Retry_Limit 5
```
{% endtab %}
-{% endtabs %}
+{% endtabs %}
\ No newline at end of file
diff --git a/administration/transport-security.md b/administration/transport-security.md
index 3a28f823e..ca45f7e8f 100644
--- a/administration/transport-security.md
+++ b/administration/transport-security.md
@@ -80,8 +80,8 @@ In addition, other plugins implement a subset of TLS support, with restricted co
By default, the HTTP input plugin uses plain TCP. Run the following command to enable TLS:
-```bash
-./bin/fluent-bit -i http \
+```shell
+fluent-bit -i http \
-p port=9999 \
-p tls=on \
-p tls.verify=off \
@@ -92,7 +92,9 @@ By default, the HTTP input plugin uses plain TCP. Run the following command to e
```
{% hint style="info" %}
+
See the Tips and Tricks section below for details on generating `self_signed.crt` and `self_signed.key` files shown in these examples.
+
{% endhint %}
In the previous command, the two properties `tls` and `tls.verify` are set for demonstration purposes. Always enable verification in production environments.
@@ -100,40 +102,38 @@ In the previous command, the two properties `tls` and `tls.verify` are set for d
The same behavior can be accomplished using a configuration file:
{% tabs %}
-
{% tab title="fluent-bit.yaml" %}
```yaml
pipeline:
- inputs:
- - name: http
- port: 9999
- tls: on
- tls.verify: off
- tls.cert_file: self_signed.crt
- tls.key_file: self_signed.key
-
- outputs:
- - name: stdout
- match: '*'
+ inputs:
+ - name: http
+ port: 9999
+ tls: on
+ tls.verify: off
+ tls.cert_file: self_signed.crt
+ tls.key_file: self_signed.key
+
+ outputs:
+ - name: stdout
+ match: '*'
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[INPUT]
- name http
- port 9999
- tls on
- tls.verify off
- tls.crt_file self_signed.crt
- tls.key_file self_signed.key
+ name http
+ port 9999
+ tls on
+ tls.verify off
+ tls.crt_file self_signed.crt
+ tls.key_file self_signed.key
[OUTPUT]
- Name stdout
- Match *
+ Name stdout
+ Match *
```
{% endtab %}
@@ -145,9 +145,9 @@ By default, the HTTP output plugin uses plain TCP. Run the following command to
```bash
fluent-bit -i cpu -t cpu -o http://192.168.2.3:80/something \
- -p tls=on \
- -p tls.verify=off \
- -m '*'
+ -p tls=on \
+ -p tls.verify=off \
+ -m '*'
```
In the previous command, the properties `tls` and `tls.verify` are enabled for demonstration purposes. Always enable verification in production environments.
@@ -155,42 +155,40 @@ In the previous command, the properties `tls` and `tls.verify` are enabled for d
The same behavior can be accomplished using a configuration file:
{% tabs %}
-
{% tab title="fluent-bit.yaml" %}
```yaml
pipeline:
- inputs:
- - name: cpu
- tag: cpu
-
- outputs:
- - name: http
- match: '*'
- host: 192.168.2.3
- port: 80
- uri: /something
- tls: on
- tls.verify: off
+ inputs:
+ - name: cpu
+ tag: cpu
+
+ outputs:
+ - name: http
+ match: '*'
+ host: 192.168.2.3
+ port: 80
+ uri: /something
+ tls: on
+ tls.verify: off
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[INPUT]
- Name cpu
- Tag cpu
+ Name cpu
+ Tag cpu
[OUTPUT]
- Name http
- Match *
- Host 192.168.2.3
- Port 80
- URI /something
- tls On
- tls.verify Off
+ Name http
+ Match *
+ Host 192.168.2.3
+ Port 80
+ URI /something
+ tls On
+ tls.verify Off
```
{% endtab %}
@@ -198,11 +196,11 @@ pipeline:
## Tips and Tricks
-### Generate a self signed certificates for testing purposes
+### Generate a self-signed certificate for testing purposes
-The following command generates a 4096 bit RSA key pair and a certificate that's signed using `SHA-256` with the expiration date set to 30 days in the future. In this example, `test.host.net` is set as the common name. This example opts out of `DES`, so the private key is stored in plain text.
+The following command generates a 4096-bit RSA key pair and a certificate that's signed using `SHA-256` with the expiration date set to 30 days in the future. In this example, `test.host.net` is set as the common name. This example opts out of `DES`, so the private key is stored in plain text.
-```bash
+```shell
openssl req -x509 \
-newkey rsa:4096 \
-sha256 \
@@ -217,44 +215,42 @@ openssl req -x509 \
Fluent Bit supports [TLS server name indication](https://en.wikipedia.org/wiki/Server_Name_Indication). If you are serving multiple host names on a single IP address (for example, using virtual hosting), you can make use of `tls.vhost` to connect to a specific hostname.
{% tabs %}
-
{% tab title="fluent-bit.yaml" %}
```yaml
pipeline:
- inputs:
- - name: cpu
- tag: cpu
-
- outputs:
- - name: forward
- match: '*'
- host: 192.168.10.100
- port: 24224
- tls: on
- tls.verify: off
- tls.ca_file: '/etc/certs/fluent.crt'
- tls.vhost: 'fluent.example.com'
+ inputs:
+ - name: cpu
+ tag: cpu
+
+ outputs:
+ - name: forward
+ match: '*'
+ host: 192.168.10.100
+ port: 24224
+ tls: on
+ tls.verify: off
+ tls.ca_file: '/etc/certs/fluent.crt'
+ tls.vhost: 'fluent.example.com'
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
```text
[INPUT]
- Name cpu
- Tag cpu
+ Name cpu
+ Tag cpu
[OUTPUT]
- Name forward
- Match *
- Host 192.168.10.100
- Port 24224
- tls On
- tls.verify On
- tls.ca_file /etc/certs/fluent.crt
- tls.vhost fluent.example.com
+ Name forward
+ Match *
+ Host 192.168.10.100
+ Port 24224
+ tls On
+ tls.verify On
+ tls.ca_file /etc/certs/fluent.crt
+ tls.vhost fluent.example.com
```
{% endtab %}
@@ -274,44 +270,42 @@ This certificate covers only `my.fluent-aggregator.net` so if you use a differen
To fully verify the alternative name and demonstrate the failure, enable `tls.verify_hostname`:
{% tabs %}
-
{% tab title="fluent-bit.yaml" %}
```yaml
pipeline:
- inputs:
- - name: cpu
- tag: cpu
-
- outputs:
- - name: forward
- match: '*'
- host: other.fluent-aggregator.net
- port: 24224
- tls: on
- tls.verify: on
- tls.verify_hostname: on
- tls.ca_file: '/path/to/fluent-x509v3-alt-name.crt'
+ inputs:
+ - name: cpu
+ tag: cpu
+
+ outputs:
+ - name: forward
+ match: '*'
+ host: other.fluent-aggregator.net
+ port: 24224
+ tls: on
+ tls.verify: on
+ tls.verify_hostname: on
+ tls.ca_file: '/path/to/fluent-x509v3-alt-name.crt'
```
{% endtab %}
-
{% tab title="fluent-bit.conf" %}
-```python
+```text
[INPUT]
- Name cpu
- Tag cpu
+ Name cpu
+ Tag cpu
[OUTPUT]
- Name forward
- Match *
- Host other.fluent-aggregator.net
- Port 24224
- tls On
- tls.verify On
- tls.verify_hostname on
- tls.ca_file /path/to/fluent-x509v3-alt-name.crt
+ Name forward
+ Match *
+ Host other.fluent-aggregator.net
+ Port 24224
+ tls On
+ tls.verify On
+ tls.verify_hostname on
+ tls.ca_file /path/to/fluent-x509v3-alt-name.crt
```
{% endtab %}
@@ -323,4 +317,4 @@ This outgoing connect will fail and disconnect:
[2024/06/17 16:51:31] [error] [tls] error: unexpected EOF with reason: certificate verify failed
[2024/06/17 16:51:31] [debug] [upstream] connection #50 failed to other.fluent-aggregator.net:24224
[2024/06/17 16:51:31] [error] [output:forward:forward.0] no upstream connections available
-```
+```
\ No newline at end of file
diff --git a/administration/troubleshooting.md b/administration/troubleshooting.md
index 15600a5a7..83272805f 100644
--- a/administration/troubleshooting.md
+++ b/administration/troubleshooting.md
@@ -26,30 +26,11 @@ If the `--enable-chunk-trace` option is present, your Fluent Bit version support
You can start Fluent Bit with tracing activated from the beginning by using the `trace-input` and `trace-output` properties:
-```bash
-fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
-Fluent Bit v2.1.8
-* Copyright (C) 2015-2022 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-[2023/07/21 16:27:01] [ info] [fluent bit] version=2.1.8, commit=824ba3dd08, pid=622937
-[2023/07/21 16:27:01] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2023/07/21 16:27:01] [ info] [cmetrics] version=0.6.3
-[2023/07/21 16:27:01] [ info] [ctraces ] version=0.3.1
-[2023/07/21 16:27:01] [ info] [input:dummy:dummy.0] initializing
-[2023/07/21 16:27:01] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
-[2023/07/21 16:27:01] [ info] [sp] stream processor started
-[2023/07/21 16:27:01] [ info] [output:stdout:stdout.0] worker #0 started
-[2023/07/21 16:27:01] [ info] [fluent bit] version=2.1.8, commit=824ba3dd08, pid=622937
-[2023/07/21 16:27:01] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2023/07/21 16:27:01] [ info] [cmetrics] version=0.6.3
-[2023/07/21 16:27:01] [ info] [ctraces ] version=0.3.1
-[2023/07/21 16:27:01] [ info] [input:emitter:trace-emitter] initializing
-[2023/07/21 16:27:01] [ info] [input:emitter:trace-emitter] storage_strategy='memory' (memory only)
-[2023/07/21 16:27:01] [ info] [sp] stream processor started
-[2023/07/21 16:27:01] [ info] [output:stdout:stdout.0] worker #0 started
-.[0] dummy.0: [[1689971222.068537501, {}], {"message"=>"dummy"}]
+```shell
+$ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout
+
+...
+[0] dummy.0: [[1689971222.068537501, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1689971223.068556121, {}], {"message"=>"dummy"}]
[0] trace: [[1689971222.068677045, {}], {"type"=>1, "trace_id"=>"0", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971222, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971222, "end_time"=>1689971222}]
[1] trace: [[1689971222.068735577, {}], {"type"=>3, "trace_id"=>"0", "plugin_instance"=>"dummy.0", "records"=>[{"timestamp"=>1689971222, "record"=>{"message"=>"dummy"}}], "start_time"=>1689971222, "end_time"=>1689971222}]
@@ -82,30 +63,11 @@ The following warning indicates the `-Z` or `--enable-chunk-tracing` option is m
Set properties for the output using the `--trace-output-property` option:
-```bash
+```shell
$ fluent-bit -Z -i dummy -o stdout -f 1 --trace-input=dummy.0 --trace-output=stdout --trace-output-property=format=json_lines
-Fluent Bit v2.1.8
-* Copyright (C) 2015-2022 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-[2023/07/21 16:28:59] [ info] [fluent bit] version=2.1.8, commit=824ba3dd08, pid=623170
-[2023/07/21 16:28:59] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2023/07/21 16:28:59] [ info] [cmetrics] version=0.6.3
-[2023/07/21 16:28:59] [ info] [ctraces ] version=0.3.1
-[2023/07/21 16:28:59] [ info] [input:dummy:dummy.0] initializing
-[2023/07/21 16:28:59] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
-[2023/07/21 16:28:59] [ info] [sp] stream processor started
-[2023/07/21 16:28:59] [ info] [output:stdout:stdout.0] worker #0 started
-[2023/07/21 16:28:59] [ info] [fluent bit] version=2.1.8, commit=824ba3dd08, pid=623170
-[2023/07/21 16:28:59] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2023/07/21 16:28:59] [ info] [cmetrics] version=0.6.3
-[2023/07/21 16:28:59] [ info] [ctraces ] version=0.3.1
-[2023/07/21 16:28:59] [ info] [input:emitter:trace-emitter] initializing
-[2023/07/21 16:28:59] [ info] [input:emitter:trace-emitter] storage_strategy='memory' (memory only)
-[2023/07/21 16:29:00] [ info] [sp] stream processor started
-[2023/07/21 16:29:00] [ info] [output:stdout:stdout.0] worker #0 started
-.[0] dummy.0: [[1689971340.068565891, {}], {"message"=>"dummy"}]
+
+...
+[0] dummy.0: [[1689971340.068565891, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1689971341.068632477, {}], {"message"=>"dummy"}]
{"date":1689971340.068745,"type":1,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
{"date":1689971340.068825,"type":3,"trace_id":"0","plugin_instance":"dummy.0","records":[{"timestamp":1689971340,"record":{"message":"dummy"}}],"start_time":1689971340,"end_time":1689971340}
@@ -120,7 +82,7 @@ With that option set, the stdout plugin emits traces in `json_lines` format:
All three options can also be defined using the more flexible `--trace` option:
-```bash
+```shell
fluent-bit -Z -i dummy -o stdout -f 1 --trace="input=dummy.0 output=stdout output.format=json_lines"
```
@@ -134,43 +96,24 @@ Tap support can also be activated and deactivated using the embedded web server:
```shell
$ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
-Fluent Bit v2.0.0
-* Copyright (C) 2015-2022 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-[2022/10/21 10:03:16] [ info] [fluent bit] version=2.0.0, commit=3000f699f2, pid=1
-[2022/10/21 10:03:16] [ info] [output:stdout:stdout.0] worker #0 started
-[2022/10/21 10:03:16] [ info] [storage] ver=1.3.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2022/10/21 10:03:16] [ info] [cmetrics] version=0.5.2
-[2022/10/21 10:03:16] [ info] [input:dummy:input_dummy] initializing
-[2022/10/21 10:03:16] [ info] [input:dummy:input_dummy] storage_strategy='memory' (memory only)
-[2022/10/21 10:03:16] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
-[2022/10/21 10:03:16] [ info] [sp] stream processor started
+
+...
[0] dummy.0: [1666346597.203307010, {"message"=>"dummy"}]
[0] dummy.0: [1666346598.204103793, {"message"=>"dummy"}]
-...
-
```
In another terminal, activate Tap using either the instance id of the input (`dummy.0`) or its alias. The alias is more predictable, and is used here:
```shell
$ curl 127.0.0.1:2020/api/v1/trace/input_dummy
+
{"status":"ok"}
```
This response means Tap is active. The terminal with Fluent Bit running should now look like this:
-```shell
-[0] dummy.0: [1666346615.203253156, {"message"=>"dummy"}]
-[2022/10/21 10:03:36] [ info] [fluent bit] version=2.0.0, commit=3000f699f2, pid=1
-[2022/10/21 10:03:36] [ info] [storage] ver=1.3.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2022/10/21 10:03:36] [ info] [cmetrics] version=0.5.2
-[2022/10/21 10:03:36] [ info] [input:emitter:trace-emitter] initializing
-[2022/10/21 10:03:36] [ info] [input:emitter:trace-emitter] storage_strategy='memory' (memory only)
-[2022/10/21 10:03:36] [ info] [sp] stream processor started
-[2022/10/21 10:03:36] [ info] [output:stdout:stdout.0] worker #0 started
+```text
+...
[0] dummy.0: [1666346616.203551736, {"message"=>"dummy"}]
[0] trace: [1666346617.205221952, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346617, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346617, "end_time"=>1666346617}]
[0] dummy.0: [1666346617.205131790, {"message"=>"dummy"}]
@@ -178,7 +121,6 @@ This response means Tap is active. The terminal with Fluent Bit running should n
[0] trace: [1666346618.204110867, {"type"=>1, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{[0] dummy.0: [1666346618.204049246, {"message"=>"dummy"}]
"message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]
[0] trace: [1666346618.204198654, {"type"=>3, "trace_id"=>"trace.1", "plugin_instance"=>"dummy.0", "plugin_alias"=>"input_dummy", "records"=>[{"timestamp"=>1666346618, "record"=>{"message"=>"dummy"}}], "start_time"=>1666346618, "end_time"=>1666346618}]
-
```
All the records shown are those emitted by the activity of the dummy plugin.
@@ -190,9 +132,8 @@ This example takes the same steps but demonstrates how the mechanism works with
This example follows a single input out of many, which passes through several filters.
```shell
-$ docker run --rm -ti -p 2020:2020 \
- fluent/fluent-bit:latest \
- -Z -H \
+$ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest \
+ -Z -H \
-i dummy -p alias=dummy_0 -p \
dummy='{"dummy": "dummy_0", "key_name": "foo", "key_cnt": "1"}' \
-i dummy -p alias=dummy_1 -p dummy='{"dummy": "dummy_1"}' \
@@ -200,7 +141,7 @@ $ docker run --rm -ti -p 2020:2020 \
-F record_modifier -m 'dummy.0' -p record="powered_by fluent" \
-F record_modifier -m 'dummy.1' -p record="powered_by fluent-bit" \
-F nest -m 'dummy.0' \
- -p operation=nest -p wildcard='key_*' -p nest_under=data \
+ -p operation=nest -p wildcard='key_*' -p nest_under=data \
-o null -m '*' -f 1
```
@@ -210,12 +151,14 @@ Activate with the following `curl` command:
```shell
$ curl 127.0.0.1:2020/api/v1/trace/dummy_0
+
{"status":"ok"}
```
You should start seeing output similar to the following:
-```shell
+```text
+...
[0] trace: [1666349359.325597543, {"type"=>1, "trace_id"=>"trace.0", "plugin_instance"=>"dummy.0", "plugin_alias"=>"dummy_0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1"}}], "start_time"=>1666349359, "end_time"=>1666349359}]
[0] trace: [1666349359.325723747, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"record_modifier.0", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "key_name"=>"foo", "key_cnt"=>"1", "powered_by"=>"fluent"}}]}]
[0] trace: [1666349359.325783954, {"type"=>2, "start_time"=>1666349359, "end_time"=>1666349359, "trace_id"=>"trace.0", "plugin_instance"=>"nest.2", "records"=>[{"timestamp"=>1666349359, "record"=>{"dummy"=>"dummy_0", "powered_by"=>"fluent", "data"=>{"key_name"=>"foo", "key_cnt"=>"1"}}}]}]
@@ -259,35 +202,24 @@ First, run Fluent Bit enabling Tap:
```shell
$ docker run --rm -ti -p 2020:2020 fluent/fluent-bit:latest -Z -H -i dummy -p alias=input_dummy -o stdout -f 1
-Fluent Bit v2.0.8
-* Copyright (C) 2015-2022 The Fluent Bit Authors
-* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
-* https://fluentbit.io
-
-[2023/01/27 07:44:25] [ info] [fluent bit] version=2.0.8, commit=9444fdc5ee, pid=1
-[2023/01/27 07:44:25] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
-[2023/01/27 07:44:25] [ info] [cmetrics] version=0.5.8
-[2023/01/27 07:44:25] [ info] [ctraces ] version=0.2.7
-[2023/01/27 07:44:25] [ info] [input:dummy:input_dummy] initializing
-[2023/01/27 07:44:25] [ info] [input:dummy:input_dummy] storage_strategy='memory' (memory only)
-[2023/01/27 07:44:25] [ info] [output:stdout:stdout.0] worker #0 started
-[2023/01/27 07:44:25] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
-[2023/01/27 07:44:25] [ info] [sp] stream processor started
+
+...
[0] dummy.0: [1674805465.976012761, {"message"=>"dummy"}]
[0] dummy.0: [1674805466.973669512, {"message"=>"dummy"}]
-...
```
In another terminal, activate Tap, including the output (`stdout`) and the desired parameters (`"format": "json"`):
```shell
$ curl 127.0.0.1:2020/api/v1/trace/input_dummy -d '{"output":"stdout", "params": {"format": "json"}}'
+
{"status":"ok"}
```
In the first terminal, you should see output similar to the following:
-```shell
+```text
+...
[0] dummy.0: [1674805635.972373840, {"message"=>"dummy"}]
[{"date":1674805634.974457,"type":1,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805634.974605,"type":3,"trace_id":"0","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805634,"record":{"message":"dummy"}}],"start_time":1674805634,"end_time":1674805634},{"date":1674805635.972398,"type":1,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635},{"date":1674805635.972413,"type":3,"trace_id":"1","plugin_instance":"dummy.0","plugin_alias":"input_dummy","records":[{"timestamp":1674805635,"record":{"message":"dummy"}}],"start_time":1674805635,"end_time":1674805635}]
[0] dummy.0: [1674805636.973970215, {"message"=>"dummy"}]
@@ -304,22 +236,22 @@ This filter record is an example to explain the details of a Tap record:
```json
{
- "type": 2,
- "start_time": 1666349231,
- "end_time": 1666349231,
- "trace_id": "trace.1",
- "plugin_instance": "nest.2",
- "records": [{
- "timestamp": 1666349231,
- "record": {
- "dummy": "dummy_0",
- "powered_by": "fluent",
- "data": {
- "key_name": "foo",
- "key_cnt": "1"
- }
+ "type": 2,
+ "start_time": 1666349231,
+ "end_time": 1666349231,
+ "trace_id": "trace.1",
+ "plugin_instance": "nest.2",
+ "records": [{
+ "timestamp": 1666349231,
+ "record": {
+ "dummy": "dummy_0",
+ "powered_by": "fluent",
+ "data": {
+ "key_name": "foo",
+ "key_cnt": "1"
}
- }]
+ }
+ }]
}
```
@@ -361,6 +293,7 @@ The command `pidof` aims to identify the Process ID of Fluent Bit.
Fluent Bit will dump the following information to the standard output interface (`stdout`):
```text
+...
[engine] caught signal (SIGCONT)
[2020/03/23 17:39:02] Fluent Bit Dump
@@ -410,7 +343,7 @@ Overall ingestion status of the plugin.
### Tasks
-When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contains multiple records. At flush time, the engine creates a Task that contains the routes for the Chunk associated in question.
+When an input plugin ingests data into the engine, a Chunk is created. A Chunk can contain multiple records. At flush time, the engine creates a Task that contains the routes for the associated Chunk.
The Task dump describes the tasks associated with the input plugin:
@@ -425,7 +358,7 @@ The Task dump describes the tasks associated to the input plugin:
The Chunks dump gives more details about all the chunks that the input plugin has generated and that are still being processed.
-Depending of the buffering strategy and limits imposed by configuration, some Chunks might be `up` (in memory) or `down` (filesystem).
+Depending on the buffering strategy and limits imposed by configuration, some Chunks might be `up` (in memory) or `down` (filesystem).
| Entry | Sub-entry | Description |
| :--- | :--- | :--- |
@@ -446,4 +379,4 @@ Fluent Bit relies on a custom storage layer interface designed for hybrid buffer
| `mem chunks` | | Total number of Chunks memory-based. |
| `fs chunks` | | Total number of Chunks filesystem based. |
| | `up` | Total number of filesystem chunks up in memory. |
-| | `down` | Total number of filesystem chunks down (not loaded in memory). |
+| | `down` | Total number of filesystem chunks down (not loaded in memory). |
\ No newline at end of file