administration/configuring-fluent-bit/classic-mode/upstream-servers.md (+1 -1)
@@ -1,6 +1,6 @@
# Upstream servers

-Fluent Bit [output plugins](../../pipeline/outputs/) aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for more of the use cases, but there are other scenarios where balancing across different nodes is required. The `Upstream` feature provides this capability.
+Fluent Bit [output plugins](../../../pipeline/outputs/) aim to connect to external services to deliver logs over the network. Being able to connect to one node (host) is normal and enough for most use cases, but there are other scenarios where balancing across different nodes is required. The `Upstream` feature provides this capability.
An `Upstream` defines a set of nodes that will be targeted by an output plugin, by the nature of the implementation an output plugin must support the `Upstream` feature. The following plugin has `Upstream` support:
administration/configuring-fluent-bit/multiline-parsing.md (+6 -6)
@@ -18,7 +18,7 @@ Fluent Bit exposes certain pre-configured parsers (built-in) to solve specific m
| Parser | Description |
| ------ | ----------- |
|`docker`| Process a log entry generated by a Docker container engine. This parser supports the concatenation of log entries split by Docker. |
-|`cri`| Process a log entry generated by CRI-O container engine. Like the `docker` parser, it supports concatenation of log entries |
+|`cri`| Process a log entry generated by the CRI-O container engine. Like the `docker` parser, it supports concatenation of log entries. |
|`go`| Process log entries generated by a Go based language application and perform concatenation if multiline messages are detected. |
|`python`| Process log entries generated by a Python based language application and perform concatenation if multiline messages are detected. |
|`java`| Process log entries generated by a Google Cloud Java language application and perform concatenation if multiline messages are detected. |
@@ -35,7 +35,7 @@ To understand which multiline parser type is required for your use case you have
| -------- | ----------- | ------- |
|`name`| Specify a unique name for the multiline parser definition. A good practice is to prefix the name with the word `multiline_` to avoid confusion with normal parser definitions. |_none_|
|`type`| Set the multiline mode. Fluent Bit supports the type `regex`.|_none_|
-|`parser`| Name of a pre-defined parser that must be applied to the incoming content before applying the regular expression rule. If no parser is defined, it's assumed that's a raw text and not a structured message. <br /> When a parser is applied to a raw text, the regular expression is applied against a specific key of the structured message by using the `key_content` configuration property. |_none_|
+|`parser`| Name of a pre-defined parser that must be applied to the incoming content before applying the regular expression rule. If no parser is defined, it's assumed that's a raw text and not a structured message. When a parser is applied to a raw text, the regular expression is applied against a specific key of the structured message by using the `key_content` configuration property. |_none_|
|`key_content`| For an incoming structured message, specify the key that contains the data that should be processed by the regular expression and possibly concatenated. |_none_|
|`flush_timeout`| Timeout in milliseconds to flush a non-terminated multiline buffer. |`5s`|
|`rule`| Configure a rule to match a multiline pattern. The rule has a [specific format](#rules-definition). Multiple rules can be defined. |_none_|
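Taken together, these properties might be combined as in the following classic-mode sketch. The parser name, the `log` key, and the regular expressions are illustrative assumptions, not values defined on this page:

```
[MULTILINE_PARSER]
    # Illustrative definition; adjust the patterns to your log format.
    name          multiline_custom
    type          regex
    key_content   log
    flush_timeout 1000
    # rules |  state name  | regex pattern            | next state
    rule     "start_state"   "/^\[\d{4}-\d{2}-\d{2}/"   "cont"
    rule     "cont"          "/^\s+/"                   "cont"
```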
@@ -59,7 +59,7 @@ A rule specifies how to match a multiline pattern and perform the concatenation.
- regular expression pattern
- next state

A rule might be defined as follows (comments added to simplify the definition) in corresponding YAML and classic configuration examples below:
-This example defines two rules. Each rule has its own state name, regex patterns, and the next state name. Every field that composes a rule must be inside double quotes.
+This example defines two rules. Each rule has its own state name, regular expression patterns, and the next state name. Every field that composes a rule must be inside double quotes.

-The first rule of a state name must be `start_state`. The regex pattern must match the first line of a multiline message, and a next state must be set to specify what the possible continuation lines look like.
+The first rule of a state name must be `start_state`. The regular expression pattern must match the first line of a multiline message, and a next state must be set to specify what the possible continuation lines look like.

{% hint style="info" %}
-To simplify the configuration of regular expressions, you can use the [Rubular]((https://rubular.com/r/NDuyKwlTGOvq2g)) web site. This link uses the regex described in the previous example, plus a log line that matches the pattern:
+To simplify the configuration of regular expressions, you can use the [Rubular](https://rubular.com/r/NDuyKwlTGOvq2g) website. This link uses the regular expression described in the previous example, plus a log line that matches the pattern:
-Some configuration directives in Fluent Bit refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](https://github.com/fluent/fluent-bit-docs/tree/5f926fd1330690179b8c1edab90d672699599ec7/administration/input/tail.md), [Forward Input](https://github.com/fluent/fluent-bit-docs/tree/5f926fd1330690179b8c1edab90d672699599ec7/administration/input/forward.md) or generic properties like [Mem\_Buf\_Limit](https://github.com/fluent/fluent-bit-docs/tree/5f926fd1330690179b8c1edab90d672699599ec7/administration/configuring-fluent-bit/backpressure.md) use unit sizes.
+Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail), [Forward Input](../../pipeline/inputs/forward.md) or generic properties like [`Mem_Buf_Limit`](../backpressure) use unit sizes.

-[Fluent Bit](http://fluentbit.io) v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:
+Fluent Bit v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:

| Suffix | Description | Example |
| :--- | :--- | :--- |
-|| When a suffix isn't specified, assume that the value given is a bytes representation. | Specifying a value of 32000 means 32000 bytes. |
-|`k`, `K`, `KB`, `kb`| Kilobyte: a unit of memory equal to 1,000 bytes. | 32k means 32000 bytes. |
-|`m`, `M`, `MB`, `mb`| Megabyte: a unit of memory equal to 1,000,000 bytes. |1M means 1000000 bytes. |
-|`g`, `G`, `GB`, `gb`| Gigabyte: a unit of memory equal to 1,000,000,000 bytes. |1G means 1000000000 bytes. |
+|| When a suffix isn't specified, assume that the value given is a bytes representation. | Specifying a value of `32000` means 32000 bytes. |
+|`k`, `K`, `KB`, `kb`| Kilobyte: a unit of memory equal to 1,000 bytes. |`32k` means 32000 bytes. |
+|`m`, `M`, `MB`, `mb`| Megabyte: a unit of memory equal to 1,000,000 bytes. |`1M` means 1000000 bytes. |
+|`g`, `G`, `GB`, `gb`| Gigabyte: a unit of memory equal to 1,000,000,000 bytes. |`1G` means 1000000000 bytes. |
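As an illustration of these rules, a small helper (a hypothetical sketch, not part of Fluent Bit) could map suffixed values to bytes:

```python
# Hypothetical helper illustrating the suffix rules above; not Fluent Bit code.
def parse_unit_size(value: str) -> int:
    """Convert a size string such as '32k' or '1M' to a number of bytes."""
    multipliers = {"k": 10**3, "m": 10**6, "g": 10**9}
    value = value.strip()
    # Check two-letter suffixes first so '32kb' is not misread as number '32k' + 'b'.
    for suffix in ("kb", "mb", "gb", "k", "m", "g"):
        if value.lower().endswith(suffix):
            number = value[: -len(suffix)]
            return int(number) * multipliers[suffix[0]]
    return int(value)  # No suffix: the value is already in bytes.

print(parse_unit_size("32k"))    # 32000
print(parse_unit_size("1M"))     # 1000000
print(parse_unit_size("32000"))  # 32000
```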
administration/monitoring.md (+4 -4)
@@ -62,7 +62,7 @@ pipeline:
{% endtab %}
{% endtabs %}

-Start Fluent bit with the corresponding configuration chosen above:
+Start Fluent Bit with the corresponding configuration chosen previously:
```shell
# For YAML configuration.
@@ -381,15 +381,15 @@ When querying the related metrics, the aliases are returned instead of the plugi
You can create Grafana dashboards and alerts using Fluent Bit's exposed Prometheus style metrics.

-The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json) is heavily inspired by [Banzai Cloud](https://banzaicloud.com)'s [logging operator dashboard](https://grafana.com/grafana/dashboards/7752) with a few key differences, such as the use of the `instance` label, stacked graphs, and a focus on Fluent Bit metrics. See [this blog post](https://www.robustperception.io/controlling-the-instance-label) for more information.
+The provided [example dashboard](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/dashboard.json) is heavily inspired by [Banzai Cloud](https://github.com/banzaicloud)'s [logging operator dashboard](https://grafana.com/grafana/dashboards/7752) with a few key differences, such as the use of the `instance` label, stacked graphs, and a focus on Fluent Bit metrics. See [this blog post](https://www.robustperception.io/controlling-the-instance-label) for more information.
![dashboard](../.gitbook/assets/dashboard.png)

### Alerts

-Sample alerts are available [here](https://github.com/fluent/fluent-bit-docs/blob/master/monitoring/alerts.yaml).
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
@@ -338,7 +338,7 @@ This filter record is an example to explain the details of a Tap record:
-`plugin_alias`: If an alias is set, this field will contain the alias set for a plugin.
-`records`: An array of all the records being sent. Fluent Bit handles records in chunks of multiple records; chunks are indivisible, and the same applies to the Tap output. Each record consists of its timestamp followed by the actual data, which is a composite type of keys and values.

-## Dump Internals / Signal
+## Dump Internals and signal

When the service is running, you can export [metrics](monitoring.md) to see the overall status of the data flow of the service. There are other use cases where you might need to know the current status of the service internals, like the current status of the internal buffers. Dump Internals can help provide this information.
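As a sketch, on Linux this dump is typically triggered by sending the `CONT` signal to the running process. This assumes `pidof` is available and Fluent Bit is running:

```shell
# Assumed mechanism: Fluent Bit prints its internals dump when it receives SIGCONT.
pid="$(pidof fluent-bit || true)"
if [ -n "$pid" ]; then
  kill -CONT "$pid"
else
  echo "fluent-bit is not running"
fi
```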
@@ -436,7 +436,7 @@ Depending of the buffering strategy and limits imposed by configuration, some Ch
||`size`| Amount of bytes used by the Chunk. |
||`size err`| Number of Chunks in an error state where its size couldn't be retrieved. |

-### Storage Layer
+### Storage layer

Fluent Bit relies on a custom storage layer interface designed for hybrid buffering. The `Storage Layer` entry contains a total summary of Chunks registered by Fluent Bit:
-This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](pipeline/parsers/regular-expression) is used, the log entry could be converted to:
+This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression) is used, the log entry could be converted to:
installation/docker.md (+7 -5)
@@ -16,27 +16,31 @@ Use the following command to start Fluent Bit while using a configuration file:
{% tabs %}
{% tab title="fluent-bit.conf" %}
+
```shell
docker run -ti -v ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf \
  cr.fluentbit.io/fluent/fluent-bit
```
+
{% endtab %}

{% tab title="fluent-bit.yaml" %}
+
```shell
docker run -ti -v ./fluent-bit.yaml:/fluent-bit/etc/fluent-bit.yaml \
  cr.fluentbit.io/fluent/fluent-bit \
  -c /fluent-bit/etc/fluent-bit.yaml
+
```
+
{% endtab %}
{% endtabs %}

-
## Tags and versions

The following table describes the Linux container tags that are available on Docker Hub [fluent/fluent-bit](https://hub.docker.com/r/fluent/fluent-bit/) repository:
@@ -189,7 +191,7 @@ From a deployment perspective, there's no need to specify an architecture. The c
## Verify signed container images

-Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using `cosign` ([install guide](https://docs.sigstore.dev/cosign/installation/)):
+Version 1.9 and 2.0 container images are signed using Cosign/Sigstore. Verify these signatures using `cosign` ([install guide](https://docs.sigstore.dev/quickstart/quickstart-cosign/)):
[Fluent Bit](http://fluentbit.io) is a lightweight and extensible log processor with full support for Kubernetes:
- Process Kubernetes containers logs from the file system or Systemd/Journald.
@@ -16,17 +14,17 @@ description: Kubernetes Production Grade Log Processor
Before getting started it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes. The Fluent Bit log agent tool needs to run on every node to collect logs from every pod. Fluent Bit is deployed as a DaemonSet, which is a pod that runs on every node of the cluster.

-When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](../pipeline/filters/kubernetes) filter plugin.
+When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) filter plugin.
20
18
21
19
The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the `pod_id`, `labels`, and `annotations`. Other fields, such as `pod_name`, `container_id`, and `container_name`, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect.
22
20
23
21
## Installation
24
22
25
-
[Fluent Bit](http://fluentbit.io) should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
23
+
Fluent Bit should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
26
24
27
-
The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm Chart at <https://github.com/fluent/helm-charts>.
25
+
The recommended way to deploy Fluent Bit for Kubernetes is with the official [Helm Chart](https://github.com/fluent/helm-charts).
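A minimal sketch of that Helm-based install, assuming the repository alias `fluent` and release name `fluent-bit` (adjust for your cluster; requires `helm` and network access):

```shell
# Assumed repository URL and chart name from the fluent/helm-charts project.
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm upgrade --install fluent-bit fluent/fluent-bit
```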
-### Note for OpenShift
+### OpenShift

If you are using Red Hat OpenShift you must set up Security Context Constraints (SCC) using the relevant option in the helm chart.
@@ -46,7 +44,7 @@ To validate that the repository was added, run `helm search repo fluent` to ensu
The default chart values include configuration to read container logs with Docker parsing, apply Kubernetes metadata enrichment to Systemd logs, and output to an Elasticsearch cluster. You can modify the [included values file](https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml) to specify additional outputs, health checks, monitoring endpoints, or other configuration options.
installation/linux/raspbian-raspberry-pi.md (+5 -5)
@@ -1,10 +1,10 @@
-# Raspbian and Raspberry Pi
+# Raspberry Pi

-Fluent Bit is distributed as the `fluent-bit` package and is available for the Raspberry, specifically for [Raspbian](http://raspbian.org) distribution. The following versions are supported:
+Fluent Bit is distributed as the `fluent-bit` package and is available for [Raspberry Pi](https://www.raspberrypi.com/software/operating-systems/). The following versions are supported: