Binary file added .gitbook/assets/logo_documentation_1.6.png
3 changes: 2 additions & 1 deletion SUMMARY.md
@@ -76,7 +76,7 @@
* [Docker Events](pipeline/inputs/docker-events.md)
* [Dummy](pipeline/inputs/dummy.md)
* [Exec](pipeline/inputs/exec.md)
* [Forward](pipeline/inputs/forward.md)
* [Forward](pipeline/outputs/forward.md)
* [Head](pipeline/inputs/head.md)
* [Health](pipeline/inputs/health.md)
* [Kernel Logs](pipeline/inputs/kernel-logs.md)
@@ -159,3 +159,4 @@
* [Ingest Records Manually](development/ingest-records-manually.md)
* [Golang Output Plugins](development/golang-output-plugins.md)
* [Developer guide for beginners on contributing to Fluent Bit](development/developer-guide.md)

3 changes: 2 additions & 1 deletion administration/security.md
@@ -27,7 +27,7 @@ The following **output** plugins can take advantage of the TLS feature:
* [BigQuery](../pipeline/outputs/bigquery.md)
* [Datadog](../pipeline/outputs/datadog.md)
* [Elasticsearch](../pipeline/outputs/elasticsearch.md)
* [Forward]()
* [Forward](security.md)
* [GELF](../pipeline/outputs/gelf.md)
* [HTTP](../pipeline/outputs/http.md)
* [InfluxDB](../pipeline/outputs/influxdb.md)
@@ -93,3 +93,4 @@ Fluent Bit supports [TLS server name indication](https://en.wikipedia.org/wiki/S
tls.ca_file /etc/certs/fluent.crt
tls.vhost fluent.example.com
```
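
For reference, a minimal sketch of enabling TLS on one of the output plugins listed above (the host, port, and certificate path are illustrative):

```text
[OUTPUT]
    Name        forward
    Match       *
    Host        fluent.example.com
    Port        24224
    tls         On
    tls.verify  On
    tls.ca_file /etc/certs/fluent.crt
```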

2 changes: 1 addition & 1 deletion concepts/key-concepts.md
@@ -53,7 +53,7 @@ Every Event that gets into Fluent Bit gets assigned a Tag. This tag is an intern
Most of the tags are assigned manually in the configuration. If a tag is not specified, Fluent Bit will assign the name of the Input plugin instance where that Event was generated.

{% hint style="info" %}
The only input plugin that **doesn't** assign Tags is the [Forward](../pipeline/inputs/forward.md) input. This plugin speaks the Fluentd wire protocol called Forward, where every Event already comes with a Tag associated. Fluent Bit will always use the incoming Tag set by the client.
The only input plugin that **doesn't** assign Tags is the [Forward](../pipeline/outputs/forward.md) input. This plugin speaks the Fluentd wire protocol called Forward, where every Event already comes with a Tag associated. Fluent Bit will always use the incoming Tag set by the client.
{% endhint %}

A Tagged record must always have a Matching rule. To learn more about Tags and Matches, check the [Routing](data-pipeline/router.md) section.
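
As a minimal illustration (the tag name is arbitrary), an input that sets an explicit Tag and an output whose Match rule selects it:

```text
[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name  stdout
    Match my_cpu
```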
23 changes: 12 additions & 11 deletions installation/kubernetes.md
@@ -102,18 +102,18 @@ When deploying Fluent Bit to Kubernetes, there are three log files that you need

`C:\k\kubelet.err.log`

* This is the error log file from the kubelet daemon running on the host.
* You will need to retain this file for future troubleshooting (to debug deployment failures etc.)
* This is the error log file from the kubelet daemon running on the host.
* You will need to retain this file for future troubleshooting \(to debug deployment failures etc.\)

`C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log`

* This is the main log file you need to watch. Configure Fluent Bit to follow this file.
* It is actually a symlink to the Docker log file in `C:\ProgramData\`, with some additional metadata in its file name.
* This is the main log file you need to watch. Configure Fluent Bit to follow this file.
* It is actually a symlink to the Docker log file in `C:\ProgramData\`, with some additional metadata in its file name.

`C:\ProgramData\Docker\containers\<docker>\<docker>.log`

* This is the log file produced by Docker.
* Normally you don't read from this file directly, but you need to make sure that it is visible to Fluent Bit.
* This is the log file produced by Docker.
* Normally you don't read from this file directly, but you need to make sure that it is visible to Fluent Bit.
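
A minimal sketch of following the main log file above with the tail input (the parser and tag are assumptions for illustration):

```text
[INPUT]
    Name   tail
    Path   C:\var\log\containers\*.log
    Parser docker
    Tag    kube.*
```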

Typically, your deployment yaml contains the following volume configuration.

@@ -185,17 +185,18 @@ parsers.conf: |

### Mitigate unstable network on Windows pods

Windows pods often lack working DNS immediately after boot ([#78479](https://github.com/kubernetes/kubernetes/issues/78479)). To mitigate this issue, `filter_kubernetes` provides a built-in mechanism to wait until the network starts up:
Windows pods often lack working DNS immediately after boot \([\#78479](https://github.com/kubernetes/kubernetes/issues/78479)\). To mitigate this issue, `filter_kubernetes` provides a built-in mechanism to wait until the network starts up:

* `DNS_Retries` - Retries N times until the network starts working (6)
* `DNS_Wait_Time` - Lookup interval between network status checks (30)
* `DNS_Retries` - Retries N times until the network starts working \(6\)
* `DNS_Wait_Time` - Lookup interval between network status checks \(30\)

By default, Fluent Bit waits for 3 minutes (30 seconds x 6 times). If it's not enough for you, tweak the configuration as follows.
By default, Fluent Bit waits for 3 minutes \(30 seconds x 6 times\). If it's not enough for you, tweak the configuration as follows.

```
```text
[filter]
Name kubernetes
...
DNS_Retries 10
DNS_Wait_Time 30
```

3 changes: 2 additions & 1 deletion installation/sources/build-and-install.md
@@ -112,7 +112,7 @@ The _input plugins_ provide certain features to gather information from a speci
| [FLB\_IN\_DISK](../../pipeline/inputs/disk-io-metrics.md) | Enable Disk I/O Metrics input plugin | On |
| [FLB\_IN\_DOCKER](../docker.md) | Enable Docker metrics input plugin | On |
| [FLB\_IN\_EXEC](../../pipeline/inputs/exec.md) | Enable Exec input plugin | On |
| [FLB\_IN\_FORWARD]() | Enable Forward input plugin | On |
| [FLB\_IN\_FORWARD](build-and-install.md) | Enable Forward input plugin | On |
| [FLB\_IN\_HEAD](../../pipeline/inputs/head.md) | Enable Head input plugin | On |
| [FLB\_IN\_HEALTH](../../pipeline/inputs/health.md) | Enable Health input plugin | On |
| [FLB\_IN\_KMSG](../../pipeline/inputs/kernel-logs.md) | Enable Kernel log input plugin | On |
@@ -182,3 +182,4 @@ The _output plugins_ give the capacity to flush the information to some externa
| [FLB\_OUT\_STDOUT](build-and-install.md) | Enable STDOUT output plugin | On |
| FLB\_OUT\_TCP | Enable TCP/TLS output plugin | On |
| [FLB\_OUT\_TD](../../pipeline/outputs/treasure-data.md) | Enable [Treasure Data](http://www.treasuredata.com) output plugin | On |
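
As an illustration, any of these options can be toggled when invoking the build (a sketch; the flag name comes from the table above):

```bash
cmake -DFLB_OUT_TD=On ../
```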

18 changes: 18 additions & 0 deletions installation/upgrade-notes.md
@@ -4,6 +4,24 @@ The following article covers the relevant notes for users upgrading from previous

For more details about the changes in each release, please refer to the [Official Release Notes](https://fluentbit.io/announcements/).



## Fluent Bit v1.6

If you are migrating from a previous version of Fluent Bit, please review the following important changes:

### Tail Input Plugin

By default, the plugin now follows a file from the end once the service starts (the old behavior was to always read from the beginning). Every file found at start is followed from its last position; new files discovered at runtime, or files that have been rotated, are read from the beginning.

If you want to keep the old behavior, you can set the option `read_from_head` to true.
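
A minimal sketch of restoring the old behavior (the path is illustrative):

```text
[INPUT]
    Name           tail
    Path           /var/log/app.log
    read_from_head true
```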

### Stackdriver Output Plugin

The project_id of the [resource](https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource) in each [LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) sent to Google Cloud Logging is now set to the project ID rather than the project number. To learn the difference between project IDs and project numbers, see [this page](https://cloud.google.com/resource-manager/docs/creating-managing-projects#before_you_begin).

If you have any existing queries based on the resource's project_id, please update your query accordingly.
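
For example, a query filter that previously matched on the project number would now need the project ID (the label path and value are illustrative):

```text
resource.labels.project_id="my-project-id"
```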

## Fluent Bit v1.5

The migration from v1.4 to v1.5 is pretty straightforward.
5 changes: 3 additions & 2 deletions pipeline/filters/aws-metadata.md
@@ -11,7 +11,7 @@ The plugin supports the following configuration parameters:
| imds\_version | Specify which version of the instance metadata service to use. Valid values are 'v1' or 'v2'. | v2 |
| az | The [availability zone](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html); for example, "us-east-1a". | true |
| ec2\_instance\_id | The EC2 instance ID. | true |
| ec2\_instance\_type | The EC2 instance type. | false |
| ec2\_instance\_type | The EC2 instance type. | false |
| private\_ip | The EC2 instance private IP. | false |
| ami\_id | The EC2 instance image ID. | false |
| account\_id | The account ID for the current EC2 instance. | false |
@@ -53,4 +53,5 @@ $ bin/fluent-bit -c /PATH_TO_CONF_FILE/fluent-bit.conf
[OUTPUT]
Name stdout
Match *
```
```

9 changes: 5 additions & 4 deletions pipeline/filters/lua.md
@@ -13,10 +13,11 @@ The plugin supports the following configuration parameters:

| Key | Description |
| :--- | :--- |
| Script | Path to the Lua script that will be used. |
| Call | Lua function name that will be triggered to do filtering. It's assumed that the function is declared inside the Script defined above. |
| Type\_int\_key | If these keys are matched, the fields are converted to integer. If more than one key, delimit by space |
| Protected\_mode | If enabled, the Lua script will be executed in protected mode. It prevents Fluent Bit from crashing when an invalid Lua script is executed. Default is true. |
| script | Path to the Lua script that will be used. |
| call | Lua function name that will be triggered to do filtering. It's assumed that the function is declared inside the Script defined above. |
| type\_int\_key | If these keys are matched, the fields are converted to integer. If more than one key, delimit by space. Note that starting from Fluent Bit v1.6, integer data types are preserved and not converted to double as in previous versions. |
| protected\_mode | If enabled, the Lua script will be executed in protected mode. It prevents Fluent Bit from crashing when an invalid Lua script is executed. Default is true. |
| time\_as\_table | By default, when the Lua script is invoked the record timestamp is passed as a floating-point number, which might lead to a loss of precision when the data is converted back. If you need timestamp precision, enabling this option will pass the timestamp as a Lua table with the keys `sec` for seconds since epoch and `nsec` for nanoseconds. |
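
A minimal sketch wiring these parameters together (the script path and function name are illustrative):

```text
[FILTER]
    Name          lua
    Match         *
    script        test.lua
    call          cb_filter
    time_as_table On
```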

## Getting Started <a id="getting_started"></a>

38 changes: 21 additions & 17 deletions pipeline/filters/tensorflow.md
@@ -1,41 +1,44 @@
# Tensorflow

_Tensorflow Filter_ allows running Machine Learning inference tasks on the records of data coming from
input plugins or stream processor. This filter uses [Tensorflow Lite](https://www.tensorflow.org/lite/)
as the inference engine, and **requires Tensorflow Lite shared library to be present during build and at runtime**.
## Tensorflow

Tensorflow Lite is a lightweight open-source deep learning framework that is used for mobile and IoT applications. Tensorflow Lite only handles inference (not training), therefore, it loads pre-trained models (`.tflite` files) that are converted into Tensorflow Lite format (`FlatBuffer`). You can read more on converting Tensorflow models [here](https://www.tensorflow.org/lite/convert)
_Tensorflow Filter_ allows running Machine Learning inference tasks on the records of data coming from input plugins or stream processor. This filter uses [Tensorflow Lite](https://www.tensorflow.org/lite/) as the inference engine, and **requires Tensorflow Lite shared library to be present during build and at runtime**.

## Configuration Parameters
Tensorflow Lite is a lightweight open-source deep learning framework that is used for mobile and IoT applications. Tensorflow Lite only handles inference \(not training\), therefore, it loads pre-trained models \(`.tflite` files\) that are converted into Tensorflow Lite format \(`FlatBuffer`\). You can read more on converting Tensorflow models [here](https://www.tensorflow.org/lite/convert)
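
As an illustration of that conversion step, a sketch using the `tflite_convert` tool (the paths are illustrative):

```bash
tflite_convert --saved_model_dir=/tmp/saved_model --output_file=/tmp/model.tflite
```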

### Configuration Parameters

The plugin supports the following configuration parameters:

| Key | Description | Default |
| :--- | :--- | :--- |
| input_field | Specify the name of the field in the record to apply inference on. | |
| model_file | Path to the model file (`.tflite`) to be loaded by Tensorflow Lite. | |
| include_input_fields | Include all input fields in the filter's output | True |
| normalization_value | Divide input values by normalization_value | |
| input\_field | Specify the name of the field in the record to apply inference on. | |
| model\_file | Path to the model file \(`.tflite`\) to be loaded by Tensorflow Lite. | |
| include\_input\_fields | Include all input fields in the filter's output | True |
| normalization\_value | Divide input values by normalization\_value | |

## Creating Tensorflow Lite shared library
### Creating Tensorflow Lite shared library

Clone the [Tensorflow repository](https://github.com/tensorflow/tensorflow), install the bazel package manager, and run the following command to create the shared library:

```bash
$ bazel build -c opt //tensorflow/lite/c:tensorflowlite_c # see https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/c
```
The script creates the shared library `bazel-bin/tensorflow/lite/c/libtensorflowlite_c.so`. You need to copy the library to a location (such as `/usr/lib`) that can be used by Fluent Bit.

## Building Fluent Bit with Tensorflow filter plugin
The script creates the shared library `bazel-bin/tensorflow/lite/c/libtensorflowlite_c.so`. You need to copy the library to a location \(such as `/usr/lib`\) that can be used by Fluent Bit.

### Building Fluent Bit with Tensorflow filter plugin

Tensorflow filter plugin is disabled by default. You need to build Fluent Bit with Tensorflow plugin enabled. In addition, it requires access to Tensorflow Lite header files to compile. Therefore, you also need to pass the address of the Tensorflow source code on your machine to the [build script](https://github.com/fluent/fluent-bit#build-from-scratch):

Tensorflow filter plugin is disabled by default. You need to build Fluent Bit with Tensorflow plugin enabled. In addition, it requires access to Tensorflow Lite header files to compile.
Therefore, you also need to pass the address of the Tensorflow source code on your machine to the [build script](https://github.com/fluent/fluent-bit#build-from-scratch):
```bash
cmake -DFLB_FILTER_TENSORFLOW=On -DTensorflow_DIR=<AddressOfTensorflowSourceCode> ...
```

### Command line
#### Command line

If the Tensorflow plugin initializes correctly, it reports successful creation of the interpreter and prints a summary of the model's input/output types and dimensions.

```bash
$ bin/fluent-bit -i mqtt -p 'tag=mqtt.data' -F tensorflow -m '*' -p 'input_field=image' -p 'model_file=/home/user/model.tflite' -p 'include_input_fields=false' -p 'normalization_value=255' -o stdout
[2020/08/04 20:00:00] [ info] Tensorflow Lite interpreter created!
@@ -45,7 +48,7 @@ $ bin/fluent-bit -i mqtt -p 'tag=mqtt.data' -F tensorflow -m '*' -p 'input_field
[2020/08/04 20:00:00] [ info] [tensorflow] type: FLOAT32 dimensions: {1, 2}
```

### Configuration File
#### Configuration File

```text
[SERVICE]
Expand All @@ -70,7 +73,8 @@ $ bin/fluent-bit -i mqtt -p 'tag=mqtt.data' -F tensorflow -m '*' -p 'input_field
Match *
```

# Limitations
## Limitations

1. Currently supports single-input models
2. Uses Tensorflow 2.3 header files

1 change: 1 addition & 0 deletions pipeline/inputs/syslog.md
@@ -16,6 +16,7 @@ The plugin supports the following configuration parameters:
| Parser | Specify an alternative parser for the message. If _Mode_ is set to _tcp_ or _udp_ then the default parser is _syslog-rfc5424_ otherwise _syslog-rfc3164-local_ is used. If your syslog messages have fractional seconds set this Parser value to _syslog-rfc5424_ instead. | |
| Buffer\_Chunk\_Size | By default, the buffer to store the incoming Syslog messages does not allocate the maximum memory allowed; instead, it allocates memory as required. The rounds of allocations are set by _Buffer\_Chunk\_Size_. If not set, _Buffer\_Chunk\_Size_ is equal to 32000 bytes \(32KB\). Read the considerations below when using _udp_ or _unix\_udp_ mode. | |
| Buffer\_Max\_Size | Specify the maximum buffer size to receive a Syslog message. If not set, the default size will be the value of _Buffer\_Chunk\_Size_. | |
| Encoding | Specify the input character set \(if not UTF-8\). The [Tail](tail.md#encoding) input plugin has more info. | |
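
A minimal sketch using this option (the mode, port, and character set are illustrative; see the Tail plugin's Encoding section for supported values):

```text
[INPUT]
    Name     syslog
    Mode     udp
    Listen   0.0.0.0
    Port     5140
    Encoding ShiftJIS
```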

### Considerations
