2 changes: 1 addition & 1 deletion administration/configuring-fluent-bit/unit-sizes.md
@@ -1,6 +1,6 @@
# Unit sizes

Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail), [Forward Input](../../pipeline/inputs/forward.md) or generic properties like [`Mem_Buf_Limit`](../backpressure) use unit sizes.
Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail.md), [Forward Input](../../pipeline/inputs/forward.md), or generic properties like [`Mem_Buf_Limit`](../backpressure.md) use unit sizes.
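
As a hedged illustration of how these unit sizes appear in practice, the following minimal YAML sketch sets `mem_buf_limit` on a Tail input; the file path and the `5M` value are placeholders, not recommendations:

```yaml
# Minimal sketch: size-typed properties accept unit-size notation such as 5M or 512K.
pipeline:
  inputs:
    - name: tail
      path: /var/log/example.log   # hypothetical path
      mem_buf_limit: 5M            # 5 megabytes, expressed in unit-size notation
```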

Fluent Bit v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:

@@ -4,7 +4,8 @@ description: Learn about the YAML configuration file used by Fluent Bit

# YAML configuration file

<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=864c6f0e-8977-4838-8772-84416943548e" />
<img referrerpolicy="no-referrer-when-downgrade"
src="https://static.scarf.sh/a.png?x-pxid=864c6f0e-8977-4838-8772-84416943548e" alt="" />

One of the ways to configure Fluent Bit is using a YAML configuration file that works at a global scope.
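
As a brief hedged sketch, a minimal YAML configuration file might look like the following; the `dummy` input and `stdout` output are chosen purely for illustration:

```yaml
# Minimal illustrative configuration: one input, one output.
service:
  flush: 1
  log_level: info

pipeline:
  inputs:
    - name: dummy        # emits synthetic test records
  outputs:
    - name: stdout       # prints records to standard output
      match: '*'
```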

2 changes: 1 addition & 1 deletion concepts/data-pipeline/parser.md
@@ -26,7 +26,7 @@ The parser converts unstructured data to structured data. As an example, conside
192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
```

This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression) is used, the log entry could be converted to:
This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression.md) is used, the log entry could be converted to:

```javascript
{
2 changes: 1 addition & 1 deletion installation/docker.md
@@ -253,7 +253,7 @@ Alpine Linux uses Musl C library instead of Glibc. Musl isn't fully compatible w
### Why use distroless containers?

The reasons for using distroless are well covered in
[Why should I use Distroless images?](https://github.com/GoogleContainerTools/distroless#why-should-i-use-distroless-images).
[Why should I use Distroless images?](https://github.com/GoogleContainerTools/distroless#why-should-i-use-distroless-images).

- Includes only what you need, reducing the available attack surface.
- Reduces size and improves performance.
10 changes: 5 additions & 5 deletions local-testing/validating-your-data-and-structure.md
@@ -4,7 +4,7 @@ Fluent Bit supports multiple sources and formats. In addition, it provides filte

Fluent Bit users are encouraged to integrate data validation in their continuous integration (CI) systems.

In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect) filter, which you can use to validate keys and values from your records and take action when an exception is found.
In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect.md) filter, which you can use to validate keys and values from your records and take action when an exception is found.

A simplified view of the data processing pipeline is as follows:

@@ -20,8 +20,8 @@ IS --> Fil --> OD

Consider the following pipeline, which uses a JSON file as its data source and has two filters:

- [Grep](../pipeline/filters/grep) to exclude certain records.
- [Record Modifier](../pipeline/filters/record-modifier) to alter records' content by adding and removing specific keys.
- [Grep](../pipeline/filters/grep.md) to exclude certain records.
- [Record Modifier](../pipeline/filters/record-modifier.md) to alter records' content by adding and removing specific keys.

```mermaid
flowchart LR
@@ -37,7 +37,7 @@ record --> stdout

Add data validation between each step to ensure your data structure is correct.

This example uses the [Expect](../pipeline/filters/expect) filter.
This example uses the [Expect](../pipeline/filters/expect.md) filter.

```mermaid
flowchart LR
@@ -164,7 +164,7 @@ The following is the Fluent Bit classic parsers file:
{% endtab %}
{% endtabs %}

If the JSON parser fails or is missing in the [Tail](../pipeline/inputs/tail) input (`parser json`), the Expect filter triggers the `exit` action.
If the JSON parser fails or is missing in the [Tail](../pipeline/inputs/tail.md) input (`parser json`), the Expect filter triggers the `exit` action.

To extend the pipeline, add a Grep filter to match records whose `label` map contains a key called `name` with the value `abc`, and add an Expect filter to re-validate that condition:
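
A hedged YAML sketch of that extension might look like the following; the `$label['name']` record accessor assumes the record shape described above:

```yaml
# Illustrative only: keep records whose label.name equals abc,
# then re-validate the same condition with the Expect filter.
pipeline:
  filters:
    - name: grep
      match: '*'
      regex: $label['name'] ^abc$
    - name: expect
      match: '*'
      key_val_eq: $label['name'] abc
      action: exit
```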

2 changes: 1 addition & 1 deletion pipeline/filters/checklist.md
@@ -9,7 +9,7 @@ The plugin supports the following configuration parameters
| Key | Description | Default |
| :-- | :---------- | :------ |
| `file` | The single value file that Fluent Bit will use as a lookup table to determine if the specified `lookup_key` exists. | _none_ |
| `lookup_key` | The specific key to look up and determine if it exists. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor). | _none_ |
| `lookup_key` | The specific key to look up and determine if it exists. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). | _none_ |
| `record` | The record to add if the `lookup_key` is found in the specified `file`. You can add multiple record parameters. | _none_ |
| `mode` | Set the check mode. `exact` and `partial` are supported. | `exact`|
| `print_query_time` | Print to stdout the elapsed query time for every matched record. | `false` |
2 changes: 1 addition & 1 deletion pipeline/filters/grep.md
@@ -39,7 +39,7 @@ To start filtering records, run the filter from the command line or through the

When using the command line, pay close attention to quote the regular expressions. Using a configuration file might be easier.
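
As a hedged sketch, the equivalent rule in a YAML configuration file might look like the following (it mirrors the command shown next):

```yaml
# Illustrative only: read lines.txt, keep records whose log field matches aa.
pipeline:
  inputs:
    - name: tail
      path: lines.txt
  filters:
    - name: grep
      match: '*'
      regex: log aa
  outputs:
    - name: stdout
      match: '*'
```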

The following command loads the [tail](../../pipeline/inputs/tail) plugin and reads the content of `lines.txt`. Then the `grep` filter applies a regular expression rule over the `log` field created by the `tail` plugin and only passes records with a field value starting with `aa`:
The following command loads the [tail](../../pipeline/inputs/tail.md) plugin and reads the content of `lines.txt`. Then the `grep` filter applies a regular expression rule over the `log` field created by the `tail` plugin and only passes records with a field value starting with `aa`:

```shell
fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdout
6 changes: 3 additions & 3 deletions pipeline/filters/log_to_metrics.md
@@ -26,9 +26,9 @@ The plugin supports the following configuration parameters:
| `metric_description` | Required. Sets a description for the metric. | |
| `bucket` | Required for mode `histogram`. Defines a bucket for histograms. | For example, `0.75` |
| `add_label` | Adds a custom label `NAME` and sets its value to the value of `KEY`. | |
| `label_field` | Includes a record field as label dimension in the metric. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor) notation for nested fields. |
| `value_field` | Required for modes `gauge` and `histogram`. Specifies the record field that holds a numerical value. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor) notation for nested fields. |
| `kubernetes_mode` | If enabled, adds `pod_id`, `pod_name`, `namespace_name`, `docker_id` and `container_name` to the metric as labels. This option is intended to be used in combination with the [Kubernetes](./kubernetes) filter plugin, which fills those fields. | |
| `label_field` | Includes a record field as label dimension in the metric. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) notation for nested fields. |
| `value_field` | Required for modes `gauge` and `histogram`. Specifies the record field that holds a numerical value. | Name of record key. Supports [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) notation for nested fields. |
| `kubernetes_mode` | If enabled, adds `pod_id`, `pod_name`, `namespace_name`, `docker_id` and `container_name` to the metric as labels. This option is intended to be used in combination with the [Kubernetes](./kubernetes.md) filter plugin, which fills those fields. | |
| `Regex` | Includes records in which the content of `KEY` matches the regular expression. | `KEY REGEX` |
| `Exclude` | Excludes records in which the content of `KEY` matches the regular expression. | `KEY REGEX` |
| `Flush_Interval_Sec` | The interval for metrics emission, in seconds. If `Flush_Interval_Sec` and `Flush_Interval_Nsec` are either both unset or both set to `0`, the filter emits metrics immediately after each filter match. Otherwise, if either parameter is set to a non-zero value, the filter emits metrics at the specified interval. Longer intervals help lower resource consumption in high-load situations. Default value: `0`. | |
4 changes: 2 additions & 2 deletions pipeline/inputs/process.md
@@ -2,7 +2,7 @@

The _Process metrics_ input plugin lets you check how healthy a process is. It does so by performing service checks at specified intervals.

This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](./node-exporter-metrics) input plugin.
This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](./node-exporter-metrics.md) input plugin.

## Configuration parameters

@@ -16,7 +16,7 @@ The plugin supports the following configuration parameters:
| `Alert` | If enabled, the plugin only generates messages when the target process is down. | `false` |
| `Fd` | If enabled, the number of open file descriptors (`fd`) is appended to each record. | `true` |
| `Mem` | If enabled, memory usage of the process is appended to each record. | `true` |
| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading#inputs). | `false` |
| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |

## Get started

4 changes: 2 additions & 2 deletions pipeline/outputs/azure_kusto.md
@@ -26,7 +26,7 @@ Fluent Bit uses the application's credentials to ingest data into your cluster.

- [Register an application](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application)
- [Add a client secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#add-a-client-secret)
- [Authorize the app in your database](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/access-control/principals-and-identity-providers#azure-ad-tenants)
- [Authorize the app in your database](https://learn.microsoft.com/en-us/azure/data-explorer/kusto/management/access-control/principals-and-identity-providers)
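
After the application is registered, its credentials are passed to the output plugin. The following hedged sketch shows the general shape; every value is a placeholder:

```yaml
# Illustrative only: all values below are placeholders.
pipeline:
  outputs:
    - name: azure_kusto
      match: '*'
      tenant_id: <app_tenant_id>
      client_id: <app_client_id>
      client_secret: <app_client_secret>
      ingestion_endpoint: https://ingest-<cluster>.<region>.kusto.windows.net
      database_name: <database>
      table_name: <table>
```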

## Create a table

@@ -70,7 +70,7 @@ By default, Kusto will insert incoming ingestion data into a table by inferring
| `buffering_enabled` | Optional. Enable buffering to disk before ingesting into Azure Kusto. | `Off` |
| `buffer_dir` | Optional. When buffering is `On`, specifies the directory where buffered data is stored. | `/tmp/fluent-bit/azure-kusto/` |
| `upload_timeout` | Optional. When buffering is `On`, specifies a timeout for uploads. Fluent Bit starts ingesting buffer files that were created more than the specified number of minutes ago and haven't reached the `upload_file_size` limit. | `30m` |
| `upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in MBs. | `200MB` |
| `upload_file_size`| Optional. When buffering is `On`, specifies the size of files to be uploaded in megabytes. | `200MB` |
| `azure_kusto_buffer_key` | Optional. When buffering is `On`, sets the Azure Kusto buffer key, which must be unique when multiple instances of the Azure Kusto output plugin run with buffering enabled. | `key` |
| `store_dir_limit_size` | Optional. When buffering is `On`, sets the maximum size of the buffer directory. | `8GB` |
| `buffer_file_delete_early` | Optional. When buffering is `On`, whether to delete the buffered file early after successful blob creation. | `Off` |
2 changes: 1 addition & 1 deletion pipeline/outputs/kinesis.md
@@ -10,7 +10,7 @@ This is the documentation for the core Fluent Bit Kinesis plugin written in C. I

This `kinesis_streams` plugin always uses a random partition key when uploading records to Kinesis through the [PutRecords API](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html).

For information about how AWS credentials are fetched, see [AWS credentials](../../administration/aws-credentials).
For information about how AWS credentials are fetched, see [AWS credentials](../../administration/aws-credentials.md).
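
As a hedged sketch, a minimal configuration might look like the following; the region and stream name are placeholders:

```yaml
# Illustrative only: send all records to one Kinesis data stream.
pipeline:
  outputs:
    - name: kinesis_streams
      match: '*'
      region: us-east-1      # placeholder AWS region
      stream: my-stream      # placeholder stream name
```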

## Configuration parameters

1 change: 0 additions & 1 deletion pipeline/outputs/logdna.md
@@ -18,7 +18,6 @@ This plugin uses the following configuration parameters:
| `tags` | A list of comma-separated strings to group records in LogDNA and simplify the query with filters. | _none_ |
| `file` | Optional name of a file being monitored. This value is only set if the record doesn't contain a reference to it. | _none_ |
| `app` | Name of the application. This value is automatically discovered on each record. If no value is found, the default value is used. | `Fluent Bit` |
| `workers` | The number of [workers](../../administration/multithreading#outputs) to perform flush operations for this output. | `0` |
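
As a hedged sketch using the parameters above, a configuration might look like the following; the `api_key` property and all values are assumptions for illustration:

```yaml
# Illustrative only: ship records to LogDNA with grouping tags.
pipeline:
  outputs:
    - name: logdna
      match: '*'
      api_key: <your_api_key>   # assumed credential property
      tags: dev,web             # comma-separated grouping tags
      app: fluent-bit           # fallback application name
```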

## Data discovery and enrichment

2 changes: 1 addition & 1 deletion pipeline/outputs/new-relic.md
@@ -10,7 +10,7 @@ The _New Relic_ output plugin lets you send logs to New Relic.
| `api_key` | Your [New Relic API key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/). Either an `api_key` or `license_key` is required.| _none_ |
| `license_key` | Your [New Relic license key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/). Either an `api_key` or `license_key` is required. | _none_ |
| `compress` | Sets the compression mechanism for the payload. Possible values: `gzip` or `false`. | `gzip` |
| `workers` | Sets the number of [workers](../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
| `workers` | Sets the number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

## Example configuration

4 changes: 2 additions & 2 deletions pipeline/outputs/observe.md
@@ -1,14 +1,14 @@
# Observe

Use the [HTTP output plugin](./http) to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html). It issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format.
Use the [HTTP output plugin](./http.md) to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html). It issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format.

## Configuration parameters

The following HTTP configuration parameters are relevant to Observe:

| Key | Description | Default |
| --- | ----------- | ------- |
| `host` | IP address or hostname of the Observe data collection endpoint. Replace `$(OBSERVE_CUSTOMER)` with your [Customer ID](https://docs.observeinc.com/en/latest/content/common-topics/HelpfulHints.html?highlight=customer%20id#customer-id). | `OBSERVE_CUSTOMER.collect.observeinc.com` |
| `host` | IP address or hostname of the Observe data collection endpoint. Replace `$(OBSERVE_CUSTOMER)` with your [Customer ID](https://docs.observeinc.com/en/latest/content/hints/CustomerId.html). | `OBSERVE_CUSTOMER.collect.observeinc.com` |
| `port` | TCP port to use when sending data to Observe. | `443` |
| `tls` | Specifies whether to use TLS. | `on` |
| `uri` | Specifies the HTTP URI for Observe. | `/v1/http/fluentbit` |
2 changes: 1 addition & 1 deletion pipeline/outputs/oci-logging-analytics.md
@@ -16,7 +16,7 @@ This plugin uses the following configuration parameters:
| `profile_name` | The OCI configuration profile name to be used from the configuration file. | `DEFAULT` |
| `namespace` | The OCI tenancy namespace to upload log data to. | _none_ |
| `proxy` | The proxy name, in `http://host:port` format. Only supports HTTP protocol. | _none_ |
| `workers` | The number of [workers](../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
| `oci_config_in_record` | If set to `true`, the `oci_la_*` properties that follow will be read from the record itself instead of from the output plugin configuration. | `false` |
| `oci_la_log_group_id` | Required. The Oracle Cloud Identifier (OCID) of the Logging Analytics where you want to store logs. | _none_ |
| `oci_la_log_source_name` | Required. The Logging Analytics Source to use for processing log records. | _none_ |
2 changes: 1 addition & 1 deletion pipeline/outputs/standard-output.md
@@ -17,7 +17,7 @@ The _standard output_ plugin prints ingested data to standard output.
fluent-bit -i cpu -o stdout -v
```
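
The same pipeline can be expressed as a YAML configuration file; this is a hedged sketch of the equivalent setup:

```yaml
# Illustrative equivalent of the command above.
pipeline:
  inputs:
    - name: cpu
  outputs:
    - name: stdout
      match: '*'
```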

This example directs the plugin to gather [CPU](../inputs/cpu-metrics) usage metrics and print them out to the standard output in a human-readable way:
This example directs the plugin to gather [CPU](../inputs/cpu-metrics.md) usage metrics and print them out to the standard output in a human-readable way:

```shell
fluent-bit -i cpu -o stdout -p format=msgpack -v
2 changes: 1 addition & 1 deletion pipeline/processors/README.md
@@ -6,7 +6,7 @@ Additionally, filters can be implemented in a way that mimics the behavior of pr

{% hint style="info" %}

Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/configuration-file) support processors.
Only [YAML configuration files](../../administration/configuring-fluent-bit/yaml/configuration-file.md) support processors.

{% endhint %}
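
As a hedged sketch of what this looks like, the following YAML attaches a processor to an input; the `content_modifier` processor and its properties are used illustratively:

```yaml
# Illustrative only: insert a key into every log record at the input stage.
pipeline:
  inputs:
    - name: dummy
      processors:
        logs:
          - name: content_modifier
            action: insert
            key: source
            value: dummy-input
  outputs:
    - name: stdout
      match: '*'
```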
