2 changes: 1 addition & 1 deletion .vale.ini
@@ -5,7 +5,7 @@ IgnoredScopes = tt, code
[formats]
mdx = md

-[*.{md,markdown,txt,htm,html}]
+[*.{md,markdown}]
BasedOnStyles = FluentBit

FluentBit.Terms = YES
2 changes: 1 addition & 1 deletion README.md
@@ -24,7 +24,7 @@ description: High Performance Telemetry Agent for Logs, Metrics and Traces
- Connect nearly any source to nearly any destination using preexisting plugins
- Extensibility:
- Write input, filter, or output plugins in the C language
-- WASM: [WASM Filter Plugins](development/wasm-filter-plugins.md) or [WASM Input Plugins](development/wasm-input-plugins.md)
+- Wasm: [Wasm Filter Plugins](development/wasm-filter-plugins.md) or [Wasm Input Plugins](development/wasm-input-plugins.md)
- Write [Filters in Lua](pipeline/filters/lua.md) or [Output plugins in Golang](development/golang-output-plugins.md)
- [Monitoring](administration/monitoring.md): Expose internal metrics over HTTP in JSON and [Prometheus](https://prometheus.io/) format
- [Stream Processing](stream-processing/introduction.md): Perform data selection and transformation using simple SQL queries
2 changes: 1 addition & 1 deletion SUMMARY.md
@@ -98,7 +98,7 @@
* [Docker events](pipeline/inputs/docker-events.md)
* [Docker metrics](pipeline/inputs/docker-metrics.md)
* [Dummy](pipeline/inputs/dummy.md)
-* [Ebpf](pipeline/inputs/ebpf.md)
+* [eBPF](pipeline/inputs/ebpf.md)
* [Elasticsearch](pipeline/inputs/elasticsearch.md)
* [Exec WASI](pipeline/inputs/exec-wasi.md)
* [Exec](pipeline/inputs/exec.md)
2 changes: 1 addition & 1 deletion pipeline/inputs/disk-io-metrics.md
@@ -78,4 +78,4 @@ pipeline:

Total interval (sec) = `Interval_Sec` + (`Interval_Nsec` / 1000000000)

For example: `1.5s` = `1s` + `500000000ns`
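
As an illustrative sketch (not part of this change), a configuration polling every 1.5 seconds might look like the following; the `disk` plugin name and the lowercase parameter keys are assumed from this page's YAML examples:

```yaml
pipeline:
  inputs:
    # 1s + (500000000ns / 1000000000) = 1.5s total interval
    - name: disk
      interval_sec: 1
      interval_nsec: 500000000

  outputs:
    - name: stdout
      match: '*'
```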
2 changes: 1 addition & 1 deletion pipeline/inputs/ebpf.md
@@ -1,4 +1,4 @@
-# Ebpf
+# eBPF

{% hint style="info" %}
This plugin is experimental and might be unstable. Use it in development or testing environments only. Its features and behavior are subject to change.
8 changes: 4 additions & 4 deletions pipeline/inputs/elasticsearch.md
@@ -11,12 +11,12 @@ The plugin supports the following configuration parameters:
| `buffer_max_size` | Set the maximum size of the buffer. | `4M` |
| `buffer_chunk_size` | Set the buffer chunk size. | `512K` |
| `tag_key` | Specify the key name to extract as a tag. | `NULL` |
-| `meta_key` | Specify a key name for meta information. | "@meta" |
-| `hostname` | Specify hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. | "localhost" |
-| `version` | Specify Elasticsearch server version. This parameter is effective for checking a version of Elasticsearch/OpenSearch server version. | "8.0.0" |
+| `meta_key` | Specify a key name for meta information. | `@meta` |
+| `hostname` | Specify a hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. | `localhost` |
+| `version` | Specify the Elasticsearch server version. This parameter is used when checking the version of an Elasticsearch/OpenSearch server. | `8.0.0` |
| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |

-The Elasticsearch cluster uses "sniffing" to optimize the connections between its cluster and clients. Elasticsearch can build its cluster and dynamically generate a connection list which is called "sniffing". The `hostname` will be used for sniffing information and this is handled by the sniffing endpoint.
+The Elasticsearch cluster uses "sniffing" to optimize the connections between its cluster and clients, which means it builds its cluster and dynamically generates a connection list. The `hostname` value is used for sniffing information, which is handled by the sniffing endpoint.
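
As a sketch of how these parameters fit together (the `listen` and `port` values are placeholders, and the hostname `fluent-bit.example.com` is hypothetical), a configuration might look like:

```yaml
pipeline:
  inputs:
    # hostname and version are what clients see when they
    # sniff cluster node information from this endpoint.
    - name: elasticsearch
      listen: 0.0.0.0
      port: 9200
      hostname: fluent-bit.example.com
      version: 8.0.0

  outputs:
    - name: stdout
      match: '*'
```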

## Get started

6 changes: 3 additions & 3 deletions pipeline/inputs/exec-wasi.md
@@ -10,7 +10,7 @@ The plugin supports the following configuration parameters:
|:-------------------|:---------------------------------------------------------------------------------------------------------------------------------------------|
| `WASI_Path` | The location of a Wasm program file. |
| `Parser` | Specify the name of a parser to interpret the entry as a structured message. |
-| `Accessible_Paths` | Specify the allowed list of paths to be able to access paths from WASM programs. |
+| `Accessible_Paths` | Specify the list of paths that Wasm programs are allowed to access. |
| `Interval_Sec` | Polling interval (seconds). |
| `Interval_NSec` | Polling interval (nanoseconds). |
| `Wasm_Heap_Size` | The heap size for Wasm execution. Review [unit sizes](../../administration/configuring-fluent-bit/unit-sizes.md) for allowed values. |
@@ -23,11 +23,11 @@ The plugin supports the following configuration parameters:

Here is a configuration example.

-`in_exec_wasi` can handle parsers. To retrieve from structured data from a WASM program, you must create a `parser.conf`:
+`in_exec_wasi` can handle parsers. To retrieve structured data from a Wasm program, you must create a `parser.conf`:

The `Time_Format` must match the format of the timestamp you're using.

-This example assumes the WASM program writes JSON style strings to `stdout`.
+This example assumes the Wasm program writes JSON-style strings to `stdout`.

{% tabs %}
{% tab title="parsers.yaml" %}
2 changes: 1 addition & 1 deletion pipeline/inputs/exec.md
@@ -194,4 +194,4 @@ The previous script would be safer if written with:
-p command='echo '"$(printf '%q' "$@")" \
```

It's generally best to avoid dynamically generating the command or handling untrusted arguments.
4 changes: 2 additions & 2 deletions pipeline/inputs/forward.md
@@ -57,7 +57,7 @@ pipeline:
port: 24224
buffer_chunk_size: 1M
buffer_max_size: 6M

outputs:
- name: stdout
match: '*'
@@ -82,7 +82,7 @@
{% endtab %}
{% endtabs %}

-## Fluent Bit and Secure Forward Setup
+## Fluent Bit and secure forward setup

In Fluent Bit v3 or later, `in_forward` can handle the secure forward protocol.
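
A minimal sketch of a secure forward receiver might look like the following; the `shared_key` and `self_hostname` parameter names are assumptions based on the Fluentd secure forward convention, and their values here are placeholders:

```yaml
pipeline:
  inputs:
    # Accept secure forward connections guarded by a pre-shared key.
    - name: forward
      listen: 0.0.0.0
      port: 24224
      shared_key: secret
      self_hostname: flb.example.com

  outputs:
    - name: stdout
      match: '*'
```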

6 changes: 3 additions & 3 deletions pipeline/inputs/http.md
@@ -51,7 +51,7 @@ pipeline:
- name: http
listen: 0.0.0.0
port: 8888

outputs:
- name: stdout
match: app.log
@@ -171,7 +171,7 @@ The `success_header` parameter lets you set multiple HTTP headers on success. Th
pipeline:
inputs:
- name: http
success_header:
- X-Custom custom-answer
- X-Another another-answer
```
@@ -233,4 +233,4 @@ pipeline:

```shell
fluent-bit -i http -p port=8888 -o stdout
```
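
Once the service is listening, a test record can be posted to it. This is an illustrative example; the tag path `app.log` and the JSON body are chosen arbitrarily:

```shell
# The URI path becomes the record's tag.
curl -X POST -H "Content-Type: application/json" \
  -d '{"key1": "value1", "key2": "value2"}' \
  http://localhost:8888/app.log
```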
47 changes: 22 additions & 25 deletions pipeline/inputs/kafka.md
@@ -1,4 +1,4 @@
-# Kafka Consumer
+# Kafka

The _Kafka_ input plugin enables Fluent Bit to consume messages directly from one or more [Apache Kafka](https://kafka.apache.org/) topics. By subscribing to specified topics, this plugin efficiently collects and forwards Kafka messages for further processing within your Fluent Bit pipeline.

@@ -22,7 +22,7 @@ This plugin uses the official [librdkafka C library](https://github.com/edenhill

## Get started

-To subscribe to or collect messages from Apache Kafka, run the plugin from the command line or through the configuration file as shown below.
+To subscribe to or collect messages from Apache Kafka, run the plugin from the command line or through the configuration file as shown in the following examples.

### Command line

@@ -132,41 +132,34 @@ Every message received is then processed with `kafka.lua` and sent back to the `

The example can be executed locally with `make start` in the `examples/kafka_filter` directory (`docker/compose` is used).

-## AWS MSK IAM Authentication
+## AWS MSK IAM authentication

-*Available since Fluent Bit v4.0.4*
-
-Fluent Bit supports authentication to Amazon MSK (Managed Streaming for Apache Kafka) clusters using AWS IAM. This allows you to securely connect to MSK brokers with AWS credentials, leveraging IAM roles and policies for access control.
+Fluent Bit v4.0.4 and later supports authentication to Amazon MSK (Managed Streaming for Apache Kafka) clusters using AWS IAM. This lets you securely connect to MSK brokers with AWS credentials, leveraging IAM roles and policies for access control.

-### Prerequisites
-
-**Build Requirements**
+### Build requirements

If you are compiling Fluent Bit from source, ensure the following requirements are met to enable AWS MSK IAM support:

- The packages `libsasl2` and `libsasl2-dev` must be installed on your build environment.

-**Runtime Requirements**
+### Runtime requirements

- **Network Access:** Fluent Bit must be able to reach your MSK broker endpoints (AWS VPC setup).
-- **AWS Credentials:** Provide credentials using any supported AWS method:
+- **AWS Credentials:** Provide these AWS credentials using any supported AWS method. These credentials are discovered by default when the `aws_msk_iam` flag is enabled.
  - IAM roles (recommended for EC2, ECS, or EKS)
  - Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`)
  - AWS credentials file (`~/.aws/credentials`)
  - Instance metadata service (IMDS)
-
-Note these credentials are discovery by default when `aws_msk_iam` flag is enabled.
-
-- **IAM Permissions:** The credentials must allow access to the target MSK cluster (see example policy below).
+- **IAM Permissions:** The credentials must allow access to the target MSK cluster, as shown in the following example policy.

-### Configuration Parameters
+### Configuration parameters [#config-aws]

-| Property | Description | Type | Required |
-|---------------------------|---------------------------------------------------|---------|--------------------------------|
-| `aws_msk_iam` | Enable AWS MSK IAM authentication | Boolean | No (default: false) |
-| `aws_msk_iam_cluster_arn` | Full ARN of the MSK cluster for region extraction | String | Yes (if `aws_msk_iam` is true) |
+| Property | Description | Default |
+| -------- | ----------- | ------- |
+| `aws_msk_iam` | If `true`, enables AWS MSK IAM authentication. Possible values: `true`, `false`. | `false` |
+| `aws_msk_iam_cluster_arn` | Full ARN of the MSK cluster for region extraction. This value is required if `aws_msk_iam` is `true`. | _none_ |


-### Configuration Example
+### Configuration example

```yaml
pipeline:
@@ -182,9 +175,13 @@ pipeline:
match: '*'
```

-### Example AWS IAM Policy
+### Example AWS IAM policy

+{% hint style="info" %}
+
-> **Note:** IAM policies and permissions can be complex and may vary depending on your organization's security requirements. If you are unsure about the correct permissions or best practices, please consult with your AWS administrator or an AWS expert who is familiar with MSK and IAM security.
+IAM policies and permissions can be complex and might vary depending on your organization's security requirements. If you are unsure about the correct permissions or best practices, consult your AWS administrator or an AWS expert who is familiar with MSK and IAM security.
+
+{% endhint %}

The AWS credentials used by Fluent Bit must have permission to connect to your MSK cluster. Here is a minimal example policy:

@@ -206,4 +203,4 @@ The AWS credentials used by Fluent Bit must have permission to connect to your M
}
]
}
```
4 changes: 2 additions & 2 deletions pipeline/inputs/kernel-logs.md
@@ -1,6 +1,6 @@
# Kernel logs

-The _kmsg_ input plugin reads the Linux Kernel log buffer from the beginning. It gets every record and parses fields as `priority`, `sequence`, `seconds`, `useconds`, and `message`.
+The _Kernel logs_ (`kmsg`) input plugin reads the Linux kernel log buffer from the beginning. It gets every record and parses fields such as `priority`, `sequence`, `seconds`, `useconds`, and `message`.

## Configuration parameters

@@ -64,4 +64,4 @@ pipeline:
```

{% endtab %}
{% endtabs %}
4 changes: 2 additions & 2 deletions pipeline/inputs/mqtt.md
@@ -55,7 +55,7 @@ pipeline:
tag: data
listen: 0.0.0.0
port: 1883

outputs:
- name: stdout
match: '*'
@@ -77,4 +77,4 @@
```

{% endtab %}
{% endtabs %}
2 changes: 1 addition & 1 deletion pipeline/inputs/nginx.md
@@ -71,7 +71,7 @@ To gather metrics from the command line with the NGINX Plus REST API you need to
fluent-bit -i nginx_metrics -p host=127.0.0.1 -p port=80 -p nginx_plus=on -p status_url=/api -o stdout
```

-### Configuration File
+### Configuration file

In your main configuration file, append the following:
