**pipeline/inputs/kafka.md** (85 additions, 7 deletions)
# Kafka Consumer

The _Kafka_ input plugin enables Fluent Bit to consume messages directly from one or more [Apache Kafka](https://kafka.apache.org/) topics. By subscribing to the specified topics, the plugin collects and forwards Kafka messages for further processing within your Fluent Bit pipeline.

Starting with version 4.0.4, the Kafka input plugin supports authentication with AWS MSK IAM, enabling integration with Amazon MSK (Managed Streaming for Apache Kafka) clusters that require IAM-based access.

This plugin uses the official [librdkafka C library](https://github.com/edenhill/librdkafka) as a built-in dependency.
The Kafka plugin can read parameters through the `-p` argument (property), as in the sketch below.
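For example, a quick test from the command line (the broker address and topic are placeholders):

```shell
$ fluent-bit -i kafka -p brokers=192.168.1.3:9092 -p topics=some-topic -o stdout
```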
In your main configuration file append the following:
```yaml
pipeline:
  inputs:
    - name: kafka
      brokers: 192.168.1.3:9092
      topics: some-topic
      poll_ms: 100

  outputs:
    - name: stdout
      match: '*'
```
The following example consumes messages from the `fb-source` topic, runs each record through a Lua filter, and produces the result back to Kafka:

```yaml
pipeline:
  inputs:
    - name: kafka
      brokers: kafka-broker:9092
      topics: fb-source
      poll_ms: 100
      format: json

  filters:
    - name: lua
      match: '*'
      script: kafka.lua
      call: modify_kafka_message

  outputs:
    - name: kafka
      brokers: kafka-broker:9092
      topics: fb-sink
```
Since the payload will be in JSON format, the plugin is configured to parse the content as JSON (`format: json`).
Every message received is then processed with `kafka.lua` and sent back to the `fb-sink` topic of the same broker.
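The `kafka.lua` script itself isn't shown here; a minimal script with the same entry point might look like the following sketch, which uses the Lua filter's `(tag, timestamp, record)` calling convention:

```lua
-- Illustrative sketch; not the script shipped in examples/kafka_filter.
function modify_kafka_message(tag, timestamp, record)
    -- Annotate each record before it is produced to the fb-sink topic.
    record["modified"] = "true"
    -- Return code 1 means the record was modified; keep timestamp and record.
    return 1, timestamp, record
end
```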
The example can be executed locally with `make start` in the `examples/kafka_filter` directory (`docker/compose` is used).
## AWS MSK IAM Authentication

*Available since Fluent Bit v4.0.4*
Fluent Bit supports authentication to Amazon MSK (Managed Streaming for Apache Kafka) clusters using AWS IAM. This allows you to securely connect to MSK brokers with AWS credentials, leveraging IAM roles and policies for access control.
### Prerequisites
**Build Requirements**
If you are compiling Fluent Bit from source, ensure the following requirements are met to enable AWS MSK IAM support:
- The packages `libsasl2` and `libsasl2-dev` must be installed on your build environment.
**Runtime Requirements**
- **Network Access:** Fluent Bit must be able to reach your MSK broker endpoints (AWS VPC setup).
- **AWS Credentials:** Provide credentials through any supported AWS mechanism, such as environment variables, a shared credentials file, or an IAM role attached to the instance or task. A configuration sketch follows this list.
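With the prerequisites met, enabling MSK IAM on the input might look like the following sketch. The property names `aws_msk_iam` and `aws_msk_iam_cluster_arn` are assumptions, and the broker endpoint and cluster ARN are placeholders; verify all of them against the configuration reference for your Fluent Bit version.

```yaml
pipeline:
  inputs:
    - name: kafka
      # MSK IAM listeners on provisioned clusters use port 9098.
      brokers: b-1.mycluster.abc123.c2.kafka.us-east-1.amazonaws.com:9098
      topics: my-topic
      # Assumed property names for MSK IAM authentication.
      aws_msk_iam: true
      aws_msk_iam_cluster_arn: arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/uuid

  outputs:
    - name: stdout
      match: '*'
```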
> **Note:** IAM policies and permissions can be complex and may vary depending on your organization's security requirements. If you are unsure about the correct permissions or best practices, please consult with your AWS administrator or an AWS expert who is familiar with MSK and IAM security.
The AWS credentials used by Fluent Bit must have permission to connect to your MSK cluster.
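A minimal example policy might look like the following sketch, which uses representative `kafka-cluster` consumer actions; adapt the `Action` list and scope the `Resource` entries to your cluster, topic, and consumer-group ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeTopic",
        "kafka-cluster:ReadData",
        "kafka-cluster:DescribeGroup",
        "kafka-cluster:AlterGroup"
      ],
      "Resource": "*"
    }
  ]
}
```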
**pipeline/inputs/opentelemetry.md** (5 additions, 7 deletions)
A sample configuration file to get started will look something like the following:

{% tabs %}
{% tab title="fluent-bit.yaml" %}

```yaml
pipeline:
  inputs:
    - name: opentelemetry
      listen: 127.0.0.1
      port: 4318

  outputs:
    - name: stdout
      match: '*'
```

{% endtab %}
{% tab title="fluent-bit.conf" %}

```text
[INPUT]
    name opentelemetry
    listen 127.0.0.1
    port 4318

[OUTPUT]
    name stdout
    match *
```

{% endtab %}
{% endtabs %}
With this configuration, Fluent Bit listens on port `4318` for data. You can now send telemetry data to the endpoints `/v1/metrics` for metrics, `/v1/traces` for traces, and `/v1/logs` for logs.
A sample curl request to POST JSON encoded log data would be:
```shell
$ curl --header "Content-Type: application/json" --request POST --data '{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1660296023390371588","body":{"stringValue":"{\"message\":\"dummy\"}"},"traceId":"","spanId":""}]}]}]}' http://0.0.0.0:4318/v1/logs
```
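The other endpoints accept the corresponding OTLP/JSON payloads. As an illustrative sketch (the IDs, span name, and timestamps below are placeholders, not values from this guide), a trace can be posted like this:

```shell
$ curl --header "Content-Type: application/json" --request POST --data '{"resourceSpans":[{"resource":{},"scopeSpans":[{"scope":{},"spans":[{"traceId":"5b8efff798038103d269b633813fc60c","spanId":"eee19b7ec3c1b174","name":"test-span","kind":1,"startTimeUnixNano":"1660296023390371588","endTimeUnixNano":"1660296023490371588"}]}]}]}' http://0.0.0.0:4318/v1/traces
```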
**pipeline/inputs/process-exporter-metrics.md** (53 additions, 16 deletions)
Fluent Bit 2.2 and later includes a process exporter plugin that builds off the Prometheus design to collect process-level metrics without having to manage two separate processes or agents.

The Process Exporter Metrics plugin collects the various metrics available from [the third party implementation of Prometheus Process Exporter](https://github.com/ncabatoff/process-exporter), and these will be expanded over time as needed.
{% hint style="info" %}
All metrics, including those collected with this plugin, flow through a separate pipeline from logs, and current filters don't operate on top of metrics. This plugin is only supported on Linux-based operating systems, as it uses the `proc` filesystem to access the relevant metrics. macOS doesn't have the `proc` filesystem, so this plugin won't work on it.
{% endhint %}
## Configuration
This input always runs in its own [thread](../../administration/multithreading.md).
In the following configuration file, the input plugin `process_exporter_metrics` collects metrics every 2 seconds and exposes them through the [Prometheus Exporter](../outputs/prometheus-exporter.md) output plugin on HTTP/TCP port 2021.
{% tabs %}
{% tab title="fluent-bit.yaml" %}

```yaml
# Process Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects host metrics on Linux and exposes
# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
service:
  flush: 1
  log_level: info

pipeline:
  inputs:
    - name: process_exporter_metrics
      tag: process_metrics
      scrape_interval: 2

  outputs:
    - name: prometheus_exporter
      match: process_metrics
      host: 0.0.0.0
      port: 2021
```

{% endtab %}
{% tab title="fluent-bit.conf" %}

```text
# Process Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects host metrics on Linux and exposes
# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
# $ curl http://127.0.0.1:2021/metrics
#
[SERVICE]
    flush        1
    log_level    info

[INPUT]
    name            process_exporter_metrics
    tag             process_metrics
    scrape_interval 2

[OUTPUT]
    name  prometheus_exporter
    match process_metrics
    host  0.0.0.0
    port  2021
```

{% endtab %}
{% endtabs %}
You can see the metrics by using `curl`:
```shell
$ curl http://127.0.0.1:2021/metrics
```
### Container to collect host metrics
The following `docker` command deploys Fluent Bit with a specific mount path for
`procfs` and settings enabled to ensure that Fluent Bit can collect from the host.
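A sketch of such a deployment, assuming the `path.procfs` property and the `fluent/fluent-bit:latest` image (verify both against your Fluent Bit version):

```shell
# Mount the host's /proc read-only into the container and point the plugin at it.
$ docker run -ti -v /proc:/host/proc:ro -p 2021:2021 fluent/fluent-bit:latest \
    /fluent-bit/bin/fluent-bit \
    -i process_exporter_metrics -p path.procfs=/host/proc \
    -o prometheus_exporter -p host=0.0.0.0 -p port=2021
```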
Development prioritises a subset of the available collectors in the [third party implementation of Prometheus Process Exporter](https://github.com/ncabatoff/process-exporter). To request others, open a GitHub issue.