Slow performance with Kafka input module #9813

@uranru

Bug Report

Describe the bug
I want to receive messages from a Kafka cluster, process them, and forward them on. For now I am testing with the stdout output, and throughput is very low: about 20,000 messages per minute (roughly 330 messages per second), which is far too slow for my message volume. I have tried various buffer settings and rdkafka.* options, but the rate is always the same. Fluent Bit runs in a Docker container. How can I increase throughput? Memory and CPU are barely loaded; the container is essentially idle.
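
To rule out the stdout output as the bottleneck, one variant worth testing is routing the kafka tag to Fluent Bit's built-in null output, which discards records after ingestion (a minimal sketch, replacing the stdout output in the configuration below):

[OUTPUT]
    # Discards all matched records, isolating input-side throughput
    # from any output-side backpressure.
    Name    null
    Match   kafka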

To Reproduce

  • Example log message if applicable:
{
	"host": {
		"name": "int-queues",
		"domain": "int"
	},
	"tags": [
		"internal"
	],
	"@timestamp": "2025-01-09T16:04:52.082Z",
	"log": {
		"name": "debug-trn-sync",
		"level": "debug",
		"pid": "2825983",
		"dir": "log",
		"ip": "local-script",
		"time": "2025-01-09T16:04:51.445Z"
	},
	"app": {
		"name": "backend",
		"group": "web",
		"environment": "prod"
	},
	"@version": "1"
}
  • Steps to reproduce the problem: run Fluent Bit 3.2.4 with the configuration below against a populated Kafka topic and watch the consumption rate on the stdout output.

Expected behavior

Substantially higher consumption throughput: the Kafka input should be limited by CPU, memory, or the network rather than plateauing around 330 messages per second while the container sits idle.
Your Environment

  • Version used:
[2025/01/09 18:52:30] [ info] [fluent bit] version=3.2.4, commit=5b0ff04120, pid=1
[2025/01/09 18:52:30] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2025/01/09 18:52:30] [ info] [simd    ] disabled
[2025/01/09 18:52:30] [ info] [cmetrics] version=0.9.9
[2025/01/09 18:52:30] [ info] [ctraces ] version=0.5.7
  • Configuration:
[SERVICE]
    flush        1
    http_server  on
    http_port    2020
    health_check on              # /api/v1/health
    #daemon       Off
    log_level    info
    #log_level    debug
    #Buffer_max_size 1024Mb
    #Buffer_chunk_size 50Mb
    #storage.max_chunks_up 1000

[INPUT]
    name            fluentbit_metrics
    tag             metrics.internal
    scrape_interval 5

[INPUT]
    Name          kafka
    Tag           kafka
    threaded      true
    brokers       nd-kafka-n01.int:9092,nd-kafka-n02.int:9092,nd-kafka-n03.int:9092
    Mem_Buf_Limit 256MB
    topics        int.web.backend
    poll_ms       100
    format        json
    group_id      fluent

    #rdkafka.queued.max.messages.kbytes 262144  # 256MB
    #rdkafka.fetch.message.max.bytes   10485760
    #rdkafka.max.partition.fetch.bytes 10485760
    #rdkafka.fetch.max.bytes           524288000
    #rdkafka.fetch.min.bytes 16384
    rdkafka.sasl.mechanism     PLAIN
    rdkafka.security.protocol  SASL_SSL
    rdkafka.sasl.username      kafkaclient
    rdkafka.sasl.password      {{ lookup('hashi_vault', 'secret=data/nomad/kafka:kafka_pass_kafkaclient') }}
    rdkafka.ssl.ca.location    /etc/certs/ca.cer

[OUTPUT]
    Name        stdout
    Match       kafka

[OUTPUT]
    Name        prometheus_remote_write
    Match       metrics.internal
    Host        victoria.stream.service.int
    Port        8428
    Uri         /api/v1/write

[OUTPUT]
    name            prometheus_exporter
    match           metrics.internal
    host            0.0.0.0
    port            2021
  • Environment name and version (e.g. Kubernetes? What version?):
    I use the bitnami/fluent-bit Docker image: https://hub.docker.com/r/bitnami/fluent-bit
  • Server type and version:
  • Operating System and version:
    Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-130-generic x86_64)
  • Filters and plugins: no filters; inputs kafka and fluentbit_metrics, outputs stdout, prometheus_remote_write, and prometheus_exporter (see configuration above).

Additional context
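
One of the rdkafka tuning variants I tried corresponds to the values left commented out in the kafka input above; restored, the relevant lines are (a sketch of a single variant, added to the existing [INPUT] block):

    # The commented-out values from the configuration above: a 256 MB
    # consumer queue and larger fetch sizes, per librdkafka consumer tuning.
    rdkafka.queued.max.messages.kbytes 262144
    rdkafka.fetch.message.max.bytes    10485760
    rdkafka.max.partition.fetch.bytes  10485760
    rdkafka.fetch.max.bytes            524288000
    rdkafka.fetch.min.bytes            16384

None of these changed the observed rate.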
