
Kafka auto-commit on failed index events #343

@cdenneen

Description


Currently, if my Kafka input tries to send an event to Elasticsearch/OpenSearch and the write fails (in my recent scenario the write index alias was missing), the events from that period appear to be "dropped on the floor", yet the consumer group offset is still advanced past them, so they are never accounted for after the alias is fixed.

After the alias was fixed, all new events were ingested properly.
However, the events from the roughly 15-hour window when the alias was broken were never ingested.

There needs to be a way to track those events so they can be re-synced. Resetting the consumer offset from latest to earliest would reprocess a large number of duplicate events, so that isn't an option.

The offset commit to ZK should account for the fact that the event wasn't successfully indexed.
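
As a rough illustration of the requested behavior (not this project's actual implementation), here is a minimal sketch of a consumer that disables auto-commit and only commits an offset after the corresponding document has been indexed. The topic name, consumer group, alias name, and endpoints are placeholders, and the clients assumed are confluent-kafka and elasticsearch-py.

```python
# Sketch only: commit-after-successful-index, so failed writes are not skipped.
# Placeholder names: "events" topic, "indexer" group, "my-write-alias" alias.
import json

from confluent_kafka import Consumer
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "indexer",
    "enable.auto.commit": False,   # never advance offsets automatically
    "auto.offset.reset": "latest",
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        try:
            # Index into the write alias; this raises if the alias is missing.
            es.index(index="my-write-alias", document=json.loads(msg.value()))
        except Exception:
            # Do NOT commit: the offset stays put, so the event can be retried
            # (or routed to a dead-letter path) instead of being lost.
            continue
        # Commit only once the event is safely in Elasticsearch/OpenSearch.
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()
```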
