Smallrye Kafka connector usage on the same @Outgoing topic will not report failure ? #33271
Unanswered
GauthierHacout
asked this question in
Q&A
Replies: 2 comments 1 reply
-
/cc @Ladicek (smallrye), @alesj (kafka), @cescoffier (kafka), @jmartisk (smallrye), @ozangunalp (kafka), @phillip-kruger (smallrye), @radcortez (smallrye)
-
Kafka records are polled in batches. But when a failure happens, polling should stop, and the application will eventually be marked as unhealthy. Polled records that are not processed / acknowledged will be re-processed when the application restarts, as their offsets won't be committed. Are you seeing a different behavior?
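For context, this "fail and stop" behavior is the connector's default and can be tuned per incoming channel via the `failure-strategy` attribute of the SmallRye Kafka connector. A sketch (the channel name `topic-in-A` is an assumption taken from the discussion, not the poster's actual configuration):

```properties
# Default: a processing failure marks the channel as down and stops the consumer.
mp.messaging.incoming.topic-in-A.failure-strategy=fail
# Alternatives: log the failure and continue, or route the failed record
# to a dead-letter topic instead of stopping the consumer.
# mp.messaging.incoming.topic-in-A.failure-strategy=ignore
# mp.messaging.incoming.topic-in-A.failure-strategy=dead-letter-queue
```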
-
Hello,
This is my first time participating in the Quarkus GitHub discussions, so let me know if I did anything wrong. I'm writing this message to discuss whether a specific behaviour should be reported as a bug. It's a very specific use case, so I'll try to explain it as best I can:
I'm using a Quarkus app to process Kafka records by filtering and transforming them, then producing them to another Kafka topic. My app reads from multiple topics but always produces to the same one. To illustrate, my code looks like this:
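(The original snippet was not captured in this export. The following is a minimal sketch reconstructed from the description, assuming two processor methods sharing one `@Outgoing` channel; the channel name `topic-in-A` and the body of `FilterAndTransform()` are assumptions, while `topic-in-B`, `topic-out`, and `Multi<KafkaRecord<String, Event>>` come from the post.)

```java
import io.smallrye.mutiny.Multi;
import io.smallrye.reactive.messaging.kafka.KafkaRecord;
import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
public class Processors {

    // processorA: consumes one topic, produces to the shared outgoing channel.
    @Incoming("topic-in-A")
    @Outgoing("topic-out")
    public Multi<KafkaRecord<String, Event>> processorA(Multi<KafkaRecord<String, Event>> records) {
        // If this Multi fails, the connector reports the channel as "down".
        return FilterAndTransform(records);
    }

    // processorB: consumes another topic, produces to the SAME outgoing channel.
    @Incoming("topic-in-B")
    @Outgoing("topic-out")
    public Multi<KafkaRecord<String, Event>> processorB(Multi<KafkaRecord<String, Event>> records) {
        return FilterAndTransform(records);
    }

    private Multi<KafkaRecord<String, Event>> FilterAndTransform(Multi<KafkaRecord<String, Event>> records) {
        // Filtering/transformation logic elided in the original post.
        return records;
    }
}
```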
Now my problem is that:
Whenever a failure is reported in processorA (e.g. the Multi fails for any reason inside FilterAndTransform(), and the Kafka connector then reports the consumer as being "down", logs an error, etc.), processorB will stop producing Kafka records to the
@Outgoing
topic, although it will still consume records from the @Incoming topic.
It took me a long time to understand this, and it created unexpected behaviour: whenever a failure was reported in processorA, Kafka records from "topic-in-B" would be skipped, since the consumer would still be working and acknowledging each record after FilterAndTransform(), but the producer wouldn't do anything with what comes out of the Multi<KafkaRecord<String, Event>>, so my records would never reach "topic-out".
So my question is:
Shouldn't the Kafka producer of
@Outgoing("topic-out")
report a failure? Or warn me that I'm sending records that will never be produced to Kafka, instead of just accepting them and doing nothing with them? I know this is very specific, but I feel it would help developers like me understand faster what's going on!
Edit: I'm using Quarkus 3.0.1
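(For anyone reproducing this: a setup where two methods feed the same `@Outgoing` channel relies on the outgoing channel's `merge` attribute in SmallRye Reactive Messaging. A sketch, assuming the `topic-out` channel name from the post:)

```properties
# Both processorA and processorB feed the same outgoing Kafka channel.
# Without merge=true, SmallRye Reactive Messaging rejects multiple
# upstreams on one channel at startup.
mp.messaging.outgoing.topic-out.connector=smallrye-kafka
mp.messaging.outgoing.topic-out.merge=true
```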