Implements a KafkaConsumerResource to solve consumer already closed issue #306

paualarco wants to merge 2 commits into monix:master
Conversation
> I can review this if you're interested :)

> Thanks, that would be helpful! I've updated the description :)
Hi @paualarco, sorry for ignoring this. Looking at the bug example:

```scala
KafkaConsumerObservable
  .manualCommit[String, String](kafkaConfig, List(topic))
  .timeoutOnSlowUpstreamTo(5.seconds, Observable.empty)
  .foldLeft(CommittableOffsetBatch.empty) { case (batch, message) =>
    batch.updated(message.committableOffset)
  }
  .mapEval(completeBatch => completeBatch.commitAsync())
  .headOrElseL(List.empty)
```

I feel like the real issue is that …
@Avasil no problem. I don't think that's the case: the poll-heartbeat consumer would fix partition reassignment triggered by rebalancing, which is caused by slow downstream consumers. This issue, in contrast, is fixed when the Resource is responsible for closing the consumer, rather than the Observable. For the poll heartbeat to have caused this issue, the consumer would have had to exceed `max.poll.interval.ms` (300 s by default), but the tests did not run nearly that long :)
Resolves #240 by introducing a new class, `KafkaConsumerResource`, which is an improved way of creating and closing a `KafkaConsumer` compared to doing so directly from `KafkaConsumerObservable`. The problem with the latter was that, on the last element of the observable, the consumer was closed immediately after emitting that element and signaling `onComplete`, which led to an exception when trying to commit the last consumer record. Using a resource fixes this, since the consumer is no longer closed as soon as the last element is consumed.
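To illustrate the ordering guarantee the description relies on, here is a minimal, self-contained sketch of the bracket/Resource pattern in plain Scala. `FakeConsumer` and `withConsumer` are hypothetical stand-ins (not the monix-kafka API): the point is only that `close()` runs after the *entire* use block, so the final record can still be committed, unlike closing the consumer from within the stream right after the last emission.

```scala
// Hypothetical stand-in for a Kafka consumer; throws if used after close(),
// mimicking the "consumer already closed" failure described in the issue.
final class FakeConsumer {
  private var closed = false
  def poll(): String = { require(!closed, "consumer already closed"); "record" }
  def commit(): Unit = require(!closed, "consumer already closed")
  def close(): Unit = closed = true
}

object ResourceSketch {
  // Bracket pattern: `use` runs to completion (including the final commit)
  // before `close()` is invoked in the finalizer.
  def withConsumer[A](use: FakeConsumer => A): A = {
    val consumer = new FakeConsumer
    try use(consumer)
    finally consumer.close()
  }

  def main(args: Array[String]): Unit = {
    val result = withConsumer { c =>
      val record = c.poll()
      c.commit() // safe: the consumer is closed only after this block returns
      record
    }
    println(result)
  }
}
```

With the pre-fix behavior, closing inside the stream would be equivalent to calling `c.close()` before the final `c.commit()`, which makes the commit throw; moving the close into the resource finalizer removes that race by construction.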