Description
I was trying to develop a resilient consumer in Node.js that reads a durable, replicated Kafka topic via the Dapr JS SDK.
The issue is that the topic offset is committed (i.e. moved forward, acknowledged) even if an error occurs, so Kafka-side retry doesn't work: any bug in my code or any transient error results in message loss.
While pinpointing the problem I found this code:
https://github.com/dapr/js-sdk/blob/main/src/implementation/Server/HTTPServer/HTTPServerImpl.ts#L171
In particular this:
```typescript
try {
  status = await cb(data, headers);
} catch (e) {
  // We catch and log an error, but we don't do anything with it as the statuses should define that
  this.logger.error(`[route-${routeObj.path}] Message processing failed, ${e}`);
}
statuses.push(status ?? DaprPubSubStatusEnum.SUCCESS);
```
So it seems that if my callback throws an unexpected error, the message is still acknowledged and therefore lost. With the Dapr SDK I can't guarantee that every message is processed and none are lost.
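For now I'm defensively wrapping my whole callback so that it can never throw and always returns an explicit status. A minimal sketch of that workaround, assuming SDK defaults for host/port; the component name `kafka-pubsub`, topic `payments`, and `handlePayment` are placeholders of mine, not SDK names:

```typescript
import { DaprServer, DaprPubSubStatusEnum } from "@dapr/dapr";

// Hypothetical business logic; stands in for whatever processes the payload.
async function handlePayment(data: unknown): Promise<void> {
  // ... real processing here ...
}

async function main() {
  const server = new DaprServer();

  // "kafka-pubsub" and "payments" are placeholder component/topic names.
  await server.pubsub.subscribe("kafka-pubsub", "payments", async (data) => {
    try {
      await handlePayment(data);
      return DaprPubSubStatusEnum.SUCCESS;
    } catch (e) {
      console.error(`payment handler failed, requesting redelivery: ${e}`);
      // Returning an explicit RETRY means the SDK's `status ?? SUCCESS`
      // fallback is never reached, so the message is not acked.
      return DaprPubSubStatusEnum.RETRY;
    }
  });

  await server.start();
}

main().catch(console.error);
```

This only masks the SDK's default, though; it doesn't explain why an unhandled throw is treated as success in the first place.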
Why does the catch block do nothing, and why is SUCCESS the default status? How is retry supposed to work?
Does retry even work at all with Kafka? I need messages to be durable, replicated, and acknowledged only after the handler has fully completed; the message payload carries money. What am I doing wrong?
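For reference, this is what I expected the fallback in the snippet above to look like. This is only a sketch of my assumption, reusing the same variables as the quoted SDK code, not a proposed patch:

```typescript
try {
  status = await cb(data, headers);
} catch (e) {
  this.logger.error(`[route-${routeObj.path}] Message processing failed, ${e}`);
  // My expectation: an unhandled error should lead to redelivery, not an ack.
  status = DaprPubSubStatusEnum.RETRY;
}
statuses.push(status ?? DaprPubSubStatusEnum.SUCCESS);
```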