Description
We're using a single franz-go Client to produce to multiple topics. Until recently they all had the same default size limit (both in the producer and on the broker), but we encountered large records on one topic, so we decided to increase that topic's record size limit on the broker side.
The problem is that we can't find any way to raise the limit safely on the producer side: the ProducerBatchMaxBytes limit applies to all topics, so the topics with smaller records could be affected. With a larger ProducerBatchMaxBytes, batches containing many small records can grow beyond the broker-side limit on those topics and be rejected.
We considered using MaxBufferedBytes, but that has two downsides: it applies to the total across all topics, and (more importantly) it puts a hard limit on the maximum record size, so it can't be used as a soft target at which to flush.
That would leave one solution (if we're constrained to a single Client): manually check BufferedProduceBytes and call Flush. But this has severe performance implications.
Is there anything we're missing? What would be the recommended solution in this case?
Should we use a Client per topic? How much overhead would this cause? Are connection pools reused across Clients?
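If a Client per topic turns out to be the answer, we imagine keeping the per-topic configuration in one place. A self-contained sketch (topic names and values hypothetical), where each entry would become the kgo.ProducerBatchMaxBytes option for that topic's dedicated client:

```go
package main

import "fmt"

// topicBatchMax maps topics to the batch-size cap their dedicated client
// would be built with (via kgo.ProducerBatchMaxBytes). Hypothetical values.
var topicBatchMax = map[string]int32{
	"large-records": 10 << 20, // topic with the raised broker-side limit
}

// defaultBatchMax applies to every other topic (broker default-sized).
const defaultBatchMax = 1 << 20

// batchMaxFor returns the cap to configure for a given topic's client.
func batchMaxFor(topic string) int32 {
	if v, ok := topicBatchMax[topic]; ok {
		return v
	}
	return defaultBatchMax
}

func main() {
	fmt.Println(batchMaxFor("large-records"), batchMaxFor("other"))
}
```

Even so, we'd like to understand the connection overhead before committing to this design.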
As an aside, it seems unfortunate that this setting has to be synchronized manually between broker and producer and can't be auto-detected.