|**brokers**|null|A list of strings representing the bootstrap brokers. **A function is no longer allowed as an argument for this.** |
| **ssl** | false | A boolean, set to true if ssl needs to be enabled. **Additional properties like CA, certificate, key, etc. need to be specified outside the kafkaJS block.** |
| **sasl** | - | An optional object of the form `{ mechanism: 'plain' or 'scram-sha-256' or 'scram-sha-512', username: string, password: string }`. **Additional authentication types are not yet supported.** |
| clientId | "rdkafka" | An optional string used to identify the client. |
| **connectionTimeout** | 1000 | This timeout is not enforced individually, but a sum of `connectionTimeout` and `authenticationTimeout` is enforced together. |
| **retry.restartOnFailure** | true | Consumer only. **Cannot be changed**. Consumer will always make an attempt to restart. |
| logLevel | `logLevel.INFO` | Decides the severity level of the logger created by the underlying library. A logger created with the `INFO` level will not be able to log `DEBUG` messages later. |
| outer config | {} | The configuration outside the kafkaJS block can contain any of the keys present in the [librdkafka CONFIGURATION table](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md). |
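The common configuration above can be sketched as a plain object. This is a minimal sketch, assuming placeholder broker addresses, credentials, and CA path: KafkaJS-style keys go inside the `kafkaJS` block, while librdkafka properties (such as `ssl.ca.location`) are set outside it, as the table describes.

```javascript
// Minimal common-configuration sketch (placeholder values throughout).
const config = {
  kafkaJS: {
    // Must be a list of strings; a function is no longer accepted.
    brokers: ['broker1:9092', 'broker2:9092'],
    // Boolean only; certificates etc. go outside the kafkaJS block.
    ssl: true,
    // Only plain, scram-sha-256 and scram-sha-512 are supported.
    sasl: { mechanism: 'plain', username: 'admin', password: 'secret' },
    clientId: 'my-app',
  },
  // A librdkafka key from the CONFIGURATION table, outside the kafkaJS block:
  'ssl.ca.location': '/path/to/ca.pem',
};
```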
|**createPartitioner**|DefaultPartitioner (murmur2_random) - Java client compatible | Custom partitioner support is not yet provided. The default partitioner's behaviour is retained, and a number of partitioners are provided via the `partitioner` property, which is specified outside the `kafkaJS` block. |
| **retry** | object | Identical to `retry` in the common configuration. This takes precedence over the common config retry. |
| metadataMaxAge | 5 minutes | Time in milliseconds after which to refresh metadata for known topics. |
| allowAutoTopicCreation | true | Determines if a topic should be created if it doesn't exist while producing. |
|**acks**|-1| The number of required acks before a Produce succeeds. **This is set on a per-producer level, not on a per `send` level**. -1 denotes it will wait for all brokers in the in-sync replica set. |
|**compression**|CompressionTypes.NONE| Compression codec for Produce messages. **This is set on a per-producer level, not on a per `send` level**. It must be a key of the object CompressionTypes, namely GZIP, SNAPPY, LZ4, ZSTD or NONE. |
|**timeout**|30000| The ack timeout of the producer request in milliseconds. This value is only enforced by the broker. **This is set on a per-producer level, not on a per `send` level**. |
| outer config | {} | The configuration outside the kafkaJS block can contain any of the keys present in the [librdkafka CONFIGURATION table](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md). |
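The producer-level settings above can be sketched as follows. This is a sketch under the per-producer semantics the table notes (acks, compression and timeout cannot be set per `send` call); the `partitioner` value is a librdkafka property outside the `kafkaJS` block, with `murmur2_random` as the Java-compatible default named above.

```javascript
// Producer configuration sketch; all values shown are illustrative.
const producerConfig = {
  kafkaJS: {
    allowAutoTopicCreation: true,
    acks: -1,            // -1 waits for the full in-sync replica set
    compression: 'GZIP', // a CompressionTypes key: GZIP, SNAPPY, LZ4, ZSTD or NONE
    timeout: 30000,      // ack timeout in ms, enforced only by the broker
  },
  // librdkafka partitioner property, outside the kafkaJS block:
  partitioner: 'murmur2_random',
};
```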
| **fromBeginning** | false | If there is no initial offset in the offset store, or the desired offset is out of range, and this is true, we consume the earliest possible offset. **This is set on a per-consumer level, not on a per `subscribe` level**. |
| **autoCommit** | true | Whether to periodically auto-commit offsets to the broker while consuming. **This is set on a per-consumer level, not on a per `run` level**. |
| **autoCommitInterval** | 5000 | Offsets are committed periodically at this interval, if autoCommit is true. **This is set on a per-consumer level, not on a per `run` level. The default value is changed to 5 seconds.** |
| outer config | {} | The configuration outside the kafkaJS block can contain any of the keys present in the [librdkafka CONFIGURATION table](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md). |
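The consumer-level settings above can likewise be sketched as a plain object. The `groupId` shown here is a placeholder not covered by the table; `fromBeginning` and the auto-commit options apply to the whole consumer, not to individual `subscribe` or `run` calls.

```javascript
// Consumer configuration sketch; groupId is a hypothetical placeholder.
const consumerConfig = {
  kafkaJS: {
    groupId: 'my-group',
    fromBeginning: true,      // per-consumer, not per subscribe()
    autoCommit: true,         // per-consumer, not per run()
    autoCommitInterval: 5000, // note: the default was changed to 5 seconds
  },
};
```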