|
4 | 4 |
|
5 | 5 | ### Common
|
6 | 6 |
|
7 |
| -* Configuration changes |
| 7 | +#### Configuration changes |
| 8 | + ```javascript |
| 9 | + const kafka = new Kafka({/* common configuration changes */}); |
| 10 | + ``` |
| 11 | + There are several changes in the common configuration. Each config property is discussed below. |
| 12 | + Properties that require a change are highlighted. |
8 | 13 |
|
9 |
| -* Error Handling: Some possible subtypes of `KafkaJSError` have been removed, |
| 14 | + * An `rdKafka` block can be added to the config. It allows directly setting librdkafka properties. |
| 15 | + If you are creating a new configuration from scratch, it is best to specify properties using |
| 16 | + the `rdKafka` block. [Complete list of properties here](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md). |
| 17 | + |
| 18 | + Example: |
| 19 | + ```javascript |
| 20 | + const kafka = new Kafka({ |
| 21 | + rdKafka: { |
| 22 | + globalConfig: { /* properties mentioned within the 'global config' section of the list */ }, |
| 23 | + topicConfig: { /* properties mentioned within the 'topic config' section of the list */ } |
| 24 | + }, |
| 25 | + /* ... */ |
| 26 | + }); |
| 27 | + ``` |
| 28 | + * **`brokers`**: a list of strings representing the bootstrap brokers. |
| 29 | + A function is no longer allowed as an argument for this. |
| 30 | + * **`ssl`**: boolean, set to true if SSL needs to be enabled. |
| 31 | + In case additional properties, like CA, Certificate, Key etc. need to be added, use the `rdKafka` block. |
| 32 | + * **`sasl`**: omit if the brokers need no authentication; otherwise, an object of one of the following forms: |
| 33 | + - For SASL PLAIN or SASL SCRAM : `{ mechanism: 'plain'|'scram-sha-256'|'scram-sha-512', username: string, password: string }` |
| 34 | + - For SASL OAUTHBEARER: not supported yet. |
| 35 | + - For AWS IAM or custom mechanisms: not supported, and support is not planned. |
| 36 | + - For GSSAPI/Kerberos: use the `rdKafka` configuration. |
| 37 | + * `clientId`: string for identifying this client. |
| 38 | + * **`connectionTimeout`** and **`authenticationTimeout`**: |
| 39 | + These timeouts (specified in milliseconds) are not enforced individually. Instead, the sum of these values is |
| 40 | + enforced. The default value of the sum is 30000. It corresponds to librdkafka's `socket.connection.setup.timeout.ms`. |
| 41 | + * **`reauthenticationThreshold`**: no longer checked, librdkafka handles reauthentication on its own. |
| 42 | + * **`requestTimeout`**: number of milliseconds for a network request to timeout. The default value has been changed to 60000. It now corresponds to librdkafka's `socket.timeout.ms`. |
| 43 | + * **`enforceRequestTimeout`**: if this is set to false, `requestTimeout` is set to 5 minutes. The timeout cannot be disabled completely. |
| 44 | + * **`retry`** is partially supported. It must be an object, with the following (optional) properties |
| 45 | + - `maxRetryTime`: maximum time to backoff a retry, in milliseconds. Corresponds to librdkafka's `retry.backoff.max.ms`. The default is 1000. |
| 46 | + - `initialRetryTime`: minimum time to backoff a retry, in milliseconds. Corresponds to librdkafka's `retry.backoff.ms`. The default is 100. |
| 47 | + - `retries`: maximum number of retries, *only* applicable to Produce messages. However, it's recommended to keep this unset. |
| 48 | + Librdkafka handles the number of retries, and rather than capping the number of retries, caps the total time spent |
| 49 | + while sending the message, controlled by `message.timeout.ms`. |
| 50 | + - `factor` and `multiplier` cannot be changed from their defaults of 0.2 and 2. |
| 51 | + * **`restartOnFailure`**: this cannot be changed, and will always be true (the consumer recovers from errors on its own). |
| 52 | + * `logLevel` is mapped to the syslog(3) levels supported by librdkafka. `LOG_NOTHING` is not YET supported, as some panic situations are still logged. |
| 53 | + * **`socketFactory`** is no longer supported. |
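Taken together, the properties above can be combined into one common configuration object. The following is only a minimal sketch: the broker addresses, credentials, and certificate path are placeholders, and the `rdKafka` entry is just one example of a librdkafka property.

```javascript
// Hypothetical common configuration combining the properties discussed above.
// All concrete values (brokers, credentials, file paths) are placeholders.
const commonConfig = {
  brokers: ['broker1:9092', 'broker2:9092'], // must be a list of strings, not a function
  ssl: true,
  sasl: { mechanism: 'scram-sha-256', username: 'user', password: 'pass' },
  connectionTimeout: 20000,      // these two are summed and enforced together,
  authenticationTimeout: 10000,  // corresponding to socket.connection.setup.timeout.ms (default sum: 30000)
  requestTimeout: 60000,         // corresponds to socket.timeout.ms
  retry: {
    maxRetryTime: 1000,          // retry.backoff.max.ms
    initialRetryTime: 100,       // retry.backoff.ms
  },
  rdKafka: {
    globalConfig: {
      'ssl.ca.location': '/path/to/ca.pem', // additional SSL properties go here
    },
  },
};
// Usage (requires the client library): const kafka = new Kafka(commonConfig);
```

Note that the `connectionTimeout` and `authenticationTimeout` values above sum to the default 30000.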
| 54 | + |
| 55 | +#### Error Handling |
| 56 | + |
| 57 | + Some possible subtypes of `KafkaJSError` have been removed, |
10 | 58 | and additional information has been added into `KafkaJSError`.
|
11 |
| - Internally, fields have been added denoting if the error is fatal, retriable, or abortable (the latter two only relevant for a |
12 |
| - transactional producer). |
13 |
| - Some error-specific fields have also been removed. An exhaustive list is at the bottom of this section. |
| 59 | + Fields have been added denoting if the error is fatal, retriable, or abortable (the latter two only relevant for a transactional producer). |
| 60 | + Some error-specific fields have also been removed. |
| 61 | + |
| 62 | + An exhaustive list of changes is at the bottom of this section. |
14 | 63 |
|
15 |
| - For compability, as many error types as possible have been retained, but it is |
| 64 | + For compatibility, as many error types as possible have been retained, but it is |
16 | 65 | better to switch to checking the `error.code`.
|
17 | 66 |
|
18 | 67 | **Action**: Convert any checks based on `instanceof` and `error.name` to error
|
19 | 68 | checks based on `error.code` or `error.type`.
|
20 | 69 |
|
21 |
| - **Example:**: |
| 70 | + **Example:** |
22 | 71 | ```javascript
|
23 | 72 | try {
|
24 | 73 | await producer.send(/* args */);
|
|
61 | 110 |
|
62 | 111 | ### Producer
|
63 | 112 |
|
| 113 | +#### Configuration changes |
| 114 | + |
| 115 | + ```javascript |
| 116 | + const producer = kafka.producer({ /* producer-specific configuration changes. */}); |
| 117 | + ``` |
| 118 | + |
| 119 | + There are several changes in the producer configuration. Each config property is discussed below. |
| 120 | + Properties that require a change are highlighted. |
| 121 | + |
| 122 | + * **`createPartitioner`**: this is not supported (YET). For behaviour identical to the Java client (the DefaultPartitioner), |
| 123 | + use the `rdKafka` block, and set the property `partitioner` to `murmur2_random`. This is critical |
| 124 | + when planning to produce to topics where messages with certain keys have been produced already. |
| 125 | + * **`retry`**: See the section for retry above. The producer config `retry` takes precedence over the common config `retry`. |
| 126 | + * `metadataMaxAge`: Time in milliseconds after which to refresh metadata for known topics. The default value remains 5min. This |
| 127 | + corresponds to the librdkafka property `topic.metadata.refresh.interval.ms` (and not `metadata.max.age.ms`). |
| 128 | + * `allowAutoTopicCreation`: determines if a topic should be created if it doesn't exist while producing. True by default. |
| 129 | + * `transactionTimeout`: The maximum amount of time in milliseconds that the transaction coordinator will wait for a transaction |
| 130 | + status update from the producer before proactively aborting the ongoing transaction. The default value remains 60000. |
| 131 | + Only applicable when `transactionalId` is set. |
| 132 | + * `idempotent`: if set to true, ensures that messages are delivered exactly once and in order. False by default. |
| 133 | + If this is set to true, certain constraints must be respected for other properties: `maxInFlightRequests <= 5` and `retry.retries >= 0`. |
| 134 | + * **`maxInFlightRequests`**: Maximum number of in-flight requests *per broker connection*. If not set, a very high limit is used. |
| 135 | + * `transactionalId`: if set, turns this into a transactional producer with this identifier. This also automatically sets `idempotent` to true. |
| 136 | + * An `rdKafka` block can be added to the config. It allows directly setting librdkafka properties. |
| 137 | + If you are creating a new configuration from scratch, it is best to specify properties using |
| 138 | + the `rdKafka` block. [Complete list of properties here](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md). |
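As an illustration of the properties above, a transactional producer configuration might look like the following sketch. The transactional id is a placeholder, and the `murmur2_random` partitioner is only needed if Java-client-compatible partitioning is required.

```javascript
// Hypothetical producer configuration based on the properties discussed above.
const producerConfig = {
  transactionalId: 'my-transactional-id', // makes the producer transactional; implies idempotent: true
  maxInFlightRequests: 5,                 // must be <= 5 when idempotence is enabled
  allowAutoTopicCreation: true,           // the default for producers
  transactionTimeout: 60000,              // the default
  rdKafka: {
    topicConfig: {
      partitioner: 'murmur2_random',      // Java-client-compatible (DefaultPartitioner) behaviour
    },
  },
};
// Usage (requires a configured Kafka instance): const producer = kafka.producer(producerConfig);
```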
| 139 | + |
| 140 | +#### Semantic and Per-Method Changes |
| 141 | + |
64 | 142 | * `sendBatch` is not supported (YET). However, the actual batching semantics are handled by librdkafka.
|
65 | 143 | * Changes to `send`:
|
66 | 144 | * `acks`, `compression` and `timeout` are not set on a per-send basis. Rather, they must be configured in the configuration.
|
|
103 | 181 |
|
104 | 182 | ### Consumer
|
105 | 183 |
|
| 184 | +#### Configuration changes |
| 185 | + |
| 186 | + ```javascript |
| 187 | + const consumer = kafka.consumer({ /* consumer-specific configuration changes. */}); |
| 188 | + ``` |
| 189 | + There are several changes in the consumer configuration. Each config property is discussed below. |
| 190 | + Properties that require a change are highlighted. The change may involve |
| 191 | + default values, added or missing features, or altered semantics. |
| 192 | + |
| 193 | + * **`partitionAssigners`**: The **default value** is changed to `[PartitionAssigners.range, PartitionAssigners.roundRobin]`. Support for the range, roundRobin and cooperativeSticky |
| 194 | + partition assignors is provided. The cooperative assignor cannot be used along with the other two, and there |
| 195 | + is no support for custom assignors. The aliases `partitionAssignors` and `PartitionAssignors` are also |
| 196 | + available, to match the Java client's terminology. |
| 197 | + * **`sessionTimeout`**: If no heartbeats are received by the broker for a group member within the session timeout, the broker will remove the consumer from |
| 198 | + the group and trigger a rebalance. The **default value** is changed to 45000. |
| 199 | + * **`rebalanceTimeout`**: The maximum allowed time for each member to join the group once a rebalance has begun. The **default value** is changed to 300000. |
| 200 | + Note before changing: this value *also* sets the max poll interval, so message processing in `eachMessage` must not take longer than this time. |
| 201 | + * `heartbeatInterval`: The expected time in milliseconds between heartbeats to the consumer coordinator. The default value remains 3000. |
| 202 | + * `metadataMaxAge`: Time in milliseconds after which to refresh metadata for known topics. The default value remains 5min. This |
| 203 | + corresponds to the librdkafka property `topic.metadata.refresh.interval.ms` (and not `metadata.max.age.ms`). |
| 204 | + * **`allowAutoTopicCreation`**: determines if a topic should be created if it doesn't exist while consuming. The **default value** is changed to false. |
| 205 | + * **`maxBytesPerPartition`**: determines how many bytes can be fetched in one request from a single partition. The default value remains 1048576. |
| 206 | + There is a slight change in semantics: this size grows dynamically if a single message larger than this is encountered, |
| 207 | + so the client does not get stuck. |
| 208 | + * `minBytes`: Minimum number of bytes the broker responds with (or waits until `maxWaitTimeInMs` expires). The default remains 1. |
| 209 | + * **`maxBytes`**: Maximum number of bytes the broker responds with. The **default value** is changed to 52428800 (50MB). |
| 210 | + * **`maxWaitTimeInMs`**: Maximum time in milliseconds the broker waits for the `minBytes` to be fulfilled. The **default value** is changed to 500. |
| 211 | + * **`retry`**: See the section for retry above. The consumer config `retry` takes precedence over the common config `retry`. |
| 212 | + * `readUncommitted`: if true, consumer will read transactional messages which have not been committed. The default value remains false. |
| 213 | + * **`maxInFlightRequests`**: Maximum number of in-flight requests *per broker connection*. If not set, a very high limit is used. |
| 214 | + * `rackId`: Can be set to an arbitrary string which will be used for fetch-from-follower if set up on the cluster. |
| 215 | + * An `rdKafka` block can be added to the config. It allows directly setting librdkafka properties. |
| 216 | + If you are creating a new configuration from scratch, it is best to specify properties using |
| 217 | + the `rdKafka` block. [Complete list of properties here](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md). |
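Putting the consumer properties above together, a configuration might look like the sketch below. The `groupId` property is assumed to still be required as before (it is not among the changed properties); its value and the rack name are placeholders.

```javascript
// Hypothetical consumer configuration based on the properties discussed above.
// groupId is assumed unchanged from before; its value here is a placeholder.
const consumerConfig = {
  groupId: 'my-group',
  sessionTimeout: 45000,         // new default
  rebalanceTimeout: 300000,      // new default; also sets the max poll interval
  maxWaitTimeInMs: 500,          // new default
  maxBytes: 52428800,            // new default (50MB)
  allowAutoTopicCreation: false, // new default
  rackId: 'rack-1',              // enables fetch-from-follower if set up on the cluster
  rdKafka: {
    globalConfig: { /* librdkafka properties, if needed */ },
  },
};
// Usage (requires a configured Kafka instance): const consumer = kafka.consumer(consumerConfig);
```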
| 218 | + |
| 219 | +#### Semantic and Per-Method Changes |
| 220 | + |
| 221 | + |
106 | 222 | * While passing a list of topics to `subscribe`, the `fromBeginning` property is not supported. Instead, the property `auto.offset.reset` needs to be used.
|
107 | 223 | Before:
|
108 | 224 | ```javascript
|
|
129 | 245 | await consumer.subscribe({ topics: ["topic"] });
|
130 | 246 | ```
|
131 | 247 |
|
132 |
| - * For auto-commiting using a consumer, the properties on `run` are no longer used. Instead, corresponding rdKafka properties must be set. |
| 248 | + * For auto-committing using a consumer, the properties on `run` are no longer used. Instead, corresponding rdKafka properties must be set. |
133 | 249 | * `autoCommit` corresponds to `enable.auto.commit`.
|
134 | 250 | * `autoCommitInterval` corresponds to `auto.commit.interval.ms`.
|
135 | 251 | * `autoCommitThreshold` is no longer supported.
|
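Under these mappings, auto-commit settings move from `run()` into the configuration's `rdKafka` block. A minimal sketch, assuming these librdkafka properties belong in `globalConfig` and using a placeholder `groupId`:

```javascript
// Sketch: configuring auto-commit via librdkafka properties instead of run() options.
// The groupId value is a placeholder.
const consumerConfig = {
  groupId: 'my-group',
  rdKafka: {
    globalConfig: {
      'enable.auto.commit': true,      // replaces autoCommit on run()
      'auto.commit.interval.ms': 5000, // replaces autoCommitInterval on run()
    },
  },
};
```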
|
170 | 286 | * The `heartbeat()` no longer needs to be called. Heartbeats are automatically managed by librdkafka.
|
171 | 287 | * The `partitionsConsumedConcurrently` property is not supported (YET).
|
172 | 288 | * The `eachBatch` method is not supported.
|
173 |
| - * `commitOffsets` does not (YET) support sending metadata for topic partitions being commited. |
| 289 | + * `commitOffsets` does not (YET) support sending metadata for topic partitions being committed. |
174 | 290 | * `paused()` is not (YET) supported.
|
175 | 291 | * Custom partition assignors are not supported.
|
176 | 292 |
|
177 |
| - |
178 | 293 | ## node-rdkafka
|