### Producer

* `sendBatch` is not yet supported. However, the actual batching semantics are handled by librdkafka.
* Changes to `send`:
  * `acks`, `compression` and `timeout` are not set on a per-send basis. Instead, they must be set in the producer configuration.

    Before:
    ```javascript
    const kafka = new Kafka({/* ... */});
    /* ... */
    });
    ```
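
    As a hedged sketch of the general pattern, the former per-send options become producer-level configuration. The `rdKafka.topicConfig` passthrough mirrors the consumer examples later in this guide; the exact librdkafka property names used here (`acks`, `compression.codec`, `message.timeout.ms`) are illustrative assumptions, not confirmed by this document:
    ```javascript
    const kafka = new Kafka({ /* ... */ });
    // Hypothetical mapping of the former per-send options onto
    // librdkafka topic-level configuration properties.
    const producer = kafka.producer({
      rdKafka: {
        topicConfig: {
          'acks': '1',                   // formerly send({ acks: 1, ... })
          'compression.codec': 'gzip',   // formerly send({ compression, ... })
          'message.timeout.ms': '30000', // formerly send({ timeout, ... })
        },
      },
    });
    await producer.connect();
    await producer.send({ topic: 'topic', messages: [{ value: 'message' }] });
    ```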

* Error handling for a failed `send` is stricter. When sending multiple messages, the method throws an error even if only one of the messages fails.
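
  Concretely, this means a multi-message `send` should be wrapped in error handling; a minimal sketch (the try/catch pattern here is illustrative, not a prescribed API):
  ```javascript
  try {
    // If delivery of any single message fails, the whole call rejects.
    await producer.send({
      topic: 'topic',
      messages: [{ value: 'message1' }, { value: 'message2' }],
    });
  } catch (err) {
    // Thrown even if only one of the messages failed.
    console.error('send failed:', err);
  }
  ```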

### Consumer

* While passing a list of topics to `subscribe`, the `fromBeginning` property is not supported. Instead, the topic configuration property `auto.offset.reset` must be used.

  Before:
  ```javascript
  const kafka = new Kafka({ /* ... */ });
  const consumer = kafka.consumer({
    groupId: 'test-group',
  });
  await consumer.connect();
  await consumer.subscribe({ topics: ["topic"], fromBeginning: true });
  ```

  After:
  ```javascript
  const kafka = new Kafka({ /* ... */ });
  const consumer = kafka.consumer({
    groupId: 'test-group',
    rdKafka: {
      topicConfig: {
        'auto.offset.reset': 'earliest',
      },
    },
  });
  await consumer.connect();
  await consumer.subscribe({ topics: ["topic"] });
  ```

* For auto-committing using a consumer, the properties on `run` are no longer used. Instead, the corresponding `rdKafka` properties must be set:
  * `autoCommit` corresponds to `enable.auto.commit`.
  * `autoCommitInterval` corresponds to `auto.commit.interval.ms`.
  * `autoCommitThreshold` is no longer supported.

  Before:
  ```javascript
  const kafka = new Kafka({ /* ... */ });
  const consumer = kafka.consumer({ /* ... */ });
  await consumer.connect();
  await consumer.subscribe({ topics: ["topic"] });
  consumer.run({
    eachMessage: someFunc,
    autoCommit: true,
    autoCommitInterval: 5000,
  });
  ```

  After:
  ```javascript
  const kafka = new Kafka({ /* ... */ });
  const consumer = kafka.consumer({
    /* ... */
    rdKafka: {
      globalConfig: {
        "enable.auto.commit": "true",
        "auto.commit.interval.ms": "5000",
      },
    },
  });
  await consumer.connect();
  await consumer.subscribe({ topics: ["topic"] });
  consumer.run({
    eachMessage: someFunc,
  });
  ```

* For the `eachMessage` callback passed to `run`:
  * `heartbeat()` no longer needs to be called; heartbeats are managed automatically by librdkafka.
  * The `partitionsConsumedConcurrently` property is not yet supported.
* The `eachBatch` method is not supported.
* `commitOffsets` does not yet support sending metadata for the topic partitions being committed.
* `paused()` is not yet supported.
* Custom partition assignors are not supported.

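A minimal `eachMessage` sketch reflecting the points above (the payload shape `{ topic, partition, message }` is the standard KafkaJS-style signature):

```javascript
consumer.run({
  // No heartbeat() calls are needed; librdkafka manages heartbeats internally.
  eachMessage: async ({ topic, partition, message }) => {
    console.log(`${topic}[${partition}]: ${message.value.toString()}`);
  },
});
```
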
## node-rdkafka