
Commit 2b7b1bd

Update to librdkafka 2.3.0 (#1047)

1 parent: e7e4c6d

8 files changed (+950 -364 lines)


CONTRIBUTING.md (1 addition, 3 deletions)

@@ -215,12 +215,10 @@ Steps to update:
 ```
 Note: This is run automatically during CI flows, but it's good to run it during the version upgrade pull request.

-1. Run `npm install` to build with the new version and fix any build errors that occur.
+1. Run `npm install --lockfile-version 2` to build with the new version and fix any build errors that occur.

 1. Run unit tests: `npm run test`

-1. Run end to end tests: `npm run test:e2e`. This requires running kafka & zookeeper locally.
-
 1. Update the version numbers referenced in the [`README.md`](https://github.com/Blizzard/node-rdkafka/blob/master/README.md) file to the new version.

 ## Publishing new npm version

README.md (5 additions, 5 deletions)
@@ -17,7 +17,7 @@ I am looking for *your* help to make this project even better! If you're interes

 The `node-rdkafka` library is a high-performance NodeJS client for [Apache Kafka](http://kafka.apache.org/) that wraps the native [librdkafka](https://github.com/edenhill/librdkafka) library. All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library.

-__This library currently uses `librdkafka` version `2.2.0`.__
+__This library currently uses `librdkafka` version `2.3.0`.__

 ## Reference Docs

@@ -60,7 +60,7 @@ Using Alpine Linux? Check out the [docs](https://github.com/Blizzard/node-rdkafk

 ### Windows

-The Windows build **is not** compiled from `librdkafka` source; it is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary, which is downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.2.0.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
+The Windows build **is not** compiled from `librdkafka` source; it is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary, which is downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.3.0.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.

 Requirements:
 * [node-gyp for Windows](https://github.com/nodejs/node-gyp#on-windows)
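The install step above downloads `librdkafka.redist.<version>.nupkg` from the configured base URL. A minimal sketch of how that URL is assembled follows; the `nugetUrl` helper is hypothetical, and the real logic lives in the package's install scripts.

```javascript
// Hypothetical sketch of how the redist download URL is assembled during
// installation. NODE_RDKAFKA_NUGET_BASE_URL overrides the default CDN base.
function nugetUrl(version, baseUrl) {
  const base = baseUrl ||
    process.env.NODE_RDKAFKA_NUGET_BASE_URL ||
    'https://globalcdn.nuget.org/packages/';
  return `${base}librdkafka.redist.${version}.nupkg`;
}

// With no overrides, this yields the URL quoted above for 2.3.0:
console.log(nugetUrl('2.3.0'));
// => https://globalcdn.nuget.org/packages/librdkafka.redist.2.3.0.nupkg
```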
@@ -97,7 +97,7 @@ const Kafka = require('node-rdkafka');

 ## Configuration

-You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.2.0/CONFIGURATION.md)
+You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md)

 Configuration keys that have the suffix `_cb` are designated as callbacks. Some
 of these keys are informational and you can choose to opt-in (for example, `dr_cb`). Others are callbacks designed to
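As the paragraph above notes, callback keys such as `dr_cb` are opt-in. A small sketch of a producer configuration that opts in to delivery reports; the broker address is a placeholder, and the `node-rdkafka` calls themselves are shown only in comments since they require the native library.

```javascript
// Sketch of a producer configuration opting in to the `dr_cb` callback key.
// 'localhost:9092' is a placeholder broker address.
const producerConfig = {
  'metadata.broker.list': 'localhost:9092',
  'dr_cb': true, // opt in: the producer will emit 'delivery-report' events
};

// With node-rdkafka installed, this config would be used as:
// const Kafka = require('node-rdkafka');
// const producer = new Kafka.Producer(producerConfig);
// producer.on('delivery-report', (err, report) => console.log(report));
```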
@@ -132,7 +132,7 @@ You can also get the version of `librdkafka`
 const Kafka = require('node-rdkafka');
 console.log(Kafka.librdkafkaVersion);

-// #=> 2.2.0
+// #=> 2.3.0
 ```

 ## Sending Messages
@@ -145,7 +145,7 @@ const producer = new Kafka.Producer({
 });
 ```

-A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.2.0/CONFIGURATION.md) file described previously.
+A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md) file described previously.

 The following example illustrates a list with several `librdkafka` options set.

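The comma-separated broker list mentioned above can be built from an array of hosts; the host names in this sketch are placeholders.

```javascript
// Sketch: `metadata.broker.list` takes a comma-separated list of brokers.
// The host names below are placeholders.
const brokers = ['kafka1:9092', 'kafka2:9092', 'kafka3:9092'];

const config = {
  'metadata.broker.list': brokers.join(','),
};

console.log(config['metadata.broker.list']);
// => kafka1:9092,kafka2:9092,kafka3:9092
```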
config.d.ts (11 additions, 4 deletions)
@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 2.2.0 file CONFIGURATION.md ======
+// ====== Generated from librdkafka 2.3.0 file CONFIGURATION.md ======
 // Code that generated this is a derivative work of the code from Nam Nguyen
 // https://gist.github.com/ntgn81/066c2c8ec5b4238f85d1e9168a04e3fb

@@ -77,9 +77,9 @@ export interface GlobalConfig {
   "metadata.max.age.ms"?: number;

   /**
-   * When a topic loses its leader a new metadata request will be enqueued with this initial interval, exponentially increasing until the topic metadata has been refreshed. This is used to recover quickly from transitioning leader brokers.
+   * When a topic loses its leader, a new metadata request will be enqueued immediately, and then with this initial interval, exponentially increasing up to `retry.backoff.max.ms`, until the topic metadata has been refreshed. If not set explicitly, it defaults to `retry.backoff.ms`. This is used to recover quickly from transitioning leader brokers.
    *
-   * @default 250
+   * @default 100
    */
   "topic.metadata.refresh.fast.interval.ms"?: number;

@@ -704,12 +704,19 @@ export interface ProducerGlobalConfig extends GlobalConfig {
   "retries"?: number;

   /**
-   * The backoff time in milliseconds before retrying a protocol request.
+   * The backoff time in milliseconds before retrying a protocol request. This is the first backoff time, which will be backed off exponentially until the number of retries is exhausted; it is capped by `retry.backoff.max.ms`.
    *
    * @default 100
    */
   "retry.backoff.ms"?: number;

+  /**
+   * The maximum backoff time in milliseconds before retrying a protocol request; this is the most backoff allowed for exponentially backed-off requests.
+   *
+   * @default 1000
+   */
+  "retry.backoff.max.ms"?: number;
+
   /**
    * The threshold of outstanding not yet transmitted broker requests needed to backpressure the producer's message accumulator. If the number of not yet transmitted requests equals or exceeds this number, produce request creation that would have otherwise been triggered (for example, in accordance with linger.ms) will be delayed. A lower number yields larger and more effective batches. A higher value can improve latency when using compression on slow machines.
    *
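The new `retry.backoff.max.ms` key caps the exponential growth that `retry.backoff.ms` starts. Ignoring the jitter librdkafka applies, the resulting delay schedule can be sketched as:

```javascript
// Sketch of the exponential retry backoff described above: delays start at
// `retry.backoff.ms` and double until capped by `retry.backoff.max.ms`.
// Real librdkafka also applies jitter, which this sketch ignores.
function backoffSchedule(retries, backoffMs = 100, backoffMaxMs = 1000) {
  const delays = [];
  let delay = backoffMs;
  for (let i = 0; i < retries; i++) {
    delays.push(Math.min(delay, backoffMaxMs));
    delay *= 2; // exponential growth between attempts
  }
  return delays;
}

// With the defaults shown in the diff (100 ms initial, 1000 ms cap):
console.log(backoffSchedule(6));
// => [ 100, 200, 400, 800, 1000, 1000 ]
```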

deps/librdkafka

errors.d.ts (1 addition, 3 deletions)
@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 2.2.0 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 2.3.0 file src-cpp/rdkafkacpp.h ======
 export const CODES: { ERRORS: {
   /* Internal errors to rdkafka: */
   /** Begin internal error codes (**-200**) */
@@ -128,10 +128,8 @@ export const CODES: { ERRORS: {
   ERR__AUTO_OFFSET_RESET: number,
   /** Partition log truncation detected (**-139**) */
   ERR__LOG_TRUNCATION: number,
-
   /** End internal error codes (**-100**) */
   ERR__END: number,
-
   /* Kafka broker errors: */
   /** Unknown broker error (**-1**) */
   ERR_UNKNOWN: number,

lib/error.js (1 addition, 3 deletions)
@@ -27,7 +27,7 @@ LibrdKafkaError.wrap = errorWrap;
  * @enum {number}
  * @constant
  */
-// ====== Generated from librdkafka 2.2.0 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 2.3.0 file src-cpp/rdkafkacpp.h ======
 LibrdKafkaError.codes = {

   /* Internal errors to rdkafka: */
@@ -158,10 +158,8 @@ LibrdKafkaError.codes = {
   ERR__AUTO_OFFSET_RESET: -140,
   /** Partition log truncation detected */
   ERR__LOG_TRUNCATION: -139,
-
   /** End internal error codes */
   ERR__END: -100,
-
   /* Kafka broker errors: */
   /** Unknown broker error */
   ERR_UNKNOWN: -1,
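For illustration, a reverse lookup from a numeric code back to its name, using a few of the `LibrdKafkaError.codes` values shown above; the `errorName` helper is hypothetical and not part of the library.

```javascript
// Sketch: reverse lookup of an error name from its numeric code, using a
// subset of the `LibrdKafkaError.codes` values shown in the diff above.
const codes = {
  ERR__LOG_TRUNCATION: -139,
  ERR__END: -100,
  ERR_UNKNOWN: -1,
};

// Hypothetical helper: returns the name for a code, or undefined.
function errorName(code) {
  const entry = Object.entries(codes).find(([, value]) => value === code);
  return entry ? entry[0] : undefined;
}

console.log(errorName(-139));
// => ERR__LOG_TRUNCATION
```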

0 commit comments
