
Commit e7e4c6d

Update to librdkafka 2.2.0 (#1033)
1 parent a8c288d

File tree

8 files changed

+28
-34
lines changed


README.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -17,7 +17,7 @@ I am looking for *your* help to make this project even better! If you're interes
 
 The `node-rdkafka` library is a high-performance NodeJS client for [Apache Kafka](http://kafka.apache.org/) that wraps the native [librdkafka](https://github.com/edenhill/librdkafka) library. All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library.
 
-__This library currently uses `librdkafka` version `2.1.1`.__
+__This library currently uses `librdkafka` version `2.2.0`.__
 
 ## Reference Docs
 
```

```diff
@@ -60,7 +60,7 @@ Using Alpine Linux? Check out the [docs](https://github.com/Blizzard/node-rdkafk
 
 ### Windows
 
-Windows build **is not** compiled from `librdkafka` source but it is rather linked against the appropriate version of [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.1.1.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL` that defaults to `https://globalcdn.nuget.org/packages/` when it's no set.
+Windows build **is not** compiled from `librdkafka` source but it is rather linked against the appropriate version of [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.2.0.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL` that defaults to `https://globalcdn.nuget.org/packages/` when it's no set.
 
 Requirements:
 * [node-gyp for Windows](https://github.com/nodejs/node-gyp#on-windows)
```
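The `NODE_RDKAFKA_NUGET_BASE_URL` override described in the hunk above can be exercised like this; the mirror URL is a hypothetical placeholder, not from the diff:

```shell
# Hypothetical internal NuGet mirror; the variable defaults to
# https://globalcdn.nuget.org/packages/ when unset.
export NODE_RDKAFKA_NUGET_BASE_URL="https://nuget.example.internal/packages/"

# On a Windows install, node-rdkafka would now fetch
# librdkafka.redist.2.2.0.nupkg from the mirror:
# npm install node-rdkafka
```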
```diff
@@ -97,7 +97,7 @@ const Kafka = require('node-rdkafka');
 
 ## Configuration
 
-You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.1.1/CONFIGURATION.md)
+You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.2.0/CONFIGURATION.md)
 
 Configuration keys that have the suffix `_cb` are designated as callbacks. Some
 of these keys are informational and you can choose to opt-in (for example, `dr_cb`). Others are callbacks designed to
```
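As a sketch of the opt-in callback pattern mentioned in the context lines above (the broker address is a placeholder, not part of the commit):

```javascript
// Setting a `_cb` key to true opts in to the corresponding event.
// Here 'dr_cb' enables per-message delivery reports on the producer.
const producerConfig = {
  'metadata.broker.list': 'localhost:9092', // placeholder broker
  'dr_cb': true,
};

// Usage sketch: new Kafka.Producer(producerConfig) would then emit
// 'delivery-report' events for each produced message.
```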
````diff
@@ -132,7 +132,7 @@ You can also get the version of `librdkafka`
 const Kafka = require('node-rdkafka');
 console.log(Kafka.librdkafkaVersion);
 
-// #=> 2.1.1
+// #=> 2.2.0
 ```
 
 ## Sending Messages
````
````diff
@@ -145,7 +145,7 @@ const producer = new Kafka.Producer({
 });
 ```
 
-A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.1.1/CONFIGURATION.md) file described previously.
+A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.2.0/CONFIGURATION.md) file described previously.
 
 The following example illustrates a list with several `librdkafka` options set.
 
````
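The example referenced at the end of the hunk falls outside this diff; a sketch of such a configuration might look like the following, with every value illustrative rather than taken from the commit:

```javascript
// Illustrative producer configuration with several librdkafka options set;
// broker hosts and all values here are placeholders.
const producerConfig = {
  'metadata.broker.list': 'kafka-host1:9092,kafka-host2:9092', // comma-separated brokers
  'compression.codec': 'gzip',            // compress message sets
  'retry.backoff.ms': 200,                // wait between send retries
  'message.send.max.retries': 10,
  'socket.keepalive.enable': true,
  'queue.buffering.max.messages': 100000, // cap the local send queue
  'batch.num.messages': 1000000,
  'dr_cb': true,                          // opt in to delivery reports
};
```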

config.d.ts

Lines changed: 15 additions & 1 deletion
```diff
@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 2.1.1 file CONFIGURATION.md ======
+// ====== Generated from librdkafka 2.2.0 file CONFIGURATION.md ======
 // Code that generated this is a derivative work of the code from Nam Nguyen
 // https://gist.github.com/ntgn81/066c2c8ec5b4238f85d1e9168a04e3fb
 
```
```diff
@@ -619,6 +619,13 @@ export interface GlobalConfig {
   */
   "client.rack"?: string;
 
+  /**
+   * Controls how the client uses DNS lookups. By default, when the lookup returns multiple IP addresses for a hostname, they will all be attempted for connection before the connection is considered failed. This applies to both bootstrap and advertised servers. If the value is set to `resolve_canonical_bootstrap_servers_only`, each entry will be resolved and expanded into a list of canonical names. NOTE: Default here is different from the Java client's default behavior, which connects only to the first IP address returned for a hostname.
+   *
+   * @default use_all_dns_ips
+   */
+  "client.dns.lookup"?: 'use_all_dns_ips' | 'resolve_canonical_bootstrap_servers_only';
+
   /**
    * Enables or disables `event.*` emitting.
    *
```
```diff
@@ -858,6 +865,13 @@ export interface ConsumerGlobalConfig extends GlobalConfig {
   */
   "fetch.wait.max.ms"?: number;
 
+  /**
+   * How long to postpone the next fetch request for a topic+partition in case the current fetch queue thresholds (queued.min.messages or queued.max.messages.kbytes) have been exceded. This property may need to be decreased if the queue thresholds are set low and the application is experiencing long (~1s) delays between messages. Low values may increase CPU utilization.
+   *
+   * @default 1000
+   */
+  "fetch.queue.backoff.ms"?: number;
+
   /**
    * Initial maximum number of bytes per topic+partition to request when fetching messages from the broker. If the client encounters a message larger than this value it will gradually try to increase it until the entire message can be fetched.
    *
```
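The two keys added in the hunks above could be combined in a consumer configuration like this sketch; the broker, group id, and chosen values are illustrative placeholders, not from the commit:

```javascript
// Consumer configuration exercising the two options new in librdkafka 2.2.0.
const consumerConfig = {
  'metadata.broker.list': 'localhost:9092', // placeholder broker
  'group.id': 'example-group',              // placeholder group
  // Default value; every resolved IP is tried before the connection fails.
  'client.dns.lookup': 'use_all_dns_ips',
  // Lower than the 1000 ms default to reduce inter-message delays when the
  // fetch queue thresholds are set low (at the cost of extra CPU).
  'fetch.queue.backoff.ms': 100,
};
```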

deps/librdkafka

errors.d.ts

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 2.1.1 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 2.2.0 file src-cpp/rdkafkacpp.h ======
 export const CODES: { ERRORS: {
   /* Internal errors to rdkafka: */
   /** Begin internal error codes (**-200**) */
```

lib/error.js

Lines changed: 1 addition & 1 deletion
```diff
@@ -27,7 +27,7 @@ LibrdKafkaError.wrap = errorWrap;
  * @enum {number}
  * @constant
  */
-// ====== Generated from librdkafka 2.1.1 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 2.2.0 file src-cpp/rdkafkacpp.h ======
 LibrdKafkaError.codes = {
 
   /* Internal errors to rdkafka: */
```

package-lock.json

Lines changed: 2 additions & 2 deletions
(Generated file; diff not rendered.)

package.json

Lines changed: 3 additions & 3 deletions
```diff
@@ -1,8 +1,8 @@
 {
   "name": "node-rdkafka",
-  "version": "v2.16.1",
+  "version": "v2.17.0",
   "description": "Node.js bindings for librdkafka",
-  "librdkafka": "2.1.1",
+  "librdkafka": "2.2.0",
   "main": "lib/index.js",
   "scripts": {
     "configure": "node-gyp configure",
@@ -45,4 +45,4 @@
   "engines": {
     "node": ">=6.0.0"
   }
-}
+}
```

src/binding.cc

Lines changed: 0 additions & 20 deletions
```diff
@@ -15,21 +15,8 @@ using NodeKafka::KafkaConsumer;
 using NodeKafka::AdminClient;
 using NodeKafka::Topic;
 
-using node::AtExit;
 using RdKafka::ErrorCode;
 
-static void RdKafkaCleanup(void*) {  // NOLINT
-  /*
-   * Wait for RdKafka to decommission.
-   * This is not strictly needed but
-   * allows RdKafka to clean up all its resources before the application
-   * exits so that memory profilers such as valgrind wont complain about
-   * memory leaks.
-   */
-
-  RdKafka::wait_destroyed(5000);
-}
-
 NAN_METHOD(NodeRdKafkaErr2Str) {
   int points = Nan::To<int>(info[0]).FromJust();
   // Cast to error code
```
```diff
@@ -74,13 +61,6 @@ void ConstantsInit(v8::Local<v8::Object> exports) {
 }
 
 void Init(v8::Local<v8::Object> exports, v8::Local<v8::Value> m_, void* v_) {
-#if NODE_MAJOR_VERSION <= 9 || (NODE_MAJOR_VERSION == 10 && NODE_MINOR_VERSION <= 15)
-  AtExit(RdKafkaCleanup);
-#else
-  v8::Local<v8::Context> context = Nan::GetCurrentContext();
-  node::Environment* env = node::GetCurrentEnvironment(context);
-  AtExit(env, RdKafkaCleanup, NULL);
-#endif
   KafkaConsumer::Init(exports);
   Producer::Init(exports);
   AdminClient::Init(exports);
```
