Commit 020db59

Update to librdkafka 2.5.0 (#1086)
1 parent f594d11 commit 020db59

7 files changed: +53 −30 lines changed

README.md

Lines changed: 5 additions & 5 deletions

@@ -17,7 +17,7 @@ I am looking for *your* help to make this project even better! If you're interes

 The `node-rdkafka` library is a high-performance NodeJS client for [Apache Kafka](http://kafka.apache.org/) that wraps the native [librdkafka](https://github.com/edenhill/librdkafka) library. All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library.

-__This library currently uses `librdkafka` version `2.3.0`.__
+__This library currently uses `librdkafka` version `2.5.0`.__

 ## Reference Docs

@@ -60,7 +60,7 @@ Using Alpine Linux? Check out the [docs](https://github.com/Blizzard/node-rdkafk

 ### Windows

-Windows build **is not** compiled from `librdkafka` source but it is rather linked against the appropriate version of [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.3.0.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL` that defaults to `https://globalcdn.nuget.org/packages/` when it's no set.
+Windows build **is not** compiled from `librdkafka` source but it is rather linked against the appropriate version of [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.5.0.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL` that defaults to `https://globalcdn.nuget.org/packages/` when it's no set.

 Requirements:
 * [node-gyp for Windows](https://github.com/nodejs/node-gyp#on-windows)

@@ -97,7 +97,7 @@ const Kafka = require('node-rdkafka');

 ## Configuration

-You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md)
+You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.5.0/CONFIGURATION.md)

 Configuration keys that have the suffix `_cb` are designated as callbacks. Some
 of these keys are informational and you can choose to opt-in (for example, `dr_cb`). Others are callbacks designed to

@@ -132,7 +132,7 @@ You can also get the version of `librdkafka`
 const Kafka = require('node-rdkafka');
 console.log(Kafka.librdkafkaVersion);

-// #=> 2.3.0
+// #=> 2.5.0
 ```

 ## Sending Messages

@@ -145,7 +145,7 @@ const producer = new Kafka.Producer({
 });
 ```

-A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md) file described previously.
+A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.5.0/CONFIGURATION.md) file described previously.

 The following example illustrates a list with several `librdkafka` options set.
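The README hunks above note that a `Producer` needs only `metadata.broker.list`, and that other keys come from librdkafka's CONFIGURATION.md. As a hedged sketch of such an options object (the broker addresses and chosen tuning values below are illustrative placeholders, not values from this commit):

```javascript
// Illustrative options object one might pass to `new Kafka.Producer(config)`.
// Keys are standard librdkafka configuration properties; the broker
// addresses are placeholders.
const producerConfig = {
  'metadata.broker.list': 'kafka-1:9092,kafka-2:9092', // comma-separated brokers (required)
  'dr_cb': true,                 // opt in to delivery report callbacks
  'compression.codec': 'gzip',   // compress produced message sets
  'retry.backoff.ms': 100,       // first retry backoff (librdkafka default)
  'retry.backoff.max.ms': 1000,  // cap for the exponential backoff (default)
};

console.log('metadata.broker.list' in producerConfig); // → true
```

Constructing the `Producer` itself requires the native `node-rdkafka` module to be installed; the object above only shows the configuration shape.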

config.d.ts

Lines changed: 36 additions & 17 deletions

@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 2.3.0 file CONFIGURATION.md ======
+// ====== Generated from librdkafka 2.5.0 file CONFIGURATION.md ======
 // Code that generated this is a derivative work of the code from Nam Nguyen
 // https://gist.github.com/ntgn81/066c2c8ec5b4238f85d1e9168a04e3fb

@@ -620,12 +620,33 @@ export interface GlobalConfig {
   "client.rack"?: string;

   /**
-   * Controls how the client uses DNS lookups. By default, when the lookup returns multiple IP addresses for a hostname, they will all be attempted for connection before the connection is considered failed. This applies to both bootstrap and advertised servers. If the value is set to `resolve_canonical_bootstrap_servers_only`, each entry will be resolved and expanded into a list of canonical names. NOTE: Default here is different from the Java client's default behavior, which connects only to the first IP address returned for a hostname.
+   * The backoff time in milliseconds before retrying a protocol request, this is the first backoff time, and will be backed off exponentially until number of retries is exhausted, and it's capped by retry.backoff.max.ms.
+   *
+   * @default 100
+   */
+  "retry.backoff.ms"?: number;
+
+  /**
+   * The max backoff time in milliseconds before retrying a protocol request, this is the atmost backoff allowed for exponentially backed off requests.
+   *
+   * @default 1000
+   */
+  "retry.backoff.max.ms"?: number;
+
+  /**
+   * Controls how the client uses DNS lookups. By default, when the lookup returns multiple IP addresses for a hostname, they will all be attempted for connection before the connection is considered failed. This applies to both bootstrap and advertised servers. If the value is set to `resolve_canonical_bootstrap_servers_only`, each entry will be resolved and expanded into a list of canonical names. **WARNING**: `resolve_canonical_bootstrap_servers_only` must only be used with `GSSAPI` (Kerberos) as `sasl.mechanism`, as it's the only purpose of this configuration value. **NOTE**: Default here is different from the Java client's default behavior, which connects only to the first IP address returned for a hostname.
    *
    * @default use_all_dns_ips
    */
   "client.dns.lookup"?: 'use_all_dns_ips' | 'resolve_canonical_bootstrap_servers_only';

+  /**
+   * Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client
+   *
+   * @default true
+   */
+  "enable.metrics.push"?: boolean;
+
   /**
    * Enables or disables `event.*` emitting.
    *

@@ -703,20 +724,6 @@ export interface ProducerGlobalConfig extends GlobalConfig {
   */
   "retries"?: number;

-  /**
-   * The backoff time in milliseconds before retrying a protocol request, this is the first backoff time, and will be backed off exponentially until number of retries is exhausted, and it's capped by retry.backoff.max.ms.
-   *
-   * @default 100
-   */
-  "retry.backoff.ms"?: number;
-
-  /**
-   * The max backoff time in milliseconds before retrying a protocol request, this is the atmost backoff allowed for exponentially backed off requests.
-   *
-   * @default 1000
-   */
-  "retry.backoff.max.ms"?: number;
-
   /**
    * The threshold of outstanding not yet transmitted broker requests needed to backpressure the producer's message accumulator. If the number of not yet transmitted requests equals or exceeds this number, produce request creation that would have otherwise been triggered (for example, in accordance with linger.ms) will be delayed. A lower number yields larger and more effective batches. A higher value can improve latency when using compression on slow machines.
    *

@@ -810,12 +817,24 @@ export interface ConsumerGlobalConfig extends GlobalConfig {
   "heartbeat.interval.ms"?: number;

   /**
-   * Group protocol type. NOTE: Currently, the only supported group protocol type is `consumer`.
+   * Group protocol type for the `classic` group protocol. NOTE: Currently, the only supported group protocol type is `consumer`.
    *
    * @default consumer
    */
   "group.protocol.type"?: string;

+  /**
+   * Group protocol to use. Use `classic` for the original protocol and `consumer` for the new protocol introduced in KIP-848. Available protocols: classic or consumer. Default is `classic`, but will change to `consumer` in next releases.
+   *
+   * @default classic
+   */
+  "group.protocol"?: 'classic' | 'consumer';
+
+  /**
+   * Server side assignor to use. Keep it null to make server select a suitable assignor for the group. Available assignors: uniform or range. Default is null
+   */
+  "group.remote.assignor"?: string;
+
   /**
    * How often to query for the current client group coordinator. If the currently assigned coordinator is down the configured query interval will be divided by ten to more quickly recover in case of coordinator reassignment.
    *
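The `retry.backoff.ms` / `retry.backoff.max.ms` pair moved into `GlobalConfig` above describes an exponentially growing retry delay capped at a maximum. A minimal sketch of that schedule (the doubling factor is an assumption for illustration, and the jitter librdkafka also applies is ignored here):

```javascript
// Sketch of the backoff schedule implied by retry.backoff.ms (initial)
// and retry.backoff.max.ms (cap). Each retry doubles the previous delay
// until the cap is reached. Real librdkafka additionally applies jitter.
function backoffSchedule(initialMs, maxMs, retries) {
  const delays = [];
  let delay = initialMs;
  for (let i = 0; i < retries; i++) {
    delays.push(Math.min(delay, maxMs));
    delay *= 2; // assumed growth factor for illustration
  }
  return delays;
}

console.log(backoffSchedule(100, 1000, 6)); // → [100, 200, 400, 800, 1000, 1000]
```

With the defaults shown in the diff (100 ms initial, 1000 ms cap), retries quickly settle at the one-second ceiling.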

deps/librdkafka

errors.d.ts

Lines changed: 3 additions & 1 deletion

@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 2.3.0 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 2.5.0 file src-cpp/rdkafkacpp.h ======
 export const CODES: { ERRORS: {
   /* Internal errors to rdkafka: */
   /** Begin internal error codes (**-200**) */
@@ -128,8 +128,10 @@ export const CODES: { ERRORS: {
   ERR__AUTO_OFFSET_RESET: number,
   /** Partition log truncation detected (**-139**) */
   ERR__LOG_TRUNCATION: number,
+
   /** End internal error codes (**-100**) */
   ERR__END: number,
+
   /* Kafka broker errors: */
   /** Unknown broker error (**-1**) */
   ERR_UNKNOWN: number,

lib/error.js

Lines changed: 3 additions & 1 deletion

@@ -27,7 +27,7 @@ LibrdKafkaError.wrap = errorWrap;
  * @enum {number}
  * @constant
  */
-// ====== Generated from librdkafka 2.3.0 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 2.5.0 file src-cpp/rdkafkacpp.h ======
 LibrdKafkaError.codes = {

   /* Internal errors to rdkafka: */
@@ -158,8 +158,10 @@ LibrdKafkaError.codes = {
   ERR__AUTO_OFFSET_RESET: -140,
   /** Partition log truncation detected */
   ERR__LOG_TRUNCATION: -139,
+
   /** End internal error codes */
   ERR__END: -100,
+
   /* Kafka broker errors: */
   /** Unknown broker error */
   ERR_UNKNOWN: -1,
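The `lib/error.js` hunk above only adds blank lines around the generated table, but the codes it shows are useful at runtime for matching errors. A small standalone sketch (the values are copied from the diff; `codeName` is a hypothetical helper for illustration, not part of the library, which exposes these as `LibrdKafkaError.codes`):

```javascript
// Subset of the generated librdkafka error-code table, values as shown
// in the diff above. Negative codes are internal to rdkafka.
const codes = {
  ERR__AUTO_OFFSET_RESET: -140, // no offset to auto-reset to
  ERR__LOG_TRUNCATION: -139,    // partition log truncation detected
  ERR__END: -100,               // end of internal error codes
  ERR_UNKNOWN: -1,              // unknown broker error
};

// Hypothetical reverse lookup: map a numeric errno back to its name.
function codeName(errno) {
  return Object.keys(codes).find((k) => codes[k] === errno) || 'UNKNOWN_CODE';
}

console.log(codeName(-139)); // → 'ERR__LOG_TRUNCATION'
```

In application code one would compare `err.code` from a `LibrdKafkaError` against these constants rather than hard-coding the numbers.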

package-lock.json

Lines changed: 2 additions & 2 deletions
Some generated files are not rendered by default.

package.json

Lines changed: 3 additions & 3 deletions

@@ -1,8 +1,8 @@
 {
   "name": "node-rdkafka",
-  "version": "v3.0.1",
+  "version": "v3.1.0",
   "description": "Node.js bindings for librdkafka",
-  "librdkafka": "2.3.0",
+  "librdkafka": "2.5.0",
   "main": "lib/index.js",
   "scripts": {
     "configure": "node-gyp configure",
@@ -45,4 +45,4 @@
   "engines": {
     "node": ">=16"
   }
-}
+}

0 commit comments