Commit 351964e — Update librdkafka to 2.11.1 (#1137)
1 parent: ad56f71

File tree: 7 files changed, +170 −16 lines

7 files changed

+170
-16
lines changed

README.md — 5 additions, 5 deletions

@@ -17,7 +17,7 @@ I am looking for *your* help to make this project even better! If you're interes
 The `node-rdkafka` library is a high-performance NodeJS client for [Apache Kafka](http://kafka.apache.org/) that wraps the native [librdkafka](https://github.com/edenhill/librdkafka) library. All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library.
 
-__This library currently uses `librdkafka` version `2.10.1`.__
+__This library currently uses `librdkafka` version `2.11.1`.__
 
 ## Reference Docs
 
@@ -60,7 +60,7 @@ Using Alpine Linux? Check out the [docs](https://github.com/Blizzard/node-rdkafk
 ### Windows
 
-The Windows build **is not** compiled from `librdkafka` source; instead it is linked against the matching [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary, downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.10.1.nupkg` during installation. The download location can be changed with the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when not set.
+The Windows build **is not** compiled from `librdkafka` source; instead it is linked against the matching [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary, downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.11.1.nupkg` during installation. The download location can be changed with the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when not set.
 
 Requirements:
 * [node-gyp for Windows](https://github.com/nodejs/node-gyp#on-windows)
@@ -97,7 +97,7 @@ const Kafka = require('node-rdkafka');
 ## Configuration
 
-You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.10.1/CONFIGURATION.md)
+You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.11.1/CONFIGURATION.md)
 
 Configuration keys that have the suffix `_cb` are designated as callbacks. Some
 of these keys are informational and you can choose to opt-in (for example, `dr_cb`). Others are callbacks designed to
@@ -132,7 +132,7 @@ You can also get the version of `librdkafka`
 const Kafka = require('node-rdkafka');
 console.log(Kafka.librdkafkaVersion);
 
-// #=> 2.10.1
+// #=> 2.11.1
 ```
 
 ## Sending Messages
@@ -145,7 +145,7 @@ const producer = new Kafka.Producer({
 });
 ```
 
-A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.10.1/CONFIGURATION.md) file described previously.
+A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.11.1/CONFIGURATION.md) file described previously.
 
 The following example illustrates a list with several `librdkafka` options set.
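As a sketch of the producer setup the README section above describes (the broker addresses are placeholders, and in a real program this object would be passed to `new Kafka.Producer(config)` from `node-rdkafka`):

```javascript
// Minimal producer configuration sketch. Broker addresses are placeholders;
// `metadata.broker.list` is the only required key and takes a comma-separated
// list of brokers. `dr_cb` is an opt-in delivery-report callback flag.
const producerConfig = {
  "metadata.broker.list": "broker1:9092,broker2:9092",
  "dr_cb": true,
};

// In a real program:  const producer = new Kafka.Producer(producerConfig);
console.log(producerConfig["metadata.broker.list"].split(",").length); // → 2
```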

config.d.ts — 105 additions, 3 deletions
@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 2.10.1 file CONFIGURATION.md ======
+// ====== Generated from librdkafka 2.11.1 file CONFIGURATION.md ======
 // Code that generated this is a derivative work of the code from Nam Nguyen
 // https://gist.github.com/ntgn81/066c2c8ec5b4238f85d1e9168a04e3fb
 
@@ -63,12 +63,19 @@ export interface GlobalConfig {
   "max.in.flight"?: number;
 
   /**
-   * Controls how the client recovers when none of the brokers known to it is available. If set to `none`, the client fails with a fatal error. If set to `rebootstrap`, the client repeats the bootstrap process using `bootstrap.servers` and brokers added through `rd_kafka_brokers_add()`. Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously.
+   * Controls how the client recovers when none of the brokers known to it is available. If set to `none`, the client doesn't re-bootstrap. If set to `rebootstrap`, the client repeats the bootstrap process using `bootstrap.servers` and brokers added through `rd_kafka_brokers_add()`. Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously or the client cannot refresh metadata within `metadata.recovery.rebootstrap.trigger.ms` or it's requested in a metadata response.
    *
    * @default rebootstrap
    */
   "metadata.recovery.strategy"?: 'none' | 'rebootstrap';
 
+  /**
+   * If a client configured to rebootstrap using `metadata.recovery.strategy=rebootstrap` is unable to obtain metadata from any of the brokers for this interval, client repeats the bootstrap process using `bootstrap.servers` configuration and brokers added through `rd_kafka_brokers_add()`.
+   *
+   * @default 300000
+   */
+  "metadata.recovery.rebootstrap.trigger.ms"?: number;
+
   /**
    * Period of time in milliseconds at which topic and broker metadata is refreshed in order to proactively discover any new brokers, topics, partitions or partition leader changes. Use -1 to disable the intervalled refresh (not recommended). If there are no locally referenced topics (no topic objects created, no messages produced, no subscription or no assignment) then only the broker list will be refreshed every interval but no more often than every 10s.
    *
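The new rebootstrap settings above can be sketched as a plain configuration object (values shown are the defaults quoted in the generated typings; the object is illustrative only, not wired to a client):

```javascript
// Sketch of the metadata-recovery settings added in librdkafka 2.11.
const recoveryConfig = {
  // 'none' disables re-bootstrapping; 'rebootstrap' (the default) repeats the
  // bootstrap process from bootstrap.servers when all known brokers are gone.
  "metadata.recovery.strategy": "rebootstrap",
  // New property: also re-bootstrap if no metadata could be obtained from any
  // broker for this many milliseconds (default 300000 = 5 minutes).
  "metadata.recovery.rebootstrap.trigger.ms": 300000,
};

console.log(recoveryConfig["metadata.recovery.rebootstrap.trigger.ms"] / 60000); // → 5
```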
@@ -192,7 +199,7 @@ export interface GlobalConfig {
   "socket.connection.setup.timeout.ms"?: number;
 
   /**
-   * Close broker connections after the specified time of inactivity. Disable with 0. If this property is left at its default value some heuristics are performed to determine a suitable default value, this is currently limited to identifying brokers on Azure (see librdkafka issue #3109 for more info).
+   * Close broker connections after the specified time of inactivity. Disable with 0. If this property is left at its default value some heuristics are performed to determine a suitable default value, this is currently limited to identifying brokers on Azure (see librdkafka issue #3109 for more info). Actual value can be lower, up to 2s lower, only if `connections.max.idle.ms` >= 4s, as jitter is added to avoid disconnecting all brokers at the same time.
    *
    * @default 0
    */
@@ -432,6 +439,16 @@ export interface GlobalConfig {
    */
   "ssl.ca.location"?: string;
 
+  /**
+   * File or directory path to CA certificate(s) for verifying HTTPS endpoints, like `sasl.oauthbearer.token.endpoint.url` used for OAUTHBEARER/OIDC authentication. Mutually exclusive with `https.ca.pem`. Defaults: On Windows the system's CA certificates are automatically looked up in the Windows Root certificate store. On Mac OSX this configuration defaults to `probe`. It is recommended to install openssl using Homebrew, to provide CA certificates. On Linux install the distribution's ca-certificates package. If OpenSSL is statically linked or `https.ca.location` is set to `probe` a list of standard paths will be probed and the first one found will be used as the default CA certificate location path. If OpenSSL is dynamically linked the OpenSSL library's default path will be used (see `OPENSSLDIR` in `openssl version -a`).
+   */
+  "https.ca.location"?: string;
+
+  /**
+   * CA certificate string (PEM format) for verifying HTTPS endpoints. Mutually exclusive with `https.ca.location`. Optional: see `https.ca.location`.
+   */
+  "https.ca.pem"?: string;
+
   /**
    * CA certificate string (PEM format) for verifying the broker's key.
    */
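The docs for the new HTTPS CA properties say `https.ca.location` and `https.ca.pem` are mutually exclusive. A hypothetical pre-flight check for that constraint (`checkHttpsCaConfig` is an illustrative helper, not part of node-rdkafka):

```javascript
// Hypothetical helper: reject configs that set both mutually exclusive
// HTTPS CA properties before handing the config to the client.
function checkHttpsCaConfig(config) {
  if (config["https.ca.location"] !== undefined &&
      config["https.ca.pem"] !== undefined) {
    throw new Error("https.ca.location and https.ca.pem are mutually exclusive");
  }
  return config;
}

checkHttpsCaConfig({ "https.ca.location": "/etc/ssl/certs" }); // ok
```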
@@ -591,6 +608,16 @@ export interface GlobalConfig {
    */
   "sasl.oauthbearer.client.id"?: string;
 
+  /**
+   * Alias for `sasl.oauthbearer.client.id`: Public identifier for the application. Must be unique across all clients that the authorization server handles. Only used when `sasl.oauthbearer.method` is set to "oidc".
+   */
+  "sasl.oauthbearer.client.credentials.client.id"?: string;
+
+  /**
+   * Alias for `sasl.oauthbearer.client.secret`: Client secret only known to the application and the authorization server. This should be a sufficiently random string that is not guessable. Only used when `sasl.oauthbearer.method` is set to "oidc".
+   */
+  "sasl.oauthbearer.client.credentials.client.secret"?: string;
+
   /**
    * Client secret only known to the application and the authorization server. This should be a sufficiently random string that is not guessable. Only used when `sasl.oauthbearer.method` is set to "oidc".
    */
@@ -611,6 +638,81 @@ export interface GlobalConfig {
    */
   "sasl.oauthbearer.token.endpoint.url"?: string;
 
+  /**
+   * OAuth grant type to use when communicating with the identity provider.
+   *
+   * @default client_credentials
+   */
+  "sasl.oauthbearer.grant.type"?: 'client_credentials' | 'urn:ietf:params:oauth:grant-type:jwt-bearer';
+
+  /**
+   * Algorithm the client should use to sign the assertion sent to the identity provider and in the OAuth alg header in the JWT assertion.
+   *
+   * @default RS256
+   */
+  "sasl.oauthbearer.assertion.algorithm"?: 'RS256' | 'ES256';
+
+  /**
+   * Path to client's private key (PEM) used for authentication when using the JWT assertion.
+   */
+  "sasl.oauthbearer.assertion.private.key.file"?: string;
+
+  /**
+   * Private key passphrase for `sasl.oauthbearer.assertion.private.key.file` or `sasl.oauthbearer.assertion.private.key.pem`.
+   */
+  "sasl.oauthbearer.assertion.private.key.passphrase"?: string;
+
+  /**
+   * Client's private key (PEM) used for authentication when using the JWT assertion.
+   */
+  "sasl.oauthbearer.assertion.private.key.pem"?: string;
+
+  /**
+   * Path to the assertion file. Only used when `sasl.oauthbearer.method` is set to "oidc" and JWT assertion is needed.
+   */
+  "sasl.oauthbearer.assertion.file"?: string;
+
+  /**
+   * JWT audience claim. Only used when `sasl.oauthbearer.method` is set to "oidc" and JWT assertion is needed.
+   */
+  "sasl.oauthbearer.assertion.claim.aud"?: string;
+
+  /**
+   * Assertion expiration time in seconds. Only used when `sasl.oauthbearer.method` is set to "oidc" and JWT assertion is needed.
+   *
+   * @default 300
+   */
+  "sasl.oauthbearer.assertion.claim.exp.seconds"?: number;
+
+  /**
+   * JWT issuer claim. Only used when `sasl.oauthbearer.method` is set to "oidc" and JWT assertion is needed.
+   */
+  "sasl.oauthbearer.assertion.claim.iss"?: string;
+
+  /**
+   * JWT ID claim. When set to `true`, a random UUID is generated. Only used when `sasl.oauthbearer.method` is set to "oidc" and JWT assertion is needed.
+   *
+   * @default false
+   */
+  "sasl.oauthbearer.assertion.claim.jti.include"?: boolean;
+
+  /**
+   * Assertion not before time in seconds. Only used when `sasl.oauthbearer.method` is set to "oidc" and JWT assertion is needed.
+   *
+   * @default 60
+   */
+  "sasl.oauthbearer.assertion.claim.nbf.seconds"?: number;
+
+  /**
+   * JWT subject claim. Only used when `sasl.oauthbearer.method` is set to "oidc" and JWT assertion is needed.
+   */
+  "sasl.oauthbearer.assertion.claim.sub"?: string;
+
+  /**
+   * Path to the JWT template file. Only used when `sasl.oauthbearer.method` is set to "oidc" and JWT assertion is needed.
+   */
+  "sasl.oauthbearer.assertion.jwt.template.file"?: string;
+
   /**
    * List of plugin libraries to load (; separated). The library search path is platform dependent (see dlopen(3) for Unix and LoadLibrary() for Windows). If no filename extension is specified the platform-specific extension (such as .dll or .so) will be appended automatically.
    */
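Taken together, the new OIDC JWT-assertion properties might be combined like this. The endpoint URL and key path below are placeholders, and the object is an illustrative fragment only, with the defaults (`300`/`60` seconds) written out explicitly:

```javascript
// Illustrative OIDC JWT-bearer configuration fragment. URL and key path are
// placeholders; in practice this would be merged into the client config.
const oidcConfig = {
  "sasl.oauthbearer.method": "oidc",
  "sasl.oauthbearer.token.endpoint.url": "https://idp.example.com/token",
  // Use the JWT-bearer grant instead of the default client_credentials.
  "sasl.oauthbearer.grant.type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
  // RS256 is the default signing algorithm; ES256 is the other option.
  "sasl.oauthbearer.assertion.algorithm": "RS256",
  "sasl.oauthbearer.assertion.private.key.file": "/etc/kafka/client.key",
  // Defaults from the typings: 300s expiry, 60s not-before.
  "sasl.oauthbearer.assertion.claim.exp.seconds": 300,
  "sasl.oauthbearer.assertion.claim.nbf.seconds": 60,
};

console.log(oidcConfig["sasl.oauthbearer.grant.type"]);
```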

deps/librdkafka — submodule pointer updated

errors.d.ts — 27 additions, 1 deletion
@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 2.10.1 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 2.11.1 file src-cpp/rdkafkacpp.h ======
 export const CODES: { ERRORS: {
   /* Internal errors to rdkafka: */
   /** Begin internal error codes (**-200**) */
@@ -128,6 +128,11 @@ export const CODES: { ERRORS: {
   ERR__AUTO_OFFSET_RESET: number,
   /** Partition log truncation detected (**-139**) */
   ERR__LOG_TRUNCATION: number,
+  /** A different record in the batch was invalid
+   * and this message failed persisting (**-138**) */
+  ERR__INVALID_DIFFERENT_RECORD: number,
+  /** Broker is going away but client isn't terminating (**-137**) */
+  ERR__DESTROY_BROKER: number,
 
   /** End internal error codes (**-100**) */
   ERR__END: number,
@@ -346,4 +351,25 @@ export const CODES: { ERRORS: {
   ERR_FEATURE_UPDATE_FAILED: number,
   /** Request principal deserialization failed during forwarding (**97**) */
   ERR_PRINCIPAL_DESERIALIZATION_FAILURE: number,
+  /** Unknown Topic Id (**100**) */
+  ERR_UNKNOWN_TOPIC_ID: number,
+  /** The member epoch is fenced by the group coordinator (**110**) */
+  ERR_FENCED_MEMBER_EPOCH: number,
+  /** The instance ID is still used by another member in the
+   * consumer group (**111**) */
+  ERR_UNRELEASED_INSTANCE_ID: number,
+  /** The assignor or its version range is not supported by the consumer
+   * group (**112**) */
+  ERR_UNSUPPORTED_ASSIGNOR: number,
+  /** The member epoch is stale (**113**) */
+  ERR_STALE_MEMBER_EPOCH: number,
+  /** Client sent a push telemetry request with an invalid or outdated
+   * subscription ID (**117**) */
+  ERR_UNKNOWN_SUBSCRIPTION_ID: number,
+  /** Client sent a push telemetry request larger than the maximum size
+   * the broker will accept (**118**) */
+  ERR_TELEMETRY_TOO_LARGE: number,
+  /** Client metadata is stale,
+   * client should rebootstrap to obtain new metadata (**129**) */
+  ERR_REBOOTSTRAP_REQUIRED: number,
 }}
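The numeric codes documented above can be matched on an error's `code` property. A minimal sketch of reacting to the new `ERR_REBOOTSTRAP_REQUIRED` code (the local constant mirrors the generated value rather than importing node-rdkafka, and `shouldRebootstrap` is an illustrative helper):

```javascript
// Mirrors errors.d.ts: client metadata is stale, client should rebootstrap.
const ERR_REBOOTSTRAP_REQUIRED = 129;

// Hypothetical helper: decide whether an error calls for re-bootstrapping.
function shouldRebootstrap(err) {
  return !!(err && err.code === ERR_REBOOTSTRAP_REQUIRED);
}

console.log(shouldRebootstrap({ code: 129 })); // → true
```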

lib/error.js — 28 additions, 2 deletions
@@ -27,7 +27,7 @@ LibrdKafkaError.wrap = errorWrap;
  * @enum {number}
  * @constant
  */
-// ====== Generated from librdkafka 2.10.1 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 2.11.1 file src-cpp/rdkafkacpp.h ======
 LibrdKafkaError.codes = {
 
   /* Internal errors to rdkafka: */
@@ -158,6 +158,11 @@ LibrdKafkaError.codes = {
   ERR__AUTO_OFFSET_RESET: -140,
   /** Partition log truncation detected */
   ERR__LOG_TRUNCATION: -139,
+  /** A different record in the batch was invalid
+   * and this message failed persisting. */
+  ERR__INVALID_DIFFERENT_RECORD: -138,
+  /** Broker is going away but client isn't terminating */
+  ERR__DESTROY_BROKER: -137,
 
   /** End internal error codes */
   ERR__END: -100,
@@ -375,7 +380,28 @@ LibrdKafkaError.codes = {
   /** Unable to update finalized features due to server error */
   ERR_FEATURE_UPDATE_FAILED: 96,
   /** Request principal deserialization failed during forwarding */
-  ERR_PRINCIPAL_DESERIALIZATION_FAILURE: 97
+  ERR_PRINCIPAL_DESERIALIZATION_FAILURE: 97,
+  /** Unknown Topic Id */
+  ERR_UNKNOWN_TOPIC_ID: 100,
+  /** The member epoch is fenced by the group coordinator */
+  ERR_FENCED_MEMBER_EPOCH: 110,
+  /** The instance ID is still used by another member in the
+   * consumer group */
+  ERR_UNRELEASED_INSTANCE_ID: 111,
+  /** The assignor or its version range is not supported by the consumer
+   * group */
+  ERR_UNSUPPORTED_ASSIGNOR: 112,
+  /** The member epoch is stale */
+  ERR_STALE_MEMBER_EPOCH: 113,
+  /** Client sent a push telemetry request with an invalid or outdated
+   * subscription ID. */
+  ERR_UNKNOWN_SUBSCRIPTION_ID: 117,
+  /** Client sent a push telemetry request larger than the maximum size
+   * the broker will accept. */
+  ERR_TELEMETRY_TOO_LARGE: 118,
+  /** Client metadata is stale,
+   * client should rebootstrap to obtain new metadata. */
+  ERR_REBOOTSTRAP_REQUIRED: 129
 };
 
 /**

package-lock.json — 2 additions, 2 deletions (generated file; diff not rendered)

package.json — 2 additions, 2 deletions
@@ -1,8 +1,8 @@
 {
   "name": "node-rdkafka",
-  "version": "v3.4.1",
+  "version": "v3.5.0",
   "description": "Node.js bindings for librdkafka",
-  "librdkafka": "2.10.1",
+  "librdkafka": "2.11.1",
   "main": "lib/index.js",
   "scripts": {
     "configure": "node-gyp configure",
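As the package.json diff shows, the pinned librdkafka version lives in a custom `"librdkafka"` field alongside the package version. A sketch of reading that pin (an inline object stands in for the real file here):

```javascript
// Inline stand-in for package.json; a real script would read and parse the
// file instead (e.g. with fs.readFileSync + JSON.parse).
const pkg = {
  name: "node-rdkafka",
  version: "v3.5.0",
  librdkafka: "2.11.1",
};

// Split the custom "librdkafka" pin into numeric components.
const [major, minor, patch] = pkg.librdkafka.split(".").map(Number);
console.log(pkg.librdkafka); // → 2.11.1
```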
