Commit c03a30e

Update librdkafka from 1.5.3 -> 1.6.1 (#879)
1 parent 24e1a1a commit c03a30e

File tree: 6 files changed (+88 −18 lines)

6 files changed

+88
-18
lines changed

README.md

Lines changed: 5 additions & 5 deletions
@@ -16,7 +16,7 @@ I am looking for *your* help to make this project even better! If you're interes
 
 The `node-rdkafka` library is a high-performance NodeJS client for [Apache Kafka](http://kafka.apache.org/) that wraps the native [librdkafka](https://github.com/edenhill/librdkafka) library. All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library.
 
-__This library currently uses `librdkafka` version `1.5.3`.__
+__This library currently uses `librdkafka` version `1.6.1`.__
 
 ## Reference Docs
 
@@ -59,7 +59,7 @@ Using Alpine Linux? Check out the [docs](https://github.com/Blizzard/node-rdkafk
 
 ### Windows
 
-Windows build **is not** compiled from `librdkafka` source but is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.1.5.3.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
+Windows build **is not** compiled from `librdkafka` source but is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.1.6.1.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
 
 Requirements:
 * [node-gyp for Windows](https://github.com/nodejs/node-gyp#on-windows) (the easiest way to get it: `npm install --global --production windows-build-tools`; if your node version is 6.x or below, please use `npm install --global --production [email protected]`)
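The download URL above is assembled from the overridable base URL and the pinned librdkafka version. A minimal sketch of that logic (the version string is assumed to match the `librdkafka` field in package.json; this is an illustration, not the installer's actual code):

```javascript
// Sketch of how the installer's librdkafka.redist download URL is formed.
// The base defaults to the NuGet CDN and can be overridden with the
// NODE_RDKAFKA_NUGET_BASE_URL environment variable.
const base = process.env.NODE_RDKAFKA_NUGET_BASE_URL ||
  'https://globalcdn.nuget.org/packages/';
const librdkafkaVersion = '1.6.1'; // pinned in package.json ("librdkafka")
const nupkgUrl = `${base}librdkafka.redist.${librdkafkaVersion}.nupkg`;
console.log(nupkgUrl);
```

Only the base is configurable; the file name stays pinned to the bundled librdkafka version, so an internal mirror would need to serve the same `librdkafka.redist.<version>.nupkg` path.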
@@ -96,7 +96,7 @@ var Kafka = require('node-rdkafka');
 
 ## Configuration
 
-You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v1.5.3/CONFIGURATION.md)
+You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v1.6.1/CONFIGURATION.md)
 
 Configuration keys that have the suffix `_cb` are designated as callbacks. Some
 of these keys are informational and you can choose to opt-in (for example, `dr_cb`). Others are callbacks designed to
@@ -131,7 +131,7 @@ You can also get the version of `librdkafka`
 const Kafka = require('node-rdkafka');
 console.log(Kafka.librdkafkaVersion);
 
-// #=> 1.5.3
+// #=> 1.6.1
 ```
 
 ## Sending Messages
@@ -144,7 +144,7 @@ var producer = new Kafka.Producer({
 });
 ```
 
-A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v1.5.3/CONFIGURATION.md) file described previously.
+A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v1.6.1/CONFIGURATION.md) file described previously.
 
 The following example illustrates a list with several `librdkafka` options set.
 
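As a sketch of such a list, here is a producer configuration object exercising a few `librdkafka` options (the broker addresses and option values are illustrative placeholders, not part of the original README):

```javascript
// Sketch: a producer configuration as it would be passed to
// new Kafka.Producer(producerConfig).
// Only 'metadata.broker.list' is required; the rest are optional examples.
const producerConfig = {
  'metadata.broker.list': 'kafka1:9092,kafka2:9092', // comma-separated brokers
  'dr_cb': true,                 // opt in to delivery reports
  'compression.codec': 'gzip',   // compress message sets
  'retry.backoff.ms': 200,       // backoff between retries
};
console.log(Object.keys(producerConfig).length); // → 4
```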

config.d.ts

Lines changed: 22 additions & 6 deletions
@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 1.5.3 file CONFIGURATION.md ======
+// ====== Generated from librdkafka 1.6.1 file CONFIGURATION.md ======
 // Code that generated this is a derivative work of the code from Nam Nguyen
 // https://gist.github.com/ntgn81/066c2c8ec5b4238f85d1e9168a04e3fb
 
@@ -261,7 +261,7 @@ export interface GlobalConfig {
   "log.thread.name"?: boolean;
 
   /**
-   * If enabled librdkafka will initialize the POSIX PRNG with srand(current_time.milliseconds) on the first invocation of rd_kafka_new(). If disabled the application must call srand() prior to calling rd_kafka_new().
+   * If enabled librdkafka will initialize the PRNG with srand(current_time.milliseconds) on the first invocation of rd_kafka_new() (required only if rand_r() is not available on your platform). If disabled the application must call srand() prior to calling rd_kafka_new().
    *
    * @default true
    */
@@ -402,7 +402,9 @@ export interface GlobalConfig {
   "ssl_certificate"?: any;
 
   /**
-   * File or directory path to CA certificate(s) for verifying the broker's key. Defaults: On Windows the system's CA certificates are automatically looked up in the Windows Root certificate store. On Mac OSX it is recommended to install openssl using Homebrew, to provide CA certificates. On Linux install the distribution's ca-certificates package. If OpenSSL is statically linked or `ssl.ca.location` is set to `probe` a list of standard paths will be probed and the first one found will be used as the default CA certificate location path. If OpenSSL is dynamically linked the OpenSSL library's default path will be used (see `OPENSSLDIR` in `openssl version -a`).
+   * File or directory path to CA certificate(s) for verifying the broker's key. Defaults: On Windows the system's CA certificates are automatically looked up in the Windows Root certificate store. On Mac OSX this configuration defaults to `probe`. It is recommended to install openssl using Homebrew, to provide CA certificates. On Linux install the distribution's ca-certificates package. If OpenSSL is statically linked or `ssl.ca.location` is set to `probe` a list of standard paths will be probed and the first one found will be used as the default CA certificate location path. If OpenSSL is dynamically linked the OpenSSL library's default path will be used (see `OPENSSLDIR` in `openssl version -a`).
+   *
+   * @default probe
    */
   "ssl.ca.location"?: string;
 

@@ -411,6 +413,13 @@ export interface GlobalConfig {
    */
   "ssl_ca"?: any;
 
+  /**
+   * Comma-separated list of Windows Certificate stores to load CA certificates from. Certificates will be loaded in the same order as stores are specified. If no certificates can be loaded from any of the specified stores an error is logged and the OpenSSL library's default CA location is used instead. Store names are typically one or more of: MY, Root, Trust, CA.
+   *
+   * @default Root
+   */
+  "ssl.ca.certificate.stores"?: string;
+
   /**
    * Path to CRL for verifying broker's certificate validity.
   */
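The CA-related options in this hunk can be combined; here is a sketch of a global SSL configuration (the store list and protocol value are illustrative, not taken from the commit):

```javascript
// Sketch: SSL trust configuration using the options documented above.
// 'ssl.ca.certificate.stores' is only consulted on Windows; elsewhere
// 'probe' searches a list of standard CA certificate paths.
const sslConfig = {
  'security.protocol': 'ssl',
  'ssl.ca.location': 'probe',             // also the new default on Mac OSX
  'ssl.ca.certificate.stores': 'Root,CA', // load order follows the list
};
console.log(sslConfig['ssl.ca.location']); // → "probe"
```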
@@ -669,6 +678,13 @@ export interface ProducerGlobalConfig extends GlobalConfig {
   * Delivery report callback (set with rd_kafka_conf_set_dr_msg_cb())
   */
  "dr_msg_cb"?: boolean;
+
+  /**
+   * Delay in milliseconds to wait to assign new sticky partitions for each topic. By default, set to double the time of linger.ms. To disable sticky behavior, set to 0. This behavior affects messages with the key NULL in all cases, and messages with key lengths of zero when the consistent_random partitioner is in use. These messages would otherwise be assigned randomly. A higher value allows for more effective batching of these messages.
+   *
+   * @default 10
+   */
+  "sticky.partitioning.linger.ms"?: number;
 }
 
 export interface ConsumerGlobalConfig extends GlobalConfig {
@@ -683,7 +699,7 @@ export interface ConsumerGlobalConfig extends GlobalConfig {
   "group.instance.id"?: string;
 
   /**
-   * Name of partition assignment strategy to use when elected group leader assigns partitions to group members.
+   * The name of one or more partition assignment strategies. The elected group leader will use a strategy supported by all members of the group to assign partitions to group members. If there is more than one eligible strategy, preference is determined by the order of this list (strategies earlier in the list have higher priority). Cooperative and non-cooperative (eager) strategies must not be mixed. Available strategies: range, roundrobin, cooperative-sticky.
    *
    * @default range,roundrobin
    */
@@ -704,7 +720,7 @@ export interface ConsumerGlobalConfig extends GlobalConfig {
   "heartbeat.interval.ms"?: number;
 
   /**
-   * Group protocol type
+   * Group protocol type. NOTE: Currently, the only supported group protocol type is `consumer`.
    *
    * @default consumer
    */
@@ -971,7 +987,7 @@ export interface ConsumerTopicConfig extends TopicConfig {
   "auto.commit.interval.ms"?: number;
 
   /**
-   * Action to take when there is no initial offset in offset store or the desired offset is out of range: 'smallest','earliest' - automatically reset the offset to the smallest offset, 'largest','latest' - automatically reset the offset to the largest offset, 'error' - trigger an error which is retrieved by consuming messages and checking 'message->err'.
+   * Action to take when there is no initial offset in offset store or the desired offset is out of range: 'smallest','earliest' - automatically reset the offset to the smallest offset, 'largest','latest' - automatically reset the offset to the largest offset, 'error' - trigger an error (ERR__AUTO_OFFSET_RESET) which is retrieved by consuming messages and checking 'message->err'.
    *
    * @default largest
    */
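The consumer-side options documented above can be sketched as plain config objects (the group id and broker address are placeholder values, not from the commit):

```javascript
// Global consumer config: strategies are listed in preference order, and
// cooperative ('cooperative-sticky') and eager strategies must not be mixed.
const consumerConfig = {
  'group.id': 'example-group',            // placeholder
  'metadata.broker.list': 'kafka1:9092',  // placeholder
  'partition.assignment.strategy': 'cooperative-sticky',
};

// Topic-level config: what to do when no valid committed offset exists.
// 'error' would instead surface ERR__AUTO_OFFSET_RESET via message->err.
const topicConfig = {
  'auto.offset.reset': 'earliest',
};
console.log(consumerConfig['partition.assignment.strategy']); // → "cooperative-sticky"
```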

deps/librdkafka

Lines changed: 1 addition & 1 deletion (submodule pointer bump)

errors.d.ts

Lines changed: 29 additions & 2 deletions
@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 1.5.3 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 1.6.1 file src-cpp/rdkafkacpp.h ======
 export const CODES: { ERRORS: {
   /* Internal errors to rdkafka: */
   /** Begin internal error codes (**-200**) */
@@ -120,6 +120,12 @@ export const CODES: { ERRORS: {
   ERR__FENCED: number,
   /** Application generated error (**-143**) */
   ERR__APPLICATION: number,
+  /** Assignment lost (**-142**) */
+  ERR__ASSIGNMENT_LOST: number,
+  /** No operation performed (**-141**) */
+  ERR__NOOP: number,
+  /** No offset to automatically reset to (**-140**) */
+  ERR__AUTO_OFFSET_RESET: number,
   /** End internal error codes (**-100**) */
   ERR__END: number,
   /* Kafka broker errors: */
@@ -309,10 +315,31 @@ export const CODES: { ERRORS: {
   ERR_ELECTION_NOT_NEEDED: number,
   /** No partition reassignment is in progress (**85**) */
   ERR_NO_REASSIGNMENT_IN_PROGRESS: number,
-  /** Deleting offsets of a topic while the consumer group is subscribed to it (**86**) */
+  /** Deleting offsets of a topic while the consumer group is
+   * subscribed to it (**86**) */
   ERR_GROUP_SUBSCRIBED_TO_TOPIC: number,
   /** Broker failed to validate record (**87**) */
   ERR_INVALID_RECORD: number,
   /** There are unstable offsets that need to be cleared (**88**) */
   ERR_UNSTABLE_OFFSET_COMMIT: number,
+  /** Throttling quota has been exceeded (**89**) */
+  ERR_THROTTLING_QUOTA_EXCEEDED: number,
+  /** There is a newer producer with the same transactionalId
+   * which fences the current one (**90**) */
+  ERR_PRODUCER_FENCED: number,
+  /** Request illegally referred to resource that does not exist (**91**) */
+  ERR_RESOURCE_NOT_FOUND: number,
+  /** Request illegally referred to the same resource twice (**92**) */
+  ERR_DUPLICATE_RESOURCE: number,
+  /** Requested credential would not meet criteria for acceptability (**93**) */
+  ERR_UNACCEPTABLE_CREDENTIAL: number,
+  /** Indicates that either the sender or recipient of a
+   * voter-only request is not one of the expected voters (**94**) */
+  ERR_INCONSISTENT_VOTER_SET: number,
+  /** Invalid update version (**95**) */
+  ERR_INVALID_UPDATE_VERSION: number,
+  /** Unable to update finalized features due to server error (**96**) */
+  ERR_FEATURE_UPDATE_FAILED: number,
+  /** Request principal deserialization failed during forwarding (**97**) */
+  ERR_PRINCIPAL_DESERIALIZATION_FAILURE: number,
 }}
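The new internal codes added here can be branched on numerically; a sketch (the `{ code }` error shape and the helper name are assumptions for illustration; the numeric values mirror this diff):

```javascript
// Numeric values mirror the internal codes introduced in this commit.
const ERR__ASSIGNMENT_LOST = -142;
const ERR__NOOP = -141;
const ERR__AUTO_OFFSET_RESET = -140;

// Classify an error object of the { code: number } shape.
function classifyInternalError(err) {
  switch (err.code) {
    case ERR__ASSIGNMENT_LOST: return 'rebalance: assignment lost';
    case ERR__NOOP: return 'no operation performed';
    case ERR__AUTO_OFFSET_RESET: return 'no offset to automatically reset to';
    default: return 'other';
  }
}
console.log(classifyInternalError({ code: -140 })); // → "no offset to automatically reset to"
```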

lib/error.js

Lines changed: 30 additions & 3 deletions
@@ -27,7 +27,7 @@ LibrdKafkaError.wrap = errorWrap;
  * @enum {number}
  * @constant
  */
-// ====== Generated from librdkafka 1.5.3 file src-cpp/rdkafkacpp.h ======
+// ====== Generated from librdkafka 1.6.1 file src-cpp/rdkafkacpp.h ======
 LibrdKafkaError.codes = {
 
   /* Internal errors to rdkafka: */
@@ -150,6 +150,12 @@ LibrdKafkaError.codes = {
   ERR__FENCED: -144,
   /** Application generated error */
   ERR__APPLICATION: -143,
+  /** Assignment lost */
+  ERR__ASSIGNMENT_LOST: -142,
+  /** No operation performed */
+  ERR__NOOP: -141,
+  /** No offset to automatically reset to */
+  ERR__AUTO_OFFSET_RESET: -140,
   /** End internal error codes */
   ERR__END: -100,
   /* Kafka broker errors: */
@@ -339,12 +345,33 @@ LibrdKafkaError.codes = {
   ERR_ELECTION_NOT_NEEDED: 84,
   /** No partition reassignment is in progress */
   ERR_NO_REASSIGNMENT_IN_PROGRESS: 85,
-  /** Deleting offsets of a topic while the consumer group is subscribed to it */
+  /** Deleting offsets of a topic while the consumer group is
+   * subscribed to it */
   ERR_GROUP_SUBSCRIBED_TO_TOPIC: 86,
   /** Broker failed to validate record */
   ERR_INVALID_RECORD: 87,
   /** There are unstable offsets that need to be cleared */
-  ERR_UNSTABLE_OFFSET_COMMIT: 88
+  ERR_UNSTABLE_OFFSET_COMMIT: 88,
+  /** Throttling quota has been exceeded */
+  ERR_THROTTLING_QUOTA_EXCEEDED: 89,
+  /** There is a newer producer with the same transactionalId
+   * which fences the current one */
+  ERR_PRODUCER_FENCED: 90,
+  /** Request illegally referred to resource that does not exist */
+  ERR_RESOURCE_NOT_FOUND: 91,
+  /** Request illegally referred to the same resource twice */
+  ERR_DUPLICATE_RESOURCE: 92,
+  /** Requested credential would not meet criteria for acceptability */
+  ERR_UNACCEPTABLE_CREDENTIAL: 93,
+  /** Indicates that either the sender or recipient of a
+   * voter-only request is not one of the expected voters */
+  ERR_INCONSISTENT_VOTER_SET: 94,
+  /** Invalid update version */
+  ERR_INVALID_UPDATE_VERSION: 95,
+  /** Unable to update finalized features due to server error */
+  ERR_FEATURE_UPDATE_FAILED: 96,
+  /** Request principal deserialization failed during forwarding */
+  ERR_PRINCIPAL_DESERIALIZATION_FAILURE: 97
 };
 
 /**

package.json

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
   "name": "node-rdkafka",
   "version": "v2.10.1",
   "description": "Node.js bindings for librdkafka",
-  "librdkafka": "1.5.3",
+  "librdkafka": "1.6.1",
   "main": "lib/index.js",
   "scripts": {
     "configure": "node-gyp configure",

0 commit comments
