
Commit 14c1e50

rashtao and Simran-B authored
[DE-627] Kafka Connector: batch writes (#354)
* Kafka Connector: batch writes
* Review

---------

Co-authored-by: Simran Spiller <[email protected]>
1 parent c523eac commit 14c1e50


6 files changed: +33 -15 lines changed


site/content/3.10/develop/integrations/kafka-connect-arangodb-sink-connector/_index.md

Lines changed: 3 additions & 5 deletions
```diff
@@ -18,7 +18,7 @@ This connector is compatible with:
 
 - Kafka `2.x` (from version `2.6` onward) and Kafka `3.x` (all versions)
 - JDK 8 and higher versions
-- all the non-EOLed [ArangoDB versions](https://www.arangodb.com/eol-notice)
+- ArangoDB 3.11.1 and higher versions
 
 ## Installation
 
@@ -254,13 +254,11 @@ See [SSL configuration](configuration.md#ssl) for further options.
 ## Limitations
 
 - The `VST` communication protocol (`connection.protocol=VST`) is currently not working (DE-619)
-- Documents are inserted one by one, bulk inserts will be implemented in a future release (DE-627)
-- In case of transient error, the entire Kafka Connect batch is retried (DE-651)
 - Record values are required to be object-like structures (DE-644)
 - Auto-creation of ArangoDB collection is not supported (DE-653)
 - `ssl.cert.value` does not support multiple certificates (DE-655)
-- Batch inserts are not guaranteed to be executed serially (FRB-300)
-- Batch inserts may succeed for some documents while failing for others (FRB-300)
+- Batch writes are not guaranteed to be executed serially (FRB-300)
+- Batch writes may succeed for some documents while failing for others (FRB-300)
 This has two important consequences:
 - Transient errors might be retried and succeed at a later point
 - Data errors might be asynchronously reported to the DLQ
```
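The two FRB-300 limitations above mean a single batch write can partially fail, with the failing records surfacing later rather than aborting the task. How such failures are tolerated, retried, and routed is governed by Kafka Connect's framework-level error-handling settings, not by anything in this commit. The following is a minimal sketch of a hypothetical connector configuration (the connector name, class, and topic are placeholder assumptions) that enables a dead letter queue for data errors:

```properties
# Hypothetical sink connector properties; only the errors.* keys are standard
# Kafka Connect settings, everything else here is a placeholder assumption.
name=arangodb-sink-example
# Assumed class name, not taken from this commit.
connector.class=com.arangodb.kafka.ArangoSinkConnector
topics=orders

# Retry retriable failures for up to 5 minutes, backing off up to 10 seconds.
errors.retry.timeout=300000
errors.retry.delay.max.ms=10000

# Tolerate data errors instead of failing the task, and log them.
errors.tolerance=all
errors.log.enable=true

# Route failing records asynchronously to a dead letter queue topic,
# including error context in the record headers.
errors.deadletterqueue.topic.name=dlq.arangodb-sink-example
errors.deadletterqueue.topic.replication.factor=1
errors.deadletterqueue.context.headers.enable=true
```

With `errors.tolerance=all` and a DLQ topic configured, a partially failed batch need not stop the connector: records that were written stay in ArangoDB, and the offending records can be inspected on the DLQ topic, which is consistent with the behavior the limitation describes.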

site/content/3.10/develop/integrations/kafka-connect-arangodb-sink-connector/configuration.md

Lines changed: 8 additions & 0 deletions
```diff
@@ -173,6 +173,14 @@ to `update`:
 - `true`: objects are merged
 - `false`: existing document fields are overwritten
 
+### batch.size
+
+- type: _int_
+- default: `3_000`
+
+Specifies how many records to attempt to batch together for insertion or deletion
+into the destination collection.
+
 ### insert.timeout.ms
 
 - type: _int_
```
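As a usage illustration only (not part of this commit), a connector configuration exercising the new option could look like the sketch below. The connector class, endpoints, collection, and topic names are assumptions; `batch.size` and `insert.timeout.ms` are the documented options.

```properties
# Minimal sketch; class name, endpoints, collection, and topic are assumptions.
name=arangodb-sink-batched
connector.class=com.arangodb.kafka.ArangoSinkConnector
topics=orders
connection.endpoints=localhost:8529
connection.collection=orders

# Group up to 3000 records per write request (3_000 is also the documented default).
batch.size=3000

# Documented companion option: timeout for insert operations, in milliseconds.
insert.timeout.ms=30000
```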

site/content/3.11/develop/integrations/kafka-connect-arangodb-sink-connector/_index.md

Lines changed: 3 additions & 5 deletions
```diff
@@ -18,7 +18,7 @@ This connector is compatible with:
 
 - Kafka `2.x` (from version `2.6` onward) and Kafka `3.x` (all versions)
 - JDK 8 and higher versions
-- all the non-EOLed [ArangoDB versions](https://www.arangodb.com/eol-notice)
+- ArangoDB 3.11.1 and higher versions
 
 ## Installation
 
@@ -254,13 +254,11 @@ See [SSL configuration](configuration.md#ssl) for further options.
 ## Limitations
 
 - The `VST` communication protocol (`connection.protocol=VST`) is currently not working (DE-619)
-- Documents are inserted one by one, bulk inserts will be implemented in a future release (DE-627)
-- In case of transient error, the entire Kafka Connect batch is retried (DE-651)
 - Record values are required to be object-like structures (DE-644)
 - Auto-creation of ArangoDB collection is not supported (DE-653)
 - `ssl.cert.value` does not support multiple certificates (DE-655)
-- Batch inserts are not guaranteed to be executed serially (FRB-300)
-- Batch inserts may succeed for some documents while failing for others (FRB-300)
+- Batch writes are not guaranteed to be executed serially (FRB-300)
+- Batch writes may succeed for some documents while failing for others (FRB-300)
 This has two important consequences:
 - Transient errors might be retried and succeed at a later point
 - Data errors might be asynchronously reported to the DLQ
```

site/content/3.11/develop/integrations/kafka-connect-arangodb-sink-connector/configuration.md

Lines changed: 8 additions & 0 deletions
```diff
@@ -173,6 +173,14 @@ to `update`:
 - `true`: objects are merged
 - `false`: existing document fields are overwritten
 
+### batch.size
+
+- type: _int_
+- default: `3_000`
+
+Specifies how many records to attempt to batch together for insertion or deletion
+into the destination collection.
+
 ### insert.timeout.ms
 
 - type: _int_
```
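One tuning note, offered as an assumption rather than something stated in this commit: `batch.size` can only group records that the Kafka Connect framework actually hands to the task at once, so the effective batch is also bounded by the consumer's `max.poll.records` (default 500). If the worker's `connector.client.config.override.policy` permits client overrides, the poll size can be raised to match, for example:

```properties
# Sketch only: align the consumer poll size with batch.size so a full
# 3000-record batch can actually be assembled per delivery. Requires the
# Connect worker to allow connector-level client overrides.
consumer.override.max.poll.records=3000
batch.size=3000
```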

site/content/3.12/develop/integrations/kafka-connect-arangodb-sink-connector/_index.md

Lines changed: 3 additions & 5 deletions
```diff
@@ -18,7 +18,7 @@ This connector is compatible with:
 
 - Kafka `2.x` (from version `2.6` onward) and Kafka `3.x` (all versions)
 - JDK 8 and higher versions
-- all the non-EOLed [ArangoDB versions](https://www.arangodb.com/eol-notice)
+- ArangoDB 3.11.1 and higher versions
 
 ## Installation
 
@@ -254,13 +254,11 @@ See [SSL configuration](configuration.md#ssl) for further options.
 ## Limitations
 
 - The `VST` communication protocol (`connection.protocol=VST`) is currently not working (DE-619)
-- Documents are inserted one by one, bulk inserts will be implemented in a future release (DE-627)
-- In case of transient error, the entire Kafka Connect batch is retried (DE-651)
 - Record values are required to be object-like structures (DE-644)
 - Auto-creation of ArangoDB collection is not supported (DE-653)
 - `ssl.cert.value` does not support multiple certificates (DE-655)
-- Batch inserts are not guaranteed to be executed serially (FRB-300)
-- Batch inserts may succeed for some documents while failing for others (FRB-300)
+- Batch writes are not guaranteed to be executed serially (FRB-300)
+- Batch writes may succeed for some documents while failing for others (FRB-300)
 This has two important consequences:
 - Transient errors might be retried and succeed at a later point
 - Data errors might be asynchronously reported to the DLQ
```

site/content/3.12/develop/integrations/kafka-connect-arangodb-sink-connector/configuration.md

Lines changed: 8 additions & 0 deletions
```diff
@@ -173,6 +173,14 @@ to `update`:
 - `true`: objects are merged
 - `false`: existing document fields are overwritten
 
+### batch.size
+
+- type: _int_
+- default: `3_000`
+
+Specifies how many records to attempt to batch together for insertion or deletion
+into the destination collection.
+
 ### insert.timeout.ms
 
 - type: _int_
```
