Commit b87c73f

Added Kafka Connect configuration summary (#324)

1 parent d1a1cd6 commit b87c73f

1 file changed: doc/asciidoc/kafka-connect/index.adoc (68 additions, 1 deletion)
@@ -253,6 +253,7 @@ Following an example:
}
----

[[kafka_connect_error_handling]]
=== How to deal with bad data

In Kafka Connect plugin, in the creation phase of the Sink instance, in addition to the properties
@@ -289,4 +290,70 @@ Below you see the data that has been ingested into Neo4j. During my testing I go

image::../../images/confluent-imported-data.png[title="Confluent Platform Management", align="center"]

include::config-override-policy/index.adoc[]

=== Configuration Summary

The following is a summary of all the configuration parameters you can use for the Kafka Connect plugin:

.Kafka Connect configuration parameters
[%autowidth,cols="m,m,m,a", opts=header]
|===
| Name
| Value
| Mandatory
| Note

| database | <DATABASE_NAME> | false | Specify a database name only if you want to use a non-default database. The default value is 'neo4j'
| topics | <topicA,topicB> | true | A comma-separated list of topics
| connector.class | streams.kafka.connect.sink.Neo4jSinkConnector | true |
| key.converter | org.apache.kafka.connect.storage.StringConverter | false | Converter class for key Connect data
| value.converter | org.apache.kafka.connect.json.JsonConverter | false | Converter class for value Connect data
| key.converter.schemas.enable | true/false | false | If true, the key will be treated as a composite JSON object containing the schema and the data. The default value is false
| value.converter.schemas.enable | true/false | false | If true, the value will be treated as a composite JSON object containing the schema and the data. The default value is false
| key.converter.schema.registry.url | http://localhost:8081 | false | The Schema Registry URL has to be provided only when you decide to use the AvroConverter
| value.converter.schema.registry.url | http://localhost:8081 | false | The Schema Registry URL has to be provided only when you decide to use the AvroConverter
| kafka.bootstrap.servers | <localhost:9092> | false | The broker URI is mandatory only if you have configured the DLQ
| kafka.<any_other_kafka_property> | | false |
| errors.tolerance | all/none | false | all == lenient, silently ignore bad messages; none (default) means that any error will result in a connector failure
| errors.log.enable | false/true | false | Log errors (default: false)
| errors.log.include.messages | false/true | false | Log bad messages too (default: false)
| errors.deadletterqueue.topic.name | topic-name | false | Dead letter queue topic name; if not set, no DLQ is used (default: not set)
| errors.deadletterqueue.context.headers.enable | false/true | false | Enrich messages with metadata headers like exception, timestamp, original topic, original partition (default: false)
| errors.deadletterqueue.context.headers.prefix | prefix-text | false | Common prefix for header entries, e.g. `"__streams.errors."` (default: not set)
| errors.deadletterqueue.topic.replication.factor | 3/1 | false | Replication factor; needs to be set to 1 for a single-node cluster (default: 3)
| neo4j.server.uri | "bolt://neo4j:7687" | true | Neo4j server URI
| neo4j.authentication.basic.username | your_neo4j_user | true | Neo4j username
| neo4j.authentication.basic.password | your_neo4j_password | true | Neo4j password
| neo4j.encryption.enabled | true/false | false |
| neo4j.topic.cdc.sourceId | <list of topics separated by semicolon> | false |
| neo4j.topic.cdc.sourceId.labelName | <the label attached to the node> | false | The default value is *SourceEvent*
| neo4j.topic.cdc.sourceId.idName | <the id name given to the CDC id field> | false | The default value is *sourceId*
| neo4j.topic.cdc.schema | <list of topics separated by semicolon> | false |
| neo4j.topic.pattern.node.<TOPIC_NAME> | <node extraction pattern> | false |
| neo4j.topic.pattern.relationship.<TOPIC_NAME> | <relationship extraction pattern> | false |
| neo4j.topic.cud | <list of topics separated by semicolon> | false |
|===
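
As a reference, here is a minimal sketch of a connector definition that combines several of these parameters, in the JSON form you would POST to the Kafka Connect REST API. The connector name, topic names, credentials, and the node extraction pattern are illustrative placeholders, not values prescribed by this guide:

[source,json]
----
{
  "name": "Neo4jSinkConnectorExample",
  "config": {
    "connector.class": "streams.kafka.connect.sink.Neo4jSinkConnector",
    "topics": "my-topic",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "neo4j.server.uri": "bolt://neo4j:7687",
    "neo4j.authentication.basic.username": "your_neo4j_user",
    "neo4j.authentication.basic.password": "your_neo4j_password",
    "neo4j.topic.pattern.node.my-topic": "(:Person{!name})",
    "errors.tolerance": "all",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "errors.deadletterqueue.topic.name": "my-topic-dlq",
    "errors.deadletterqueue.topic.replication.factor": "1",
    "errors.deadletterqueue.context.headers.enable": "true",
    "kafka.bootstrap.servers": "localhost:9092"
  }
}
----

Note how `kafka.bootstrap.servers` is set here only because a dead letter queue is configured, matching the dependency described in the table above.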
[NOTE]
====
If you need to manage data in JSON format without using the Schema Registry, you can use
`org.apache.kafka.connect.json.JsonConverter` and disable both `key.converter.schemas.enable` and
`value.converter.schemas.enable`.

Other supported converters are:

* *org.apache.kafka.connect.storage.StringConverter*
* *org.apache.kafka.connect.converters.ByteArrayConverter*
* *io.confluent.connect.avro.AvroConverter*

Please see the following for further details: https://docs.confluent.io/current/connect/userguide.html#configuring-key-and-value-converters

For further information about Kafka Connect properties, please check out the following:

* https://docs.confluent.io/current/installation/configuration/connect/sink-connect-configs.html#kconnect-sink-configurations
* https://docs.confluent.io/current/installation/configuration/connect/source-connect-configs.html#kconnect-source-configurations

For further details about the error handling properties, refer to the <<kafka_connect_error_handling, How to deal with bad data>> section.
====
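
To make the `schemas.enable` switches concrete: with `value.converter.schemas.enable=true`, the `JsonConverter` expects every message value to be a composite JSON object carrying both the schema and the data. A minimal sketch of such a value (the `name` field is purely illustrative):

[source,json]
----
{
  "schema": {
    "type": "struct",
    "optional": false,
    "fields": [
      { "field": "name", "type": "string", "optional": false }
    ]
  },
  "payload": { "name": "John" }
}
----

With the option disabled, the same record is just the plain payload, i.e. `{ "name": "John" }`.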
