# Datadog Kafka Connect Logs
`datadog-kafka-connect-logs` is a [Kafka Connector](http://kafka.apache.org/documentation.html#connect) for sending
records from Kafka as logs to the [Datadog Logs Intake API](https://docs.datadoghq.com/api/v1/logs/).
It is a plugin meant to be installed on a [Kafka Connect Cluster](https://docs.confluent.io/current/connect/) running
alongside a [Kafka Broker](https://www.confluent.io/what-is-apache-kafka/).
## Requirements
2. Java 8 and above.
3. Confluent Platform 4.0.x and above (optional).
To install the plugin, one must have a working instance of Kafka Connect connected to a Kafka Broker. See also
[Confluent's](https://www.confluent.io/product/confluent-platform/) documentation for an easy way to set this up.
## Installation and Setup
### Download from GitHub
Download the latest version from the GitHub [releases page](https://github.com/DataDog/datadog-kafka-connect-logs/releases).
Also see [Confluent's documentation](https://docs.confluent.io/current/connect/managing/community.html) on installing
community connectors.
### Build from Source
1. Clone the repo from https://github.com/DataDog/datadog-kafka-connect-logs
2. Verify that a Java 8 JRE or JDK is installed.
3. Run `mvn clean compile package`. This builds the jar in the `/target` directory. The file name has the format `datadog-kafka-connect-logs-[VERSION].jar`. (The full sequence is sketched after this list.)
4. The zip file for use on [Confluent Hub](https://www.confluent.io/hub/) can be found in `target/components/packages`.
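
Taken together, steps 1-4 look like this (the version suffix depends on the release you build):

```sh
git clone https://github.com/DataDog/datadog-kafka-connect-logs
cd datadog-kafka-connect-logs
mvn clean compile package
ls target/datadog-kafka-connect-logs-*.jar   # the connector jar
ls target/components/packages/               # the zip for Confluent Hub
```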
## Quick Start
1. To install the plugin, place the plugin's jar file (see [previous section](#installation-and-setup) on how to download or build it)
in or under the location specified in `plugin.path` (a minimal copy is sketched below). If you use Confluent Platform, you can instead install the Confluent Hub zip from `target/components/packages` with the `confluent-hub install` CLI.
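
For a plain Kafka Connect installation, here is a minimal sketch, assuming the worker is configured with `plugin.path=/usr/share/kafka-connect/plugins` (an illustrative path, not from the original docs):

```sh
# Illustrative plugin.path -- match the value in your Connect worker properties.
PLUGIN_DIR=/usr/share/kafka-connect/plugins/datadog-kafka-connect-logs
mkdir -p "$PLUGIN_DIR"
cp target/datadog-kafka-connect-logs-*.jar "$PLUGIN_DIR/"
# Restart the Connect worker so it rescans plugin.path and discovers the connector.
```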
| Name | Description | Default |
|------|-------------|---------|
|`datadog.site`| The site of the Datadog intake to send logs to (for example `datadoghq.eu` to send data to the EU site). |`datadoghq.com`|
|`datadog.url`| Custom Datadog URL endpoint where your logs will be sent. `datadog.url` takes precedence over `datadog.site`. Example: `http-intake.logs.datadoghq.com:443`. ||
|`datadog.tags`| Tags associated with your logs, in a comma-separated `tag:value` format. ||
|`datadog.service`| The name of the application or service generating the log events. ||
|`datadog.hostname`| The name of the originating host of the log. ||
|`datadog.proxy.url`| Proxy endpoint when logs are not directly forwarded to Datadog. ||
|`datadog.proxy.port`| Proxy port when logs are not directly forwarded to Datadog. ||
|`datadog.retry.max`| The number of retries before the output plugin stops. |`5`|
|`datadog.retry.backoff_ms`| The time in milliseconds to wait following an error before a retry attempt is made. |`3000`|
|`datadog.add_published_date`| Valid settings are `true` or `false`. When set to `true`, the timestamp is retrieved from the Kafka record and passed to Datadog as `published_date`. ||
|`datadog.parse_record_headers`| Valid settings are `true` or `false`. When set to `true`, Kafka record headers are parsed and passed to Datadog as a `kafkaheaders` object. |`false`|
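
As an illustration, a sink configuration combining several of these options might look like the sketch below. The connector class name and the `datadog.api_key` property do not appear in the table above, so treat them as assumptions based on the project's naming; the topic, tags, and key are placeholders.

```json
{
  "name": "datadog-logs-sink",
  "config": {
    "connector.class": "com.datadoghq.connect.logs.DatadogLogsSinkConnector",
    "topics": "my-logs-topic",
    "datadog.api_key": "<YOUR_API_KEY>",
    "datadog.site": "datadoghq.eu",
    "datadog.tags": "env:prod,team:platform",
    "datadog.service": "my-service",
    "datadog.retry.max": "5",
    "datadog.retry.backoff_ms": "3000"
  }
}
```

A file like this can be submitted to a Connect worker's REST API, for example with `curl -X POST -H "Content-Type: application/json" --data @datadog-logs-sink.json http://localhost:8083/connectors`.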
### Troubleshooting performance
To improve performance of the connector, you can try a number of configuration options.
## Single Message Transforms
Kafka Connect supports Single Message Transforms that let you change the structure or content of a message. To
experiment with this feature, try adding these lines to your sink connector configuration:
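
As a generic stand-in (not necessarily the example the project ships), Kafka's built-in `InsertField` transform adds a static field to every record before it is sent to Datadog; the field name and value here are arbitrary:

```json
"transforms": "addSource",
"transforms.addSource.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.addSource.static.field": "source",
"transforms.addSource.static.value": "kafka-connect-logs"
```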