articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md
22 additions & 21 deletions
@@ -7,7 +7,7 @@ ms.reviewer: jasonh
 ms.service: hdinsight
 ms.custom: hdinsightactive
 ms.topic: tutorial
-ms.date: 06/24/2019
+ms.date: 10/08/2019
 #Customer intent: As a developer, I need to create an application that uses the Kafka consumer/producer API with Kafka on HDInsight
 ---

@@ -55,9 +55,9 @@ The important things to understand in the `pom.xml` file are:
 ```xml
 <!-- Kafka client for producer/consumer operations -->
 <dependency>
-    <groupId>org.apache.kafka</groupId>
-    <artifactId>kafka-clients</artifactId>
-    <version>${kafka.version}</version>
+    <groupId>org.apache.kafka</groupId>
+    <artifactId>kafka-clients</artifactId>
+    <version>${kafka.version}</version>
 </dependency>
 ```

@@ -108,7 +108,7 @@ In this code, the consumer is configured to read from the start of the topic (`a

 ### Run.java

-The [Run.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Run.java) file provides a command-line interface that runs either the producer or consumer code. You must provide the Kafka broker host information as a parameter. You can optionally include a group ID value, which is used by the consumer process. If you create multiple consumer instances using the same group ID, they will load balance reading from the topic.
+The [Run.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Run.java) file provides a command-line interface that runs either the producer or consumer code. You must provide the Kafka broker host information as a parameter. You can optionally include a group ID value, which is used by the consumer process. If you create multiple consumer instances using the same group ID, they'll load balance reading from the topic.

 ## Build and deploy the example

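The CLI described in the Run.java change above passes the broker hosts and an optional group ID through to the consumer configuration. For illustration only — this is not the repository's Consumer.java, just a minimal sketch assuming kafka-clients 2.0 or later (for `poll(Duration)`), with the class name and the `test` topic name as assumptions — a consumer wired up with a group ID might look like this:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class GroupConsumerSketch {
    public static void main(String[] args) {
        String brokers = args[0];                               // Kafka broker hosts, e.g. "host1:9092,host2:9092"
        String groupId = args.length > 1 ? args[1] : "myGroup"; // consumers sharing this ID split the partitions

        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);
        props.put("group.id", groupId);
        props.put("auto.offset.reset", "earliest");             // read from the start of the topic
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test")); // "test" is an assumed topic name
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
```

Running two copies of this sketch with the same group ID would cause Kafka to split the topic's partitions between them, which is the load-balancing behavior the paragraph describes.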
@@ -136,55 +136,56 @@ The [Run.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started
-2. Install [jq](https://stedolan.github.io/jq/), a command-line JSON processor. From the open SSH connection, enter following command to install `jq`:
+1. Install [jq](https://stedolan.github.io/jq/), a command-line JSON processor. From the open SSH connection, enter following command to install `jq`:

 ```bash
 sudo apt -y install jq
 ```

-3. Set up environment variables. Replace `PASSWORD` and `CLUSTERNAME` with the cluster login password and cluster name respectively, then enter the command:
+1. Set up password variable. Replace `PASSWORD` with the cluster login password, then enter the command:

 ```bash
 export password='PASSWORD'
-export clusterNameA='CLUSTERNAME'
 ```

-4. Extract correctly cased cluster name. The actual casing of the cluster name may be different than you expect, depending on how the cluster was created. This command will obtain the actual casing, store it in a variable, and then display the correctly cased name, and the name you provided earlier. Enter the following command:
+1. Extract correctly cased cluster name. The actual casing of the cluster name may be different than you expect, depending on how the cluster was created. This command will obtain the actual casing, and then store it in a variable. Enter the following command:
…
 > If you're doing this process from outside the cluster, there is a different procedure for storing the cluster name. Get the cluster name in lower case from the Azure portal. Then, substitute the cluster name for `<clustername>` in the following command and execute it: `export clusterName='<clustername>'`.

-5. To get the Kafka broker hosts and the Apache Zookeeper hosts, use the following command:
+1. To get the Kafka broker hosts, use the following command:
…
 The records read, along with a count of records, is displayed.

-9. Use __Ctrl + C__ to exit the consumer.
+1. Use __Ctrl + C__ to exit the consumer.

 ### Multiple consumers

@@ -213,7 +214,7 @@ Consumption by clients within the same group is handled through the partitions f
 > [!IMPORTANT]
 > There cannot be more consumer instances in a consumer group than partitions. In this example, one consumer group can contain up to eight consumers since that is the number of partitions in the topic. Or you can have multiple consumer groups, each with no more than eight consumers.

-Records stored in Kafka are stored in the order they are received within a partition. To achieve in-ordered delivery for records *within a partition*, create a consumer group where the number of consumer instances matches the number of partitions. To achieve in-ordered delivery for records *within the topic*, create a consumer group with only one consumer instance.
+Records stored in Kafka are stored in the order they're received within a partition. To achieve in-ordered delivery for records *within a partition*, create a consumer group where the number of consumer instances matches the number of partitions. To achieve in-ordered delivery for records *within the topic*, create a consumer group with only one consumer instance.

 ## Clean up resources

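The partition count is what bounds a consumer group's useful size, as the hunk above states. As an illustrative sketch only (the class name and the `test` topic name are assumptions, and it reuses the same client configuration as the earlier sketch), you could query that count from the topic metadata like this:

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

import java.util.List;
import java.util.Properties;

public class PartitionCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", args[0]);   // comma-separated Kafka broker hosts
        props.put("group.id", "partition-check");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // partitionsFor returns metadata for every partition of the topic;
            // a consumer group gains nothing from having more instances than this count.
            List<PartitionInfo> partitions = consumer.partitionsFor("test"); // "test" is an assumed topic name
            System.out.println("Partitions: " + partitions.size());
            System.out.println("Useful consumers per group: at most " + partitions.size());
        }
    }
}
```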
@@ -230,4 +231,4 @@ To remove the resource group using the Azure portal:
 In this document, you learned how to use the Apache Kafka Producer and Consumer API with Kafka on HDInsight. Use the following to learn more about working with Kafka: