Commit 32dbf25

devops3b
1 parent d7cce9d commit 32dbf25

File tree

1 file changed: +11 −17 lines changed


articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md

Lines changed: 11 additions & 17 deletions
@@ -7,7 +7,7 @@ ms.reviewer: jasonh
 ms.service: hdinsight
 ms.custom: hdinsightactive
 ms.topic: tutorial
-ms.date: 10/02/2019
+ms.date: 10/08/2019
 #Customer intent: As a developer, I need to create an application that uses the Kafka consumer/producer API with Kafka on HDInsight
 ---
 
@@ -55,9 +55,9 @@ The important things to understand in the `pom.xml` file are:
     ```xml
     <!-- Kafka client for producer/consumer operations -->
     <dependency>
-      <groupId>org.apache.kafka</groupId>
-      <artifactId>kafka-clients</artifactId>
-      <version>${kafka.version}</version>
+      <groupId>org.apache.kafka</groupId>
+      <artifactId>kafka-clients</artifactId>
+      <version>${kafka.version}</version>
     </dependency>
     ```
 
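The `${kafka.version}` placeholder in the dependency above resolves against a Maven `<properties>` entry elsewhere in the project's `pom.xml`. A quick offline sketch of that wiring; the pom fragment and the version value below are illustrative, not the sample project's actual pin:

```shell
# Hypothetical pom.xml fragment showing how ${kafka.version} pairs with a
# <properties> entry. The version value here is illustrative only.
cat > /tmp/pom-fragment.xml <<'EOF'
<properties>
  <kafka.version>2.1.1</kafka.version>
</properties>
EOF

# Extract the property value with sed (no Maven needed for this check).
kafka_version=$(sed -n 's:.*<kafka.version>\(.*\)</kafka.version>.*:\1:p' /tmp/pom-fragment.xml)
echo "$kafka_version"
```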
@@ -136,52 +136,46 @@ The [Run.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started
 
     ```
 
-2. Install [jq](https://stedolan.github.io/jq/), a command-line JSON processor. From the open SSH connection, enter following command to install `jq`:
+1. Install [jq](https://stedolan.github.io/jq/), a command-line JSON processor. From the open SSH connection, enter following command to install `jq`:
 
     ```bash
     sudo apt -y install jq
     ```
 
-3. Set up password variable. Replace `PASSWORD` with the cluster login password, then enter the command:
+1. Set up password variable. Replace `PASSWORD` with the cluster login password, then enter the command:
 
     ```bash
     export password='PASSWORD'
     ```
 
-4. Extract correctly cased cluster name. The actual casing of the cluster name may be different than you expect, depending on how the cluster was created. This command will obtain the actual casing, and then store it in a variable. Enter the following command:
-
-    ```bash
-    export clusterName=$(curl -u admin:$password -sS -G "http://headnodehost:8080/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name')
-    ```
-
-5. To get the Kafka broker hosts and the Apache Zookeeper hosts, use the following command:
+1. To get the Kafka broker hosts and the Apache Zookeeper hosts, use the following command:
 
     ```bash
     export KAFKABROKERS=`curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER \
     | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2`
     ```
 
-6. Create Kafka topic, `myTest`, by entering the following command:
+1. Create Kafka topic, `myTest`, by entering the following command:
 
     ```bash
     java -jar kafka-producer-consumer.jar create myTest $KAFKABROKERS
     ```
 
-7. To run the producer and write data to the topic, use the following command:
+1. To run the producer and write data to the topic, use the following command:
 
     ```bash
     java -jar kafka-producer-consumer.jar producer myTest $KAFKABROKERS
     ```
 
-8. Once the producer has finished, use the following command to read from the topic:
+1. Once the producer has finished, use the following command to read from the topic:
 
     ```bash
     java -jar kafka-producer-consumer.jar consumer myTest $KAFKABROKERS
     ```
 
     The records read, along with a count of records, is displayed.
 
-9. Use __Ctrl + C__ to exit the consumer.
+1. Use __Ctrl + C__ to exit the consumer.
 
 ### Multiple consumers
 
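The two `jq` filters touched by this hunk (the removed `.items[].Clusters.cluster_name` extraction and the surviving broker-list pipeline) can be exercised offline against stubbed Ambari-style JSON. The cluster and host names below are made up for illustration, not real cluster output:

```shell
# Stub of the Ambari /api/v1/clusters response used by the removed
# clusterName step; the cluster name is hypothetical.
cat > /tmp/clusters.json <<'EOF'
{ "items": [ { "Clusters": { "cluster_name": "MyKafkaCluster" } } ] }
EOF
clusterName=$(jq -r '.items[].Clusters.cluster_name' /tmp/clusters.json)
echo "$clusterName"

# Stub of the KAFKA_BROKER component response used by the KAFKABROKERS step;
# host names are hypothetical.
cat > /tmp/brokers.json <<'EOF'
{
  "host_components": [
    { "HostRoles": { "host_name": "wn0-kafka.example.internal" } },
    { "HostRoles": { "host_name": "wn1-kafka.example.internal" } },
    { "HostRoles": { "host_name": "wn2-kafka.example.internal" } }
  ]
}
EOF
# Same filter as in the hunk: suffix each host with :9092, join with commas,
# then keep only the first two entries via cut.
KAFKABROKERS=$(jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' /tmp/brokers.json | cut -d',' -f1,2)
echo "$KAFKABROKERS"
```

Note that because the commit deletes the step that sets `$clusterName` while the `KAFKABROKERS` command still references it, the stubbed run above also shows what that variable is expected to hold.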