
Commit e6ba743

Merge branch 'patch-4' of https://github.com/anusricorp/azure-docs into hdi_kafkaapis
2 parents 3cb0d8f + a25352e commit e6ba743

File tree: 1 file changed

articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md

Lines changed: 33 additions & 6 deletions
@@ -69,7 +69,7 @@ The important things to understand in the `pom.xml` file are:

### Producer.java

-The producer communicates with the Kafka broker hosts (worker nodes) and sends data to a Kafka topic. The following code snippet is from the [Producer.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Producer.java) file from the [GitHub repository](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started) and shows how to set the producer properties:
+The producer communicates with the Kafka broker hosts (worker nodes) and sends data to a Kafka topic. The following code snippet is from the [Producer.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Producer.java) file from the [GitHub repository](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started) and shows how to set the producer properties. For Enterprise Security Enabled clusters, an additional property must be added: `properties.setProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");`
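For orientation, here is a minimal sketch of how the full producer setup might look once the ESP property is included. The `createProducer` helper and the serializer choices are illustrative assumptions, not code copied from Producer.java:

```java
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSetupSketch {
    // Builds a String/String producer for the given comma-separated broker list.
    // Pass espEnabled = true on an Enterprise Security Package (ESP) cluster.
    public static KafkaProducer<String, String> createProducer(String brokers, boolean espEnabled) {
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        if (espEnabled) {
            // The extra property required on ESP clusters, as described above.
            properties.setProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        }

        return new KafkaProducer<>(properties);
    }
}
```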

```java
Properties properties = new Properties();
@@ -83,7 +83,7 @@ KafkaProducer<String, String> producer = new KafkaProducer<>(properties);

### Consumer.java

-The consumer communicates with the Kafka broker hosts (worker nodes), and reads records in a loop. The following code snippet from the [Consumer.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Consumer.java) file sets the consumer properties:
+The consumer communicates with the Kafka broker hosts (worker nodes), and reads records in a loop. The following code snippet from the [Consumer.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Consumer.java) file sets the consumer properties. For Enterprise Security Enabled clusters, an additional property must be added: `properties.setProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");`
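Similarly, a minimal sketch of the consumer-side setup, assuming String deserializers, an `earliest` offset reset (the article notes the consumer reads from the start of the topic), and the `myTest` topic used later in the walkthrough; the `createConsumer` helper is an illustrative assumption:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerSetupSketch {
    // Builds a String/String consumer that reads from the start of the topic.
    // Pass espEnabled = true on an Enterprise Security Package (ESP) cluster.
    public static KafkaConsumer<String, String> createConsumer(String brokers, String groupId, boolean espEnabled) {
        Properties properties = new Properties();
        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Read from the start of the topic when no committed offset exists.
        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        if (espEnabled) {
            // The extra property required on ESP clusters, as described above.
            properties.setProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        }

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Collections.singletonList("myTest"));
        return consumer;
    }
}
```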

```java
KafkaConsumer<String, String> consumer;
@@ -109,7 +109,17 @@ In this code, the consumer is configured to read from the start of the topic (`a

The [Run.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Run.java) file provides a command-line interface that runs either the producer or consumer code. You must provide the Kafka broker host information as a parameter. You can optionally include a group ID value, which is used by the consumer process. If you create multiple consumer instances using the same group ID, they'll load balance reading from the topic.
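As a rough sketch of the command-line dispatch this paragraph describes; the argument handling and the default group ID below are illustrative assumptions rather than code copied from Run.java:

```java
public class RunSketch {
    // Usage: <producer|consumer> <brokerhosts> [groupid]
    public static void main(String[] args) {
        if (args.length < 2) {
            System.err.println("Usage: kafka-producer-consumer.jar <producer|consumer> <brokerhosts> [groupid]");
            System.exit(1);
        }

        String mode = args[0];
        String brokers = args[1];
        // Optional group ID; consumer instances sharing it load balance reads from the topic.
        String groupId = args.length > 2 ? args[2] : "mygroup";

        if ("producer".equals(mode)) {
            System.out.println("Would start the producer against " + brokers);
            // Hand off to the producer code here (see Producer.java in the sample).
        } else if ("consumer".equals(mode)) {
            System.out.println("Would start a consumer in group " + groupId + " against " + brokers);
            // Hand off to the consumer code here (see Consumer.java in the sample).
        } else {
            System.err.println("Unknown mode: " + mode);
            System.exit(1);
        }
    }
}
```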

-## Build and deploy the example
+## Use pre-built JAR files

+Download the jars from the [Kafka Get Started Azure sample](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/tree/master/Prebuilt-Jars). If your cluster is **Enterprise Security Package (ESP)** enabled, use `kafka-producer-consumer-esp.jar`. Use the command below to copy the jars to your cluster.

+```cmd
+scp kafka-producer-consumer*.jar sshuser@CLUSTERNAME-ssh.azurehdinsight.net:kafka-producer-consumer.jar
+```

+## Build the JAR files from code

If you would like to skip this step, prebuilt jars can be downloaded from the `Prebuilt-Jars` subdirectory. Download the kafka-producer-consumer.jar. If your cluster is **Enterprise Security Package (ESP)** enabled, use kafka-producer-consumer-esp.jar. Execute step 3 to copy the jar to your HDInsight cluster.

@@ -121,15 +131,16 @@ If you would like to skip this step, prebuilt jars can be downloaded from the `P
mvn clean package
```

-This command creates a directory named `target`, that contains a file named `kafka-producer-consumer-1.0-SNAPSHOT.jar`.
+This command creates a directory named `target`, that contains a file named `kafka-producer-consumer-1.0-SNAPSHOT.jar`. For ESP clusters, the file will be `kafka-producer-consumer-esp-1.0-SNAPSHOT.jar`.

3. Replace `sshuser` with the SSH user for your cluster, and replace `CLUSTERNAME` with the name of your cluster. Enter the following command to copy the `kafka-producer-consumer-1.0-SNAPSHOT.jar` file to your HDInsight cluster. When prompted enter the password for the SSH user.

```cmd
-scp ./target/kafka-producer-consumer-1.0-SNAPSHOT.jar sshuser@CLUSTERNAME-ssh.azurehdinsight.net:kafka-producer-consumer.jar
+scp ./target/kafka-producer-consumer*.jar sshuser@CLUSTERNAME-ssh.azurehdinsight.net:kafka-producer-consumer.jar
```

## <a id="run"></a> Run the example

1. Replace `sshuser` with the SSH user for your cluster, and replace `CLUSTERNAME` with the name of your cluster. Open an SSH connection to the cluster, by entering the following command. If prompted, enter the password for the SSH user account.
@@ -148,7 +159,6 @@ If you would like to skip this step, prebuilt jars can be downloaded from the `P

> [!Note]
> This command requires Ambari access. If your cluster is behind an NSG, run this command from a machine that can access Ambari.

1. Create Kafka topic, `myTest`, by entering the following command:

```bash
@@ -165,6 +175,7 @@ If you would like to skip this step, prebuilt jars can be downloaded from the `P

```bash
java -jar kafka-producer-consumer.jar consumer myTest $KAFKABROKERS
+scp ./target/kafka-producer-consumer*.jar sshuser@CLUSTERNAME-ssh.azurehdinsight.net:kafka-producer-consumer.jar
```

The records read, along with a count of records, is displayed.
@@ -200,6 +211,22 @@ Consumption by clients within the same group is handled through the partitions f

Records stored in Kafka are stored in the order they're received within a partition. To achieve in-ordered delivery for records *within a partition*, create a consumer group where the number of consumer instances matches the number of partitions. To achieve in-ordered delivery for records *within the topic*, create a consumer group with only one consumer instance.
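To make the partition-ordering point concrete, here is an illustrative sketch that starts one consumer instance per partition under a single group ID, reusing the hypothetical `createConsumer` helper from the earlier sketch:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerGroupSketch {
    // Starts one consumer per partition so each partition is read in order.
    public static void run(String brokers, String groupId, int partitionCount, boolean espEnabled) {
        ExecutorService pool = Executors.newFixedThreadPool(partitionCount);
        for (int i = 0; i < partitionCount; i++) {
            pool.submit(() -> {
                try (KafkaConsumer<String, String> consumer =
                         ConsumerSetupSketch.createConsumer(brokers, groupId, espEnabled)) {
                    while (true) {
                        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                        for (ConsumerRecord<String, String> record : records) {
                            System.out.printf("partition %d offset %d: %s%n",
                                record.partition(), record.offset(), record.value());
                        }
                    }
                }
            });
        }
    }
}
```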

+## Common issues faced

+1. Topic creation fails

+   If your cluster is Enterprise Security Pack enabled, use the [pre-built JAR files for producer and consumer](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Prebuilt-Jars/kafka-producer-consumer-esp.jar).

+   The ESP jar can be built from the code in the [`DomainJoined-Producer-Consumer` subdirectory](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/tree/master/DomainJoined-Producer-Consumer). Note that the producer and consumer properties have an additional property, `CommonClientConfigs.SECURITY_PROTOCOL_CONFIG`, for ESP enabled clusters.

+2. Facing issues with ESP enabled clusters

+   If produce and consume operations fail and you are using an ESP enabled cluster, check that the user `kafka` is present in all Ranger policies. If it is not present, add it to all Ranger policies.

## Clean up resources

To clean up the resources created by this tutorial, you can delete the resource group. Deleting the resource group also deletes the associated HDInsight cluster, and any other resources associated with the resource group.
