
Commit acfd13c

Updating documentation to put common issues
1 parent 78e38c0 commit acfd13c

1 file changed: +20 -47 lines changed


articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md

Lines changed: 20 additions & 47 deletions
@@ -69,7 +69,7 @@ The important things to understand in the `pom.xml` file are:

### Producer.java

-The producer communicates with the Kafka broker hosts (worker nodes) and sends data to a Kafka topic. The following code snippet is from the [Producer.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Producer.java) file from the [GitHub repository](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started) and shows how to set the producer properties:
+The producer communicates with the Kafka broker hosts (worker nodes) and sends data to a Kafka topic. The following code snippet is from the [Producer.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Producer.java) file from the [GitHub repository](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started) and shows how to set the producer properties. For Enterprise Security Package (ESP) enabled clusters, add the additional property `properties.setProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");`.

```java
Properties properties = new Properties();
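// Editorial sketch, not part of the original snippet: the diff truncates Producer.java here.
// For Enterprise Security Package (ESP) enabled clusters, the extra line this commit describes
// would sit alongside the other producer properties; it assumes
// `import org.apache.kafka.clients.CommonClientConfigs;` at the top of the file.
properties.setProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");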
@@ -83,7 +83,7 @@ KafkaProducer<String, String> producer = new KafkaProducer<>(properties);

### Consumer.java

-The consumer communicates with the Kafka broker hosts (worker nodes), and reads records in a loop. The following code snippet from the [Consumer.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Consumer.java) file sets the consumer properties:
+The consumer communicates with the Kafka broker hosts (worker nodes), and reads records in a loop. The following code snippet from the [Consumer.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Consumer.java) file sets the consumer properties. For Enterprise Security Package (ESP) enabled clusters, add the additional property `properties.setProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");`.

```java
KafkaConsumer<String, String> consumer;
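// Editorial sketch, not part of the original snippet: the diff truncates Consumer.java here.
// It assumes imports for ConsumerConfig, CommonClientConfigs, and StringDeserializer, and that
// `brokers` and `groupId` are the values passed in from Run.java; only the SASL line below is
// quoted verbatim by this commit, the other property names are illustrative.
Properties properties = new Properties();
properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // read from the start of the topic
properties.setProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT"); // ESP clusters only
consumer = new KafkaConsumer<>(properties);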
@@ -109,7 +109,14 @@ In this code, the consumer is configured to read from the start of the topic (`a

The [Run.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Producer-Consumer/src/main/java/com/microsoft/example/Run.java) file provides a command-line interface that runs either the producer or consumer code. You must provide the Kafka broker host information as a parameter. You can optionally include a group ID value, which is used by the consumer process. If you create multiple consumer instances using the same group ID, they'll load balance reading from the topic.
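To illustrate that interface, the commands below repeat the pattern used by the "Run the example" steps this commit removes further down: a command (`create`, `producer`, or `consumer`), then the topic name, then the broker hosts.

```bash
# Same invocations that appear in the removed "Run the example" steps below;
# $KAFKABROKERS is the broker host list those steps construct.
java -jar kafka-producer-consumer.jar create myTest $KAFKABROKERS
java -jar kafka-producer-consumer.jar producer myTest $KAFKABROKERS
java -jar kafka-producer-consumer.jar consumer myTest $KAFKABROKERS
```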

-## Build and deploy the example
+## Use Prebuilt-Jars
+
+Download the jars from the [Prebuilt-Jars](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/tree/master/Prebuilt-Jars) subdirectory. If your cluster is **Enterprise Security Package (ESP)** enabled, use kafka-producer-consumer-esp.jar. Use the following command to copy the jars to your cluster:
+
+```cmd
+scp kafka-producer-consumer*.jar [email protected]:kafka-producer-consumer.jar
+```
+
+## Build the jars from code

If you would like to skip this step, prebuilt jars can be downloaded from the `Prebuilt-Jars` subdirectory. Download the kafka-producer-consumer.jar. If your cluster is **Enterprise Security Package (ESP)** enabled, use kafka-producer-consumer-esp.jar. Execute step 3 to copy the jar to your HDInsight cluster.

@@ -121,56 +128,14 @@ If you would like to skip this step, prebuilt jars can be downloaded from the `P
mvn clean package
```

-This command creates a directory named `target`, that contains a file named `kafka-producer-consumer-1.0-SNAPSHOT.jar`.
+This command creates a directory named `target`, that contains a file named `kafka-producer-consumer-1.0-SNAPSHOT.jar`. For ESP clusters, the file will be `kafka-producer-consumer-esp-1.0-SNAPSHOT.jar`.

3. Replace `sshuser` with the SSH user for your cluster, and replace `CLUSTERNAME` with the name of your cluster. Enter the following command to copy the `kafka-producer-consumer-1.0-SNAPSHOT.jar` file to your HDInsight cluster. When prompted enter the password for the SSH user.

```cmd
-scp ./target/kafka-producer-consumer-1.0-SNAPSHOT.jar [email protected]:kafka-producer-consumer.jar
-```
-
-## <a id="run"></a> Run the example
-
-1. Replace `sshuser` with the SSH user for your cluster, and replace `CLUSTERNAME` with the name of your cluster. Open an SSH connection to the cluster, by entering the following command. If prompted, enter the password for the SSH user account.
-
-```cmd
+scp ./target/kafka-producer-consumer*.jar [email protected]:kafka-producer-consumer.jar
```

-1. To get the Kafka broker hosts, substitute the values for `<clustername>` and `<password>` in the following command and execute it. Use the same casing for `<clustername>` as shown in the Azure portal. Replace `<password>` with the cluster login password, then execute:
-
-```bash
-sudo apt -y install jq
-export clusterName='<clustername>'
-export password='<password>'
-export KAFKABROKERS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2);
-```
-
-> [!Note]
-> This command requires Ambari access. If your cluster is behind an NSG, run this command from a machine that can access Ambari.
-
-1. Create Kafka topic, `myTest`, by entering the following command:
-
-```bash
-java -jar kafka-producer-consumer.jar create myTest $KAFKABROKERS
-```
-
-1. To run the producer and write data to the topic, use the following command:
-
-```bash
-java -jar kafka-producer-consumer.jar producer myTest $KAFKABROKERS
-```
-
-1. Once the producer has finished, use the following command to read from the topic:
-
-```bash
-java -jar kafka-producer-consumer.jar consumer myTest $KAFKABROKERS
-```
-
-The records read, along with a count of records, is displayed.
-
-1. Use __Ctrl + C__ to exit the consumer.
-
### Multiple consumers

Kafka consumers use a consumer group when reading records. Using the same group with multiple consumers results in load balanced reads from a topic. Each consumer in the group receives a portion of the records.
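For example, a load-balanced read might look like the following sketch; it assumes the group ID is passed as an optional final argument to the `consumer` command, as the Run.java description above suggests.

```bash
# Run each command in a separate SSH session. Both consumers use the same (assumed)
# group ID "mygroup", so the records in the topic are split between them.
java -jar kafka-producer-consumer.jar consumer myTest $KAFKABROKERS mygroup
java -jar kafka-producer-consumer.jar consumer myTest $KAFKABROKERS mygroup
```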
@@ -200,6 +165,14 @@ Consumption by clients within the same group is handled through the partitions f

Records stored in Kafka are stored in the order they're received within a partition. To achieve in-ordered delivery for records *within a partition*, create a consumer group where the number of consumer instances matches the number of partitions. To achieve in-ordered delivery for records *within the topic*, create a consumer group with only one consumer instance.

+## Common issues faced
+
+1. I am unable to create a topic.
+
+If your cluster is Enterprise Security Package (ESP) enabled, use the prebuilt jar [kafka-producer-consumer-esp.jar](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/blob/master/Prebuilt-Jars/kafka-producer-consumer-esp.jar).
+
+The ESP jar can also be built from the [`DomainJoined-Producer-Consumer`](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/tree/master/DomainJoined-Producer-Consumer) subdirectory. Note the producer and consumer properties: they include an additional property, `CommonClientConfigs.SECURITY_PROTOCOL_CONFIG`, for ESP enabled clusters.
+
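As an illustrative aside (not part of the commit), building the ESP jar from that subdirectory would follow the same Maven step shown earlier; the local directory name is taken from the repository URL above.

```bash
# Assumes the repository has been cloned locally; `mvn clean package` is the same
# build command the article uses for the non-ESP project.
cd hdinsight-kafka-java-get-started/DomainJoined-Producer-Consumer
mvn clean package
# The resulting jar is expected under ./target (kafka-producer-consumer-esp-1.0-SNAPSHOT.jar).
```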
## Clean up resources

To clean up the resources created by this tutorial, you can delete the resource group. Deleting the resource group also deletes the associated HDInsight cluster, and any other resources associated with the resource group.
