
Commit b551a3e

Merge pull request #47557 from anushreeringne/patch-2

Updating documentation to include ESP details

2 parents: 2bf443a + 1f62f18

File tree

1 file changed: +11 additions, −27 deletions

articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md

@@ -29,21 +29,20 @@ For more information on the APIs, see Apache documentation on the [Producer API]

 ## Prerequisites

-* Apache Kafka on HDInsight 3.6. To learn how to create a Kafka on HDInsight cluster, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).
+* An Apache Kafka on HDInsight cluster. To learn how to create the cluster, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).
 * [Java Developer Kit (JDK) version 8](https://aka.ms/azure-jdks) or an equivalent, such as OpenJDK.
 * [Apache Maven](https://maven.apache.org/download.cgi) properly [installed](https://maven.apache.org/install.html) according to Apache. Maven is a project build system for Java projects.
-* An SSH client. For more information, see [Connect to HDInsight (Apache Hadoop) using SSH](../hdinsight-hadoop-linux-use-ssh-unix.md).
+* An SSH client such as PuTTY. For more information, see [Connect to HDInsight (Apache Hadoop) using SSH](../hdinsight-hadoop-linux-use-ssh-unix.md).

 ## Understand the code

-The example application is located at [https://github.com/Azure-Samples/hdinsight-kafka-java-get-started](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started), in the `Producer-Consumer` subdirectory. The application consists primarily of four files:
+The example application is located at [https://github.com/Azure-Samples/hdinsight-kafka-java-get-started](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started), in the `Producer-Consumer` subdirectory. If you're using an **Enterprise Security Package (ESP)** enabled Kafka cluster, use the version of the application located in the `DomainJoined-Producer-Consumer` subdirectory.
+
+The application consists primarily of the following files:
 * `pom.xml`: This file defines the project dependencies, Java version, and packaging methods.
 * `Producer.java`: This file sends random sentences to Kafka using the producer API.
 * `Consumer.java`: This file uses the consumer API to read data from Kafka and emit it to STDOUT.
+* `AdminClientWrapper.java`: This file uses the admin API to create, describe, and delete Kafka topics.
 * `Run.java`: The command-line interface used to run the producer and consumer code.

 ### Pom.xml
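As the file list above notes, `Producer.java` sends random sentences to Kafka through the producer API. The following is a hypothetical, self-contained sketch of that kind of sentence generation, not the repository's actual source; the `KafkaProducer` send call is reduced to a comment so the sketch runs without the kafka-clients dependency.

```java
import java.util.Random;

// Hypothetical sketch of the random-sentence generation a producer like
// Producer.java performs. The real sample wraps each sentence in a
// ProducerRecord and sends it to a Kafka topic; that call is omitted here.
public class SentenceSketch {
    static final String[] WORDS = {
        "the", "quick", "brown", "fox", "jumped", "over", "lazy", "dog"
    };

    // Build one space-separated sentence of `length` randomly chosen words.
    static String randomSentence(Random rng, int length) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; i++) {
            if (i > 0) sb.append(' ');
            sb.append(WORDS[rng.nextInt(WORDS.length)]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed for repeatable output
        for (int i = 0; i < 3; i++) {
            // In the real producer: producer.send(new ProducerRecord<>(topic, sentence));
            System.out.println(randomSentence(rng, 5));
        }
    }
}
```

`Consumer.java` then reads these sentences back from the same topic and writes them to STDOUT.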
@@ -112,9 +111,11 @@ The [Run.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started

 ## Build and deploy the example

+If you would like to skip this step, prebuilt jars can be downloaded from the `Prebuilt-Jars` subdirectory. Download kafka-producer-consumer.jar. If your cluster is **Enterprise Security Package (ESP)** enabled, use kafka-producer-consumer-esp.jar. Execute step 3 to copy the jar to your HDInsight cluster.
+
 1. Download and extract the examples from [https://github.com/Azure-Samples/hdinsight-kafka-java-get-started](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started).

-2. Set your current directory to the location of the `hdinsight-kafka-java-get-started\Producer-Consumer` directory and use the following command:
+2. Set your current directory to the location of the `hdinsight-kafka-java-get-started\Producer-Consumer` directory. If you're using an **Enterprise Security Package (ESP)** enabled Kafka cluster, set the location to the `DomainJoined-Producer-Consumer` subdirectory. Use the following command to build the application:

    ```cmd
    mvn clean package
@@ -136,29 +137,12 @@ The [Run.java](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started

    ```

-1. Install [jq](https://stedolan.github.io/jq/), a command-line JSON processor. From the open SSH connection, enter following command to install `jq`:
+1. To get the Kafka broker hosts, substitute the values for `<clustername>` and `<password>` in the following command and execute it. Use the same casing for `<clustername>` as shown in the Azure portal, and replace `<password>` with the cluster login password:

    ```bash
    sudo apt -y install jq
-   ```
-
-1. Set up password variable. Replace `PASSWORD` with the cluster login password, then enter the command:
-
-   ```bash
-   export password='PASSWORD'
-   ```
-
-1. Extract correctly cased cluster name. The actual casing of the cluster name may be different than you expect, depending on how the cluster was created. This command will obtain the actual casing, and then store it in a variable. Enter the following command:
-
-   ```bash
-   export clusterName=$(curl -u admin:$password -sS -G "http://headnodehost:8080/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name')
-   ```
-
-   > [!Note]
-   > If you're doing this process from outside the cluster, there is a different procedure for storing the cluster name. Get the cluster name in lower case from the Azure portal. Then, substitute the cluster name for `<clustername>` in the following command and execute it: `export clusterName='<clustername>'`.
-
-1. To get the Kafka broker hosts, use the following command:
-
-   ```bash
+   export clusterName='<clustername>'
+   export password='<password>'
    export KAFKABROKERS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2);
    ```

0 commit comments
