Commit 06cd888

Merge pull request #271170 from sreekzz/Change-Kafka-Version
Replaced text Kafka 2.4 to 3.2
2 parents 3413c05 + 949ff38 commit 06cd888

6 files changed (+21 −31 lines)

articles/hdinsight-aks/flink/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md

Lines changed: 1 addition & 1 deletion
@@ -103,7 +103,7 @@ Flink provides an Apache Kafka connector for reading data from and writing data
 *abfsGen2.java*
 
 > [!Note]
-> Replace [Apache Kafka on HDInsight cluster](../../hdinsight/kafka/apache-kafka-get-started.md) bootStrapServers with your own brokers for Kafka 2.4 or 3.2
+> Replace [Apache Kafka on HDInsight cluster](../../hdinsight/kafka/apache-kafka-get-started.md) bootStrapServers with your own brokers for Kafka 3.2
 
 ``` java
 package contoso.example;
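The note above asks you to substitute your own broker list for Kafka 3.2. As a minimal, hypothetical sketch (the hostnames, class name, and helper are placeholders, not taken from the commit), a broker list typically feeds Kafka client configuration like this:

```java
import java.util.Properties;

public class BootstrapConfig {
    // Hypothetical HDInsight Kafka 3.2 worker hosts; replace with your own brokers.
    static final String BOOTSTRAP_SERVERS = "wn0-contsk:9092,wn1-contsk:9092";

    // Minimal client properties: Kafka 3.2 clients address brokers directly
    // via bootstrap.servers (no ZooKeeper connect string, unlike Kafka 2.4 tooling).
    static Properties clientProps(String groupId) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", BOOTSTRAP_SERVERS);
        props.setProperty("group.id", groupId);
        return props;
    }

    public static void main(String[] args) {
        System.out.println(clientProps("flink-demo").getProperty("bootstrap.servers"));
    }
}
```

The same comma-separated `host:port` list is what the article's `bootStrapServers` variable would carry.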

articles/hdinsight-aks/flink/change-data-capture-connectors-for-apache-flink.md

Lines changed: 1 addition & 1 deletion
@@ -138,7 +138,7 @@ GO
 ```
 
 ##### Maven source code on IdeaJ
 
-In the below snippet, we use Kafka 2.4.1. Based on your usage, update the version of Kafka on `<kafka.version>`.
+Based on your usage, update the Kafka version in `<kafka.version>`.
 
 **maven pom.xml**
 
articles/hdinsight-aks/flink/join-stream-kafka-table-filesystem.md

Lines changed: 1 addition & 7 deletions
@@ -31,12 +31,6 @@ We're creating a topic called `user_events`.
 timestamp,
 ```
 
-**Kafka 2.4.1**
-```
-/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic user_events --zookeeper zk0-contos:2181
-/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic user_events_output --zookeeper zk0-contos:2181
-```
-
 **Kafka 3.2.0**
 ```
 /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic user_events --bootstrap-server wn0-contsk:9092
@@ -84,7 +78,7 @@ In this step, we perform the following activities
 <flink.version>1.17.0</flink.version>
 <java.version>1.8</java.version>
 <scala.binary.version>2.12</scala.binary.version>
-<kafka.version>3.2.0</kafka.version> //replace with 2.4.1 if you are using HDInsight Kafka 2.4.1
+<kafka.version>3.2.0</kafka.version>
 </properties>
 <dependencies>
 <dependency>
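The `<kafka.version>` property kept above is normally referenced by a dependency later in the same pom.xml. A sketch of the usual pattern (the `kafka-clients` artifact is illustrative; the file's actual dependency list isn't shown in this hunk):

```xml
<properties>
    <kafka.version>3.2.0</kafka.version>
</properties>
<dependencies>
    <!-- Version is resolved from the <kafka.version> property above. -->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>${kafka.version}</version>
    </dependency>
</dependencies>
```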

articles/hdinsight-aks/flink/sink-kafka-to-kibana.md

Lines changed: 15 additions & 14 deletions
@@ -1,9 +1,9 @@
 ---
 title: Use Elasticsearch along with Apache Flink® on HDInsight on AKS
-description: Learn how to use Elasticsearch along Apache Flink® on HDInsight on AKS
+description: Learn how to use Elasticsearch along with Apache Flink® on HDInsight on AKS.
 ms.service: hdinsight-aks
 ms.topic: how-to
-ms.date: 10/27/2023
+ms.date: 04/04/2024
 ---
 
 # Using Elasticsearch with Apache Flink® on HDInsight on AKS
@@ -18,7 +18,8 @@ In this article, learn how to Use Elastic along Apache Flink® on HDInsight on A
 
 ## Elasticsearch and Kibana
 
-Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including
+Elasticsearch is a distributed, free, and open search and analytics engine for all types of data, including:
+
 * Textual
 * Numerical
 * Geospatial
@@ -27,17 +28,17 @@ Elasticsearch is a distributed, free and open search and analytics engine for al
 
 Kibana is a free and open frontend application that sits on top of the elastic stack, providing search and data visualization capabilities for data indexed in Elasticsearch.
 
-For more information, refer
+For more information, see:
 * [Elasticsearch](https://www.elastic.co)
 * [Kibana](https://www.elastic.co/guide/en/kibana/current/index.html)
 
 
 ## Prerequisites
 
-* [Create Flink 1.16.0 cluster](./flink-create-cluster-portal.md)
+* [Create Flink 1.17.0 cluster](./flink-create-cluster-portal.md)
 * Elasticsearch-7.13.2
 * Kibana-7.13.2
-* [HDInsight 5.0 - Kafka 2.4.1](../../hdinsight/kafka/apache-kafka-get-started.md)
+* [HDInsight 5.0 - Kafka 3.2.0](../../hdinsight/kafka/apache-kafka-get-started.md)
 * IntelliJ IDEA for development on an Azure VM in the same VNet
 
 
@@ -90,7 +91,7 @@ sudo apt install elasticsearch
 
 For installing and configuring Kibana Dashboard, we don’t need to add any other repository because the packages are available through the already added ElasticSearch.
 
-We use the following command to install Kibana
+We use the following command to install Kibana.
 
 ```
 sudo apt install kibana
@@ -111,9 +112,9 @@ sudo apt install kibana
 ```
 ### Access the Kibana Dashboard web interface
 
-In order to make Kibana accessible from output, need to set network.host to 0.0.0.0
+To make Kibana accessible from outside the VM, set network.host to 0.0.0.0.
 
-configure /etc/kibana/kibana.yml on Ubuntu VM
+Configure `/etc/kibana/kibana.yml` on the Ubuntu VM.
 
 > [!NOTE]
 > 10.0.1.4 is the local private IP we used; it's reachable from the Windows VM where the Maven project is developed. Make modifications according to your network security requirements. We use the same IP later to demo performing analytics on Kibana.
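Taken together, the edited settings amount to roughly this `/etc/kibana/kibana.yml` fragment (a sketch, not the commit's file contents; note that in Kibana's own config the bind-address key is `server.host`, while `network.host` is the Elasticsearch-side equivalent):

```yaml
# Bind Kibana to all interfaces so it is reachable from outside the VM.
server.host: "0.0.0.0"
# Point Kibana at the Elasticsearch node (demo private IP; adjust to yours).
elasticsearch.hosts: ["http://10.0.1.4:9200"]
```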
@@ -129,12 +130,12 @@ elasticsearch.hosts: ["http://10.0.1.4:9200"]
 
 ## Prepare Click Events on HDInsight Kafka
 
-We use python output as input to produce the streaming data
+We use Python output as input to produce the streaming data.
 
 ```
 sshuser@hn0-contsk:~$ python weblog.py | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --bootstrap-server wn0-contsk:9092 --topic click_events
 ```
-Now, lets check messages in this topic
+Now, let's check the messages in this topic.
 
 ```
 sshuser@hn0-contsk:~$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server wn0-contsk:9092 --topic click_events
@@ -149,7 +150,7 @@ sshuser@hn0-contsk:~$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.s
 
 ## Creating Kafka Sink to Elastic
 
-Let us write maven source code on the Windows VM
+Let us write maven source code on the Windows VM.
 
 **Main: kafkaSinkToElastic.java**
 ``` java
@@ -289,7 +290,7 @@ public class kafkaSinkToElastic {
 
 **Package the jar and submit to Flink to run on WebSSH**
 
-On [Secure Shell for Flink](./flink-web-ssh-on-portal-to-flink-sql.md), you can use the following commands
+On [Secure Shell for Flink](./flink-web-ssh-on-portal-to-flink-sql.md), you can use the following commands.
 
 ```
 msdata@pod-0 [ ~ ]$ ls -l FlinkElasticSearch-1.0-SNAPSHOT.jar
@@ -329,7 +330,7 @@ Job has been submitted with JobID e0eba72d5143cea53bcf072335a4b1cb
 
 ## Validation on Apache Flink Job UI
 
-You can find the job in running state on your Flink Web UI
+You can find the job in running state on your Flink Web UI.
 
 :::image type="content" source="./media/sink-kafka-to-kibana/flink-elastic-job.png" alt-text="Screenshot showing Kibana UI to start Elasticsearch and Kibana and perform analytics on Kibana." lightbox="./media/sink-kafka-to-kibana/flink-elastic-job.png":::
 

articles/hdinsight-aks/flink/use-apache-nifi-with-datastream-api.md

Lines changed: 2 additions & 7 deletions
@@ -31,12 +31,8 @@ By combining the low latency streaming features of Apache Flink and the dataflow
 For purposes of this demonstration, we're using a HDInsight Kafka Cluster. Let us prepare HDInsight Kafka topic for the demo.
 
 > [!NOTE]
-> Setup a HDInsight cluster with [Apache Kafka](../../hdinsight/kafka/apache-kafka-get-started.md) and replace broker list with your own list before you get started for both Kafka 2.4 and 3.2.
+> Set up a HDInsight cluster with [Apache Kafka](../../hdinsight/kafka/apache-kafka-get-started.md) and replace the broker list with your own before you get started for Kafka 3.2.
 
-**Kafka 2.4.1**
-```
-/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic click_events --zookeeper zk0-contsk:2181
-```
 
 **Kafka 3.2.0**
 ```
@@ -182,7 +178,6 @@ public class ClickSource implements SourceFunction<Event> {
 ```
 **Maven pom.xml**
 
-You can replace 2.4.1 with 3.2.0 in case you're using Kafka 3.2.0 on HDInsight, where applicable on the pom.xml.
 
 ``` xml
 <?xml version="1.0" encoding="UTF-8"?>
@@ -200,7 +195,7 @@ You can replace 2.4.1 with 3.2.0 in case you're using Kafka 3.2.0 on HDInsight,
 <flink.version>1.17.0</flink.version>
 <java.version>1.8</java.version>
 <scala.binary.version>2.12</scala.binary.version>
-<kafka.version>3.2.0</kafka.version> ---> Replace 2.4.1 with 3.2.0 , in case you're using HDInsight Kafka 3.2.0
+<kafka.version>3.2.0</kafka.version>
 </properties>
 <dependencies>
 <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-java -->

articles/hdinsight-aks/flink/use-flink-to-sink-kafka-message-into-hbase.md

Lines changed: 1 addition & 1 deletion
@@ -170,7 +170,7 @@ hbase:002:0>
 <java.version>1.8</java.version>
 <scala.binary.version>2.12</scala.binary.version>
 <hbase.version>2.4.11</hbase.version>
-<kafka.version>3.2.0</kafka.version> // Replace with 2.4.0 for HDInsight Kafka 2.4
+<kafka.version>3.2.0</kafka.version>
 </properties>
 <dependencies>
 <dependency>
