Commit 6b0d814

Improved Acrolinx Score
1 parent f9a6f11 commit 6b0d814

File tree

1 file changed: +6 −6 lines changed

articles/hdinsight-aks/flink/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md

Lines changed: 6 additions & 6 deletions
@@ -1,6 +1,6 @@
 ---
 title: Write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API
-description: Learn how to write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API
+description: Learn how to write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API.
 ms.service: hdinsight-aks
 ms.topic: how-to
 ms.date: 03/14/2024
@@ -22,11 +22,11 @@ Apache Flink uses file systems to consume and persistently store data, both for
 
 ## Apache Flink FileSystem connector
 
-This filesystem connector provides the same guarantees for both BATCH and STREAMING and is designed to provide exactly once semantics for STREAMING execution. For more information, see [Flink DataStream Filesystem](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/filesystem)
+This filesystem connector provides the same guarantees for both BATCH and STREAMING and is designed to provide exactly once semantics for STREAMING execution. For more information, see [Flink DataStream Filesystem](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/filesystem).

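As context for the connector this hunk documents, a minimal row-format `FileSink` writing strings to ADLS Gen2 might look like the sketch below. The `abfs://` path, class name, and encoder choice are illustrative assumptions, not taken from this commit.

```java
// Minimal sketch of a row-format FileSink targeting ADLS Gen2 (Flink 1.16 APIs).
// The abfs:// path is a placeholder; substitute your own container and account.
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;

public class Gen2SinkSketch {
    public static FileSink<String> build() {
        // Exactly-once delivery also requires checkpointing enabled on the job,
        // because FileSink commits in-progress part files on checkpoint completion.
        return FileSink
            .forRowFormat(
                new Path("abfs://<container>@<account>.dfs.core.windows.net/flink/output"),
                new SimpleStringEncoder<String>("UTF-8"))
            .build();
    }
}
```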
 
 ## Apache Kafka Connector
 
-Flink provides an Apache Kafka connector for reading data from and writing data to Kafka topics with exactly once guarantees. For more information, see [Apache Kafka Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kafka)
+Flink provides an Apache Kafka connector for reading data from and writing data to Kafka topics with exactly once guarantees. For more information, see [Apache Kafka Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kafka).

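For reading side context, a `KafkaSource` consuming the `click_events` topic (the topic this article validates later) might be sketched as follows. The broker address and consumer group id are placeholders of my own, not values from this commit.

```java
// Minimal sketch of a KafkaSource reading the click_events topic (Flink 1.16 Kafka connector).
// Broker address and group id are placeholders, not values from this article.
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class KafkaSourceSketch {
    public static KafkaSource<String> build() {
        return KafkaSource.<String>builder()
            .setBootstrapServers("<kafka-broker>:9092")   // placeholder broker address
            .setTopics("click_events")
            .setGroupId("flink-to-gen2")                  // placeholder consumer group
            .setStartingOffsets(OffsetsInitializer.earliest())
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();
    }
}
```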
 
 ## Build the project for Apache Flink

@@ -163,17 +163,17 @@ public class KafkaSinkToGen2 {
 
 **Submit the job on Flink Dashboard UI**
 
-We are using Maven to package a jar onto local and submitting to Flink, and using Kafka to sink into ADLS Gen2
+We use Maven to package the jar locally and submit it to Flink, using Kafka to sink the stream into ADLS Gen2.
 
 :::image type="content" source="./media/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2/submit-the-job-flink-ui.png" alt-text="Screenshot showing jar submission to Flink dashboard.":::
-:::image type="content" source="./media/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2/submit-the-job-flink-ui-2.png" alt-text="Screenshot showing job running on Flink dashboard.":::
+:::image type="content" source="./media/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2/submit-the-job-flink-ui-2.png" alt-text="Screenshot showing job running on Flink dashboard.":::

 **Validate streaming data on ADLS Gen2**
 
 We are seeing the `click_events` streaming into ADLS Gen2
 
 :::image type="content" source="./media/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2/validate-stream-azure-data-lake-storage-gen2-1.png" alt-text="Screenshot showing ADLS Gen2 output.":::
-:::image type="content" source="./media/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2/validate-stream-azure-data-lake-storage-gen2-2.png" alt-text="Screenshot showing Flink click event output.":::
+:::image type="content" source="./media/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2/validate-stream-azure-data-lake-storage-gen2-2.png" alt-text="Screenshot showing Flink click event output.":::

 You can specify a rolling policy that rolls the in-progress part file on any of the following three conditions:

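For row-format sinks, these three conditions correspond to Flink's `DefaultRollingPolicy`; a hedged sketch follows, where the threshold values are my own illustrative choices, not values from this article.

```java
// Sketch of the three rolling conditions via DefaultRollingPolicy (Flink 1.16).
// The thresholds shown here are illustrative, not taken from this commit.
import java.time.Duration;

import org.apache.flink.configuration.MemorySize;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

public class RollingPolicySketch {
    public static DefaultRollingPolicy<String, String> build() {
        return DefaultRollingPolicy.builder()
            .withMaxPartSize(MemorySize.ofMebiBytes(128))   // 1) part file reaches 128 MiB
            .withRolloverInterval(Duration.ofMinutes(15))   // 2) part file has been open 15 minutes
            .withInactivityInterval(Duration.ofMinutes(5))  // 3) no new records for 5 minutes
            .build();
    }
}
```

The resulting policy would be attached to the sink with the row-format builder's `withRollingPolicy(...)` method.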