articles/hdinsight-aks/flink/flink-catalog-iceberg-hive.md (7 additions, 5 deletions)
@@ -3,7 +3,7 @@ title: Table API and SQL - Use Iceberg Catalog type with Hive in Apache Flink®
 description: Learn how to create Iceberg Catalog in Apache Flink® on HDInsight on AKS
 ms.service: hdinsight-aks
 ms.topic: how-to
-ms.date: 10/27/2023
+ms.date: 3/28/2024
 ---

 # Create Iceberg Catalog in Apache Flink® on HDInsight on AKS

@@ -23,8 +23,10 @@ In this article, we learn how to use Iceberg Table managed in Hive catalog, with
 Once you launch the Secure Shell (SSH), let us start downloading the dependencies required to the SSH node, to illustrate the Iceberg table managed in Hive catalog.
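To make the Hive-backed Iceberg catalog setup in flink-catalog-iceberg-hive.md concrete, here is a minimal Flink SQL sketch of the pattern the article describes. The metastore URI, warehouse path, database, and table schema are illustrative placeholders, not values taken from the article, and the statements assume the Iceberg and Hive dependencies downloaded in the step above are on the Flink SQL client's classpath.

```sql
-- Register an Iceberg catalog whose table metadata lives in a Hive metastore.
-- The thrift URI and ABFS warehouse path are placeholders; substitute the values
-- for your HDInsight on AKS cluster and storage account.
CREATE CATALOG hive_catalog WITH (
    'type' = 'iceberg',
    'catalog-type' = 'hive',
    'uri' = 'thrift://<hive-metastore-host>:9083',
    'warehouse' = 'abfs://<container>@<storage-account>.dfs.core.windows.net/iceberg'
);

USE CATALOG hive_catalog;
CREATE DATABASE IF NOT EXISTS iceberg_db;

-- Create and write to an Iceberg table; the schema is purely illustrative.
CREATE TABLE IF NOT EXISTS iceberg_db.orders (
    order_id BIGINT,
    amount   DOUBLE,
    order_ts TIMESTAMP(3)
);

INSERT INTO iceberg_db.orders VALUES (1, 9.99, TIMESTAMP '2024-03-28 10:00:00');
```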
articles/hdinsight-aks/flink/monitor-changes-postgres-table-flink.md (72 additions, 26 deletions)
@@ -3,7 +3,7 @@ title: Change Data Capture (CDC) of PostgreSQL table using Apache Flink®
 description: Learn how to perform CDC on PostgreSQL table using Apache Flink®
 ms.service: hdinsight-aks
 ms.topic: how-to
-ms.date: 10/27/2023
+ms.date: 03/28/2024
 ---

 # Change Data Capture (CDC) of PostgreSQL table using Apache Flink®

@@ -12,7 +12,7 @@ ms.date: 10/27/2023

 Change Data Capture (CDC) is a technique you can use to track row-level changes in database tables in response to create, update, and delete operations. In this article, we use [CDC Connectors for Apache Flink®](https://github.com/ververica/flink-cdc-connectors), which offer a set of source connectors for Apache Flink. The connectors integrate [Debezium®](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/debezium/#debezium-format) as the engine to capture the data changes.

-Flink supports to interpret Debezium JSON and Avro messages as INSERT/UPDATE/DELETE messages into Apache Flink SQL system.
+Flink supports to interpret Debezium JSON and Avro messages as INSERT/UPDATE/DELETE messages into Apache Flink SQL system.

 This support is useful in many cases to:

@@ -91,7 +91,7 @@ Now, let's learn how to monitor changes on PostgreSQL table using Flink-SQL CDC.
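As a rough illustration of the Flink SQL CDC flow this article walks through, the sketch below declares a table backed by the postgres-cdc connector. The hostname, credentials, and shipments schema are placeholders, and option names (for example a replication slot name) can differ between CDC connector releases.

```sql
-- Source table backed by the postgres-cdc connector (Debezium under the hood).
-- Hostname, credentials, and the shipments schema below are placeholders.
CREATE TABLE shipments (
    shipment_id INT,
    order_id    INT,
    origin      STRING,
    destination STRING,
    is_arrived  BOOLEAN,
    PRIMARY KEY (shipment_id) NOT ENFORCED
) WITH (
    'connector' = 'postgres-cdc',
    'hostname' = '<postgres-server>.postgres.database.azure.com',
    'port' = '5432',
    'username' = '<username>',
    'password' = '<password>',
    'database-name' = 'shipment_db',
    'schema-name' = 'public',
    'table-name' = 'shipments',
    'decoding.plugin.name' = 'pgoutput'
);

-- Each INSERT/UPDATE/DELETE on the PostgreSQL table surfaces here as a changelog row.
SELECT * FROM shipments;
```

The SELECT produces a continuously updating changelog; in practice you would typically INSERT it into a sink table such as Kafka or another database.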
articles/hdinsight-aks/flink/process-and-consume-data.md (3 additions, 3 deletions)
@@ -3,7 +3,7 @@ title: Using Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS
 description: Learn how to use Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS
 ms.service: hdinsight-aks
 ms.topic: how-to
-ms.date: 10/27/2023
+ms.date: 03/28/2024
 ---

 # Using Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS

@@ -12,7 +12,7 @@ ms.date: 10/27/2023

 A well known use case for Apache Flink is stream analytics. The popular choice by many users to use the data streams, which are ingested using Apache Kafka. Typical installations of Flink and Kafka start with event streams being pushed to Kafka, which can be consumed by Flink jobs.

-This example uses HDInsight on AKS clusters running Flink 1.16.0 to process streaming data consuming and producing Kafka topic.
+This example uses HDInsight on AKS clusters running Flink 1.17.0 to process streaming data consuming and producing Kafka topic.

 > [!NOTE]
 > FlinkKafkaConsumer is deprecated and will be removed with Flink 1.17, please use KafkaSource instead.

@@ -39,7 +39,7 @@ Flink provides an [Apache Kafka Connector](https://nightlies.apache.org/flink/fl
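For reference, a minimal Flink SQL counterpart of the Kafka connector usage this article covers: one Kafka-backed source table and one Kafka-backed sink table with a continuous INSERT between them. Topic names, the broker address, and the event schema are assumptions, not values from the article.

```sql
-- Source: consume JSON events from a Kafka topic (topic and broker are placeholders).
CREATE TABLE click_events (
    user_id    STRING,
    url        STRING,
    event_time TIMESTAMP(3)
) WITH (
    'connector' = 'kafka',
    'topic' = 'click_events',
    'properties.bootstrap.servers' = '<kafka-broker>:9092',
    'properties.group.id' = 'flink-click-consumer',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'json'
);

-- Sink: produce the filtered stream to another Kafka topic.
CREATE TABLE checkout_clicks (
    user_id    STRING,
    url        STRING,
    event_time TIMESTAMP(3)
) WITH (
    'connector' = 'kafka',
    'topic' = 'checkout_clicks',
    'properties.bootstrap.servers' = '<kafka-broker>:9092',
    'format' = 'json'
);

-- Continuous job: read from one topic, filter, and write to the other.
INSERT INTO checkout_clicks
SELECT user_id, url, event_time
FROM click_events
WHERE url LIKE '%/checkout%';
```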