Commit 5506651

Author: Sreekanth Iyer (Ushta Te Consultancy Services)

Added Retirement Banner

1 parent f3dfe8c · commit 5506651

File tree

75 files changed (+150, −0 lines)


articles/hdinsight-aks/flink/application-mode-cluster-on-hdinsight-on-aks.md

Lines changed: 2 additions & 0 deletions
@@ -8,9 +8,11 @@ ms.date: 03/21/2024
 
 # Apache Flink Application Mode cluster on HDInsight on AKS
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
 
+
 HDInsight on AKS now offers a Flink Application mode cluster. This cluster lets you manage cluster Flink application mode lifecycle using the Azure portal with easy-to-use interface and Azure Resource Management Rest APIs. Application mode clusters are designed to support large and long-running jobs with dedicated resources, and handle resource-intensive or extensive data processing tasks.
 
 This deployment mode enables you to assign dedicated resources for specific Flink applications, ensuring that they have enough computing power and memory to handle large workloads efficiently.

articles/hdinsight-aks/flink/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md

Lines changed: 2 additions & 0 deletions
@@ -8,8 +8,10 @@ ms.date: 03/29/2024
 
 # Write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
+
 Apache Flink uses file systems to consume and persistently store data, both for the results of applications and for fault tolerance and recovery. In this article, learn how to write event messages into Azure Data Lake Storage Gen2 with DataStream API.
 
 ## Prerequisites

articles/hdinsight-aks/flink/azure-service-bus-demo.md

Lines changed: 2 additions & 0 deletions
@@ -7,8 +7,10 @@ ms.date: 04/02/2024
 ---
 # Use Apache Flink on HDInsight on AKS with Azure Service Bus
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
+
 This article provides an overview and demonstration of Apache Flink DataStream API on HDInsight on AKS for Azure Service Bus. A Flink job demonstration is designed to read messages from an [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) and writes them to [Azure Data Lake Storage Gen2](./assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md) (ADLS Gen2).
 
 ## Prerequisites

articles/hdinsight-aks/flink/change-data-capture-connectors-for-apache-flink.md

Lines changed: 2 additions & 0 deletions
@@ -8,8 +8,10 @@ ms.date: 04/02/2024
 
 # Change Data Capture of SQL Server with Apache Flink® DataStream API and DataStream Source on HDInsight on AKS
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
+
 Change Data Capture (CDC) is a technique you can use to track row-level changes in database tables in response to create, update, and delete operations. In this article, we use [CDC Connectors for Apache Flink®](https://github.com/ververica/flink-cdc-connectors), which offer a set of source connectors for Apache Flink. The connectors integrate [Debezium®](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/debezium/#debezium-format) as the engine to capture the data changes.
 
 In this article, learn how to perform Change Data Capture of SQL Server using Datastream API. The SQLServer CDC connector can also be a DataStream source.

articles/hdinsight-aks/flink/cosmos-db-for-apache-cassandra.md

Lines changed: 2 additions & 0 deletions
@@ -8,8 +8,10 @@ ms.date: 04/02/2024
 
 # Sink Apache Kafka® messages into Azure Cosmos DB for Apache Cassandra, with Apache Flink® on HDInsight on AKS
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
+
 This example uses [Apache Flink](../flink/flink-overview.md) to sink [HDInsight for Apache Kafka](/azure/hdinsight/kafka/apache-kafka-introduction) messages into [Azure Cosmos DB for Apache Cassandra](/azure/cosmos-db/cassandra/introduction).
 
 This example is prominent when Engineers prefer real-time aggregated data for analysis. With access to historical aggregated data, you can build machine learning (ML) models to build insights or actions. You can also ingest IoT data into Apache Flink to aggregate data in real-time and store it in Apache Cassandra.

articles/hdinsight-aks/flink/create-kafka-table-flink-kafka-sql-connector.md

Lines changed: 2 additions & 0 deletions
@@ -8,8 +8,10 @@ ms.date: 03/14/2024
 
 # Create Apache Kafka® table on Apache Flink® on HDInsight on AKS
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
+
 Using this example, learn how to Create Kafka table on Apache FlinkSQL.
 
 ## Prerequisites
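For context on the file above: registering a Kafka topic as a Flink SQL table is done with the Kafka SQL connector's `CREATE TABLE … WITH (…)` syntax. A minimal sketch follows; the topic, broker address, consumer group, and column schema are illustrative assumptions, not part of this commit or the underlying article.

```sql
-- Hypothetical example: expose a Kafka topic as a Flink SQL table.
-- Topic 'orders', broker 'kafka-broker:9092', and the columns are made up.
CREATE TABLE kafka_orders (
  order_id STRING,
  amount DOUBLE,
  order_time TIMESTAMP(3),
  -- Event-time watermark allowing 5 seconds of out-of-orderness.
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'kafka-broker:9092',
  'properties.group.id' = 'flink-demo-group',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'
);
```

Once declared, the table can be queried like any other Flink SQL table (for example, `SELECT order_id, amount FROM kafka_orders;` runs as a streaming query over the topic).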

articles/hdinsight-aks/flink/datastream-api-mongodb.md

Lines changed: 2 additions & 0 deletions
@@ -8,8 +8,10 @@ ms.date: 03/22/2024
 
 # Use Apache Flink® DataStream API on HDInsight on AKS for MongoDB as a source and sink
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
+
 Apache Flink provides a MongoDB connector for reading and writing data from and to MongoDB collections with at-least-once guarantees.
 
 This example demonstrates on how to use Apache Flink 1.17.0 on HDInsight on AKS along with your existing MongoDB as Sink and Source with Flink DataStream API MongoDB connector.

articles/hdinsight-aks/flink/fabric-lakehouse-flink-datastream-api.md

Lines changed: 2 additions & 0 deletions
@@ -7,8 +7,10 @@ ms.date: 03/23/2024
 ---
 # Connect to OneLake in Microsoft Fabric with HDInsight on AKS cluster for Apache Flink®
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
+
 This example demonstrates on how to use HDInsight on AKS cluster for Apache Flink® with [Microsoft Fabric](/fabric/get-started/microsoft-fabric-overview).
 
 [Microsoft Fabric](/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place.

articles/hdinsight-aks/flink/flink-catalog-delta-hive.md

Lines changed: 2 additions & 0 deletions
@@ -8,8 +8,10 @@ ms.date: 03/29/2024
 
 # Create Delta Catalog with Apache Flink® on Azure HDInsight on AKS
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
+
 [Delta Lake](https://docs.delta.io/latest/delta-intro.html) is an open source project that enables building a Lakehouse architecture on top of data lakes. Delta Lake provides ACID transactions, scalable metadata handling, and unifies streaming and batch data processing on top of existing data lakes.
 
 In this article, we learn how Apache Flink SQL/TableAPI is used to implement a Delta catalog for Apache Flink, with Hive Catalog. Delta Catalog delegates all metastore communication to Hive Catalog. It uses the existing logic for Hive or In-Memory metastore communication that is already implemented in Flink.

articles/hdinsight-aks/flink/flink-catalog-iceberg-hive.md

Lines changed: 2 additions & 0 deletions
@@ -8,8 +8,10 @@ ms.date: 04/19/2024
 
 # Create Iceberg Catalog in Apache Flink® on HDInsight on AKS
 
+[!INCLUDE [retirement-notice](../includes/retirement-notice.md)]
 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
 
+
 [Apache Iceberg](https://iceberg.apache.org/) is an open table format for huge analytic datasets. Iceberg adds tables to compute engines like Apache Flink, using a high-performance table format that works just like a SQL table. Apache Iceberg [supports](https://iceberg.apache.org/multi-engine-support/#apache-flink) both Apache Flink’s DataStream API and Table API.
 
 In this article, we learn how to use Iceberg Table managed in Hive catalog, with Apache Flink on HDInsight on AKS cluster.
