
Commit c011f03

Merge pull request #168684 from kromerm/retireco3
Retire compute optimized
2 parents 85bb422 + a43385d commit c011f03

File tree

2 files changed: +38 -0 lines changed

articles/data-factory/TOC.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -914,6 +914,8 @@ items:
       href: policy-reference.md
     - name: Azure CLI
       href: /cli/azure/datafactory
+    - name: Compute optimized data flows retired
+      href: compute-optimized-data-flow-retire.md
     - name: Resources
       items:
       - name: Whitepapers
```
articles/data-factory/compute-optimized-data-flow-retire.md

Lines changed: 36 additions & 0 deletions

@@ -0,0 +1,36 @@
---
title: Compute optimized retirement
description: Data flow compute optimized option is being retired
author: kromerm
ms.author: makromer
ms.service: data-factory
ms.topic: tutorial
ms.date: 06/29/2021
---

# Retirement of data flow compute optimized option

[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]

Azure Data Factory and Azure Synapse Analytics data flows provide a low-code mechanism to transform data in ETL jobs at scale using a graphical design paradigm. Data flows execute on the serverless Integration Runtime facility in Azure Data Factory and Azure Synapse Analytics, which offers three compute options for the Azure Databricks Spark environment that executes data flows at scale: Memory Optimized, General Purpose, and Compute Optimized. Because Compute Optimized often does not suffice for common data flow use cases, we recommend the General Purpose or Memory Optimized compute classes for production workloads.
## Migration steps

From now through 31 August 2024, your Compute Optimized data flows will continue to work in your existing pipelines. To avoid service disruption, remove your existing Compute Optimized data flows before 31 August 2024 and replace them as follows (a scripted sketch follows the screenshot below):

1. Create a new Azure Integration Runtime with "General Purpose" or "Memory Optimized" as the compute type.
2. Configure your data flow activity to use an Integration Runtime with either of those compute types.

![Compute types](media/data-flow/compute-types.png)
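
If you prefer to script the migration instead of using the UI shown above, the following is a minimal sketch using the `azure-mgmt-datafactory` Python SDK; it is an illustration under stated assumptions, not the documented migration procedure. The subscription, resource group, factory, and runtime names are placeholders, and authentication via `DefaultAzureCredential` is an assumption.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeComputeProperties,
    IntegrationRuntimeDataFlowProperties,
    IntegrationRuntimeResource,
    ManagedIntegrationRuntime,
)

# Hypothetical placeholders -- substitute your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY_NAME = "<factory-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Step 1: define a managed (Azure) Integration Runtime whose data flow
# cluster uses the "General" compute type; "MemoryOptimized" is the other
# recommended value. The retired "ComputeOptimized" value should no longer
# be used.
runtime = IntegrationRuntimeResource(
    properties=ManagedIntegrationRuntime(
        compute_properties=IntegrationRuntimeComputeProperties(
            location="AutoResolve",
            data_flow_properties=IntegrationRuntimeDataFlowProperties(
                compute_type="General",
                core_count=8,
                time_to_live=10,  # minutes the Spark cluster stays warm after a run
            ),
        )
    )
)

client.integration_runtimes.create_or_update(
    resource_group_name=RESOURCE_GROUP,
    factory_name=FACTORY_NAME,
    integration_runtime_name="GeneralPurposeDataFlowIR",
    integration_runtime=runtime,
)
```

Once the new runtime exists, repoint each Execute Data Flow activity's Integration Runtime reference at it (step 2 above); the activity definition is otherwise unchanged.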

## Comparison between different compute options

| Compute option | Performance |
| :------------- | :---------- |
| General Purpose Data Flows (Basic) | Good for general use cases in production workloads |
| Memory Optimized Data Flows (Standard) | Best-performing runtime for data flows when working with large datasets and many calculations |
| Compute Optimized Data Flows (Deprecated) | Not recommended for production workloads |

* [Visit the Azure Data Factory pricing page for the latest pricing for General Purpose and Memory Optimized data flows](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/)
* [Find more detailed information in the data flows FAQ](https://aka.ms/dataflowsqa)
* [Post questions and find answers about data flows on Microsoft Q&A](https://aka.ms/datafactoryqa)
