Commit 105cbd3

Update concepts-data-flow-performance-sinks.md

1 parent cd79719 commit 105cbd3

1 file changed: +7 −3 lines changed

articles/data-factory/concepts-data-flow-performance-sinks.md

Lines changed: 7 additions & 3 deletions
@@ -1,14 +1,14 @@
 ---
-title: Optimizing sink performance in mapping data flow
+title: Sink performance and best practices in mapping data flow
 titleSuffix: Azure Data Factory & Azure Synapse
-description: Learn about optimizing sink performance in mapping data flows in Azure Data Factory and Azure Synapse Analytics pipelines.
+description: Learn about optimizing sink performance and best practices in mapping data flows in Azure Data Factory and Azure Synapse Analytics pipelines.
 author: kromerm
 ms.topic: conceptual
 ms.author: makromer
 ms.service: data-factory
 ms.subservice: data-flows
 ms.custom: synapse
-ms.date: 09/29/2021
+ms.date: 10/06/2021
 ---
 
 # Optimizing sinks
@@ -19,6 +19,10 @@ When data flows write to sinks, any custom partitioning will happen immediately
 
 With Azure SQL Database, the default partitioning should work in most cases. There is a chance that your sink may have too many partitions for your SQL database to handle. If you run into this, reduce the number of partitions output by your SQL Database sink.
 
+### Best practice for deleting rows in sink based on missing rows in source
+
+Here is a video walk-through of how to use data flows with exists, alter row, and sink transformations to achieve this common pattern: > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWMLr5]
+
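The pattern the added section describes (an exists transformation set to "doesn't exist", feeding an alter row transformation that marks rows for deletion) can be sketched in plain Python. This is an illustrative in-memory model with made-up data and a hypothetical `_rowPolicy` marker, not the mapping data flow script syntax:

```python
# Conceptual sketch of "delete sink rows missing from source".
# Hypothetical in-memory data stands in for the source and sink datasets.

def rows_to_delete(source_keys, sink_rows, key="id"):
    """Mimic an Exists transformation set to "Doesn't exist": keep only
    sink rows whose key is absent from the source, then tag each one for
    deletion, as an Alter Row policy of deleteIf(true()) would."""
    source_set = set(source_keys)
    missing = [row for row in sink_rows if row[key] not in source_set]
    return [{**row, "_rowPolicy": "delete"} for row in missing]

source_keys = [1, 2, 3]                        # keys still present in source
sink_rows = [{"id": 1}, {"id": 2}, {"id": 4}]  # rows currently in the sink

print(rows_to_delete(source_keys, sink_rows))
# [{'id': 4, '_rowPolicy': 'delete'}]
```

In the actual data flow, the sink transformation would also need its delete option enabled for the marked rows to be removed.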
 ### Impact of error row handling to performance
 
 When you enable error row handling ("continue on error") in the sink transformation, the service takes an additional step before writing the compatible rows to your destination table. This additional step carries a small performance penalty, in the range of 5%, with a further small performance hit if you also set the option to write the incompatible rows to a log file.
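The extra step described above can be sketched as follows: before the write, rows are split into compatible and incompatible sets, and the incompatible ones can optionally be logged. The names here (`is_compatible`, `write_rows`, `log_rows`) are hypothetical stand-ins, not Azure Data Factory APIs:

```python
# Rough model of the extra pass "continue on error" adds to the sink.

def sink_with_error_rows(rows, is_compatible, write_rows, log_rows=None):
    compatible = [r for r in rows if is_compatible(r)]       # the extra pass
    incompatible = [r for r in rows if not is_compatible(r)]
    write_rows(compatible)                                   # normal sink write
    if log_rows is not None:                                 # optional log file:
        log_rows(incompatible)                               # a second small cost
    return len(compatible), len(incompatible)

written, errors = [], []
ok, bad = sink_with_error_rows(
    [{"id": 1}, {"id": None}, {"id": 2}],
    is_compatible=lambda r: r["id"] is not None,
    write_rows=written.extend,
    log_rows=errors.extend,
)
print(ok, bad)
# 2 1
```

The split over all rows is why the feature costs something even when every row is compatible.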
