Commit 2847873

Merge pull request #174806 from kromerm/pqstringupdates
Pqstringupdates
2 parents a61c491 + cf18204 commit 2847873

2 files changed: +24 -4 lines changed


articles/data-factory/concepts-data-flow-performance-sinks.md

Lines changed: 7 additions & 3 deletions
@@ -1,14 +1,14 @@
 ---
-title: Optimizing sink performance in mapping data flow
+title: Sink performance and best practices in mapping data flow
 titleSuffix: Azure Data Factory & Azure Synapse
-description: Learn about optimizing sink performance in mapping data flows in Azure Data Factory and Azure Synapse Analytics pipelines.
+description: Learn about optimizing sink performance and best practices in mapping data flows in Azure Data Factory and Azure Synapse Analytics pipelines.
 author: kromerm
 ms.topic: conceptual
 ms.author: makromer
 ms.service: data-factory
 ms.subservice: data-flows
 ms.custom: synapse
-ms.date: 09/29/2021
+ms.date: 10/06/2021
 ---
 
 # Optimizing sinks
@@ -19,6 +19,10 @@ When data flows write to sinks, any custom partitioning will happen immediately
 
 With Azure SQL Database, the default partitioning should work in most cases. There is still a chance that your sink may have too many partitions for your SQL database to handle. If you run into this, reduce the number of partitions output by your SQL Database sink.
 
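As a rough illustration of what reducing sink partitions looks like in the underlying data flow script, here is a hypothetical fragment. The stream names and the partition count are invented for this sketch, and it assumes the Optimize tab serializes its partitioning setting as `partitionBy(...)` on the sink, as it does for other transformations:

```
MappedData sink(allowSchemaDrift: true,
    validateSchema: false,
    partitionBy('roundRobin', 10)) ~> AzureSQLSink
```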
+### Best practice for deleting rows in the sink based on missing rows in the source
+
+Here is a video walkthrough of how to use data flows with exists, alter row, and sink transformations to achieve this common pattern: > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWMLr5]
+
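For readers who want the shape of the pattern without the video, here is a rough mapping data flow script sketch: an exists transformation with `negate: true` finds target rows with no matching source row, alter row marks them for deletion, and a sink with a delete policy applies them. The stream names, key column, and exact sink option set are assumptions for illustration, not taken from this commit:

```
TargetRows, SourceRows exists(TargetRows@id == SourceRows@id,
    negate: true,
    broadcast: 'auto') ~> MissingFromSource
MissingFromSource alterRow(deleteIf(true())) ~> FlagDeletes
FlagDeletes sink(allowSchemaDrift: true,
    validateSchema: false,
    deletable: true,
    insertable: false,
    updateable: false,
    upsertable: false,
    keys: ['id']) ~> TargetSink
```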
 ### Impact of error row handling on performance
 
 When you enable error row handling ("continue on error") in the sink transformation, the service will take an additional step before writing the compatible rows to your destination table. This additional step carries a small performance penalty, in the range of 5%, with a further small hit if you also set the option to write the incompatible rows to a log file.

articles/data-factory/wrangling-functions.md

Lines changed: 17 additions & 1 deletion
@@ -6,7 +6,7 @@ ms.author: makromer
 ms.service: data-factory
 ms.subservice: data-flows
 ms.topic: conceptual
-ms.date: 04/16/2021
+ms.date: 10/06/2021
 ---
 
 # Transformation functions in Power Query for data wrangling
@@ -164,6 +164,22 @@ in
 #"Pivoted column"
 ```
 
+### Formatting date/time columns
+
+To set the date/time format when using Power Query in ADF, follow these steps.
+
+![Power Query Change Type](media/data-flow/power-query-date-2.png)
+
+1. Select the column in the Power Query UI and choose Change Type > Date/Time.
+2. You will see a warning message.
+3. Open the Advanced Editor and change `TransformColumnTypes` to `TransformColumns`. Specify the format and culture based on the input data.
+
+![Power Query Editor](media/data-flow/power-query-date-3.png)
+
+```
+#"Changed column type 1" = Table.TransformColumns(#"Duplicated column", {{"start - Copy", each DateTime.FromText(_, [Format = "yyyy-MM-dd HH:mm:ss", Culture = "en-us"]), type datetime}})
+```
+
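To extend the committed example, here is a minimal, self-contained sketch of the same `Table.TransformColumns` pattern. The table and column names are invented for illustration, and the day-first format with the `en-GB` culture is an assumption chosen to show how the options record adapts to different input data:

```
let
    // Hypothetical input: order timestamps stored as day-first text.
    Source = #table({"OrderDate"}, {{"06/10/2021 14:30:00"}}),
    // Same pattern as the commit: parse text with an explicit
    // format and culture, then type the column as datetime.
    #"Parsed dates" = Table.TransformColumns(Source,
        {{"OrderDate",
           each DateTime.FromText(_, [Format = "dd/MM/yyyy HH:mm:ss", Culture = "en-GB"]),
           type datetime}})
in
    #"Parsed dates"
```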
 ## Next steps
 
 Learn how to [create a data wrangling Power Query in ADF](wrangling-tutorial.md).
