
Commit 6f1a929

Authored: Update data-flow-sink.md
1 parent 95de394 · commit 6f1a929

File tree

1 file changed: +5 −1 lines changed


articles/data-factory/data-flow-sink.md

Lines changed: 5 additions & 1 deletion
@@ -9,7 +9,7 @@ ms.service: data-factory
 ms.subservice: data-flows
 ms.topic: conceptual
 ms.custom: seo-lt-2019, ignite-2022
-ms.date: 10/26/2022
+ms.date: 11/01/2022
 ---
 
 # Sink transformation in mapping data flow
@@ -106,6 +106,10 @@ For example, if I specify a single key column of `column1` in a cache sink calle
 
 **Write to activity output** The cached sink can optionally write your output data to the input of the next pipeline activity. This will allow you to quickly and easily pass data out of your data flow activity without needing to persist the data in a data store.
 
+## Update method
+
+For database sink types, the Settings tab will include an "Update method" property. The default is insert, with additional checkbox options for update, upsert, and delete. To use those additional options, you will need to add an [Alter Row transformation](data-flow-alter-row.md) before the sink. The Alter Row transformation allows you to define the conditions for each of the database actions. If your source is a native CDC-enabled source, then you can set the update methods without an Alter Row, as ADF is already aware of the row markers for insert, update, upsert, and delete.
+
 ## Field mapping
 
 Similar to a select transformation, on the **Mapping** tab of the sink, you can decide which incoming columns will get written. By default, all input columns, including drifted columns, are mapped. This behavior is known as *automapping*.
