
Commit df03586

Update how-to-data-flow-error-rows.md
1 parent 90fa66f commit df03586

1 file changed: +3 −3 lines changed


articles/data-factory/how-to-data-flow-error-rows.md

Lines changed: 3 additions & 3 deletions
@@ -14,7 +14,7 @@ ms.author: makromer
# Handle SQL truncation error rows in Data Factory mapping data flows

-A very common scenario in Data Factory when using mapping data flows, is to write your transformed data to an Azure SQL database. In this scenario, a common error condition that you must prevent against is possible column truncation. Follow these steps to provide logging of columns that won't fit into a target string column, allowing your data flow to continue in those scenarios.
+A common scenario in Data Factory when using mapping data flows, is to write your transformed data to an Azure SQL database. In this scenario, a common error condition that you must prevent against is possible column truncation. Follow these steps to provide logging of columns that won't fit into a target string column, allowing your data flow to continue in those scenarios.

## Scenario

@@ -32,9 +32,9 @@ A very common scenario in Data Factory when using mapping data flows, is to writ
![conditional split](media/data-flow/error1.png)

-2. This conditional split transformation defines the maximum length of "title" to be 5. Any row that is less than or equal to five will go into the ```GoodRows``` stream. Any row that is larger than five will go into the ```BadRows``` stream.
+2. This conditional split transformation defines the maximum length of "title" to be five. Any row that is less than or equal to five will go into the ```GoodRows``` stream. Any row that is larger than five will go into the ```BadRows``` stream.

-3. Now we need to log the rows that failed. Add a sink transformation to the ```BadRows``` stream for logging. Here, we'll "auto-map" all of the fields so that we have logging of the complete transaction record. This is a text delimited CSV file output to a single file in Blob Storage. We'll call the log file "badrows.csv".
+3. Now we need to log the rows that failed. Add a sink transformation to the ```BadRows``` stream for logging. Here, we'll "auto-map" all of the fields so that we have logging of the complete transaction record. This is a text-delimited CSV file output to a single file in Blob Storage. We'll call the log file "badrows.csv".

![Bad rows](media/data-flow/error3.png)
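For reference, the conditional split described in step 2 of the changed article might look roughly like the following in data flow script form. This is a sketch, not the article's own script: the incoming stream name ```MoviesSource``` and the transformation name ```ConditionalSplit1``` are illustrative, and only the ```length(title) <= 5``` condition is derived from the article text.

```
MoviesSource
    split(
        length(title) <= 5,
        disjoint: false
    ) ~> ConditionalSplit1@(GoodRows, BadRows)
```

Rows that satisfy the condition flow to ```GoodRows```; everything else falls through to the default ```BadRows``` stream.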
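Similarly, a rough sketch of the logging sink from step 3, again in data flow script form. The single-file options (```partitionFileNames``` and ```partitionBy```) are an assumption about how the UI's "output to single file" setting is expressed in script, so treat them as illustrative:

```
BadRows sink(allowSchemaDrift: true,
    validateSchema: false,
    partitionFileNames:['badrows.csv'],
    partitionBy('hash', 1)) ~> BadRowsLog
```

With auto-mapping left on, every input column is written to the delimited-text dataset, so each rejected row is captured in full in badrows.csv in Blob Storage.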
