articles/data-factory/copy-activity-data-consistency.md (+3 −3)
@@ -24,9 +24,9 @@ When you move data from source to destination store, Azure Data Factory copy act
 > [!IMPORTANT]
 > This feature is currently in preview with the following limitations we are actively working on:
->1. Data consistency verification is available only on binary files copying between file-based stores with 'PreserveHierarchy' behavior in copy activity. For copying tabular data, data consistency verification is not available in copy activity yet.
->2. When you enable session log setting in copy activity to log the inconsistent files being skipped, the completeness of log file can not be 100% guaranteed if copy activity failed.
->3. The session log contains inconsistent files only, where the successfully copied files are not logged so far.
+>- Data consistency verification is available only on binary files copying between file-based stores with 'PreserveHierarchy' behavior in copy activity. For copying tabular data, data consistency verification is not available in copy activity yet.
+>- When you enable session log setting in copy activity to log the inconsistent files being skipped, the completeness of log file can not be 100% guaranteed if copy activity failed.
+>- The session log contains inconsistent files only, where the successfully copied files are not logged so far.
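For context on the feature these limitation notes describe, here is a minimal sketch of a copy activity `typeProperties` block with data consistency verification and a session log enabled. The property names (`validateDataConsistency`, `skipErrorFile.dataInconsistency`, `logStorageSettings`) and the linked service name are assumptions based on the article this diff edits, not part of the diff itself:

```json
{
    "typeProperties": {
        "source": { "type": "BinarySource" },
        "sink": { "type": "BinarySink" },
        "validateDataConsistency": true,
        "skipErrorFile": {
            "dataInconsistency": true
        },
        "logStorageSettings": {
            "linkedServiceName": {
                "referenceName": "MyBlobStorage",
                "type": "LinkedServiceReference"
            },
            "path": "sessionlog/"
        }
    }
}
```

With `skipErrorFile.dataInconsistency` set, files that fail verification are skipped rather than failing the run, and (per limitation 2 above) the session log of skipped files is not guaranteed complete if the activity fails.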
articles/data-factory/copy-activity-fault-tolerance.md (+6 −6)
@@ -127,7 +127,7 @@ From the log above, you can see bigfile.csv has been skipped due to another appl
 ## Copying tabular data
 
 ### Supported scenarios
-Copy Activity supports three scenarios for detecting, skipping, and logging incompatible tabular data:
+Copy activity supports three scenarios for detecting, skipping, and logging incompatible tabular data:
 
 -**Incompatibility between the source data type and the sink native type**.
@@ -139,15 +139,15 @@ Copy Activity supports three scenarios for detecting, skipping, and logging inco
 -**Primary key violation when writing to SQL Server/Azure SQL Database/Azure Cosmos DB**.
 
-For example: Copy data from a SQL server to a SQL database. A primary key is defined in the sink SQL database, but no such primary key is defined in the source SQL server. The duplicated rows that exist in the source cannot be copied to the sink. Copy Activity copies only the first row of the source data into the sink. The subsequent source rows that contain the duplicated primary key value are detected as incompatible and are skipped.
+For example: Copy data from a SQL server to a SQL database. A primary key is defined in the sink SQL database, but no such primary key is defined in the source SQL server. The duplicated rows that exist in the source cannot be copied to the sink. Copy activity copies only the first row of the source data into the sink. The subsequent source rows that contain the duplicated primary key value are detected as incompatible and are skipped.
 
 >[!NOTE]
 >- For loading data into SQL Data Warehouse using PolyBase, configure PolyBase's native fault tolerance settings by specifying reject policies via "[polyBaseSettings](connector-azure-sql-data-warehouse.md#azure-sql-data-warehouse-as-sink)" in copy activity. You can still enable redirecting PolyBase incompatible rows to Blob or ADLS as normal as shown below.
 >- This feature doesn't apply when copy activity is configured to invoke [Amazon Redshift Unload](connector-amazon-redshift.md#use-unload-to-copy-data-from-amazon-redshift).
 >- This feature doesn't apply when copy activity is configured to invoke a [stored procedure from a SQL sink](https://docs.microsoft.com/azure/data-factory/connector-azure-sql-database#invoke-a-stored-procedure-from-a-sql-sink).
 
 ### Configuration
-The following example provides a JSON definition to configure skipping the incompatible rows in Copy Activity:
+The following example provides a JSON definition to configure skipping the incompatible rows in copy activity:
 From the sample log file above, you can see one row "data1, data2, data3" has been skipped due to type conversion issue from source to destination store. Another row "data4, data5, data6" has been skipped due to PK violation issue from source to destination store.
 
-## Copying tabular data (Legacy):
+## Copying tabular data (legacy):
 The following is the legacy way to enable fault tolerance for copying tabular data only. If you are creating new pipeline or activity, you are encouraged to start from [here](#copying-tabular-data) instead.
 
 ### Configuration
-The following example provides a JSON definition to configure skipping the incompatible rows in Copy Activity:
+The following example provides a JSON definition to configure skipping the incompatible rows in copy activity: