Commit 887f216

address comments from reviewer

1 parent 38c2565

2 files changed: +9 −9 lines changed

articles/data-factory/copy-activity-data-consistency.md

Lines changed: 3 additions & 3 deletions
@@ -24,9 +24,9 @@ When you move data from source to destination store, Azure Data Factory copy act
 
 > [!IMPORTANT]
 > This feature is currently in preview with the following limitations we are actively working on:
-> 1. Data consistency verification is available only on binary files copying between file-based stores with 'PreserveHierarchy' behavior in copy activity. For copying tabular data, data consistency verification is not available in copy activity yet.
-> 2. When you enable session log setting in copy activity to log the inconsistent files being skipped, the completeness of log file can not be 100% guaranteed if copy activity failed.
-> 3. The session log contains inconsistent files only, where the successfully copied files are not logged so far.
+> - Data consistency verification is available only when copying binary files between file-based stores with 'PreserveHierarchy' behavior in copy activity. For copying tabular data, data consistency verification is not yet available in copy activity.
+> - When you enable the session log setting in copy activity to log the inconsistent files being skipped, the completeness of the log file cannot be 100% guaranteed if the copy activity fails.
+> - The session log contains inconsistent files only; successfully copied files are not logged so far.
 
 ## Supported data stores
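For context (not part of this commit), the verification that the limitations above describe is switched on in the copy activity JSON definition. The following is a minimal sketch, assuming the `validateDataConsistency`, `skipErrorFile`, and `logStorageSettings` properties described in the article; the store types, linked service name, and path are illustrative placeholders:

```json
"typeProperties": {
    "source": {
        "type": "BinarySource",
        "storeSettings": { "type": "AzureDataLakeStoreReadSettings", "recursive": true }
    },
    "sink": {
        "type": "BinarySink",
        "storeSettings": { "type": "AzureDataLakeStoreWriteSettings" }
    },
    "validateDataConsistency": true,
    "skipErrorFile": { "dataInconsistency": true },
    "logStorageSettings": {
        "linkedServiceName": { "referenceName": "AzureBlobLinkedService", "type": "LinkedServiceReference" },
        "path": "sessionlog/"
    }
}
```

With `skipErrorFile.dataInconsistency` enabled, files that fail verification are skipped and recorded in the session log rather than failing the whole activity.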
articles/data-factory/copy-activity-fault-tolerance.md

Lines changed: 6 additions & 6 deletions
@@ -127,7 +127,7 @@ From the log above, you can see bigfile.csv has been skipped due to another appl
 ## Copying tabular data
 
 ### Supported scenarios
-Copy Activity supports three scenarios for detecting, skipping, and logging incompatible tabular data:
+Copy activity supports three scenarios for detecting, skipping, and logging incompatible tabular data:
 
 - **Incompatibility between the source data type and the sink native type**.
@@ -139,15 +139,15 @@ Copy Activity supports three scenarios for detecting, skipping, and logging inco
 
 - **Primary key violation when writing to SQL Server/Azure SQL Database/Azure Cosmos DB**.
 
-For example: Copy data from a SQL server to a SQL database. A primary key is defined in the sink SQL database, but no such primary key is defined in the source SQL server. The duplicated rows that exist in the source cannot be copied to the sink. Copy Activity copies only the first row of the source data into the sink. The subsequent source rows that contain the duplicated primary key value are detected as incompatible and are skipped.
+For example: Copy data from a SQL server to a SQL database. A primary key is defined in the sink SQL database, but no such primary key is defined in the source SQL server. The duplicated rows that exist in the source cannot be copied to the sink. Copy activity copies only the first row of the source data into the sink. The subsequent source rows that contain the duplicated primary key value are detected as incompatible and are skipped.
 
 >[!NOTE]
 >- For loading data into SQL Data Warehouse using PolyBase, configure PolyBase's native fault tolerance settings by specifying reject policies via "[polyBaseSettings](connector-azure-sql-data-warehouse.md#azure-sql-data-warehouse-as-sink)" in copy activity. You can still enable redirecting PolyBase incompatible rows to Blob or ADLS as normal as shown below.
 >- This feature doesn't apply when copy activity is configured to invoke [Amazon Redshift Unload](connector-amazon-redshift.md#use-unload-to-copy-data-from-amazon-redshift).
 >- This feature doesn't apply when copy activity is configured to invoke a [stored procedure from a SQL sink](https://docs.microsoft.com/azure/data-factory/connector-azure-sql-database#invoke-a-stored-procedure-from-a-sql-sink).
 
 ### Configuration
-The following example provides a JSON definition to configure skipping the incompatible rows in Copy Activity:
+The following example provides a JSON definition to configure skipping the incompatible rows in copy activity:
 
 ```json
 "typeProperties": {
@@ -216,12 +216,12 @@ Timestamp, Level, OperationName, OperationItem, Message
 From the sample log file above, you can see one row "data1, data2, data3" has been skipped due to type conversion issue from source to destination store. Another row "data4, data5, data6" has been skipped due to PK violation issue from source to destination store.
 
 
-## Copying tabular data (Legacy):
+## Copying tabular data (legacy):
 
 The following is the legacy way to enable fault tolerance for copying tabular data only. If you are creating new pipeline or activity, you are encouraged to start from [here](#copying-tabular-data) instead.
 
 ### Configuration
-The following example provides a JSON definition to configure skipping the incompatible rows in Copy Activity:
+The following example provides a JSON definition to configure skipping the incompatible rows in copy activity:
 
 ```json
 "typeProperties": {
@@ -277,7 +277,7 @@ data4, data5, data6, "2627", "Violation of PRIMARY KEY constraint 'PK_tblintstrd
 ```
 
 ## Next steps
-See the other Copy Activity articles:
+See the other copy activity articles:
 
 - [Copy activity overview](copy-activity-overview.md)
 - [Copy activity performance](copy-activity-performance.md)
