Commit 09f46f9

Update data-migration-guidance-s3-azure-storage.md
1 parent 2122cf0 commit 09f46f9

File tree

1 file changed: +3 −3 lines

articles/data-factory/data-migration-guidance-s3-azure-storage.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -42,7 +42,7 @@ The picture above illustrates how you can achieve great data movement speeds thr
 
 Within a single copy activity run, ADF has built-in retry mechanism so it can handle a certain level of transient failures in the data stores or in the underlying network.
 
-When doing binary copying from S3 to Blob and from S3 to ADLS Gen2, ADF automatically performs checkpointing. If a copy activity run has failed or timed out, on a subsequent retry (make sure to retry count > 1), the copy resumes from the last failure point instead of starting from the beginning.
+When doing binary copying from S3 to Blob and from S3 to ADLS Gen2, ADF automatically performs checkpointing. If a copy activity run has failed or timed out, on a subsequent retry, the copy resumes from the last failure point instead of starting from the beginning.
 
 ## Network security
 
```
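For context on the retry behavior this hunk touches: the retry count mentioned in the removed parenthetical is configured on the copy activity's `policy` block in the pipeline definition. A minimal sketch of where that setting lives, assuming a hypothetical activity name and illustrative values (not taken from the commit):

```json
{
  "name": "CopyS3ToBlob",
  "type": "Copy",
  "policy": {
    "retry": 3,
    "retryIntervalInSeconds": 60,
    "timeout": "7.00:00:00"
  }
}
```

With `retry` greater than zero, a failed or timed-out run is retried automatically, which is the point at which the checkpoint-based resume described in the added line can take effect.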

```diff
@@ -81,7 +81,7 @@ Migrate data over private link:
 
 ### Initial snapshot data migration
 
-Data partition is recommended especially when migrating more than 10 TB of data. To partition the data, leverage the ‘prefix’ setting to filter the folders and files in Amazon S3 by name, and then each ADF copy job can copy one partition at a time. You can run multiple ADF copy jobs concurrently for better throughput.
+Data partition is recommended especially when migrating more than 100 TB of data. To partition the data, leverage the ‘prefix’ setting to filter the folders and files in Amazon S3 by name, and then each ADF copy job can copy one partition at a time. You can run multiple ADF copy jobs concurrently for better throughput.
 
 If any of the copy jobs fail due to network or data store transient issue, you can rerun the failed copy job to reload that specific partition again from AWS S3. All other copy jobs loading other partitions will not be impacted.
 
```
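The ‘prefix’ setting this hunk refers to filters S3 objects by name in the copy activity source. A sketch of how it might appear for a binary copy, where the prefix value and the surrounding property names are assumptions based on the ADF Amazon S3 connector rather than anything shown in this commit:

```json
"source": {
  "type": "BinarySource",
  "storeSettings": {
    "type": "AmazonS3ReadSettings",
    "recursive": true,
    "prefix": "backup/2020/01"
  }
}
```

Each concurrent copy job would use a different prefix (for example, one per top-level folder or date range), so a failed job can be rerun to reload only its own partition.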

```diff
@@ -149,4 +149,4 @@ Here is the [template](solution-template-migration-s3-azure.md) to start with to
 
 ## Next steps
 
-- [Copy files from multiple containers with Azure Data Factory](solution-template-copy-files-multiple-containers.md)
+- [Copy files from multiple containers with Azure Data Factory](solution-template-copy-files-multiple-containers.md)
```
