
Commit 81a18a3

Merge pull request #111466 from linda33wj/master
Update ADF copy related articles
2 parents d389c95 + d6a2f08

File tree

3 files changed: +5 -8 lines


articles/data-factory/connector-teradata.md

Lines changed: 1 addition & 1 deletion
@@ -254,7 +254,7 @@ You are suggested to enable parallel copy with data partitioning especially when

  | Scenario | Suggested settings |
  | ------------------------------------------------------------ | ------------------------------------------------------------ |
- | Full load from large table. | **Partition option**: Hash. <br><br/>During execution, Data Factory automatically detects the PK column, applies a hash against it, and copies data by partitions. |
+ | Full load from large table. | **Partition option**: Hash. <br><br/>During execution, Data Factory automatically detects the primary index column, applies a hash against it, and copies data by partitions. |
  | Load large amount of data by using a custom query. | **Partition option**: Hash.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfHashPartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to apply the hash partition. If not specified, Data Factory automatically detects the PK column of the table you specified in the Teradata dataset.<br><br>During execution, Data Factory replaces `?AdfHashPartitionCondition` with the hash partition logic, and sends it to Teradata. |
  | Load large amount of data by using a custom query, having an integer column with evenly distributed values for range partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfRangePartitionColumnName <= ?AdfRangePartitionUpbound AND ?AdfRangePartitionColumnName >= ?AdfRangePartitionLowbound AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data. You can partition against a column with an integer data type.<br>**Partition upper bound** and **partition lower bound**: Specify if you want to filter against the partition column to retrieve data only between the lower and upper range.<br><br>During execution, Data Factory replaces `?AdfRangePartitionColumnName`, `?AdfRangePartitionUpbound`, and `?AdfRangePartitionLowbound` with the actual column name and value ranges for each partition, and sends them to Teradata. <br>For example, if your partition column "ID" is set with the lower bound as 1 and the upper bound as 80, with parallel copy set as 4, Data Factory retrieves data in 4 partitions. Their IDs are between [1,20], [21, 40], [41, 60], and [61, 80], respectively. |
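The ID-range example in the last row of the table can be checked with a short sketch. This is illustrative only: the boundary arithmetic below is an assumption based on the worked example ([1,20], [21,40], [41,60], [61,80]), not Data Factory's actual implementation.

```python
def range_partitions(lower, upper, parallel_copies):
    """Split the inclusive integer range [lower, upper] into contiguous,
    near-equal sub-ranges, mirroring the dynamic-range-partition example
    from the table above (illustrative, not ADF's real algorithm)."""
    total = upper - lower + 1
    base, rem = divmod(total, parallel_copies)
    bounds = []
    start = lower
    for i in range(parallel_copies):
        size = base + (1 if i < rem else 0)  # spread any remainder over the first partitions
        end = start + size - 1
        bounds.append((start, end))
        start = end + 1
    return bounds

# The article's example: column "ID", lower bound 1, upper bound 80, parallel copy 4.
print(range_partitions(1, 80, 4))  # [(1, 20), (21, 40), (41, 60), (61, 80)]
```

Each tuple corresponds to one substitution of `?AdfRangePartitionLowbound` / `?AdfRangePartitionUpbound` in the query template.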

articles/data-factory/control-flow-get-metadata-activity.md

Lines changed: 2 additions & 1 deletion
@@ -13,7 +13,7 @@ ms.workload: data-services

  ms.topic: conceptual
- ms.date: 03/02/2020
+ ms.date: 04/15/2020
  ms.author: jingwang

  ---

@@ -52,6 +52,7 @@ The Get Metadata activity takes a dataset as an input and returns metadata information

  | [SFTP](connector-sftp.md) | √/√ | √/√ || x/x | √/√ || x ||| √/√ |
  | [FTP](connector-ftp.md) | √/√ | √/√ || x/x | x/x || x ||| √/√ |

+ - When using the Get Metadata activity against a folder, make sure you have LIST/EXECUTE permission on the given folder.
  - For Amazon S3 and Google Cloud Storage, `lastModified` applies to the bucket and the key but not to the virtual folder, and `exists` applies to the bucket and the key but not to the prefix or virtual folder.
  - For Azure Blob storage, `lastModified` applies to the container and the blob but not to the virtual folder.
  - The `lastModified` filter currently applies to child items but not to the specified folder/file itself.
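As a hedged sketch of how the notes above come together, a Get Metadata activity requesting folder metadata could look roughly like this in a pipeline definition. The activity and dataset names are hypothetical; `exists`, `lastModified`, and `childItems` are among the metadata fields the surrounding section discusses.

```json
{
    "name": "GetFolderMetadata",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {
            "referenceName": "MyFolderDataset",
            "type": "DatasetReference"
        },
        "fieldList": [ "exists", "lastModified", "childItems" ]
    }
}
```

Running this against a folder requires the LIST/EXECUTE permission noted above.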

articles/data-factory/copy-activity-schema-and-type-mapping.md

Lines changed: 2 additions & 6 deletions
@@ -12,7 +12,7 @@ ms.workload: data-services

  ms.topic: conceptual
- ms.date: 02/13/2020
+ ms.date: 04/15/2020
  ms.author: jingwang

  ---

@@ -276,11 +276,7 @@ Copy activity performs source types to sink types mapping with the following 2-step conversion:

  1. Convert from native source types to Azure Data Factory interim data types
  2. Convert from Azure Data Factory interim data types to native sink types

- You can find the mapping between native type to interim type in the "Data type mapping" section in each connector topic.
-
- ### Supported data types
-
- Data Factory supports the following interim data types: You can specify below values when configuring type information in [dataset structure](concepts-datasets-linked-services.md#dataset-structure-or-schema) configuration:
+ Copy activity supports the following interim data types:

  * Byte[]
  * Boolean
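The 2-step conversion described in this diff can be sketched in a few lines. The native type names here are hypothetical examples; the real native-to-interim mappings live in the "Data type mapping" section of each connector article, and `Int32`, `String`, and `Datetime` are interim types from the list above.

```python
# Step 1 (assumed, illustrative): native source type -> ADF interim data type
SOURCE_TO_INTERIM = {
    "INTEGER": "Int32",
    "VARCHAR": "String",
    "TIMESTAMP": "Datetime",
}

# Step 2 (assumed, illustrative): ADF interim data type -> native sink type
INTERIM_TO_SINK = {
    "Int32": "int",
    "String": "nvarchar",
    "Datetime": "datetime2",
}

def map_type(source_type):
    """Map a source type to a sink type via the interim type, mirroring
    the two-step conversion the article describes."""
    interim = SOURCE_TO_INTERIM[source_type]
    return INTERIM_TO_SINK[interim]

print(map_type("VARCHAR"))  # nvarchar
```

The point of the interim layer is that each connector only has to define one mapping (native-to-interim), rather than one mapping per source/sink pair.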
