Commit 9d541b7

Address several connector/copy doc feedback
1 parent 318118f commit 9d541b7

7 files changed: +12 additions, −11 deletions


articles/data-factory/connector-rest.md

Lines changed: 1 addition & 1 deletion
@@ -212,7 +212,7 @@ The following properties are supported in the copy activity **source** section:
 | requestInterval | The time to wait before sending the request for next page. The default value is **00:00:01** | No |
 
 >[!NOTE]
->REST connector ignores any "Accept" header specified in `additionalHeaders`. As REST connector only support response in JSON, tt will auto generate a header of `Accept: application/json`.
+>The REST connector ignores any "Accept" header specified in `additionalHeaders`. Because the REST connector only supports responses in JSON, it auto-generates a header of `Accept: application/json`.
 
 **Example 1: Using the Get method with pagination**
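For context, a minimal sketch of a copy activity **source** using `additionalHeaders` and pagination; the header names, pagination rule, and values here are illustrative, not taken from the article:

```json
"source": {
    "type": "RestSource",
    "additionalHeaders": {
        "x-user-defined": "helloworld",
        "Accept": "application/xml"
    },
    "paginationRules": {
        "AbsoluteUrl": "$.paging.next"
    },
    "requestInterval": "00:00:01"
}
```

Per the note above, the `Accept: application/xml` entry would be ignored and replaced with `Accept: application/json`.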

articles/data-factory/control-flow-get-metadata-activity.md

Lines changed: 3 additions & 2 deletions
@@ -13,7 +13,7 @@ ms.workload: data-services
 ms.tgt_pltfrm: na
 
 ms.topic: conceptual
-ms.date: 11/20/2019
+ms.date: 11/26/2019
 ms.author: jingwang
 
 ---
@@ -50,10 +50,11 @@ The Get Metadata activity takes a dataset as an input and returns metadata infor
 | [Azure Files](connector-azure-file-storage.md) | √/√ | √/√ || √/√ | √/√ || x ||| √/√ |
 | [File system](connector-file-system.md) | √/√ | √/√ || √/√ | √/√ || x ||| √/√ |
 | [SFTP](connector-sftp.md) | √/√ | √/√ || x/x | √/√ || x ||| √/√ |
-| [FTP](connector-ftp.md) | √/√ | √/√ || x/x | √/√ || x ||| √/√ |
+| [FTP](connector-ftp.md) | √/√ | √/√ || x/x | x/x || x ||| √/√ |
 
 - For Amazon S3 and Google Cloud Storage, `lastModified` applies to the bucket and the key but not to the virtual folder, and `exists` applies to the bucket and the key but not to the prefix or virtual folder.
 - For Azure Blob storage, `lastModified` applies to the container and the blob but not to the virtual folder.
+- The `lastModified` filter currently applies to filtering child items, not to the specified folder/file itself.
 - Wildcard filter on folders/files is not supported for Get Metadata activity.
 
 **Relational database**
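For reference, a minimal sketch of a Get Metadata activity requesting the fields discussed above; the activity and dataset names are illustrative:

```json
{
    "name": "GetFolderMetadata",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {
            "referenceName": "MyFolderDataset",
            "type": "DatasetReference"
        },
        "fieldList": ["childItems", "lastModified", "exists"]
    }
}
```

Per the new bullet above, a `lastModified` filter constrains which child items are returned; it doesn't decide whether the specified folder or file itself qualifies.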

articles/data-factory/format-binary.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ ms.reviewer: craigg
 ms.service: data-factory
 ms.workload: data-services
 ms.topic: conceptual
-ms.date: 08/06/2019
+ms.date: 11/26/2019
 ms.author: jingwang
 
 ---
@@ -31,7 +31,7 @@ For a full list of sections and properties available for defining datasets, see
 | type | The type property of the dataset must be set to **Binary**. | Yes |
 | location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. **See details in connector article -> Dataset properties section**. | Yes |
 | compression | Group of properties to configure file compression. Configure this section when you want to do compression/decompression during activity execution. | No |
-| type | The compression codec used to read/write binary files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**. to use when saving the file. | No |
+| type | The compression codec used to read/write binary files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, and **ZipDeflate**.<br>Note: when decompressing ZipDeflate file(s) and writing to a file-based sink data store, files are extracted to the folder `<path specified in dataset>/<folder named as source zip file>/`. | No |
 | level | The compression ratio. Apply when dataset is used in Copy activity sink.<br>Allowed values are **Optimal** or **Fastest**.<br>- **Fastest:** The compression operation should complete as quickly as possible, even if the resulting file is not optimally compressed.<br>- **Optimal**: The compression operation should be optimally compressed, even if the operation takes a longer time to complete. For more information, see [Compression Level](https://msdn.microsoft.com/library/system.io.compression.compressionlevel.aspx) topic. | No |
 
 Below is an example of Binary dataset on Azure Blob Storage:
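The article's full example isn't shown in this diff; as a stand-in, a minimal sketch of a Binary dataset reading a zip archive, where the linked service, container, and file names are illustrative:

```json
{
    "name": "BinaryZipDataset",
    "properties": {
        "type": "Binary",
        "linkedServiceName": {
            "referenceName": "MyAzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "input",
                "fileName": "archive.zip"
            },
            "compression": {
                "type": "ZipDeflate"
            }
        }
    }
}
```

With a file-based sink, the note added above means the extracted files would land under `<sink path>/archive/`.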

articles/data-factory/format-delimited-text.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ ms.reviewer: craigg
 ms.service: data-factory
 ms.workload: data-services
 ms.topic: conceptual
-ms.date: 04/29/2019
+ms.date: 11/26/2019
 ms.author: jingwang
 
 ---
@@ -34,7 +34,7 @@ For a full list of sections and properties available for defining datasets, see
 | firstRowAsHeader | Specifies whether to treat/make the first row as a header line with names of columns.<br>Allowed values are **true** and **false** (default). | No |
 | nullValue | Specifies the string representation of null value. <br>The default value is **empty string**. | No |
 | encodingName | The encoding type used to read/write test files. <br>Allowed values are as follows: "UTF-8", "UTF-16", "UTF-16BE", "UTF-32", "UTF-32BE", "US-ASCII", “UTF-7”, "BIG5", "EUC-JP", "EUC-KR", "GB2312", "GB18030", "JOHAB", "SHIFT-JIS", "CP875", "CP866", "IBM00858", "IBM037", "IBM273", "IBM437", "IBM500", "IBM737", "IBM775", "IBM850", "IBM852", "IBM855", "IBM857", "IBM860", "IBM861", "IBM863", "IBM864", "IBM865", "IBM869", "IBM870", "IBM01140", "IBM01141", "IBM01142", "IBM01143", "IBM01144", "IBM01145", "IBM01146", "IBM01147", "IBM01148", "IBM01149", "ISO-2022-JP", "ISO-2022-KR", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-13", "ISO-8859-15", "WINDOWS-874", "WINDOWS-1250", "WINDOWS-1251", "WINDOWS-1252", "WINDOWS-1253", "WINDOWS-1254", "WINDOWS-1255", "WINDOWS-1256", "WINDOWS-1257", "WINDOWS-1258”.<br>Note mapping data flow doesn’t support UTF-7 encoding. | No |
-| compressionCodec | The compression codec used to read/write text files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **snappy**, or **lz4**. to use when saving the file. <br>Note currently Copy activity doesn’t support snappy & lz4, and mapping data flow doesn’t support ZipDeflate. | No |
+| compressionCodec | The compression codec used to read/write text files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **snappy**, or **lz4**.<br>Note that currently Copy activity doesn’t support "snappy" and "lz4", and mapping data flow doesn’t support "ZipDeflate".<br>Note: when decompressing ZipDeflate file(s) and writing to a file-based sink data store, files are extracted to the folder `<path specified in dataset>/<folder named as source zip file>/`. | No |
 | compressionLevel | The compression ratio. <br>Allowed values are **Optimal** or **Fastest**.<br>- **Fastest:** The compression operation should complete as quickly as possible, even if the resulting file is not optimally compressed.<br>- **Optimal**: The compression operation should be optimally compressed, even if the operation takes a longer time to complete. For more information, see [Compression Level](https://msdn.microsoft.com/library/system.io.compression.compressionlevel.aspx) topic. | No |
 
 Below is an example of delimited text dataset on Azure Blob Storage:
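The article's full example isn't shown in this diff; as a stand-in, a minimal sketch of a delimited text dataset using the compression properties above, where the linked service, container, and path names are illustrative:

```json
{
    "name": "DelimitedTextGzipDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "MyAzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "output",
                "folderPath": "csv"
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": true,
            "compressionCodec": "gzip",
            "compressionLevel": "Fastest"
        }
    }
}
```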

articles/data-factory/format-json.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ ms.reviewer: craigg
 ms.service: data-factory
 ms.workload: data-services
 ms.topic: conceptual
-ms.date: 10/24/2019
+ms.date: 11/26/2019
 ms.author: jingwang
 
 ---
@@ -28,7 +28,7 @@ For a full list of sections and properties available for defining datasets, see
 | type | The type property of the dataset must be set to **Json**. | Yes |
 | location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. **See details in connector article -> Dataset properties section**. | Yes |
 | encodingName | The encoding type used to read/write test files. <br>Allowed values are as follows: "UTF-8", "UTF-16", "UTF-16BE", "UTF-32", "UTF-32BE", "US-ASCII", "UTF-7", "BIG5", "EUC-JP", "EUC-KR", "GB2312", "GB18030", "JOHAB", "SHIFT-JIS", "CP875", "CP866", "IBM00858", "IBM037", "IBM273", "IBM437", "IBM500", "IBM737", "IBM775", "IBM850", "IBM852", "IBM855", "IBM857", "IBM860", "IBM861", "IBM863", "IBM864", "IBM865", "IBM869", "IBM870", "IBM01140", "IBM01141", "IBM01142", "IBM01143", "IBM01144", "IBM01145", "IBM01146", "IBM01147", "IBM01148", "IBM01149", "ISO-2022-JP", "ISO-2022-KR", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-13", "ISO-8859-15", "WINDOWS-874", "WINDOWS-1250", "WINDOWS-1251", "WINDOWS-1252", "WINDOWS-1253", "WINDOWS-1254", "WINDOWS-1255", "WINDOWS-1256", "WINDOWS-1257", "WINDOWS-1258".| No |
-| compressionCodec | The compression codec used to read/write text files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **snappy**, or **lz4**. to use when saving the file. <br>Note currently Copy activity doesn’t support "snappy" & "lz4". | No |
+| compressionCodec | The compression codec used to read/write text files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **snappy**, or **lz4**.<br>Note that currently Copy activity doesn’t support "snappy" and "lz4".<br>Note: when decompressing ZipDeflate file(s) and writing to a file-based sink data store, files are extracted to the folder `<path specified in dataset>/<folder named as source zip file>/`. | No |
 | compressionLevel | The compression ratio. <br>Allowed values are **Optimal** or **Fastest**.<br>- **Fastest:** The compression operation should complete as quickly as possible, even if the resulting file is not optimally compressed.<br>- **Optimal**: The compression operation should be optimally compressed, even if the operation takes a longer time to complete. For more information, see [Compression Level](https://msdn.microsoft.com/library/system.io.compression.compressionlevel.aspx) topic. | No |
 
 Below is an example of JSON dataset on Azure Blob Storage:
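The article's full example isn't shown in this diff; as a stand-in, a minimal sketch of a JSON dataset combining `encodingName` with a compression codec, with property placement following the table above and all names and values illustrative:

```json
{
    "name": "JsonGzipDataset",
    "properties": {
        "type": "Json",
        "linkedServiceName": {
            "referenceName": "MyAzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "input",
                "fileName": "data.json.gz"
            },
            "encodingName": "UTF-8",
            "compressionCodec": "gzip"
        }
    }
}
```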

articles/data-factory/tutorial-bulk-copy-portal.md

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ Create an Azure SQL Database with Adventure Works LT sample data following [Crea
 
 1. If you don't have an Azure SQL Data Warehouse, see the [Create a SQL Data Warehouse](../sql-data-warehouse/sql-data-warehouse-get-started-tutorial.md) article for steps to create one.
 
-1. Create corresponding table schemas in SQL Data Warehouse. You can use [Migration Utility](https://www.microsoft.com/download/details.aspx?id=49100) to **migrate schema** from Azure SQL Database to Azure SQL Data Warehouse. You use Azure Data Factory to migrate/copy data in a later step.
+1. Create corresponding table schemas in SQL Data Warehouse. You use Azure Data Factory to migrate/copy data in a later step.
 
 ## Azure services to access SQL server

articles/data-factory/tutorial-bulk-copy.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ Create an Azure SQL Database with Adventure Works LT sample data following [Crea
 
 1. If you don't have an Azure SQL Data Warehouse, see the [Create a SQL Data Warehouse](../sql-data-warehouse/sql-data-warehouse-get-started-tutorial.md) article for steps to create one.
 
-2. Create corresponding table schemas in SQL Data Warehouse. You can use [Migration Utility](https://www.microsoft.com/download/details.aspx?id=49100) to **migrate schema** from Azure SQL Database to Azure SQL Data Warehouse. You use Azure Data Factory to migrate/copy data in a later step.
+2. Create corresponding table schemas in SQL Data Warehouse. You use Azure Data Factory to migrate/copy data in a later step.
 
 ## Azure services to access SQL server
