Commit 093f83c

Merge pull request #104603 from linda33wj/master

Update ADF copy content

2 parents: de0c3d7 + 9866283

6 files changed (+16 −14 lines)

articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@ description: List of services that support managed identities for Azure resources
 services: active-directory
 author: MarkusVi
 ms.author: markvi
-ms.date: 09/24/2019
+ms.date: 02/13/2020
 ms.topic: conceptual
 ms.service: active-directory
 ms.subservice: msi
@@ -109,7 +109,7 @@ Refer to the following list to configure managed identity for Azure Logic Apps (
 
 Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
 | --- | --- | --- | --- | --- |
-| System assigned | Available | Not available | Not available | Not available |
+| System assigned | Available | Available | Not available | Available |
 | User assigned | Not available | Not available | Not available | Not available |
 
 Refer to the following list to configure managed identity for Azure Data Factory V2 (in regions where available):

articles/data-factory/connector-db2.md

Lines changed: 4 additions & 1 deletion
@@ -12,7 +12,7 @@ ms.workload: data-services
 
 
 ms.topic: conceptual
-ms.date: 01/14/2020
+ms.date: 02/17/2020
 
 ms.author: jingwang
 
@@ -45,6 +45,9 @@ Specifically, this DB2 connector supports the following IBM DB2 platforms and versions:
 * IBM DB2 for LUW 10.5
 * IBM DB2 for LUW 10.1
 
+>[!TIP]
+>DB2 connector is built on top of Microsoft OLE DB Provider for DB2. To troubleshoot DB2 connector errors, refer to [Data Provider Error Codes](https://docs.microsoft.com/host-integration-server/db2oledbv/data-provider-error-codes#drda-protocol-errors).
+
 ## Prerequisites
 
 [!INCLUDE [data-factory-v2-integration-runtime-requirements](../../includes/data-factory-v2-integration-runtime-requirements.md)]

articles/data-factory/connector-oracle.md

Lines changed: 1 addition & 2 deletions
@@ -12,7 +12,7 @@ ms.workload: data-services
 
 
 ms.topic: conceptual
-ms.date: 01/09/2020
+ms.date: 02/13/2020
 ms.author: jingwang
 
 ---
@@ -42,7 +42,6 @@ Specifically, this Oracle connector supports:
 - Oracle 9i R2 (9.2) and higher
 - Oracle 8i R3 (8.1.7) and higher
 - Oracle Database Cloud Exadata Service
-- Copying data by using Basic or OID authentications.
 - Parallel copying from an Oracle source. See the [Parallel copy from Oracle](#parallel-copy-from-oracle) section for details.
 
 > [!Note]

articles/data-factory/copy-activity-schema-and-type-mapping.md

Lines changed: 6 additions & 6 deletions
@@ -12,7 +12,7 @@ ms.workload: data-services
 
 
 ms.topic: conceptual
-ms.date: 04/29/2019
+ms.date: 02/13/2020
 ms.author: jingwang
 
 ---
@@ -257,11 +257,11 @@ Configure the schema-mapping rule as the following copy activity JSON sample:
 "translator": {
     "type": "TabularTranslator",
     "schemaMapping": {
-        "orderNumber": "$.number",
-        "orderDate": "$.date",
-        "order_pd": "prod",
-        "order_price": "price",
-        "city": " $.city[0].name"
+        "$.number": "orderNumber",
+        "$.date": "orderDate",
+        "prod": "order_pd",
+        "price": "order_price",
+        "$.city[0].name": "city"
     },
     "collectionReference": "$.orders"
 }
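
The direction of the reordered `schemaMapping` entries matters: the keys are source-side JSON paths, resolved against each item of the `collectionReference` array, and the values are sink column names. The following minimal Python sketch illustrates those semantics only; it is not Data Factory code, and the tiny path resolver handles just the path shapes used in this sample:

```python
def resolve(path, obj):
    """Resolve a tiny JSONPath subset ('$.a', '$.a[0].b') or a bare
    property name against nested dicts/lists."""
    steps = path.lstrip("$.").replace("]", "").replace("[", ".").split(".")
    for step in steps:
        obj = obj[int(step)] if isinstance(obj, list) else obj.get(step)
    return obj

# Corrected mapping from the diff: source JSON path -> sink column name.
schema_mapping = {
    "$.number": "orderNumber",
    "$.date": "orderDate",
    "prod": "order_pd",
    "price": "order_price",
    "$.city[0].name": "city",
}

# Hypothetical source record for illustration.
source = {"orders": [{"number": 1001, "date": "2020-02-13", "prod": "pen",
                      "price": 1.5, "city": [{"name": "Seattle"}]}]}

# collectionReference "$.orders": emit one flat row per array item.
rows = [{col: resolve(path, item) for path, col in schema_mapping.items()}
        for item in resolve("$.orders", source)]
print(rows)
# → [{'orderNumber': 1001, 'orderDate': '2020-02-13', 'order_pd': 'pen',
#     'order_price': 1.5, 'city': 'Seattle'}]
```

With the pre-fix key order (sink name as key), the paths would never be resolved against the source, which is why the sample had to be flipped.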

articles/data-factory/format-avro.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ ms.reviewer: craigg
 ms.service: data-factory
 ms.workload: data-services
 ms.topic: conceptual
-ms.date: 09/04/2019
+ms.date: 02/13/2020
 ms.author: jingwang
 
 ---
@@ -27,7 +27,7 @@ For a full list of sections and properties available for defining datasets, see
 | ---------------- | ------------------------------------------------------------ | -------- |
 | type | The type property of the dataset must be set to **Avro**. | Yes |
 | location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. **See details in connector article -> Dataset properties section**. | Yes |
-| avroCompressionCodec | The compression codec to use when writing to Avro files. When reading from Avro files, Data Factory automatically determine the compression codec based on the file metadata.<br>Supported types are "**none**" (default), "**deflate**", "**snappy**". | No |
+| avroCompressionCodec | The compression codec to use when writing to Avro files. When reading from Avro files, Data Factory automatically determine the compression codec based on the file metadata.<br>Supported types are "**none**" (default), "**deflate**", "**snappy**". Note currently Copy activity doesn't support Snappy when read/write Avro files. | No |
 
 > [!NOTE]
 > White space in column name is not supported for Avro files.
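
The amended `avroCompressionCodec` row encodes two separate constraints: which values the property accepts, and the added Copy-activity restriction on Snappy. An illustrative check of both rules (a hypothetical helper, not part of any Data Factory SDK):

```python
AVRO_CODECS = {"none", "deflate", "snappy"}   # "none" is the default
COPY_ACTIVITY_BLOCKED = {"snappy"}            # per the note: Copy activity
                                              # can't read/write Snappy Avro

def check_avro_codec(codec="none"):
    """Validate an avroCompressionCodec value per the table above."""
    if codec not in AVRO_CODECS:
        raise ValueError(f"unsupported avroCompressionCodec: {codec!r}")
    if codec in COPY_ACTIVITY_BLOCKED:
        raise ValueError("Copy activity does not support Snappy for Avro files")
    return codec

print(check_avro_codec("deflate"))  # → deflate
```

The point of the doc change is exactly the second branch: "snappy" is a valid codec value but still fails in a Copy activity.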

articles/data-factory/format-parquet.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ For a full list of sections and properties available for defining datasets, see
 | ---------------- | ------------------------------------------------------------ | -------- |
 | type | The type property of the dataset must be set to **Parquet**. | Yes |
 | location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. **See details in connector article -> Dataset properties section**. | Yes |
-| compressionCodec | The compression codec to use when writing to Parquet files. When reading from Parquet files, Data Factory automatically determine the compression codec based on the file metadata.<br>Supported types are “**none**”, “**gzip**”, “**snappy**” (default), and "**lzo**". Note currently Copy activity doesn't support LZO. | No |
+| compressionCodec | The compression codec to use when writing to Parquet files. When reading from Parquet files, Data Factory automatically determine the compression codec based on the file metadata.<br>Supported types are “**none**”, “**gzip**”, “**snappy**” (default), and "**lzo**". Note currently Copy activity doesn't support LZO when read/write Parquet files. | No |
 
 > [!NOTE]
 > White space in column name is not supported for Parquet files.
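
The Parquet `compressionCodec` row carries the same two-level rule, with a different default ("snappy") and a different blocked codec (LZO). A sketch mirroring the amended row (hypothetical helper, not a Data Factory API):

```python
PARQUET_CODECS = {"none", "gzip", "snappy", "lzo"}  # "snappy" is the default
COPY_ACTIVITY_BLOCKED = {"lzo"}                     # per the note: Copy activity
                                                    # can't read/write LZO Parquet

def check_parquet_codec(codec="snappy"):
    """Validate a compressionCodec value per the table above."""
    if codec not in PARQUET_CODECS:
        raise ValueError(f"unsupported compressionCodec: {codec!r}")
    if codec in COPY_ACTIVITY_BLOCKED:
        raise ValueError("Copy activity does not support LZO for Parquet files")
    return codec

print(check_parquet_codec())  # → snappy
```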
