Commit 867de3a

Merge pull request #114219 from linda33wj/master
Update ADF connector articles
2 parents 4b73f62 + 4a016b5 commit 867de3a

8 files changed (+107 −57 lines)

articles/data-factory/concepts-integration-runtime.md

Lines changed: 6 additions & 1 deletion
@@ -123,14 +123,19 @@ The IR Location defines the location of its back-end compute, and essentially th

### Azure IR location

You can set a certain location of an Azure IR, in which case the activity execution or dispatch happens in that specific region.

If you choose to use the auto-resolve Azure IR, which is the default:

- For copy activity, ADF makes a best effort to automatically detect your sink data store's location, then uses the IR in the same region if available, or the closest one in the same geography; if the sink data store's region is not detectable, the IR in the data factory region is used as an alternative.

For example, suppose your factory is created in East US:

- When copying data to Azure Blob in West US, if ADF successfully detects that the Blob is in West US, the copy activity is executed on the IR in West US; if the region detection fails, it is executed on the IR in East US.
- When copying data to Salesforce, whose region is not detectable, the copy activity is executed on the IR in East US.

>[!TIP]
>If you have strict data compliance requirements and need to ensure that data does not leave a certain geography, you can explicitly create an Azure IR in that region and point the linked service to it using the ConnectVia property. For example, to copy data from Blob in UK South to SQL DW in UK South and ensure that the data does not leave the UK, create an Azure IR in UK South and link both linked services to it.

- For Lookup/GetMetadata/Delete activity execution (also known as Pipeline activities), transformation activity dispatching (also known as External activities), and authoring operations (test connection, browse folder list and table list, preview data), ADF uses the IR in the data factory region.

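To make the tip above concrete, a linked service pinned to a regional Azure IR through ConnectVia looks roughly like this. This is a sketch: the linked service type and the IR name are illustrative placeholders, following the `connectVia` pattern used in the connector examples later on this page.

```json
{
    "name": "AzureBlobStorageUKSouth",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "<connection string>"
        },
        "connectVia": {
            "referenceName": "<Azure IR created in UK South>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```

Point every linked service involved in the copy at the same regional IR so that both source and sink traffic stays in that geography.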
articles/data-factory/connector-azure-blob-storage.md

Lines changed: 3 additions & 3 deletions
@@ -9,7 +9,7 @@ ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
ms.custom: seo-lt-2019
ms.date: 05/06/2020
---

# Copy and transform data in Azure Blob storage by using Azure Data Factory
@@ -156,7 +156,7 @@ To use shared access signature authentication, the following properties are supp
"typeProperties": {
    "sasUri": {
        "type": "SecureString",
        "value": "<SAS URI of the Azure Storage resource e.g. https://<accountname>.blob.core.windows.net/?sv=<storage version>&amp;st=<start time>&amp;se=<expire time>&amp;sr=<resource>&amp;sp=<permissions>&amp;sip=<ip range>&amp;spr=<protocol>&amp;sig=<signature>>"
    }
},
"connectVia": {
@@ -177,7 +177,7 @@ To use shared access signature authentication, the following properties are supp
"typeProperties": {
    "sasUri": {
        "type": "SecureString",
        "value": "<SAS URI of the Azure Storage resource without token e.g. https://<accountname>.blob.core.windows.net/>"
    },
    "sasToken": {
        "type": "AzureKeyVaultSecret",

articles/data-factory/connector-db2.md

Lines changed: 53 additions & 2 deletions
@@ -12,7 +12,7 @@ ms.workload: data-services
ms.topic: conceptual
ms.date: 05/07/2020

ms.author: jingwang

@@ -69,20 +69,71 @@ The following properties are supported for DB2 linked service:
| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property must be set to: **Db2** | Yes |
| connectionString | Specify the information needed to connect to the DB2 instance.<br/> You can also put the password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to the following samples and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article for more details. | Yes |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from the [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |

Typical properties inside the connection string:

| Property | Description | Required |
|:--- |:--- |:--- |
| server |Name of the DB2 server. You can specify the port number following the server name, delimited by a colon, e.g. `server:port`. |Yes |
| database |Name of the DB2 database. |Yes |
| authenticationType |Type of authentication used to connect to the DB2 database.<br/>Allowed value is: **Basic**. |Yes |
| username |Specify the user name to connect to the DB2 database. |Yes |
| password |Specify the password for the user account you specified for the username. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
| packageCollection | Specify the package collection under which ADF auto-creates the needed packages when querying the database. | No |
| certificateCommonName | When you use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) encryption, you must enter a value for the certificate common name. | No |

> [!TIP]
> If you receive an error message that states `The package corresponding to an SQL statement execution request was not found. SQLSTATE=51002 SQLCODE=-805`, the reason is that a needed package was not created for the user. By default, ADF tries to create the package under a collection named after the user you used to connect to DB2. Specify the packageCollection property to indicate where you want ADF to create the needed packages when querying the database.

**Example:**

```json
{
    "name": "Db2LinkedService",
    "properties": {
        "type": "Db2",
        "typeProperties": {
            "connectionString": "server=<server:port>;database=<database>;authenticationType=Basic;username=<username>;password=<password>;packageCollection=<packagecollection>;certificateCommonName=<certname>;"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```

**Example: store password in Azure Key Vault**

```json
{
    "name": "Db2LinkedService",
    "properties": {
        "type": "Db2",
        "typeProperties": {
            "connectionString": "server=<server:port>;database=<database>;authenticationType=Basic;username=<username>;packageCollection=<packagecollection>;certificateCommonName=<certname>;",
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "<Azure Key Vault linked service name>",
                    "type": "LinkedServiceReference"
                },
                "secretName": "<secretName>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```

If you were using a DB2 linked service with the following payload, it is still supported as-is, but we suggest that you use the new one going forward.

**Previous payload:**

```json
{
    "name": "Db2LinkedService",

articles/data-factory/connector-dynamics-crm-office-365.md

Lines changed: 5 additions & 1 deletion
@@ -12,7 +12,7 @@ author: linda33wj
manager: shwang
ms.reviewer: douglasl
ms.custom: seo-lt-2019
ms.date: 05/06/2020
---

# Copy data from and to Dynamics 365 (Common Data Service) or Dynamics CRM by using Azure Data Factory
@@ -57,6 +57,10 @@ This Dynamics connector is built on top of [Dynamics XRM tooling](https://docs.m
>[!TIP]
>To copy data from **Dynamics 365 Finance and Operations**, you can use the [Dynamics AX connector](connector-dynamics-ax.md).

## Prerequisites

To use this connector with AAD service principal authentication, you must set up Server-to-Server (S2S) authentication in Common Data Service or Dynamics. Refer to [this article](https://docs.microsoft.com/powerapps/developer/common-data-service/build-web-applications-server-server-s2s-authentication) for detailed steps.

## Get started

[!INCLUDE [data-factory-v2-connector-get-started](../../includes/data-factory-v2-connector-get-started.md)]
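For orientation, a Dynamics linked service that uses the S2S service principal set up above might look like the following sketch. The property names (`servicePrincipalId`, `servicePrincipalCredentialType`, `servicePrincipalCredential`) and values here are assumptions modeled on the other linked service examples in these articles; verify them against the linked service properties table in the connector article:

```json
{
    "name": "DynamicsLinkedService",
    "properties": {
        "type": "Dynamics",
        "typeProperties": {
            "deploymentType": "Online",
            "serviceUri": "https://<organization-name>.crm.dynamics.com",
            "authenticationType": "AADServicePrincipal",
            "servicePrincipalId": "<application (client) ID>",
            "servicePrincipalCredentialType": "ServicePrincipalKey",
            "servicePrincipalCredential": {
                "type": "SecureString",
                "value": "<client secret>"
            }
        }
    }
}
```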

articles/data-factory/connector-odbc.md

Lines changed: 2 additions & 41 deletions
@@ -12,7 +12,7 @@ ms.workload: data-services
ms.topic: conceptual
ms.date: 04/22/2020
ms.author: jingwang

---
@@ -33,7 +33,7 @@ This ODBC connector is supported for the following activities:

You can copy data from an ODBC source to any supported sink data store, or copy from any supported source data store to an ODBC sink. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.

Specifically, this ODBC connector supports copying data from/to **any ODBC-compatible data stores** using **Basic** or **Anonymous** authentication. A **64-bit ODBC driver** is required. For the ODBC sink, ADF supports the ODBC version 2.0 standard.

## Prerequisites

@@ -234,49 +234,10 @@ To copy data to ODBC-compatible data store, set the sink type in the copy activi
]
```

(This commit removes the SAP HANA sink section from this article; that content moves to the SAP HANA connector article, covered later on this page.)

## Lookup activity properties

To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).

## Troubleshoot connectivity issues

To troubleshoot connection issues, use the **Diagnostics** tab of **Integration Runtime Configuration Manager**.

articles/data-factory/connector-sap-hana.md

Lines changed: 30 additions & 2 deletions
@@ -10,7 +10,7 @@ ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
ms.custom: seo-lt-2019
ms.date: 04/22/2020
---

# Copy data from SAP HANA using Azure Data Factory
@@ -41,7 +41,7 @@ Specifically, this SAP HANA connector supports:
- Parallel copying from a SAP HANA source. See the [Parallel copy from SAP HANA](#parallel-copy-from-sap-hana) section for details.

> [!TIP]
> To copy data **into** a SAP HANA data store, use the generic ODBC connector. See the [SAP HANA sink](#sap-hana-sink) section for details. Note that the linked services for the SAP HANA connector and the ODBC connector have different types and thus cannot be reused.

## Prerequisites

@@ -294,6 +294,34 @@ When copying data from SAP HANA, the following mappings are used from SAP HANA d
| TIMESTAMP | DateTime |
| VARBINARY | Byte[] |

### SAP HANA sink

Currently, the SAP HANA connector is not supported as a sink, but you can use the generic ODBC connector with the SAP HANA driver to write data into SAP HANA.

Follow the [Prerequisites](#prerequisites) to set up a Self-hosted Integration Runtime and install the SAP HANA ODBC driver first. Create an ODBC linked service to connect to your SAP HANA data store as shown in the following example, then create the dataset and copy activity sink with ODBC type accordingly. Learn more from the [ODBC connector](connector-odbc.md) article.

```json
{
    "name": "SAPHANAViaODBCLinkedService",
    "properties": {
        "type": "Odbc",
        "typeProperties": {
            "connectionString": "Driver={HDBODBC};servernode=<HANA server>.clouddatahub-int.net:30015",
            "authenticationType": "Basic",
            "userName": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```

## Lookup activity properties

To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).

articles/data-factory/copy-activity-preserve-metadata.md

Lines changed: 3 additions & 1 deletion
@@ -10,7 +10,7 @@ ms.reviewer: douglasl
ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
ms.date: 05/06/2020
ms.author: jingwang

---
@@ -29,6 +29,8 @@ Copy activity supports preserving the following attributes during data copy:
- **All the customer-specified metadata**
- And the following **five data store built-in system properties**: `contentType`, `contentLanguage` (except for Amazon S3), `contentEncoding`, `contentDisposition`, `cacheControl`.

**Handle differences in metadata:** Amazon S3 and Azure Storage allow different sets of characters in the keys of customer-specified metadata. When you choose to preserve metadata using copy activity, ADF automatically replaces the invalid characters with '_'.

When you copy files as-is from Amazon S3/Azure Data Lake Storage Gen2/Azure Blob to Azure Data Lake Storage Gen2/Azure Blob with binary format, you can find the **Preserve** option on the **Copy Activity** > **Settings** tab for activity authoring or the **Settings** page in Copy Data Tool.

![Copy activity preserve metadata](./media/copy-activity-preserve-metadata/copy-activity-preserve-metadata.png)
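The character replacement described above can be sketched as follows. This is a simplified illustration, not ADF's exact rule: it assumes the Azure Blob constraint that metadata keys be valid C# identifiers (letters, digits, underscores), and the leading-underscore handling for digit-initial keys is a guess.

```python
import re

def normalize_metadata_key(key: str) -> str:
    """Replace characters that are invalid in Azure Blob metadata keys with '_'.

    Azure Blob metadata keys must be valid C# identifiers, while Amazon S3
    allows a wider character set, so keys may need normalizing when metadata
    is preserved across stores. Simplified sketch, not ADF's exact behavior.
    """
    # Replace anything outside [A-Za-z0-9_] with an underscore.
    sanitized = re.sub(r"[^A-Za-z0-9_]", "_", key)
    # C# identifiers cannot start with a digit; prefix an underscore (assumed).
    if sanitized and sanitized[0].isdigit():
        sanitized = "_" + sanitized
    return sanitized

print(normalize_metadata_key("x-amz-meta.author"))  # x_amz_meta_author
```

Because every invalid character maps to the same replacement, distinct source keys can collide after normalization, which is worth checking if your S3 objects use punctuation-heavy metadata keys.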

articles/data-factory/format-json.md

Lines changed: 5 additions & 6 deletions
@@ -8,7 +8,7 @@ ms.reviewer: craigg
ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
ms.date: 05/05/2020
ms.author: jingwang

---
@@ -87,16 +87,15 @@ Supported **JSON write settings** under `formatSettings`:
| Property | Description | Required |
| ------------- | ------------------------------------------------------------ | ----------------------------------------------------- |
| type | The type of formatSettings must be set to **JsonWriteSettings**. | Yes |
| filePattern | Indicate the pattern of data stored in each JSON file. Allowed values are: **setOfObjects** (JSON Lines) and **arrayOfObjects**. The **default** value is **setOfObjects**. See the [JSON file patterns](#json-file-patterns) section for details about these patterns. | No |

### JSON file patterns

When copying data from JSON files, copy activity can automatically detect and parse the following patterns of JSON files. When writing data to JSON files, you can configure the file pattern on the copy activity sink.

- **Type I: setOfObjects**

    Each file contains a single object, JSON Lines, or concatenated objects.

    * **single object JSON example**

@@ -111,7 +110,7 @@ Copy activity can automatically detect and parse the following patterns of JSON
    }
    ```

    * **JSON Lines (default for sink)**

    ```json
    {"time":"2015-04-29T07:12:20.9100000Z","callingimsi":"466920403025604","callingnum1":"678948008","callingnum2":"567834760","switch1":"China","switch2":"Germany"}
    ```

0 commit comments