articles/data-factory/concepts-integration-runtime.md: 7 additions & 2 deletions
@@ -123,15 +123,20 @@ The IR Location defines the location of its back-end compute

### Azure IR location

You can set a certain location for an Azure IR, in which case the data movement or activity dispatch happens in that specific region.

>[!TIP]
>If you have strict data compliance requirements and need to ensure that data does not leave a certain geography, you can explicitly create an Azure IR in that region and point the linked service to this IR using the ConnectVia property. For example, if you want to copy data from Blob storage in UK South to SQL DW in UK South and want to ensure that data does not leave the UK, create an Azure IR in UK South and link both linked services to this IR.

If you choose to use the auto-resolve Azure IR, which is the default:

- For the copy activity, ADF makes a best effort to automatically detect your sink data store's location, then uses the IR in the same region if available, or the closest one in the same geography; if the sink data store's region is not detectable, the IR in the data factory region is used as an alternative.

  For example, suppose your factory is created in East US:

  - When copying data to Azure Blob storage in West US, if ADF successfully detects that the Blob storage is in West US, the copy activity is executed on the IR in West US; if the region detection fails, the copy activity is executed on the IR in East US.
  - When copying data to Salesforce, whose region is not detectable, the copy activity is executed on the IR in East US.

- For Lookup/GetMetadata/Delete activity execution (also known as Pipeline activities), transformation activity dispatching (also known as External activities), and authoring operations (test connection, browse folder list and table list, preview data), ADF uses the IR in the data factory region.

- For Data Flow, ADF uses the IR in the data factory region.
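
To make the TIP above concrete, here is a minimal sketch of an Azure Blob storage linked service pinned to a region-specific Azure IR through the `connectVia` property. The linked service and integration runtime names (`AzureBlobUKSouth`, `AzureIR-UKSouth`) are hypothetical placeholders, not values from the article:

```json
{
    "name": "AzureBlobUKSouth",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "<connection string of the UK South storage account>"
        },
        "connectVia": {
            "referenceName": "AzureIR-UKSouth",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```

In the UK South scenario, both the source and sink linked services would reference the same region-specific IR so that neither data movement nor activity dispatch leaves the region.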
articles/data-factory/connector-azure-blob-storage.md: 3 additions & 3 deletions
@@ -9,7 +9,7 @@ ms.service: data-factory

ms.workload: data-services
ms.topic: conceptual
ms.custom: seo-lt-2019
ms.date: 05/06/2020
---

# Copy and transform data in Azure Blob storage by using Azure Data Factory
@@ -156,7 +156,7 @@ To use shared access signature authentication, the following properties are supported

    "typeProperties": {
        "sasUri": {
            "type": "SecureString",
            "value": "<SAS URI of the Azure Storage resource e.g. https://<accountname>.blob.core.windows.net/?sv=<storage version>&st=<start time>&se=<expire time>&sr=<resource>&sp=<permissions>&sip=<ip range>&spr=<protocol>&sig=<signature>>"
        }
    },
    "connectVia": {
@@ -177,7 +177,7 @@ To use shared access signature authentication, the following properties are supported

    "typeProperties": {
        "sasUri": {
            "type": "SecureString",
            "value": "<SAS URI of the Azure Storage resource without token e.g. https://<accountname>.blob.core.windows.net/>"
articles/data-factory/connector-db2.md: 53 additions & 2 deletions
@@ -12,7 +12,7 @@ ms.workload: data-services

ms.topic: conceptual
ms.date: 05/07/2020
ms.author: jingwang
@@ -69,20 +69,71 @@ The following properties are supported for DB2 linked service:

| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property must be set to: **Db2** | Yes |
| connectionString | Specify the information needed to connect to the DB2 instance.<br/>You can also put the password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to the following samples and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article for more details. | Yes |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from the [Prerequisites](#prerequisites) section. If not specified, the default Azure Integration Runtime is used. | No |

Typical properties inside the connection string:

| Property | Description | Required |
|:--- |:--- |:--- |
| server | Name of the DB2 server. You can specify the port number following the server name, delimited by a colon, e.g. `server:port`. | Yes |
| database | Name of the DB2 database. | Yes |
| authenticationType | Type of authentication used to connect to the DB2 database.<br/>Allowed value is: **Basic**. | Yes |
| username | Specify the user name to connect to the DB2 database. | Yes |
| password | Specify the password for the user account you specified for the username. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| packageCollection | Specify the collection under which the needed packages are auto-created by ADF when querying the database. | No |
| certificateCommonName | When you use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) encryption, you must enter a value for the certificate common name. | No |

> [!TIP]
> If you receive an error message that states `The package corresponding to an SQL statement execution request was not found. SQLSTATE=51002 SQLCODE=-805`, the reason is that a needed package has not been created for the user. By default, ADF tries to create the package under a collection named after the user you used to connect to DB2. Specify the packageCollection property to indicate where you want ADF to create the needed packages when querying the database.
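
As a sketch of how these connection string properties fit together, here is a hypothetical new-style DB2 linked service with the password kept inline; every value is a placeholder. The Azure Key Vault variant, whose closing portion follows below, differs only in that the `password` is pulled from a secret reference instead of the connection string:

```json
{
    "name": "Db2LinkedService",
    "properties": {
        "type": "Db2",
        "typeProperties": {
            "connectionString": "server=<server:port>;database=<database>;authenticationType=Basic;username=<username>;password=<password>;packageCollection=<packagecollection>;certificateCommonName=<certname>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```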
"referenceName": "<Azure Key Vault linked service name>",
120
+
"type": "LinkedServiceReference"
121
+
},
122
+
"secretName": "<secretName>"
123
+
}
124
+
},
125
+
"connectVia": {
126
+
"referenceName": "<name of Integration Runtime>",
127
+
"type": "IntegrationRuntimeReference"
128
+
}
129
+
}
130
+
}
131
+
```

If you were using a DB2 linked service with the following payload, it is still supported as-is, but you are encouraged to use the new one going forward.
articles/data-factory/connector-dynamics-crm-office-365.md: 5 additions & 1 deletion
@@ -12,7 +12,7 @@ author: linda33wj

manager: shwang
ms.reviewer: douglasl
ms.custom: seo-lt-2019
ms.date: 05/06/2020
---

# Copy data from and to Dynamics 365 (Common Data Service) or Dynamics CRM by using Azure Data Factory
@@ -57,6 +57,10 @@ This Dynamics connector is built on top of [Dynamics XRM tooling](https://docs.m

>[!TIP]
>To copy data from **Dynamics 365 Finance and Operations**, you can use the [Dynamics AX connector](connector-dynamics-ax.md).

## Prerequisites

To use this connector with AAD service principal authentication, you need to set up Server-to-Server (S2S) authentication in Common Data Service or Dynamics. Refer to [this article](https://docs.microsoft.com/powerapps/developer/common-data-service/build-web-applications-server-server-s2s-authentication) for detailed steps.
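
Once S2S authentication is set up, the linked service can authenticate with the registered service principal. The following is a hypothetical sketch only: the property names (`servicePrincipalId`, `servicePrincipalCredentialType`, `servicePrincipalCredential`), the `AADServicePrincipal` value, and the service URL are assumptions to verify against the connector's linked service reference:

```json
{
    "name": "DynamicsLinkedService",
    "properties": {
        "type": "Dynamics",
        "typeProperties": {
            "deploymentType": "Online",
            "serviceUri": "https://<organization-name>.crm.dynamics.com",
            "authenticationType": "AADServicePrincipal",
            "servicePrincipalId": "<application (client) ID>",
            "servicePrincipalCredentialType": "ServicePrincipalKey",
            "servicePrincipalCredential": {
                "type": "SecureString",
                "value": "<client secret>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```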
articles/data-factory/copy-activity-preserve-metadata.md: 3 additions & 1 deletion
@@ -10,7 +10,7 @@ ms.reviewer: douglasl

ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
ms.date: 05/06/2020
ms.author: jingwang

---
@@ -29,6 +29,8 @@ Copy activity supports preserving the following attributes during data copy:

- **All the customer-specified metadata**
- And the following **five data store built-in system properties**: `contentType`, `contentLanguage` (except for Amazon S3), `contentEncoding`, `contentDisposition`, `cacheControl`.

**Handle differences in metadata:** Amazon S3 and Azure Storage allow different sets of characters in the keys of customer-specified metadata. When you choose to preserve metadata using the copy activity, ADF automatically replaces the invalid characters with '_'.

When you copy files as-is from Amazon S3/Azure Data Lake Storage Gen2/Azure Blob to Azure Data Lake Storage Gen2/Azure Blob with the binary format, you can find the **Preserve** option on the **Copy Activity** > **Settings** tab for activity authoring or on the **Settings** page in the Copy Data Tool.
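
For reference, a minimal sketch of how the **Preserve** option described above is typically expressed in copy activity JSON, assuming a `preserve` array with the `Attributes` value in `typeProperties`; the dataset names and store settings types are illustrative placeholders, not values from the article:

```json
{
    "name": "CopyAndPreserveMetadata",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "BinarySource",
            "storeSettings": { "type": "AmazonS3ReadSettings", "recursive": true }
        },
        "sink": {
            "type": "BinarySink",
            "storeSettings": { "type": "AzureBlobFSWriteSettings" }
        },
        "preserve": [ "Attributes" ]
    },
    "inputs": [ { "referenceName": "<S3 binary dataset>", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "<ADLS Gen2 binary dataset>", "type": "DatasetReference" } ]
}
```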
| type | The type of formatSettings must be set to **JsonWriteSettings**. | Yes |
| filePattern | Indicate the pattern of data stored in each JSON file. Allowed values are: **setOfObjects** (JSON Lines) and **arrayOfObjects**. The **default** value is **setOfObjects**. See the [JSON file patterns](#json-file-patterns) section for details about these patterns. | No |
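
To show where these settings live, here is a hedged sketch of a copy activity sink that writes JSON Lines output through `formatSettings`; the store settings type is an assumption for illustration:

```json
"sink": {
    "type": "JsonSink",
    "storeSettings": {
        "type": "AzureBlobStorageWriteSettings"
    },
    "formatSettings": {
        "type": "JsonWriteSettings",
        "filePattern": "setOfObjects"
    }
}
```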
### JSON file patterns

When copying data from JSON files, copy activity can automatically detect and parse the following patterns of JSON files. When writing data to JSON files, you can configure the file pattern on the copy activity sink.

- **Type I: setOfObjects**

  Each file contains a single object, JSON Lines, or concatenated objects.

  **single object JSON example**
@@ -111,7 +110,7 @@ Copy activity can automatically detect and parse the following patterns of JSON