
Commit 1b2c3a4

Doc bug fixes and refinement per feedback
1 parent 8d49b98 commit 1b2c3a4

6 files changed: 76 additions & 15 deletions

articles/data-factory/concepts-integration-runtime.md

Lines changed: 7 additions & 2 deletions
@@ -123,15 +123,20 @@ The IR Location defines the location of its back-end compute, and essentially th
 
 ### Azure IR location
 
+You can set a specific location for an Azure IR, in which case the data movement or activity dispatch happens in that region.
+
+>[!TIP]
+>If you have strict data compliance requirements and need to ensure that data doesn't leave a certain geography, you can explicitly create an Azure IR in that region and point the linked service to this IR using the ConnectVia property. For example, if you want to copy data from Blob storage in UK South to SQL DW in UK South and want to ensure the data doesn't leave the UK, create an Azure IR in UK South and link both linked services to this IR.
+
+If you choose to use the auto-resolve Azure IR, which is the default:
+
 - For copy activity, ADF makes a best effort to automatically detect your sink data store's location, then uses the IR in the same region if available or the closest one in the same geography; if the sink data store's region isn't detectable, the IR in the data factory region is used instead.
 
 For example, if your factory is created in East US:
 
 - When copying data to Azure Blob in West US, if ADF successfully detects that the Blob is in West US, the copy activity is executed on the IR in West US; if the region detection fails, the copy activity is executed on the IR in East US.
 - When copying data to Salesforce, whose region isn't detectable, the copy activity is executed on the IR in East US.
 
-- For copy activity, ADF makes a best effort to automatically detect your sink and source data store to choose the best location, either in the same region (if available), or the closest one in the same geography, or if not detectable to use the data factory region as alternative.
-
 - For Lookup/GetMetadata/Delete activity execution (also known as Pipeline activities), transformation activity dispatching (also known as External activities), and authoring operations (test connection, browse folder list and table list, preview data), ADF uses the IR in the data factory region.
 
 - For Data Flow, ADF uses the IR in the data factory region.
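
Not part of the commit above, but as an illustration of the tip about region pinning: a minimal sketch of an Azure IR created in UK South and a Blob storage linked service that points to it through `connectVia`. The names `UKSouthIR` and `AzureBlobUKSouth` and all placeholder values are assumptions for illustration only; see the integration runtime and connector articles for the authoritative schemas.

```json
{
    "name": "UKSouthIR",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "UK South"
            }
        }
    }
}
```

```json
{
    "name": "AzureBlobUKSouth",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "<Azure Blob storage connection string>"
        },
        "connectVia": {
            "referenceName": "UKSouthIR",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```

Pointing both the source and the sink linked services at the same region-pinned IR keeps the copy's back-end compute in that region.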

articles/data-factory/connector-azure-blob-storage.md

Lines changed: 3 additions & 3 deletions
@@ -9,7 +9,7 @@ ms.service: data-factory
 ms.workload: data-services
 ms.topic: conceptual
 ms.custom: seo-lt-2019
-ms.date: 04/09/2020
+ms.date: 05/06/2020
 ---
 
 # Copy and transform data in Azure Blob storage by using Azure Data Factory
@@ -156,7 +156,7 @@ To use shared access signature authentication, the following properties are supp
 "typeProperties": {
     "sasUri": {
         "type": "SecureString",
-        "value": "<SAS URI of the Azure Storage resource e.g. https://<container>.blob.core.windows.net/?sv=<storage version>&amp;st=<start time>&amp;se=<expire time>&amp;sr=<resource>&amp;sp=<permissions>&amp;sip=<ip range>&amp;spr=<protocol>&amp;sig=<signature>>"
+        "value": "<SAS URI of the Azure Storage resource e.g. https://<accountname>.blob.core.windows.net/?sv=<storage version>&amp;st=<start time>&amp;se=<expire time>&amp;sr=<resource>&amp;sp=<permissions>&amp;sip=<ip range>&amp;spr=<protocol>&amp;sig=<signature>>"
     }
 },
 "connectVia": {
@@ -177,7 +177,7 @@ To use shared access signature authentication, the following properties are supp
 "typeProperties": {
     "sasUri": {
         "type": "SecureString",
-        "value": "<SAS URI of the Azure Storage resource without token e.g. https://<container>.blob.core.windows.net/>"
+        "value": "<SAS URI of the Azure Storage resource without token e.g. https://<accountname>.blob.core.windows.net/>"
     },
     "sasToken": {
         "type": "AzureKeyVaultSecret",

articles/data-factory/connector-db2.md

Lines changed: 53 additions & 2 deletions
@@ -12,7 +12,7 @@ ms.workload: data-services
 
 
 ms.topic: conceptual
-ms.date: 02/17/2020
+ms.date: 05/07/2020
 
 ms.author: jingwang
 
@@ -69,20 +69,71 @@ The following properties are supported for DB2 linked service:
 | Property | Description | Required |
 |:--- |:--- |:--- |
 | type | The type property must be set to: **Db2** | Yes |
+| connectionString | Specify the information needed to connect to the DB2 instance.<br/> You can also put the password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to the following samples and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article for more details. | Yes |
+| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from the [Prerequisites](#prerequisites) section. If not specified, the default Azure Integration Runtime is used. |No |
+
+Typical properties inside the connection string:
+
+| Property | Description | Required |
+|:--- |:--- |:--- |
 | server |Name of the DB2 server. You can specify the port number following the server name, delimited by a colon, e.g. `server:port`. |Yes |
 | database |Name of the DB2 database. |Yes |
 | authenticationType |Type of authentication used to connect to the DB2 database.<br/>Allowed value is: **Basic**. |Yes |
 | username |Specify the user name to connect to the DB2 database. |Yes |
 | password |Specify the password for the user account you specified for the username. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
 | packageCollection | Specify the package collection under which the needed packages are auto-created by ADF when querying the database. | No |
 | certificateCommonName | When you use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) encryption, you must enter a value for the certificate common name. | No |
-| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |
 
 > [!TIP]
 > If you receive an error message that states `The package corresponding to an SQL statement execution request was not found. SQLSTATE=51002 SQLCODE=-805`, the reason is that a needed package is not created for the user. By default, ADF tries to create the package under a collection named after the user you used to connect to DB2. Specify the `packageCollection` property to indicate where you want ADF to create the needed packages when querying the database.
 
 **Example:**
 
+```json
+{
+    "name": "Db2LinkedService",
+    "properties": {
+        "type": "Db2",
+        "typeProperties": {
+            "connectionString": "server=<server:port>;database=<database>;authenticationType=Basic;username=<username>;password=<password>;packageCollection=<packagecollection>;certificateCommonName=<certname>;"
+        },
+        "connectVia": {
+            "referenceName": "<name of Integration Runtime>",
+            "type": "IntegrationRuntimeReference"
+        }
+    }
+}
+```
+**Example: store password in Azure Key Vault**
+
+```json
+{
+    "name": "Db2LinkedService",
+    "properties": {
+        "type": "Db2",
+        "typeProperties": {
+            "connectionString": "server=<server:port>;database=<database>;authenticationType=Basic;username=<username>;packageCollection=<packagecollection>;certificateCommonName=<certname>;",
+            "password": {
+                "type": "AzureKeyVaultSecret",
+                "store": {
+                    "referenceName": "<Azure Key Vault linked service name>",
+                    "type": "LinkedServiceReference"
+                },
+                "secretName": "<secretName>"
+            }
+        },
+        "connectVia": {
+            "referenceName": "<name of Integration Runtime>",
+            "type": "IntegrationRuntimeReference"
+        }
+    }
+}
+```
+
+If you were using a DB2 linked service with the following payload, it is still supported as-is, but you are encouraged to use the new one going forward.
+
+**Previous payload:**
+
 ```json
 {
     "name": "Db2LinkedService",

articles/data-factory/connector-dynamics-crm-office-365.md

Lines changed: 5 additions & 1 deletion
@@ -12,7 +12,7 @@ author: linda33wj
 manager: shwang
 ms.reviewer: douglasl
 ms.custom: seo-lt-2019
-ms.date: 11/20/2019
+ms.date: 05/06/2020
 ---
 
 # Copy data from and to Dynamics 365 (Common Data Service) or Dynamics CRM by using Azure Data Factory
@@ -57,6 +57,10 @@ This Dynamics connector is built on top of [Dynamics XRM tooling](https://docs.m
 >[!TIP]
 >To copy data from **Dynamics 365 Finance and Operations**, you can use the [Dynamics AX connector](connector-dynamics-ax.md).
 
+## Prerequisites
+
+To use this connector with AAD service principal authentication, you need to set up Server-to-Server (S2S) authentication in Common Data Service or Dynamics. Refer to [this article](https://docs.microsoft.com/powerapps/developer/common-data-service/build-web-applications-server-server-s2s-authentication) for detailed steps.
+
 ## Get started
 
 [!INCLUDE [data-factory-v2-connector-get-started](../../includes/data-factory-v2-connector-get-started.md)]
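
Not part of the commit above: a rough sketch of what a Dynamics linked service can look like with AAD service principal (key) authentication once the S2S prerequisite is in place. The property values are placeholders and the exact shape is an assumption; the property tables in the connector article remain the authoritative reference.

```json
{
    "name": "DynamicsLinkedService",
    "properties": {
        "type": "Dynamics",
        "typeProperties": {
            "deploymentType": "Online",
            "serviceUri": "https://<organization-name>.crm.dynamics.com",
            "authenticationType": "AADServicePrincipal",
            "servicePrincipalId": "<application (client) ID>",
            "servicePrincipalCredentialType": "ServicePrincipalKey",
            "servicePrincipalCredential": {
                "type": "SecureString",
                "value": "<application key>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```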

articles/data-factory/copy-activity-preserve-metadata.md

Lines changed: 3 additions & 1 deletion
@@ -10,7 +10,7 @@ ms.reviewer: douglasl
 ms.service: data-factory
 ms.workload: data-services
 ms.topic: conceptual
-ms.date: 03/24/2020
+ms.date: 05/06/2020
 ms.author: jingwang
 
 ---
@@ -29,6 +29,8 @@ Copy activity supports preserving the following attributes during data copy:
 - **All the customer-specified metadata**
 - And the following **five data store built-in system properties**: `contentType`, `contentLanguage` (except for Amazon S3), `contentEncoding`, `contentDisposition`, `cacheControl`.
 
+**Handle differences in metadata:** Amazon S3 and Azure Storage allow different sets of characters in the keys of customer-specified metadata. When you choose to preserve metadata using copy activity, ADF automatically replaces the invalid characters with '_'.
+
 When you copy files as-is from Amazon S3/Azure Data Lake Storage Gen2/Azure Blob to Azure Data Lake Storage Gen2/Azure Blob with binary format, you can find the **Preserve** option on the **Copy Activity** > **Settings** tab for activity authoring or the **Settings** page in Copy Data Tool.
 
 ![Copy activity preserve metadata](./media/copy-activity-preserve-metadata/copy-activity-preserve-metadata.png)
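
Not part of the commit above: a minimal sketch of where the preserve setting sits in a copy activity when copying binary files from Amazon S3 to Azure Data Lake Storage Gen2. The dataset names and store settings are placeholder assumptions.

```json
{
    "name": "CopyWithPreserveMetadata",
    "type": "Copy",
    "inputs": [
        {
            "referenceName": "<binary dataset on Amazon S3>",
            "type": "DatasetReference"
        }
    ],
    "outputs": [
        {
            "referenceName": "<binary dataset on ADLS Gen2>",
            "type": "DatasetReference"
        }
    ],
    "typeProperties": {
        "source": {
            "type": "BinarySource",
            "storeSettings": {
                "type": "AmazonS3ReadSettings",
                "recursive": true
            }
        },
        "sink": {
            "type": "BinarySink",
            "storeSettings": {
                "type": "AzureBlobFSWriteSettings"
            }
        },
        "preserve": [
            "Attributes"
        ]
    }
}
```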

articles/data-factory/format-json.md

Lines changed: 5 additions & 6 deletions
@@ -8,7 +8,7 @@ ms.reviewer: craigg
 ms.service: data-factory
 ms.workload: data-services
 ms.topic: conceptual
-ms.date: 02/05/2020
+ms.date: 05/05/2020
 ms.author: jingwang
 
 ---
@@ -87,16 +87,15 @@ Supported **JSON write settings** under `formatSettings`:
 | Property | Description | Required |
 | ------------- | ------------------------------------------------------------ | ----------------------------------------------------- |
 | type | The type of formatSettings must be set to **JsonWriteSettings**. | Yes |
-| filePattern |Indicate the pattern of data stored in each JSON file. Allowed values are: **setOfObjects** and **arrayOfObjects**. The **default** value is **setOfObjects**. See [JSON file patterns](#json-file-patterns) section for details about these patterns. |No |
+| filePattern |Indicate the pattern of data stored in each JSON file. Allowed values are: **setOfObjects** (JSON Lines) and **arrayOfObjects**. The **default** value is **setOfObjects**. See the [JSON file patterns](#json-file-patterns) section for details about these patterns. |No |
 
 ### JSON file patterns
 
-Copy activity can automatically detect and parse the following patterns of JSON files.
+When copying data from JSON files, copy activity can automatically detect and parse the following patterns of JSON files. When writing data to JSON files, you can configure the file pattern on the copy activity sink.
 
 - **Type I: setOfObjects**
 
-Each file contains single object, or line-delimited/concatenated multiple objects.
-When this option is chosen in copy activity sink, copy activity produces a single JSON file with each object per line (line-delimited).
+Each file contains a single object, JSON Lines, or concatenated objects.
 
 * **single object JSON example**
 
@@ -111,7 +110,7 @@ Copy activity can automatically detect and parse the following patterns of JSON
 }
 ```
 
-* **line-delimited JSON example**
+* **JSON Lines (default for sink)**
 
 ```json
 {"time":"2015-04-29T07:12:20.9100000Z","callingimsi":"466920403025604","callingnum1":"678948008","callingnum2":"567834760","switch1":"China","switch2":"Germany"}
