
Commit fbc7b1f

PR review fixes

1 parent 1cf6e8f

7 files changed (+23, -25 lines)

articles/data-factory/concepts-data-flow-overview.md

Lines changed: 2 additions & 2 deletions
@@ -60,7 +60,7 @@ The **Inspect** tab provides a view into the metadata of the data stream that yo

 :::image type="content" source="media/data-flow/inspect1.png" alt-text="Inspect":::

-As you change the shape of your data through transformations, you'll see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata won't be visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.
+As you change the shape of your data through transformations, you can see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata isn't visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.

 #### Data preview

@@ -96,7 +96,7 @@ Mapping data flows are operationalized within ADF pipelines using the [data flow

 ## Debug mode

-Debug mode allows you to interactively see the results of each transformation step while you build and debug your data flows. The debug session can be used both in when building your data flow logic and running pipeline debug runs with data flow activities. To learn more, see the [debug mode documentation](concepts-data-flow-debug-mode.md).
+Debug mode allows you to interactively see the results of each transformation step while you build and debug your data flows. The debug session can be used both when building your data flow logic and when running pipeline debug runs with data flow activities. To learn more, see the [debug mode documentation](concepts-data-flow-debug-mode.md).

 ## Monitoring data flows

articles/data-factory/connector-sap-hana.md

Lines changed: 2 additions & 2 deletions
@@ -36,7 +36,7 @@ Specifically, this SAP HANA connector supports:
 - Copying data from any version of SAP HANA database.
 - Copying data from **HANA information models** (such as Analytic and Calculation views) and **Row/Column tables**.
 - Copying data using **Basic** or **Windows** authentication.
-- Parallel copying from a SAP HANA source. See the [Parallel copy from SAP HANA](#parallel-copy-from-sap-hana) section for details.
+- Parallel copying from an SAP HANA source. See the [Parallel copy from SAP HANA](#parallel-copy-from-sap-hana) section for details.

 > [!TIP]
 > To copy data **into** SAP HANA data store, use generic ODBC connector. See [SAP HANA sink](#sap-hana-sink) section with details. Note the linked services for SAP HANA connector and ODBC connector are with different type thus cannot be reused.
@@ -258,7 +258,7 @@ You are suggested to enable parallel copy with data partitioning especially when

 | Scenario | Suggested settings |
 | -------------------------------------------------- | ------------------------------------------------------------ |
-| Full load from large table. | **Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partition type of the specified SAP HANA table, and choose the corresponding partition strategy:<br>- **Range Partitioning**: Get the partition column and partition ranges defined for the table, then copy the data by range. <br>- **Hash Partitioning**: Use hash partition key as partition column, then partition and copy the data based on ranges calculated by the service. <br>- **Round-Robin Partitioning** or **No Partition**: Use primary key as partition column, then partition and copy the data based on ranges calculated by the service. |
+| Full load from large table. | **Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partition type of the specified SAP HANA table, and chooses the corresponding partition strategy:<br>- **Range Partitioning**: Get the partition column and partition ranges defined for the table, then copy the data by range. <br>- **Hash Partitioning**: Use hash partition key as partition column, then partition and copy the data based on ranges calculated by the service. <br>- **Round-Robin Partitioning** or **No Partition**: Use primary key as partition column, then partition and copy the data based on ranges calculated by the service. |
 | Load large amount of data by using a custom query. | **Partition option**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfHanaDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to apply dynamic range partition. <br><br>During execution, the service first calculates the value ranges of the specified partition column, by evenly distributes the rows in a number of buckets according to the number of distinct partition column values the parallel copy setting, then replaces `?AdfHanaDynamicRangePartitionCondition` with filtering the partition column value range for each partition, and sends to SAP HANA.<br><br>If you want to use multiple columns as partition column, you can concatenate the values of each column as one column in the query and specify it as the partition column, like `SELECT * FROM (SELECT *, CONCAT(<KeyColumn1>, <KeyColumn2>) AS PARTITIONCOLUMN FROM <TABLENAME>) WHERE ?AdfHanaDynamicRangePartitionCondition`. |

 **Example: query with physical partitions of a table**
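
As a companion to the dynamic range row above, here is a minimal copy activity fragment showing how that query pattern is typically wired into a source definition. It is a sketch only, not part of this commit: the property names (`SapHanaSource`, `partitionOption`, `partitionSettings`, `partitionColumnName`) follow the usual pattern for this connector's copy source and should be verified against the connector reference, and the table, column, and sink values are placeholders.

```json
"activities": [
    {
        "name": "CopyFromSapHana",
        "type": "Copy",
        "typeProperties": {
            "source": {
                "type": "SapHanaSource",
                "query": "SELECT * FROM <TABLENAME> WHERE ?AdfHanaDynamicRangePartitionCondition",
                "partitionOption": "SapHanaDynamicRange",
                "partitionSettings": {
                    "partitionColumnName": "<Partition_column_name>"
                }
            },
            "sink": {
                "type": "<sink type>"
            },
            "parallelCopies": 4
        }
    }
]
```

During execution, the `?AdfHanaDynamicRangePartitionCondition` placeholder in the query is replaced with a range filter on the partition column for each of the parallel copies.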

articles/data-factory/connector-troubleshoot-guide.md

Lines changed: 2 additions & 2 deletions
@@ -105,7 +105,7 @@ The following errors are general to the copy activity and could occur with any c

 #### Error code: 20152

-- **Message**: `The toke type '%tokenType;' from your authorization server is not supported, supported types: '%tokenTypes;'.`
+- **Message**: `The token type '%tokenType;' from your authorization server is not supported, supported types: '%tokenTypes;'.`

 - **Cause**: Your authorization server isn't supported.

@@ -251,7 +251,7 @@ The following errors are general to the copy activity and could occur with any c

 - **Message**: `Failed to connect to your instance of Azure Database for PostgreSQL flexible server. '%'`

-- **Cause**: Exact cause depends on the text returned in `'%'`. If it's **The operation has timed out**, it can be because the instance of PostgreSQL is stopped or because the network connectivity method configured for your instance doesn't allow connections from the Integration Runtime selected. User or password provided are incorrect. If it's **28P01: password authentication failed for user &lt;youruser&gt;**, it means that the user provided doesn't exist in the instance or that the password is incorrect. If it's **28000: no pg_hba.conf entry for host "*###.###.###.###*", user "&lt;youruser&gt;", database "&lt;yourdatabase&gt;", no encryption**, it means that the encryption method selected isn't compatible with the configuration of the server.
+- **Cause**: Exact cause depends on the text returned in `'%'`. If it's **The operation has timed out**, it can be because the instance of PostgreSQL is stopped or because the network connectivity method configured for your instance doesn't allow connections from the Integration Runtime selected. User or password provided is incorrect. If it's **28P01: password authentication failed for user &lt;youruser&gt;**, it means that the user provided doesn't exist in the instance or that the password is incorrect. If it's **28000: no pg_hba.conf entry for host "*###.###.###.###*", user "&lt;youruser&gt;", database "&lt;yourdatabase&gt;", no encryption**, it means that the encryption method selected isn't compatible with the configuration of the server.

 - **Recommendation**: Confirm that the user provided exists in your instance of PostgreSQL and that the password corresponds to the one currently assigned to that user. Make sure that the encryption method selected is accepted by your instance of PostgreSQL, based on its current configuration. If the network connectivity method of your instance is configured for Private access (virtual network integration), use a Self-Hosted Integration Runtime (IR) to connect to it. If it's configured for Public access (allowed IP addresses), it's recommended to use an Azure IR with managed virtual network and deploy a managed private endpoint to connect to your instance. When it's configured for Public access (allowed IP addresses) a less recommended alternative consists in creating firewall rules in your instance to allow traffic originating on the IP addresses used by the Azure IR you're using.

articles/data-factory/control-flow-for-each-activity.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ ms.reviewer: jburchel
 ms.subservice: orchestration
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 09/25/2024
+ms.date: 09/26/2024
 ---

 # ForEach activity in Azure Data Factory and Azure Synapse Analytics
@@ -97,7 +97,7 @@ Property | Description | Allowed values | Required
 name | Name of the for-each activity. | String | Yes
 type | Must be set to **ForEach** | String | Yes
 isSequential | Specifies whether the loop should be executed sequentially or in parallel. Maximum of 50 loop iterations can be executed at once in parallel). For example, if you have a ForEach activity iterating over a copy activity with 10 different source and sink datasets with **isSequential** set to False, all copies are executed at once. Default is False. <br/><br/> If "isSequential" is set to False, ensure that there is a correct configuration to run multiple executables. Otherwise, this property should be used with caution to avoid incurring write conflicts. For more information, see [Parallel execution](#parallel-execution) section. | Boolean | No. Default is False.
-batchCount | Batch count to be used for controlling the number of parallel execution (when isSequential is set to false). This is the upper concurrency limit, but the for-each activity will not always execute at this number | Integer (maximum 50) | No. Default is 20.
+batchCount | Batch count to be used for controlling the number of parallel executions (when isSequential is set to false). This is the upper concurrency limit, but the for-each activity will not always execute at this number | Integer (maximum 50) | No. Default is 20.
 Items | An expression that returns a JSON Array to be iterated over. | Expression (which returns a JSON Array) | Yes
 Activities | The activities to be executed. | List of Activities | Yes
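
To illustrate how the properties in the table above fit together, here is a minimal ForEach sketch. It is illustrative only, not taken from the article in this commit: the names, the parameter `folderList`, and the inner Wait activity are placeholders standing in for real work such as a copy activity.

```json
{
    "name": "ForEachOverFolders",
    "type": "ForEach",
    "typeProperties": {
        "isSequential": false,
        "batchCount": 10,
        "items": {
            "value": "@pipeline().parameters.folderList",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "WaitPerItem",
                "type": "Wait",
                "typeProperties": {
                    "waitTimeInSeconds": 10
                }
            }
        ]
    }
}
```

With `isSequential` false and `batchCount` 10, at most 10 iterations run concurrently; omitting `batchCount` falls back to the default of 20, and the table caps it at 50. Inside the loop, each iteration can reference the current element with `@item()`.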

articles/data-factory/control-flow-web-activity.md

Lines changed: 2 additions & 2 deletions
@@ -83,12 +83,12 @@ Property | Description | Allowed values | Required
 name | Name of the web activity | String | Yes
 type | Must be set to **WebActivity**. | String | Yes
 method | REST API method for the target endpoint. | String. <br/><br/>Supported Types: "GET", "POST", "PUT", "PATCH", "DELETE" | Yes
-url | Target endpoint and path | String (or expression with resultType of string). The activity will timeout at 1 minute with an error if it does not receive a response from the endpoint. You can increase this response timeout up to 10 mins by updating the httpRequestTimeout property | Yes
+url | Target endpoint and path | String (or expression with resultType of string). The activity will time out at 1 minute with an error if it does not receive a response from the endpoint. You can increase this response timeout up to 10 mins by updating the httpRequestTimeout property | Yes
 httpRequestTimeout | Response timeout duration | hh:mm:ss with the max value as 00:10:00. If not explicitly specified defaults to 00:01:00 | No
 headers | Headers that are sent to the request. For example, to set the language and type on a request: `"headers" : { "Accept-Language": "en-us", "Content-Type": "application/json" }`. | String (or expression with resultType of string) | No
 body | Represents the payload that is sent to the endpoint. | String (or expression with resultType of string). <br/><br/>See the schema of the request payload in [Request payload schema](#request-payload-schema) section. | Required for POST/PUT/PATCH methods. Optional for DELETE method.
 authentication | Authentication method used for calling the endpoint. Supported Types are "Basic, Client Certificate, System-assigned Managed Identity, User-assigned Managed Identity, Service Principal." For more information, see [Authentication](#authentication) section. If authentication is not required, exclude this property. | String (or expression with resultType of string) | No
-turnOffAsync | Option to disable invoking HTTP GET on location field in the response header of a HTTP 202 Response. If set true, it stops invoking HTTP GET on http location given in response header. If set false then it continues to invoke HTTP GET call on location given in http response headers. | Allowed values are false (default) and true. | No
+turnOffAsync | Option to disable invoking HTTP GET on location field in the response header of an HTTP 202 Response. If set true, it stops invoking HTTP GET on http location given in response header. If set false then it continues to invoke HTTP GET call on location given in http response headers. | Allowed values are false (default) and true. | No
 disableCertValidation | Removes server side certificate validation (not recommended unless you are connecting to a trusted server that does not use a standard CA cert). | Allowed values are false (default) and true. | No
 datasets | List of datasets passed to the endpoint. | Array of dataset references. Can be an empty array. | Yes
 linkedServices | List of linked services passed to endpoint. | Array of linked service references. Can be an empty array. | Yes
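
To make the url and httpRequestTimeout rows above concrete, here is a minimal Web activity sketch. It is a hedged example, not part of this commit: the endpoint URL, names, and body are placeholders, and only the properties shown in the table are used.

```json
{
    "name": "CallRestEndpoint",
    "type": "WebActivity",
    "typeProperties": {
        "method": "POST",
        "url": "https://contoso.example.com/api/refresh",
        "httpRequestTimeout": "00:05:00",
        "headers": {
            "Content-Type": "application/json"
        },
        "body": "{\"triggeredBy\":\"pipeline\"}"
    }
}
```

Setting `httpRequestTimeout` to `00:05:00` lets the activity wait up to five minutes for a response instead of the one-minute default; per the table above, values beyond `00:10:00` aren't accepted.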

articles/data-factory/copy-activity-schema-and-type-mapping.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ ms.author: jianleishen
 # Schema and data type mapping in copy activity
 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]

-This article describes how the Azure Data Factory copy activity perform schema mapping and data type mapping from source data to sink data.
+This article describes how the Azure Data Factory copy activity performs schema mapping and data type mapping from source data to sink data.

 ## Schema mapping
