Commit b9fbcb9

Merge pull request #287352 from jonburchel/2024-09-25-adf-q1-freshness-update
Docs > 2000 page views not requiring changes
2 parents: fd965e5 + fbc7b1f

33 files changed: +286 additions, -288 deletions

articles/data-factory/choose-the-right-integration-runtime-configuration.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -5,7 +5,7 @@ ms.subservice: integration-runtime
 ms.topic: conceptual
 author: lrtoyou1223
 ms.author: lle
-ms.date: 10/20/2023
+ms.date: 09/25/2024
 ---
 
 # Choose the right integration runtime configuration for your scenario
```

articles/data-factory/concepts-data-flow-expression-builder.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -8,7 +8,7 @@ ms.reviewer: daperlov
 ms.subservice: data-flows
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 10/20/2023
+ms.date: 09/25/2024
 ---
 
 # Build expressions in mapping data flow
```

articles/data-factory/concepts-data-flow-overview.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -6,7 +6,7 @@ ms.author: makromer
 ms.subservice: data-flows
 ms.topic: conceptual
 ms.custom: references_regions
-ms.date: 10/20/2023
+ms.date: 09/25/2024
 ---
 
 # Mapping data flows in Azure Data Factory
@@ -60,7 +60,7 @@ The **Inspect** tab provides a view into the metadata of the data stream that yo
 
 :::image type="content" source="media/data-flow/inspect1.png" alt-text="Inspect":::
 
-As you change the shape of your data through transformations, you'll see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata won't be visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.
+As you change the shape of your data through transformations, you can see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata isn't visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.
 
 #### Data preview
 
@@ -96,7 +96,7 @@ Mapping data flows are operationalized within ADF pipelines using the [data flow
 
 ## Debug mode
 
-Debug mode allows you to interactively see the results of each transformation step while you build and debug your data flows. The debug session can be used both in when building your data flow logic and running pipeline debug runs with data flow activities. To learn more, see the [debug mode documentation](concepts-data-flow-debug-mode.md).
+Debug mode allows you to interactively see the results of each transformation step while you build and debug your data flows. The debug session can be used both when building your data flow logic and when running pipeline debug runs with data flow activities. To learn more, see the [debug mode documentation](concepts-data-flow-debug-mode.md).
 
 ## Monitoring data flows
 
```

articles/data-factory/concepts-linked-services.md

Lines changed: 10 additions & 10 deletions
```diff
@@ -8,7 +8,7 @@ ms.reviewer: jburchel
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 10/20/2023
+ms.date: 09/25/2024
 ---
 
 # Linked services in Azure Data Factory and Azure Synapse Analytics
@@ -17,7 +17,7 @@ ms.date: 10/20/2023
 
 This article describes what linked services are, how they're defined in JSON format, and how they're used in Azure Data Factory and Azure Synapse Analytics.
 
-To learn more read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse](../synapse-analytics/overview-what-is.md).
+To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse](../synapse-analytics/overview-what-is.md).
 
 ## Overview
 
@@ -27,7 +27,7 @@ Now, a **dataset** is a named view of data that simply points to or references t
 
 Before you create a dataset, you must create a **linked service** to link your data store to the Data Factory or Synapse Workspace. Linked services are much like connection strings, which define the connection information needed for the service to connect to external resources. Think of it this way: the dataset represents the structure of the data within the linked data stores, and the linked service defines the connection to the data source. For example, an Azure Storage linked service links a storage account to the service. An Azure Blob dataset represents the blob container and the folder within that Azure Storage account that contains the input blobs to be processed.
 
-Here is a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked services: Azure Storage and Azure SQL Database. Then, create two datasets: Azure Blob dataset (which refers to the Azure Storage linked service) and Azure SQL Table dataset (which refers to the Azure SQL Database linked service). The Azure Storage and Azure SQL Database linked services contain connection strings that the service uses at runtime to connect to your Azure Storage and Azure SQL Database, respectively. The Azure Blob dataset specifies the blob container and blob folder that contains the input blobs in your Blob storage. The Azure SQL Table dataset specifies the SQL table in your SQL Database to which the data is to be copied.
+Here's a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked services: Azure Storage and Azure SQL Database. Then, create two datasets: Azure Blob dataset (which refers to the Azure Storage linked service) and Azure SQL Table dataset (which refers to the Azure SQL Database linked service). The Azure Storage and Azure SQL Database linked services contain connection strings that the service uses at runtime to connect to your Azure Storage and Azure SQL Database, respectively. The Azure Blob dataset specifies the blob container and blob folder that contains the input blobs in your Blob storage. The Azure SQL Table dataset specifies the SQL table in your SQL Database to which the data is to be copied.
 
 The following diagram shows the relationships among pipeline, activity, dataset, and linked service in the service:
 
@@ -37,21 +37,21 @@ The following diagram shows the relationships among pipeline, activity, dataset,
 
 # [Azure Data Factory](#tab/data-factory)
 
-To create a new linked service in Azure Data Factory Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **New** to create a new linked service.
+To create a new linked service in Azure Data Factory Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
 
 :::image type="content" source="media/concepts-linked-services/create-linked-service.png" alt-text="Shows the Azure Data Factory studio Manage tab with linked services and the New button highlighted.":::
 
-After selecting New to create a new linked service you will be able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
+After selecting **+ New** to create a new linked service you can choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
 
 :::image type="content" source="media/concepts-linked-services/new-linked-service-window.png" alt-text="Shows the new linked service window.":::
 
 # [Synapse Analytics](#tab/synapse-analytics)
 
-To create a new linked service in Synapse Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **New** to create a new linked service.
+To create a new linked service in Synapse Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
 
 :::image type="content" source="media/concepts-linked-services/create-linked-service-synapse.png" alt-text="Shows the Azure Data Factory studio Manage tab with linked services and the New button highlighted.":::
 
-After selecting New to create a new linked service you will be able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
+After selecting **+ New** to create a new linked service you are able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
 
 :::image type="content" source="media/concepts-linked-services/new-linked-service-window.png" alt-text="Shows the new linked service window.":::
 
@@ -113,15 +113,15 @@ Linked services can be created in the Azure Data Factory UX via the [management
 
 You can create linked services by using one of these tools or SDKs: [.NET API](quickstart-create-data-factory-dot-net.md), [PowerShell](quickstart-create-data-factory-powershell.md), [REST API](quickstart-create-data-factory-rest-api.md), [Azure Resource Manager Template](quickstart-create-data-factory-resource-manager-template.md), and [Azure portal](quickstart-create-data-factory-portal.md).
 
-When creating a linked service, the user needs appropriate authorization to the designated service. If sufficient access is not granted, the user will not be able to see the available resources and will need to use manual entry option.
+When creating a linked service, the user needs appropriate authorization to the designated service. If sufficient access isn't granted, the user can't see the available resources and needs to use manual entry option.
 
 ## Data store linked services
 
-You can find the list of supported data stores in the [connector overview](copy-activity-overview.md#supported-data-stores-and-formats) article. Click a data store to learn the supported connection properties.
+You can find the list of supported data stores in the [connector overview](copy-activity-overview.md#supported-data-stores-and-formats) article. Select a data store to learn the supported connection properties.
 
 ## Compute linked services
 
-Reference [compute environments supported](compute-linked-services.md) for details about different compute environments you can connect to from your service as well as the different configurations.
+Reference [compute environments supported](compute-linked-services.md) for details about different compute environments you can connect to from your service and the different configurations.
 
 ## Related content
 
```

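Side note on the article updated above: it describes linked services as JSON-defined connection definitions, so a minimal sketch may help readers of this diff. The general shape below (a name, a type, and type-specific connection properties under `typeProperties`) follows the linked service JSON format the article refers to; the service name and connection string are illustrative placeholders, not values from this commit:

```json
{
    "name": "AzureStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>"
        }
    }
}
```

A dataset would then reference this linked service by name, and the service would use the connection string at runtime, as the scenario in the diff describes.
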
articles/data-factory/concepts-parameters-variables.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ ms.author: soferreira
 ms.subservice: orchestration
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 10/20/2023
+ms.date: 09/25/2024
 ---
 
 # Pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics
```

articles/data-factory/concepts-workflow-orchestration-manager.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -5,7 +5,7 @@ description: Learn about when to use Workflow Orchestration Manager, basic conce
 ms.topic: conceptual
 author: nabhishek
 ms.author: abnarain
-ms.date: 10/20/2023
+ms.date: 09/25/2024
 ms.custom: references_regions
 ---
 
```

articles/data-factory/connector-azure-blob-storage.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
 ms.custom: synapse
-ms.date: 09/29/2023
+ms.date: 09/25/2024
 ---
 
 # Copy and transform data in Azure Blob Storage by using Azure Data Factory or Azure Synapse Analytics
```

articles/data-factory/connector-file-system.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 10/20/2023
+ms.date: 09/25/2024
 ms.author: jianleishen
 ---
 
```

articles/data-factory/connector-sap-hana.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -7,7 +7,7 @@ ms.author: ulrichchrist
 ms.subservice: data-movement
 ms.topic: conceptual
 ms.custom: synapse
-ms.date: 10/20/2023
+ms.date: 09/25/2024
 ---
 
 # Copy data from SAP HANA using Azure Data Factory or Synapse Analytics
@@ -36,7 +36,7 @@ Specifically, this SAP HANA connector supports:
 - Copying data from any version of SAP HANA database.
 - Copying data from **HANA information models** (such as Analytic and Calculation views) and **Row/Column tables**.
 - Copying data using **Basic** or **Windows** authentication.
-- Parallel copying from a SAP HANA source. See the [Parallel copy from SAP HANA](#parallel-copy-from-sap-hana) section for details.
+- Parallel copying from an SAP HANA source. See the [Parallel copy from SAP HANA](#parallel-copy-from-sap-hana) section for details.
 
 > [!TIP]
 > To copy data **into** SAP HANA data store, use generic ODBC connector. See [SAP HANA sink](#sap-hana-sink) section with details. Note the linked services for SAP HANA connector and ODBC connector are with different type thus cannot be reused.
@@ -258,7 +258,7 @@ You are suggested to enable parallel copy with data partitioning especially when
 
 | Scenario | Suggested settings |
 | -------------------------------------------------- | ------------------------------------------------------------ |
-| Full load from large table. | **Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partition type of the specified SAP HANA table, and choose the corresponding partition strategy:<br>- **Range Partitioning**: Get the partition column and partition ranges defined for the table, then copy the data by range. <br>- **Hash Partitioning**: Use hash partition key as partition column, then partition and copy the data based on ranges calculated by the service. <br>- **Round-Robin Partitioning** or **No Partition**: Use primary key as partition column, then partition and copy the data based on ranges calculated by the service. |
+| Full load from large table. | **Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partition type of the specified SAP HANA table, and chooses the corresponding partition strategy:<br>- **Range Partitioning**: Get the partition column and partition ranges defined for the table, then copy the data by range. <br>- **Hash Partitioning**: Use hash partition key as partition column, then partition and copy the data based on ranges calculated by the service. <br>- **Round-Robin Partitioning** or **No Partition**: Use primary key as partition column, then partition and copy the data based on ranges calculated by the service. |
 | Load large amount of data by using a custom query. | **Partition option**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfHanaDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to apply dynamic range partition. <br><br>During execution, the service first calculates the value ranges of the specified partition column, by evenly distributes the rows in a number of buckets according to the number of distinct partition column values the parallel copy setting, then replaces `?AdfHanaDynamicRangePartitionCondition` with filtering the partition column value range for each partition, and sends to SAP HANA.<br><br>If you want to use multiple columns as partition column, you can concatenate the values of each column as one column in the query and specify it as the partition column, like `SELECT * FROM (SELECT *, CONCAT(<KeyColumn1>, <KeyColumn2>) AS PARTITIONCOLUMN FROM <TABLENAME>) WHERE ?AdfHanaDynamicRangePartitionCondition`. |
 
 **Example: query with physical partitions of a table**
```

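Side note on the parallel-copy table in the diff above: the dynamic range option works by letting the service substitute the `?AdfHanaDynamicRangePartitionCondition` placeholder with a computed range filter on the partition column at runtime, once per parallel copy. A minimal, hedged sketch of a copy activity source using it might look like the following (the `SapHanaSource` and `partitionOption` property names follow the connector's documented pattern; the angle-bracket placeholders come from the table itself and are not values from this commit):

```json
{
    "source": {
        "type": "SapHanaSource",
        "partitionOption": "SapHanaDynamicRange",
        "partitionSettings": {
            "partitionColumnName": "<Partition_column_name>"
        },
        "query": "SELECT * FROM <TABLENAME> WHERE ?AdfHanaDynamicRangePartitionCondition AND <your_additional_where_clause>"
    }
}
```
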
articles/data-factory/connector-sap-table.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ ms.author: ulrichchrist
 ms.subservice: data-movement
 ms.topic: conceptual
 ms.custom: synapse
-ms.date: 10/20/2023
+ms.date: 09/25/2024
 ---
 
 # Copy data from an SAP table using Azure Data Factory or Azure Synapse Analytics
```
