-As you change the shape of your data through transformations, you'll see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata won't be visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.
+As you change the shape of your data through transformations, you can see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata isn't visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.
#### Data preview
@@ -96,7 +96,7 @@ Mapping data flows are operationalized within ADF pipelines using the [data flow
## Debug mode
-Debug mode allows you to interactively see the results of each transformation step while you build and debug your data flows. The debug session can be used both in when building your data flow logic and running pipeline debug runs with data flow activities. To learn more, see the [debug mode documentation](concepts-data-flow-debug-mode.md).
+Debug mode allows you to interactively see the results of each transformation step while you build and debug your data flows. The debug session can be used both when building your data flow logic and when running pipeline debug runs with data flow activities. To learn more, see the [debug mode documentation](concepts-data-flow-debug-mode.md).
articles/data-factory/concepts-linked-services.md (10 additions & 10 deletions)
@@ -8,7 +8,7 @@ ms.reviewer: jburchel
ms.subservice: data-movement
ms.custom: synapse
ms.topic: conceptual
-ms.date: 10/20/2023
+ms.date: 09/25/2024
---
# Linked services in Azure Data Factory and Azure Synapse Analytics
@@ -17,7 +17,7 @@ ms.date: 10/20/2023
This article describes what linked services are, how they're defined in JSON format, and how they're used in Azure Data Factory and Azure Synapse Analytics.
-To learn more read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse](../synapse-analytics/overview-what-is.md).
+To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse](../synapse-analytics/overview-what-is.md).
## Overview
@@ -27,7 +27,7 @@ Now, a **dataset** is a named view of data that simply points to or references t
Before you create a dataset, you must create a **linked service** to link your data store to the Data Factory or Synapse Workspace. Linked services are much like connection strings, which define the connection information needed for the service to connect to external resources. Think of it this way: the dataset represents the structure of the data within the linked data stores, and the linked service defines the connection to the data source. For example, an Azure Storage linked service links a storage account to the service. An Azure Blob dataset represents the blob container and the folder within that Azure Storage account that contains the input blobs to be processed.
-Here is a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked services: Azure Storage and Azure SQL Database. Then, create two datasets: Azure Blob dataset (which refers to the Azure Storage linked service) and Azure SQL Table dataset (which refers to the Azure SQL Database linked service). The Azure Storage and Azure SQL Database linked services contain connection strings that the service uses at runtime to connect to your Azure Storage and Azure SQL Database, respectively. The Azure Blob dataset specifies the blob container and blob folder that contains the input blobs in your Blob storage. The Azure SQL Table dataset specifies the SQL table in your SQL Database to which the data is to be copied.
+Here's a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked services: Azure Storage and Azure SQL Database. Then, create two datasets: Azure Blob dataset (which refers to the Azure Storage linked service) and Azure SQL Table dataset (which refers to the Azure SQL Database linked service). The Azure Storage and Azure SQL Database linked services contain connection strings that the service uses at runtime to connect to your Azure Storage and Azure SQL Database, respectively. The Azure Blob dataset specifies the blob container and blob folder that contains the input blobs in your Blob storage. The Azure SQL Table dataset specifies the SQL table in your SQL Database to which the data is to be copied.
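
Editor's note (illustrative only, not part of this diff): the scenario in the hunk above corresponds to JSON definitions in which the dataset points back to its linked service by name. A minimal sketch of the Azure Storage linked service might look like the following; the placeholder names and the exact `typeProperties` are assumptions, so check the connector article for the store and authentication method you actually use.

```json
{
    "name": "AzureStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
        }
    }
}
```

A blob dataset would then reference that linked service through a `linkedServiceName` reference, again as a sketch with made-up names:

```json
{
    "name": "InputBlobDataset",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": {
            "referenceName": "AzureStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "folderPath": "<container>/<input folder>"
        }
    }
}
```
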
The following diagram shows the relationships among pipeline, activity, dataset, and linked service in the service:
@@ -37,21 +37,21 @@ The following diagram shows the relationships among pipeline, activity, dataset,
# [Azure Data Factory](#tab/data-factory)
-To create a new linked service in Azure Data Factory Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **New** to create a new linked service.
+To create a new linked service in Azure Data Factory Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
:::image type="content" source="media/concepts-linked-services/create-linked-service.png" alt-text="Shows the Azure Data Factory studio Manage tab with linked services and the New button highlighted.":::
-After selecting New to create a new linked service you will be able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
+After selecting **+ New** to create a new linked service you can choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
:::image type="content" source="media/concepts-linked-services/new-linked-service-window.png" alt-text="Shows the new linked service window.":::
# [Synapse Analytics](#tab/synapse-analytics)
-To create a new linked service in Synapse Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **New** to create a new linked service.
+To create a new linked service in Synapse Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
:::image type="content" source="media/concepts-linked-services/create-linked-service-synapse.png" alt-text="Shows the Azure Data Factory studio Manage tab with linked services and the New button highlighted.":::
-After selecting New to create a new linked service you will be able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
+After selecting **+ New** to create a new linked service you are able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
:::image type="content" source="media/concepts-linked-services/new-linked-service-window.png" alt-text="Shows the new linked service window.":::
@@ -113,15 +113,15 @@ Linked services can be created in the Azure Data Factory UX via the [management
You can create linked services by using one of these tools or SDKs: [.NET API](quickstart-create-data-factory-dot-net.md), [PowerShell](quickstart-create-data-factory-powershell.md), [REST API](quickstart-create-data-factory-rest-api.md), [Azure Resource Manager Template](quickstart-create-data-factory-resource-manager-template.md), and [Azure portal](quickstart-create-data-factory-portal.md).
-When creating a linked service, the user needs appropriate authorization to the designated service. If sufficient access is not granted, the user will not be able to see the available resources and will need to use manual entry option.
+When creating a linked service, the user needs appropriate authorization to the designated service. If sufficient access isn't granted, the user can't see the available resources and needs to use manual entry option.
## Data store linked services
-You can find the list of supported data stores in the [connector overview](copy-activity-overview.md#supported-data-stores-and-formats) article. Click a data store to learn the supported connection properties.
+You can find the list of supported data stores in the [connector overview](copy-activity-overview.md#supported-data-stores-and-formats) article. Select a data store to learn the supported connection properties.
## Compute linked services
-Reference [compute environments supported](compute-linked-services.md) for details about different compute environments you can connect to from your service as well as the different configurations.
+Reference [compute environments supported](compute-linked-services.md) for details about different compute environments you can connect to from your service and the different configurations.
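
Editor's note (illustrative only, not part of this diff): compute linked services use the same JSON shape as data store linked services. A sketch for Azure Databricks attached to an existing interactive cluster might look roughly like this; the property names are an assumption based on that connector's documentation, so verify them against the compute linked services article.

```json
{
    "name": "AzureDatabricksLinkedService",
    "properties": {
        "type": "AzureDatabricks",
        "typeProperties": {
            "domain": "https://<region>.azuredatabricks.net",
            "accessToken": {
                "type": "SecureString",
                "value": "<access token>"
            },
            "existingClusterId": "<interactive cluster id>"
        }
    }
}
```
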
articles/data-factory/connector-sap-hana.md (3 additions & 3 deletions)
@@ -7,7 +7,7 @@ ms.author: ulrichchrist
ms.subservice: data-movement
ms.topic: conceptual
ms.custom: synapse
-ms.date: 10/20/2023
+ms.date: 09/25/2024
---
# Copy data from SAP HANA using Azure Data Factory or Synapse Analytics
@@ -36,7 +36,7 @@ Specifically, this SAP HANA connector supports:
- Copying data from any version of SAP HANA database.
- Copying data from **HANA information models** (such as Analytic and Calculation views) and **Row/Column tables**.
- Copying data using **Basic** or **Windows** authentication.
-- Parallel copying from a SAP HANA source. See the [Parallel copy from SAP HANA](#parallel-copy-from-sap-hana) section for details.
+- Parallel copying from an SAP HANA source. See the [Parallel copy from SAP HANA](#parallel-copy-from-sap-hana) section for details.
> [!TIP]
> To copy data **into** SAP HANA data store, use generic ODBC connector. See [SAP HANA sink](#sap-hana-sink) section with details. Note the linked services for SAP HANA connector and ODBC connector are with different type thus cannot be reused.
@@ -258,7 +258,7 @@ You are suggested to enable parallel copy with data partitioning especially when
-| Full load from large table. |**Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partition type of the specified SAP HANA table, and choose the corresponding partition strategy:<br>- **Range Partitioning**: Get the partition column and partition ranges defined for the table, then copy the data by range. <br>- **Hash Partitioning**: Use hash partition key as partition column, then partition and copy the data based on ranges calculated by the service. <br>- **Round-Robin Partitioning** or **No Partition**: Use primary key as partition column, then partition and copy the data based on ranges calculated by the service. |
+| Full load from large table. |**Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partition type of the specified SAP HANA table, and chooses the corresponding partition strategy:<br>- **Range Partitioning**: Get the partition column and partition ranges defined for the table, then copy the data by range. <br>- **Hash Partitioning**: Use hash partition key as partition column, then partition and copy the data based on ranges calculated by the service. <br>- **Round-Robin Partitioning** or **No Partition**: Use primary key as partition column, then partition and copy the data based on ranges calculated by the service. |
| Load large amount of data by using a custom query. | **Partition option**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfHanaDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to apply dynamic range partition. <br><br>During execution, the service first calculates the value ranges of the specified partition column, by evenly distributes the rows in a number of buckets according to the number of distinct partition column values the parallel copy setting, then replaces `?AdfHanaDynamicRangePartitionCondition` with filtering the partition column value range for each partition, and sends to SAP HANA.<br><br>If you want to use multiple columns as partition column, you can concatenate the values of each column as one column in the query and specify it as the partition column, like `SELECT * FROM (SELECT *, CONCAT(<KeyColumn1>, <KeyColumn2>) AS PARTITIONCOLUMN FROM <TABLENAME>) WHERE ?AdfHanaDynamicRangePartitionCondition`. |
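
Editor's note (illustrative only, not part of this diff): plugging hypothetical names into the dynamic range pattern from the row above gives a copy activity source roughly like the sketch below. `SALESORDERS`, `ORDERDATE`, and `ORDERID` are made-up identifiers, and the `partitionOption`/`partitionSettings` property names are assumptions based on how this connector's source properties are documented; at run time the service replaces `?AdfHanaDynamicRangePartitionCondition` with a range filter on the partition column.

```json
{
    "source": {
        "type": "SapHanaSource",
        "query": "SELECT * FROM SALESORDERS WHERE ?AdfHanaDynamicRangePartitionCondition AND ORDERDATE >= '2024-01-01'",
        "partitionOption": "SapHanaDynamicRange",
        "partitionSettings": {
            "partitionColumnName": "ORDERID"
        }
    }
}
```
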
**Example: query with physical partitions of a table**