articles/data-factory/load-azure-sql-data-warehouse.md (31 additions, 40 deletions)
ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
ms.custom: seo-lt-2019
ms.date: 04/16/2020
---
# Load data into Azure SQL Data Warehouse by using Azure Data Factory
This article shows you how to use the Data Factory Copy Data tool to _load data from Azure SQL Database into Azure SQL Data Warehouse_.
1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**.
2. On the **New data factory** page, provide values for the following items:

   * **Name**: Enter *LoadSQLDWDemo* for the name. The name for your data factory must be globally unique. If you receive the error "Data factory name 'LoadSQLDWDemo' is not available", enter a different name for the data factory. For example, you could use the name _**yourname**_**ADFTutorialDataFactory**. Try creating the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md).
   * **Subscription**: Select the Azure subscription in which to create the data factory.
   * **Resource Group**: Select an existing resource group from the drop-down list, or select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
   * **Version**: Select **V2**.
3. Select **Create**.
4. After creation is complete, go to your data factory. You see the **Data Factory** home page.

   Select the **Author & Monitor** tile to launch the Data Integration Application in a separate tab.
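
The portal steps above can also be scripted. A minimal ARM template sketch for the same factory, assuming the *LoadSQLDWDemo* name from this tutorial and a placeholder region:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories",
      "apiVersion": "2018-06-01",
      "name": "LoadSQLDWDemo",
      "location": "East US",
      "identity": { "type": "SystemAssigned" },
      "properties": {}
    }
  ]
}
```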
## Load data into Azure SQL Data Warehouse
1. In the **Get started** page, select the **Copy Data** tile to launch the Copy Data tool.
1. In the **Properties** page, specify **CopyFromSQLToSQLDW** for the **Task name** field, and select **Next**.
1. In the **Source data store** page, complete the following steps:

   >[!TIP]
   >In this tutorial, you use *SQL authentication* as the authentication type for your source data store, but you can choose other supported authentication methods: *Service Principal* and *Managed Identity* if needed. Refer to the corresponding sections in [this article](https://docs.microsoft.com/azure/data-factory/connector-azure-sql-database#linked-service-properties) for details.
   >To store secrets for data stores securely, it's also recommended to use an Azure Key Vault. Refer to [this article](https://docs.microsoft.com/azure/data-factory/store-credentials-in-key-vault) for detailed illustrations.

   a. Click **+ Create new connection**.

   b. Select **Azure SQL Database** from the gallery, and select **Continue**. You can type "SQL" in the search box to filter the connectors.

   c. In the **New Linked Service** page, select your server name and DB name from the dropdown list, and specify the username and password. Click **Test connection** to validate the settings, then select **Create**.
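
   Behind the scenes, the Copy Data tool generates an Azure SQL Database linked service from these values. A minimal sketch, with a hypothetical linked service name and placeholder connection values (with Key Vault, the password would instead be an `AzureKeyVaultSecret` reference):

   ```json
   {
       "name": "SourceAzureSqlDatabase",
       "properties": {
           "type": "AzureSqlDatabase",
           "typeProperties": {
               "connectionString": "Server=tcp:<your-server>.database.windows.net,1433;Database=<your-database>;User ID=<your-user>;Password=<your-password>;Encrypt=True;Connection Timeout=30"
           }
       }
   }
   ```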
1. In the **Select tables from which to copy the data or use a custom query** page, enter **SalesLT** to filter the tables. Choose the **(Select all)** box to use all of the tables for the copy, and then select **Next**.
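
   For each selected table, the tool creates a source dataset that points at the linked service defined earlier. A sketch for one of the SalesLT tables, using illustrative names:

   ```json
   {
       "name": "SourceDataset_SalesLTCustomer",
       "properties": {
           "type": "AzureSqlTable",
           "linkedServiceName": {
               "referenceName": "SourceAzureSqlDatabase",
               "type": "LinkedServiceReference"
           },
           "typeProperties": {
               "tableName": "SalesLT.Customer"
           }
       }
   }
   ```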
1. In the **Apply filter** page, specify your settings or select **Next**.
1. In the **Destination data store** page, complete the following steps:

   >[!TIP]
   >In this tutorial, you use *SQL authentication* as the authentication type for your destination data store, but you can choose other supported authentication methods: *Service Principal* and *Managed Identity* if needed. Refer to the corresponding sections in [this article](https://docs.microsoft.com/azure/data-factory/connector-azure-sql-data-warehouse#linked-service-properties) for details.
   >To store secrets for data stores securely, it's also recommended to use an Azure Key Vault. Refer to [this article](https://docs.microsoft.com/azure/data-factory/store-credentials-in-key-vault) for detailed illustrations.

   a. Click **+ Create new connection** to add a connection.

   b. Select **Azure Synapse Analytics (formerly SQL DW)** from the gallery, and select **Continue**. You can type "SQL" in the search box to filter the connectors.

   c. In the **New Linked Service** page, select your server name and DB name from the dropdown list, and specify the username and password. Click **Test connection** to validate the settings, then select **Create**.
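
   The resulting destination linked service uses the SQL Data Warehouse connector type. A minimal sketch with a hypothetical name and placeholder connection values:

   ```json
   {
       "name": "DestinationSqlDw",
       "properties": {
           "type": "AzureSqlDW",
           "typeProperties": {
               "connectionString": "Server=tcp:<your-server>.database.windows.net,1433;Database=<your-sql-dw>;User ID=<your-user>;Password=<your-password>;Encrypt=True;Connection Timeout=30"
           }
       }
   }
   ```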
1. In the **Table mapping** page, review the content, and select **Next**. An intelligent table mapping displays. The source tables are mapped to the destination tables based on the table names. If a source table doesn't exist in the destination, Azure Data Factory creates a destination table with the same name by default. You can also map a source table to an existing destination table.

   > [!NOTE]
   > Automatic table creation for the SQL Data Warehouse sink applies when SQL Server or Azure SQL Database is the source. If you copy data from another source data store, you need to pre-create the schema in the sink Azure SQL Data Warehouse before executing the data copy.
1. In the **Column mapping** page, review the content, and select **Next**. The intelligent table mapping is based on the column name. If you let Data Factory automatically create the tables, data type conversion can occur when there are incompatibilities between the source and destination stores. If there's an unsupported data type conversion between the source and destination column, you see an error message next to the corresponding table.
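
   The mapping you confirm here is stored on the generated copy activity as a tabular translator. A trimmed sketch with illustrative column names:

   ```json
   {
       "translator": {
           "type": "TabularTranslator",
           "mappings": [
               { "source": { "name": "CustomerID" }, "sink": { "name": "CustomerID" } },
               { "source": { "name": "CompanyName" }, "sink": { "name": "CompanyName" } }
           ]
       }
   }
   ```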
1. In the **Settings** page, complete the following steps:

   a. In the **Staging settings** section, click **+ New** to configure a staging storage account. The storage is used for staging the data before it loads into SQL Data Warehouse by using PolyBase. After the copy is complete, the interim data in Azure Blob Storage is automatically cleaned up.

   b. In the **New Linked Service** page, select your storage account, and select **Finish**.
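
   In the generated copy activity, staged loading shows up as `enableStaging` plus a staging linked service, and the SQL Data Warehouse sink loads through PolyBase. A trimmed sketch that reuses the illustrative names from the earlier sketches and assumes a hypothetical staging storage linked service:

   ```json
   {
       "name": "CopyFromSQLToSQLDW",
       "type": "Copy",
       "inputs": [ { "referenceName": "SourceDataset_SalesLTCustomer", "type": "DatasetReference" } ],
       "outputs": [ { "referenceName": "DestinationDataset_SalesLTCustomer", "type": "DatasetReference" } ],
       "typeProperties": {
           "source": { "type": "AzureSqlSource" },
           "sink": { "type": "SqlDWSink", "allowPolyBase": true },
           "enableStaging": true,
           "stagingSettings": {
               "linkedServiceName": { "referenceName": "StagingBlobStorage", "type": "LinkedServiceReference" }
           }
       }
   }
   ```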
1. Notice that the **Monitor** tab on the left is automatically selected. When the pipeline run completes successfully, select the **CopyFromSQLToSQLDW** link under the **PIPELINE NAME** column to view activity run details and to rerun the pipeline.
1. To monitor the execution details for each copy activity, select the **Details** link (eyeglasses icon) under **ACTIVITY NAME** in the activity runs view. You can monitor details like the volume of data copied from the source to the sink, data throughput, execution steps with corresponding duration, and used configurations.
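
   If you prefer to check runs programmatically instead of through the UI, the Data Factory REST API exposes a *Query Pipeline Runs* operation (a `POST` to `https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/LoadSQLDWDemo/queryPipelineRuns?api-version=2018-06-01`). A sketch of the request body, with placeholder timestamps:

   ```json
   {
       "lastUpdatedAfter": "2020-04-16T00:00:00.000Z",
       "lastUpdatedBefore": "2020-04-17T00:00:00.000Z",
       "filters": [
           { "operand": "PipelineName", "operator": "Equals", "values": [ "CopyFromSQLToSQLDW" ] }
       ]
   }
   ```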