articles/data-factory/tutorial-copy-data-portal-private.md (+4 -3)
@@ -6,7 +6,7 @@ ms.service: data-factory
 ms.subservice: tutorials
 ms.topic: tutorial
 ms.custom: seo-lt-2019
-ms.date: 07/05/2021
+ms.date: 09/26/2022
 ms.author: jianleishen
 ---

@@ -118,7 +118,8 @@ In this tutorial, you start by creating a pipeline. Then you create linked servi

 1. On the home page, select **Orchestrate**.

-   :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+   :::image type="content" source="media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the data factory home page with the Orchestrate button highlighted.":::
+
 1. In the properties pane for the pipeline, enter **CopyPipeline** for the pipeline name.

 1. In the **Activities** tool box, expand the **Move and Transform** category, and drag the **Copy data** activity from the tool box to the pipeline designer surface. Enter **CopyFromBlobToSql** for the name.
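For context on the hunk above: the pipeline assembled in these steps is stored as JSON behind the designer canvas. Below is a minimal sketch of what the resulting Copy activity definition could look like; the dataset names (`SourceBlobDataset`, `OutputSqlDataset`) and the source/sink types are illustrative assumptions, not the tutorial's exact generated output.

```json
{
  "name": "CopyPipeline",
  "properties": {
    "activities": [
      {
        "name": "CopyFromBlobToSql",
        "type": "Copy",
        "inputs": [
          { "referenceName": "SourceBlobDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "OutputSqlDataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "AzureSqlSink" }
        }
      }
    ]
  }
}
```

The JSON the portal actually generates is the authoritative version; this sketch only shows the shape of a pipeline with a single Copy activity wired between a delimited-text source and an Azure SQL sink.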
@@ -276,4 +277,4 @@ You can debug a pipeline before you publish artifacts (linked services, datasets
 The pipeline in this sample copies data from Blob storage to SQL Database by using private endpoints in Data Factory Managed Virtual Network. You learned how to:
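Because this summary line refers to private endpoints in the Data Factory Managed Virtual Network, a hedged sketch of a managed private endpoint definition may help; managed private endpoints are themselves small JSON artifacts. The name is hypothetical and the resource ID segments are placeholders, assuming a Blob storage target:

```json
{
  "name": "AzureBlobStoragePe",
  "properties": {
    "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
    "groupId": "blob"
  }
}
```

An endpoint to Azure SQL Database would instead point `privateLinkResourceId` at a `Microsoft.Sql/servers` resource with group ID `sqlServer`.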
articles/data-factory/tutorial-data-flow-adventure-works-retail-template.md (+5 -5)
@@ -6,15 +6,15 @@ ms.author: aamerril
 ms.service: synapse-analytics
 ms.topic: conceptual
 ms.custom: seo-lt-2021
-ms.date: 10/18/2021
+ms.date: 09/26/2022
 ---

 # AdventureWorks template documentation

 This document explains how to set up and use Microsoft's AdventureWorks pipeline template to jump-start exploration of the AdventureWorks dataset using Azure Synapse Analytics and the Retail database template.

 ## Overview
-AdventureWorks is a fictional sports equipment retailer that is used to demo Microsoft applications. In this case, they are being used as an example for how to use Synapse Pipelines to map retail data to the Retail database template for further analysis within Azure Synapse.
+AdventureWorks is a fictional sports equipment retailer that is used to demo Microsoft applications. In this case, they're being used as an example for how to use Synapse Pipelines to map retail data to the Retail database template for further analysis within Azure Synapse.

 ## Prerequisites

@@ -34,7 +34,7 @@ Follow these steps to locate the template.
 These steps open the template overview page.

 ## Configure the template
-The template is designed to require minimal configuration. From the template overview page you can see a preview of the initial starting configuration of the pipeline, and click **Open pipeline** to create the resources in your own workspace. You will get a notification that all 31 resources in the template have been created, and can review these before committing or publishing them. You will find the following components in the template:
+The template is designed to require minimal configuration. From the template overview page you can see a preview of the initial starting configuration of the pipeline, and select **Open pipeline** to create the resources in your own workspace. You'll get a notification that all 31 resources in the template have been created, and can review them before committing or publishing. You'll find the following components in the template:

 * 17 pipelines: These are scheduled to ensure the data loads into the target tables correctly, and include one pipeline per source table plus the scheduling ones.
 * 14 data flows: These contain the logic to load from the source system and land the data into the target database.
@@ -43,15 +43,15 @@ If you have the AdventureWorks dataset loaded into a different database, you can


 ## Dataset and source/target models
-The AdventureWorks dataset in Excel format can be downloaded from this [GitHub site](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/AdventureWorks%20Data.zip). In addition, you can access the [schema definition for both the source and target databases](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/AdventureWorksSchemas.xlsx). Using the database designer in Synapse, recreate the source and target databases with the schema in the Excel you downloaded earlier. For more details on the database designer, see this [documentation](../synapse-analytics/database-designer/concepts-database-templates.md).
+The AdventureWorks dataset in Excel format can be downloaded from this [GitHub site](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/AdventureWorks%20Data.zip). In addition, you can access the [schema definition for both the source and target databases](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/AdventureWorksSchemas.xlsx). Using the database designer in Synapse, recreate the source and target databases with the schema in the Excel you downloaded earlier. For more information on the database designer, see this [documentation](../synapse-analytics/database-designer/concepts-database-templates.md).

 With the databases created, ensure the dataflows are pointing to the correct tables by editing the dropdowns in the Workspace DB source and sink settings. You can load the data into the source model by placing the CSV files provided in the example dataset in the correct folders specified by the tables. Once that is done, all that's required is to run the pipelines.

 ## Troubleshoot the pipelines
 If the pipeline fails to run successfully, there are a few main things to check for errors.

 * Dataset schema. Make sure the data settings for the CSV files are accurate. If you included row headers, make sure the row headers option is checked on the database table.
-* Data flow sources. If you used different column or table names than what were provided in the example schema, you will need to step through the data flows to verify that the columns are mapped correctly.
+* Data flow sources. If you used different column or table names than what were provided in the example schema, you'll need to step through the data flows to verify that the columns are mapped correctly.
 * Data flow sink. The schema and data format configurations on the target database will need to match the data flow template. As above, if any changes were made, those items will need to be aligned.
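Regarding the first troubleshooting bullet above (row headers): in Data Factory and Synapse, header behavior for a CSV is controlled by the `firstRowAsHeader` property on a DelimitedText dataset. Here is a minimal sketch of a source dataset with headers enabled; the dataset name, linked service name, and file location are illustrative assumptions, not values from the template.

```json
{
  "name": "AdventureWorksCsvSource",
  "properties": {
    "type": "DelimitedText",
    "linkedServiceName": {
      "referenceName": "SourceStorageLinkedService",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobFSLocation",
        "fileSystem": "source",
        "folderPath": "adventureworks",
        "fileName": "SalesOrderHeader.csv"
      },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    }
  }
}
```

If your files don't include a header row, `firstRowAsHeader` should be `false`, and the matching header setting on the target database table needs to agree with it.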