
Commit 7245ff8

Merge pull request #212584 from jonburchel/2022-09-23-screenshot-updates-tut-thru-z
Screenshot update Tut* thru Z*
2 parents 024f878 + a411cbd commit 7245ff8

File tree

42 files changed, +72 -70 lines changed

articles/data-factory/tutorial-bulk-copy-portal.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ ms.service: data-factory
 ms.subservice: tutorials
 ms.workload: data-services
 ms.topic: tutorial
-ms.date: 07/06/2021
+ms.date: 09/26/2022
 ---
 
 # Copy multiple tables in bulk by using Azure Data Factory in the Azure portal
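Every file in this commit follows the same pattern: the `ms.date` field in the YAML front matter is bumped to the new review date. A change like this is usually scripted rather than edited by hand; below is a minimal sketch of such a script in Python. It is hypothetical tooling, not part of this PR, and only the date format and file layout are taken from the diff.

```python
import re
from pathlib import Path

NEW_DATE = "09/26/2022"
# Matches a front-matter line such as "ms.date: 07/06/2021".
MS_DATE = re.compile(r"^(ms\.date:\s*)\d{2}/\d{2}/\d{4}[ \t]*$", re.MULTILINE)

def bump_ms_date(md_file: Path, new_date: str = NEW_DATE) -> bool:
    """Rewrite the ms.date field in one markdown file; return True if it changed."""
    text = md_file.read_text(encoding="utf-8")
    updated = MS_DATE.sub(lambda m: m.group(1) + new_date, text, count=1)
    if updated == text:
        return False
    md_file.write_text(updated, encoding="utf-8")
    return True

# Illustrative scope: the tutorial articles shown in this diff; the real PR spans Tut* thru Z*.
changed = [p.name for p in Path("articles/data-factory").glob("tutorial-*.md") if bump_ms_date(p)]
print(f"updated {len(changed)} files")
```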

articles/data-factory/tutorial-bulk-copy.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ ms.service: data-factory
 ms.subservice: tutorials
 ms.topic: tutorial
 ms.custom: seo-lt-2019
-ms.date: 02/18/2021
+ms.date: 09/26/2022
 ---
 
 # Copy multiple tables in bulk by using Azure Data Factory using PowerShell

articles/data-factory/tutorial-control-flow.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ ms.reviewer: jburchel
 ms.service: data-factory
 ms.subservice: tutorials
 ms.topic: tutorial
-ms.date: 01/28/2022
+ms.date: 09/28/2022
 ---
 
 # Branching and chaining activities in a Data Factory pipeline

articles/data-factory/tutorial-copy-data-dot-net.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ author: jianleishen
 ms.service: data-factory
 ms.subservice: tutorials
 ms.topic: tutorial
-ms.date: 02/18/2021
+ms.date: 09/26/2022
 ms.author: jianleishen
 ---

articles/data-factory/tutorial-copy-data-portal-private.md

Lines changed: 4 additions & 3 deletions
@@ -6,7 +6,7 @@ ms.service: data-factory
 ms.subservice: tutorials
 ms.topic: tutorial
 ms.custom: seo-lt-2019
-ms.date: 07/05/2021
+ms.date: 09/26/2022
 ms.author: jianleishen
 ---

@@ -118,7 +118,8 @@ In this tutorial, you start by creating a pipeline. Then you create linked servi
 
 1. On the home page, select **Orchestrate**.
 
-   :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+   :::image type="content" source="media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the data factory home page with the Orchestrate button highlighted.":::
+
 1. In the properties pane for the pipeline, enter **CopyPipeline** for the pipeline name.
 
 1. In the **Activities** tool box, expand the **Move and Transform** category, and drag the **Copy data** activity from the tool box to the pipeline designer surface. Enter **CopyFromBlobToSql** for the name.
@@ -276,4 +277,4 @@ You can debug a pipeline before you publish artifacts (linked services, datasets
 The pipeline in this sample copies data from Blob storage to SQL Database by using private endpoints in Data Factory Managed Virtual Network. You learned how to:
 
 * Create a data factory.
-* Create a pipeline with a copy activity.
+* Create a pipeline with a copy activity.
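The summary list above ends with "Create a pipeline with a copy activity." The tutorial builds that pipeline in the portal UI; for orientation, here is a rough code equivalent using the `azure-mgmt-datafactory` Python SDK. This is a sketch only: the subscription, resource group, factory, and dataset names are placeholders, and the tutorial itself never uses the SDK.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlSink, BlobSource, CopyActivity, DatasetReference, PipelineResource,
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One copy activity, Blob storage in, Azure SQL Database out, named as in the tutorial.
copy_activity = CopyActivity(
    name="CopyFromBlobToSql",
    inputs=[DatasetReference(reference_name="SourceBlobDataset")],    # placeholder dataset
    outputs=[DatasetReference(reference_name="OutputSqlDataset")],    # placeholder dataset
    source=BlobSource(),
    sink=AzureSqlSink(),
)

adf.pipelines.create_or_update(
    "<resource-group>", "<factory-name>", "CopyPipeline",
    PipelineResource(activities=[copy_activity]),
)
```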

articles/data-factory/tutorial-copy-data-portal.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ ms.service: data-factory
 ms.subservice: tutorials
 ms.topic: tutorial
 ms.custom: seo-lt-2019
-ms.date: 07/05/2021
+ms.date: 09/26/2022
 ms.author: jianleishen
 ---

articles/data-factory/tutorial-copy-data-tool.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ ms.service: data-factory
 ms.subservice: tutorials
 ms.topic: tutorial
 ms.custom: seo-lt-2019
-ms.date: 07/08/2021
+ms.date: 09/26/2022
 ---
 
 # Copy data from Azure Blob storage to a SQL Database by using the Copy Data tool

articles/data-factory/tutorial-data-flow-adventure-works-retail-template.md

Lines changed: 5 additions & 5 deletions
@@ -6,15 +6,15 @@ ms.author: aamerril
 ms.service: synapse-analytics
 ms.topic: conceptual
 ms.custom: seo-lt-2021
-ms.date: 10/18/2021
+ms.date: 09/26/2022
 ---
 
 # AdventureWorks template documentation
 
 This document explains how to set up and use Microsoft's AdventureWorks pipeline template to jump-start the exploration of the AdventureWorks dataset using Azure Synapse Analytics and the Retail database template.
 
 ## Overview
-AdventureWorks is a fictional sports equipment retailer that is used to demo Microsoft applications. In this case, they are being used as an example for how to use Synapse Pipelines to map retail data to the Retail database template for further analysis within Azure Synapse.
+AdventureWorks is a fictional sports equipment retailer that is used to demo Microsoft applications. In this case, they're being used as an example for how to use Synapse Pipelines to map retail data to the Retail database template for further analysis within Azure Synapse.
 
 ## Prerequisites

@@ -34,7 +34,7 @@ Follow these steps to locate the template.
 These steps open the template overview page.
 
 ## Configure the template
-The template is designed to require minimal configuration. From the template overview page you can see a preview of the initial starting configuration of the pipeline, and click **Open pipeline** to create the resources in your own workspace. You will get a notification that all 31 resources in the template have been created, and can review these before committing or publishing them. You will find the below components of the template:
+The template is designed to require minimal configuration. From the template overview page you can see a preview of the initial starting configuration of the pipeline, and select **Open pipeline** to create the resources in your own workspace. You'll get a notification that all 31 resources in the template have been created, and can review these before committing or publishing them. You'll find the below components of the template:
 
 * 17 pipelines: These are scheduled to ensure the data loads into the target tables correctly, and include one pipeline per source table plus the scheduling ones.
 * 14 data flows: These contain the logic to load from the source system and land the data into the target database.
@@ -43,15 +43,15 @@ If you have the AdventureWorks dataset loaded into a different database, you can
 
 
 ## Dataset and source/target models
-The AdventureWorks dataset in Excel format can be downloaded from this [GitHub site](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/AdventureWorks%20Data.zip). In addition, you can access the [schema definition for both the source and target databases](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/AdventureWorksSchemas.xlsx). Using the database designer in Synapse, recreate the source and target databases with the schema in the Excel you downloaded earlier. For more details on the database designer, see this [documentation](../synapse-analytics/database-designer/concepts-database-templates.md).
+The AdventureWorks dataset in Excel format can be downloaded from this [GitHub site](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/AdventureWorks%20Data.zip). In addition, you can access the [schema definition for both the source and target databases](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/AdventureWorksSchemas.xlsx). Using the database designer in Synapse, recreate the source and target databases with the schema in the Excel you downloaded earlier. For more information on the database designer, see this [documentation](../synapse-analytics/database-designer/concepts-database-templates.md).
 
 With the databases created, ensure the dataflows are pointing to the correct tables by editing the dropdowns in the Workspace DB source and sink settings. You can load the data into the source model by placing the CSV files provided in the example dataset in the correct folders specified by the tables. Once that is done, all that's required is to run the pipelines.
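The paragraph above ends with running the pipelines. They are scheduled by the template, but they can also be triggered on demand; one way is the Synapse workspace REST API, sketched below with a placeholder workspace and pipeline name (this call is not part of the template).

```python
import requests
from azure.identity import DefaultAzureCredential

endpoint = "https://<workspace-name>.dev.azuresynapse.net"
# Token for the Synapse data plane, not for ARM.
token = DefaultAzureCredential().get_token("https://dev.azuresynapse.net/.default").token

resp = requests.post(
    f"{endpoint}/pipelines/<pipeline-name>/createRun",
    params={"api-version": "2020-12-01"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print("run id:", resp.json()["runId"])
```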
 
 ## Troubleshoot the pipelines
 If the pipeline fails to run successfully, there are a few main things to check for errors.
 
 * Dataset schema. Make sure the data settings for the CSV files are accurate. If you included row headers, make sure the row headers option is checked on the database table. (A quick local check is sketched after this list.)
-* Data flow sources. If you used different column or table names than what were provided in the example schema, you will need to step through the data flows to verify that the columns are mapped correctly.
+* Data flow sources. If you used different column or table names than what were provided in the example schema, you'll need to step through the data flows to verify that the columns are mapped correctly.
 * Data flow sink. The schema and data format configurations on the target database will need to match the data flow template. As above, if any changes were made, those items will need to be aligned.
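For the first bullet, a quick local comparison of a CSV's header row against the columns the target table expects can save a debugging round trip. A small sketch follows; the file and column names are illustrative, not taken from the AdventureWorks schema workbook.

```python
import csv
from pathlib import Path

def check_headers(csv_path: Path, expected: list[str]) -> None:
    """Compare a CSV's first row against the column names a target table expects."""
    # utf-8-sig strips the BOM that Excel-exported CSVs often carry.
    with csv_path.open(newline="", encoding="utf-8-sig") as f:
        header = next(csv.reader(f))
    missing = [c for c in expected if c not in header]
    extra = [c for c in header if c not in expected]
    if missing or extra:
        print(f"{csv_path.name}: missing {missing}, unexpected {extra}")
    else:
        print(f"{csv_path.name}: header matches the expected schema")

# Illustrative call; substitute your own file and the columns from the schema workbook.
check_headers(Path("SalesOrderHeader.csv"), ["SalesOrderID", "OrderDate", "CustomerID"])
```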
 
 ## Next steps

articles/data-factory/tutorial-data-flow-delta-lake.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ ms.service: data-factory
 ms.subservice: data-flows
 ms.topic: conceptual
 ms.custom: seo-lt-2021
-ms.date: 06/04/2021
+ms.date: 09/26/2022
 ---
 
 # Transform data in delta lake using mapping data flows
