Commit ebad541

Resolving alt text
1 parent ceb94d8 commit ebad541

1 file changed (+6, -6 lines)

1 file changed

+6
-6
lines changed

articles/data-factory/tutorial-data-flow.md

Lines changed: 6 additions & 6 deletions
@@ -114,19 +114,19 @@ In this step, you build a data flow that takes the moviesDB.csv in ADLS storage
 
 1. In the linked service creation screen, name your ADLS gen2 linked service **ADLSGen2** and specify your authentication method. Then enter your connection credentials. In this tutorial, we're using Account key to connect to our storage account. You can select **Test connection** to verify your credentials were entered correctly. Select Create when finished.
 
-:::image type="content" source="media/tutorial-data-flow/create-linked-service.png" alt-text="Linked Service":::
+:::image type="content" source="media/tutorial-data-flow/create-linked-service.png" alt-text="Screenshot of the pane to create a new linked service for Azure Data Lake Storage.":::
 
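For reference, the linked service this step creates corresponds to an ADF JSON definition along these lines. This is a minimal sketch assuming account key authentication; the URL and key values are placeholders:

```json
{
  "name": "ADLSGen2",
  "properties": {
    "type": "AzureBlobFS",
    "typeProperties": {
      "url": "https://<your-storage-account>.dfs.core.windows.net",
      "accountKey": {
        "type": "SecureString",
        "value": "<your-account-key>"
      }
    }
  }
}
```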
 1. Once you're back at the dataset creation screen, enter where your file is located under the **File path** field. In this tutorial, the file moviesDB.csv is located in container sample-data. As the file has headers, check **First row as header**. Select **From connection/store** to import the header schema directly from the file in storage. Select OK when done.
 
-:::image type="content" source="media/tutorial-data-flow/source-linked-service.png" alt-text="Datasets":::
+:::image type="content" source="media/tutorial-data-flow/source-linked-service.png" alt-text="Screenshot of the pane to create a dataset.":::
 
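The resulting source dataset is a DelimitedText dataset pointing at the file above. A sketch of its JSON follows; the dataset name MoviesDB is an assumption, since the diff doesn't show it:

```json
{
  "name": "MoviesDB",
  "properties": {
    "type": "DelimitedText",
    "linkedServiceName": {
      "referenceName": "ADLSGen2",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobFSLocation",
        "fileSystem": "sample-data",
        "fileName": "moviesDB.csv"
      },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    }
  }
}
```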
 1. If your debug cluster has started, go to the **Data Preview** tab of the source transformation and select **Refresh** to get a snapshot of the data. You can use data preview to verify your transformation is configured correctly.
 
 :::image type="content" source="media/tutorial-data-flow/dataflow-first-preview.png" alt-text="Screenshot that shows where you can preview your data to verify your transformation is configured correctly.":::
 
 1. Next to your source node on the data flow canvas, select the plus icon to add a new transformation. The first transformation you're adding is a **Filter**.
 
-:::image type="content" source="media/tutorial-data-flow/dataflow-add-filter.png" alt-text="Data Flow Canvas":::
+:::image type="content" source="media/tutorial-data-flow/dataflow-add-filter.png" alt-text="Screenshot of the data flow canvas.":::
 
 1. Name your filter transformation **FilterYears**. Select the expression box next to **Filter on** and then **Open expression builder**. Here you specify your filtering condition.
 
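Filter conditions use the mapping data flow expression language. The actual condition appears later in the article, outside this diff, so the following one-liner is purely illustrative of the syntax (the year column name is an assumption):

```
toInteger(year) >= 1910 && toInteger(year) <= 2000
```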
@@ -190,11 +190,11 @@ In this step, you build a data flow that takes the moviesDB.csv in ADLS storage
 
 1. Choose **DelimitedText**. Select Continue.
 
-:::image type="content" source="media/tutorial-data-flow/select-delimited.png" alt-text="Dataset":::
+:::image type="content" source="media/tutorial-data-flow/select-delimited.png" alt-text="Screenshot of the pane to select a dataset type.":::
 
 1. Name your sink dataset **MoviesSink**. For linked service, choose the ADLS gen2 linked service you created in step 6. Enter an output folder to write your data to. In this tutorial, we're writing to folder 'output' in container 'sample-data'. The folder doesn't need to exist beforehand and can be dynamically created. Set **First row as header** as true and select **None** for **Import schema**. Select Finish.
 
-:::image type="content" source="media/tutorial-data-flow/sink-properties.png" alt-text="Sink":::
+:::image type="content" source="media/tutorial-data-flow/sink-properties.png" alt-text="Screenshot of the sink creation page with first row as header selected.":::
 
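The sink dataset mirrors the source dataset but targets the output folder. A minimal sketch of its JSON, based on the settings named in this step:

```json
{
  "name": "MoviesSink",
  "properties": {
    "type": "DelimitedText",
    "linkedServiceName": {
      "referenceName": "ADLSGen2",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobFSLocation",
        "fileSystem": "sample-data",
        "folderPath": "output"
      },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    }
  }
}
```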
 Now you've finished building your data flow. You're ready to run it in your pipeline.
 
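In the pipeline, the data flow runs inside an Execute Data Flow activity. A sketch of that activity's JSON follows; the activity and data flow names here are assumptions, since the diff doesn't show them:

```json
{
  "name": "RunTransformMovies",
  "type": "ExecuteDataFlow",
  "typeProperties": {
    "dataFlow": {
      "referenceName": "TransformMovies",
      "type": "DataFlowReference"
    }
  }
}
```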
@@ -218,7 +218,7 @@ You can debug a pipeline before you publish it. In this step, you're going to tr
 
 1. Select a transformation to get detailed information about the columns and partitioning of the data.
 
-:::image type="content" source="media/tutorial-data-flow/review-stage.png" alt-text="Monitoring":::
+:::image type="content" source="media/tutorial-data-flow/review-stage.png" alt-text="Screenshot of the monitoring transformation pane.":::
 
 If you followed this tutorial correctly, you should have written 83 rows and 2 columns into your sink folder. You can verify the data is correct by checking your blob storage.
 
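To spot-check the sink output from a terminal, an Azure CLI listing along these lines works against an ADLS Gen2 file system (a sketch; the account name is a placeholder):

```bash
az storage fs file list \
  --account-name <your-storage-account> \
  --file-system sample-data \
  --path output \
  --auth-mode login
```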