In this procedure, you create two datasets: **InputDataset** and **OutputDataset**. These datasets are of type **AzureBlob**. They refer to the Azure Storage linked service that you created in the previous section.
In the linked service settings, you specified the Azure Storage account that contains the source data.
1. On the **Select Format** page, choose the format type of your data, and then select **Continue**. In this case, select **Binary**, because you copy the files as-is without parsing their content.
1. On the **Set Properties** page, complete the following steps:
a. Under **Name**, enter **InputDataset**.
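
The same two datasets can also be defined programmatically. Below is a minimal sketch using the Azure Data Factory Python SDK (`azure-mgmt-datafactory`), not the tutorial's portal-based method. The subscription, resource group, factory, and linked service names are placeholders, and the `adftutorial` container with `input` and `output` folders is an assumption about the setup from earlier sections.

```python
# Sketch: creating InputDataset and OutputDataset with the ADF Python SDK.
# All resource names below are placeholders, not values from the tutorial.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureBlobStorageLocation,
    BinaryDataset,
    DatasetResource,
    LinkedServiceReference,
)

SUBSCRIPTION_ID = "<subscription-id>"
RG, FACTORY = "<resource-group>", "<data-factory-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Both datasets are Binary (files are copied as-is) and reference the
# Azure Storage linked service created in the previous section.
ls_ref = LinkedServiceReference(
    type="LinkedServiceReference", reference_name="AzureStorageLinkedService"
)

input_ds = DatasetResource(properties=BinaryDataset(
    linked_service_name=ls_ref,
    location=AzureBlobStorageLocation(container="adftutorial", folder_path="input"),
))
output_ds = DatasetResource(properties=BinaryDataset(
    linked_service_name=ls_ref,
    location=AzureBlobStorageLocation(container="adftutorial", folder_path="output"),
))

client.datasets.create_or_update(RG, FACTORY, "InputDataset", input_ds)
client.datasets.create_or_update(RG, FACTORY, "OutputDataset", output_ds)
```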
## Create a pipeline

In this procedure, you create and validate a pipeline with a copy activity that uses the input and output datasets.
1. In the **General** tab, specify **CopyPipeline** for **Name**.
1. In the **Activities** toolbox, expand **Move & Transform**. Drag the **Copy Data** activity from the **Activities** toolbox to the pipeline designer surface. You can also search for activities in the **Activities** toolbox. Specify **CopyFromBlobToBlob** for **Name**.
1. Switch to the **Source** tab in the copy activity settings, and select **InputDataset** for **Source Dataset**.
1. Switch to the **Sink** tab in the copy activity settings, and select **OutputDataset** for **Sink Dataset**.
1. Select **Validate** on the pipeline toolbar above the canvas to validate the pipeline settings. Confirm that the pipeline validates successfully. To close the validation output, select the **>>** (right arrow) button.
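
If you define the pipeline in code instead, the copy activity maps onto `CopyActivity` in the same Python SDK. A minimal sketch, continuing from the dataset snippet above (`client`, `RG`, and `FACTORY` are the placeholders defined there):

```python
# Sketch, continuing from the dataset snippet: the CopyPipeline with a single
# CopyFromBlobToBlob copy activity wired to the two datasets.
from azure.mgmt.datafactory.models import (
    BinarySink,
    BinarySource,
    CopyActivity,
    DatasetReference,
    PipelineResource,
)

copy_activity = CopyActivity(
    name="CopyFromBlobToBlob",
    inputs=[DatasetReference(type="DatasetReference", reference_name="InputDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="OutputDataset")],
    source=BinarySource(),   # read blobs as-is
    sink=BinarySink(),       # write blobs as-is
)

pipeline = PipelineResource(activities=[copy_activity])
client.pipelines.create_or_update(RG, FACTORY, "CopyPipeline", pipeline)
```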
## Debug the pipeline
In this step, you debug the pipeline before deploying it to Data Factory. To debug the pipeline, select **Debug** on the pipeline toolbar. The status of the pipeline run appears in the **Output** tab at the bottom of the window.
## Trigger the pipeline manually
In this procedure, you deploy entities (linked services, datasets, pipelines) to Azure Data Factory. Then, you manually trigger a pipeline run.
1. Before you trigger a pipeline, you must publish entities to Data Factory. To publish, select **Publish all** at the top.
1. To trigger the pipeline manually, select **Add Trigger** on the pipeline toolbar, and then select **Trigger Now**. On the **Pipeline run** page, select **Finish**.
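
With the SDK sketches above, `create_or_update` deploys each entity directly, so there is no separate publish step. A run can then be started and polled as sketched below, again continuing from the earlier snippets:

```python
# Sketch, continuing from above: start a pipeline run and poll it until it
# leaves the Queued/InProgress states.
import time

run = client.pipelines.create_run(RG, FACTORY, "CopyPipeline", parameters={})

while True:
    pipeline_run = client.pipeline_runs.get(RG, FACTORY, run.run_id)
    if pipeline_run.status not in ("Queued", "InProgress"):
        break
    time.sleep(15)

print(pipeline_run.status)  # "Succeeded" on a successful copy
```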
## Trigger the pipeline on a schedule

This procedure is optional in this tutorial. You can create a *scheduler trigger* to run the pipeline periodically (for example, every minute).
1. Select **Publish all** to publish changes to Data Factory.
1. Switch to the **Monitor** tab on the left. Select **Refresh** to refresh the list. You see that the pipeline runs once every minute from the publish time to the end time.
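
A comparable trigger can be sketched with the same Python SDK. The trigger name and the one-hour window below are placeholder choices, and `begin_start` assumes a track 2 version of `azure-mgmt-datafactory` (older versions expose `start` instead):

```python
# Sketch, continuing from above: a scheduler trigger that runs CopyPipeline
# once a minute for one hour. The trigger name and window are placeholders.
from datetime import datetime, timedelta, timezone

from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

start = datetime.now(timezone.utc)
trigger = ScheduleTrigger(
    recurrence=ScheduleTriggerRecurrence(
        frequency="Minute",
        interval=1,
        start_time=start,
        end_time=start + timedelta(hours=1),
        time_zone="UTC",
    ),
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(
            type="PipelineReference", reference_name="CopyPipeline"
        ),
        parameters={},
    )],
)

client.triggers.create_or_update(
    RG, FACTORY, "ScheduleTrigger1", TriggerResource(properties=trigger)
)
# A trigger is created in the Stopped state; it must be started to take effect.
client.triggers.begin_start(RG, FACTORY, "ScheduleTrigger1").result()
```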