   > [!div class="mx-imgBorder"]
   > :::image type="content" source="media/how-to-trigger-published-pipeline/add-trigger.png" alt-text="Screenshot that shows how to add a trigger to a logic app." lightbox="media/how-to-trigger-published-pipeline/add-trigger.png":::

1. Fill in the connection information for the Blob Storage account that you want to monitor for blob additions or modifications. Select the container to monitor.

   Select **Interval** and **Frequency** values that work for you.

   > [!NOTE]
   > This trigger will monitor the selected container but won't monitor subfolders.
1. Add an HTTP action that will run when a blob is changed or a new blob is detected. Select **+ New Step**, and then search for and select the HTTP action.

   > [!div class="mx-imgBorder"]
   > :::image type="content" source="media/how-to-trigger-published-pipeline/search-http.png" alt-text="Screenshot that shows how to add an HTTP action.":::

   Use the following settings to configure your action:

   | Setting | Value |
   |---|---|
   | HTTP action | **POST** |
   | URI | The endpoint of the published pipeline. See [Prerequisites](#prerequisites). |
   | Authentication mode | **Managed Identity** |

1. Configure your schedule to set the values of any [DataPath PipelineParameters](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-datapath-and-pipelineparameter.ipynb) that you have:

   ```json
   {
   }
   ```

   Use the `DataStoreName` that you added to your workspace as a [prerequisite](#prerequisites).

   > :::image type="content" source="media/how-to-trigger-published-pipeline/http-settings.png" alt-text="Screenshot that shows the HTTP settings.":::
1. Select **Save**.
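For illustration, the request body that the HTTP action posts can be sketched in Python. This is a minimal sketch, not the article's exact payload: the experiment name, the parameter name `input_datapath`, the datastore name, and the blob path are all placeholder assumptions that you would replace with your own values.

```python
import json

# Hypothetical request body for the HTTP POST action. Every name below is a
# placeholder: substitute your own experiment name, DataPath
# PipelineParameter name, and datastore name.
body = {
    "ExperimentName": "blob-triggered-run",
    "DataPathAssignments": {
        "input_datapath": {                      # your DataPath PipelineParameter
            "DataStoreName": "my_datastore",     # datastore added as a prerequisite
            "RelativePath": "path/to/new-blob",  # e.g. the triggering blob's name
        }
    },
}

# The Logic App sends JSON like this with POST and managed-identity
# authentication; here we only serialize it to show the shape.
payload = json.dumps(body, indent=2)
print(payload)
```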
> [!IMPORTANT]
> If you use Azure role-based access control (Azure RBAC) to manage access to your pipeline, [set the permissions for your pipeline scenario (training or scoring)](../how-to-assign-roles.md#common-scenarios).
## Call machine learning pipelines from Azure Data Factory pipelines
In an Azure Data Factory pipeline, the *Machine Learning Execute Pipeline* activity runs an Azure Machine Learning pipeline. You can find this activity on the Azure Data Factory authoring page under **Machine Learning** in the menu:
:::image type="content" source="./media/how-to-trigger-published-pipeline/azure-data-factory-pipeline-activity.png" alt-text="Screenshot showing the machine learning pipeline activity in the Azure Data Factory authoring environment." lightbox="./media/how-to-trigger-published-pipeline/azure-data-factory-pipeline-activity.png":::
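For reference, the activity that the authoring page generates can also be expressed in the Data Factory pipeline JSON. The following is a sketch under assumptions: the activity name, linked service reference, pipeline ID, and experiment name are placeholders, and the definition that Data Factory generates for you may contain additional properties.

```json
{
  "name": "RunMachineLearningPipeline",
  "type": "AzureMLExecutePipeline",
  "linkedServiceName": {
    "referenceName": "AzureMLServiceLinkedService",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "mlPipelineId": "<published-pipeline-id>",
    "experimentName": "adf-triggered-runs"
  }
}
```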
## Next steps
In this article, you used the Azure Machine Learning SDK for Python to schedule a pipeline in two different ways. One schedule is triggered based on elapsed clock time. The other schedule is triggered if a file is modified on a specified `Datastore` or within a directory on that store. You saw how to use the portal to examine the pipeline and individual jobs. You learned how to disable a schedule so that the pipeline stops running. Finally, you created an Azure logic app to trigger a pipeline.
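As a conceptual sketch only (this is plain Python, not the Azure Machine Learning SDK's `Schedule` API), the two trigger styles reduce to two checks: a time-based schedule fires when a fixed interval has elapsed since the last run, and a change-based schedule fires when a monitored file's modification time advances past the last poll.

```python
import os

def time_based_due(last_run: float, interval_seconds: float, now: float) -> bool:
    """Fire when the configured interval has elapsed since the last run."""
    return now - last_run >= interval_seconds

def change_based_due(path: str, last_seen_mtime: float) -> bool:
    """Fire when the monitored file was modified after the last poll."""
    return os.path.getmtime(path) > last_seen_mtime

# Example: a file written after the last poll triggers a run.
with open("watched.txt", "w") as f:
    f.write("new data")
print(change_based_due("watched.txt", last_seen_mtime=0.0))  # True
print(time_based_due(last_run=0.0, interval_seconds=60.0, now=30.0))  # False
```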
These articles provide more information:
* [Use Azure Machine Learning Pipelines for batch scoring](../tutorial-pipeline-batch-scoring-classification.md)
* Learn more about [pipelines](../concept-ml-pipelines.md)
* Learn more about [exploring Azure Machine Learning with Jupyter](../samples-notebooks.md)