Commit be8f105 ("updates"), 1 parent: e454134

1 file changed: articles/machine-learning/v1/how-to-trigger-published-pipeline.md (22 additions, 24 deletions)

Now create a [logic app](/azure/logic-apps/logic-apps-overview) instance. After …

> [!div class="mx-imgBorder"]
> :::image type="content" source="media/how-to-trigger-published-pipeline/add-trigger.png" alt-text="Screenshot that shows how to add a trigger to a logic app." lightbox="media/how-to-trigger-published-pipeline/add-trigger.png":::

1. Fill in the connection information for the Blob Storage account that you want to monitor for blob additions or modifications. Select the container to monitor.

   Select **Interval** and **Frequency** values that work for you.

   > [!NOTE]
   > This trigger will monitor the selected container but won't monitor subfolders.

1. Add an HTTP action that will run when a blob is changed or a new blob is detected. Select **+ New Step**, and then search for and select the HTTP action.

   > [!div class="mx-imgBorder"]
   > :::image type="content" source="media/how-to-trigger-published-pipeline/search-http.png" alt-text="Screenshot that shows how to add an HTTP action.":::

   Use the following settings to configure your action (a Python sketch of the equivalent request appears after these steps):

   | Setting | Value |
   |---|---|
   | HTTP action | **POST** |
   | URI | The endpoint of the published pipeline. See [Prerequisites](#prerequisites). |
   | Authentication mode | **Managed Identity** |

1. Configure your schedule to set the values of any [DataPath PipelineParameters](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-datapath-and-pipelineparameter.ipynb) that you have:

   ```json
   {
   …
   }
   ```

   Use the `DataStoreName` that you added to your workspace as a [prerequisite](#prerequisites).

   > [!div class="mx-imgBorder"]
   > :::image type="content" source="media/how-to-trigger-published-pipeline/http-settings.png" alt-text="Screenshot that shows the HTTP settings.":::

1. Select **Save**.

   > [!IMPORTANT]
   > If you use Azure role-based access control (Azure RBAC) to manage access to your pipeline, [set the permissions for your pipeline scenario (training or scoring)](../how-to-assign-roles.md#common-scenarios).
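
To sanity-check the endpoint and request body before the logic app uses them, you can submit the same POST from Python. The following is a minimal sketch, not from the original article: the endpoint URL, experiment name, and parameter assignment are placeholders, and interactive sign-in stands in for the logic app's managed identity.

```python
import requests
from azureml.core.authentication import InteractiveLoginAuthentication

# Get an Azure AD bearer token for the request header.
# (The logic app's managed identity plays this role in production.)
auth = InteractiveLoginAuthentication()
headers = auth.get_authentication_header()

# Placeholder: the published pipeline endpoint from Prerequisites.
rest_endpoint = "<published-pipeline-rest-endpoint>"

response = requests.post(
    rest_endpoint,
    headers=headers,
    json={
        "ExperimentName": "trigger-test",                  # hypothetical name
        "ParameterAssignments": {"input_data": "sample"},  # hypothetical parameter
    },
)
response.raise_for_status()
print("Submitted run:", response.json().get("Id"))
```
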
## Call machine learning pipelines from Azure Data Factory pipelines

In an Azure Data Factory pipeline, the *Machine Learning Execute Pipeline* activity runs an Azure Machine Learning pipeline. You can find this activity on the Azure Data Factory authoring page under **Machine Learning** in the menu:

:::image type="content" source="./media/how-to-trigger-published-pipeline/azure-data-factory-pipeline-activity.png" alt-text="Screenshot showing the machine learning pipeline activity in the Azure Data Factory authoring environment." lightbox="./media/how-to-trigger-published-pipeline/azure-data-factory-pipeline-activity.png":::
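
If you manage the factory from code instead of the authoring page, the same activity can also be defined programmatically. Here's a sketch that assumes the azure-mgmt-datafactory package; the subscription, resource group, factory, linked service, and pipeline ID are placeholders, and model names can differ between package versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureMLExecutePipelineActivity,
    LinkedServiceReference,
    PipelineResource,
)

# Placeholders throughout: substitute your own identifiers.
client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The activity references the published Azure ML pipeline through an
# Azure ML linked service that already exists in the factory.
activity = AzureMLExecutePipelineActivity(
    name="RunMLPipeline",
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference",
        reference_name="<azure-ml-linked-service>",
    ),
    ml_pipeline_id="<published-pipeline-id>",
    experiment_name="adf-triggered-runs",  # hypothetical experiment name
)

client.pipelines.create_or_update(
    "<resource-group>",
    "<factory-name>",
    "RunMLPipeline",
    PipelineResource(activities=[activity]),
)
```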

## Next steps

In this article, you used the Azure Machine Learning SDK for Python to schedule a pipeline in two different ways. One schedule is triggered based on elapsed clock time. The other schedule is triggered if a file is modified on a specified `Datastore` or within a directory on that store. You saw how to use the portal to examine the pipeline and individual jobs. You learned how to disable a schedule so that the pipeline stops running. Finally, you created an Azure logic app to trigger a pipeline.
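
For reference, here's a minimal sketch of those two scheduling approaches with the v1 SDK, assuming an existing published pipeline and a blob datastore; the pipeline ID, schedule names, experiment name, and path are placeholders.

```python
from azureml.core import Workspace
from azureml.core.datastore import Datastore
from azureml.pipeline.core.schedule import Schedule, ScheduleRecurrence

ws = Workspace.from_config()
pipeline_id = "<published-pipeline-id>"  # placeholder

# Time-based schedule: run the published pipeline once a day.
recurrence = ScheduleRecurrence(frequency="Day", interval=1)
Schedule.create(
    ws,
    name="daily-run",                  # hypothetical schedule name
    pipeline_id=pipeline_id,
    experiment_name="scheduled-runs",  # hypothetical experiment name
    recurrence=recurrence,
)

# Change-based schedule: poll a datastore path for new or modified files.
datastore = Datastore.get(ws, "workspaceblobstore")
Schedule.create(
    ws,
    name="on-data-change",             # hypothetical schedule name
    pipeline_id=pipeline_id,
    experiment_name="scheduled-runs",
    datastore=datastore,
    polling_interval=5,                # minutes between polls
    path_on_datastore="input/data",    # hypothetical folder to watch
)
```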

These articles provide more information:

* [Use Azure Machine Learning Pipelines for batch scoring](../tutorial-pipeline-batch-scoring-classification.md)
* Learn more about [pipelines](../concept-ml-pipelines.md)
* Learn more about [exploring Azure Machine Learning with Jupyter](../samples-notebooks.md)
