
Commit 79a9f9b

update
1 parent 6e732b4 commit 79a9f9b

3 files changed (+9, -2 lines changed)

articles/machine-learning/how-to-deploy-pipelines.md

Lines changed: 5 additions & 1 deletion
@@ -63,6 +63,10 @@ Once you have a pipeline up and running, you can publish a pipeline so that it r
     version="1.0")
 ```
 
+4. After you publish your pipeline, you can check it in the UI. The pipeline ID is the unique identifier of the published pipeline.
+
+![published pipeline detail](./media/how-to-create-your-first-pipeline/published-pipeline-detail.png)
+
 ## Run a published pipeline
 
 All published pipelines have a REST endpoint. With the pipeline endpoint, you can trigger a run of the pipeline from any external systems, including non-Python clients. This endpoint enables "managed repeatability" in batch scoring and retraining scenarios.
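For context, a minimal sketch of triggering a run over that REST endpoint from Python, assuming a workspace `ws` and the pipeline ID shown on the UI detail page above; the experiment name is illustrative:

```python
import requests

from azureml.core import Workspace
from azureml.core.authentication import InteractiveLoginAuthentication
from azureml.pipeline.core import PublishedPipeline

ws = Workspace.from_config()

# Look up the published pipeline by the ID shown on its detail page
published_pipeline = PublishedPipeline.get(workspace=ws, id="My_Published_Pipeline_id")

# Acquire an Azure AD bearer token to authenticate the request
auth_header = InteractiveLoginAuthentication().get_authentication_header()

# POST to the pipeline's REST endpoint to start a run
response = requests.post(published_pipeline.endpoint,
                         headers=auth_header,
                         json={"ExperimentName": "my_experiment"})
response.raise_for_status()
print("Started run:", response.json().get("Id"))
```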
@@ -296,7 +300,7 @@ You can create a Pipeline Endpoint with multiple published pipelines behind it.
 ```python
 from azureml.pipeline.core import PipelineEndpoint
 
-published_pipeline = PublishedPipeline.get(workspace=ws, name="My_Published_Pipeline")
+published_pipeline = PublishedPipeline.get(workspace=ws, id="My_Published_Pipeline_id")
 pipeline_endpoint = PipelineEndpoint.publish(workspace=ws, name="PipelineEndpointTest",
                                              pipeline=published_pipeline, description="Test description Notebook")
 ```
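After publishing, the endpoint can be retrieved by name and runs submitted against its default pipeline version. A short sketch, reusing `ws` from above; the experiment name is illustrative:

```python
from azureml.pipeline.core import PipelineEndpoint

# Fetch the endpoint by name and submit a run against its default version
pipeline_endpoint = PipelineEndpoint.get(workspace=ws, name="PipelineEndpointTest")
run = pipeline_endpoint.submit(experiment_name="endpoint_test_experiment")
print(run.id)
```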

articles/machine-learning/how-to-trigger-published-pipeline.md

Lines changed: 4 additions & 1 deletion
@@ -7,7 +7,7 @@ ms.service: machine-learning
 ms.subservice: mlops
 ms.author: larryfr
 author: blackmist
-ms.date: 10/21/2021
+ms.date: 08/12/2022
 ms.topic: how-to
 ms.custom: devx-track-python, sdkv1, event-tier1-build-2022
 #Customer intent: As a Python coding data scientist, I want to improve my operational efficiency by scheduling my training pipeline of my model using the latest data.
@@ -81,6 +81,9 @@ recurring_schedule = Schedule.create(ws, name="MyRecurringSchedule",
 
 Pipelines that are triggered by file changes may be more efficient than time-based schedules. When you want to do something before a file is changed, or when a new file is added to a data directory, you can preprocess that file. You can monitor any changes to a datastore or changes within a specific directory within the datastore. If you monitor a specific directory, changes within subdirectories of that directory will _not_ trigger a run.
 
+> [!NOTE]
+> Change-based schedules only support monitoring Azure Blob storage.
+
 To create a file-reactive `Schedule`, you must set the `datastore` parameter in the call to [Schedule.create](/python/api/azureml-pipeline-core/azureml.pipeline.core.schedule.schedule#create-workspace--name--pipeline-id--experiment-name--recurrence-none--description-none--pipeline-parameters-none--wait-for-provisioning-false--wait-timeout-3600--datastore-none--polling-interval-5--data-path-parameter-name-none--continue-on-step-failure-none--path-on-datastore-none---workflow-provider-none---service-endpoint-none-). To monitor a folder, set the `path_on_datastore` argument.
 
 The `polling_interval` argument allows you to specify, in minutes, the frequency at which the datastore is checked for changes.
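Putting those arguments together, a minimal file-reactive schedule might look like the sketch below; the datastore name, watched folder, and pipeline ID are illustrative, and the pipeline is assumed to be already published:

```python
from azureml.core import Workspace, Datastore
from azureml.pipeline.core import Schedule

ws = Workspace.from_config()

# Change-based schedules can only monitor Azure Blob storage datastores
datastore = Datastore.get(ws, "workspaceblobstore")

reactive_schedule = Schedule.create(
    ws,
    name="MyReactiveSchedule",
    description="Run when files are added or changed",
    pipeline_id="My_Published_Pipeline_id",  # ID of an already-published pipeline
    experiment_name="MyExperiment",
    datastore=datastore,
    path_on_datastore="input/data",  # watch this folder; subdirectories are not monitored
    polling_interval=5,              # check for changes every 5 minutes
)
```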
articles/machine-learning/media/how-to-create-your-first-pipeline/published-pipeline-detail.png

Binary image file added (139 KB)
