
Commit 9383980

Merge pull request #207836 from likebupt/blanca-update-20220812

update

2 parents b393c3b + a9883e1

File tree

6 files changed: +10 −4 lines changed

articles/machine-learning/how-to-deploy-pipelines.md

Lines changed: 5 additions & 1 deletion
````diff
@@ -63,6 +63,10 @@ Once you have a pipeline up and running, you can publish a pipeline so that it r
     version="1.0")
 ```
 
+4. After you publish your pipeline, you can check it in the UI. The pipeline ID is the unique identifier of the published pipeline.
+
+    :::image type="content" source="./media/how-to-deploy-pipelines/published-pipeline-detail.png" alt-text="Screenshot showing published pipeline detail." lightbox="./media/how-to-deploy-pipelines/published-pipeline-detail.png":::
+
 ## Run a published pipeline
 
 All published pipelines have a REST endpoint. With the pipeline endpoint, you can trigger a run of the pipeline from any external system, including non-Python clients. This endpoint enables "managed repeatability" in batch scoring and retraining scenarios.
````
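To make the "non-Python client" point concrete, the sketch below assembles the JSON body such a client would POST to the pipeline's REST endpoint. This is a hedged sketch, not the doc's own snippet: the `build_trigger_body` helper is hypothetical, and the endpoint URL and AAD token in the trailing comment are placeholders you must supply yourself.

```python
# Hypothetical sketch: assembling the request body for a published
# pipeline's REST endpoint. `build_trigger_body` is an illustrative
# helper, not part of the azureml SDK.
import json


def build_trigger_body(experiment_name, pipeline_parameters=None):
    """Assemble the JSON body for a published-pipeline run request."""
    body = {"ExperimentName": experiment_name}
    if pipeline_parameters:
        # Optional per-run overrides of pipeline parameters.
        body["ParameterAssignments"] = pipeline_parameters
    return body


body = build_trigger_body("My_Experiment", {"pipeline_arg": 20})
payload = json.dumps(body)

# To actually trigger a run (placeholders: rest_endpoint, aad_token):
#   import requests
#   requests.post(rest_endpoint,
#                 headers={"Authorization": f"Bearer {aad_token}"},
#                 json=body)
```

Because the endpoint only needs an HTTP POST with an AAD bearer token, any language with an HTTP client can trigger the run the same way.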
````diff
@@ -296,7 +300,7 @@ You can create a Pipeline Endpoint with multiple published pipelines behind it.
 ```python
 from azureml.pipeline.core import PipelineEndpoint
 
-published_pipeline = PublishedPipeline.get(workspace=ws, name="My_Published_Pipeline")
+published_pipeline = PublishedPipeline.get(workspace=ws, id="My_Published_Pipeline_id")
 pipeline_endpoint = PipelineEndpoint.publish(workspace=ws, name="PipelineEndpointTest",
                                              pipeline=published_pipeline, description="Test description Notebook")
 ```
````

articles/machine-learning/how-to-trigger-published-pipeline.md

Lines changed: 4 additions & 1 deletion
````diff
@@ -7,7 +7,7 @@ ms.service: machine-learning
 ms.subservice: mlops
 ms.author: larryfr
 author: blackmist
-ms.date: 10/21/2021
+ms.date: 08/12/2022
 ms.topic: how-to
 ms.custom: devx-track-python, sdkv1, event-tier1-build-2022
 #Customer intent: As a Python coding data scientist, I want to improve my operational efficiency by scheduling my training pipeline of my model using the latest data.
````
````diff
@@ -81,6 +81,9 @@ recurring_schedule = Schedule.create(ws, name="MyRecurringSchedule",
 
 Pipelines that are triggered by file changes may be more efficient than time-based schedules. When you want to do something before a file is changed, or when a new file is added to a data directory, you can preprocess that file. You can monitor any changes to a datastore or changes within a specific directory within the datastore. If you monitor a specific directory, changes within subdirectories of that directory will _not_ trigger a job.
 
+> [!NOTE]
+> Change-based schedules only support monitoring Azure Blob storage.
+
 To create a file-reactive `Schedule`, you must set the `datastore` parameter in the call to [Schedule.create](/python/api/azureml-pipeline-core/azureml.pipeline.core.schedule.schedule#create-workspace--name--pipeline-id--experiment-name--recurrence-none--description-none--pipeline-parameters-none--wait-for-provisioning-false--wait-timeout-3600--datastore-none--polling-interval-5--data-path-parameter-name-none--continue-on-step-failure-none--path-on-datastore-none---workflow-provider-none---service-endpoint-none-). To monitor a folder, set the `path_on_datastore` argument.
 
 The `polling_interval` argument allows you to specify, in minutes, the frequency at which the datastore is checked for changes.
````
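To make the monitoring semantics concrete, here is a toy, stand-alone polling sketch in plain Python (deliberately not the azureml SDK, which does this server-side): it inspects only a folder's direct children, mirroring the rule above that changes inside subdirectories do not trigger a job. The function name `detect_new_files` is illustrative.

```python
# Toy illustration of file-reactive polling (NOT the azureml SDK).
# Only direct children of the monitored folder count, mirroring the rule
# that changes within subdirectories do not trigger a job.
import os


def detect_new_files(path, seen):
    """Return files newly added directly under `path`; update `seen` in place."""
    current = {name for name in os.listdir(path)
               if os.path.isfile(os.path.join(path, name))}
    new = sorted(current - seen)
    seen |= current
    return new
```

A scheduler would call this every `polling_interval` minutes and submit a pipeline run whenever the returned list is non-empty; the real service additionally passes the changed path to the pipeline via `data_path_parameter_name`.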

articles/machine-learning/how-to-use-pipeline-ui.md

Lines changed: 1 addition & 2 deletions
````diff
@@ -89,11 +89,10 @@ If your pipeline fails or gets stuck on a node, first view the logs.
 
 The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [View and download diagnostic logs](how-to-log-view-metrics.md#view-and-download-diagnostic-logs).
 
-:::image type="content" source="./media/how-to-use-pipeline-ui/view-user-log.png" alt-text="Screenshot showing the user logs of a node." lightbox="./media/how-to-use-pipeline-ui/view-user-log.png":::
+![How to check node logs](media/how-to-use-pipeline-ui/node-logs.gif)
 
 If you don't see those folders, the compute runtime update hasn't been released to your compute cluster yet; in that case, look at **70_driver_log.txt** under the **azureml-logs** folder first.
 
-:::image type="content" source="./media/how-to-use-pipeline-ui/view-driver-logs.png" alt-text="Screenshot showing the driver logs of a node." lightbox="./media/how-to-use-pipeline-ui/view-driver-logs.png":::
 
 ## Clone a pipeline job to continue editing
````

3 binary image files changed (144 KB, 97 KB, 3.69 MB)

0 commit comments