Commit 94133fe

Author: Larry Franks
Message: moving v1 articles to v1 folder
1 parent: 9316b2c

19 files changed: +41 -41 lines changed

articles/machine-learning/how-to-debug-parallel-run-step.md renamed to articles/machine-learning/v1/how-to-debug-parallel-run-step.md

Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ ms.date: 10/21/2021

# Troubleshooting the ParallelRunStep

-[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]

In this article, you learn how to troubleshoot when you get errors using the [ParallelRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep) class from the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro).
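For orientation while debugging, here is a minimal sketch of how a `ParallelRunStep` is typically configured with the v1 SDK. The entry script, environment, compute target, dataset, and output objects are placeholders (assumptions, not values from this commit):

```python
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

# batch_env, compute_target, input_dataset, and output_dir are assumed to
# exist already (Environment, ComputeTarget, FileDataset, OutputFileDatasetConfig).
parallel_run_config = ParallelRunConfig(
    source_directory="scripts",        # folder containing the entry script
    entry_script="batch_score.py",     # called once per mini-batch
    mini_batch_size="5",               # files per mini-batch for FileDataset inputs
    error_threshold=10,                # failed items tolerated before the step aborts
    output_action="append_row",        # collect per-item results into one file
    environment=batch_env,
    compute_target=compute_target,
    node_count=2,
)

parallel_step = ParallelRunStep(
    name="batch-scoring",
    parallel_run_config=parallel_run_config,
    inputs=[input_dataset.as_named_input("input_ds")],
    output=output_dir,
)
```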

articles/machine-learning/how-to-debug-pipelines.md renamed to articles/machine-learning/v1/how-to-debug-pipelines.md

Lines changed: 3 additions & 3 deletions

@@ -15,7 +15,7 @@ ms.custom: troubleshooting, devx-track-python, contperf-fy21q2, sdkv1, event-tier1-build-2022

# Troubleshooting machine learning pipelines

-[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]

In this article, you learn how to troubleshoot when you get errors running a [machine learning pipeline](concept-ml-pipelines.md) in the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro) and [Azure Machine Learning designer](./concept-designer.md).

@@ -31,7 +31,7 @@ The following table contains common problems during pipeline development, with p

| Pipeline not reusing steps | Step reuse is enabled by default, but ensure you haven't disabled it in a pipeline step. If reuse is disabled, the `allow_reuse` parameter in the step will be set to `False`. |
| Pipeline is rerunning unnecessarily | To ensure that steps only rerun when their underlying data or scripts change, decouple your source-code directories for each step. If you use the same source directory for multiple steps, you may experience unnecessary reruns. Use the `source_directory` parameter on a pipeline step object to point to your isolated directory for that step, and ensure you aren't using the same `source_directory` path for multiple steps. |
| Step slowing down over training epochs or other looping behavior | Try switching any file writes, including logging, from `as_mount()` to `as_upload()`. The **mount** mode uses a remote virtualized filesystem and uploads the entire file each time it is appended to. |
-| Compute target takes a long time to start | Docker images for compute targets are loaded from Azure Container Registry (ACR). By default, Azure Machine Learning creates an ACR that uses the *basic* service tier. Changing the ACR for your workspace to standard or premium tier may reduce the time it takes to build and load images. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md). |
+| Compute target takes a long time to start | Docker images for compute targets are loaded from Azure Container Registry (ACR). By default, Azure Machine Learning creates an ACR that uses the *basic* service tier. Changing the ACR for your workspace to standard or premium tier may reduce the time it takes to build and load images. For more information, see [Azure Container Registry service tiers](/azure/container-registry/container-registry-skus). |

### Authentication errors
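As a hedged illustration of the reuse guidance in the table above (step names, script names, and directories are hypothetical), each step gets its own isolated source directory and per-step reuse control:

```python
from azureml.pipeline.steps import PythonScriptStep

# Each step points at its own source directory, so editing one step's
# script doesn't invalidate the cached results of the other steps.
prep_step = PythonScriptStep(
    name="prep",
    script_name="prep.py",
    source_directory="prep_src",     # isolated directory for this step only
    compute_target=compute_target,   # assumed to exist in your workspace
    allow_reuse=True,                # default; rerun only when inputs or code change
)

train_step = PythonScriptStep(
    name="train",
    script_name="train.py",
    source_directory="train_src",    # separate directory, separate reuse hash
    compute_target=compute_target,
    allow_reuse=True,
)
```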

@@ -233,7 +233,7 @@ For pipelines created in the designer, you can find the **70_driver_log** file i

### Enable logging for real-time endpoints

-In order to troubleshoot and debug real-time endpoints in the designer, you must enable Application Insights logging using the SDK. Logging lets you troubleshoot and debug model deployment and usage issues. For more information, see [Logging for deployed models](./v1/how-to-enable-app-insights.md).
+In order to troubleshoot and debug real-time endpoints in the designer, you must enable Application Insights logging using the SDK. Logging lets you troubleshoot and debug model deployment and usage issues. For more information, see [Logging for deployed models](how-to-enable-app-insights.md).

### Get logs from the authoring page
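For context on the change above, enabling Application Insights on an already-deployed v1 real-time endpoint is typically a one-line update; the endpoint name here is a hypothetical example:

```python
from azureml.core import Workspace
from azureml.core.webservice import Webservice

ws = Workspace.from_config()

# Retrieve an existing real-time endpoint by name (placeholder name).
service = Webservice(ws, name="my-designer-endpoint")
service.update(enable_app_insights=True)  # turn on Application Insights logging
```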

articles/machine-learning/how-to-deploy-pipelines.md renamed to articles/machine-learning/v1/how-to-deploy-pipelines.md

Lines changed: 2 additions & 2 deletions

@@ -14,15 +14,15 @@ ms.custom: contperf-fy21q1, sdkv1, event-tier1-build-2022

# Publish and track machine learning pipelines

-[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]

This article will show you how to share a machine learning pipeline with your colleagues or customers.

Machine learning pipelines are reusable workflows for machine learning tasks. One benefit of pipelines is increased collaboration. You can also version pipelines, allowing customers to use the current model while you're working on a new version.

## Prerequisites

-* Create an [Azure Machine Learning workspace](quickstart-create-resources.md) to hold all your pipeline resources
+* Create an [Azure Machine Learning workspace](../quickstart-create-resources.md) to hold all your pipeline resources

* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](concept-compute-instance.md) with the SDK already installed
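As a rough sketch of what publishing means in this article (the pipeline name, version, and `train_step` are placeholders, not values from this commit), a v1 pipeline is published so others can invoke it over REST:

```python
from azureml.core import Workspace
from azureml.pipeline.core import Pipeline

ws = Workspace.from_config()

# `train_step` stands in for whatever steps your pipeline actually contains.
pipeline = Pipeline(workspace=ws, steps=[train_step])

published = pipeline.publish(
    name="My_Published_Pipeline",
    description="Shared with colleagues",
    version="1.0",
)
print(published.endpoint)  # REST endpoint colleagues can POST to
```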

articles/machine-learning/how-to-log-pipelines-application-insights.md renamed to articles/machine-learning/v1/how-to-log-pipelines-application-insights.md

Lines changed: 5 additions & 5 deletions

@@ -14,21 +14,21 @@ ms.custom: devx-track-python, sdkv1, event-tier1-build-2022

# Collect machine learning pipeline log files in Application Insights for alerts and debugging

-[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]

The [OpenCensus](https://opencensus.io/quickstart/python/) Python library can be used to route logs to Application Insights from your scripts. Aggregating logs from pipeline runs in one place allows you to build queries and diagnose issues. Using Application Insights will allow you to track logs over time and compare pipeline logs across runs.

Having your logs in one place will provide a history of exceptions and error messages. Since Application Insights integrates with Azure Alerts, you can also create alerts based on Application Insights queries.

## Prerequisites

-* Follow the steps to create an [Azure Machine Learning workspace](quickstart-create-resources.md) and [create your first pipeline](./how-to-create-machine-learning-pipelines.md)
+* Follow the steps to create an [Azure Machine Learning workspace](../quickstart-create-resources.md) and [create your first pipeline](./how-to-create-machine-learning-pipelines.md)
* [Configure your development environment](./how-to-configure-environment.md) to install the Azure Machine Learning SDK.
* Install the [OpenCensus Azure Monitor Exporter](https://pypi.org/project/opencensus-ext-azure/) package locally:
  ```python
  pip install opencensus-ext-azure
  ```
-* Create an [Application Insights instance](../azure-monitor/app/opencensus-python.md) (this doc also contains information on getting the connection string for the resource)
+* Create an [Application Insights instance](/azure/azure-monitor/app/opencensus-python) (this doc also contains information on getting the connection string for the resource)

## Getting Started
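A minimal sketch of the OpenCensus wiring this article describes, assuming you have already copied the connection string from your Application Insights resource (the key below is a placeholder):

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Placeholder connection string; copy yours from the Application Insights
# resource in the Azure portal.
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
))

# custom_dimensions values surface as queryable customDimensions fields
# in Application Insights.
logger.info("Pipeline step started", extra={"custom_dimensions": {"step": "prep"}})
```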

@@ -160,6 +160,6 @@ Some of the queries below use 'customDimensions.Level'. These severity levels co

## Next Steps

-Once you have logs in your Application Insights instance, they can be used to set [Azure Monitor alerts](../azure-monitor/alerts/alerts-overview.md) based on query results.
+Once you have logs in your Application Insights instance, they can be used to set [Azure Monitor alerts](/azure/azure-monitor/alerts/alerts-overview) based on query results.

-You can also add results from queries to an [Azure Dashboard](../azure-monitor/app/tutorial-app-dashboards.md#add-logs-query) for additional insights.
+You can also add results from queries to an [Azure Dashboard](/azure/azure-monitor/app/tutorial-app-dashboards#add-logs-query) for additional insights.

articles/machine-learning/how-to-move-data-in-out-of-pipelines.md renamed to articles/machine-learning/v1/how-to-move-data-in-out-of-pipelines.md

Lines changed: 9 additions & 9 deletions

@@ -7,15 +7,15 @@ ms.service: machine-learning
ms.subservice: mldata
ms.author: larryfr
author: blackmist
-ms.date: 10/21/2021
+ms.date: 08/18/2022
ms.topic: how-to
ms.custom: contperf-fy20q4, devx-track-python, data4ml, sdkv1, event-tier1-build-2022
#Customer intent: As a data scientist using Python, I want to get data into my pipeline and flowing between steps.
---

# Moving data into and between ML pipeline steps (Python)

-[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]

This article provides code for importing, transforming, and moving data between steps in an Azure Machine Learning pipeline. For an overview of how data works in Azure Machine Learning, see [Access data in Azure storage services](how-to-access-data.md). For the benefits and structure of Azure Machine Learning pipelines, see [What are Azure Machine Learning pipelines?](concept-ml-pipelines.md)

@@ -47,7 +47,7 @@ You'll need:

   ws = Workspace.from_config()
   ```

-- Some pre-existing data. This article briefly shows the use of an [Azure blob container](../storage/blobs/storage-blobs-overview.md).
+- Some pre-existing data. This article briefly shows the use of an [Azure blob container](/azure/storage/blobs/storage-blobs-overview).

- Optional: An existing machine learning pipeline, such as the one described in [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md).

@@ -69,13 +69,13 @@ datastore_path = [

cats_dogs_dataset = Dataset.File.from_files(path=datastore_path)
```

-For more options on creating datasets with different options and from different sources, registering them and reviewing them in the Azure Machine Learning UI, understanding how data size interacts with compute capacity, and versioning them, see [Create Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md).
+For more options on creating datasets with different options and from different sources, registering them and reviewing them in the Azure Machine Learning UI, understanding how data size interacts with compute capacity, and versioning them, see [Create Azure Machine Learning datasets](how-to-create-register-datasets.md).

### Pass datasets to your script

To pass the dataset's path to your script, use the `Dataset` object's `as_named_input()` method. You can either pass the resulting `DatasetConsumptionConfig` object to your script as an argument or, by using the `inputs` argument to your pipeline script, you can retrieve the dataset using `Run.get_context().input_datasets[]`.

-Once you've created a named input, you can choose its access mode: `as_mount()` or `as_download()`. If your script processes all the files in your dataset and the disk on your compute resource is large enough for the dataset, the download access mode is the better choice. The download access mode will avoid the overhead of streaming the data at runtime. If your script accesses a subset of the dataset or it's too large for your compute, use the mount access mode. For more information, read [Mount vs. Download](./how-to-train-with-datasets.md#mount-vs-download)
+Once you've created a named input, you can choose its access mode: `as_mount()` or `as_download()`. If your script processes all the files in your dataset and the disk on your compute resource is large enough for the dataset, the download access mode is the better choice. The download access mode will avoid the overhead of streaming the data at runtime. If your script accesses a subset of the dataset or it's too large for your compute, use the mount access mode. For more information, read [Mount vs. Download](how-to-train-with-datasets.md#mount-vs-download)

To pass a dataset to your pipeline step:
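The numbered steps themselves fall outside this hunk's context, but a hedged sketch of the named-input pattern described above looks like this (the step, script, and compute names are assumptions for illustration):

```python
from azureml.pipeline.steps import PythonScriptStep

# Download mode copies the whole dataset to the compute's local disk.
ds_input = cats_dogs_dataset.as_named_input("cats_dogs").as_download()

train_step = PythonScriptStep(
    name="train",
    script_name="train.py",
    source_directory="train_src",
    arguments=["--data-path", ds_input],  # or pass via `inputs=[ds_input]`
    compute_target=compute_target,        # assumed to exist in your workspace
)

# Inside train.py the resolved path is then available as:
#   from azureml.core import Run
#   data_path = Run.get_context().input_datasets["cats_dogs"]
```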

@@ -194,7 +194,7 @@ After the initial pipeline step writes some data to the `OutputFileDatasetConfig

In the following code:

-* `step1_output_data` indicates that the output of the PythonScriptStep, `step1` is written to the ADLS Gen 2 datastore, `my_adlsgen2` in upload access mode. Learn more about how to [set up role permissions](./v1/how-to-access-data.md) in order to write data back to ADLS Gen 2 datastores.
+* `step1_output_data` indicates that the output of the PythonScriptStep, `step1` is written to the ADLS Gen 2 datastore, `my_adlsgen2` in upload access mode. Learn more about how to [set up role permissions](how-to-access-data.md) in order to write data back to ADLS Gen 2 datastores.

* After `step1` completes and the output is written to the destination indicated by `step1_output_data`, then step2 is ready to use `step1_output_data` as an input.
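A hedged sketch of the `step1_output_data` setup those bullets describe, assuming a registered ADLS Gen 2 datastore named `my_adlsgen2` (the output path is a placeholder):

```python
from azureml.core import Datastore, Workspace
from azureml.data import OutputFileDatasetConfig

ws = Workspace.from_config()
my_adlsgen2 = Datastore.get(ws, "my_adlsgen2")

# Upload mode writes the step's output folder to the datastore when the step ends.
step1_output_data = OutputFileDatasetConfig(
    name="processed_data",
    destination=(my_adlsgen2, "outputdataset"),  # (datastore, relative path)
).as_upload()
```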

@@ -236,12 +236,12 @@ step1_output_ds = step1_output_data.register_on_complete(name='processed_data',

Azure does not automatically delete intermediate data written with `OutputFileDatasetConfig`. To avoid storage charges for large amounts of unneeded data, you should either:

* Programmatically delete intermediate data at the end of a pipeline job, when it is no longer needed
-* Use blob storage with a short-term storage policy for intermediate data (see [Optimize costs by automating Azure Blob Storage access tiers](../storage/blobs/lifecycle-management-overview.md?tabs=azure-portal))
+* Use blob storage with a short-term storage policy for intermediate data (see [Optimize costs by automating Azure Blob Storage access tiers](/azure/storage/blobs/lifecycle-management-overview))
* Regularly review and delete no-longer-needed data

For more information, see [Plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md).

## Next steps

-* [Create an Azure machine learning dataset](./v1/how-to-create-register-datasets.md)
-* [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md)
+* [Create an Azure machine learning dataset](how-to-create-register-datasets.md)
+* [Create and run machine learning pipelines with Azure Machine Learning SDK](how-to-create-machine-learning-pipelines.md)

articles/machine-learning/how-to-trigger-published-pipeline.md renamed to articles/machine-learning/v1/how-to-trigger-published-pipeline.md

Lines changed: 4 additions & 4 deletions

@@ -15,7 +15,7 @@ ms.custom: devx-track-python, sdkv1, event-tier1-build-2022

# Trigger machine learning pipelines

-[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]

In this article, you'll learn how to programmatically schedule a pipeline to run on Azure. You can create a schedule based on elapsed time or on file-system changes. Time-based schedules can be used to take care of routine tasks, such as monitoring for data drift. Change-based schedules can be used to react to irregular or unpredictable changes, such as new data being uploaded or old data being edited. After learning how to create schedules, you'll learn how to retrieve and deactivate them. Finally, you'll learn how to use other Azure services, Azure Logic App and Azure Data Factory, to run pipelines. An Azure Logic App allows for more complex triggering logic or behavior. Azure Data Factory pipelines allow you to call a machine learning pipeline as part of a larger data orchestration pipeline.
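As a sketch of the time-based case (the schedule and experiment names are placeholders, and `published_pipeline` is assumed to come from an earlier `pipeline.publish()` call), a published pipeline can be put on a daily recurrence like this:

```python
from azureml.pipeline.core import Schedule, ScheduleRecurrence

# Run the published pipeline once a day, e.g. for data-drift monitoring.
recurrence = ScheduleRecurrence(frequency="Day", interval=1)

schedule = Schedule.create(
    ws,                                  # an existing Workspace
    name="daily-run",
    pipeline_id=published_pipeline.id,
    experiment_name="scheduled-runs",
    recurrence=recurrence,
)
```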

@@ -141,7 +141,7 @@ If you then run `Schedule.list(ws)` again, you should get an empty list.
141141

142142
## Use Azure Logic Apps for complex triggers
143143

144-
More complex trigger rules or behavior can be created using an [Azure Logic App](../logic-apps/logic-apps-overview.md).
144+
More complex trigger rules or behavior can be created using an [Azure Logic App](/azure/logic-apps/logic-apps-overview).
145145

146146
To use an Azure Logic App to trigger a Machine Learning pipeline, you'll need the REST endpoint for a published Machine Learning pipeline. [Create and publish your pipeline](./how-to-create-machine-learning-pipelines.md). Then find the REST endpoint of your `PublishedPipeline` by using the pipeline ID:
147147
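The retrieval code itself falls outside this diff's context; a hedged sketch of looking up the endpoint by pipeline ID might read (the ID is a placeholder):

```python
from azureml.core import Workspace
from azureml.pipeline.core import PublishedPipeline

ws = Workspace.from_config()

# Placeholder ID; list available IDs with PublishedPipeline.list(ws).
published_pipeline = PublishedPipeline.get(ws, id="<your-pipeline-id>")
print(published_pipeline.endpoint)  # REST URL the Logic App will POST to
```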

@@ -154,11 +154,11 @@ published_pipeline.endpoint

## Create a Logic App

-Now create an [Azure Logic App](../logic-apps/logic-apps-overview.md) instance. If you wish, [use an integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment.md) and [set up a customer-managed key](../logic-apps/customer-managed-keys-integration-service-environment.md) for use by your Logic App.
+Now create an [Azure Logic App](../logic-apps/logic-apps-overview.md) instance. If you wish, [use an integration service environment (ISE)](/azure/logic-apps/connect-virtual-network-vnet-isolated-environment) and [set up a customer-managed key](/azure/logic-apps/customer-managed-keys-integration-service-environment) for use by your Logic App.

Once your Logic App has been provisioned, use these steps to configure a trigger for your pipeline:

-1. [Create a system-assigned managed identity](../logic-apps/create-managed-service-identity.md) to give the app access to your Azure Machine Learning Workspace.
+1. [Create a system-assigned managed identity](/azure/logic-apps/create-managed-service-identity) to give the app access to your Azure Machine Learning Workspace.

1. Navigate to the Logic App Designer view and select the Blank Logic App template.
   > [!div class="mx-imgBorder"]
