Commit 75a6c73

Author: Larry Franks
Commit message: fixing links, some acrolinx
1 parent 94133fe commit 75a6c73

File tree

66 files changed (+229 -229 lines changed)


articles/machine-learning/concept-data-encryption.md

Lines changed: 2 additions & 2 deletions
@@ -142,6 +142,6 @@ Each workspace has an associated system-assigned managed identity that has the s

 * [Connect to Azure storage](how-to-access-data.md)
 * [Get data from a datastore](how-to-create-register-datasets.md)
-* [Connect to data](how-to-connect-data-ui.md)
-* [Train with datasets](how-to-train-with-datasets.md)
+* [Connect to data](v1/how-to-connect-data-ui.md)
+* [Train with datasets](v1/how-to-train-with-datasets.md)
 * [Customer-managed keys](concept-customer-managed-keys.md).

articles/machine-learning/concept-differential-privacy.md

Lines changed: 1 addition & 1 deletion
@@ -77,6 +77,6 @@ The system library provides the following tools and services for working with ta

 Learn more about differential privacy in machine learning:

-- [How to build a differentially private system](how-to-differential-privacy.md) in Azure Machine Learning.
+- [How to build a differentially private system](v1/how-to-differential-privacy.md) in Azure Machine Learning with SDK v1.

 - To learn more about the components of SmartNoise, check out the GitHub repositories for [SmartNoise Core](https://github.com/opendifferentialprivacy/smartnoise-core), [SmartNoise SDK](https://github.com/opendifferentialprivacy/smartnoise-sdk), and [SmartNoise samples](https://github.com/opendifferentialprivacy/smartnoise-samples).

articles/machine-learning/concept-endpoints.md

Lines changed: 1 addition & 1 deletion
@@ -208,7 +208,7 @@ You can [override compute resource settings](how-to-use-batch-endpoint.md#config

 You can use the following options for input data when invoking a batch endpoint:

-- Cloud data - Either a path on Azure Machine Learning registered datastore, a reference to Azure Machine Learning registered V2 data asset, or a public URI. For more information, see [Connect to data with the Azure Machine Learning studio](how-to-connect-data-ui.md)
+- Cloud data - Either a path on Azure Machine Learning registered datastore, a reference to Azure Machine Learning registered V2 data asset, or a public URI. For more information, see [Connect to data with the Azure Machine Learning studio](v1/how-to-connect-data-ui.md)
 - Data stored locally - it will be automatically uploaded to the Azure ML registered datastore and passed to the batch endpoint.

 > [!NOTE]

articles/machine-learning/concept-ml-pipelines.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ Once the teams get familiar with pipelines and want to do more machine learning

 Once a team has built a collection of machine learnings pipelines and reusable components, they could start to build the machine learning pipeline from cloning previous pipeline or tie existing reusable component together. At this stage, the team’s overall productivity will be improved significantly.

-Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with python, we recommend writing pipeline using the [Azure ML SDK](how-to-create-machine-learning-pipelines.md). For users who prefer to use UI, they could use the [designer to build pipeline by using registered components](how-to-create-component-pipelines-ui.md).
+Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with python, we recommend writing pipeline using the [Azure ML SDK v1](v1/how-to-create-machine-learning-pipelines.md). For users who prefer to use UI, they could use the [designer to build pipeline by using registered components](how-to-create-component-pipelines-ui.md).

 <a name="compare"></a>
 ## Which Azure pipeline technology should I use?

articles/machine-learning/concept-model-management-and-deployment.md

Lines changed: 2 additions & 2 deletions
@@ -172,7 +172,7 @@ For more information, see [Enable model data collection](v1/how-to-enable-data-c

 ## Retrain your model on new data

-Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](how-to-monitor-datasets.md), model performance can degrade because of:
+Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](v1/how-to-monitor-datasets.md), model performance can degrade because of:

 - Changes to a particular sensor.
 - Natural data changes such as seasonal effects.

@@ -202,7 +202,7 @@ For more information on using Azure Pipelines with Machine Learning, see:
 * [Machine Learning MLOps](https://aka.ms/mlops) repository
 * [Machine Learning MLOpsPython](https://github.com/Microsoft/MLOpspython) repository

-You can also use Azure Data Factory to create a data ingestion pipeline that prepares data for use with training. For more information, see [Data ingestion pipeline](how-to-cicd-data-ingestion.md).
+You can also use Azure Data Factory to create a data ingestion pipeline that prepares data for use with training. For more information, see [Data ingestion pipeline](v1/how-to-cicd-data-ingestion.md).

 ## Next steps

articles/machine-learning/concept-sourcing-human-data.md

Lines changed: 1 addition & 1 deletion
@@ -232,7 +232,7 @@ For more information on how to work with your data:
 - [Secure data access in Azure Machine Learning](concept-data.md)
 - [Data ingestion options for Azure Machine Learning workflows](concept-data-ingestion.md)
 - [Optimize data processing with Azure Machine Learning](concept-optimize-data-processing.md)
-- [Use differential privacy in Azure Machine Learning](how-to-differential-privacy.md)
+- [Use differential privacy with Azure Machine Learning SDK](v1/how-to-differential-privacy.md)

 Follow these how-to guides to work with your data after you've collected it:

articles/machine-learning/concept-train-machine-learning-model.md

Lines changed: 2 additions & 2 deletions
@@ -70,7 +70,7 @@ Define the iterations, hyperparameter settings, featurization, and other setting
 Machine learning pipelines can use the previously mentioned training methods. Pipelines are more about creating a workflow, so they encompass more than just the training of models. In a pipeline, you can train a model using automated machine learning or run configurations.

 * [What are ML pipelines in Azure Machine Learning?](concept-ml-pipelines.md)
-* [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md)
+* [Create and run machine learning pipelines with Azure Machine Learning SDK](v1/how-to-create-machine-learning-pipelines.md)
 * [Tutorial: Use Azure Machine Learning Pipelines for batch scoring](tutorial-pipeline-batch-scoring-classification.md)
 * [Examples: Jupyter Notebook examples for machine learning pipelines](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines)
 * [Examples: Pipeline with automated machine learning](https://aka.ms/pl-automl)

@@ -95,7 +95,7 @@ The Azure training lifecycle consists of:
 1. Saving logs, model files, and other files written to `./outputs` to the storage account associated with the workspace
 1. Scaling down compute, including removing temporary storage

-If you choose to train on your local machine ("configure as local run"), you do not need to use Docker. You may use Docker locally if you choose (see the section [Configure ML pipeline](./how-to-debug-pipelines.md) for an example).
+If you choose to train on your local machine ("configure as local run"), you do not need to use Docker. You may use Docker locally if you choose (see the section [Configure ML pipeline](v1/how-to-debug-pipelines.md) for an example).

 ## Azure Machine Learning designer

articles/machine-learning/concept-workspace.md

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ When you create a new workspace, it automatically creates several Azure resource
 + [Azure Container Registry](https://azure.microsoft.com/services/container-registry/): Registers docker containers that are used for the following components:
     * [Azure Machine Learning environments](concept-environments.md) when training and deploying models
     * [AutoML](concept-automated-ml.md) when deploying
-    * [Data profiling](how-to-connect-data-ui.md#data-profile-and-preview)
+    * [Data profiling](v1/how-to-connect-data-ui.md#data-profile-and-preview)

 To minimize costs, ACR is **lazy-loaded** until images are needed.

articles/machine-learning/how-to-configure-databricks-automl-environment.md

Lines changed: 1 addition & 1 deletion
@@ -114,7 +114,7 @@ Try it out:
 ![Select Import](./media/how-to-configure-environment/azure-db-screenshot.png)
 ![Import Panel](./media/how-to-configure-environment/azure-db-import.png)

-+ Learn how to [create a pipeline with Databricks as the training compute](./how-to-create-machine-learning-pipelines.md).
++ Learn how to [create a pipeline with Databricks as the training compute](v1/how-to-create-machine-learning-pipelines.md).

 ## Troubleshooting

articles/machine-learning/how-to-create-image-labeling-projects.md

Lines changed: 1 addition & 1 deletion
@@ -216,7 +216,7 @@ Use the **Export** button on the **Project details** page of your labeling proje

 * Image labels can be exported as:
     * [COCO format](http://cocodataset.org/#format-data).The COCO file is created in the default blob store of the Azure Machine Learning workspace in a folder within *Labeling/export/coco*.
-    * An [Azure Machine Learning dataset with labels](how-to-use-labeled-dataset.md).
+    * An [Azure Machine Learning dataset with labels](v1/how-to-use-labeled-dataset.md).

 Access exported Azure Machine Learning datasets in the **Datasets** section of Machine Learning. The dataset details page also provides sample code to access your labels from Python.
