Commit b2ef5e7

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into pauljewell-lease-blob
2 parents: a156871 + 04348af

15 files changed: +75 / -50 lines

articles/cognitive-services/Translator/custom-translator/platform-upgrade.md

Lines changed: 7 additions & 7 deletions
@@ -6,37 +6,37 @@ author: laujan
 manager: nitinme
 ms.service: cognitive-services
 ms.subservice: translator-text
-ms.date: 03/30/2023
+ms.date: 04/10/2023
 ms.author: lajanuar
 ms.topic: reference
 ---
 # Custom Translator platform upgrade
 
 > [!CAUTION]
 >
-> On June 02, 2023, Microsoft will retire the Custom Translator v1.0 model platform. Existing v1.0 models must migrate to the v2.0 platform for continued processing and support.
+> On June 02, 2023, Microsoft will retire the Custom Translator v1.0 model platform. Existing v1.0 models must migrate to the new platform for continued processing and support.
 
-Following measured and consistent high-quality results using models trained on the Custom Translator v2.0 platform, the v1.0 platform is retiring. Custom Translator v2.0 delivers significant improvements in many domains compared to both standard and Custom v1.0 platform translations. Migrate your v1.0 models to the v2.0 platform by June 02, 2023.
+Following measured and consistent high-quality results using models trained on the Custom Translator new platform, the v1.0 platform is retiring. The new Custom Translator platform delivers significant improvements in many domains compared to both standard and Custom v1.0 platform translations. Migrate your v1.0 models to the new platform by June 02, 2023.
 
 ## Custom Translator v1.0 upgrade timeline
 
 * **May 01, 2023** → Custom Translator v1.0 model publishing ends. There's no downtime during the v1.0 model migration. All model publishing and in-flight translation requests will continue without disruption until June 02, 2023.
 
-* **May 01, 2023 through June 02, 2023** → Customers voluntarily migrate to v2.0 models.
+* **May 01, 2023 through June 02, 2023** → Customers voluntarily migrate to new platform models.
 
 * **June 08, 2023** → Remaining v1.0 published models migrate automatically and are published by the Custom Translator team.
 
-## Upgrade to v2.0
+## Upgrade to new platform
 
 > [!IMPORTANT]
 >
 > * Starting **May 01, 2023** the upgrade wizard and workspace banner will be displayed in the Custom Translator portal indicating that you have v1.0 models to upgrade.
 > * The banner contains a **Select** button that takes you to the upgrade wizard where a list of all your v1.0 models available for upgrade are displayed.
-> * Select any or all of your v1.0 models then select **Train** to start v2.0 model upgrade training.
+> * Select any or all of your v1.0 models then select **Train** to start new platform model training.
 
 * **Check to see if you have published v1.0 models**. After signing in to the Custom Translator portal, you'll see a message indicating that you have v1.0 models to upgrade. You can also check to see if a current workspace has v1.0 models by selecting **Workspace settings** and scrolling to the bottom of the page.
 
-* **Use the upgrade wizard**. Follow the steps listed in **Upgrade to the latest version** wizard. Depending on your training data size, it may take from a few hours to a full day to upgrade your models to the v2.0 platform.
+* **Use the upgrade wizard**. Follow the steps listed in **Upgrade to the latest version** wizard. Depending on your training data size, it may take from a few hours to a full day to upgrade your models to the new platform.
 
 ## Unpublished and opt-out published models

articles/deployment-environments/how-to-manage-environments.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ ms.topic: how-to
 ms.date: 02/28/2023
 ---
 
-# Manage your environment
+# Manage your deployment environment
 
 In Azure Deployment Environments Preview, a development infrastructure admin gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the pre-configured environment types. The permissions that the creator of the environment and the rest of team have to access the environment's resources are defined in the specific environment type.

articles/machine-learning/how-to-nlp-processing-batch.md

Lines changed: 34 additions & 15 deletions
@@ -17,7 +17,7 @@ ms.custom: devplatv2
 
 [!INCLUDE [cli v2](../../includes/machine-learning-dev-v2.md)]
 
-Batch Endpoints can be used for processing tabular data, but also any other file type like text. Those deployments are supported in both MLflow and custom models. In this tutorial we will learn how to deploy a model that can perform text summarization of long sequences of text using a model from HuggingFace.
+Batch Endpoints can be used for processing tabular data that contain text. Those deployments are supported in both MLflow and custom models. In this tutorial we will learn how to deploy a model that can perform text summarization of long sequences of text using a model from HuggingFace.
 
 ## About this sample
 
@@ -27,13 +27,26 @@ The model we are going to work with was built using the popular library transfor
 * It is trained for summarization of text in English.
 * We are going to use Torch as a backend.
 
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch/deploy-models/huggingface-text-summarization` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization` if you are using our SDK for Python.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the [`cli/endpoints/batch/deploy-models/huggingface-text-summarization`](https://github.com/azure/azureml-examples/tree/main/cli/endpoints/batch/deploy-models/huggingface-text-summarization) if you are using the Azure CLI or [`sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization`](https://github.com/azure/azureml-examples/tree/main/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization) if you are using our SDK for Python.
+
+# [Azure CLI](#tab/cli)
 
 ```azurecli
 git clone https://github.com/Azure/azureml-examples --depth 1
 cd azureml-examples/cli/endpoints/batch/deploy-models/huggingface-text-summarization
 ```
 
+# [Python](#tab/python)
+
+In a Jupyter notebook:
+
+```python
+!git clone https://github.com/Azure/azureml-examples --depth 1
+!cd azureml-examples/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization
+```
+
+---
+
 ### Follow along in Jupyter Notebooks
 
 You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [text-summarization-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization/text-summarization-batch.ipynb).
@@ -46,7 +59,7 @@ You can follow along this sample in a Jupyter Notebook. In the cloned repository
 
 First, let's connect to Azure Machine Learning workspace where we're going to work on.
 
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/cli)
 
 ```azurecli
 az account set --subscription <subscription>
@@ -80,7 +93,13 @@ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, 
 
 ### Registering the model
 
-Due to the size of the model, it hasn't been included in this repository. Instead, you can generate a local copy with the following code. A local copy of the model will be placed at `model`. We will use it during the course of this tutorial.
+Due to the size of the model, it hasn't been included in this repository. Instead, you can download a copy from the HuggingFace model's hub. You need the packages `transformers` and `torch` installed in the environment you are using.
+
+```python
+%pip install transformers torch
+```
+
+Use the following code to download the model to a folder `model`:
 
 ```python
 from transformers import pipeline
@@ -99,7 +118,7 @@ MODEL_NAME='bart-text-summarization'
 az ml model create --name $MODEL_NAME --path "model"
 ```
 
-# [Python](#tab/sdk)
+# [Python](#tab/python)
 
 ```python
 model_name = 'bart-text-summarization'
@@ -115,7 +134,7 @@ We are going to create a batch endpoint named `text-summarization-batch` where t
 
 1. Decide on the name of the endpoint. The name of the endpoint will end-up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
 
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/cli)
 
 In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
 
@@ -133,7 +152,7 @@ We are going to create a batch endpoint named `text-summarization-batch` where t
 
 1. Configure your batch endpoint
 
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/cli)
 
 The following YAML file defines a batch endpoint:
 
@@ -156,7 +175,7 @@ We are going to create a batch endpoint named `text-summarization-batch` where t
 
 :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deploy-and-run.sh" ID="create_batch_endpoint" :::
 
-# [Python](#tab/sdk)
+# [Python](#tab/python)
 
 ```python
 ml_client.batch_endpoints.begin_create_or_update(endpoint)
@@ -199,7 +218,7 @@ Let's create the deployment that will host the model:
 
 :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deployment.yml" range="7-10" :::
 
-# [Python](#tab/sdk)
+# [Python](#tab/python)
 
 Let's get a reference to the environment:
 
@@ -217,7 +236,7 @@ Let's create the deployment that will host the model:
 
 1. Each deployment runs on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) or [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). In this example, our model can benefit from GPU acceleration, which is why we will use a GPU cluster.
 
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/cli)
 
 :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deploy-and-run.sh" ID="create_compute" :::
 
@@ -253,7 +272,7 @@ Let's create the deployment that will host the model:
 
 :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deploy-and-run.sh" ID="create_batch_deployment_set_default" :::
 
-# [Python](#tab/sdk)
+# [Python](#tab/python)
 
 To create a new deployment with the indicated environment and scoring script use the following code:
 
@@ -298,7 +317,7 @@ Let's create the deployment that will host the model:
 az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
 ```
 
-# [Python](#tab/sdk)
+# [Python](#tab/python)
 
 ```python
 endpoint.defaults.deployment_name = deployment.name
@@ -321,7 +340,7 @@ For testing our endpoint, we are going to use a sample of the dataset [BillSum: 
 > [!NOTE]
 > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/).
 
-# [Python](#tab/sdk)
+# [Python](#tab/python)
 
 ```python
 input = Input(type=AssetTypes.URI_FOLDER, path="data")
@@ -341,7 +360,7 @@ For testing our endpoint, we are going to use a sample of the dataset [BillSum: 
 
 :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deploy-and-run.sh" ID="show_job_in_studio" :::
 
-# [Python](#tab/sdk)
+# [Python](#tab/python)
 
 ```python
 ml_client.jobs.get(job.name)
@@ -357,7 +376,7 @@ For testing our endpoint, we are going to use a sample of the dataset [BillSum: 
 az ml job download --name $JOB_NAME --output-name score --download-path .
 ```
 
-# [Python](#tab/sdk)
+# [Python](#tab/python)
 
 ```python
 ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
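
Editor's aside: the hunks above point readers to the `jq` utility for handling JSON output from `az ml` commands. For readers without `jq`, the same field extraction can be done with Python's standard `json` module. The sketch below uses an illustrative job-summary document; the field names are assumptions, not actual CLI output:

```python
import json

# Illustrative JSON, shaped like a trimmed job-show result;
# the field names here are assumptions, not actual `az ml` output.
raw = '{"name": "batchjob-1234", "status": "Completed"}'

job = json.loads(raw)  # parse the JSON document into a dict
print(job["name"])     # -> batchjob-1234
```

This is the Python equivalent of `... | jq -r '.name'` on the same document.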

articles/machine-learning/samples-designer.md

Lines changed: 6 additions & 4 deletions
@@ -12,6 +12,7 @@ ms.author: keli19
 ms.date: 10/21/2021
 ms.custom: designer
 ---
+
 # Example pipelines & datasets for Azure Machine Learning designer
 
 Use the built-in examples in Azure Machine Learning designer to quickly get started building your own machine learning pipelines. The Azure Machine Learning designer [GitHub repository](https://github.com/Azure/MachineLearningDesigner) contains detailed documentation to help you understand some common machine learning scenarios.
@@ -116,10 +117,10 @@ The sample datasets are available under **Datasets**-**Samples** category. You c
 |-------------|:--------------------|
 | Adult Census Income Binary Classification dataset | A subset of the 1994 Census database, using working adults over the age of 16 with an adjusted income index of > 100.<br/>**Usage**: Classify people using demographics to predict whether a person earns over 50K a year.<br/> **Related Research**: Kohavi, R., Becker, B., (1996). [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science|
 |Automobile price data (Raw)|Information about automobiles by make and model, including the price, features such as the number of cylinders and MPG, as well as an insurance risk score.<br/> The risk score is initially associated with auto price. It is then adjusted for actual risk in a process known to actuaries as symboling. A value of +3 indicates that the auto is risky, and a value of -3 that it is probably safe.<br/>**Usage**: Predict the risk score by features, using regression or multivariate classification.<br/>**Related Research**: Schlimmer, J.C. (1987). [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science. |
-| CRM Appetency Labels Shared |Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_appetency.labels](http://www.sigkdd.org/site/2009/files/orange_small_train_appetency.labels)).|
-|CRM Churn Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_churn.labels](http://www.sigkdd.org/site/2009/files/orange_small_train_churn.labels)).|
-|CRM Dataset Shared | This data comes from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train.data.zip](http://www.sigkdd.org/site/2009/files/orange_small_train.data.zip)). <br/>The dataset contains 50K customers from the French Telecom company Orange. Each customer has 230 anonymized features, 190 of which are numeric and 40 are categorical. The features are very sparse. |
-|CRM Upselling Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_large_train_upselling.labels](http://www.sigkdd.org/site/2009/files/orange_large_train_upselling.labels)|
+| CRM Appetency Labels Shared |Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_appetency.labels](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train_appetency.labels)).|
+|CRM Churn Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_churn.labels](https://kdd.org/cupfiles/KDDCupData/2009/files/orange_small_train_churn.labels)).|
+|CRM Dataset Shared | This data comes from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train.data.zip](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train.data.zip)). <br/>The dataset contains 50K customers from the French Telecom company Orange. Each customer has 230 anonymized features, 190 of which are numeric and 40 are categorical. The features are very sparse. |
+|CRM Upselling Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_large_train_upselling.labels](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train_upselling.labels)|
 |Flight Delays Data|Passenger flight on-time performance data taken from the TranStats data collection of the U.S. Department of Transportation ([On-Time](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time)).<br/>The dataset covers the time period April-October 2013. Before uploading to the designer, the dataset was processed as follows: <br/>- The dataset was filtered to cover only the 70 busiest airports in the continental US <br/>- Canceled flights were labeled as delayed by more than 15 minutes <br/>- Diverted flights were filtered out <br/>- The following columns were selected: Year, Month, DayofMonth, DayOfWeek, Carrier, OriginAirportID, DestAirportID, CRSDepTime, DepDelay, DepDel15, CRSArrTime, ArrDelay, ArrDel15, Canceled|
 |German Credit Card UCI dataset|The UCI Statlog (German Credit Card) dataset ([Statlog+German+Credit+Data](https://archive.ics.uci.edu/ml/datasets/Statlog+(German+Credit+Data))), using the german.data file.<br/>The dataset classifies people, described by a set of attributes, as low or high credit risks. Each example represents a person. There are 20 features, both numerical and categorical, and a binary label (the credit risk value). High credit risk entries have label = 2, low credit risk entries have label = 1. The cost of misclassifying a low risk example as high is 1, whereas the cost of misclassifying a high risk example as low is 5.|
 |IMDB Movie Titles|The dataset contains information about movies that were rated in Twitter tweets: IMDB movie ID, movie name, genre, and production year. There are 17K movies in the dataset. The dataset was introduced in the paper "S. Dooms, T. De Pessemier and L. Martens. MovieTweetings: a Movie Rating Dataset Collected From Twitter. Workshop on Crowdsourcing and Human Computation for Recommender Systems, CrowdRec at RecSys 2013."|
@@ -137,3 +138,4 @@ The sample datasets are available under **Datasets**-**Samples** category. You c
 ## Next steps
 
 Learn the fundamentals of predictive analytics and machine learning with [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md)
+

articles/storage/blobs/create-data-lake-storage-account.md

Lines changed: 2 additions & 2 deletions
@@ -2,10 +2,10 @@
 title: Create a storage account for Azure Data Lake Storage Gen2
 titleSuffix: Azure Storage
 description: Learn how to create a storage account for use with Azure Data Lake Storage Gen2.
-author: jimmart-dev
+author: normesta
 
 ms.topic: how-to
-ms.author: jammart
+ms.author: normesta
 ms.date: 03/09/2023
 ms.service: storage
 ms.subservice: data-lake-storage-gen2

articles/storage/blobs/data-lake-storage-access-control-model.md

Lines changed: 2 additions & 2 deletions
@@ -2,13 +2,13 @@
 title: Access control model for Azure Data Lake Storage Gen2
 titleSuffix: Azure Storage
 description: Learn how to configure container, directory, and file-level access in accounts that have a hierarchical namespace.
-author: jimmart-dev
+author: normesta
 
 ms.subservice: data-lake-storage-gen2
 ms.service: storage
 ms.topic: conceptual
 ms.date: 03/09/2023
-ms.author: jammart
+ms.author: normesta
 ms.custom: engagement-fy23
 ---

articles/storage/blobs/data-lake-storage-access-control.md

Lines changed: 2 additions & 2 deletions
@@ -2,13 +2,13 @@
 title: Access control lists in Azure Data Lake Storage Gen2
 titleSuffix: Azure Storage
 description: Understand how POSIX-like ACLs access control lists work in Azure Data Lake Storage Gen2.
-author: jimmart-dev
+author: normesta
 
 ms.subservice: data-lake-storage-gen2
 ms.service: storage
 ms.topic: conceptual
 ms.date: 03/09/2023
-ms.author: jammart
+ms.author: normesta
 ms.reviewer: jamesbak
 ms.devlang: python
 ms.custom: engagement-fy23

articles/storage/blobs/data-lake-storage-acl-azure-portal.md

Lines changed: 2 additions & 2 deletions
@@ -2,13 +2,13 @@
 title: Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2
 titleSuffix: Azure Storage
 description: Use the Azure portal to manage access control lists (ACLs) in storage accounts that have a hierarchical namespace (HNS) enabled.
-author: jimmart-dev
+author: normesta
 
 ms.subservice: data-lake-storage-gen2
 ms.service: storage
 ms.topic: how-to
 ms.date: 03/09/2023
-ms.author: jammart
+ms.author: normesta
 ---
 
 # Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2
