Commit 7b87ab4 (1 parent: a10f047)

Update how-to-nlp-processing-batch.md

1 file changed (+22 -3 lines)

articles/machine-learning/how-to-nlp-processing-batch.md

````diff
@@ -17,7 +17,7 @@ ms.custom: devplatv2
 
 [!INCLUDE [cli v2](../../includes/machine-learning-dev-v2.md)]
 
-Batch Endpoints can be used for processing tabular data, but also any other file type like text. Those deployments are supported in both MLflow and custom models. In this tutorial we will learn how to deploy a model that can perform text summarization of long sequences of text using a model from HuggingFace.
+Batch Endpoints can be used for processing tabular data that contains text. Those deployments are supported for both MLflow and custom models. In this tutorial, you learn how to deploy a model that performs text summarization of long sequences of text using a model from HuggingFace.
 
 ## About this sample
````
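As context for what "tabular data that contains text" means for a batch deployment, the sketch below is a minimal illustration (the file contents and mini-batch size are hypothetical, not from this commit): it reads a CSV whose `text` column holds the passages to summarize and groups the rows into mini-batches, which is the shape of input a batch scoring script typically receives.

```python
import csv
import io

# Hypothetical input: a CSV file where each row carries one passage of text,
# as a batch endpoint input file for this tutorial might look.
sample_csv = (
    "text\n"
    '"First long article to summarize..."\n'
    '"Second long article to summarize..."\n'
    '"Third long article to summarize..."\n'
)

rows = [record["text"] for record in csv.DictReader(io.StringIO(sample_csv))]

def minibatches(items, size):
    """Yield consecutive mini-batches of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# A scoring script would run the summarization model once per mini-batch.
for batch in minibatches(rows, size=2):
    print(len(batch), batch)
```

The last mini-batch may be smaller than `size`; a scoring script has to handle that remainder rather than assume fixed-size batches.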

````diff
@@ -27,13 +27,26 @@ The model we are going to work with was built using the popular library transformers
 
 * It is trained for summarization of text in English.
 * We are going to use Torch as a backend.
 
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to `cli/endpoints/batch/deploy-models/huggingface-text-summarization` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization` if you are using our SDK for Python.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to [`cli/endpoints/batch/deploy-models/huggingface-text-summarization`](https://github.com/azure/azureml-examples/tree/main/cli/endpoints/batch/deploy-models/huggingface-text-summarization) if you are using the Azure CLI, or [`sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization`](https://github.com/azure/azureml-examples/tree/main/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization) if you are using the SDK for Python.
+
+# [Azure CLI](#tab/azure-cli)
 
 ```azurecli
 git clone https://github.com/Azure/azureml-examples --depth 1
 cd azureml-examples/cli/endpoints/batch/deploy-models/huggingface-text-summarization
 ```
 
+# [Python](#tab/python)
+
+In a Jupyter notebook:
+
+```python
+!git clone https://github.com/Azure/azureml-examples --depth 1
+%cd azureml-examples/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization
+```
+
+---
+
 ### Follow along in Jupyter Notebooks
 
 You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [text-summarization-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization/text-summarization-batch.ipynb).
````
````diff
@@ -80,7 +93,13 @@ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group,
 
 ### Registering the model
 
-Due to the size of the model, it hasn't been included in this repository. Instead, you can generate a local copy with the following code. A local copy of the model will be placed at `model`. We will use it during the course of this tutorial.
+Due to the size of the model, it hasn't been included in this repository. Instead, you can download a copy from the HuggingFace model hub. You need the packages `transformers` and `torch` installed in the environment you are using:
+
+```python
+%pip install transformers torch
+```
+
+Use the following code to download the model to a folder `model`:
 
 ```python
 from transformers import pipeline
````