Commit e4994ef

Incorporate feedback

1 parent 10b76d0 commit e4994ef

1 file changed: +7 -7 lines changed

articles/machine-learning/how-to-deploy-model-custom-output.md

@@ -29,7 +29,7 @@ Batch deployments allow you to take control of the output of the jobs by letting

 ## About this sample

-This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. This example uses a model based on the [UCI Heart Disease dataset](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we use a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It's integer valued from 0 (no presence) to 1 (presence).
+This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. The model is based on the [UCI Heart Disease dataset](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but this example uses a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It's integer valued from 0 (no presence) to 1 (presence).

 The model was trained using an `XGBoost` classifier and all the required preprocessing was packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.

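The end-to-end pipeline the hunk above describes (raw data in, 0/1 prediction out) can be sketched as preprocessing plus a classifier packaged in one `scikit-learn` pipeline. This is a minimal sketch, not the article's actual training code: `GradientBoostingClassifier` stands in for XGBoost's `XGBClassifier` so the sketch needs only scikit-learn, and the column names are an invented subset rather than the dataset's real 14 attributes.

```python
# Minimal sketch: package preprocessing and a classifier as a single
# scikit-learn pipeline, so raw data goes in and a 0/1 prediction comes out.
# GradientBoostingClassifier stands in for xgboost.XGBClassifier; the
# columns below are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric = ["age", "trestbps", "chol"]
categorical = ["cp", "thal"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

# The whole model is one object: preprocessing and classifier together.
model = Pipeline([
    ("preprocess", preprocess),
    ("classifier", GradientBoostingClassifier()),
])

# Tiny synthetic training set, raw (unscaled, string-valued) data in.
X = pd.DataFrame({
    "age": [63, 41, 57, 49],
    "trestbps": [145, 130, 120, 134],
    "chol": [233, 204, 236, 271],
    "cp": ["typical", "atypical", "nonanginal", "typical"],
    "thal": ["fixed", "normal", "normal", "fixed"],
})
y = np.array([1, 0, 1, 0])
model.fit(X, y)
preds = model.predict(X)
```

Because the pipeline owns its preprocessing, the same raw columns can be fed at inference time with no separate transformation step.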
@@ -51,7 +51,7 @@ There's a Jupyter notebook that you can use to follow this example. In the clone

 ## Create a batch deployment with a custom output

-In this example, we create a deployment that can write directly to the output folder of the batch deployment job. The deployment uses this feature to write custom parquet files.
+In this example, you create a deployment that can write directly to the output folder of the batch deployment job. The deployment uses this feature to write custom parquet files.

 ### Register the model

@@ -86,7 +86,7 @@ __Remarks:__

 * The `run` method returns a list of the processed files. It's required for the `run` function to return a `list` or a `pandas.DataFrame` object.

 > [!WARNING]
-> Take into account that all the batch executors have write access to this path at the same time. This means that you need to account for concurrency. In this case, we ensure that each executor writes its own file by using the input file name as the name of the output folder.
+> Take into account that all the batch executors have write access to this path at the same time. This means that you need to account for concurrency. In this case, ensure that each executor writes its own file by using the input file name as the name of the output folder.

 ## Create the endpoint

@@ -96,13 +96,13 @@ You now create a batch endpoint named `heart-classifier-batch` where the model i

 # [Azure CLI](#tab/cli)

-In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+In this case, place the name of the endpoint in a variable so you can easily reference it later.

 :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="name_endpoint" :::

 # [Python](#tab/python)

-In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+In this case, place the name of the endpoint in a variable so you can easily reference it later.

 [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=name_endpoint)]

@@ -179,7 +179,7 @@ Follow the next steps to create a deployment using the previous scoring script:

 To test your endpoint, use a sample of the unlabeled data located in this repository, which can be used with the model. Batch endpoints can only process data that's located in the cloud and is accessible from the Azure Machine Learning workspace. In this example, you upload it to an Azure Machine Learning data store. You're going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data placed in multiple types of locations.

-1. Let's invoke the endpoint with data from a storage account:
+1. Invoke the endpoint with data from a storage account:

 # [Azure CLI](#tab/cli)

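A hedged sketch of what the step in the hunk above composes: an invocation that points the endpoint at data sitting in a storage account. The blob URL is a placeholder, and the `--input`/`--input-type` flags are assumptions about the `az ml batch-endpoint invoke` call rather than a verbatim copy of the article's script.

```python
# Illustrative only: compose an invocation that scores data from a storage
# account. The URL is a placeholder; the flags are assumptions about the
# `az ml batch-endpoint invoke` command.
endpoint_name = "heart-classifier-batch"
input_uri = "https://<account>.blob.core.windows.net/<container>/heart-unlabeled"  # placeholder

invoke_command = (
    f"az ml batch-endpoint invoke --name {endpoint_name} "
    f"--input {input_uri} --input-type uri_folder"
)
print(invoke_command)
```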
@@ -213,7 +213,7 @@ To test your endpoint, use a sample of unlabeled data located in this repository

 ## Analyze the outputs

-The job generates a named output called `score` where all the generated files are placed. Since we wrote into the directory directly, one file per each input file, then we can expect to have the same number of files. In this particular example, we decided to name the output files the same as the inputs, but they have a parquet extension.
+The job generates a named output called `score` where all the generated files are placed. Since the deployment writes one file per input file directly into the directory, you can expect the same number of output files as input files. In this example, the output files are named the same as the inputs, but with a parquet extension.

 > [!NOTE]
 > Notice that a file *predictions.csv* is also included in the output folder. This file contains the summary of the processed files.
