In any of those cases, Batch Deployments allow you to take control of the output of the jobs by writing directly to the output of the batch deployment job.
## About this sample
This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. This example uses a model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence).
The model has been trained using an `XGBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
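A model like this can be trained however you prefer. Purely for illustration, the following sketch shows how an end-to-end pipeline of this shape could be assembled; the feature split and preprocessing steps are assumptions, not the repository's training code:

```python
# Illustrative only; this is not the training code from the repository.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from xgboost import XGBClassifier

# Column names come from the UCI Heart Disease Data Set; the numeric/categorical
# split used here is an assumption for illustration.
numeric_features = ["age", "trestbps", "chol", "thalach", "oldpeak"]
categorical_features = ["sex", "cp", "fbs", "restecg", "exang", "slope", "ca", "thal"]

preprocessing = ColumnTransformer(
    transformers=[
        ("numeric", StandardScaler(), numeric_features),
        ("categorical", OneHotEncoder(handle_unknown="ignore"), categorical_features),
    ]
)

# The registered artifact is the fitted pipeline, so it accepts raw rows
# and returns 0 (no presence) or 1 (presence).
model = Pipeline(steps=[("preprocessing", preprocessing), ("classifier", XGBClassifier())])
```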
The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch/deploy-models/custom-outputs-parquet` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet` if you are using our SDK for Python.
```azurecli
cd azureml-examples/cli/endpoints/batch/deploy-models/custom-outputs-parquet
```
### Follow along in Jupyter Notebooks
You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [custom-output-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb).
## Prerequisites
Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace.
# [Azure CLI](#tab/cli)
```azurecli
MODEL_NAME='heart-classifier-sklpipe'
az ml model create --name $MODEL_NAME --type "custom_model" --path "model"
```
> The model used in this tutorial is an MLflow model. However, the steps apply for both MLflow models and custom models.
### Creating a scoring script
We need to create a scoring script that can read the input data provided by the batch deployment and return the scores of the model. We are also going to write directly to the output folder of the job. In summary, the proposed scoring script does the following:
3. Appends the predictions to a `pandas.DataFrame` along with the input data.
4. Writes the data to a file with the same name as the input file, but in `parquet` format.
__code/batch_driver.py__
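The full script ships with the sample in the repository. A minimal sketch of such a driver could look like the one below (the repository's script may differ); it assumes the registered model folder contains an MLflow model under a `model` subfolder and that the job's output folder is exposed through the `AZUREML_BI_OUTPUT_PATH` environment variable:

```python
import os
from pathlib import Path

import mlflow
import pandas as pd


def init():
    global model
    global output_path

    # AZUREML_MODEL_DIR is an environment variable created during deployment.
    # It is the path to the folder that contains the registered model.
    # The "model" subfolder name is an assumption for this sketch.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)

    # Assumption: the job's output folder is exposed through this variable.
    output_path = os.environ["AZUREML_BI_OUTPUT_PATH"]


def run(mini_batch):
    for file_path in mini_batch:
        # 1. Read the input file of the mini batch (CSV in this example).
        data = pd.read_csv(file_path)

        # 2. Score the input data with the model.
        predictions = model.predict(data)

        # 3. Append the predictions to the input data.
        data["prediction"] = predictions

        # 4. Write the result to the job's output folder as a parquet file
        #    named after the input file.
        output_file = Path(output_path) / (Path(file_path).stem + ".parquet")
        data.to_parquet(output_file)

    # Return one item per processed file so the deployment can track progress.
    return [str(f) for f in mini_batch]
```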
2. Create the deployment.
> [!NOTE]
> This example assumes you have an endpoint created with the name `heart-classifier-batch` and a compute cluster named `cpu-cluster`. If you don't, please follow the steps in [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
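If you still need to create them, a minimal sketch with the Azure CLI could look like this (the instance counts are illustrative choices):

```azurecli
# Compute cluster that the batch jobs will run on (instance counts are illustrative).
az ml compute create --name cpu-cluster --type amlcompute --min-instances 0 --max-instances 2

# Batch endpoint that will host the deployment.
az ml batch-endpoint create --name heart-classifier-batch
```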
To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
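The exact configuration ships with the sample in the repository. The sketch below shows the general shape of such a file; the deployment name, environment definition, and batch settings are illustrative values rather than the repository's configuration:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: classifier-custom-output
description: Heart disease classifier that writes predictions as parquet files.
endpoint_name: heart-classifier-batch
model: azureml:heart-classifier-sklpipe@latest
code_configuration:
  code: code
  scoring_script: batch_driver.py
environment:
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
  conda_file: environment/conda.yaml
compute: azureml:cpu-cluster
resources:
  instance_count: 2
max_concurrency_per_instance: 2
mini_batch_size: 2
output_action: summary_only
retry_settings:
  max_retries: 3
  timeout: 300
```

In this sketch, `output_action` is set to `summary_only` because the scoring script writes the predictions itself, so the deployment doesn't need to collect the values returned by `run` into an output file.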