This example shows how you can deploy an MLflow model to a batch endpoint to perform batch predictions.
The model has been trained using an `XGBoost` classifier, and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
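Conceptually, such an end-to-end pipeline can be sketched as follows. This is a hypothetical illustration, not the repository's training code: `GradientBoostingClassifier` stands in for `XGBClassifier` so the sketch only needs `scikit-learn`, and the feature columns and data are made up.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Preprocessing and the classifier travel together as a single model,
# so callers can feed raw data and get predictions back.
pipeline = Pipeline(steps=[
    ("preprocess", StandardScaler()),
    ("classifier", GradientBoostingClassifier()),
])

# Tiny made-up sample: two numeric features, binary label
X = np.array([[63.0, 233.0], [37.0, 250.0], [41.0, 204.0], [56.0, 236.0]])
y = np.array([1, 0, 0, 1])
pipeline.fit(X, y)
predictions = pipeline.predict(X)
```

Because the preprocessing is part of the pipeline, an MLflow model saved from it carries the featurization along, and no separate preprocessing step is needed at scoring time.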
The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch/deploy-models/heart-classifier-mlflow` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow` if you are using our SDK for Python.
```azurecli
cd azureml-examples/cli/endpoints/batch/deploy-models/heart-classifier-mlflow
```
### Follow along in Jupyter Notebooks
You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [mlflow-for-batch-tabular.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb).
## Prerequisites
Follow these steps to deploy an MLflow model to a batch endpoint for running batch inference:
1. Batch endpoints can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
# [Azure CLI](#tab/cli)
```azurecli
MODEL_NAME='heart-classifier'
az ml model create --name $MODEL_NAME --type "mlflow_model" --path "model"
```
# [Python](#tab/sdk)
```python
model_name = "heart-classifier"
model = ml_client.models.create_or_update(
    Model(name=model_name, path="model", type=AssetTypes.MLFLOW_MODEL)
)
```
1. Before moving forward, we need to make sure the batch deployments we are about to create can run on some infrastructure (compute). Batch deployments can run on any Azure Machine Learning compute that already exists in the workspace, which means that multiple batch deployments can share the same compute infrastructure. In this example, we are going to work on an Azure Machine Learning compute cluster called `cpu-cluster`. Let's verify that the compute exists in the workspace, or create it otherwise.
# [Azure CLI](#tab/cli)
```azurecli
az ml compute create -n cpu-cluster --type amlcompute --min-instances 0 --max-instances 2
```

# [Python](#tab/sdk)

```python
compute_name = "cpu-cluster"
if not any(c.name == compute_name for c in ml_client.compute.list()):
    compute_cluster = AmlCompute(name=compute_name, min_instances=0, max_instances=2)
    ml_client.begin_create_or_update(compute_cluster)
```
1. Now it is time to create the batch endpoint and deployment. Let's start with the endpoint first. Endpoints only require a name and a description to be created. The name of the endpoint ends up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
# [Azure CLI](#tab/cli)
In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
```azurecli
ENDPOINT_NAME="heart-classifier-batch"
```
# [Python](#tab/sdk)
In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
```python
endpoint_name="heart-classifier-batch"
```
1. Create the endpoint:
# [Azure CLI](#tab/cli)
To create a new endpoint, create a `YAML` configuration like the following:
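For instance, a minimal `endpoint.yml` might look like this (the description text here is illustrative; the name matches the one chosen earlier):

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
name: heart-classifier-batch
description: A heart condition classifier for batch inference
auth_mode: aad_token
```

You can then create the endpoint by passing this file to `az ml batch-endpoint create --file endpoint.yml`.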
> The utility `jq` may not be installed on every system. You can find installation instructions in [this link](https://stedolan.github.io/jq/download/).
The following data types are supported for batch inference when deploying MLflow models:
| File extension | Type returned as model's input | Signature requirement |
| :- | :- | :- |
|`.csv`, `.parquet`|`pd.DataFrame`|`ColSpec`. If not provided, column typing is not enforced. |
|`.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, `.gif`|`np.ndarray`|`TensorSpec`. Input is reshaped to match tensors shape if available. If no signature is available, tensors of type `np.uint8` are inferred. For additional guidance read [Considerations for MLflow models processing images](how-to-image-processing-batch.md#considerations-for-mlflow-models-processing-images). |
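As a sketch of the first row of the table, a `.csv` input reaches the model as a `pandas.DataFrame` (the column names below are made up for illustration):

```python
import io
import pandas as pd

# Simulate how a .csv file from the input dataset is materialized
csv_file = io.StringIO("age,chol\n63,233\n37,250\n")
data = pd.read_csv(csv_file)

# Without a ColSpec signature in the MLmodel file, the column dtypes
# are whatever pandas infers here; nothing is enforced.
```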
Use the following steps to deploy an MLflow model with a custom scoring script.
1. Create a scoring script. Notice how the folder name `model` you identified before has been included in the `init()` function.
__deployment-custom/code/batch_driver.py__

```python
import os

import mlflow
import pandas as pd


def init():
    global model

    # AZUREML_MODEL_DIR is an environment variable created during deployment
    # It is the path to the model folder
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)


def run(mini_batch):
    # Score each file in the mini batch and collect the predictions
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        pred = model.predict(data)

        df = pd.DataFrame(pred, columns=["predictions"])
        df["file"] = os.path.basename(file_path)
        results.append(df)

    return pd.concat(results)
```
1. Let's create an environment where the scoring script can be executed. Since our model is MLflow, the conda requirements are also specified in the model package (for more details about MLflow models and the files included in them, see [The MLmodel format](concept-mlflow-models.md#the-mlmodel-format)). We are then going to build the environment using the conda dependencies from the file. However, __we also need to include__ the package `azureml-core`, which is required for Batch Deployments.
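   Such an environment definition could be sketched as follows (the package list is illustrative; in practice you would start from the `conda.yaml` packaged with the model and append `azureml-core`):

   ```yaml
   channels:
     - conda-forge
   dependencies:
     - python=3.8
     - pip
     - pip:
         - mlflow
         - scikit-learn
         - xgboost
         - azureml-core  # required by Batch Deployments, not part of the model's own dependencies
   ```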