# customer intent: As a developer, I want to see how to deploy an MLflow model to an online endpoint so that I can use the model to make predictions in real time.
# [Python (MLflow SDK)](#tab/mlflow)

- Install the MLflow SDK package, `mlflow`, and the Azure Machine Learning integration package for MLflow, `azureml-mlflow`.

```bash
  pip install mlflow azureml-mlflow
  ```

## About the example
The example in this article shows you how to deploy an MLflow model to an online endpoint to perform predictions. The example uses an MLflow model that's based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains 10 baseline variables: age, sex, body mass index, average blood pressure, and 6 blood serum measurements obtained from 442 diabetes patients. It also contains the response of interest, a quantitative measure of disease progression one year after the date of the baseline data.
The model was trained by using a `scikit-learn` regressor. All the required preprocessing is packaged as a pipeline, so this model is an end-to-end pipeline that goes from raw data to predictions.
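As a rough sketch, not the repository's actual training code, such an end-to-end pipeline can be built with the `scikit-learn` `Pipeline` class. The `StandardScaler` preprocessing step and the `Ridge` regressor here are illustrative assumptions:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Load the Diabetes dataset: 442 patients, 10 baseline variables.
X, y = load_diabetes(return_X_y=True)

# Package preprocessing and the regressor as a single pipeline, so the
# resulting model goes from raw data to predictions in one object.
model = Pipeline([("scale", StandardScaler()), ("regressor", Ridge())])
model.fit(X, y)

predictions = model.predict(X[:2])
```

In the workflow that this article describes, a pipeline like this would then be logged as an MLflow model, for example with `mlflow.sklearn.log_model`.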
The information in this article is based on code samples from the [azureml-examples](https://github.com/azure/azureml-examples) repository. If you clone the repository, you can run the commands in this article locally without having to copy or paste YAML files and other files. Use the following commands to clone the repository and go to the folder for your coding language:
# [Azure CLI](#tab/cli)
1. Configure workspace details and get a handle to the workspace:
### Register the model
You can deploy only registered models to online endpoints. The steps in this article use a model that's trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). In this case, you already have a local copy of the model in your cloned repository, so you only need to publish the model to the registry in the workspace. You can skip this step if the model you want to deploy is already registered.
# [Azure CLI](#tab/cli)
# [Studio](#tab/studio)

To create a model in Azure Machine Learning studio:

1. In the studio, select __Models__.
1. Select __Register__, and then select the location of your model. For this example, select __From local files__.
1. On the __Upload model__ page, under __Model type__, select __MLflow__.
If your model was logged inside a run, you can register it directly.

To register the model, you need to know its storage location:
- If you use the MLflow `autolog` feature, the path to the model depends on the model type and framework. Check the job output to identify the name of the model folder. This folder contains a file named MLmodel.
- If you use the `log_model` method to manually log your models, you pass the path to the model as an argument to that method. For example, if you use `mlflow.sklearn.log_model(my_model, "classifier")` to log the model, `classifier` is the path that the model is stored on.
# [Azure CLI](#tab/cli)
You can use the Azure Machine Learning CLI v2 to create a model from training job output. The following code uses the artifacts of a job with ID `$RUN_ID` to register a model named `$MODEL_NAME`. `$MODEL_PATH` is the path that the job uses to store the model.
```bash
az ml model create --name $MODEL_NAME --path azureml://jobs/$RUN_ID/outputs/artifacts/$MODEL_PATH
```
# [Python (Azure Machine Learning SDK)](#tab/sdk)
You can use the Python SDK to create a model from training job output. The following code uses the artifacts of a job with ID `RUN_ID` to register a model named `sklearn-diabetes`. `MODEL_PATH` is the path that the job uses to store the model.
# [Python (MLflow SDK)](#tab/mlflow)

You can use the Python MLflow SDK to create a model from training job output. The following code uses the artifacts of a job with ID `RUN_ID` to register a model named `sklearn-diabetes`. `MODEL_PATH` is the path that the job uses to store the model.
```python
model_name = "sklearn-diabetes"

# Register the model from the job's artifact output. This completes the
# truncated snippet as a sketch; RUN_ID and MODEL_PATH are assumed to hold
# the job name and the artifact path that the job used to store the model.
registered_model = mlflow.register_model(f"runs:/{RUN_ID}/{MODEL_PATH}", model_name)
version = registered_model.version
```
1. Select __Next__.
1. On the __Model settings__ page, take the following steps:
1. Under __Name__, enter the name that you want to use for the registered model.
1. Select __Next__.
1. On the __Review__ page, review the settings, and then select __Register__. A message appears about the model being created successfully.
# [Python (MLflow SDK)](#tab/mlflow)
You can use a configuration file to configure the properties of the endpoint. In this case, you configure the authentication mode of the endpoint to be `key`.
```python
# Configure the endpoint to use key-based authentication. This is a sketch
# that completes the truncated snippet; the repository's configuration file
# may contain additional settings.
endpoint_config = {"auth_mode": "key"}
```

---
> [!NOTE]
> Automatic generation of the `scoring_script` and `environment` is supported only for the `PyFunc` model flavor. To use a different model flavor, see [Customize MLflow model deployments](#customize-mlflow-model-deployments).

1. Create the deployment:
# [Azure CLI](#tab/cli)
517
517
518
-
This step isn't required in the Azure CLI if you use the `--all-traffic` flag during creation. If you need to change the traffic, you can use the `az ml online-endpoint update --traffic` command. For more information on how to update traffic, see [Progressively update the traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).
518
+
This step isn't required in the Azure CLI if you use the `--all-traffic` flag during creation. If you need to change the traffic, you can use the `az ml online-endpoint update --traffic` command. For more information about how to update traffic, see [Progressively update the traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).
# [Python (Azure Machine Learning SDK)](#tab/sdk)
# [Azure CLI](#tab/cli)
This step isn't required in the Azure CLI if you use the `--all-traffic` flag during creation. If you need to change traffic, you can use the `az ml online-endpoint update --traffic` command. For more information about how to update traffic, see [Progressively update the traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).
# [Python (Azure Machine Learning SDK)](#tab/sdk)
## Invoke the endpoint
When your deployment is ready, you can use it to serve requests. One way to test the deployment is by using the built-in invocation capability in your deployment client. In the examples repository, the sample-request-sklearn.json file contains the following JSON code. You can use it as a sample request file for the deployment.
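As an illustration of the payload shape, the following sketch builds a request body with the `input_data` key that Azure ML online endpoints expect for MLflow models. The column names are the Diabetes dataset's baseline variables; the numeric values are made up for illustration and aren't the values in the repository's sample file:

```python
import json

# Build a request body in the `input_data` format: column names, row index,
# and one row of feature values (illustrative numbers only).
payload = {
    "input_data": {
        "columns": ["age", "sex", "bmi", "bp", "s1", "s2", "s3", "s4", "s5", "s6"],
        "index": [0],
        "data": [[0.038, 0.051, 0.062, 0.022, -0.044, -0.035, -0.043, -0.003, 0.02, -0.018]],
    }
}
request_body = json.dumps(payload)
```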
# [Azure CLI](#tab/cli)
---
> [!NOTE]
> This file uses the `input_data` key instead of `inputs`, which MLflow serving uses. Azure Machine Learning requires a different input format to be able to automatically generate the Swagger contracts for the endpoints. For more information about expected input formats, see [Deployment in the MLflow built-in server vs. deployment in Azure Machine Learning inferencing server](how-to-deploy-mlflow-models.md#models-deployed-in-azure-machine-learning-vs-models-deployed-in-the-mlflow-built-in-server).

Submit a request to the endpoint:
For MLflow models, you can use the __Test__ tab in the studio to invoke the created endpoint:

1. Select __Test__.
1. The output box displays the predictions.
---
You typically want to customize your MLflow model deployment in the following cases:

- The model doesn't have a `PyFunc` flavor.
- You need to customize the way the model is run. For instance, you need to use `mlflow.<flavor>.load_model()` to use a specific flavor to load the model.
- You need to do preprocessing or postprocessing in your scoring routine, because the model doesn't do this processing.
- The output of the model can't be nicely represented in tabular data. For instance, the output is a tensor that represents an image.
> [!IMPORTANT]
> If you specify a scoring script for an MLflow model deployment, you also have to specify the environment that the deployment runs in.
Identify the folder that contains your MLflow model by taking the following steps:

1. Select the model that you want to deploy and go to its __Artifacts__ tab.
1. Take note of the folder that's displayed. When you register a model, you specify this folder.
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" alt-text="Screenshot that shows the folder that contains the model artifacts.":::
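The repository's full score.py isn't reproduced in this excerpt. As a rough, runnable sketch of the `init()`/`run()` contract that an Azure Machine Learning scoring script follows, the following code uses a stand-in function where a real script would call `mlflow.pyfunc.load_model`; the payload shape and the stub model are illustrative assumptions:

```python
import json
import os

model = None

def init():
    # Azure ML calls init() once when the deployment starts. It sets
    # AZUREML_MODEL_DIR to the folder that contains the model files.
    global model
    model_dir = os.getenv("AZUREML_MODEL_DIR", ".")
    # Stand-in for: model = mlflow.pyfunc.load_model(os.path.join(model_dir, "model"))
    model = lambda rows: [sum(row) for row in rows]

def run(raw_data):
    # Azure ML passes the request body as a string; parse the `input_data` key.
    data = json.loads(raw_data)["input_data"]["data"]
    return model(data)

# Local smoke test of the contract (in production, Azure ML calls init()/run()).
init()
result = run(json.dumps({"input_data": {"data": [[1.0, 2.0], [3.0, 4.0]]}}))
```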
#### Create an environment
The next step is to create an environment that you can run the scoring script in. Because the model is an MLflow model, the conda requirements are also specified in the model package. For more information about the files included in an MLflow model, see [The MLmodel format](concept-mlflow-models.md#the-mlmodel-format). You build the environment by using the conda dependencies from the file. However, you need to also include the `azureml-inference-server-http` package, which is required for online deployments in Azure Machine Learning.
You can create a conda definition file named conda.yaml that contains the following lines:
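The conda.yaml content itself isn't included in this excerpt. The following sketch shows the general shape such a file takes, with the `azureml-inference-server-http` package added as described above; the channel, package list, and version pin are illustrative assumptions, and the real dependencies come from the model's own conda file:

```yaml
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pip
  - pip:
      - mlflow
      - scikit-learn
      - azureml-inference-server-http
name: mlflow-env
```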
1. Go to the __Custom environments__ tab, and then select __Create__.
1. On the __Settings__ page, take the following steps:
1. Under __Name__, enter the name of the environment. In this case, enter __sklearn-mlflow-online-py37__.
1. Under __Select environment source__, select __Use existing docker image with optional conda file__.
1. Under __Container registry image path__, enter __mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04__.
1. Select __Next__ to go to the __Customize__ section.
1. Copy the contents of the sklearn-diabetes/environment/conda.yaml file and paste it in the text box.
To create the deployment, take the steps in the following sections.

1. Enter a name and authentication type for the endpoint, and then select __Next__.
1. Check to see that the model you selected is being used for your deployment, and then select __Next__ to continue to the __Deployment__ page.
1. Select __Next__.
##### Configure custom settings
1. On the __Code and environment for inferencing__ page, next to __Customize environment and scoring script__, select the slider. When you use a model that's registered in MLflow format, you don't need to specify a scoring script or an environment. But in this case, you want to specify both.
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/configure-scoring-script-mlflow.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/configure-scoring-script-mlflow.png" alt-text="Screenshot of a studio configuration page. Highlighted components include an option for customizing the environment and scoring script.":::