- This example shows how you can deploy an MLflow model to an online endpoint to perform predictions. This example uses an MLflow model based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains ten baseline variables: age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from 442 diabetes patients. It also contains the response of interest, a quantitative measure of disease progression one year after baseline.
+ The example shows how you can deploy an MLflow model to an online endpoint to perform predictions. The example uses an MLflow model that's based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains ten baseline variables: age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from 442 diabetes patients. It also contains the response of interest, a quantitative measure of disease progression one year after baseline.
The model was trained using a `scikit-learn` regressor, and all the required preprocessing has been packaged as a pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
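To make the "end-to-end pipeline" idea concrete, here's a hedged sketch of how such a model might be trained; the step names and the `Ridge` regressor are assumptions, not the example's actual training code:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Load the same Diabetes dataset that the example's model is based on.
X, y = load_diabetes(return_X_y=True)

# Package preprocessing and the regressor together so that the saved model
# goes from raw data to predictions as a single object.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("regressor", Ridge(alpha=1.0)),  # the regressor choice is an assumption
])
pipeline.fit(X, y)

preds = pipeline.predict(X[:2])
```

Because the scaler is inside the pipeline, callers send raw feature values and never need to preprocess inputs themselves.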
- The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to `cli` if you're using the Azure CLI or `sdk/python/endpoints/online/mlflow` if you're using the Azure Machine Learning SDK for Python.
+ The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to `cli` if you're using the Azure CLI. If you're using the Azure Machine Learning SDK for Python, change directories to `sdk/python/endpoints/online/mlflow`.
@@ -199,13 +199,13 @@ To create a model in Azure Machine Learning studio:
---
- __What if your model was logged inside of a run?__
+ #### What if your model was logged inside of a run?
If your model was logged inside of a run, you can register it directly.
- To register the model, you need to know the location where the model is stored. If you're using MLflow's `autolog` feature, the path to the model depends on the model type and framework. You should check the jobs output to identify the name of the model's folder. This folder contains a file named `MLmodel`.
+ To register the model, you need to know the location where it's stored. If you're using MLflow's `autolog` feature, the path to the model depends on the model type and framework. Check the job's output to identify the name of the model's folder. This folder contains a file named `MLmodel`.
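If you've downloaded the job's outputs locally, one way to locate the model folder is to search for the `MLmodel` file. This is a standard-library sketch; the directory layout it creates is hypothetical and exists only to exercise the helper:

```python
import os
import tempfile

def find_model_dirs(root):
    """Return every directory under `root` that contains an MLmodel file."""
    return [dirpath for dirpath, _dirs, files in os.walk(root) if "MLmodel" in files]

# Hypothetical downloaded-output layout, created only to exercise the helper.
root = tempfile.mkdtemp()
model_dir = os.path.join(root, "artifacts", "model")
os.makedirs(model_dir)
with open(os.path.join(model_dir, "MLmodel"), "w") as f:
    f.write("flavors: {}\n")

model_dirs = find_model_dirs(root)
```

Each directory returned is a candidate model folder that you can register.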
- If you're logging your models manually, using the `log_model` method, then the path to the model is the argument you pass to the method. For example, if you log the model using `mlflow.sklearn.log_model(my_model, "classifier")`, then the path where the model is stored is called `classifier`.
+ If you're using the `log_model` method to manually log your models, then the path to the model is the argument that you pass to the method. For example, if you log the model using `mlflow.sklearn.log_model(my_model, "classifier")`, then the path where the model is stored is called `classifier`.
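Assuming MLflow's standard `runs:/` URI scheme and a placeholder run ID, the path from the example above resolves like this:

```python
# Placeholder run ID for illustration; substitute the actual run's ID.
run_id = "my_run_id"

# The path argument that was passed to mlflow.sklearn.log_model(my_model, "classifier").
artifact_path = "classifier"

# MLflow addresses a model logged inside a run as runs:/<run_id>/<artifact_path>;
# this is the location you register the model from.
model_uri = f"runs:/{run_id}/{artifact_path}"
```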
# [Azure CLI](#tab/cli)
@@ -403,7 +403,7 @@ version = registered_model.version
---
> [!NOTE]
- > Autogeneration of `scoring_script` and `environment` are only supported for `pyfunc` model flavor. To use a different flavor, see [Customizing MLflow model deployments](#customizing-mlflow-model-deployments).
+ > Autogeneration of the `scoring_script` and `environment` is only supported for the `pyfunc` model flavor. To use a different model flavor, see [Customize MLflow model deployments](#customize-mlflow-model-deployments).
1. Create the deployment:
@@ -440,10 +440,10 @@ version = registered_model.version
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" alt-text="Screenshot showing create option on the Endpoints UI page.":::
- 1. Choose the MLflow model that you registered previously and select the **Select** button.
+ 1. Choose the MLflow model that you registered previously, and then select the **Select** button.
> [!NOTE]
- > The configuration page includes a note that says the scoring script and environment are autogenerated for your selected MLflow model.
+ > The configuration page includes a note to inform you that the scoring script and environment are autogenerated for your selected MLflow model.
1. Select **New** to deploy to a new endpoint.
1. Provide a name for the endpoint and deployment or keep the default names.
@@ -456,7 +456,7 @@ version = registered_model.version
# [Azure CLI](#tab/cli)
- *This step isn't required in the Azure CLI since you used the `--all-traffic` flag during creation. If you need to change traffic, you can use the `az ml online-endpoint update --traffic` command, as explained in the [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic) section.*
+ *This step isn't required in the Azure CLI, since you used the `--all-traffic` flag during creation. If you need to change traffic, you can use the `az ml online-endpoint update --traffic` command. For more information on how to update traffic, see [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
# [Python (Azure Machine Learning SDK)](#tab/sdk)
@@ -485,8 +485,8 @@ version = registered_model.version
1. Update the endpoint configuration:
# [Azure CLI](#tab/cli)
- *This step isn't required in the Azure CLI since we used the `--all-traffic` flag during creation. If you need to change traffic, you can use the `az ml online-endpoint update --traffic` command, as explained at [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
+ *This step isn't required in the Azure CLI, since you used the `--all-traffic` flag during creation. If you need to change traffic, you can use the `az ml online-endpoint update --traffic` command. For more information on how to update traffic, see [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
# [Python (Azure Machine Learning SDK)](#tab/sdk)
@@ -507,18 +507,18 @@ version = registered_model.version
*This step isn't required in the studio.*
- ### Invoke the endpoint
+ ## Invoke the endpoint
- Once your deployment completes, your deployment is ready to serve requests. One of the easier ways to test the deployment is by using the built-in invocation capability in the deployment client you're using.
+ Once your deployment is ready, you can use it to serve requests. One way to test the deployment is by using the built-in invocation capability in the deployment client you're using. The following JSON is a sample request for the deployment.
- > Notice how the key `input_data` has been used in this example instead of `inputs` as used in MLflow serving. This is because Azure Machine Learning requires a different input format to be able to automatically generate the swagger contracts for the endpoints. See [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#models-deployed-in-azure-machine-learning-vs-models-deployed-in-the-mlflow-built-in-server) for details about the expected input format.
+ > This example uses the key `input_data` instead of the `inputs` key that's used in MLflow serving. Azure Machine Learning requires a different input format so that it can automatically generate the swagger contracts for the endpoints. For more information about expected input formats, see [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#models-deployed-in-azure-machine-learning-vs-models-deployed-in-the-mlflow-built-in-server).
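A request body in this `input_data` format can be assembled like the following sketch. The column names match the Diabetes dataset's baseline variables, but the row of feature values is made up for illustration:

```python
import json

# Column names follow the Diabetes dataset's baseline variables;
# the row of feature values is fabricated for illustration.
request = {
    "input_data": {  # Azure Machine Learning key; MLflow serving uses "inputs"
        "columns": ["age", "sex", "bmi", "bp", "s1", "s2", "s3", "s4", "s5", "s6"],
        "index": [0],
        "data": [[0.04, 0.05, 0.06, 0.02, -0.04, -0.03, -0.04, -0.002, 0.02, -0.02]],
    }
}

body = json.dumps(request)
```

The `columns`/`index`/`data` layout mirrors a pandas DataFrame in split orientation, which is what the endpoint deserializes the payload into.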
- To submit a request to the endpoint, you can do as follows:
For MLflow models, you can use the __Test__ tab to create invocations to the created endpoints. To do that:
- 1. Go to the __Endpoints__ tab and select the new endpoint created.
+ 1. Go to the __Endpoints__ tab and select the endpoint you created.
1. Go to the __Test__ tab.
1. Paste the content of the file `sample-request-sklearn.json`.
- 1. Click on __Test__.
+ 1. Select __Test__.
1. The predictions show up in the box on the right.
---
@@ -569,7 +569,7 @@ The response will be similar to the following text:
> For MLflow no-code-deployment, **[testing via local endpoints](how-to-deploy-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints)** is currently not supported.
- ## Customizing MLflow model deployments
+ ## Customize MLflow model deployments
MLflow models can be deployed to online endpoints without indicating a scoring script in the deployment definition. However, you can opt to customize how inference is executed.
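To illustrate what customized inference looks like, here's a minimal sketch of a scoring script that assumes the standard `init()`/`run()` contract and the `input_data` request format shown earlier; the `"model"` subfolder name is an assumption, and the fallback dummy predictor exists only so the sketch can be exercised outside Azure Machine Learning:

```python
import json
import os

model = None

def init():
    """Called once when the deployment starts; loads the model."""
    global model
    model_dir = os.getenv("AZUREML_MODEL_DIR")  # set by Azure ML in the container
    if model_dir:
        import mlflow.pyfunc
        # The "model" subfolder name is an assumption about the registered layout.
        model = mlflow.pyfunc.load_model(os.path.join(model_dir, "model"))
    else:
        # Stand-in predictor so the sketch can run outside Azure ML.
        model = type(
            "Dummy", (), {"predict": staticmethod(lambda rows: [0.0] * len(rows))}
        )()

def run(raw_data):
    """Called per request; `raw_data` is the JSON body sent to the endpoint."""
    payload = json.loads(raw_data)
    rows = payload["input_data"]["data"]
    return json.dumps({"predictions": list(model.predict(rows))})
```

A custom script like this is where you would add request validation, input transformations, or postprocessing of the predictions.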