Commit 8dbf9cb

more edits
1 parent aa1a66e commit 8dbf9cb

1 file changed (+24 −24 lines)

articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md

Lines changed: 24 additions & 24 deletions
@@ -1,14 +1,14 @@
 ---
-title: Deploy MLflow models to online endpoint
+title: Deploy MLflow models to real-time endpoints
 titleSuffix: Azure Machine Learning
-description: Learn to deploy your MLflow model as a web service that's automatically managed by Azure.
+description: Learn to deploy your MLflow model as a web service that's managed by Azure.
 services: machine-learning
 ms.service: machine-learning
-ms.subservice: core
+ms.subservice: inferencing
 author: santiagxf
 ms.author: fasantia
 ms.reviewer: mopeakande
-ms.date: 01/22/2024
+ms.date: 01/31/2024
 ms.topic: how-to
 ms.custom: deploy, mlflow, devplatv2, no-code-deployment, devx-track-azurecli, cliv2, event-tier1-build-2022
 ---
@@ -22,7 +22,7 @@ In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model
 
 For no-code-deployment, Azure Machine Learning:
 
-* Dynamically installs Python packages provided in the `conda.yaml` file. Hence, dependencies are installed during container runtime.
+* Dynamically installs Python packages provided in the `conda.yaml` file. Hence, dependencies get installed during container runtime.
 * Provides an MLflow base image/curated environment that contains the following items:
   * [`azureml-inference-server-http`](how-to-inference-server-http.md)
   * [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst)
@@ -31,13 +31,13 @@ For no-code-deployment, Azure Machine Learning:
 [!INCLUDE [mlflow-model-package-for-workspace-without-egress](includes/mlflow-model-package-for-workspace-without-egress.md)]
 
 
-## About this example
+## About the example
 
-This example shows how you can deploy an MLflow model to an online endpoint to perform predictions. This example uses an MLflow model based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains ten baseline variables: age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from 442 diabetes patients. It also contains the response of interest, a quantitative measure of disease progression one year after baseline.
+The example shows how you can deploy an MLflow model to an online endpoint to perform predictions. The example uses an MLflow model that's based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains ten baseline variables: age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from 442 diabetes patients. It also contains the response of interest, a quantitative measure of disease progression one year after baseline.
 
 The model was trained using a `scikit-learn` regressor, and all the required preprocessing has been packaged as a pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
 
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to `cli` if you're using the Azure CLI or `sdk/python/endpoints/online/mlflow` if you're using the Azure Machine Learning SDK for Python.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to `cli` if you're using the Azure CLI. If you're using the Azure Machine Learning SDK for Python, change directories to `sdk/python/endpoints/online/mlflow`.
 
 ```azurecli
 git clone https://github.com/Azure/azureml-examples --depth 1
@@ -199,13 +199,13 @@ To create a model in Azure Machine Learning studio:
 
 ---
 
-__What if your model was logged inside of a run?__
+#### What if your model was logged inside of a run?
 
 If your model was logged inside of a run, you can register it directly.
 
-To register the model, you need to know the location where the model is stored. If you're using MLflow's `autolog` feature, the path to the model depends on the model type and framework. You should check the jobs output to identify the name of the model's folder. This folder contains a file named `MLModel`.
+To register the model, you need to know the location where it's stored. If you're using MLflow's `autolog` feature, the path to the model depends on the model type and framework. You should check the job's output to identify the name of the model's folder. This folder contains a file named `MLModel`.
 
-If you're logging your models manually, using the `log_model` method, then the path to the model is the argument you pass to the method. For example, if you log the model, using `mlflow.sklearn.log_model(my_model, "classifier")`, then the path where the model is stored is called `classifier`.
+If you're using the `log_model` method to manually log your models, then the path to the model is the argument that you pass to the method. For example, if you log the model using `mlflow.sklearn.log_model(my_model, "classifier")`, then the path where the model is stored is called `classifier`.
 
 # [Azure CLI](#tab/cli)
 
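To make the path logic concrete, here's a minimal sketch (not from the article; the run ID and the helper function are hypothetical) of how the artifact path passed to `log_model` becomes part of the `runs:/` URI that MLflow uses to reference a model logged inside a run:

```python
# Sketch: if training code calls mlflow.sklearn.log_model(my_model, "classifier"),
# the model folder under the run's artifacts is named "classifier", and the
# standard MLflow URI for that model is "runs:/<run_id>/<artifact_path>".

def model_uri_from_run(run_id: str, artifact_path: str) -> str:
    """Build the runs:/ URI that references a model logged in a run."""
    return f"runs:/{run_id}/{artifact_path}"

# Hypothetical run ID, with the artifact path "classifier" from the example above.
uri = model_uri_from_run("abc123", "classifier")
print(uri)  # runs:/abc123/classifier
```

That URI is what you'd hand to a registration call such as `mlflow.register_model`, which accepts `runs:/` URIs.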
@@ -403,7 +403,7 @@ version = registered_model.version
 ---
 
 > [!NOTE]
-> Autogeneration of `scoring_script` and `environment` are only supported for `pyfunc` model flavor. To use a different flavor, see [Customizing MLflow model deployments](#customizing-mlflow-model-deployments).
+> Autogeneration of the `scoring_script` and `environment` is only supported for the `pyfunc` model flavor. To use a different model flavor, see [Customize MLflow model deployments](#customize-mlflow-model-deployments).
 
 1. Create the deployment:
 
@@ -440,10 +440,10 @@ version = registered_model.version
 
 :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" alt-text="Screenshot showing create option on the Endpoints UI page.":::
 
-1. Choose the MLflow model that you registered previously and select the **Select** button.
+1. Choose the MLflow model that you registered previously, then select the **Select** button.
 
 > [!NOTE]
-> The configuration page includes a note that says the the scoring script and environment are auto generated for your selected MLflow model.
+> The configuration page includes a note to inform you that the scoring script and environment are automatically generated for your selected MLflow model.
 
 1. Select **New** to deploy to a new endpoint.
 1. Provide a name for the endpoint and deployment or keep the default names.
@@ -456,7 +456,7 @@ version = registered_model.version
 
 # [Azure CLI](#tab/cli)
 
-*This step in not required in the Azure CLI since you used the `--all-traffic` flag during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic` as explained a in the [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic) section.*
+*This step isn't required in the Azure CLI, since you used the `--all-traffic` flag during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic`. For more information on how to update traffic, see [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
 
 # [Python (Azure Machine Learning SDK)](#tab/sdk)
 
@@ -485,8 +485,8 @@ version = registered_model.version
 1. Update the endpoint configuration:
 
 # [Azure CLI](#tab/cli)
-
-*This step in not required in the Azure CLI since we used the `--all-traffic` during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic` as explained at [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
+
+*This step isn't required in the Azure CLI, since you used the `--all-traffic` flag during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic`. For more information on how to update traffic, see [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
 
 # [Python (Azure Machine Learning SDK)](#tab/sdk)
 
@@ -507,18 +507,18 @@ version = registered_model.version
 
 *This step in not required in the studio.*
 
-### Invoke the endpoint
+## Invoke the endpoint
 
-Once your deployment completes, your deployment is ready to serve request. One of the easier ways to test the deployment is by using the built-in invocation capability in the deployment client you are using.
+Once your deployment is ready, you can use it to serve requests. One way to test the deployment is by using the built-in invocation capability in the deployment client you're using. The following JSON is a sample request for the deployment.
 
 **sample-request-sklearn.json**
 
 :::code language="json" source="~/azureml-examples-main/cli/endpoints/online/ncd/sample-request-sklearn.json":::
 
 > [!NOTE]
-> Notice how the key `input_data` has been used in this example instead of `inputs` as used in MLflow serving. This is because Azure Machine Learning requires a different input format to be able to automatically generate the swagger contracts for the endpoints. See [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#models-deployed-in-azure-machine-learning-vs-models-deployed-in-the-mlflow-built-in-server) for details about expected input format.
+> This example uses the key `input_data`, instead of the `inputs` key that MLflow serving uses, because Azure Machine Learning requires a different input format to be able to automatically generate the swagger contracts for the endpoints. For more information about expected input formats, see [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#models-deployed-in-azure-machine-learning-vs-models-deployed-in-the-mlflow-built-in-server).
 
-To submit a request to the endpoint, you can do as follows:
+Submit a request to the endpoint as follows:
 
 # [Azure CLI](#tab/cli)
 
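The `input_data` versus `inputs` distinction can be sketched as follows (not from the article; the column names and values here are made up for illustration, and the real payload lives in `sample-request-sklearn.json`):

```python
import json

# The same tabular observations expressed two ways:
columns = ["age", "sex", "bmi"]
rows = [[0.038, 0.051, 0.062], [-0.002, -0.045, -0.051]]

# MLflow built-in server: the top-level key is "inputs".
mlflow_payload = {"inputs": {"columns": columns, "data": rows}}

# Azure Machine Learning online endpoint: the top-level key is "input_data".
azureml_payload = {"input_data": {"columns": columns, "data": rows}}

print(json.dumps(azureml_payload, indent=2))
```

Only the top-level key differs here; Azure Machine Learning relies on that wrapper to generate the swagger contract for the endpoint.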
@@ -548,10 +548,10 @@ deployment_client.predict(endpoint=endpoint_name, df=samples)
 
 MLflow models can use the __Test__ tab to create invocations to the created endpoints. To do that:
 
-1. Go to the __Endpoints__ tab and select the new endpoint created.
+1. Go to the __Endpoints__ tab and select the endpoint you created.
 1. Go to the __Test__ tab.
 1. Paste the content of the file `sample-request-sklearn.json`.
-1. Click on __Test__.
+1. Select __Test__.
 1. The predictions will show up in the box on the right.
 
 ---
@@ -569,7 +569,7 @@ The response will be similar to the following text:
 > For MLflow no-code-deployment, **[testing via local endpoints](how-to-deploy-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints)** is currently not supported.
 
 
-## Customizing MLflow model deployments
+## Customize MLflow model deployments
 
 MLflow models can be deployed to online endpoints without indicating a scoring script in the deployment definition. However, you can opt to customize how inference is executed.
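As a rough illustration of what customizing inference means, here's a minimal sketch (not the article's scoring script; the stand-in model and the payload are fabricated) of the `init()`/`run()` shape that Azure Machine Learning scoring scripts follow:

```python
import json

# init() is called once when the container starts; run() is called per request.
# A real script would load the MLflow model from the path in the
# AZUREML_MODEL_DIR environment variable (for example, with mlflow.pyfunc);
# here a trivial stand-in model is used so the sketch is self-contained.

model = None

def init():
    """Load the model once at container startup."""
    global model
    model = lambda rows: [sum(r) for r in rows]  # stand-in for a real model

def run(raw_data: str):
    """Parse the request payload and return predictions."""
    payload = json.loads(raw_data)
    rows = payload["input_data"]["data"]
    return model(rows)

# Local smoke test with a fabricated two-row payload.
init()
print(run(json.dumps({"input_data": {"data": [[1, 2], [3, 4]]}})))  # [3, 7]
```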
575575
