
Commit 009e519

fix: improvements
1 parent 461eefd commit 009e519

2 files changed (+85, -36 lines)

2 files changed

+85
-36
lines changed

articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md

Lines changed: 52 additions & 0 deletions
@@ -38,6 +38,58 @@ Before following the steps in this article, make sure you have the following pre
- Install the Azure Machine Learning plug-in for MLflow: `azureml-mlflow`.
- If you aren't running on Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you're working in. For more details, see [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment).

### Connect to your workspace

First, let's connect to the Azure Machine Learning workspace that we're going to work in.

# [Azure CLI](#tab/cli)

```azurecli
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
```

# [Python (Azure ML SDK)](#tab/sdk)

The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.

1. Import the required libraries:

    ```python
    from azure.ai.ml import MLClient, Input
    from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment, Model
    from azure.ai.ml.constants import AssetTypes
    from azure.identity import DefaultAzureCredential
    ```

2. Configure workspace details and get a handle to the workspace:

    ```python
    subscription_id = "<subscription>"
    resource_group = "<resource-group>"
    workspace = "<workspace>"

    ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
    ```

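    Optionally, verify the handle by fetching the workspace details; this is an illustrative check, not a required step:

    ```python
    # Illustrative sanity check: retrieve the workspace with the new handle.
    ws = ml_client.workspaces.get(name=workspace)
    print(ws.location, ws.resource_group)
    ```
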
# [Python (MLflow SDK)](#tab/mlflow)

1. Import the required libraries:

    ```python
    import json
    import mlflow
    from mlflow.deployments import get_deploy_client
    ```

1. Configure the deployment client:

    ```python
    deployment_client = get_deploy_client(mlflow.get_tracking_uri())
    ```

---
### Registering the model in the registry

Ensure your model is registered in the Azure Machine Learning registry; deployment of unregistered models isn't supported in Azure Machine Learning. You can register a new model using the MLflow SDK:
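
For instance, a minimal sketch of registering a model from a previous run (the model name and run ID below are placeholders for your own values):

```python
import mlflow

# Register the model artifact that a previous run logged under "model".
model_name = "heart-classifier"            # hypothetical model name
model_uri = "runs:/<run-id>/model"         # replace <run-id> with your run's ID
registered_model = mlflow.register_model(model_uri, model_name)
print(registered_model.version)
```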

articles/machine-learning/how-to-deploy-mlflow-models.md

Lines changed: 33 additions & 36 deletions
@@ -25,10 +25,7 @@ ms.devlang: azurecli
In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure Machine Learning for both real-time and batch inference. Learn also about the different tools you can use to manage the deployment.

## Deploying MLflow models vs custom models

When deploying MLflow models to Azure Machine Learning, you don't have to provide a scoring script or an environment for deployment, as they're automatically generated for you. This functionality is typically referred to as no-code deployment.

@@ -42,26 +39,6 @@ For no-code-deployment, Azure Machine Learning:
> [!WARNING]
> Online Endpoints dynamically install the Python packages provided in the MLflow model package during container runtime. Deploying MLflow models to online endpoints with no-code deployment in a private network without egress connectivity isn't supported at the moment. If that's your case, either enable egress connectivity or indicate the environment to use in the deployment, as explained in [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments). This limitation isn't present in Batch Endpoints.
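
To make the idea concrete, here's a rough sketch of a no-code deployment with the MLflow deployment client; the endpoint and model names are illustrative, and the exact steps are covered in the articles linked below:

```python
import mlflow
from mlflow.deployments import get_deploy_client

# The client targets the workspace that the MLflow tracking URI points to.
deployment_client = get_deploy_client(mlflow.get_tracking_uri())

# Create an endpoint, then a deployment that points at a registered model
# version. No scoring script or environment is provided: no-code deployment.
endpoint_name = "heart-classifier-service"       # hypothetical endpoint name
deployment_client.create_endpoint(endpoint_name)
deployment_client.create_deployment(
    name="default",
    endpoint=endpoint_name,
    model_uri="models:/heart-classifier/1",      # hypothetical registered model
)
```
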
## Deployment tools

Azure Machine Learning offers many ways to deploy MLflow models to Online and Batch endpoints. You can deploy models using the following tools:

@@ -75,12 +52,12 @@ Each workflow has different capabilities, particularly around which type of comp

| Scenario | MLflow SDK | Azure ML CLI/SDK | Azure ML studio |
| :- | :-: | :-: | :-: |
| Deploy to managed online endpoints | [See example](how-to-deploy-mlflow-models-online-progressive.md)<sup>1</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md)<sup>1</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md?tabs=studio)<sup>1</sup> |
| Deploy to managed online endpoints (with a scoring script) | Not supported | [See example](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) | Not supported |
| Deploy to batch endpoints | Not supported | [See example](how-to-mlflow-batch.md) | [See example](how-to-mlflow-batch.md?tab=studio) |
| Deploy to batch endpoints (with a scoring script) | Not supported | [See example](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script) | Not supported |
| Deploy to web services (ACI/AKS) | Legacy support<sup>2</sup> | <sup>2</sup> | <sup>2</sup> |
| Deploy to web services (ACI/AKS - with a scoring script) | <sup>2</sup> | <sup>2</sup> | Legacy support<sup>2</sup> |

> [!NOTE]
> - <sup>1</sup> Deployment to online endpoints in private link-enabled workspaces isn't supported, as public network access is required for package installation. We suggest deploying with a scoring script in those scenarios.

@@ -102,7 +79,7 @@ MLflow includes built-in deployment tools that model developers can use to test
| JSON-serialized pandas DataFrames in the split orientation | **&check;** | **&check;** |
| JSON-serialized pandas DataFrames in the records orientation | Deprecated | |
| CSV-serialized pandas DataFrames | **&check;** | Use batch<sup>1</sup> |
| Tensor input format as JSON-serialized lists (tensors) and dictionary of lists (named tensors) | **&check;** | **&check;** |
| Tensor input formatted as in TF Serving’s API | **&check;** | |

> [!NOTE]
@@ -115,6 +92,9 @@ Regardless of the input type used, Azure Machine Learning requires inputs to be
> [!WARNING]
> Note that such a key isn't required when serving models using the command `mlflow models serve`, and hence payloads can't be used interchangeably.

> [!IMPORTANT]
> **MLflow 2.0 advisory**: Notice that the payload's structure changed in MLflow 2.0.
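
Because the keys differ, a payload built for one server won't work against the other. A small illustrative sketch of the difference, contrasting the `input_data` wrapper for Azure Machine Learning with `dataframe_split` for the MLflow 2.0+ built-in server (the DataFrame content is made up):

```python
import json

import pandas as pd

df = pd.DataFrame({"age": [63, 37], "chol": [233, 250]})  # made-up data
split = json.loads(df.to_json(orient="split"))

# Azure Machine Learning online endpoints wrap the input in "input_data"...
azureml_payload = json.dumps({"input_data": split})

# ...while the MLflow built-in server (2.0+) expects "dataframe_split".
mlflow_payload = json.dumps({"dataframe_split": split})
```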

#### Payload example for a JSON-serialized pandas DataFrame in the split orientation

# [Azure Machine Learning](#tab/azureml)
@@ -135,11 +115,6 @@ Regardless of the input type used, Azure Machine Learning requires inputs to be

# [MLflow built-in server](#tab/builtin)

```json
{
    "dataframe_split": {
@@ -153,6 +128,9 @@ The following payload corresponds to MLflow server 2.0+.
    }
}
```

The previous payload corresponds to MLflow server 2.0+.

---

@@ -222,6 +200,25 @@ The following payload corresponds to MLflow server 2.0+.

For more information about MLflow built-in deployment tools, see the [MLflow documentation](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools).

## How to customize inference when deploying MLflow models

You may be used to authoring scoring scripts to customize how inference is executed for your models. This is particularly the case when you're using features like `autolog` in MLflow, which automatically logs models for you to the best of the framework's knowledge. However, you may need to run inference in a different way.

For those cases, you can either [change how your model is logged during training](#change-how-your-model-is-logged-during-training) or [customize inference with a scoring script](#customize-inference-with-a-scoring-script).

### Change how your model is logged during training

When you log a model using either `mlflow.autolog` or `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what the model returns. MLflow doesn't enforce any specific behavior in how the `predict()` function generates results. In some scenarios, you may want to do some pre-processing or post-processing before and after your model is executed.

A solution to this scenario is to implement machine learning pipelines that move from inputs to outputs directly. Although this is possible (and sometimes encouraged for performance reasons), it may be challenging to achieve. For those cases, you probably want to [customize how your model does inference using a custom model](how-to-log-mlflow-models.md?#logging-custom-models).
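
As a sketch of that approach, you can wrap your trained model in a custom pyfunc model whose `predict()` implements the inference behavior you want. The wrapper below, which returns class probabilities instead of predicted classes, is illustrative:

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

class ProbabilityWrapper(mlflow.pyfunc.PythonModel):
    """Custom model whose predict() returns class probabilities."""

    def __init__(self, model):
        self._model = model

    def predict(self, context, model_input):
        # Custom inference logic runs here instead of the flavor's default.
        return self._model.predict_proba(model_input)

# Train a simple classifier, then log the wrapped model as a pyfunc model.
X, y = load_iris(return_X_y=True)
classifier = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run():
    mlflow.pyfunc.log_model("model", python_model=ProbabilityWrapper(classifier))
```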
215+
216+
### Customize inference with a scoring script
217+
218+
If you want to customize how inference is executed for MLflow models (or opt-out for no-code deployment) you can refer to [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) and [Customizing MLflow model deployments (Batch Endpoints)](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
219+
220+
> [!IMPORTANT]
221+
> When you opt-in to indicate a scoring script, you also need to provide an environment for deployment.
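
For reference, a minimal sketch of such a scoring script, assuming the standard `init()`/`run()` contract of Azure Machine Learning online endpoints and an MLflow model stored in a folder named `model` (both the folder name and the payload key are placeholders for your own setup):

```python
import json
import os

import mlflow
import pandas as pd

model = None

def init():
    global model
    # AZUREML_MODEL_DIR points to the folder where the model was downloaded.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)

def run(raw_data):
    # Parse the request payload and run inference with the MLflow model.
    data = pd.DataFrame(json.loads(raw_data)["input_data"]["data"])
    return model.predict(data).tolist()
```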
## Next steps
