**articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md** (+52 lines)
Before following the steps in this article, make sure you have the following prerequisites:
- Install the Azure Machine Learning plug-in for MLflow: `azureml-mlflow`.
- If you aren't running on Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you're working in. For more details, see [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment). A minimal example follows this list.
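
As a sketch of that configuration, you can set the tracking URI from Python. The URI below is only a placeholder pattern; retrieve the real value from the Azure Machine Learning studio or from your workspace object.

```python
import mlflow

# A minimal sketch: point MLflow at your Azure Machine Learning workspace.
# The URI below is a placeholder pattern; substitute the actual tracking URI
# of your workspace.
mlflow.set_tracking_uri(
    "azureml://<region>.api.azureml.ms/mlflow/v1.0"
    "/subscriptions/<subscription>/resourceGroups/<resource-group>"
    "/providers/Microsoft.MachineLearningServices/workspaces/<workspace>"
)
```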
### Connect to your workspace
First, let's connect to the Azure Machine Learning workspace that you'll work in.
# [Azure CLI](#tab/cli)
```azurecli
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
```
# [Python (Azure ML SDK)](#tab/sdk)
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
1. Import the required libraries:
```python
from azure.ai.ml import MLClient, Input
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment, Model
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential
```
2. Configure workspace details and get a handle to the workspace:
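
   A minimal sketch of this step, reusing the libraries imported in step 1; the subscription, resource group, and workspace values are placeholders:

```python
# Placeholder values: replace with your own workspace details.
subscription_id = "<subscription>"
resource_group = "<resource-group>"
workspace = "<workspace>"

# DefaultAzureCredential tries the available authentication methods in turn
# (environment variables, managed identity, Azure CLI login, and so on).
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
```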
Ensure your model is registered in the Azure Machine Learning registry. Deployment of unregistered models isn't supported in Azure Machine Learning. You can register a new model using the MLflow SDK:
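
For example, a sketch of registering a model from a run's artifacts; the run ID, the artifact path `model`, and the registry name `mymodel` are assumptions about how your model was logged:

```python
import mlflow

# "runs:/<run_id>/model" assumes the model was logged under the artifact
# path "model" in the given run; "mymodel" is an illustrative registry name.
mlflow.register_model("runs:/<run_id>/model", "mymodel")
```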
**articles/machine-learning/how-to-deploy-mlflow-models.md** (+33, −36 lines)
In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure Machine Learning for both real-time and batch inference. You'll also learn about the different tools you can use to manage the deployment.
## Deploying MLflow models vs custom models
When deploying MLflow models to Azure Machine Learning, you don't have to provide a scoring script or an environment for deployment as they are automatically generated for you. We typically refer to this functionality as no-code deployment.
For no-code deployment, Azure Machine Learning:
> [!WARNING]
> Online Endpoints dynamically install the Python packages provided in the MLflow model package during container runtime. Deploying MLflow models to online endpoints with no-code deployment in a private network without egress connectivity isn't currently supported. If that's your case, either enable egress connectivity or indicate the environment to use in the deployment, as explained in [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments). This limitation isn't present in Batch Endpoints.
## Deployment tools
Azure Machine Learning offers many ways to deploy MLflow models into Online and Batch endpoints. You can deploy models using the following tools:
Each workflow has different capabilities, particularly around which type of compute they can target.
| Scenario | MLflow SDK | Azure ML CLI/SDK | Azure ML studio |
|---|---|---|---|
| Deploy to managed online endpoints (with a scoring script) | | [See example](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) | Not supported |
| Deploy to batch endpoints | | [See example](how-to-mlflow-batch.md) | [See example](how-to-mlflow-batch.md?tab=studio) |
| Deploy to batch endpoints (with a scoring script) | | [See example](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script) | |
| Deploy to web services (ACI/AKS) | Legacy support<sup>2</sup> | <sup>2</sup> | <sup>2</sup> |
| Deploy to web services (ACI/AKS - with a scoring script) | <sup>2</sup> | <sup>2</sup> | Legacy support<sup>2</sup> |
> [!NOTE]
> - <sup>1</sup> Deployment to online endpoints in private link-enabled workspaces isn't supported, as public network access is required for package installation. We suggest deploying with a scoring script in those scenarios.
MLflow includes built-in deployment tools that model developers can use to test models locally.
| Input type | Support in MLflow built-in server | Support in Azure Machine Learning |
|---|---|---|
| JSON-serialized pandas DataFrames in the split orientation | **✓** | **✓** |
| JSON-serialized pandas DataFrames in the records orientation | Deprecated | |
| CSV-serialized pandas DataFrames | **✓** | Use batch<sup>1</sup> |
| Tensor input format as JSON-serialized lists (tensors) and dictionary of lists (named tensors) | **✓** | **✓** |
| Tensor input formatted as in TF Serving's API | **✓** | |
Regardless of the input type used, Azure Machine Learning requires inputs to be provided in a JSON payload, within a dictionary with the key `input_data`.
> [!WARNING]
> Note that such a key isn't required when serving models using the command `mlflow models serve`, and hence payloads can't be used interchangeably.
> [!IMPORTANT]
> **MLflow 2.0 advisory**: Notice that the payload's structure has changed in MLflow 2.0.
#### Payload example for a JSON-serialized pandas DataFrame in the split orientation
# [Azure Machine Learning](#tab/azureml)
# [MLflow built-in server](#tab/builtin)
```json
{
    "dataframe_split": {
        ...
    }
}
```
The previous payload corresponds to MLflow server 2.0+.
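
As an illustration, a payload like the previous one can be sent to a locally served model as follows. The column names and values are made up, and the URL assumes a local server started with `mlflow models serve -p 5000`:

```python
import requests

# Illustrative payload in the split orientation; columns and values are made up.
payload = {
    "dataframe_split": {
        "columns": ["age", "income"],
        "data": [[39, 75000.0], [25, 48000.0]],
    }
}

# Assumes a local server started with `mlflow models serve -p 5000`.
response = requests.post("http://127.0.0.1:5000/invocations", json=payload)
print(response.json())
```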
---
For more information about MLflow built-in deployment tools, see the [MLflow documentation](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools).
## How to customize inference when deploying MLflow models
You may be used to authoring scoring scripts to customize how inference is executed for your models. This is particularly the case when you're using features like `autolog` in MLflow, which automatically logs models for you to the best of the framework's knowledge. However, you may need to run inference in a different way.
For those cases, you can either [change how your model is logged during training](#change-how-your-model-is-logged-during-training) or [customize inference with a scoring script](#customize-inference-with-a-scoring-script).
### Change how your model is logged during training
When you log a model using either `mlflow.autolog` or `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what the model returns. MLflow doesn't enforce any specific behavior in how the `predict()` function generates results. There are scenarios where you probably want to do some pre-processing or post-processing before and after your model is executed.
A solution to this scenario is to implement machine learning pipelines that move from inputs to outputs directly. Although this is possible (and sometimes encouraged for performance reasons), it may be challenging to achieve. For those cases, you probably want to [customize how your model does inference using a custom model](how-to-log-mlflow-models.md?#logging-custom-models).
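
As a sketch of the custom-model approach, a wrapper can add pre- or post-processing around the inner model's `predict()` call. The artifact key `inner_model` and the thresholding logic below are illustrative assumptions, not part of the documented workflow:

```python
import mlflow
from mlflow.pyfunc import PythonModel


class ModelWrapper(PythonModel):
    """Illustrative wrapper that adds post-processing around an inner model."""

    def load_context(self, context):
        # "inner_model" is an assumed artifact key, configured when logging
        # the wrapper with mlflow.pyfunc.log_model(..., artifacts={...}).
        self.model = mlflow.sklearn.load_model(context.artifacts["inner_model"])

    def predict(self, context, model_input):
        raw = self.model.predict(model_input)
        # Example post-processing: map scores to labels with a 0.5 threshold.
        return ["positive" if score > 0.5 else "negative" for score in raw]
```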
### Customize inference with a scoring script
If you want to customize how inference is executed for MLflow models (or opt out of no-code deployment), refer to [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) and [Customizing MLflow model deployments (Batch Endpoints)](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
> [!IMPORTANT]
> When you opt in to providing a scoring script, you also need to provide an environment for the deployment.
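
For reference, a minimal scoring-script sketch for an MLflow model might look like the following. The `input_data` parsing and the `model` subfolder name are assumptions about your payload convention and model packaging; see the linked articles for the authoritative walkthroughs.

```python
import json
import os

import mlflow
import pandas as pd


def init():
    global model
    # AZUREML_MODEL_DIR points to the root of the registered model; the
    # "model" subfolder name is an assumption about how it was packaged.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)


def run(raw_data):
    # Assumes the Azure ML convention of an "input_data" key wrapping a
    # split-oriented DataFrame payload.
    data = json.loads(raw_data)["input_data"]
    frame = pd.DataFrame(data["data"], columns=data["columns"])
    return model.predict(frame).tolist()
```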