articles/machine-learning/how-to-deploy-mlflow-models.md

> [!TIP]
> Signatures in MLflow models are recommended because they provide a convenient way to detect data compatibility issues. For more information about how to log models with signatures, see [Logging models with a custom signature, environment or samples](how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).

## MLflow and Azure Machine Learning deployment targets

Model developers can use MLflow built-in deployment tools to test models locally. For instance, you can run a local instance of a model that's registered in the MLflow server registry by using the MLflow CLI commands `mlflow models serve` and `mlflow models predict`. For more information about MLflow built-in deployment tools, see [Built-in deployment tools](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools) in the MLflow documentation.

Azure Machine Learning also supports deploying models to both online and batch endpoints. These endpoints run different inferencing technologies that can have different features.

### Input formats

The following table shows the input types supported by the MLflow built-in server versus Azure Machine Learning online endpoints.

| Input type | MLflow built-in server | Azure Machine Learning online endpoints |
| :- | :-: | :-: |
| JSON-serialized pandas DataFrames in the split orientation | **✓** | **✓** |
| JSON-serialized pandas DataFrames in the records orientation | Deprecated | |
| CSV-serialized pandas DataFrames | **✓** | Use batch inferencing. For more information, see [Deploy MLflow models to batch endpoints](how-to-mlflow-batch.md). |
| TensorFlow input as JSON-serialized lists (tensors) and dictionary of lists (named tensors) | **✓** | **✓** |
| TensorFlow input using the TensorFlow Serving API | **✓** | |
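
The split and records orientations in the table are two JSON layouts of the same DataFrame. As a minimal sketch using only the standard library (column names are made up for illustration), the following shows both shapes and converts one to the other:

```python
import json

# The same two-row table in the two pandas JSON orientations.
# Column names ("age", "income") are illustrative.

# split orientation: one "columns" list plus row-wise "data".
split_payload = {"columns": ["age", "income"], "data": [[25, 1000], [32, 1500]]}

# records orientation: one object per row, repeating the column names.
records_payload = [{"age": 25, "income": 1000}, {"age": 32, "income": 1500}]

def records_to_split(records):
    """Convert a records-oriented payload to the split orientation."""
    columns = list(records[0])
    return {"columns": columns, "data": [[row[c] for c in columns] for row in records]}

assert records_to_split(records_payload) == split_payload
print(json.dumps(split_payload))
```

In pandas itself, these shapes correspond to `df.to_json(orient="split")` and `df.to_json(orient="records")`.
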

The following payload examples show differences between a model deployed in the MLflow built-in server and one deployed in Azure Machine Learning online endpoints.

# [MLflow server](#tab/builtin)

This payload uses MLflow server 2.0+.

```json
{
    "dataframe_split": {
        "columns": ["feature_1", "feature_2"],
        "data": [[1, 2], [3, 4]]
    }
}
```
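
As a sketch of the difference this section illustrates, the same rows can be wrapped for either target with the standard library. The assumption here is that the MLflow 2.0+ scoring server accepts a top-level `dataframe_split` key, while Azure Machine Learning online endpoints wrap the same structure in a top-level `input_data` key; column names and values are invented for illustration:

```python
import json

# Illustrative column names and rows; not taken from a real model.
columns = ["feature_1", "feature_2"]
rows = [[1, 2], [3, 4]]

# MLflow built-in server (2.0+): top-level "dataframe_split" key.
mlflow_payload = json.dumps({"dataframe_split": {"columns": columns, "data": rows}})

# Azure Machine Learning online endpoint: top-level "input_data" key.
azureml_payload = json.dumps({"input_data": {"columns": columns, "data": rows}})

print(mlflow_payload)
print(azureml_payload)
```
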
Scoring scripts customize how to execute inferencing for custom models. But for MLflow model deployment, the decision about how to execute inferencing is made by the model builder rather than by the deployment engineer. Each model framework can automatically apply specific inference routines.

If you need to change how inference is executed for an MLflow model, you can do one of the following things:

- Change how your model is being logged in the training routine.
- Customize inference with a scoring script at deployment time.
When you log a model by using either `mlflow.autolog` or `mlflow.<flavor>.log_model`, the flavor used for the model determines how to execute inference and what results to return. MLflow doesn't enforce any specific behavior for how the `predict()` function generates results.

In some cases, you might want to do some preprocessing or postprocessing before and after your model executes. Or, you might want to change what is returned, for example, probabilities instead of classes. One solution is to implement machine learning pipelines that move from inputs to outputs directly.

For example, [`sklearn.pipeline.Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) or [`pyspark.ml.Pipeline`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.Pipeline.html) are popular ways to implement pipelines and are sometimes recommended for performance reasons. You can also customize how your model does inferencing by [logging custom models](how-to-log-mlflow-models.md#logging-custom-models).
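
As a dependency-free sketch of the wrapper idea (this is not MLflow's API; in a real project you might subclass `mlflow.pyfunc.PythonModel` and log it as a custom model), the following hypothetical classes scale inputs before scoring and map probabilities to class labels afterward:

```python
# Sketch: wrap a model with pre/post-processing so predict() returns
# class labels instead of probabilities. All names are illustrative.

class ProbabilityModel:
    """Stand-in for a trained classifier that returns class probabilities."""
    def predict_proba(self, rows):
        # Hypothetical scores: first feature is the positive-class probability.
        return [[1 - r[0], r[0]] for r in rows]

class LabelingWrapper:
    """Pre/post-processing wrapper around a probability model."""
    def __init__(self, model, threshold=0.5):
        self.model = model
        self.threshold = threshold

    def predict(self, rows):
        # Preprocessing: clamp features to [0, 1].
        scaled = [[min(max(x, 0.0), 1.0) for x in row] for row in rows]
        # Postprocessing: map probabilities to class labels.
        probs = self.model.predict_proba(scaled)
        return ["positive" if p[1] >= self.threshold else "negative" for p in probs]

wrapped = LabelingWrapper(ProbabilityModel())
print(wrapped.predict([[0.9], [0.2]]))  # ['positive', 'negative']
```
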
### Customize inference with a scoring script

Although MLflow models don't require a scoring script, you can still provide one to customize inference execution for MLflow models if needed. For more information on how to customize inference, see [Customize MLflow model deployments](how-to-deploy-mlflow-models-online-endpoints.md#customize-mlflow-model-deployments) for online endpoints or [Customize model deployment with scoring script](how-to-mlflow-batch.md#customize-model-deployment-with-scoring-script) for batch endpoints.

> [!IMPORTANT]
> If you choose to specify a scoring script for an MLflow model deployment, you also need to provide an environment for the deployment.
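
The shape of such a scoring script can be sketched without any Azure dependencies. The `init()`/`run()` contract and the `AZUREML_MODEL_DIR` variable follow Azure Machine Learning conventions, but the stub predictor and the payload shape below are assumptions for illustration; a real `score.py` would load the MLflow model in `init()` (for example with `mlflow.pyfunc.load_model`) and ship with the environment the note above requires:

```python
# Sketch of a scoring script (score.py) that customizes inference.
import json

model = None

def init():
    # Runs once when the deployment starts. A real script (assumption) would
    # load the MLflow model, for example:
    #   model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    #   model = mlflow.pyfunc.load_model(model_path)
    global model
    model = lambda rows: [sum(row) for row in rows]  # stub predictor for the sketch

def run(raw_data):
    # Runs per request: parse the payload, score, and shape the response.
    data = json.loads(raw_data)["input_data"]["data"]
    return {"predictions": model(data)}

init()
print(run(json.dumps({"input_data": {"data": [[1, 2], [3, 4]]}})))
```
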

Azure Machine Learning offers the following tools to deploy MLflow models:

- MLflow SDK
- Azure Machine Learning CLI v2
- [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/)
- Azure Machine Learning studio

Each tool has different capabilities, particularly for which type of compute it can target. The following table shows the support for different MLflow deployment scenarios.

| Scenario | MLflow SDK | Azure Machine Learning CLI/SDK or studio |
|---|---|---|
| Deploy to managed online endpoints<sup>1</sup> | [Progressive rollout of MLflow models to online endpoints](how-to-deploy-mlflow-models-online-progressive.md) | [Deploy MLflow models to online endpoints](how-to-deploy-mlflow-models-online-endpoints.md) |
| Deploy to managed online endpoints with a scoring script | Not supported<sup>3</sup> | [Customize MLflow model deployments](how-to-deploy-mlflow-models-online-endpoints.md#customize-mlflow-model-deployments) |
| Deploy to batch endpoints | Not supported<sup>3</sup> | [Use MLflow models in batch deployments](how-to-mlflow-batch.md) |
| Deploy to batch endpoints with a scoring script | Not supported<sup>3</sup> | [Customize model deployment with scoring script](how-to-mlflow-batch.md#customize-model-deployment-with-scoring-script) |
| Deploy to web services like Azure Container Instances or Azure Kubernetes Service (AKS) | Legacy support<sup>2</sup> | Not supported<sup>2</sup> |
| Deploy to web services like Container Instances or AKS with a scoring script | Not supported<sup>3</sup> | Legacy support<sup>2</sup> |

<sup>1</sup> Deployment to online endpoints that are in workspaces with private link enabled requires you to [package models before deployment (preview)](how-to-package-models.md).

<sup>2</sup> Switch to [managed online endpoints](concept-endpoints.md) if possible.

<sup>3</sup> Open-source MLflow doesn't have the concept of a scoring script and doesn't support batch execution.

### Choose a deployment tool

Use the MLflow SDK if:

- You're familiar with MLflow and want to continue using the same methods.
- You're using a platform like Azure Databricks that supports MLflow natively.

Use the Azure Machine Learning CLI v2 or SDK for Python if:

- You're familiar with them.
- You want to automate deployment with pipelines.
- You want to keep deployment configuration in a Git repository.

Use the Azure Machine Learning studio UI if you want to quickly deploy and test models trained with MLflow.
## Related content

- [MLflow model deployment to online endpoints](how-to-deploy-mlflow-models-online-endpoints.md)