
Commit e46bc97

Commit message: review cx
1 parent aac0ea9 commit e46bc97

File tree

1 file changed: +15 additions, −13 deletions


articles/machine-learning/how-to-deploy-mlflow-models.md

Lines changed: 15 additions & 13 deletions
@@ -7,7 +7,7 @@ ms.service: azure-machine-learning
 ms.subservice: mlops
 author: msakande
 ms.author: mopeakande
-ms.reviewer: fasantia
+ms.reviewer: cacrest
 ms.date: 09/27/2024
 ms.topic: concept-article
 ms.custom: deploy, mlflow, devplatv2, no-code-deployment, cliv2, update-code, FY25Q1-Linter
@@ -19,16 +19,18 @@ ms.devlang: azurecli

 [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]

-In this article, learn about deployment of [MLflow](https://www.mlflow.org) models to Azure Machine Learning for both real-time and batch inference, and about different tools you can use to manage the deployment.
+In this article, learn about deployment of [MLflow](https://www.mlflow.org) models to Azure Machine Learning for both real-time and batch inference, and about different tools you can use to manage the deployments.

 ## No-code deployment

 When you deploy MLflow models to Azure Machine Learning, unlike with custom model deployment, you don't have to provide a scoring script or an environment. Azure Machine Learning automatically generates the scoring script and environment for you. This functionality is called *no-code deployment*.

-To ensure that all the package dependencies in the MLflow model are satisfied, Azure Machine Learning provides an MLflow base image or curated environment that contains:
+For no-code deployment, Azure Machine Learning:

-- Packages required to perform inference, including [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst).
-- A scoring script to perform inference.
+- Ensures that all the package dependencies indicated in the MLflow model are satisfied.
+- Provides an MLflow base image or curated environment that contains the following items:
+  - Packages required for Azure Machine Learning to perform inference, including [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst).
+  - A scoring script to perform inference.

 [!INCLUDE [mlflow-model-package-for-workspace-without-egress](includes/mlflow-model-package-for-workspace-without-egress.md)]

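As a sketch of the no-code path described above, a managed online deployment definition for an MLflow model can omit both `environment` and `code_configuration`. Every name here (deployment, endpoint, model, instance type) is a hypothetical placeholder, not taken from the article:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue                          # hypothetical deployment name
endpoint_name: my-mlflow-endpoint   # hypothetical endpoint name
model: azureml:my-mlflow-model:1    # a registered MLflow-format model
instance_type: Standard_DS3_v2
instance_count: 1
# No environment or code_configuration: for MLflow models, Azure Machine
# Learning generates the scoring script and environment automatically.
```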

@@ -41,7 +43,7 @@ The following example *conda.yaml* file shows conda dependencies specified in an
 :::code language="yaml" source="~/azureml-examples-main/sdk/python/endpoints/online/mlflow/sklearn-diabetes/model/conda.yaml":::

 > [!IMPORTANT]
-> MLflow automatically detects packages when it logs a model, and pins the package versions in the model's conda dependencies. This automatic package detection might not reflect your intentions or requirements. You can alternatively [log models with a custom signature, environment or samples](how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).
+> MLflow automatically detects packages when it logs a model, and it pins the package versions in the model's conda dependencies. This automatic package detection might not reflect your intentions or requirements. You can alternatively [log models with a custom signature, environment or samples](how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).

 ### Models with signatures

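To make the IMPORTANT note above concrete, a *conda.yaml* with auto-pinned versions might look roughly like the following. The package names and versions are illustrative; this is not the file referenced by the article's code include:

```yaml
channels:
  - conda-forge
dependencies:
  - python=3.10.12
  - pip
  - pip:
      - mlflow==2.9.2          # pinned automatically at logging time
      - scikit-learn==1.3.0    # detected and pinned by MLflow
      - numpy==1.26.0
name: mlflow-env
```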

@@ -57,7 +59,7 @@ The following example *MLmodel* file highlights the `signature`.
 > Signatures in MLflow models are recommended because they provide a convenient way to detect data compatibility issues. For more information about how to log models with signatures, see [Logging models with a custom signature, environment or samples](how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).

 <a name="models-deployed-in-azure-machine-learning-vs-models-deployed-in-the-mlflow-built-in-server"></a>
-## MLflow and Azure Machine Learning deployment targets
+## Deployment in the MLflow built-in server vs. deployment in the Azure Machine Learning inferencing server

 Model developers can use MLflow built-in deployment tools to test models locally. For instance, you can run a local instance of a model that's registered in the MLflow server registry by using `mlflow models serve` or the MLflow CLI `mlflow models predict`. For more information about MLflow built-in deployment tools, see [Built-in deployment tools](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools) in the MLflow documentation.
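For reference, an *MLmodel* file carrying a signature has roughly the following shape. The flavors, versions, and column names below are illustrative placeholders, not the article's actual example file:

```yaml
artifact_path: model
flavors:
  python_function:
    loader_module: mlflow.sklearn
    python_version: 3.10.12
  sklearn:
    sklearn_version: 1.3.0
signature:
  inputs: '[{"name": "age", "type": "double"}, {"name": "bmi", "type": "double"}]'
  outputs: '[{"type": "double"}]'
```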

@@ -71,7 +73,7 @@ Azure Machine Learning also supports deploying models to both online and batch e

 The following table shows the input types supported by the MLflow built-in server versus Azure Machine Learning online endpoints.

-| Input type | MLflow built-in server | Azure Machine Learning online endpoints |
+| Input type | MLflow built-in server | Azure Machine Learning online endpoint |
 |---| :-: | :-: |
 | JSON-serialized pandas DataFrames in the split orientation | **&check;** | **&check;** |
 | JSON-serialized pandas DataFrames in the records orientation | Deprecated | |
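To illustrate the split orientation the table refers to, the following sketch builds the two request payload shapes with only the standard library. The column names and values are hypothetical; the `dataframe_split` wrapper is what the MLflow 2.x built-in server expects, and wrapping the same structure under `input_data` is the shape Azure Machine Learning online endpoints accept for no-code MLflow deployments:

```python
import json

# Hypothetical feature columns and rows for a two-feature model input.
columns = ["age", "bmi"]
rows = [[39.0, 27.1], [54.0, 31.4]]

# Payload shape for the MLflow built-in server (MLflow 2.x).
mlflow_payload = {"dataframe_split": {"columns": columns, "data": rows}}

# Payload shape for an Azure Machine Learning online endpoint created
# through no-code deployment: the split-oriented frame sits under `input_data`.
azureml_payload = {"input_data": {"columns": columns, "index": [0, 1], "data": rows}}

# Serialize as the HTTP request body.
body = json.dumps(azureml_payload)
```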
@@ -205,7 +207,7 @@ If you need to change how inference is executed for an MLflow model, you can do

 When you log a model by using either `mlflow.autolog` or `mlflow.<flavor>.log_model`, the flavor used for the model determines how to execute inference and what results to return. MLflow doesn't enforce any specific behavior for how the `predict()` function generates results.

-In some cases, you might want to do some preprocessing or postprocessing before and after your model executes. Or, you might want to change what is returned, for example probabilities instead of classes. One solution is to implement machine learning pipelines that move from inputs to outputs directly.
+In some cases, you might want to do some preprocessing or postprocessing before and after your model executes. Or, you might want to change what is returned; for example, probabilities instead of classes. One solution is to implement machine learning pipelines that move from inputs to outputs directly.

 For example, [`sklearn.pipeline.Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) or [`pyspark.ml.Pipeline`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.Pipeline.html) are popular ways to implement pipelines, and are sometimes recommended for performance reasons. You can also customize how your model does inferencing by [logging custom models](how-to-log-mlflow-models.md?#logging-custom-models).
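A minimal sketch of the pipeline approach mentioned above: bundling preprocessing and the estimator in one `sklearn.pipeline.Pipeline`, so that logging the pipeline itself to MLflow makes inference run end to end. The data and step choices here are illustrative, not from the article; `predict_proba` shows the "probabilities instead of classes" output change the text discusses:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Tiny illustrative training set: 4 samples, 2 features, 2 classes.
X = [[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [3.0, 0.0]]
y = [0, 0, 1, 1]

pipeline = Pipeline(
    steps=[
        ("scale", StandardScaler()),      # preprocessing step
        ("model", LogisticRegression()),  # estimator
    ]
)
pipeline.fit(X, y)

# predict() would return class labels; predict_proba() returns
# per-class probabilities for each input row.
probabilities = pipeline.predict_proba([[1.5, 0.5]])
```

Logging this pipeline (for example with `mlflow.sklearn.log_model`) keeps the scaling step inside the model artifact, so no separate preprocessing code is needed at serving time.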

@@ -229,10 +231,10 @@ Each tool has different capabilities, particularly for which type of compute it

 | Scenario | MLflow SDK | Azure Machine Learning CLI/SDK or studio |
 |---| :-: | :-: |
-| Deploy to managed online endpoints<sup>1</sup> | [Progressive rollout of MLflow models to online endpoints](how-to-deploy-mlflow-models-online-progressive.md) | [Deploy MLflow models to online endpoints](how-to-deploy-mlflow-models-online-endpoints.md) |
-| Deploy to managed online endpoints with a scoring script | Not supported<sup>3</sup> | [Customize MLflow model deployments](how-to-deploy-mlflow-models-online-endpoints.md#customize-mlflow-model-deployments) |
-| Deploy to batch endpoints | Not supported<sup>3</sup> | [Use MLflow models in batch deployments](how-to-mlflow-batch.md) |
-| Deploy to batch endpoints with a scoring script | Not supported<sup>3</sup> | [Customize model deployment with scoring script](how-to-mlflow-batch.md#customize-model-deployment-with-scoring-script) |
+| Deploy to managed online endpoints<sup>1</sup> | Supported. See [Progressive rollout of MLflow models to online endpoints](how-to-deploy-mlflow-models-online-progressive.md) | Supported. See [Deploy MLflow models to online endpoints](how-to-deploy-mlflow-models-online-endpoints.md) |
+| Deploy to managed online endpoints with a scoring script | Not supported<sup>3</sup> | Supported. See [Customize MLflow model deployments](how-to-deploy-mlflow-models-online-endpoints.md#customize-mlflow-model-deployments) |
+| Deploy to batch endpoints | Not supported<sup>3</sup> | Supported. See [Use MLflow models in batch deployments](how-to-mlflow-batch.md) |
+| Deploy to batch endpoints with a scoring script | Not supported<sup>3</sup> | Supported. See [Customize model deployment with scoring script](how-to-mlflow-batch.md#customize-model-deployment-with-scoring-script) |
 | Deploy to web services like Azure Container Instances or Azure Kubernetes Service (AKS) | Legacy support<sup>2</sup> | Not supported<sup>2</sup> |
 | Deploy to web services like Container Instances or AKS with a scoring script | Not supported<sup>3</sup> | Legacy support<sup>2</sup> |

