articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md (4 additions & 4 deletions)
@@ -20,11 +20,11 @@ In this article, you'll learn how you can progressively update and deploy MLflow
## About this example
-Online Endpoints have the concept of __Endpoint__ and __Deployment__. An endpoint represent the API that customers uses to consume the model, while the deployment indicates the specific implementation of that API. This distinction allows users to decouple the API from the implementation and to change the underlying implementation without affecting the consumer. This example will use this concepts to update the deployed model in endpoints without introducing service disruption.
+Online Endpoints have the concept of __Endpoint__ and __Deployment__. An endpoint represents the API that customers use to consume the model, while the deployment indicates the specific implementation of that API. This distinction allows users to decouple the API from the implementation and to change the underlying implementation without affecting the consumer. This example will use such concepts to update the deployed model in endpoints without introducing service disruption.
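As an editor's aside, the endpoint/deployment split described above can be sketched in plain Python. The class names below (`Endpoint`, `Deployment`) are illustrative stand-ins, not the Azure ML SDK classes: consumers always call the endpoint, while the implementation behind it can be swapped without the consumer changing anything.

```python
# Minimal sketch of the Endpoint/Deployment decoupling described above.
# All names here are hypothetical, not the azure-ai-ml SDK API.

class Deployment:
    """A specific implementation (model version) behind an endpoint."""
    def __init__(self, name, model_fn):
        self.name = name
        self.model_fn = model_fn  # callable: raw input -> prediction

class Endpoint:
    """The stable API surface that consumers call."""
    def __init__(self, name):
        self.name = name
        self.deployments = {}
        self.default = None

    def add_deployment(self, deployment, make_default=False):
        self.deployments[deployment.name] = deployment
        if make_default or self.default is None:
            self.default = deployment.name

    def invoke(self, payload):
        # Consumers never pick an implementation; the endpoint routes for them.
        return self.deployments[self.default].model_fn(payload)

endpoint = Endpoint("heart-classifier")
endpoint.add_deployment(Deployment("blue", lambda x: 0))   # v1 model
result_v1 = endpoint.invoke({"age": 63})

# Swap the underlying implementation; the consumer-facing call is unchanged:
endpoint.add_deployment(Deployment("green", lambda x: 1), make_default=True)
result_v2 = endpoint.invoke({"age": 63})
```

The consumer's call (`endpoint.invoke`) is identical before and after the swap, which is the property the article relies on to update models without service disruption.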
The model we will deploy is based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence). It has been trained using an `XGBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste files, clone the repo and then change directories to `sdk/using-mlflow/deploy`.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste files, clone the repo, and then change directories to `sdk/using-mlflow/deploy`.
### Follow along in Jupyter Notebooks
@@ -36,7 +36,7 @@ Before following the steps in this article, make sure you have the following pre
- Install the MLflow SDK package: `mlflow`.
- Install the Azure Machine Learning plug-in for MLflow: `azureml-mlflow`.
-- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. For more information about how to Set up tracking environment, see [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
+- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
### Registering the model in the registry
@@ -204,7 +204,7 @@ We are going to exploit this functionality by deploying multiple versions of the
### Create a blue deployment
-So far, the endpoint is empty. There are no deployments on it. Let's create the first one by deploying the same model we were working on before. We will call this deployment "default" and this will represent our "blue deployment".
+So far, the endpoint is empty. There are no deployments on it. Let's create the first one by deploying the same model we were working on before. We will call this deployment "default" and it will represent our "blue deployment".
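The "blue deployment" leads to progressive traffic shifting: traffic percentages on the endpoint decide how many requests each deployment serves. The sketch below is a plain-Python illustration of that weighting, not the Azure ML SDK (in the real service, weights are set on the endpoint's traffic configuration).

```python
# Hedged sketch of percentage-based traffic split between a "blue"
# (current, named "default") and "green" (candidate) deployment.

def choose_deployment(traffic, ticket):
    """Map a number in [0, 100) to a deployment name using cumulative weights."""
    cumulative = 0
    for name, weight in traffic.items():
        cumulative += weight
        if ticket < cumulative:
            return name
    raise ValueError("traffic weights must sum to 100")

# Progressive rollout step: shift 10% of traffic to the green candidate.
traffic = {"default": 90, "green": 10}

routed = [choose_deployment(traffic, t) for t in range(100)]
blue_share = routed.count("default")
green_share = routed.count("green")
```

Increasing the green weight step by step (10 → 50 → 100) and finally deleting the blue deployment is the zero-downtime update pattern the article builds toward.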
articles/machine-learning/how-to-deploy-mlflow-models.md (9 additions & 9 deletions)
@@ -37,21 +37,21 @@ For no-code-deployment, Azure Machine Learning:
* Ensures all the package dependencies indicated in the MLflow model are satisfied.
* Provides an MLflow base image/curated environment that contains the following items:
* Packages required for Azure Machine Learning to perform inference, including [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst).
-* An scoring script to perform inference.
+* A scoring script to perform inference.
> [!WARNING]
> Online Endpoints dynamically install the Python packages provided in the MLflow model package during container runtime. Deploying MLflow models to online endpoints with no-code deployment in a private network without egress connectivity is not supported at the moment. If that's your case, either enable egress connectivity or indicate the environment to use in the deployment, as explained in [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments). This limitation is not present in Batch Endpoints.
## How to customize inference when deploying MLflow models
-You may be use to author scoring scripts to customize how inference is executed for you models. This is particularly the case when you are using features like `autolog` in MLflow that automatically log models for you as the best of the knowledge of the framework. However, you may need to run inference in a different way.
+You may be used to authoring scoring scripts to customize how inference is executed for your models. This is particularly the case when you are using features like `autolog` in MLflow, which automatically logs models for you to the best of the framework's knowledge. However, you may need to run inference in a different way.
-For those cases, you can either [change how you model is being logged in the training routine](#change-how-your-model-is-logged-during-training) or [customize inference with a scoring script](#customize-inference-with-a-scoring-script)
+For those cases, you can either [change how your model is being logged in the training routine](#change-how-your-model-is-logged-during-training) or [customize inference with a scoring script](#customize-inference-with-a-scoring-script).
### Change how your model is logged during training
-When you log a model using either `mlflow.autolog` or using `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what gets returned by the model. MLflow doesn't enforce any specific behavior in how the `predict()` function generate results. There are scenarios where you probably want to do some pre-processing or post-processing before and after your model is executed.
+When you log a model using either `mlflow.autolog` or using `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what gets returned by the model. MLflow doesn't enforce any specific behavior in how the `predict()` function generates results. There are scenarios where you probably want to do some pre-processing or post-processing before and after your model is executed.
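One way to get that pre/post-processing is to wrap the model in a class whose `predict()` does the extra work. The sketch below is a hedged approximation using a plain Python class and a dummy inner model; in real MLflow code you would subclass `mlflow.pyfunc.PythonModel` and log the wrapper as a custom model, as the linked article describes.

```python
# Hedged sketch: a wrapper whose predict() adds pre/post-processing
# around an inner model. The inner "model" and the threshold parameter
# are illustrative stand-ins, not part of any real trained model.

class ModelWrapper:
    def __init__(self, model, threshold=0.5):
        self.model = model          # object exposing predict(inputs)
        self.threshold = threshold  # hypothetical post-processing parameter

    def predict(self, inputs):
        # Pre-processing: drop records with missing values.
        clean = [row for row in inputs if None not in row]
        # Inference: inner model returns probabilities.
        probs = self.model.predict(clean)
        # Post-processing: turn probabilities into 0/1 labels.
        return [1 if p >= self.threshold else 0 for p in probs]

class DummyProbModel:
    """Stand-in for a trained classifier returning probabilities."""
    def predict(self, rows):
        return [0.9 for _ in rows]  # pretend every row scores 0.9

wrapped = ModelWrapper(DummyProbModel())
labels = wrapped.predict([[1, 2], [3, None], [4, 5]])
```

Because the pre/post-processing lives inside `predict()`, it travels with the logged model, so no-code deployment still works.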
A solution to this scenario is to implement machine learning pipelines that move from inputs to outputs directly. Although this is possible (and sometimes encouraged for performance considerations), it may be challenging to achieve. For those cases, you probably want to [customize how your model does inference using a custom model](how-to-log-mlflow-models.md?#logging-custom-models).
@@ -60,7 +60,7 @@ A solution to this scenario is to implement machine learning pipelines that move
If you want to customize how inference is executed for MLflow models (or opt out of no-code deployment), you can refer to [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) and [Customizing MLflow model deployments (Batch Endpoints)](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script).
> [!IMPORTANT]
-> When you opt-in to indicate an scoring script, you also need to provide an environment for deployment.
+> When you opt-in to indicate a scoring script, you also need to provide an environment for deployment.
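For context, an Azure ML scoring script follows an `init()`/`run()` contract. The sketch below is a self-contained approximation with a dummy model and an assumed `input_data` request key; a real script would load the registered MLflow model (for example from the path in the `AZUREML_MODEL_DIR` environment variable) inside `init()`.

```python
# Hedged sketch of the init()/run() scoring-script contract.
# The model and the "input_data" payload key are illustrative;
# a real script loads the MLflow model in init().
import json

model = None

def init():
    # Runs once when the serving container starts: load the model here.
    global model
    model = lambda rows: [0 for _ in rows]  # dummy: always predict "no disease"

def run(raw_data):
    # Runs per request: parse the input, score it, return JSON-serializable output.
    payload = json.loads(raw_data)
    predictions = model(payload["input_data"])
    return {"predictions": predictions}

init()
response = run(json.dumps({"input_data": [[63, 1, 3], [37, 1, 2]]}))
```

Supplying such a script is what the warning above refers to: once you provide it, you must also provide the environment (conda/Docker) it runs in.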
## Deployment tools
@@ -76,14 +76,14 @@ Each workflow has different capabilities, particularly around which type of comp
| Scenario | MLflow SDK | Azure ML CLI/SDK | Azure ML studio |
-| Deploy MLflow models to managed online endpoints (with an scoring script) | Not supported |[See example](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments)| Not supported |
+| Deploy MLflow models to managed online endpoints (with a scoring script) | Not supported |[See example](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments)| Not supported |
| Deploy MLflow models to batch endpoints ||[See example](how-to-mlflow-batch.md)|[See example](how-to-mlflow-batch.md?tab=studio)|
-| Deploy MLflow models to batch endpoints (with an scoring script) ||[See example](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script)| Not supported |
+| Deploy MLflow models to batch endpoints (with a scoring script) ||[See example](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script)| Not supported |
| Deploy MLflow models to web services (ACI/AKS) | Supported<sup>2</sup> | <sup>2</sup> | <sup>2</sup> |
| Deploy MLflow models to web services (ACI/AKS - with a scoring script) | <sup>2</sup> | <sup>2</sup> | Supported<sup>2</sup> |
> [!NOTE]
-> - <sup>1</sup> Deployment to online endpoints in private link-enabled workspaces is not supported as public network access is required for package installation. We suggest to deploy with an scoring script on those scenarios.
+> - <sup>1</sup> Deployment to online endpoints in private link-enabled workspaces is not supported as public network access is required for package installation. We suggest deploying with a scoring script in those scenarios.
> - <sup>2</sup> We recommend switching to our [managed online endpoints](concept-endpoints.md) instead.
### Which option to use?
@@ -93,7 +93,7 @@ If you are familiar with MLflow or your platform support MLflow natively (like A
## Differences between MLflow models deployed in Azure Machine Learning and MLflow built-in server
-MLflow includes built-in deployment tools that model developers can use to test models locally. For instance, you can run a local instance of a model registered in MLflow server registry with `mlflow models serve -m my_model`. Since Azure Machine Learning online endpoints runs our influencing server technology, the behavior of these two services is different.
+MLflow includes built-in deployment tools that model developers can use to test models locally. For instance, you can run a local instance of a model registered in the MLflow server registry with `mlflow models serve -m my_model`. Since Azure Machine Learning online endpoints run our inferencing server technology, the behavior of these two services is different.