
Commit 276aa07

fix: acrolyx
1 parent 6fabe9b commit 276aa07

File tree

2 files changed, +13 -13 lines changed


articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md

Lines changed: 4 additions & 4 deletions
@@ -20,11 +20,11 @@ In this article, you'll learn how you can progressively update and deploy MLflow
 ## About this example

-Online Endpoints have the concept of __Endpoint__ and __Deployment__. An endpoint represent the API that customers uses to consume the model, while the deployment indicates the specific implementation of that API. This distinction allows users to decouple the API from the implementation and to change the underlying implementation without affecting the consumer. This example will use this concepts to update the deployed model in endpoints without introducing service disruption.
+Online Endpoints have the concept of __Endpoint__ and __Deployment__. An endpoint represents the API that customers use to consume the model, while the deployment indicates the specific implementation of that API. This distinction allows users to decouple the API from the implementation and to change the underlying implementation without affecting the consumer. This example uses these concepts to update the deployed model in endpoints without introducing service disruption.

 The model we will deploy is based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer-valued from 0 (no presence) to 1 (presence). It has been trained using an `XGBoost` classifier, and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
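For orientation only, here is a rough sketch of how such an end-to-end pipeline might be built and logged. The actual training code lives in the azureml-examples repository; the column names, preprocessing choices, and file name below are assumptions:

```python
# Illustrative sketch only; not the training code from azureml-examples.
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from xgboost import XGBClassifier

df = pd.read_csv("heart.csv")  # hypothetical local copy of the UCI subset
X, y = df.drop(columns=["target"]), df["target"]

numeric = ["age", "trestbps", "chol", "thalach", "oldpeak"]                      # assumed numeric columns
categorical = ["sex", "cp", "fbs", "restecg", "exang", "slope", "ca", "thal"]    # assumed categorical columns

model = Pipeline(steps=[
    ("preprocess", ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("classifier", XGBClassifier()),
])
model.fit(X, y)

# Log the whole pipeline as a single MLflow model so it goes from raw data to predictions.
with mlflow.start_run():
    mlflow.sklearn.log_model(model, artifact_path="model")
```

Packaging the preprocessing inside the pipeline is what lets the deployed model accept raw data directly.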

-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste files, clone the repo and then change directories to `sdk/using-mlflow/deploy`.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste files, clone the repo, and then change directories to `sdk/using-mlflow/deploy`.

 ### Follow along in Jupyter Notebooks

@@ -36,7 +36,7 @@ Before following the steps in this article, make sure you have the following pre
 - Install the MLflow SDK package: `mlflow`.
 - Install the Azure Machine Learning plug-in for MLflow: `azureml-mlflow`.
-- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. For more information about how to Set up tracking environment, see [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
+- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
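A minimal sketch of that last configuration step, assuming the Azure ML SDK v2 (`azure-ai-ml`) is installed and using placeholder workspace values; the linked article remains the authoritative reference:

```python
# Point MLflow at the workspace when you are not on Azure Machine Learning compute.
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",       # placeholder values
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# The workspace exposes its MLflow tracking URI.
tracking_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(tracking_uri)
mlflow.set_registry_uri(tracking_uri)  # assumption: registry and tracking share the same URI here
```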

 ### Registering the model in the registry
@@ -204,7 +204,7 @@ We are going to exploit this functionality by deploying multiple versions of the
 ### Create a blue deployment

-So far, the endpoint is empty. There are no deployments on it. Let's create the first one by deploying the same model we were working on before. We will call this deployment "default" and this will represent our "blue deployment".
+So far, the endpoint is empty. There are no deployments on it. Let's create the first one by deploying the same model we were working on before. We will call this deployment "default" and it will represent our "blue deployment".

 #. Configure the deployment
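As a hedged illustration of what this configuration step could look like with the Azure ML Python SDK v2 (the endpoint name, model name and version, and VM size below are assumptions, and the article itself may use a different client):

```python
# Sketch of creating the "default" (blue) deployment on an existing endpoint.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Assumed names: the registered MLflow model and the endpoint created earlier.
model = ml_client.models.get(name="heart-classifier", version="1")

blue_deployment = ManagedOnlineDeployment(
    name="default",                        # our "blue" deployment
    endpoint_name="heart-classifier-edp",  # assumed endpoint name
    model=model,
    instance_type="Standard_DS3_v2",
    instance_count=1,
)

ml_client.online_deployments.begin_create_or_update(blue_deployment).result()
```

Traffic still needs to be routed to this deployment on the endpoint before it serves requests.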
articles/machine-learning/how-to-deploy-mlflow-models.md

Lines changed: 9 additions & 9 deletions
@@ -37,21 +37,21 @@ For no-code-deployment, Azure Machine Learning:
 * Ensures all the package dependencies indicated in the MLflow model are satisfied.
 * Provides an MLflow base image/curated environment that contains the following items:
   * Packages required for Azure Machine Learning to perform inference, including [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst).
-  * An scoring script to perform inference.
+  * A scoring script to perform inference.

 > [!WARNING]
 > Online Endpoints dynamically install the Python packages provided in the MLflow model package during container runtime. Deploying MLflow models to online endpoints with no-code deployment in a private network without egress connectivity is not supported at the moment. If that's your case, either enable egress connectivity or indicate the environment to use in the deployment as explained in [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments). This limitation is not present in Batch Endpoints.

 ## How to customize inference when deploying MLflow models

-You may be use to author scoring scripts to customize how inference is executed for you models. This is particularly the case when you are using features like `autolog` in MLflow that automatically log models for you as the best of the knowledge of the framework. However, you may need to run inference in a different way.
+You may be used to authoring scoring scripts to customize how inference is executed for your models. This is particularly the case when you are using features like `autolog` in MLflow, which automatically logs models for you to the best of the framework's knowledge. However, you may need to run inference in a different way.

-For those cases, you can either [change how you model is being logged in the training routine](#change-how-your-model-is-logged-during-training) or [customize inference with a scoring script](#customize-inference-with-a-scoring-script)
+For those cases, you can either [change how your model is being logged in the training routine](#change-how-your-model-is-logged-during-training) or [customize inference with a scoring script](#customize-inference-with-a-scoring-script).

 ### Change how your model is logged during training

-When you log a model using either `mlflow.autolog` or using `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what gets returned by the model. MLflow doesn't enforce any specific behavior in how the `predict()` function generate results. There are scenarios where you probably want to do some pre-processing or post-processing before and after your model is executed.
+When you log a model using either `mlflow.autolog` or `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what gets returned by the model. MLflow doesn't enforce any specific behavior in how the `predict()` function generates results. There are scenarios where you probably want to do some pre-processing or post-processing before and after your model is executed.

 A solution to this scenario is to implement machine learning pipelines that move from inputs to outputs directly. Although this is possible (and sometimes encouraged for performance considerations), it may be challenging to achieve. For those cases, you probably want to [customize how your model does inference using a custom model](how-to-log-mlflow-models.md?#logging-custom-models).
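As a rough sketch of that custom-model route (the wrapper name, threshold, and artifact URI below are made up for illustration), a pyfunc model can add post-processing around `predict()` like this:

```python
# Hedged sketch: wrapping an existing model so post-processing runs inside predict().
import mlflow
import mlflow.pyfunc
import mlflow.sklearn


class HeartClassifierWrapper(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        # Load the inner model that was attached as an artifact of this pyfunc model.
        self.inner = mlflow.sklearn.load_model(context.artifacts["inner_model"])

    def predict(self, context, model_input):
        # Post-processing: turn raw probabilities into 0/1 labels with a custom threshold.
        probabilities = self.inner.predict_proba(model_input)[:, 1]
        return (probabilities >= 0.5).astype(int)


with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="model",
        python_model=HeartClassifierWrapper(),
        artifacts={"inner_model": "runs:/<RUN_ID>/model"},  # placeholder artifact URI
    )
```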
@@ -60,7 +60,7 @@ A solution to this scenario is to implement machine learning pipelines that move
 If you want to customize how inference is executed for MLflow models (or opt out of no-code deployment), you can refer to [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) and [Customizing MLflow model deployments (Batch Endpoints)](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script).

 > [!IMPORTANT]
-> When you opt-in to indicate an scoring script, you also need to provide an environment for deployment.
+> When you opt in to indicate a scoring script, you also need to provide an environment for deployment.
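For orientation, a minimal scoring-script sketch; treat it as an assumption-laden outline (the model folder name, request shape, and return handling are guesses), with the linked articles defining the actual contract:

```python
# score.py - hedged sketch of a custom scoring script for an MLflow model deployment.
# AZUREML_MODEL_DIR is set by Azure ML at runtime; the "model" subfolder name is an assumption.
import json
import os

import mlflow.pyfunc
import pandas as pd

model = None


def init():
    global model
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)


def run(raw_data):
    # Assumed request shape: {"input_data": [...]} - adjust to your own contract.
    data = pd.DataFrame(json.loads(raw_data)["input_data"])
    predictions = model.predict(data)
    return predictions.tolist()  # assumes the model returns a numpy array
```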

 ## Deployment tools
@@ -76,14 +76,14 @@ Each workflow has different capabilities, particularly around which type of comp
 | Scenario | MLflow SDK | Azure ML CLI/SDK | Azure ML studio |
 | :- | :-: | :-: | :-: |
 | Deploy MLflow models to managed online endpoints | [See example](how-to-deploy-mlflow-models-online-progressive.md)<sup>1</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md)<sup>1</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md?tabs=studio)<sup>1</sup> |
-| Deploy MLflow models to managed online endpoints (with an scoring script) | Not supported | [See example](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) | Not supported |
+| Deploy MLflow models to managed online endpoints (with a scoring script) | Not supported | [See example](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) | Not supported |
 | Deploy MLflow models to batch endpoints | | [See example](how-to-mlflow-batch.md) | [See example](how-to-mlflow-batch.md?tab=studio) |
-| Deploy MLflow models to batch endpoints (with an scoring script) | | [See example](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script) | Not supported |
+| Deploy MLflow models to batch endpoints (with a scoring script) | | [See example](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script) | Not supported |
 | Deploy MLflow models to web services (ACI/AKS) | Supported<sup>2</sup> | <sup>2</sup> | <sup>2</sup> |
 | Deploy MLflow models to web services (ACI/AKS - with a scoring script) | <sup>2</sup> | <sup>2</sup> | Supported<sup>2</sup> |

 > [!NOTE]
-> - <sup>1</sup> Deployment to online endpoints in private link-enabled workspaces is not supported as public network access is required for package installation. We suggest to deploy with an scoring script on those scenarios.
+> - <sup>1</sup> Deployment to online endpoints in private link-enabled workspaces is not supported as public network access is required for package installation. We suggest deploying with a scoring script in those scenarios.
 > - <sup>2</sup> We recommend switching to our [managed online endpoints](concept-endpoints.md) instead.

 ### Which option to use?
@@ -93,7 +93,7 @@ If you are familiar with MLflow or your platform support MLflow natively (like A

 ## Differences between MLflow models deployed in Azure Machine Learning and MLflow built-in server

-MLflow includes built-in deployment tools that model developers can use to test models locally. For instance, you can run a local instance of a model registered in MLflow server registry with `mlflow models serve -m my_model`. Since Azure Machine Learning online endpoints runs our influencing server technology, the behavior of these two services is different.
+MLflow includes built-in deployment tools that model developers can use to test models locally. For instance, you can run a local instance of a model registered in the MLflow server registry with `mlflow models serve -m my_model`. Since Azure Machine Learning online endpoints run our inferencing server technology, the behavior of these two services is different.
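To make the contrast concrete, a hedged sketch of querying such a locally served model, assuming MLflow 2.x payload conventions, the default local port, and illustrative column names:

```python
# Query a model served locally with `mlflow models serve -m my_model`.
import requests

payload = {
    "dataframe_split": {
        "columns": ["age", "sex", "cp"],  # illustrative subset of columns
        "data": [[63, 1, 3]],
    }
}

response = requests.post(
    "http://127.0.0.1:5000/invocations",
    json=payload,
    headers={"Content-Type": "application/json"},
)
print(response.json())
```

Azure Machine Learning online endpoints expect a different payload shape, which the following section covers.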

 ### Input formats
0 commit comments
