Commit c7709c4 ("2.0"), 1 parent: 8eafe35

File changed: articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md (16 additions, 29 deletions)
```diff
@@ -366,6 +366,11 @@ Use the following steps to deploy an MLflow model with a custom scoring script.
 input_schema = model.metadata.get_input_schema()
 
 def run(raw_data):
+    json_data = json.loads(raw_data)
+    if "input_data" not in json_data.keys():
+        raise Exception("Request must contain a top level key named 'input_data'")
+
+    serving_input = json.dumps(json_data["input_data"])
     data = infer_and_parse_json_input(raw_data, input_schema)
     result = model.predict(data)
 
```
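The validation added in this hunk can be exercised on its own. A minimal sketch of that logic as a standalone helper (the function name `extract_serving_input` is illustrative; in the actual scoring script the logic is inlined in `run()`):

```python
import json

def extract_serving_input(raw_data: str) -> str:
    """Validate an endpoint request body and return the MLflow serving input.

    Mirrors the check added above: the request must be a JSON object
    with a top-level 'input_data' key.
    """
    json_data = json.loads(raw_data)
    if "input_data" not in json_data.keys():
        raise Exception("Request must contain a top level key named 'input_data'")
    # Re-serialize only the payload that MLflow's input parsing consumes.
    return json.dumps(json_data["input_data"])

# A valid request passes through; a request missing 'input_data' raises.
print(extract_serving_input('{"input_data": {"data": [[1.0, 2.0]]}}'))
```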
```diff
@@ -377,6 +382,9 @@ Use the following steps to deploy an MLflow model with a custom scoring script.
 > [!TIP]
 > The previous scoring script is provided as an example of how to perform inference with an MLflow model. You can adapt this example to your needs or change any of its parts to reflect your scenario.
 
+> [!WARNING]
+> __MLflow 2.0 advisory__: The provided scoring script works with both MLflow 1.X and MLflow 2.X. However, be advised that the expected input/output formats may vary between those versions. Check the environment definition to ensure you are using the expected MLflow version. Note that MLflow 2.0 is only supported on Python 3.8+.
+
 1. Let's create an environment where the scoring script can be executed. Since the model is MLflow, the conda requirements are also specified in the model package (for more details about MLflow models and the files included in them, see The MLmodel format). We then build the environment using the conda dependencies from that file. However, we also need to include the package `azureml-inference-server-http`, which is required for online deployments in Azure Machine Learning.
 
 The conda definition file looks as follows:
```
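The conda file itself falls outside this hunk. As a sketch of what such a definition looks like once `azureml-inference-server-http` is added to the model's own conda dependencies (the channel, package set, and version pins here are illustrative, not the article's actual file):

```yaml
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - mlflow
      - scikit-learn
      - azureml-inference-server-http  # required for online deployments
name: mlflow-env
```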
````diff
@@ -485,34 +493,9 @@ Use the following steps to deploy an MLflow model with a custom scoring script.
 
 1. Once your deployment completes, your deployment is ready to serve requests. One of the easiest ways to test the deployment is by using a sample request file along with the `invoke` method.
 
-    **sample-request-sklearn-custom.json**
+    **sample-request-sklearn.json**
 
-    ```json
-    {
-        "dataframe_split": {
-            "columns": [
-                "age",
-                "sex",
-                "bmi",
-                "bp",
-                "s1",
-                "s2",
-                "s3",
-                "s4",
-                "s5",
-                "s6"
-            ],
-            "data": [
-                [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0 ],
-                [ 10.0, 2.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0 ]
-            ],
-            "index": [0, 1]
-        }
-    }
-    ```
-
-    > [!NOTE]
-    > Notice how the key `dataframe_split` has been used in this example instead of `input_data`. This is because we are using the MLflow method `infer_and_parse_json_input`, which uses the keys expected by MLflow serving (see [MLflow built-in deployment tools](https://www.mlflow.org/docs/latest/models.html#deploy-mlflow-models) for more input examples and formats). If you change the logic in the scoring script, the payload may be affected.
+    :::code language="json" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sample-request-sklearn.json":::
 
     To submit a request to the endpoint, you can do as follows:
 
````
````diff
@@ -528,7 +511,7 @@ Use the following steps to deploy an MLflow model with a custom scoring script.
 ml_client.online_endpoints.invoke(
     endpoint_name=endpoint_name,
     deployment_name=deployment.name,
-    request_file="sample-request-sklearn-custom.json",
+    request_file="sample-request-sklearn.json",
 )
 ```
````

```diff
@@ -538,7 +521,7 @@ Use the following steps to deploy an MLflow model with a custom scoring script.
 
 1. Go to the __Endpoints__ tab and select the new endpoint created.
 1. Go to the __Test__ tab.
-1. Paste the content of the file `sample-request-sklearn-custom.json`.
+1. Paste the content of the file `sample-request-sklearn.json`.
 1. Click on __Test__.
 1. The predictions will show up in the box on the right.
 
```

````diff
@@ -555,6 +538,10 @@ Use the following steps to deploy an MLflow model with a custom scoring script.
 }
 ```
 
+> [!WARNING]
+> __MLflow 2.0 advisory__: In MLflow 1.X, the key `predictions` will be missing.
+
+
 ## Clean up resources
 
 Once you're done with the endpoint, you can delete the associated resources:
````
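Given the advisory above that MLflow 1.X responses omit the `predictions` key, client code reading endpoint responses can normalize both shapes. A minimal sketch (the function name and sample values are illustrative, not part of the article):

```python
def extract_predictions(response_body):
    """Return the prediction list from an online endpoint response.

    MLflow 2.X wraps results in a top-level 'predictions' key, while
    MLflow 1.X may return the bare list of predictions instead.
    """
    if isinstance(response_body, dict) and "predictions" in response_body:
        return response_body["predictions"]
    return response_body

print(extract_predictions({"predictions": [11633.1, 8766.0]}))  # MLflow 2.X shape
print(extract_predictions([11633.1, 8766.0]))                   # MLflow 1.X shape
```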
