Azure Machine Learning model monitoring enables you to track the objective performance of your models in production by calculating model performance metrics. These metrics include accuracy for classification models and root mean squared error (RMSE) for regression models.

Before you can configure your model performance signal, you need to satisfy the following requirements:

* Have production model output data (your model's predictions) with a unique ID for each row.
* Have ground truth data (or actuals) with a unique ID for each row. This data will be joined with production data.
* (Optional) Have a prejoined dataset with model outputs and ground truth data.

The key requirement for enabling model performance monitoring is that you have already collected ground truth data. Because ground truth data is collected at the application level, it's your responsibility to gather it as it becomes available. You should also maintain a data asset in Azure Machine Learning with this ground truth data.

To illustrate, suppose you have a deployed model that predicts whether a credit card transaction is fraudulent. As you use this model in production, you can collect the model output data with the [model data collector](how-to-collect-production-data.md). Ground truth data becomes available when a credit card holder specifies whether or not the transaction was fraudulent. This `is_fraud` ground truth should be collected at the application level and maintained in an Azure Machine Learning data asset that the model performance monitoring signal can use.
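For example, you might register the ground truth data as a data asset so that the monitoring signal can reference it. This is a sketch; the asset name, data type, and local path are placeholder assumptions:

```azurecli
az ml data create --name credit-card-fraud-ground-truth --version 1 --type mltable --path ./ground-truth-data
```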
# [Azure CLI](#tab/azure-cli)
Once you've satisfied the previous requirements, you can set up model monitoring with the following CLI command and YAML definition:

```azurecli
az ml schedule create -f ./model-performance-monitoring.yaml
```
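After the schedule is created, you can confirm that it exists and view its status with `az ml schedule show`. The schedule name below assumes that the `name` field in your YAML definition is `model_performance_monitoring`:

```azurecli
az ml schedule show --name model_performance_monitoring
```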
The following YAML contains the definition for model monitoring with production inference data that you've collected.
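A minimal sketch of the general shape of such a definition follows; the endpoint, deployment, ground truth data asset, column names, and metric thresholds are placeholder assumptions that you should replace with your own values:

```yaml
# Sketch of model-performance-monitoring.yaml. All names, paths, and
# threshold values are placeholders.
$schema: http://azureml/sdk-2-0/Schedule.json
name: model_performance_monitoring
display_name: Credit card fraud model performance

trigger:
  # Run the monitor on a recurring schedule, for example once a day.
  type: recurrence
  frequency: day
  interval: 1

create_monitor:
  compute:
    instance_type: standard_e4s_v3
    runtime_version: "3.3"
  monitoring_target:
    ml_task: classification
    endpoint_deployment_id: azureml:credit-card-fraud-endpoint:credit-card-fraud-deployment

  monitoring_signals:
    fraud_detection_model_performance:
      type: model_performance
      # Production model outputs collected by the model data collector.
      production_data:
        data_column_names:
          prediction: is_fraud
          correlation_id: correlation_id
      # Ground truth (actuals) maintained as an Azure Machine Learning data asset.
      reference_data:
        input_data:
          path: azureml:credit-card-fraud-ground-truth:1
          type: mltable
        data_column_names:
          actual: is_fraud
          correlation_id: correlation_id
        data_context: actuals
      alert_enabled: true
      metric_thresholds:
        tabular_classification:
          accuracy: 0.9
```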
# [Studio](#tab/azure-studio)

The studio currently doesn't support model performance monitoring.

---
## Set up model monitoring by bringing your own production data to Azure Machine Learning
You can also set up model monitoring for models deployed to Azure Machine Learning batch endpoints or deployed outside of Azure Machine Learning. If you have production data but no deployment, you can use the data to perform continuous model monitoring. To monitor these models, you must meet the following requirements: