#Customer intent: As a data scientist, I want to detect data drift in my datasets and set alerts for when drift is large.
---
# Data drift (preview) will be retired and replaced by Model Monitor

Data drift (preview) will be retired on 09/01/2025. You can start using [Model Monitor](../how-to-monitor-model-performance.md) for your data drift tasks.

Review the content below to understand the replacement, the feature gaps, and the manual migration steps.
* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install), which includes the azureml-datasets package.
* Structured (tabular) data with a timestamp specified in the file path, file name, or column in the data.
## Prerequisites (Migrate to Model Monitor)

When you migrate to Model Monitor, check the [prerequisites of Azure Machine Learning model monitoring](../how-to-monitor-model-performance.md#prerequisites).
## What is data drift?
Model accuracy degrades over time, largely because of data drift. For machine learning models, data drift is the change in model input data that leads to model performance degradation. Monitoring data drift helps detect these model performance issues.
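For intuition only (this is not the service's algorithm, and the numbers are invented), a toy sketch of how a shift in an input feature's distribution can be summarized as a single drift signal:

```python
def mean_shift(baseline: list[float], target: list[float]) -> float:
    """Absolute difference of means: a crude, one-number drift signal.
    Illustrative only -- real drift monitoring uses distribution-level metrics."""
    return abs(sum(target) / len(target) - sum(baseline) / len(baseline))

# Hypothetical example: an "age" feature drifts older in production.
train_ages = [23.0, 25.0, 31.0, 29.0, 27.0]   # training-time distribution
prod_ages = [41.0, 39.0, 44.0, 38.0, 43.0]    # recent production data
print(mean_shift(train_ages, prod_ages))  # → 14.0
```

A large shift like this is exactly the kind of change that degrades model accuracy without any change to the model itself.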
In Azure Machine Learning, you use dataset monitors to detect and alert for data drift.
## Dataset monitors
With a dataset monitor you can:
The monitor compares the baseline and target datasets.
### Migrate to Model Monitor

In Model Monitor, you can find the corresponding concepts as follows. For more details, see [Set up model monitoring by bringing in your production data to Azure Machine Learning](../how-to-monitor-model-performance.md#set-up-out-of-box-model-monitoring):

* Reference dataset: similar to the baseline dataset for data drift detection. It's set to the recent past production inference dataset.
* Production inference data: similar to your target dataset in data drift detection, the production inference data can be collected automatically from models deployed in production. It can also be inference data you store.
## Create target dataset
The target dataset needs the `timeseries` trait set on it, by specifying a timestamp column either from a column in the data or from a virtual column derived from the path pattern of the files. Create the dataset with a timestamp through the [Python SDK](#sdk-dataset) or [Azure Machine Learning studio](#studio-dataset). A column representing a "timestamp" must be specified to add the `timeseries` trait to the dataset. If your data is partitioned into a folder structure with time information, such as '{yyyy/MM/dd}', create a virtual column through the path pattern setting and set it as the "partition timestamp" to enable time series API functionality.
> For a full example of using the `timeseries` trait of datasets, see the [example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb) or the [datasets SDK documentation](/python/api/azureml-core/azureml.data.tabulardataset#with-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false----kwargs-).
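As a rough illustration of the "partition timestamp" idea, in plain Python rather than the SDK (the path layout and regex here are assumptions for the example):

```python
import re
from datetime import datetime

def partition_timestamp(path: str) -> datetime:
    """Derive a virtual 'partition timestamp' from a '{yyyy/MM/dd}' folder layout.

    This mimics, conceptually, what the dataset's path-pattern setting does:
    recover a timestamp from the file path when the data itself has no
    timestamp column. Illustrative only -- not the SDK's implementation.
    """
    match = re.search(r"(\d{4})/(\d{2})/(\d{2})", path)
    if match is None:
        raise ValueError(f"No yyyy/MM/dd partition found in path: {path}")
    year, month, day = map(int, match.groups())
    return datetime(year, month, day)

print(partition_timestamp("weather-data/2023/05/01/part-0001.csv"))  # → 2023-05-01 00:00:00
```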
# [Studio](#tab/azure-studio)
<a name="studio-dataset"></a>
If you create your dataset using Azure Machine Learning studio, ensure the path to your data contains timestamp information, include all subfolders with data, and set the partition format.
Create a dataset monitor to detect and alert to data drift on a new dataset. Use either the [Python SDK](#sdk-monitor) or [Azure Machine Learning studio](#studio-monitor).
The **backfill** function runs a backfill job, for a specified start and end date range. A backfill job fills in expected missing data points in a data set, as a way to ensure data accuracy and completeness.
> [!NOTE]
> Azure Machine Learning model monitoring doesn't support a manual **backfill** function. If you want to redo the model monitor for a specific time range, create another model monitor for that time range.
:::image type="content" source="media/how-to-monitor-datasets/wizard.png" alt-text="Create a monitor wizard":::

1. **Select target dataset**. The target dataset is a tabular dataset with a specified timestamp column, which is analyzed for data drift. The target dataset must have features in common with the baseline dataset, and it should be a `timeseries` dataset to which new data is appended. Historical data in the target dataset can be analyzed, or new data can be monitored.

1. **Select baseline dataset.** Select the tabular dataset to use as the baseline for comparison with the target dataset over time. The baseline dataset must have features in common with the target dataset. Select a time range to use a slice of the target dataset, or specify a separate dataset to use as the baseline.

1. **Monitor settings**. These settings are for the scheduled dataset monitor pipeline that will be created.
After completion of the wizard, the resulting dataset monitor will appear in the list. Select it to go to that monitor's details page.
# [Azure CLI](#tab/azure-cli)
<a name="cli-monitor"></a>
Not supported
---
## Create Model Monitor (Migrate to Model Monitor)

When you migrate to Model Monitor, if you deployed your model to production in an Azure Machine Learning online endpoint and enabled [data collection](../how-to-collect-production-data.md) at deployment time, Azure Machine Learning collects production inference data and automatically stores it in Microsoft Azure Blob Storage. You can then use Azure Machine Learning model monitoring to continuously monitor this production inference data, and you can directly choose the model to create the target dataset (the production inference data in Model Monitor).
When you migrate to Model Monitor, if you didn't deploy your model to production in an Azure Machine Learning online endpoint, or you don't want to use [data collection](../how-to-collect-production-data.md), you can also [set up model monitoring with custom signals and metrics](../how-to-monitor-model-performance.md#set-up-model-monitoring-with-custom-signals-and-metrics).

The following sections contain more details on how to migrate to Model Monitor.
## Create Model Monitor via automatically collected production data (Migrate to Model Monitor)

Use this approach if you deployed your model to production in an Azure Machine Learning online endpoint and enabled [data collection](../how-to-collect-production-data.md) at deployment time.
# [Python SDK](#tab/python)
<a name="sdk-model-monitor"></a>
You can use the following code to set up the out-of-box model monitoring:
# [Studio](#tab/azure-studio)

1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
1. Go to your workspace.
1. Select **Monitoring** from the **Manage** section.
1. Select **Add**.
:::image type="content" source="../media/how-to-monitor-models/add-model-monitoring.png" alt-text="Screenshot showing how to add model monitoring." lightbox="../media/how-to-monitor-models/add-model-monitoring.png":::
1. On the **Basic settings** page, use **(Optional) Select model** to choose the model to monitor.
1. The **(Optional) Select deployment with data collection enabled** dropdown list should be automatically populated if the model is deployed to an Azure Machine Learning online endpoint. Select the deployment from the dropdown list.
1. Select the training data to use as the comparison reference in the **(Optional) Select training data** box.
1. Enter a name for the monitor in **Monitor name**, or keep the default name.
1. Notice that the virtual machine size is already selected for you.
1. Select your **Time zone**.
1. Select **Recurrence** or **Cron expression** scheduling.
1. For **Recurrence** scheduling, specify the repeat frequency, day, and time. For **Cron expression** scheduling, enter a cron expression for the monitoring run.
:::image type="content" source="../media/how-to-monitor-models/model-monitoring-basic-setup.png" alt-text="Screenshot of basic settings page for model monitoring." lightbox="../media/how-to-monitor-models/model-monitoring-basic-setup.png":::
1. Select **Next** to go to the **Advanced settings** section.
1. Select **Next** on the **Configure data asset** page to keep the default datasets.
1. Select **Next** to go to the **Select monitoring signals** page.
1. Select **Next** to go to the **Notifications** page. Add your email to receive email notifications.
1. Review your monitoring details and select **Create** to create the monitor.
# [Azure CLI](#tab/azure-cli)
<a name="cli-model-monitor"></a>
Azure Machine Learning model monitoring uses `az ml schedule` to schedule a monitoring job. You can create the out-of-box model monitor with the following CLI command and YAML definition:
```azurecli
az ml schedule create -f ./out-of-box-monitoring.yaml
```
The following YAML contains the definition for the out-of-box model monitoring.
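A minimal sketch of such a schedule definition follows. The names (`credit_default_model_monitoring`, the `credit-default:main` endpoint/deployment, the email address) are placeholders, and the field names should be verified against the current `az ml schedule` YAML schema:

```yaml
# out-of-box-monitoring.yaml -- illustrative sketch, not a definitive definition
$schema: http://azureml/sdk-2-0/Schedule.json
name: credit_default_model_monitoring
trigger:
  type: recurrence
  frequency: day        # run once per day
  interval: 1
  schedule:
    hours: 3            # at 03:15
    minutes: 15
create_monitor:
  compute:
    instance_type: standard_e4s_v3
    runtime_version: "3.3"
  monitoring_target:
    ml_task: classification
    endpoint_deployment_id: azureml:credit-default:main   # <endpoint>:<deployment>
  alert_notification:
    emails:
      - you@example.com
```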
## Create Model Monitor via custom data preprocessing component (Migrate to Model Monitor)
When you migrate to Model Monitor, if you didn't deploy your model to production in an Azure Machine Learning online endpoint, or you don't want to use [data collection](../how-to-collect-production-data.md), you can also [set up model monitoring with custom signals and metrics](../how-to-monitor-model-performance.md#set-up-model-monitoring-with-custom-signals-and-metrics).
If you don't have a deployment, but you have production data, you can use the data to perform continuous model monitoring. To monitor these models, you must be able to:
* Collect production inference data from models deployed in production.
* Register the production inference data as an Azure Machine Learning data asset, and ensure continuous updates of the data.
* Provide a custom data preprocessing component and register it as an Azure Machine Learning component.
You must provide a custom data preprocessing component if your data isn't collected with the [data collector](../how-to-collect-production-data.md). Without this custom data preprocessing component, the Azure Machine Learning model monitoring system won't know how to process your data into tabular form with support for time windowing.
Your custom preprocessing component must have these input and output signatures:
| Input/Output | Signature name | Type | Description | Example value |
|---|---|---|---|---|
| input | `data_window_start` | literal, string | Data window start time in ISO 8601 format. | 2023-05-01T04:31:57.012Z |
| input | `data_window_end` | literal, string | Data window end time in ISO 8601 format. | 2023-05-01T04:31:57.012Z |
| input | `input_data` | uri_folder | The collected production inference data, which is registered as an Azure Machine Learning data asset. | azureml:myproduction_inference_data:1 |
| output | `preprocessed_data` | mltable | A tabular dataset, which matches a subset of the reference data schema. | |
For an example of a custom data preprocessing component, see [custom_preprocessing in the azureml-examples GitHub repo](https://github.com/Azure/azureml-examples/tree/main/cli/monitoring/components/custom_preprocessing).
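The contract above can be sketched in plain Python. This is an illustrative stand-in, not a real Azure Machine Learning component: a real component reads files from the `input_data` uri_folder and writes an `mltable`, while here we take CSV text with an assumed `timestamp` column to keep the example self-contained.

```python
import csv
import io
from datetime import datetime

def preprocess(data_window_start: str, data_window_end: str, csv_text: str) -> list[dict]:
    """Keep only rows whose timestamp falls inside
    [data_window_start, data_window_end), yielding the tabular data a
    monitoring signal can consume. Sketch of the contract only."""
    start = datetime.fromisoformat(data_window_start)
    end = datetime.fromisoformat(data_window_end)
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        ts = datetime.fromisoformat(row["timestamp"])
        if start <= ts < end:
            rows.append(row)
    return rows

sample = "timestamp,feature\n2023-05-01T04:00:00,1.0\n2023-05-02T04:00:00,2.0\n"
print(preprocess("2023-05-01T00:00:00", "2023-05-02T00:00:00", sample))
# → [{'timestamp': '2023-05-01T04:00:00', 'feature': '1.0'}]
```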
## Understand data drift results
This section shows the results of monitoring a dataset, found on the **Datasets** / **Dataset monitors** page in Azure Machine Learning studio. On this page, you can update the settings and analyze existing data for a specific time period.
| Metric | Description |
| ------ | ----------- |
| Euclidean distance | Computed for categorical columns. The Euclidean distance is computed on two vectors, generated from the empirical distribution of the same categorical column from two datasets. 0 indicates no difference in the empirical distributions. The more it deviates from 0, the more this column has drifted. Trends can be observed from a time series plot of this metric and can be helpful in uncovering a drifting feature. |
| Unique values | Number of unique values (cardinality) of the feature. |
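As a conceptual illustration of the Euclidean distance metric above (not the monitoring service's implementation): build the empirical frequency distribution of a categorical column in each dataset, then take the Euclidean distance between the two frequency vectors.

```python
from collections import Counter
from math import sqrt

def euclidean_drift(baseline: list[str], target: list[str]) -> float:
    """Euclidean distance between the empirical (normalized frequency)
    distributions of one categorical column in two datasets.
    0.0 means identical distributions; larger values mean more drift.
    Conceptual sketch only."""
    categories = sorted(set(baseline) | set(target))
    b_counts, t_counts = Counter(baseline), Counter(target)
    b_freq = [b_counts[c] / len(baseline) for c in categories]
    t_freq = [t_counts[c] / len(target) for c in categories]
    return sqrt(sum((b - t) ** 2 for b, t in zip(b_freq, t_freq)))

print(euclidean_drift(["a", "a", "b", "b"], ["a", "a", "b", "b"]))  # → 0.0
print(euclidean_drift(["a", "a", "a", "a"], ["b", "b", "b", "b"]))  # → ~1.414 (disjoint)
```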
323
500
324
501
On this chart, select a single date to compare the feature distribution between the target and this date for the displayed feature. For numeric features, this shows two probability distributions. If the feature is categorical, a bar chart is shown.
## Metrics, alerts, and events
329
506
330
-
Metrics can be queried in the [Azure Application Insights](/azure/azure-monitor/app/app-insights-overview) resource associated with your machine learning workspace. You have access to all features of Application Insights including set up for custom alert rules and action groups to trigger an action such as, an Email/SMS/Push/Voice or Azure Function. Refer to the complete Application Insights documentation for details.
507
+
Metrics can be queried in the [Azure Application Insights](/azure/azure-monitor/app/app-insights-overview) resource associated with your machine learning workspace. You have access to all features of Application Insights including set up for custom alert rules and action groups to trigger an action such as an Email/SMS/Push/Voice or Azure Function. Refer to the complete Application Insights documentation for details.
To get started, navigate to the [Azure portal](https://portal.azure.com) and select your workspace's **Overview** page. The associated Application Insights resource is on the far right:
> [!NOTE]
> Do not hard code the service principal password in your code. Instead, retrieve it from the Python environment, key store, or other secure method of accessing secrets.