In the Azure CLI, you use `az ml schedule` to schedule a monitoring job.
1. Create a monitoring definition in a YAML file. For a sample out-of-box definition, see the following YAML code, which is also available in the [azureml-examples repository](https://github.com/Azure/azureml-examples/blob/main/cli/monitoring/out-of-box-monitoring.yaml).
Before you use this definition, adjust the values to fit your environment. For `endpoint_deployment_id`, use a value in the format `azureml:<endpoint-name>:<deployment-name>`.
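   As a sketch only, an out-of-box definition has roughly the following shape. The schedule values, compute settings, names, and email address here are illustrative, not taken from the sample file; start from the sample in the azureml-examples repository.

   ```yaml
   # Illustrative sketch of an out-of-box monitoring definition.
   # All values shown (name, schedule, compute, deployment ID, emails) are examples.
   name: credit-default-model-monitoring
   trigger:
     type: recurrence
     frequency: day
     interval: 1
     schedule:
       hours: 3
       minutes: 15
   create_monitor:
     compute:
       instance_type: standard_e4s_v3
       runtime_version: "3.3"
     monitoring_target:
       ml_task: classification
       endpoint_deployment_id: azureml:credit-default:main
     alert_notification:
       emails:
         - abc@example.com
   ```

   After you adjust the definition, you can create the schedule by running `az ml schedule create -f ./out-of-box-monitoring.yaml`.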
# [Python SDK](#tab/python)
To set up out-of-box model monitoring, use code that's similar to the following sample. Replace the following placeholders with appropriate values:
| Placeholder | Description | Example |
| --- | --- | --- |
| `<subscription_ID>` | The ID of your subscription | aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e |
| `<resource-group-name>` | The name of the resource group that contains your workspace | my-resource-group |
| `<workspace_name>` | The name of your workspace | my-workspace |
| `<endpoint-name>` | The name of the endpoint to monitor | credit-default |
| `<deployment-name>` | The name of the deployment to monitor | main |
| `<email-address1>` and `<email-address2>` | Email addresses to use for notifications | `[email protected]` |
| `<frequency-unit>` | The monitoring frequency unit: `minute`, `hour`, `day`, `week`, or `month` | day |
| `<interval>` | The interval between monitoring jobs, in the specified frequency unit | 1 |
| `<start-hour>` | The hour to start monitoring, on a 24-hour clock | 3 |
| `<start-minutes>` | The minutes after the specified hour to start monitoring | 15 |
```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    AlertNotification,
    MonitoringTarget,
    MonitorDefinition,
    MonitorSchedule,
    RecurrencePattern,
    RecurrenceTrigger,
    ServerlessSparkCompute,
)

# Get a handle to the workspace.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription_ID>",
    resource_group_name="<resource-group-name>",
    workspace_name="<workspace_name>",
)

# Specify a serverless Spark compute resource for the monitoring job.
spark_compute = ServerlessSparkCompute(
    instance_type="standard_e4s_v3",
    runtime_version="3.3",
)

# Specify the deployment to monitor.
monitoring_target = MonitoringTarget(
    ml_task="classification",
    endpoint_deployment_id="azureml:<endpoint-name>:<deployment-name>",
)

# Specify the email addresses to use for alert notifications.
alert_notification = AlertNotification(
    emails=["<email-address1>", "<email-address2>"],
)

# Create the monitor definition. Because no signals are specified, the
# out-of-box data drift, prediction drift, and data quality signals are
# used with smart defaults.
monitor_definition = MonitorDefinition(
    compute=spark_compute,
    monitoring_target=monitoring_target,
    alert_notification=alert_notification,
)

# Specify the monitoring schedule.
recurrence_trigger = RecurrenceTrigger(
    frequency="<frequency-unit>",
    interval=1,  # <interval>
    schedule=RecurrencePattern(hours=3, minutes=15),  # <start-hour>, <start-minutes>
)

# Create the schedule that runs the monitor.
model_monitor = MonitorSchedule(
    name="credit_default_monitor_basic",
    trigger=recurrence_trigger,
    create_monitor=monitor_definition,
)

poller = ml_client.schedules.begin_create_or_update(model_monitor)
created_monitor = poller.result()
```
After enabling feature importance, you'll see a feature importance for each feature.
You can use the Azure CLI, the Python SDK, or the studio for advanced setup of model monitoring.
# [Azure CLI](#tab/azure-cli)
1. Create a monitoring definition in a YAML file. For a sample advanced definition, see the following YAML code, which is also available in the [azureml-examples repository](https://github.com/Azure/azureml-examples/blob/main/cli/monitoring/advanced-model-monitoring.yaml).
Before you use this definition, adjust the following values and any others you need to fit your environment:
- For `endpoint_deployment_id`, use a value in the format `azureml:<endpoint-name>:<deployment-name>`.
   - For `path` in reference input data sections, use a value in the format `azureml:<reference-data-asset-name>:<version>`.
   - For `target_column`, use the name of the target, or output, column.
   - For `features`, list the features that you want to monitor.
- For `emails`, list the email addresses that you want to use for notifications.
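   To show how those values fit together, here's a hypothetical excerpt of such a definition. The signal name, data asset name, target column, features, and email address are invented for illustration; for the full schema, use the sample in the azureml-examples repository.

   ```yaml
   # Hypothetical excerpt; asset names, columns, and features are examples only.
   create_monitor:
     monitoring_target:
       ml_task: classification
       endpoint_deployment_id: azureml:credit-default:main
     monitoring_signals:
       advanced_data_drift:
         type: data_drift
         reference_data:
           input_data:
             type: mltable
             path: azureml:my_model_training_data:1
           data_context: training
           target_column: DEFAULT_NEXT_MONTH
         features:
           - FEATURE_1
           - FEATURE_2
     alert_notification:
       emails:
         - abc@example.com
   ```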
To set up advanced monitoring, take the following steps:
1. In [Azure Machine Learning studio](https://ml.azure.com), go to your workspace.
1. Under **Manage**, select **Monitoring**, and then select **Add**.
1. On the Basic settings page, enter information as described earlier in [Set up out-of-box model monitoring](#set-up-out-of-box-model-monitoring).
1. Select **Next** to open the Configure data asset page of the **Advanced settings** section.
1. If you don't see the data asset that you want to use as a reference dataset, select **Add**. We recommend that you use the model training data as the comparison reference dataset for data drift and data quality. Also, use the model validation data as the comparison reference dataset for prediction drift. Add the data assets that you want to use.
:::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-configuration-data.png" alt-text="Screenshot showing how to add datasets for the monitoring signals to use." lightbox="media/how-to-monitor-models/model-monitoring-advanced-configuration-data.png":::
1. Select **Next**. The **Select monitoring signals** page opens. If you selected an Azure Machine Learning online deployment earlier, you see some monitoring signals. The data drift, prediction drift, and data quality signals use recent, past production data as the comparison reference dataset and use smart defaults for metrics and thresholds.
1. Next to the data drift signal, select **Edit**.
1. In the **Edit Signal** window, take the following steps to configure the data drift signal:
1. In step 1, for the production data asset, select your model input data asset. Also select the lookback window size that you want to use.
1. In step 2, for the reference data asset, select your training dataset. Also select the target, or output, column.
1. In step 3, select **Top N features** to monitor drift for the *N* most important features. Or select specific features if you want to monitor drift for a specific set.
1. In step 4, select the metric and threshold that you want to use for numerical features.
1. In step 5, select the metric and threshold that you want to use for categorical features.
1. Select **Save**.
:::image type="content" source="media/how-to-monitor-models/model-monitoring-configure-signals.png" alt-text="Screenshot showing how to configure selected monitoring signals." lightbox="media/how-to-monitor-models/model-monitoring-configure-signals.png":::
1. On the Select monitoring signals page, select **Add**.
1. In the **Edit Signal** window, select **Feature attribution drift (PREVIEW)**, and then take the following steps to configure the feature attribution drift signal:
1. In step 1, select the production data asset that has your model inputs. Also select the lookback window size that you want to use.
1. In step 2, select the production data asset that has your model outputs. Also select the common column to use to join the production data and the output data. If you use the [data collector](how-to-collect-production-data.md) to collect data, select **correlationid**.
1. (Optional) If you use the data collector to collect data that has your model inputs and outputs already joined, take the following steps:
1. In step 1, for the production data asset, select the joined dataset.
1. In step 2, select **Remove** to remove step 2 from the configuration panel.
1. In step 3, for the reference dataset, select your training dataset. Also select the target, or output, column for your training dataset.
1. In step 4, select the metric and threshold that you want to use.
1. Select **Save**.
:::image type="content" source="media/how-to-monitor-models/model-monitoring-configure-feature-attribution-drift.png" alt-text="Screenshot showing how to configure feature attribution drift signal." lightbox="media/how-to-monitor-models/model-monitoring-configure-feature-attribution-drift.png":::
1. On the Select monitoring signals page, finish configuring your monitoring signals, and then select **Next**.
:::image type="content" source="media/how-to-monitor-models/model-monitoring-configured-signals.png" alt-text="Screenshot showing the configured signals." lightbox="media/how-to-monitor-models/model-monitoring-configured-signals.png":::
1. On the Notifications page, turn on notifications for each signal, and then select **Next**.
1. On the Review monitoring details page, review your settings.
:::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-configuration-review.png" alt-text="Screenshot showing review page of the advanced configuration for model monitoring." lightbox="media/how-to-monitor-models/model-monitoring-advanced-configuration-review.png":::