1. Configure the data drift in the **Edit signal** window as follows:

1. In step 1, for the production data asset, select your model inputs dataset. Also, make the following selection:

    - Select the desired lookback window size.

1. In step 2, for the reference data asset, select your training dataset. Also, make the following selection:

    - Select the target (output) column.

1. In step 3, select whether to monitor drift for the top N most important features or for a specific set of features.

1. In step 4, select your preferred metric and thresholds to use for numerical features.

1. In step 5, select your preferred metric and thresholds to use for categorical features.

:::image type="content" source="media/how-to-monitor-models/model-monitoring-configure-signals.png" alt-text="Screenshot showing how to configure selected monitoring signals." lightbox="media/how-to-monitor-models/model-monitoring-configure-signals.png":::
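The metric-and-threshold idea behind steps 4 and 5 can be sketched outside the studio. The snippet below computes the population stability index (PSI), a commonly used drift metric, for a numerical feature and compares it against a threshold; the threshold value and the sample data are illustrative assumptions, not recommendations.

```python
import numpy as np

def psi(reference, production, bins=10, eps=1e-6):
    """Population stability index between two numerical samples.
    Bin edges are derived from the reference (training) distribution."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    prod_counts, _ = np.histogram(production, bins=edges)
    p = ref_counts / ref_counts.sum() + eps   # eps avoids log(0) on empty bins
    q = prod_counts / prod_counts.sum() + eps
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)  # training feature values
stable = rng.normal(0.0, 1.0, 5_000)     # production window, no drift
shifted = rng.normal(1.0, 1.0, 5_000)    # production window, mean shifted

threshold = 0.1  # hypothetical threshold, as configured in the signal
print(psi(reference, stable) > threshold)   # False: no alert
print(psi(reference, shifted) > threshold)  # True: drift detected
```

Categorical features follow the same pattern, except the distributions are built from category frequencies instead of histogram bins.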
    - Also, select the desired lookback window size.

1. In step 2, select the production data asset that has your model outputs.

    - Also, select the common column between these data assets to join them on. If the data was collected with the [data collector](how-to-collect-production-data.md), the common column is `correlationid`.

1. (Optional) If you used the data collector to collect data that has your model inputs and outputs already joined, select the joined dataset as your production data asset (in step 1).

    - Also, **Remove** step 2 in the configuration panel.

1. In step 3, select your training dataset to use as the reference dataset.

    - Also, select the target (output) column for your training dataset.
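The join on the common column works the same way a tabular join does. Here's a minimal sketch using pandas with hypothetical collected data; the column names other than `correlationid` are made up for illustration.

```python
import pandas as pd

# Hypothetical collected data: inputs and outputs share a `correlationid`.
inputs = pd.DataFrame({
    "correlationid": ["a1", "a2", "a3"],
    "amount": [120.0, 87.5, 430.0],
})
outputs = pd.DataFrame({
    "correlationid": ["a1", "a2", "a3"],
    "is_fraud": [0, 0, 1],
})

# Inner join on the common column, mirroring what the monitor does
# before it computes metrics over the combined rows.
joined = inputs.merge(outputs, on="correlationid", how="inner")
print(joined.columns.tolist())  # ['correlationid', 'amount', 'is_fraud']
```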
* (Optional) Have a pre-joined tabular dataset with model outputs and ground truth data already joined together.

### Example workflow for monitoring model performance

To understand the concepts associated with model performance monitoring, consider this example workflow. Suppose you're deploying a model to predict whether credit card transactions are fraudulent. You can follow these steps to monitor the model's performance:
1. Configure your deployment to use the data collector to collect the model's production inference data (input and output data). Let's say that the output data is stored in a column `is_fraud`.
1. For each row of the collected inference data, log a unique ID. The unique ID can come from your application, or you can use the `correlationid` that Azure Machine Learning uniquely generates for each logged JSON object.
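One way to attach a unique ID from your own application is sketched below; the helper name and record shape are hypothetical, and if you use the data collector instead, Azure Machine Learning generates the `correlationid` for you.

```python
import json
import uuid

def log_inference(inputs: dict, is_fraud: int) -> str:
    """Log one row of inference data with a unique ID attached.
    (Hypothetical helper; `print` stands in for your real logging sink.)"""
    record = {"correlationid": str(uuid.uuid4()), **inputs, "is_fraud": is_fraud}
    print(json.dumps(record))
    return record["correlationid"]

row_id = log_inference({"amount": 120.0}, 0)
```

Keep the returned ID alongside the eventual ground truth label so the two datasets can be joined later.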
1. Complete the entries on the **Basic settings** page as described earlier in the [Set up out-of-box model monitoring](#set-up-out-of-box-model-monitoring) section.
1. Select **Next** to open the **Configure data asset** page of the **Advanced settings** section.
1. Select **+ Add** to add a dataset for use as the ground truth dataset.

    Ensure that your model outputs dataset is also included in the list of added datasets. The ground truth dataset you add should have a unique ID column.

    The values in the unique ID column for both the ground truth dataset and the model outputs dataset must match in order for both datasets to be joined together prior to metric computation.

:::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-config-data2.png" alt-text="Screenshot showing how to add datasets to use for model performance monitoring." lightbox="media/how-to-monitor-models/model-monitoring-advanced-config-data2.png":::
:::image type="content" source="media/how-to-monitor-models/model-monitoring-added-ground-truth-dataset.png" alt-text="Screenshot showing the ground truth dataset and the model outputs and inputs datasets for the monitoring signals to connect to." lightbox="media/how-to-monitor-models/model-monitoring-added-ground-truth-dataset.png":::
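Because rows whose IDs appear in only one dataset are dropped by the join, it can be worth checking the ID overlap before relying on the computed metrics. A small pandas sketch with made-up IDs:

```python
import pandas as pd

# Hypothetical data: `a3` has no ground truth yet; `a4` was never scored.
outputs = pd.DataFrame({"correlationid": ["a1", "a2", "a3"],
                        "is_fraud": [0, 1, 0]})
ground_truth = pd.DataFrame({"correlationid": ["a1", "a2", "a4"],
                             "is_fraud_actual": [0, 1, 1]})

# Only IDs present in both datasets survive the join used for metrics.
joined = outputs.merge(ground_truth, on="correlationid", how="inner")
unmatched = set(outputs["correlationid"]) ^ set(ground_truth["correlationid"])
print(len(joined), sorted(unmatched))  # 2 ['a3', 'a4']
```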
1. Select **Model performance (preview)** to configure the model performance signal as follows:
1. In step 1, for the production data asset, select your model outputs dataset. Also, make the following selections:
    - Select the appropriate target column (for example, `is_fraud`).
    - Select the desired lookback window size and lookback window offset.
1. In step 2, for the reference data asset, select the ground truth data asset that you added earlier. Also, make the following selections:
    - Select the appropriate target column.
    - Select the column on which to perform the join with the model outputs dataset. The column used for the join should be the column that is common between the two datasets and which has a unique ID for each row in the dataset (for example, `correlationid`).
1. In step 3, select your desired performance metrics and specify their respective thresholds.
:::image type="content" source="media/how-to-monitor-models/model-monitoring-configure-model-performance.png" alt-text="Screenshot showing how to configure a model performance signal." lightbox="media/how-to-monitor-models/model-monitoring-configure-model-performance.png":::
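The threshold check in step 3 amounts to comparing a performance metric computed on the joined data against the value you configured. A minimal sketch for recall on the fraud example; the data and the 0.8 threshold are illustrative assumptions:

```python
def recall(actual, predicted):
    """Recall = true positives / (true positives + false negatives)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return tp / (tp + fn)

# Labels from the joined data: ground truth vs. model outputs (`is_fraud`).
actual    = [1, 1, 1, 1, 0, 0, 0, 0]
predicted = [1, 1, 0, 0, 0, 0, 0, 1]

threshold = 0.8  # hypothetical threshold configured in the signal
print(recall(actual, predicted))              # 0.5
print(recall(actual, predicted) < threshold)  # True: the signal would fire
```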