Commit 228cb79

Merge pull request #240764 from Bozhong68/momo-doc-update
minor update to MoMo doc
2 parents e51bea3 + bf9410c · commit 228cb79

3 files changed (+3, -10 lines)


articles/machine-learning/concept-model-monitoring.md

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@ Azure Machine Learning provides the following capabilities for continuous model
 * **Flexibility to define your monitoring signal**. If the built-in monitoring signals aren't suitable for your business scenario, you can define your own monitoring signal with a custom monitoring signal component.
 * **Flexibility to bring your own production inference data**. If you deploy models outside of Azure Machine Learning, or if you deploy models to Azure Machine Learning batch endpoints, you can collect production inference data and use that data in Azure Machine Learning for model monitoring.
 * **Flexibility to select data window**. You have the flexibility to select a data window for both the target dataset and the baseline dataset.
-* By default, the data window for production inference data (the target dataset) is your monitoring frequency. That is, all data collected in the past monitoring period before the monitoring job is run will be used as the target dataset. You can use `lookback_period_days` to adjust the data window for the target dataset if needed.
+* By default, the data window for production inference data (the target dataset) is your monitoring frequency. That is, all data collected in the past monitoring period before the monitoring job is run will be used as the target dataset. You can use `data_window_size` to adjust the data window for the target dataset if needed.
 * By default, the data window for the baseline dataset is the full dataset. You can adjust the data window by using either the date range or the `trailing_days` parameter.

 ## Monitoring signals and metrics
@@ -48,7 +48,7 @@ Azure Machine Learning model monitoring (preview) supports the following list of
 |--|--|--|--|--|--|
 | Data drift | Data drift tracks changes in the distribution of a model's input data by comparing it to the model's training data or recent past production data. | Jensen-Shannon Distance, Population Stability Index, Normalized Wasserstein Distance, Two-Sample Kolmogorov-Smirnov Test, Pearson's Chi-Squared Test | Classification (tabular data), Regression (tabular data) | Production data - model inputs | Recent past production data or training data |
 | Prediction drift | Prediction drift tracks changes in the distribution of a model's prediction outputs by comparing it to validation or test labeled data or recent past production data. | Jensen-Shannon Distance, Population Stability Index, Normalized Wasserstein Distance, Chebyshev Distance, Two-Sample Kolmogorov-Smirnov Test, Pearson's Chi-Squared Test | Classification (tabular data), Regression (tabular data) | Production data - model outputs | Recent past production data or validation data |
-| Data quality | Data quality tracks the data integrity of a model's input by comparing it to the model's training data or recent past production data. The data quality checks include checking for null values, type mismatch, or out-of-bounds of values. | Null value rate, type error rate, out-of-bound rate | Classification (tabular data), Regression (tabular data) | production data - model inputs | Recent past production data or training data |
+| Data quality | Data quality tracks the data integrity of a model's input by comparing it to the model's training data or recent past production data. The data quality checks include checking for null values, type mismatch, or out-of-bounds of values. | Null value rate, data type error rate, out-of-bounds rate | Classification (tabular data), Regression (tabular data) | production data - model inputs | Recent past production data or training data |
 | Feature attribution drift | Feature attribution drift tracks the importance or contributions of features to prediction outputs in production by comparing it to feature importance at training time | Normalized discounted cumulative gain | Classification (tabular data), Regression (tabular data) | Production data | Training data |

 ## How model monitoring works in Azure Machine Learning
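
For orientation, the sketch below illustrates where the parameter named in this change fits: a monitoring signal reads its target (production) data over a window controlled by `data_window_size`, while the baseline dataset can be narrowed with a date range or `trailing_days`. This is a hypothetical sketch only; the `target_dataset`/`baseline_dataset` nesting, the `type` field, the window values, and the data asset names are assumptions, not taken from this commit.

```yaml
# Hypothetical sketch -- nesting, signal type, window values, and asset names are assumed.
monitoring_signals:
  advanced_data_drift:        # monitoring signal name, any user defined name works
    type: data_drift          # assumed signal type
    target_dataset:           # production inference data (model inputs)
      data_window_size: 7     # overrides the default window, which equals the monitoring frequency
      dataset:
        input_dataset:
          path: azureml:my_production_inference_data:1   # hypothetical data asset
          type: uri_folder
    baseline_dataset:         # training data or recent past production data
      trailing_days: 30       # or specify an explicit date range instead
      dataset:
        input_dataset:
          path: azureml:my_model_training_data:1          # hypothetical data asset
          type: mltable
```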

articles/machine-learning/how-to-monitor-model-performance.md

Lines changed: 0 additions & 7 deletions
@@ -539,8 +539,6 @@ create_monitor:
   compute:
     instance_type: standard_e4s_v3
     runtime_version: 3.2
-  monitoring_target:
-    endpoint_deployment_id: azureml:fraud-detection-endpoint:fraud-detection-deployment

   monitoring_signals:
     advanced_data_drift: # monitoring signal name, any user defined name works
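
With `monitoring_target` dropped from the YAML example, the top of `create_monitor` reduces to the compute block followed directly by the signals. A minimal sketch assembled only from lines visible in this hunk (indentation is assumed, and the signal body is elided):

```yaml
create_monitor:
  compute:
    instance_type: standard_e4s_v3
    runtime_version: 3.2

  monitoring_signals:
    advanced_data_drift: # monitoring signal name, any user defined name works
      # ...signal definition continues as in the original example...
```
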
@@ -657,10 +655,6 @@ spark_configuration = SparkResourceConfiguration(
     runtime_version="3.2"
 )

-monitoring_target = MonitoringTarget(
-    endpoint_deployment_id="azureml:fraud-detection-endpoint:fraud-detection-deployment"
-)
-
 #define target dataset (production dataset)
 input_data = MonitorInputData(
     input_dataset=Input(
@@ -740,7 +734,6 @@ alert_notification = AlertNotification(
 # Finally monitor definition
 monitor_definition = MonitorDefinition(
     compute=spark_configuration,
-    monitoring_target=monitoring_target,
     monitoring_signals=monitoring_signals,
     alert_notification=alert_notification
 )

articles/machine-learning/reference-yaml-monitor.md

Lines changed: 1 addition & 1 deletion
@@ -162,7 +162,7 @@ Data quality signal tracks data quality issues in production by comparing to tra
 | `alert_notification` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | |
 | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_notification` is on, user will receive alert notification. | |By default, the object contains following `numerical` and ` categorical` metrics: `null_value_rate`, `data_type_error_rate`, and `out_of_bounds_rate` |
 | `metric_thresholds.applicable_feature_type` | String | Feature type that the metric will be applied to. | `numerical` or `categorical`| |
-| `metric_thresholds.metric_name` | String | The metric name for the specified feature type. | Allowed `numerical` and `categorical` metric names are: `null_value_rate`, `data_type_error_rate`, `out_of_bound_rate` | |
+| `metric_thresholds.metric_name` | String | The metric name for the specified feature type. | Allowed `numerical` and `categorical` metric names are: `null_value_rate`, `data_type_error_rate`, `out_of_bounds_rate` | |
 | `metric_thresholds.threshold` | Number | The threshold for the specified metric. | | |

 #### Feature attribution drift
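
For the corrected metric name, a data quality signal's thresholds could be written as in the sketch below. This is only an illustration based on the reference table above; the list layout under `metric_thresholds` and the example threshold values are assumptions.

```yaml
# Sketch based on the reference table above; layout and threshold values are assumed.
metric_thresholds:
  - applicable_feature_type: numerical
    metric_name: null_value_rate
    threshold: 0.01
  - applicable_feature_type: categorical
    metric_name: data_type_error_rate
    threshold: 0.01
  - applicable_feature_type: categorical
    metric_name: out_of_bounds_rate   # corrected name (was out_of_bound_rate)
    threshold: 0.01
```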
