`articles/machine-learning/how-to-collect-production-data.md` (103 additions, 1 deletion)
To begin, add custom logging code to your scoring script (`score.py`).
> [!NOTE]
> Currently, the `collect()` API logs only pandas DataFrames. If the data isn't in a DataFrame when it's passed to `collect()`, it won't be logged to storage and an error will be reported.
The following code is an example of a full scoring script (`score.py`) that uses the custom logging Python SDK.
```python
import pandas as pd

# ... (collector setup and the beginning of run() are elided in this view)

def predict(input_df):
    # process input and return with outputs
    ...

    return output_df
```
#### Update your scoring script to log custom unique IDs
In addition to logging pandas DataFrames directly within your scoring script, you can log data with unique IDs of your choice. These IDs can come from your application or an external system, or you can generate them yourself. If you don't provide a custom ID, as detailed in this section, the data collector autogenerates a unique `correlationid` to help you correlate your model's inputs and outputs later. If you supply a custom ID, the `correlationid` field in the logged data contains the value of your custom ID.
1. In addition to the steps above, import the `azureml.ai.monitoring.context` package by adding the following line to your scoring script:
```python
from azureml.ai.monitoring.context import BasicCorrelationContext
```
1. In your scoring script, instantiate a `BasicCorrelationContext` object and pass in the `id` you wish to log for that row. We recommend that this `id` be a unique ID from your system, so you can uniquely identify each logged row from your Blob storage. Pass this object into your `collect()` API call as a parameter:
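The snippet for this step is elided from the diff. As an illustration of the pattern only, a minimal sketch is shown below; it defines a stand-in class so it runs without the Azure ML SDK (the real `BasicCorrelationContext` comes from `azureml.ai.monitoring.context`), and the ID value is illustrative:

```python
from dataclasses import dataclass

# stand-in for azureml.ai.monitoring.context.BasicCorrelationContext,
# defined here only so this sketch is self-contained
@dataclass
class BasicCorrelationContext:
    id: str

# instantiate the context with a unique ID from your own system
# (the ID value here is illustrative)
context = BasicCorrelationContext(id="my-app-request-001")

# then pass it into your collect() calls, for example:
# inputs_collector.collect(input_df, context)
print(context.id)
```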
1. Ensure that you pass the context into your `outputs_collector` so that your model inputs and outputs have the same unique ID logged with them and can be easily correlated later:
```python
# collect outputs data, pass in context so inputs and outputs data can be correlated later
outputs_collector.collect(output_df, context)
```
A comprehensive example is detailed below:
```python
import pandas as pd
import json
from azureml.ai.monitoring import Collector
from azureml.ai.monitoring.context import BasicCorrelationContext

def init():
    global inputs_collector, outputs_collector, inputs_outputs_collector

    # instantiate collectors with appropriate names, make sure align with deployment spec
    ...

# ... (collector instantiation and the beginning of run() are elided in this view)

    # perform scoring with pandas Dataframe, return value is also pandas Dataframe
    output_df = predict(input_df)

    # collect outputs data, pass in context so inputs and outputs data can be correlated later
    outputs_collector.collect(output_df, context)

    return output_df.to_dict()

def preprocess(json_data):
    # preprocess the payload to ensure it can be converted to pandas DataFrame
    return json_data["data"]

def predict(input_df):
    # process input and return with outputs
    ...

    return output_df
```
#### Collect data for model performance monitoring
If you want to use your collected data for model performance monitoring, it's important that each logged row has a unique `correlationid` that can be used to correlate the data with ground truth data when it becomes available. The data collector autogenerates a unique `correlationid` for each logged row and includes it in the `correlationid` field of the JSON object. For comprehensive details on the JSON schema, see [store collected data in a blob](#store-collected-data-in-a-blob).
If you want to log your own unique ID with your production data, we recommend logging it as a separate column in your pandas DataFrame, because [the data collector batches requests](#data-collector-batching) that fall within close proximity of one another. Logging the ID as a separate column keeps the `correlationid` readily available downstream for integration with ground truth data.
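For example, a minimal sketch of logging a unique ID as its own DataFrame column (column name and ID values are illustrative; use IDs from your own system):

```python
import pandas as pd

# build the DataFrame to log, with your own unique ID as a separate column
# (feature and ID values here are illustrative)
input_df = pd.DataFrame(
    {"feature_1": [0.1, 0.2], "feature_2": [1.0, 2.0]}
)
input_df["correlationid"] = ["req-001", "req-002"]

# later, in run(): inputs_collector.collect(input_df)
print(input_df.columns.tolist())
```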
### Update your dependencies
Before you can create your deployment with the updated scoring script, you need to create your environment with the base image `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04` and the appropriate conda dependencies. Thereafter, you can build the environment using the specification in the following YAML.
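The YAML specification itself is elided from this view. A sketch of what such a conda specification typically looks like (the Python version and the `azureml-ai-monitoring` pip package are assumptions based on the SDK imported in the scoring script above; check the published article for the exact spec):

```yaml
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      # SDK that provides azureml.ai.monitoring (assumption: this is the
      # package backing the Collector import used in score.py)
      - azureml-ai-monitoring
```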
#### Data collector batching
The data collector batches requests into the same JSON object when they are sent within a short duration of each other. For example, if you run a script to send sample data to your endpoint, and the deployment has data collection enabled, some of the requests may get batched together, depending on the interval of time between them. If you're using the collected data with [Azure Machine Learning model monitoring](concept-model-monitoring.md), this behavior is handled appropriately, and each request is treated as independent by the model monitoring service. However, if you expect each logged row of data to have its own unique `correlationid`, you can include the `correlationid` as a column in the pandas DataFrame you log with the data collector. For details, see [data collection for model performance monitoring](#collect-data-for-model-performance-monitoring).
Here is an example of two logged requests being batched together:
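The example itself is truncated in this view. As an illustration of the behavior described above, the rows of a batched object share a single `correlationid`; the sketch below is an assumed shape, not the collector's exact schema (`correlationid` is the only field named in the article):

```python
# hypothetical shape of one logged payload containing two batched rows;
# field names other than `correlationid` are illustrative
logged_object = {
    "correlationid": "aaaa1111-0000-0000-0000-000000000000",
    "data": [
        {"feature_1": 0.1, "feature_2": 1.0},  # first batched request
        {"feature_1": 0.2, "feature_2": 2.0},  # second batched request
    ],
}

# both rows share the single correlationid of the batched object
print(len(logged_object["data"]), logged_object["correlationid"])
```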
`articles/machine-learning/how-to-monitor-model-performance.md` (55 additions, 64 deletions)
You must satisfy the following requirements to configure your model performance monitoring.
> [!NOTE]
>
> For Azure Machine Learning model performance monitoring, we recommend that you log your unique ID in its own column, using the [Azure Machine Learning data collector](how-to-collect-production-data.md).
* Have ground truth data (actuals) with a unique ID for each row. The unique ID for a given row should match the unique ID for the model outputs for that particular inference request. This unique ID is used to join your ground truth dataset with the model outputs.
Without having ground truth data, you can't perform model performance monitoring. Since ground truth data is encountered at the application level, it's your responsibility to collect it as it becomes available. You should also maintain a data asset in Azure Machine Learning that contains this ground truth data.
* (Optional) Have a pre-joined tabular dataset with model outputs and ground truth data already joined together.
### Monitoring model performance requirements when using data collector
If you use the [Azure Machine Learning data collector](concept-data-collection.md) to collect production inference data and don't supply your own unique ID for each row as a separate column, a `correlationid` is autogenerated for you and included in the logged JSON object. However, the data collector [batches rows](how-to-collect-production-data.md#data-collector-batching) that are sent within close proximity of one another. Batched rows fall within the same JSON object and thus share the same `correlationid`.
To differentiate between the rows in the same JSON object, Azure Machine Learning model performance monitoring uses indexing to determine the first, second, third, and so on, row in the JSON object. For example, if three rows are batched together and the `correlationid` is `test`, row 1 has an ID of `test_0`, row 2 has an ID of `test_1`, and row 3 has an ID of `test_2`. To ensure that your ground truth dataset contains unique IDs that match the collected production inference model outputs, index each `correlationid` appropriately. If your logged JSON object has only one row, its ID is the `correlationid` with the suffix `_0`.
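The indexing scheme described above can be sketched in plain Python (this is an illustration of the naming rule, not the monitoring service's actual code):

```python
def indexed_ids(correlationid: str, n_rows: int) -> list:
    # each row in a batched JSON object gets the shared correlationid
    # plus a positional suffix: _0, _1, _2, ...
    return [f"{correlationid}_{i}" for i in range(n_rows)]

print(indexed_ids("test", 3))  # -> ['test_0', 'test_1', 'test_2']
print(indexed_ids("abc", 1))   # -> ['abc_0'] (a single-row object still gets _0)
```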
To avoid this indexing, we recommend that you log your unique ID in its own column within the pandas DataFrame, using the [Azure Machine Learning data collector](how-to-collect-production-data.md). Then, in your model monitoring configuration, specify the name of this column to join your model output data with your ground truth data. As long as the IDs for each row in both datasets match, Azure Machine Learning model monitoring can perform model performance monitoring.
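The join the monitoring service performs is conceptually an ID-based merge. A minimal illustration with pandas (the column names and ID values are illustrative, not the service's configuration keys):

```python
import pandas as pd

# model outputs logged with a custom unique ID in its own column
model_outputs = pd.DataFrame(
    {"unique_id": ["req-001", "req-002"], "prediction": [1, 0]}
)

# ground truth collected later at the application level, keyed by the same ID
ground_truth = pd.DataFrame(
    {"unique_id": ["req-001", "req-002"], "actual": [1, 1]}
)

# join outputs with actuals on the shared unique ID
joined = model_outputs.merge(ground_truth, on="unique_id")
print(joined.shape)  # -> (2, 3)
```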
### Example workflow for monitoring model performance
To understand the concepts associated with model performance monitoring, consider this example workflow. Suppose you're deploying a model to predict whether credit card transactions are fraudulent. You can follow these steps to monitor the model's performance:
Once you've satisfied the [prerequisites for model performance monitoring](#more-prerequisites-for-model-performance-monitoring), you can set up model monitoring with the following Python code:
```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import Input, MLClient
from azure.ai.ml.constants import (
    MonitorDatasetContext,
)
from azure.ai.ml.entities import (
    AlertNotification,
    BaselineDataRange,
    ModelPerformanceMetricThreshold,
    ModelPerformanceSignal,
    ModelPerformanceClassificationThresholds,
    MonitoringTarget,
    MonitorDefinition,
    MonitorSchedule,
    # ... (remaining imports and the monitoring setup are elided in this view)
)
```