**md-docs/user_guide/monitoring/drift_explainability.md** (1 addition, 1 deletion)
… You can access the reports by navigating to the `Drift Explainability` tab in the …
## Structure
A Drift Explainability Report consists of a comparison between the reference data and the portion of production data where the drift was identified, that is, the samples belonging to the new data distribution. Notice that these reports are generated only after a sufficient number of samples has been collected following the drift.
This is because the elements of the report need a significant number of samples to guarantee the statistical reliability of the results.
If the distribution moves back to the reference before enough samples are collected, the report might not be generated.
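
The statistical idea behind such a comparison can be sketched with a two-sample test. The following is a minimal illustration only, not the ML cube Platform's actual implementation; the distributions and sample sizes are made up:

```python
# Illustrative sketch: a two-sample test comparing reference data with
# post-drift production data. NOT the ML cube Platform's implementation.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # pre-drift data
post_drift = rng.normal(loc=0.7, scale=1.0, size=500)   # new distribution

# With too few post-drift samples the test result is unreliable, which is
# why the report waits until enough samples have been collected.
statistic, p_value = ks_2samp(reference, post_drift)
print(f"KS statistic={statistic:.3f}, p-value={p_value:.4g}")
```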
**md-docs/user_guide/monitoring/index.md** (21 additions, 14 deletions)
… and the distribution of the data it is operating on.
## How does the ML cube Platform perform Monitoring?
The ML cube Platform performs monitoring by employing statistical techniques to compare a certain reference (for instance, the data used for training, or the performance of a model on the test set) to incoming production data.
These statistical techniques, also known as _monitoring algorithms_, are tailored to the type of data being observed; for instance, univariate data requires different monitoring techniques than multivariate data. However, you don't need to worry about the specifics of these algorithms, as the ML cube Platform takes care of selecting the most appropriate ones for your task.
If a significant difference between reference and production data is detected, an alarm is raised, signaling that the monitored entity is drifting away from the expected behavior and that corrective actions should be taken.
In practical terms, you can use the SDK to specify the time period where the reference of a given model should be placed. As a consequence, all algorithms associated with the specified model (not just those monitoring the performance, but also those operating on the data used by the model) will be initialized on the specified reference. Of course, you should provide the Platform with the data you want to use as a reference before setting the reference itself. This can be done through the SDK as well.
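
As a rough sketch of this workflow, the snippet below uses the method names [add_historical_data] and [set_model_reference], which appear in an earlier revision of this page; the client class, import path, and parameter names are assumptions, so check the SDK reference for the real signatures:

```python
# Hypothetical sketch of the reference-setting workflow described above.
# `add_historical_data` and `set_model_reference` are method names from an
# earlier revision of this page; client class and parameters are assumed.
from datetime import datetime

from ml3_platform_sdk import Client  # assumed import path

client = Client(api_key="YOUR_API_KEY")  # assumed constructor

# 1. Provide the data that will serve as the reference (e.g. training data).
client.add_historical_data(
    task_id="my-task-id",
    inputs="training_inputs.csv",  # assumed argument names and format
    target="training_target.csv",
)

# 2. Place the model reference on the time period covered by that data.
client.set_model_reference(
    model_id="my-model-id",
    from_timestamp=datetime(2024, 1, 1).timestamp(),  # assumed parameters
    to_timestamp=datetime(2024, 3, 31).timestamp(),
)
```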
After setting the reference, you can send production data to the Platform, still using the SDK. This data will be analyzed by the monitoring algorithms and, if a significant difference is detected, an alarm will be raised in the form of a [Detection Event].
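
Continuing the sketch above, sending production data might look as follows; again, [add_production_data] is a method name from an earlier revision of this page, and the argument names are assumptions:

```python
# Hypothetical continuation of the sketch above: sending production data.
# `add_production_data` is named in an earlier revision of this page;
# the argument names are assumptions.
client.add_production_data(
    task_id="my-task-id",
    inputs="production_inputs.csv",      # assumed argument names
    predictions="model_predictions.csv",
)
# If the monitoring algorithms detect a significant difference from the
# reference, the Platform raises a Detection Event, which you can react to
# manually or through automatic Detection Event Rules.
```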
You can explore more about detection events and how you can set up automatic actions upon their reception in the [Detection Event] and [Detection Event Rule] sections, respectively.
### Targets and Metrics
After explaining why monitoring is so important in modern AI systems and detailing how it is performed in the ML cube Platform, we can introduce the concepts of Monitoring Targets and Monitoring Metrics. They both represent quantities that the ML cube Platform monitors, but they differ in their nature. They are both automatically defined by the ML cube Platform based on the [Task] attributes, such as the Task type and the data structure.
#### Monitoring Targets
… You can check the status of the monitored entities in two ways: …