
Commit ee83085

Second set of corrections post pr
Plot configurations, more general description of monitoring
Parent: d48f1e9

File tree: 3 files changed (31 additions, 16 deletions)


md-docs/user_guide/monitoring/detection_event_rules.md

Lines changed: 9 additions & 1 deletion
@@ -17,15 +17,23 @@ Rules are specific to a task and require the following parameters:
 - `actions`: A sequential list of actions to be executed when the rule is triggered.

 ## Detection Event Actions
-Two types of actions are currently supported: notification and retrain.
+Three types of actions are currently supported: notification, plot configuration and retrain.

 ### Notifications
+
+These actions send notifications to external services when a detection event is triggered. The following notification actions are available:
+
 - `SlackNotificationAction`: sends a notification to a Slack channel via webhook.
 - `DiscordNotificationAction`: sends a notification to a Discord channel via webhook.
 - `EmailNotificationAction`: sends an email to the provided email address.
 - `TeamsNotificationAction`: sends a notification to Microsoft Teams via webhook.
 - `MqttNotificationAction`: sends a notification to an MQTT broker.

+### Plot Configuration
+
+This action consists in creating two plot configurations when a detection event is triggered: the first one includes
+data preceding the event, while the second one includes data following the event.
+
 ### Retrain Action

 Retrain action lets you retrain your model. Therefore, it is only available when the monitoring target of the rule is related to a model.
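
For readers wiring these up, here is a minimal sketch of a rule combining a notification action and a retrain action through the Python SDK. Only the action concepts and the `actions` list come from the documentation above; the import path, client class, `create_detection_event_rule` method, and parameter names (`webhook_url`, `task_id`) are hypothetical placeholders, not the documented API.

```python
# Hypothetical sketch -- import path, client class, method, and parameter
# names are assumptions, not the documented ML cube Platform SDK API.
from ml3_platform_sdk import Client, SlackNotificationAction, RetrainAction

client = Client(api_key="YOUR_API_KEY")  # assumed constructor

# When a detection event fires for this task, notify Slack first, then
# trigger a retrain: actions run sequentially, in list order.
client.create_detection_event_rule(  # assumed method name
    task_id="my-task-id",
    actions=[
        SlackNotificationAction(webhook_url="https://hooks.slack.com/services/..."),
        RetrainAction(),  # only valid when the monitoring target relates to a model
    ],
)
```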

md-docs/user_guide/monitoring/drift_explainability.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ You can access the reports by navigating to the `Drift Explainability` tab in th
 ## Structure

 A Drift Explainability Report consists in comparing the reference data and the portion of production data where the drift was identified, hence
-those belonging to the new concept. Notice that these reports are generated after a sufficient amount of samples has been collected after the drift.
+those belonging to the new data distribution. Notice that these reports are generated after a sufficient amount of samples has been collected after the drift.
 This is because the elements of the report needs a significant number of samples to guarantee statistical reliability of the results.
 If the distribution moves back to the reference before enough samples are collected, the report might not be generated.

md-docs/user_guide/monitoring/index.md

Lines changed: 21 additions & 14 deletions
@@ -17,25 +17,36 @@ and the distribution of the data it is operating on.

 ## How does the ML cube Platform perform Monitoring?

-The ML cube platform performs monitoring by employing statistical techniques to compare a certain reference (for instance, data used for training or the performance of a model
-on the test set) to incoming production data. If a significant difference is detected, an alarm is raised, signaling that the monitored entity
+The ML cube platform performs monitoring by employing statistical techniques to compare a certain reference (for instance, data used for training or the
+performance of a model
+on the test set) to incoming production data.
+
+These statistical techniques, also known as _monitoring algorithms_, are tailored to the type of data
+being observed; for instance, univariate data requires different monitoring techniques than multivariate data. However, you don't need to worry about
+the specifics of these algorithms, as the ML cube Platform takes care of selecting the most appropriate ones for your task.
+
+If a significant difference between reference and production data is detected, an alarm is raised, signaling that the monitored entity
 is drifting away from the expected behavior and that corrective actions should be taken.

-In more practical terms, the [set_model_reference] method can be used to specify the time period where the reference of a given model should be placed. As a consequence,
-all algorithms associated with the specified model (not just those monitoring the performance, but also those operating on the data used by the model) will
-be initialized on the specified reference. Of course, you should provide to the Platform the data you want to use as a reference before calling this method, for instance using the
-[add_historical_data] method.
+In practical terms, you can use the SDK to specify the time period where the reference of a given model should be placed.
+As a consequence, all algorithms associated with the specified model (not just those monitoring the performance, but also those operating
+on the data used by the model) will
+be initialized on the specified reference. Of course, you should provide to the
+Platform the data you want to use as a reference before setting the reference itself. This can be done through the SDK as well.

-After setting the reference, the [add_production_data] method can be used to send production data to the platform. This data will be analyzed by the monitoring algorithms
-and, if a significant difference is detected, an alarm will be raised, in the form of a [DetectionEvent].
+After setting the reference, you can send production data to the platform, still using the SDK. This data will be analyzed by the monitoring algorithms
+and, if a significant difference is detected, an alarm will be raised, in the form of a [Detection Event].
 You can explore more about detection events and how you can set up automatic actions upon their reception in the [Detection Event]
 and the [Detection Event Rule] sections respectively.

 ### Targets and Metrics

 After explaining why monitoring is so important in modern AI systems and detailing how it is performed in the ML cube Platform,
-we can introduce the concepts of Monitoring Targets and Monitoring Metrics. They both represent quantities that the ML cube Platform monitors, but they differ in their nature.
-They are both automatically defined by the ML cube platform based on the [Task] attributes, such as the Task type and the data structure.
+we can introduce the concepts of Monitoring Targets and Monitoring Metrics. They both represent quantities that the ML cube Platform monitors,
+but they differ in their nature.
+They are both automatically defined by the ML cube platform based on the [Task] attributes, such as the Task type and the data structure,
+
+

 #### Monitoring Targets

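An aside on the _monitoring algorithms_ paragraph added above: the Platform picks the algorithms for you, but the underlying reference-versus-production comparison can be illustrated with a generic univariate two-sample test. The snippet below uses scipy's Kolmogorov-Smirnov test purely as an illustration of the idea; it is not the Platform's actual algorithm.

```python
# Generic illustration of univariate drift detection: compare a reference
# sample against production data with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=1_000)   # e.g. training data
production = rng.normal(loc=0.5, scale=1.0, size=1_000)  # shifted incoming data

statistic, p_value = stats.ks_2samp(reference, production)
if p_value < 0.01:  # significance threshold chosen arbitrarily here
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
```

Multivariate data needs different machinery, as the new text notes, which is why the Platform selects the technique per data type.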
@@ -111,10 +122,6 @@ You can check the status of the monitored entities in two ways:


 [Task]: ../task.md
-[set_model_reference]: ../../api/python/client.md#set_model_reference
-[add_production_data]: ../../api/python/client.md#add_production_data
-[add_historical_data]: ../../api/python/client.md#add_historical_data
-[DetectionEvent]: ../../api/python/models.md#detectionevent
 [Detection Event Rule]: detection_event_rules.md
 [Detection Event]: detection_event.md
 [MonitoringStatus]: ../../api/python/enums.md#monitoringstatus
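
The hunk above drops the explicit SDK references in favour of generic "use the SDK" phrasing. For concreteness, here is a minimal sketch of the flow using the method names from the previous revision (`add_historical_data`, `set_model_reference`, `add_production_data`); the import path, client class, and parameter names are assumptions, and the `...` placeholders stand for your actual identifiers and data.

```python
# Hypothetical sketch of the monitoring flow. The three method names come
# from the links removed above; everything else is assumed, not documented.
from ml3_platform_sdk import Client

client = Client(api_key="YOUR_API_KEY")

# 1. Provide the data that will serve as reference (e.g. training data).
client.add_historical_data(task_id="my-task-id", data=...)

# 2. Place the reference on a time period; every monitoring algorithm tied
#    to the model is initialized on it.
client.set_model_reference(model_id="my-model-id", from_timestamp=..., to_timestamp=...)

# 3. Send production data; a Detection Event is raised if a significant
#    difference from the reference is detected.
client.add_production_data(task_id="my-task-id", data=...)
```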
