Commit d48f1e9

Quick fixes post pr
1 parent 1bd4b24 commit d48f1e9

4 files changed: +16 -13 lines changed

md-docs/user_guide/monitoring/detection_event.md

Lines changed: 5 additions & 4 deletions
@@ -1,6 +1,6 @@
 # Detection Event
 
-A [Detection Event] is raised by the MLCube Platform when a significant change is detected in one of the entities being monitored.
+A [Detection Event] is raised by the ML cube Platform when a significant change is detected in one of the entities being monitored.
 
 An event is characterized by the following attributes:
 
@@ -10,8 +10,8 @@ An event is characterized by the following attributes:
 the criticality of the detected drift.
 - `monitoring_target`: the [MonitoringTarget] being monitored.
 - `monitoring_metric`: the [MonitoringMetric] that triggered the event, if the event is related to a metric.
-- `model_name`: the name of the model that raised the event.
-- `model_version`: the version of the model that raised the event.
+- `model_name`: the name of the model that raised the event. It's present only if the event is related to a model.
+- `model_version`: the version of the model that raised the event. It's present only if the event is related to a model.
 - `insert_datetime`: the time when the event was raised.
 - `sample_timestamp`: the timestamp of the sample that triggered the event.
 - 'sample_customer_id': the id of the customer that triggered the event.
@@ -23,7 +23,8 @@ You can access the detection events generated by the Platform in two ways:
 
 - **SDK**: the [get_detection_events] method can be used to retrieve all detection events for a specific task programmatically.
 - **WebApp**: navigate to the **`Detection `** section located in the task page's sidebar. Here, all detection events are displayed in a table,
-with multiple filtering options available for useful event management.
+with multiple filtering options available for useful event management. Additionally, the latest detection events identified are shown in the Task homepage,
+in the section named "Latest Detection Events".
 
 ## User Feedback
 
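The SDK access mentioned in the hunk above refers to the [get_detection_events] method. Below is a minimal sketch of how such a call might look; only the method name comes from the documentation being changed, while the client class, import path, endpoint, parameter names, and event attribute names are illustrative assumptions rather than the SDK's confirmed API.

```python
# Hypothetical sketch of retrieving detection events programmatically.
# Only the get_detection_events method name appears in the docs above;
# the client class, import path and parameters are assumptions.
from ml3_platform_sdk import ML3PlatformClient  # assumed import path

client = ML3PlatformClient(
    url="https://api.platform.mlcube.com",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

# Retrieve every detection event raised for a given task.
events = client.get_detection_events(task_id="YOUR_TASK_ID")

for event in events:
    # Attribute names mirror those listed in detection_event.md;
    # the real SDK models may name them differently.
    print(event.monitoring_target, event.model_name, event.insert_datetime)
```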
md-docs/user_guide/monitoring/detection_event_rules.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 This section outlines how to configure automation to receive notifications or start retraining after a [Detection Event] occurs.
 
-When a detection event is produced, the MLCube Platform reviews all the detection event rules you have set
+When a detection event is produced, the ML cube Platform reviews all the detection event rules you have set
 and triggers those matching the event.
 
 Rules are specific to a task and require the following parameters:

md-docs/user_guide/monitoring/drift_explainability.md

Lines changed: 2 additions & 1 deletion
@@ -6,7 +6,7 @@ ensuring the model continues to function as expected. However, monitoring only i
 In order to make the right decisions, you need to understand what were the main factors that led to the drift in the first place, so that
 the correct actions can be taken to mitigate it.
 
-The MLCube Platform supports this process by offering what we refer to as "**Drift Explainability Reports**",
+The ML cube Platform supports this process by offering what we refer to as "**Drift Explainability Reports**",
 automatically generated upon the detection of a drift and containing several elements that should help you diagnose the root causes
 of the change occurred.
 
@@ -16,6 +16,7 @@ You can access the reports by navigating to the `Drift Explainability` tab in th
 
 A Drift Explainability Report consists in comparing the reference data and the portion of production data where the drift was identified, hence
 those belonging to the new concept. Notice that these reports are generated after a sufficient amount of samples has been collected after the drift.
+This is because the elements of the report need a significant number of samples to guarantee statistical reliability of the results.
 If the distribution moves back to the reference before enough samples are collected, the report might not be generated.
 
 Each report is composed of several entities, each providing a different perspective on the data and the drift occurred.

md-docs/user_guide/monitoring/index.md

Lines changed: 8 additions & 7 deletions
@@ -15,9 +15,9 @@ in turn can have a negative impact on the business.
 Monitoring, also known as __Drift Detection__ in the literature, refers the process of continuously tracking the performance of a model
 and the distribution of the data it is operating on.
 
-## How does the MLCube Platform perform Monitoring?
+## How does the ML cube Platform perform Monitoring?
 
-The MLCube platform performs monitoring by employing statistical techniques to compare a certain reference (for instance, data used for training or the performance of a model
+The ML cube platform performs monitoring by employing statistical techniques to compare a certain reference (for instance, data used for training or the performance of a model
 on the test set) to incoming production data. If a significant difference is detected, an alarm is raised, signaling that the monitored entity
 is drifting away from the expected behavior and that corrective actions should be taken.
 
@@ -34,14 +34,15 @@ and the [Detection Event Rule] sections respectively.
 ### Targets and Metrics
 
 After explaining why monitoring is so important in modern AI systems and detailing how it is performed in the ML cube Platform,
-we can introduce the concepts of Monitoring Targets and Monitoring Metrics. They both represent quantities that the MLCube Platform monitors, but they differ in their nature.
+we can introduce the concepts of Monitoring Targets and Monitoring Metrics. They both represent quantities that the ML cube Platform monitors, but they differ in their nature.
+They are both automatically defined by the ML cube platform based on the [Task] attributes, such as the Task type and the data structure.
 
 #### Monitoring Targets
 
 A Monitoring Target is a relevant entity involved in a [Task]. They represent the main quantities monitored by the platform, those whose
 variation can have a significant impact on the AI task success.
 
-The MLCube platform supports the following monitoring targets:
+The ML cube platform supports the following monitoring targets:
 
 - `INPUT`: the input distribution, $P(X)$.
 - `CONCEPT`: the joint distribution of input and target, $P(X, Y)$.
@@ -72,9 +73,9 @@ Nonetheless, the platform might not support yet all possible combinations. The t
 #### Monitoring Metrics
 
 A Monitoring Metric is a generic quantity that can be computed on a Monitoring Target. They enable the monitoring of specific
-aspects of a target, which might help in identifying the root cause of a drift, as well as defining the corrective actions to be taken.
+aspects of an entity, which might help in identifying the root cause of a drift, as well as defining the corrective actions to be taken.
 
-The following table display the monitoring metrics supported, along with their monitoring target and the conditions
+The following table displays the monitoring metrics supported, along with their monitoring target and the conditions
 under which they are actually monitored. Notice that also this table is subject to changes, as new metrics will be added.
 
 | **Monitoring Metric** | Description | **Monitoring Target** | **Conditions** |
@@ -83,7 +84,7 @@ under which they are actually monitored. Notice that also this table is subject
 | TEXT_EMOTION | The emotion of the text | INPUT, USER_INPUT | When the data structure is text |
 | TEXT_SENTIMENT | The sentiment of the text | INPUT, USER_INPUT | When the data structure is text |
 | TEXT_LENGTH | The length of the text | INPUT, USER_INPUT, RETRIEVED_CONTEXT, PREDICTION | When the data structure is text |
-| MODEL_PERPLEXITY | The uncertainty of the LLM | PREDICTION | When the task type is RAG |
+| MODEL_PERPLEXITY | A measure of how well the LLM predicts the next words | PREDICTION | When the task type is RAG |
 | IMAGE_BRIGHTNESS | The brightness of the image | INPUT | When the data structure is image |
 | IMAGE_CONTRAST | The contrast of the image | INPUT | When the data structure is image |
 | BBOXES_AREA | The average area of the predicted bounding boxes | PREDICTION | When the task type is Object Detection |
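As a side note on the "How does the ML cube Platform perform Monitoring?" section touched above: the idea of comparing a reference distribution with incoming production data can be illustrated with a generic two-sample test. The sketch below uses a Kolmogorov-Smirnov test from SciPy purely as an illustration of that statistical-comparison idea; it is not the platform's actual detection algorithm, and the data and threshold are made up.

```python
# Generic illustration of reference-vs-production comparison (not the
# ML cube Platform's algorithm): a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # e.g. training-time feature values
production = rng.normal(loc=0.4, scale=1.0, size=1_000)  # incoming data with a shifted mean

statistic, p_value = ks_2samp(reference, production)

ALPHA = 0.01  # illustrative significance threshold
if p_value < ALPHA:
    print(f"Drift detected (KS statistic={statistic:.3f}, p-value={p_value:.2e})")
else:
    print("No significant difference between reference and production data")
```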
