# Use Risks & Safety monitoring in Azure OpenAI Studio (preview)
When you use an Azure OpenAI model deployment with a content filter, you may want to check the results of the filtering activity. You can use that information to further adjust your filter configuration to serve your specific business needs and Responsible AI principles.
[Azure OpenAI Studio](https://oai.azure.com/) provides a Risks & Safety monitoring dashboard for each of your deployments that uses a content filter configuration.
## Content detection
The **Content detection** pane shows information about content filter activity. Your content filter configuration is applied as described in the [Content filtering documentation](/azure/ai-services/openai/how-to/content-filters).
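In addition to the dashboard, content filter results are returned as annotations on each API response. The sketch below summarizes those annotations from a response payload; the sample payload is illustrative (field names follow the content filter annotation format, but the values shown here are made up):

```python
# Sketch: summarize content-filter annotations from a chat completions
# response payload. The sample payload is illustrative; real responses
# include these fields when a content filter configuration is applied.
sample_response = {
    "prompt_filter_results": [
        {
            "prompt_index": 0,
            "content_filter_results": {
                "hate": {"filtered": False, "severity": "safe"},
                "sexual": {"filtered": False, "severity": "safe"},
                "self_harm": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": True, "severity": "medium"},
            },
        }
    ],
    "choices": [
        {
            "index": 0,
            "finish_reason": "content_filter",
            "content_filter_results": {
                "hate": {"filtered": False, "severity": "safe"},
                "sexual": {"filtered": False, "severity": "safe"},
                "self_harm": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": True, "severity": "high"},
            },
        }
    ],
}

def filtered_categories(results: dict) -> dict:
    """Return {category: severity} for every category that was filtered."""
    return {
        category: details["severity"]
        for category, details in results.items()
        if details.get("filtered")
    }

# Filter hits on the user input (prompt) and on the model output.
prompt_hits = filtered_categories(
    sample_response["prompt_filter_results"][0]["content_filter_results"]
)
completion_hits = filtered_categories(
    sample_response["choices"][0]["content_filter_results"]
)
print(prompt_hits)      # {'violence': 'medium'}
print(completion_hits)  # {'violence': 'high'}
```

Logging these per-request annotations alongside the dashboard gives you request-level detail that the aggregate views don't show.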
### Report description
Content filtering data is shown in the following ways:
- **Total blocked request count and block rate**: This view shows the global amount and rate of content that is filtered over time. It helps you understand trends of harmful requests from users and see any unexpected activity.
- **Blocked requests by category**: This view shows the amount of content blocked for each category. This is an all-up statistic of harmful requests across the selected time range. It currently supports the harm categories hate, sexual, self-harm, and violence.
- **Block rate over time by category**: This view shows the block rate for each category over time. It currently supports the harm categories hate, sexual, self-harm, and violence.
- **Severity distribution by category**: This view shows the severity levels detected for each harm category, across the whole selected time range. This is not limited to _blocked_ content but rather includes all content that was flagged by the content filters.
- **Severity rate distribution over time by category**: This view shows the rates of detected severity levels over time, for each harm category. Select the tabs to switch between supported categories.
:::image type="content" source="../media/how-to/content-detection.png" alt-text="Screenshot of the content detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/content-detection.png":::
### Recommended actions
Adjust your content filter configuration to further align with business needs and Responsible AI principles.
## Potentially abusive user detection
The **Potentially abusive user detection** pane uses user-level abuse reporting to show information about users whose behavior has resulted in blocked content. The goal is to help you get a view of the sources of harmful content so you can take responsive actions to ensure the model is being used in a responsible way.
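User-level detection depends on your application attributing each request to a user, which is done by sending a user identifier in the `user` field of the request body. A minimal sketch, assuming you hash your internal user IDs before sending them so raw identifiers never leave your system (the helper name and the internal ID format are hypothetical):

```python
import hashlib

def build_request_body(messages: list, internal_user_id: str) -> dict:
    """Build a chat completions request body that attributes the request
    to an opaque user ID via the 'user' field."""
    # Hash the internal ID so the raw identifier is never transmitted.
    opaque_id = hashlib.sha256(internal_user_id.encode("utf-8")).hexdigest()
    return {
        "messages": messages,
        "user": opaque_id,
    }

body = build_request_body(
    [{"role": "user", "content": "Hello"}],
    internal_user_id="customer-42",
)
print(body["user"])  # 64-character hex digest, stable per user
```

Using a stable hash (rather than a random value) is what lets repeated abuse from the same user be grouped together in the pane.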
<!--
To use Potentially abusive user detection, you need: