---
title: How to use Risks & Safety monitoring in Azure AI Foundry
titleSuffix: Azure OpenAI Service
description: Learn how to check statistics and insights from your Azure OpenAI content filtering activity.
author: PatrickFarley
ms.author: pafarley
ms.service: azure-ai-openai
ms.topic: how-to
ms.date: 12/05/2024
manager: nitinme
---
# Use Risks & Safety monitoring in Azure AI Foundry (preview)
When you use an Azure OpenAI model deployment with a content filter, you may want to check the results of the filtering activity. You can use that information to further adjust your [filter configuration](/azure/ai-services/openai/how-to/content-filters) to serve your specific business needs and Responsible AI principles.
[Azure AI Foundry](https://oai.azure.com/) provides a Risks & Safety monitoring dashboard for each of your deployments that uses a content filter configuration.
## Access Risks & Safety monitoring
To access Risks & Safety monitoring, you need an Azure OpenAI resource in one of the supported Azure regions: East US, Switzerland North, France Central, Sweden Central, Canada East. You also need a model deployment that uses a content filter configuration.
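If you're automating deployment checks, the region requirement above can be encoded in a small helper. A minimal sketch, assuming normalized region names; the helper name is hypothetical, and the region list is the one stated here, which may change over time:

```python
# Regions listed above as supporting Risks & Safety monitoring.
# NOTE: this list is copied from the text and may change; confirm it against
# current Azure documentation before relying on it.
SUPPORTED_REGIONS = {"eastus", "switzerlandnorth", "francecentral", "swedencentral", "canadaeast"}

def supports_risks_safety_monitoring(region: str) -> bool:
    """Return True if an Azure region (display name or short name) is in the supported set."""
    return region.strip().lower().replace(" ", "") in SUPPORTED_REGIONS

print(supports_risks_safety_monitoring("Sweden Central"))  # True
print(supports_risks_safety_monitoring("West US"))         # False
```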
Go to [Azure AI Foundry](https://oai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. Select a project, then select the **Models + endpoints** tab on the left and choose your model deployment from the list. On the deployment's page, select the **Metrics** tab at the top, then select **Open in Azure Monitor** to view the full report in the Azure portal.
## Configure metrics
### Report description
Content filtering data is shown in the following ways:
- **Severity distribution by category**: This view shows the severity levels detected for each harm category, across the whole selected time range. This is not limited to _blocked_ content but rather includes all content that was flagged by the content filters.
- **Severity rate distribution over time by category**: This view shows the rates of detected severity levels over time, for each harm category. Select the tabs to switch between supported categories.
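As a rough sketch of how the first view aggregates data, here is a minimal example assuming a simplified, hypothetical record shape; the real annotations come from the service, which aggregates them for you:

```python
from collections import Counter, defaultdict

# Hypothetical content-filter annotations; the field names are illustrative only.
annotations = [
    {"category": "hate",     "severity": "low",    "blocked": False},
    {"category": "hate",     "severity": "high",   "blocked": True},
    {"category": "violence", "severity": "medium", "blocked": True},
    {"category": "hate",     "severity": "low",    "blocked": False},
]

# Severity distribution by category: count ALL flagged content, not only blocked content.
distribution: dict[str, Counter] = defaultdict(Counter)
for record in annotations:
    distribution[record["category"]][record["severity"]] += 1

print(dict(distribution["hate"]))  # {'low': 2, 'high': 1}
```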
<!--
:::image type="content" source="../media/how-to/content-detection.png" alt-text="Screenshot of the content detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/content-detection.png":::
-->
### Recommended actions
### Set up your Azure Data Explorer database
To protect the privacy of user information and manage permissions on the data, you can bring your own storage so that the detailed potentially abusive user detection insights (including the user GUID and statistics on harmful requests by category) are stored in a compliant way and under your full control. Follow these steps to enable it:
1. In Azure AI Foundry, navigate to the model deployment that you'd like to set up user abuse analysis with, and select **Add a data store**.
1. Fill in the required information and select **Save**. We recommend you create a new database to store the analysis results.
1. After you connect the data store, take the following steps to grant permission to write analysis results to the connected database:
    1. Go to your Azure OpenAI resource's page in the Azure portal, and choose the **Identity** tab.
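Granting write access in Azure Data Explorer is done with a `.add database` management command. A hedged sketch of composing one is below; the database name, application ID, tenant, and the `ingestors` role default are all placeholder assumptions, so substitute whatever role and principal your setup actually requires:

```python
def build_grant_command(database: str, app_id: str, tenant: str, role: str = "ingestors") -> str:
    """Compose an Azure Data Explorer management command that grants a
    database role to an Azure AD application principal. The 'ingestors'
    default is an assumption; use the role your scenario requires."""
    return f".add database ['{database}'] {role} ('aadapp={app_id};{tenant}')"

# Placeholder values for illustration only.
cmd = build_grant_command("abuse-analysis", "00000000-0000-0000-0000-000000000000", "contoso.com")
print(cmd)
```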
- **Total abuse request ratio/count**
- **Abuse ratio/count by category**
<!--
:::image type="content" source="../media/how-to/potentially-abusive-user.png" alt-text="Screenshot of the Potentially abusive user detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/potentially-abusive-user.png":::
-->
### Recommended actions
Combine this data with enriched signals to validate whether the detected users are truly abusive. If they are, take responsive action, such as throttling or suspending the user, to ensure the responsible use of your application.
## Next steps
Next, create or edit a content filter configuration in Azure AI Foundry.
- [Configure content filters with Azure OpenAI Service](/azure/ai-services/openai/how-to/content-filters)
0 commit comments