---
title: How to use Risks & Safety monitoring in Azure OpenAI Studio
titleSuffix: Azure OpenAI Service
description: Learn how to check statistics and insights from your Azure OpenAI content filtering activity.
author: PatrickFarley
ms.date: 03/19/2024
manager: nitinme
---

# Use Risks & Safety monitoring in Azure OpenAI Studio (preview)

When you use an Azure OpenAI model deployment with a content filter, you may want to check the results of the filtering activity. You can use that information to further adjust your filter configuration to serve your specific business needs and meet Responsible AI principles.

[Azure OpenAI Studio](https://oai.azure.com/) provides a Risks & Safety monitoring dashboard for each of your deployments that uses a content filter configuration.

## Access Risks & Safety monitoring

To access Risks & Safety monitoring, you need an Azure OpenAI resource in one of the supported Azure regions: East US, Switzerland North, France Central, Sweden Central, or Canada East. You also need a model deployment that uses a content filter configuration.
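
As a trivial illustration of the region requirement, here's a sketch of a pre-flight check against the list in this article (the region-name normalization and function name are illustrative, not an official SDK call):

```python
# Regions this article lists as supporting Risks & Safety monitoring.
SUPPORTED_REGIONS = {"eastus", "switzerlandnorth", "francecentral",
                     "swedencentral", "canadaeast"}

def supports_monitoring(region: str) -> bool:
    """Normalize a display name like 'East US' and check support."""
    return region.replace(" ", "").lower() in SUPPORTED_REGIONS

print(supports_monitoring("East US"))      # True
print(supports_monitoring("West Europe"))  # False
```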
Go to [Azure OpenAI Studio](https://oai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. Select the **Deployments** tab on the left and then select your model deployment from the list. On the deployment's page, select the **Risks & Safety** tab at the top.

Content filtering data is shown in the following ways:

- **Severity distribution by category**: This view shows the severity levels detected for each harm category, across the whole selected time range. This is not limited to _blocked_ content but rather includes all content that was detected as harmful.
- **Severity rate distribution over time by category**: This view shows the rates of detected severity levels over time, for each harm category. Select the tabs to switch between supported categories.

:::image type="content" source="../media/how-to/content-detection.png" alt-text="Screenshot of the content detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/content-detection.png":::
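
To make the two views concrete: for each time bucket and harm category, a severity rate is simply the share of requests detected at that severity level. A rough sketch with made-up counts (the dashboard computes this from your real traffic):

```python
# Hypothetical daily detection counts for one harm category.
daily_counts = {
    "2024-03-01": {"safe": 950, "low": 30, "medium": 15, "high": 5},
    "2024-03-02": {"safe": 900, "low": 60, "medium": 25, "high": 15},
}

def severity_rates(counts: dict) -> dict:
    """Share of requests at each severity level for one time bucket."""
    total = sum(counts.values())
    return {severity: n / total for severity, n in counts.items()}

for day, counts in daily_counts.items():
    print(day, {s: f"{r:.1%}" for s, r in severity_rates(counts).items()})
```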
### Recommended actions
### Set up your Azure Data Explorer database

To protect the privacy of user information and manage permissions on your data, you can bring your own storage to store potentially abusive user detection insights in a compliant way and with full control. Follow these steps to enable it:

1. In Azure OpenAI Studio, navigate to the model deployment for which you'd like to set up user abuse analysis, and select **Add a data store**.
1. Fill in the required information and select **add**. We recommend that you create a new database to store the analysis results.
1. After you connect the data store, take the following steps to grant permission:
    1. Go to your Azure OpenAI resource's page in the Azure portal, and choose the **Identity** tab.

The potentially abusive user detection relies on the user information that customers send with their Azure OpenAI API calls. It reports the following insights:

- **Total abuse request ratio/count**
- **Abuse ratio/count by category**

:::image type="content" source="../media/how-to/potentially-abusive-user.png" alt-text="Screenshot of the Potentially abusive user detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/potentially-abusive-user.png":::
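
Because these reports aggregate by the user IDs you send with your requests, each API call should carry a stable user identifier in the chat completions `user` field. A minimal sketch; the SHA-256 pseudonymization scheme and the IDs shown are illustrative assumptions, not a service requirement:

```python
import hashlib

def pseudonymize(internal_user_id: str) -> str:
    """Hash the internal ID so no raw user identifier leaves your system."""
    return hashlib.sha256(internal_user_id.encode("utf-8")).hexdigest()

# Request body for the chat completions API; the "user" value is what the
# potentially abusive user reports above aggregate on.
payload = {
    "messages": [{"role": "user", "content": "Hello"}],
    "user": pseudonymize("customer-42"),
}
print(len(payload["user"]))  # 64 hex characters
```

A hashed ID stays consistent across requests, so per-user abuse ratios remain meaningful without exposing the underlying identity.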