
Commit ac27851: wording
1 parent 484a9e8

1 file changed: +8 −8 lines changed


articles/ai-services/openai/how-to/risks-safety-monitor.md

Lines changed: 8 additions & 8 deletions
@@ -1,5 +1,5 @@
 ---
-title: How to use the Risk & Safety monitor in OpenAI Studio
+title: How to use Risks & Safety monitoring in Azure OpenAI Studio
 titleSuffix: Azure OpenAI Service
 description: Learn how to check statistics and insights from your Azure OpenAI content filtering activity.
 author: PatrickFarley
@@ -10,15 +10,15 @@ ms.date: 03/19/2024
 manager: nitinme
 ---
 
-# Use the Risks & Safety monitor in OpenAI Studio (preview)
+# Use Risks & Safety monitoring in Azure OpenAI Studio (preview)
 
 When you use an Azure OpenAI model deployment with a content filter, you may want to check the results of the filtering activity. You can use that information to further adjust your filter configuration to serve your specific business needs and meet Responsible AI principles.
 
-[Azure OpenAI Studio](https://oai.azure.com/) provides a Risks & Safety dashboard for each of your deployments that uses a content filter configuration.
+[Azure OpenAI Studio](https://oai.azure.com/) provides a Risks & Safety monitoring dashboard for each of your deployments that uses a content filter configuration.
 
-## Access the Risks & Safety monitor
+## Access Risks & Safety monitoring
 
-To access the Risks & Safety monitor, you need an Azure OpenAI resource in one of the supported Azure regions: East US, Switzerland North, France Central, Sweden Central, Canada East. You also need a model deployment that uses a content filter configuration.
+To access Risks & Safety monitoring, you need an Azure OpenAI resource in one of the supported Azure regions: East US, Switzerland North, France Central, Sweden Central, Canada East. You also need a model deployment that uses a content filter configuration.
 
 Go to [Azure OpenAI Studio](https://oai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. Select the **Deployments** tab on the left and then select your model deployment from the list. On the deployment's page, select the **Risks & Safety** tab at the top.
 
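The article text in the hunk above describes reviewing content filtering activity through the dashboard. For context, individual Azure OpenAI API responses also carry per-category content filter annotations, which can be tallied locally. A minimal sketch, assuming the documented `content_filter_results` shape (`{category: {"filtered": bool, "severity": str}}`); the sample data is illustrative, not real API output:

```python
# Sketch: tally per-category content filter annotations from Azure OpenAI
# responses. The response dicts below are illustrative samples, not real
# service output.
from collections import Counter

SEVERITIES = ["safe", "low", "medium", "high"]

def tally_filter_results(responses):
    """Count detections (severity above 'safe') and blocks per harm category."""
    detected, blocked = Counter(), Counter()
    for resp in responses:
        for category, result in resp.get("content_filter_results", {}).items():
            if result.get("severity") in SEVERITIES[1:]:
                detected[category] += 1
            if result.get("filtered"):
                blocked[category] += 1
    return detected, blocked

sample = [
    {"content_filter_results": {
        "hate": {"filtered": False, "severity": "safe"},
        "violence": {"filtered": True, "severity": "medium"},
    }},
    {"content_filter_results": {
        "hate": {"filtered": False, "severity": "low"},
        "violence": {"filtered": False, "severity": "safe"},
    }},
]
detected, blocked = tally_filter_results(sample)
print(detected)  # Counter({'violence': 1, 'hate': 1})
print(blocked)   # Counter({'violence': 1})
```

Distinguishing detections from blocks mirrors the dashboard's note that severity views include all harmful content detected, not only content that was blocked.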
@@ -35,7 +35,7 @@ Content filtering data is shown in the following ways:
 - **Severity distribution by category**: This view shows the severity levels detected for each harm category, across the whole selected time range. This is not limited to _blocked_ content but rather includes all content that was detected as harmful.
 - **Severity rate distribution over time by category**: This view shows the rates of detected severity levels over time, for each harm category. Select the tabs to switch between supported categories.
 
-:::image type="content" source="../media/how-to/content-detection.png" alt-text="Screenshot of the content detection pane in the Risks & Safety monitor." lightbox="../media/how-to/content-detection.png":::
+:::image type="content" source="../media/how-to/content-detection.png" alt-text="Screenshot of the content detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/content-detection.png":::
 
 ### Recommended actions
 
@@ -56,7 +56,7 @@ To use Potentially abusive user detection, you need:
 ### Set up your Azure Data Explorer database
 
 In order to protect the data privacy of user information and manage the permission of the data, we support the option for our customers to bring their own storage to store potentially abusive user detection insights in a compliant way and with full control. Follow these steps to enable it:
-1. In OpenAI Studio, navigate to the model deployment that you'd like to set up user abuse analysis with, and select **Add a data store**.
+1. In Azure OpenAI Studio, navigate to the model deployment that you'd like to set up user abuse analysis with, and select **Add a data store**.
 1. Fill in the required information and select **add**. We recommend you create a new database to store the analysis results.
 1. After you connect the data store, take the following steps to grant permission:
     1. Go to your Azure OpenAI resource's page in the Azure portal, and choose the **Identity** tab.
@@ -78,7 +78,7 @@ The potentially abusive user detection relies on the user information that custo
 - **Total abuse request ratio/count**
 - **Abuse ratio/count by category**
 
-:::image type="content" source="../media/how-to/potentially-abusive-user.png" alt-text="Screenshot of the Potentially abusive user detection pane in the Risks & Safety monitor." lightbox="../media/how-to/potentially-abusive-user.png":::
+:::image type="content" source="../media/how-to/potentially-abusive-user.png" alt-text="Screenshot of the Potentially abusive user detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/potentially-abusive-user.png":::
 
 ### Recommended actions
 
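The hunk above lists the report's metrics: total abuse request ratio/count and abuse ratio/count by category. As an illustration of how such ratios are typically derived, a sketch over a hypothetical record shape (`user_id`, `abusive`, `category` are stand-in field names, not the service's actual schema):

```python
# Illustrative only: compute abuse request ratio/count overall and per harm
# category. The record shape here is a hypothetical stand-in for the
# report's underlying data, not the service's actual schema.
from collections import defaultdict

def abuse_metrics(records):
    """Return total and per-category abuse counts and ratios."""
    total = len(records)
    abusive = [r for r in records if r["abusive"]]
    by_category = defaultdict(int)
    for r in abusive:
        by_category[r["category"]] += 1
    return {
        "total_count": len(abusive),
        "total_ratio": len(abusive) / total if total else 0.0,
        "by_category": {
            c: {"count": n, "ratio": n / total} for c, n in by_category.items()
        },
    }

records = [
    {"user_id": "u1", "abusive": True, "category": "hate"},
    {"user_id": "u1", "abusive": False, "category": None},
    {"user_id": "u2", "abusive": True, "category": "violence"},
    {"user_id": "u3", "abusive": False, "category": None},
]
print(abuse_metrics(records)["total_ratio"])  # 0.5
```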