Commit 47f5c8a

cela updates
1 parent ac27851 commit 47f5c8a

File tree

1 file changed: +5 −5 lines changed

articles/ai-services/openai/how-to/risks-safety-monitor.md

Lines changed: 5 additions & 5 deletions
@@ -12,7 +12,7 @@ manager: nitinme
  # Use Risks & Safety monitoring in Azure OpenAI Studio (preview)

- When you use an Azure OpenAI model deployment with a content filter, you may want to check the results of the filtering activity. You can use that information to further adjust your filter configuration to serve your specific business needs and meet Responsible AI principles.
+ When you use an Azure OpenAI model deployment with a content filter, you may want to check the results of the filtering activity. You can use that information to further adjust your filter configuration to serve your specific business needs and Responsible AI principles.

  [Azure OpenAI Studio](https://oai.azure.com/) provides a Risks & Safety monitoring dashboard for each of your deployments that uses a content filter configuration.

@@ -24,26 +24,26 @@ Go to [Azure OpenAI Studio](https://oai.azure.com/) and sign in with the credent
  ## Content detection

- The **Content detection** pane shows information about content filter activity. Your content filter configuration is applied to both the user input and model output of LLM sessions.
+ The **Content detection** pane shows information about content filter activity. Your content filter configuration is applied as described in the [Content filtering documentation](/azure/ai-services/openai/how-to/content-filters).

  ### Report description

  Content filtering data is shown in the following ways:
  - **Total blocked request count and block rate**: This view shows a global view of the amount and rate of content that is filtered over time. This helps you understand trends of harmful requests from users and see any unexpected activity.
  - **Blocked requests by category**: This view shows the amount of content blocked for each category. This is an all-up statistic of harmful requests across the time range selected. It currently supports the harm categories hate, sexual, self-harm, and violence.
  - **Block rate over time by category**: This view shows the block rate for each category over time. It currently supports the harm categories hate, sexual, self-harm, and violence.
- - **Severity distribution by category**: This view shows the severity levels detected for each harm category, across the whole selected time range. This is not limited to _blocked_ content but rather includes all content that was detected as harmful.
+ - **Severity distribution by category**: This view shows the severity levels detected for each harm category, across the whole selected time range. This is not limited to _blocked_ content but rather includes all content that was flagged by the content filters.
  - **Severity rate distribution over time by category**: This view shows the rates of detected severity levels over time, for each harm category. Select the tabs to switch between supported categories.

  :::image type="content" source="../media/how-to/content-detection.png" alt-text="Screenshot of the content detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/content-detection.png":::

  ### Recommended actions

- Fine-tune your content filter configuration to further align with business needs and conform to your system's Responsible AI requirements.
+ Adjust your content filter configuration to further align with business needs and Responsible AI principles.

  ## Potentially abusive user detection

- The **Potentially abusive user detection** pane shows information about users whose behavior has resulted in blocked content. The goal is to help you get a view of the sources of harmful content so you can take responsive actions to ensure the model is being used in a responsible way.
+ The **Potentially abusive user detection** pane leverages user-level abuse reporting to show information about users whose behavior has resulted in blocked content. The goal is to help you get a view of the sources of harmful content so you can take responsive actions to ensure the model is being used in a responsible way.

  <!--
  To use Potentially abusive user detection, you need:
