---
title: "Improve performance in Azure AI Content Safety"
titleSuffix: Azure AI services
description: Learn how to handle false positives and false negatives in Azure AI Content Safety.
#services: cognitive-services
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.topic: how-to
ms.date: 09/18/2024
ms.author: pafarley
---

# Improve performance in Azure AI Content Safety
This playbook provides a step-by-step guide for users of Azure AI Content Safety on how to handle false positives and false negatives effectively. False positives occur when the system incorrectly flags non-harmful content as harmful; false negatives occur when harmful content isn't flagged. Addressing these instances properly maintains the integrity and reliability of your content moderation process, including in responsible generative AI deployments.

## Review and verification
Conduct an initial assessment to determine whether the flagged content is indeed a false positive or false negative. This can involve:
- Checking the context of the flagged content.
- Comparing the flagged content against the content safety risk categories and severity definitions.
- If you're using content safety in Azure OpenAI, see [Azure OpenAI content filtering](/azure/ai-services/openai/concepts/content-filter).
- If you're using the Azure AI Content Safety standalone API, see [Harm categories](/azure/ai-services/content-safety/concepts/harm-categories) and [Prompt Shields](/azure/ai-services/content-safety/concepts/jailbreak-detection).

## Customize your severity settings
If your assessment confirms that you have a false positive or false negative, try customizing your severity settings to mitigate the issue before reaching out to Microsoft.

### Azure AI Content Safety standalone API users
If you're using the Azure AI Content Safety standalone API, experiment with setting the severity threshold at different levels for [harm categories](/azure/ai-services/content-safety/concepts/harm-categories?tabs=definitions) based on the API output. Alternatively, if you prefer a no-code approach, you can try out those settings in [Content Safety Studio](https://contentsafety.cognitive.azure.com/) or Azure AI Studio's [content safety page](https://ai.azure.com/explore/contentsafety); instructions are in the [Azure AI Studio content safety quickstart](/azure/ai-studio/quickstarts/content-safety?tabs=moderate-text-content).
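
To make the threshold adjustment concrete, here's a minimal sketch that applies your own per-category thresholds to the API output. It assumes the `azure-ai-contentsafety` Python SDK; the endpoint, key, sample text, and threshold values are placeholders to replace with your own.

```python
# pip install azure-ai-contentsafety
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key: replace with your resource's values.
client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    AzureKeyCredential("<your-key>"),
)

# Example thresholds per category. The default output uses severity levels
# 0, 2, 4, and 6; content at or above the threshold is rejected here.
# Raise a threshold to reduce false positives; lower it to reduce false negatives.
thresholds = {"Hate": 4, "SelfHarm": 4, "Sexual": 2, "Violence": 4}

response = client.analyze_text(AnalyzeTextOptions(text="<text to moderate>"))

for result in response.categories_analysis:
    threshold = thresholds.get(str(result.category), 4)
    if result.severity is not None and result.severity >= threshold:
        print(f"Reject: {result.category} severity {result.severity} >= {threshold}")
```

Raising a category's threshold reduces false positives at the cost of more false negatives, and lowering it does the reverse, so tune each category against your own sample set.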
In addition to adjusting the severity levels for false negatives, you can also use blocklists. More information on using blocklists for text moderation can be found in [Use blocklists for text moderation](/azure/ai-services/content-safety/how-to/use-blocklist?tabs=windows%2Crest).
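
Here's a minimal sketch of that flow with the same Python SDK: create a blocklist, add the terms that were missed as false negatives, and include the blocklist in your analysis calls. The blocklist name and term are illustrative placeholders.

```python
# pip install azure-ai-contentsafety
from azure.ai.contentsafety import BlocklistClient, ContentSafetyClient
from azure.ai.contentsafety.models import (
    AddOrUpdateTextBlocklistItemsOptions,
    AnalyzeTextOptions,
    TextBlocklist,
    TextBlocklistItem,
)
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<your-key>")

# Create (or update) a blocklist, then add the terms the classifiers missed.
blocklist_client = BlocklistClient(endpoint, credential)
blocklist_client.create_or_update_text_blocklist(
    blocklist_name="MyBlocklist",
    options=TextBlocklist(
        blocklist_name="MyBlocklist", description="Terms missed as false negatives"
    ),
)
blocklist_client.add_or_update_blocklist_items(
    blocklist_name="MyBlocklist",
    options=AddOrUpdateTextBlocklistItemsOptions(
        blocklist_items=[TextBlocklistItem(text="<term to block>")]
    ),
)

# Analyze text against both the classifiers and the blocklist.
client = ContentSafetyClient(endpoint, credential)
response = client.analyze_text(
    AnalyzeTextOptions(text="<text to moderate>", blocklist_names=["MyBlocklist"])
)
for match in response.blocklists_match or []:
    print(f"Blocklist hit: {match.blocklist_item_text}")
```

Keep in mind that edits to a blocklist can take a few minutes to propagate before they affect analysis results.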
### Azure AI Studio content filtering users
Read the [Configurability](/azure/ai-studio/concepts/content-filtering#configurability-preview) documentation first, because some content filtering configurations might require approval through the process described there.

Then follow the steps in [Azure AI Studio content filtering](/azure/ai-studio/concepts/content-filtering#create-a-content-filter) to update your configurations to handle false positives or false negatives.

In addition to adjusting the severity levels for false negatives, you can also use blocklists. Detailed instructions can be found in [Azure AI Studio content filtering](/azure/ai-studio/concepts/content-filtering#use-a-blocklist-as-a-filter).
## Create a custom category based on your own RAI policy
Sometimes, the prebuilt categories or content filtering might not suffice, and you need an entirely new content category to ensure that filtering aligns with your specific Responsible AI policy.

Refer to the [custom categories documentation](/azure/ai-services/content-safety/custom-category) to build your own categories with the Azure AI Content Safety standalone API.
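
As a rough illustration of the flow (define a category, build it, then analyze against it), here's a sketch using plain REST calls. The paths, operation names, and `api-version` below are assumptions based on the preview API; verify them against the custom categories documentation before use.

```python
# pip install requests
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
params = {"api-version": "2024-09-15-preview"}  # assumed preview version; verify

# 1. Define the category from your RAI policy, with labeled samples in blob storage.
requests.put(
    f"{endpoint}/contentsafety/text/categories/<your-category>",
    params=params, headers=headers,
    json={
        "categoryName": "<your-category>",
        "definition": "<one-sentence definition from your RAI policy>",
        "sampleBlobUrl": "https://<storage>.blob.core.windows.net/<container>/samples.jsonl",
    },
)

# 2. Trigger a build of the category (assumed operation name; builds take time).
requests.post(
    f"{endpoint}/contentsafety/text/categories/<your-category>:build",
    params=params, headers=headers,
)

# 3. Analyze text against the built custom category (assumed operation name).
resp = requests.post(
    f"{endpoint}/contentsafety/text:analyzeCustomCategory",
    params=params, headers=headers,
    json={"text": "<text to moderate>", "categoryName": "<your-category>", "version": 1},
)
print(resp.json())
```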
We're currently working on integrating the custom categories feature into Azure OpenAI and Azure AI Studio; it will be available soon.
## Document issues and send feedback to Azure
If you've exhausted all the steps above and the false positives or false negatives still can't be resolved, the problem is likely a policy definition or model issue that needs further attention.

Document the details of each false positive and/or false negative by providing the following information (a capture template is sketched after this list):
- Description of the flagged content.
- Context in which the content was posted.
- Reason given by Azure AI Content Safety for the flagging.
- Explanation of why the content is a false positive or negative.
- Any adjustments already attempted in severity settings or custom categories.
- Screenshots or logs of the flagged content and system responses.
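
To keep escalation reports consistent, you could capture these details in a structured record. The sketch below is a hypothetical template for your own tracking, not an Azure API.

```python
# A hypothetical escalation-record template; adapt the fields to your process.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModerationEscalation:
    flagged_content: str                 # description of the flagged content
    context: str                         # where and how the content was posted
    flagging_reason: str                 # category/severity returned by the service
    why_incorrect: str                   # why it's a false positive or negative
    adjustments_tried: list = field(default_factory=list)  # settings already attempted
    evidence: list = field(default_factory=list)           # screenshot or log paths

record = ModerationEscalation(
    flagged_content="User comment quoting a video game",
    context="Public product-review forum, reply thread",
    flagging_reason="Violence, severity 4",
    why_incorrect="The quote is fictional dialogue, not a real threat",
    adjustments_tried=["Raised the Violence threshold from 2 to 4"],
    evidence=["logs/request-2024-09-18.json"],
)
print(json.dumps(asdict(record), indent=2))
```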
This documentation helps escalate the issue to the appropriate teams for resolution.

Send the feedback to Azure Customer Service and Support (CSS) by following the instructions [here](tbd).
