
Commit 18cf67a

edit ht
1 parent 23e2c05 commit 18cf67a

File tree

1 file changed: +45 −25 lines changed
---
title: "Improve performance in Azure AI Content Safety"
titleSuffix: Azure AI services
description: Learn techniques to improve the performance of Azure AI Content Safety models by handling false positives and false negatives.
#services: cognitive-services
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.topic: how-to
ms.date: 09/18/2024
ms.author: pafarley
#customer intent: As a user, I want to improve the performance of Azure AI Content Safety so that I can ensure accurate content moderation.
---

# Improve performance in Azure AI Content Safety

This guide provides a step-by-step process for handling false positives and false negatives from Azure AI Content Safety models.

False positives occur when the system incorrectly flags non-harmful content as harmful; false negatives occur when harmful content isn't flagged as harmful. Address these instances to ensure the integrity and reliability of your content moderation process, including responsible generative AI deployment.

## Prerequisites

* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see [Region availability](/azure/ai-services/content-safety/overview#region-availability)), and supported pricing tier. Then select **Create**.

## Review and verification

Conduct an initial assessment to confirm that the flagged content is really a false positive or false negative. This can involve:
- Checking the context of the flagged content.
- Comparing the flagged content against the content safety risk categories and severity definitions:
    - If you're using content safety in Azure OpenAI, see the [content filtering documentation](/azure/ai-services/openai/concepts/content-filter).
    - If you're using the Azure AI Content Safety standalone API, see [Harm categories](/azure/ai-services/content-safety/concepts/harm-categories?tabs=warning) and [Prompt Shields](/azure/ai-services/content-safety/concepts/jailbreak-detection).
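
To ground this comparison, you can rerun the flagged text through the text analysis API and inspect the severity level the service assigned to each harm category. The following Python sketch calls the REST endpoint directly; the environment variable names are placeholders for your own resource's endpoint and key.

```python
import os
import requests

# Placeholders: point these at your own Content Safety resource.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
key = os.environ["CONTENT_SAFETY_KEY"]

url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {
    "text": "<the flagged content>",
    "outputType": "FourSeverityLevels",  # returns severities 0, 2, 4, 6
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()

# Compare each returned severity against the published severity definitions
# to judge whether the flag is a true or false positive.
for result in response.json()["categoriesAnalysis"]:
    print(f"{result['category']}: severity {result['severity']}")
```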

## Customize your severity settings

If your assessment confirms that you found a false positive or false negative, you can try customizing your severity settings to mitigate the issue. The settings depend on which platform you're using.

#### [Content Safety standalone API](#tab/standalone-api)

If you're using the Azure AI Content Safety standalone API directly, try experimenting with the severity threshold at different levels for [harm categories](/azure/ai-services/content-safety/concepts/harm-categories?tabs=definitions) based on the API output. Alternatively, if you prefer a no-code approach, you can try out those settings in [Content Safety Studio](https://contentsafety.cognitive.azure.com/) or on Azure AI Studio's [Content Safety page](https://ai.azure.com/explore/contentsafety). For instructions, see the [Content Safety quickstart](/azure/ai-studio/quickstarts/content-safety?tabs=moderate-text-content).
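
For example, if benign content in your domain consistently comes back at severity 2 for one category, raising your application's blocking threshold for just that category can reduce false positives. The following sketch shows this client-side thresholding applied to the `categoriesAnalysis` output of the analyze call shown earlier; the threshold values are hypothetical and should be tuned against your own traffic.

```python
# Hypothetical per-category thresholds: content is blocked only when the
# returned severity meets or exceeds the threshold. Raising a threshold
# reduces false positives; lowering it reduces false negatives.
THRESHOLDS = {"Hate": 4, "SelfHarm": 4, "Sexual": 4, "Violence": 2}

def should_block(categories_analysis: list[dict]) -> bool:
    """Decide whether to block, given the categoriesAnalysis array
    returned by the text:analyze call."""
    return any(
        item["severity"] >= THRESHOLDS.get(item["category"], 2)
        for item in categories_analysis
    )

# Example with a response shaped like the API output:
sample = [{"category": "Violence", "severity": 2}, {"category": "Hate", "severity": 0}]
print(should_block(sample))  # True: Violence meets its threshold of 2
```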

In addition to adjusting the severity levels for false negatives, you can also use blocklists. For more information on using blocklists for text moderation, see [Use blocklists for text moderation](/azure/ai-services/content-safety/how-to/use-blocklist?tabs=windows%2Crest).
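
As a sketch of how a blocklist fits into the same call, you can pass the names of blocklists you've already created in the analyze request and check the response for matches. The blocklist name below is a placeholder; create and populate the list first as described in the linked article.

```python
import os
import requests

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {
    "text": "<content that slipped past the classifiers>",
    "blocklistNames": ["MyBlocklist"],  # placeholder: a blocklist you created earlier
    "haltOnBlocklistHit": True,  # skip category analysis when a blocklist term matches
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()

# Matched blocklist terms are reported separately from the category results.
for match in response.json().get("blocklistsMatch", []):
    print(f"Blocklist {match['blocklistName']} matched: {match['blocklistItemText']}")
```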

#### [Azure OpenAI](#tab/azure-openai-studio)

Read the [Configurability](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#configurability-preview) documentation, as some content filtering configurations may require approval through the process mentioned there.

Follow the steps in [How to use content filters (preview) with Azure OpenAI Service](/azure/ai-services/openai/how-to/content-filters) to update your configurations to handle false positives or negatives.

In addition to adjusting the severity levels for false negatives, you can also use blocklists. Detailed instructions can be found in [How to use blocklists with Azure OpenAI Service](/azure/ai-services/openai/how-to/use-blocklists).
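
Although the filter configuration itself is managed in the portal, each Azure OpenAI response also carries content filtering annotations, which are useful evidence when you document a false positive or negative. Below is a minimal sketch of reading them over REST, assuming a chat completions deployment; the deployment name is a placeholder.

```python
import os
import requests

endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://<resource>.openai.azure.com
key = os.environ["AZURE_OPENAI_KEY"]
deployment = "my-gpt-deployment"  # placeholder: your chat model deployment name

url = f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version=2024-02-01"
headers = {"api-key": key, "Content-Type": "application/json"}
body = {"messages": [{"role": "user", "content": "<the prompt that was filtered>"}]}

resp = requests.post(url, headers=headers, json=body)
data = resp.json()

if resp.status_code == 400 and data.get("error", {}).get("code") == "content_filter":
    # The prompt itself was blocked; the error body carries the filter details.
    print("Prompt blocked:", data["error"])
else:
    # Successful responses annotate the prompt and each completion with
    # per-category filter results (severity and whether it was filtered).
    for prompt_result in data.get("prompt_filter_results", []):
        print("Prompt annotations:", prompt_result["content_filter_results"])
    for choice in data.get("choices", []):
        print("Completion annotations:", choice.get("content_filter_results"))
```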

#### [Azure AI Studio](#tab/azure-ai-studio)

Read the [Configurability](/azure/ai-studio/concepts/content-filtering#configurability-preview) documentation, as some content filtering configurations may require approval through the process mentioned there.

Follow the steps in [Azure AI Studio content filtering](/azure/ai-studio/concepts/content-filtering#create-a-content-filter) to update your configurations to handle false positives or negatives.

In addition to adjusting the severity levels for false negatives, you can also use blocklists. Detailed instructions can be found in [Azure AI Studio content filtering](/azure/ai-studio/concepts/content-filtering#use-a-blocklist-as-a-filter).

---

## Create a custom category based on your own RAI policy

Sometimes you might need to create a custom category to ensure the filtering aligns with your specific Responsible AI policy, as prebuilt categories or content filtering may not be enough.

Refer to the [Custom categories documentation](/azure/ai-services/content-safety/concepts/custom-categories) to build your own categories with the Azure AI Content Safety API.

## Document issues and send feedback to Azure

If, after you've tried all the preceding steps, Azure AI Content Safety still can't resolve the false positives or negatives, there's likely a policy definition or model issue that needs further attention.

Document the details of the false positives and/or false negatives by providing the following information to the [Content safety support team](mailto:[email protected]):
- Description of the flagged content.
- Context in which the content was posted.
- Reason given by Azure AI Content Safety for the flagging (for false positives).
- Explanation of why the content is a false positive or negative.
- Any mitigations already attempted, such as adjusting severity settings or using custom categories.
- Screenshots or logs of the flagged content and system responses.

This documentation will help in escalating the issue to the appropriate teams for resolution.
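
One lightweight way to capture most of these details is to save each disputed case as a structured record at moderation time. The sketch below appends cases to a JSON Lines file; the record shape and file name are illustrative, not a required format.

```python
import json
from datetime import datetime, timezone

def log_disputed_case(text: str, context: str, api_response: dict,
                      dispute_reason: str, mitigations: str,
                      path: str = "content_safety_disputes.jsonl") -> None:
    """Append one false-positive/false-negative case, including the raw
    service response, to a file you can attach when escalating."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "flagged_content": text,
        "context": context,
        "service_response": api_response,  # reason/severity given by the service
        "why_disputed": dispute_reason,    # why you believe it's a false positive/negative
        "mitigations_attempted": mitigations,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```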

## Related content

- [Azure AI Content Safety overview](/azure/ai-services/content-safety/overview)
- [Harm categories](/azure/ai-services/content-safety/concepts/harm-categories?tabs=warning)
