articles/ai-services/content-safety/how-to/improve-performance.md

False positives are when the system incorrectly flags non-harmful content as harmful.
Conduct an initial assessment to confirm that the flagged content is really a false positive or false negative. This can involve:
- Checking the context of the flagged content.
- Comparing the flagged content against the content safety risk categories and severity definitions:
- If you're using content safety in Azure OpenAI, see the [Azure OpenAI content filtering doc](/azure/ai-services/openai/concepts/content-filter).
- If you're using the Azure AI Content Safety standalone API, see the [Harm categories doc](/azure/ai-services/content-safety/concepts/harm-categories?tabs=warning) or the [Prompt Shields doc](/azure/ai-services/content-safety/concepts/jailbreak-detection), depending on which API you're using.
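As a quick way to perform that initial assessment with the standalone API, you can re-submit the flagged text to the text analysis endpoint and inspect the severity it reports per harm category. The following is a minimal sketch, not a full client: the endpoint and key are placeholders, and the `max_severity` helper is our own illustration of reading the response.

```python
import json
import urllib.request

def analyze_text(endpoint: str, key: str, text: str) -> dict:
    """Call the Content Safety text:analyze REST API and return the raw JSON response."""
    url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"
    req = urllib.request.Request(
        url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": key,  # placeholder: your resource key
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def max_severity(analysis: dict) -> tuple:
    """Illustrative helper: pick the harm category with the highest reported severity."""
    worst = max(analysis["categoriesAnalysis"], key=lambda c: c["severity"])
    return worst["category"], worst["severity"]
```

Comparing the returned per-category severities against your configured thresholds helps you see whether a flag came from a borderline score in a single category or a clear match to the severity definitions.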
## Customize your severity settings
## Create a custom category based on your own RAI policy
Sometimes you might need to create a custom category to ensure the filtering aligns with your specific Responsible AI policy, as prebuilt categories or content filtering may not be enough.
Refer to the [Custom categories documentation](/azure/ai-services/content-safety/concepts/custom-categories) to build your own categories with the Azure AI Content Safety API.
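For illustration, creating a custom category amounts to naming the category, giving it a natural-language definition of your RAI policy, and pointing the service at your annotated samples. The sketch below is an assumption-laden outline, not a verified implementation: the API version, endpoint path, and field names should all be checked against the custom categories documentation before use.

```python
import json
import urllib.request

# Assumption: preview API version; confirm the current value in the documentation.
API_VERSION = "2024-09-15-preview"

def build_category_payload(name: str, definition: str, sample_blob_url: str) -> dict:
    """Assemble the JSON body describing a custom category (field names are assumptions)."""
    return {
        "categoryName": name,
        "definition": definition,       # natural-language statement of your RAI policy
        "sampleBlobUrl": sample_blob_url,  # blob URL of your annotated sample file
    }

def create_custom_category(endpoint: str, key: str, payload: dict) -> dict:
    """PUT the category definition to the service (endpoint path is an assumption)."""
    url = (f"{endpoint}/contentsafety/text/categories/"
           f"{payload['categoryName']}?api-version={API_VERSION}")
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Once the category is created and built, you would analyze text against it instead of (or in addition to) the prebuilt harm categories.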
## Document issues and send feedback to Azure
If, after you’ve tried all the steps mentioned above, Azure AI Content Safety still can't resolve the false positives or negatives, there is likely a policy definition or model issue that needs further attention.
Document the details of the false positives and/or false negatives by providing the following information to the [Content safety support team](mailto:[email protected]):
- Description of the flagged content.
- Any adjustments already attempted by adjusting severity settings or using custom categories.
- Screenshots or logs of the flagged content and system responses.
This documentation helps in escalating the issue to the appropriate teams for resolution.