Commit a052a1a

Commit message: acrolinx
Parent: d7f8111

File tree

1 file changed: 5 additions, 5 deletions


articles/ai-services/content-safety/how-to/improve-performance.md

Lines changed: 5 additions & 5 deletions
@@ -28,8 +28,8 @@ False positives are when the system incorrectly flags non-harmful content as har
 Conduct an initial assessment to confirm that the flagged content is really a false positive or false negative. This can involve:
 - Checking the context of the flagged content.
 - Comparing the flagged content against the content safety risk categories and severity definitions:
-    - If you are using content safety in Azure OpenAI, go [here](/azure/ai-services/openai/concepts/content-filter).
-    - If you are using the Azure AI Content Safety standalone API, go [here](/azure/ai-services/content-safety/concepts/harm-categories?tabs=warning) for harm categories and [here](/azure/ai-services/content-safety/concepts/jailbreak-detection) for Prompt Shields.
+    - If you're using content safety in Azure OpenAI, see the [Azure OpenAI content filtering doc](/azure/ai-services/openai/concepts/content-filter).
+    - If you're using the Azure AI Content Safety standalone API, see the [Harm categories doc](/azure/ai-services/content-safety/concepts/harm-categories?tabs=warning) or the [Prompt Shields doc](/azure/ai-services/content-safety/concepts/jailbreak-detection), depending on which API you're using.
 
 ## Customize your severity settings
 
@@ -62,13 +62,13 @@ In addition to adjusting the severity levels for false negatives, you can also u
 
 ## Create a custom category based on your own RAI policy
 
-Sometimes you might need to create a custom category to ensure the filtering aligns with your specific Responsible AI policy, as pre-built categories or content filtering may not be enough.
+Sometimes you might need to create a custom category to ensure the filtering aligns with your specific Responsible AI policy, as prebuilt categories or content filtering may not be enough.
 
 Refer to the [Custom categories documentation](/azure/ai-services/content-safety/concepts/custom-categories.md) to build your own categories with the Azure AI Content Safety API.
 
 ## Document issues and send feedback to Azure
 
-If, after you’ve tried all the steps mentioned above, Azure AI Content Safety still cannot resolve the false positives or negatives, there is likely a policy definition or model issue that needs further attention.
+If, after you’ve tried all the steps mentioned above, Azure AI Content Safety still can't resolve the false positives or negatives, there is likely a policy definition or model issue that needs further attention.
 
 Document the details of the false positives and/or false negatives by providing the following information to the [Content safety support team](mailto:[email protected]):
 - Description of the flagged content.
@@ -78,7 +78,7 @@ Document the details of the false positives and/or false negatives by providing
 - Any adjustments already attempted by adjusting severity settings or using custom categories.
 - Screenshots or logs of the flagged content and system responses.
 
-This documentation will help in escalating the issue to the appropriate teams for resolution.
+This documentation helps in escalating the issue to the appropriate teams for resolution.
 
 ## Related content

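The changed article describes comparing flagged content against the harm-category severity definitions and customizing severity thresholds. A minimal sketch of that check done client-side, assuming a parsed response shaped like the Content Safety text:analyze API's `categoriesAnalysis` field; the helper name, default threshold, and severity values are illustrative assumptions, not part of the API:

```python
# Illustrative sketch (not from the changed article): screening a parsed
# Azure AI Content Safety text:analyze response against per-category
# severity thresholds. Helper name and threshold values are assumptions.
from typing import Dict, List


def flagged_categories(categories_analysis: List[Dict],
                       thresholds: Dict[str, int],
                       default_threshold: int = 2) -> List[str]:
    """Return categories whose severity meets or exceeds their threshold."""
    return [
        c["category"]
        for c in categories_analysis
        if c["severity"] >= thresholds.get(c["category"], default_threshold)
    ]


# Sample body shaped like the API's categoriesAnalysis field
# (severity values here are made up for the example):
sample = [
    {"category": "Hate", "severity": 0},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 2},
    {"category": "Violence", "severity": 4},
]

# Raising the Sexual threshold to 4 stops the severity-2 hit from flagging:
print(flagged_categories(sample, {"Sexual": 4, "Violence": 4}))  # ['Violence']
```

Tuning per-category thresholds like this mirrors the article's advice for reducing false positives: suspected over-flagging in one category is handled by raising only that category's threshold, rather than loosening filtering across the board.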