
Commit 402bc8f

Merge branch 'main' into eur/speaker-reco
2 parents 8948948 + 249690f

File tree: 253 files changed (+2651 / -1456 lines changed)


.openpublishing.publish.config.json

Lines changed: 5 additions & 4 deletions
@@ -3,13 +3,14 @@
 {
 "docset_name": "azure-ai",
 "build_source_folder": ".",
+"build_output_subfolder": "azure-ai",
+"locale": "en-us",
+"monikers": [],
+"moniker_ranges": [],
 "xref_query_tags": [
 "/dotnet",
 "/python"
 ],
-"build_output_subfolder": "azure-ai",
-"locale": "en-us",
-"monikers": [],
 "open_to_public_contributors": true,
 "type_mapping": {
 "Conceptual": "Content",
@@ -172,4 +173,4 @@
 ],
 "branch_target_mapping": {},
 "targets": {}
-}
+}

articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 15 additions & 0 deletions
@@ -30,6 +30,11 @@
 "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
 "redirect_document_id": false
 },
+{
+"source_path_from_root": "/articles/ai-services/luis/luis-concept-data-conversion.md",
+"redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
+"redirect_document_id": false
+},
 {
 "source_path_from_root": "/articles/ai-services/custom-vision-service/update-application-to-3.0-sdk.md",
 "redirect_url": "/azure/ai-services/custom-vision-service/overview",
@@ -410,6 +415,16 @@
 "redirect_url": "/azure/ai-services/speech-service/speaker-recognition-overview",
 "redirect_document_id": false
 },
+{
+"source_path_from_root": "/articles/ai-services/speech-service/how-to-recognize-intents-from-speech-csharp.md",
+"redirect_url": "/azure/ai-services/speech-service/intent-recognition",
+"redirect_document_id": false
+},
+{
+"source_path_from_root": "/articles/ai-services/speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md",
+"redirect_url": "/azure/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle",
+"redirect_document_id": false
+},
 {
 "source_path_from_root": "/articles/ai-services/anomaly-detector/how-to/postman.md",
 "redirect_url": "/azure/ai-services/anomaly-detector/overview",

articles/ai-services/anomaly-detector/index.yml

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ metadata:
 manager: nitinme
 ms.service: azure-ai-anomaly-detector
 ms.topic: landing-page
-ms.date: 01/18/2024
+ms.date: 09/20/2024
 ms.author: mbullwin
 
 
articles/ai-services/anomaly-detector/overview.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ author: mrbullwinkle
 manager: nitinme
 ms.service: azure-ai-anomaly-detector
 ms.topic: overview
-ms.date: 01/18/2024
+ms.date: 09/20/2024
 ms.author: mbullwin
 keywords: anomaly detection, machine learning, algorithms
 ---

articles/ai-services/content-safety/how-to/improve-performance.md

Lines changed: 86 additions & 0 deletions (new file)

@@ -0,0 +1,86 @@
---
title: "Mitigate false results in Azure AI Content Safety"
titleSuffix: Azure AI services
description: Learn techniques to improve the performance of Azure AI Content Safety models by handling false positives and false negatives.
#services: cognitive-services
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.topic: how-to
ms.date: 09/18/2024
ms.author: pafarley
#customer intent: As a user, I want to improve the performance of Azure AI Content Safety so that I can ensure accurate content moderation.
---

# Mitigate false results in Azure AI Content Safety

This guide provides a step-by-step process for handling false positives and false negatives from Azure AI Content Safety models.

False positives occur when the system incorrectly flags non-harmful content as harmful; false negatives occur when harmful content isn't flagged as harmful. Address these instances to ensure the integrity and reliability of your content moderation process, including responsible generative AI deployment.

## Prerequisites

* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see [Region availability](/azure/ai-services/content-safety/overview#region-availability)), and supported pricing tier. Then select **Create**.

## Review and verification

Conduct an initial assessment to confirm that the flagged content is really a false positive or false negative. This can involve:
- Checking the context of the flagged content.
- Comparing the flagged content against the content safety risk categories and severity definitions:
    - If you're using content safety in Azure OpenAI, see the [Azure OpenAI content filtering doc](/azure/ai-services/openai/concepts/content-filter).
    - If you're using the Azure AI Content Safety standalone API, see the [Harm categories doc](/azure/ai-services/content-safety/concepts/harm-categories?tabs=warning) or the [Prompt Shields doc](/azure/ai-services/content-safety/concepts/jailbreak-detection), depending on which API you're using.

## Customize your severity settings

If your assessment confirms that you found a false positive or false negative, you can try customizing your severity settings to mitigate the issue. The settings depend on which platform you're using.

#### [Content Safety standalone API](#tab/standalone-api)

If you're using the Azure AI Content Safety standalone API directly, try experimenting by setting the severity threshold at different levels for [harm categories](/azure/ai-services/content-safety/concepts/harm-categories?tabs=definitions), based on the API output. Alternatively, if you prefer a no-code approach, you can try out those settings in [Content Safety Studio](https://contentsafety.cognitive.azure.com/) or on the Azure AI Studio [Content Safety page](https://ai.azure.com/explore/contentsafety); see the [Azure AI Studio content safety quickstart](/azure/ai-studio/quickstarts/content-safety?tabs=moderate-text-content) for instructions.
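
As a concrete reference for that experimentation, here's a minimal sketch of an analyze request against the standalone REST API, in the same curl style as the quickstarts. The `2023-10-01` API version, the placeholder endpoint and key, and the threshold values discussed afterward are illustrative assumptions rather than required settings.

```bash
# Minimal sketch: request severity scores for a text sample across the four harm categories.
# <endpoint> and <your_subscription_key> are placeholders for your Content Safety resource values.
curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "text": "<the content that was flagged or missed>",
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
  "outputType": "FourSeverityLevels"
}'
```

The response reports a severity per category (for example, `{"category": "Violence", "severity": 2}`). Because the standalone API returns scores rather than block decisions, the threshold lives in your application: to mitigate a false positive, you might act only on severity 4 or higher for the affected category, while a false negative might call for lowering that cut-off. Treat the specific values here as illustrative.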

In addition to adjusting the severity levels for false negatives, you can also use blocklists. More information on using blocklists for text moderation can be found in [Use blocklists for text moderation](/azure/ai-services/content-safety/how-to/use-blocklist?tabs=windows%2Crest).
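
To make the blocklist option concrete, the following sketch shows the general flow against the REST API: create a blocklist, add the exact terms that keep slipping through as false negatives, then reference the blocklist when you analyze text. The blocklist name `false-negative-terms` and the item text are hypothetical, and the blocklist how-to linked above remains the authoritative reference for parameters and API versions.

```bash
# 1) Create (or update) a blocklist. The name "false-negative-terms" is only an example.
curl --location --request PATCH '<endpoint>/contentsafety/text/blocklists/false-negative-terms?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data-raw '{ "description": "Terms the classifiers keep missing" }'

# 2) Add the exact terms or phrases that produced false negatives.
curl --location --request POST '<endpoint>/contentsafety/text/blocklists/false-negative-terms:addOrUpdateBlocklistItems?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data-raw '{ "blocklistItems": [ { "text": "<term that was not flagged>" } ] }'

# 3) Reference the blocklist during analysis; any matches are returned alongside the category results.
curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "text": "<content to moderate>",
  "blocklistNames": ["false-negative-terms"],
  "haltOnBlocklistHit": true
}'
```

Blocklist matching is literal, so it works best for a small, well-defined set of terms; broader coverage gaps are better addressed through severity thresholds or custom categories.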

#### [Azure OpenAI](#tab/azure-openai-studio)

Read the [Configurability](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#configurability-preview) documentation, because some content filtering configurations may require approval through the process mentioned there.

Follow the steps in [How to use content filters (preview) with Azure OpenAI Service](/azure/ai-services/openai/how-to/content-filters) to update your configurations to handle false positives or negatives.

In addition to adjusting the severity levels for false negatives, you can also use blocklists. Detailed instructions can be found in [How to use blocklists with Azure OpenAI Service](/azure/ai-services/openai/how-to/use-blocklists).

#### [Azure AI Studio](#tab/azure-ai-studio)

Read the [Configurability](/azure/ai-studio/concepts/content-filtering#configurability-preview) documentation, because some content filtering configurations may require approval through the process mentioned there.

Follow the steps in [Azure AI Studio content filtering](/azure/ai-studio/concepts/content-filtering#create-a-content-filter) to update your configurations to handle false positives or negatives.

In addition to adjusting the severity levels for false negatives, you can also use blocklists. Detailed instructions can be found in [Azure AI Studio content filtering](/azure/ai-studio/concepts/content-filtering#use-a-blocklist-as-a-filter).

---

## Create a custom category based on your own RAI policy

Sometimes you might need to create a custom category to ensure the filtering aligns with your specific Responsible AI policy, because the prebuilt categories or content filtering may not be enough.

Refer to the [Custom categories documentation](/azure/ai-services/content-safety/concepts/custom-categories) to build your own categories with the Azure AI Content Safety API.

## Document issues and send feedback to Azure

If Azure AI Content Safety still can't resolve the false positives or negatives after you've tried all of the preceding steps, there's likely a policy definition or model issue that needs further attention.

Document the details of the false positives and/or false negatives by providing the following information to the [Content safety support team](mailto:[email protected]):
- A description of the flagged content.
- The context in which the content was posted.
- The reason Azure AI Content Safety gave for the flagging (for false positives).
- An explanation of why the content is a false positive or negative.
- Any mitigations you already attempted, such as adjusting severity settings or using custom categories.
- Screenshots or logs of the flagged content and system responses.

This documentation helps escalate the issue to the appropriate teams for resolution.

## Related content

- [Azure AI Content Safety overview](/azure/ai-services/content-safety/overview)
- [Harm categories](/azure/ai-services/content-safety/concepts/harm-categories?tabs=warning)

articles/ai-services/content-safety/quickstart-protected-material.md

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ curl --location --request POST '<endpoint>/contentsafety/text:detectProtectedMat
 --header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
 --header 'Content-Type: application/json' \
 --data-raw '{
-"text": "to everyone, the best things in life are free. the stars belong to everyone, they gleam there for you and me. the flowers in spring, the robins that sing, the sunbeams that shine"
+"text": "Kiss me out of the bearded barley Nightly beside the green, green grass Swing, swing, swing the spinning step You wear those shoes and I will wear that dress Oh, kiss me beneath the milky twilight Lead me out on the moonlit floor Lift your open hand Strike up the band and make the fireflies dance Silver moon's sparkling So, kiss me Kiss me down by the broken tree house Swing me upon its hanging tire Bring, bring, bring your flowered hat We'll take the trail marked on your father's map."
 }'
 ```
 The below fields must be included in the url:

articles/ai-services/content-safety/toc.yml

Lines changed: 2 additions & 0 deletions
@@ -63,6 +63,8 @@ items:
 href: how-to/custom-categories-rapid.md
 - name: Use a blocklist
 href: how-to/use-blocklist.md
+- name: Mitigate false results
+href: how-to/improve-performance.md
 - name: Encryption of data at rest
 href: how-to/encrypt-data-at-rest.md
 - name: Migrate from public preview to GA

articles/ai-services/luis/faq.md

Lines changed: 0 additions & 4 deletions
@@ -24,10 +24,6 @@ LUIS has several limit areas. The first is the model limit, which controls inten
 
 An authoring resource lets you create, manage, train, test, and publish your applications. A prediction resource lets you query your prediction endpoint beyond the 1,000 requests provided by the authoring resource. See [Authoring and query prediction endpoint keys in LUIS](luis-how-to-azure-subscription.md) to learn about the differences between the authoring key and the prediction runtime key.
 
-## Does LUIS support speech to text?
-
-Yes, [Speech](../speech-service/how-to-recognize-intents-from-speech-csharp.md#luis-and-speech) to text is provided as an integration with LUIS.
-
 ## What are Synonyms and word variations?
 
 LUIS has little or no knowledge of the broader _NLP_ aspects, such as semantic similarity, without explicit identification in examples. For example, the following tokens (words) are three different things until they're used in similar contexts in the examples provided:

articles/ai-services/luis/luis-concept-data-conversion.md

Lines changed: 0 additions & 42 deletions
This file was deleted.

articles/ai-services/luis/luis-limits.md

Lines changed: 0 additions & 4 deletions
@@ -95,10 +95,6 @@ Use the _kind_, `LUIS`, when filtering resources in the Azure portal.The LUIS qu
 
 [Sentiment analysis integration](how-to/publish.md), which provides sentiment information, is provided without requiring another Azure resource.
 
-### Speech integration
-
-[Speech integration](../speech-service/how-to-recognize-intents-from-speech-csharp.md) provides 1 thousand endpoint requests per unit cost.
-
 [Learn more about pricing.][pricing]
 
 ## Keyboard controls
