articles/ai-foundry/ai-services/content-safety-overview.md (+3 -3)

```diff
@@ -7,18 +7,18 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: overview
-ms.date: 02/20/2025
+ms.date: 05/01/2025
 ms.author: pafarley
 author: PatrickFarley
 ---
 
 # Content Safety in the Azure AI Foundry portal
 
-Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes various APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try out** page in [Azure AI Foundry portal](https://ai.azure.com) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
+Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in [Azure AI Foundry portal](https://ai.azure.com) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
 
 ## Features
 
-You can use Azure AI Content Safety for many scenarios:
+You can use Azure AI Content Safety for the following scenarios:
 
 **Text content**:
 - Moderate text content: This feature scans and moderates text content, identifying and categorizing it based on different levels of severity to ensure appropriate responses.
```
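For context on the text moderation API this page describes, here's a minimal sketch of a detection call. It assumes the `2023-10-01` REST api-version and a `requests`-based client; the endpoint and key are placeholders:

```python
# A minimal sketch of text moderation with Azure AI Content Safety.
# Assumes the 2023-10-01 REST api-version; endpoint and key are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def analyze_text(text: str) -> dict:
    """Send text to the text:analyze operation and return the raw result."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

result = analyze_text("Sample user input to screen.")
# Each entry reports a harm category (for example Hate, SelfHarm, Sexual,
# Violence) and a severity score.
for item in result.get("categoriesAnalysis", []):
    print(item["category"], item["severity"])
```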
articles/ai-foundry/concepts/content-filtering.md (+5 -5)

```diff
@@ -9,24 +9,24 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: conceptual
-ms.date: 01/10/2025
+ms.date: 04/29/2025
 ms.reviewer: eur
 ms.author: pafarley
 author: PatrickFarley
 ---
 
 # Content filtering in Azure AI Foundry portal
 
-[Azure AI Foundry](https://ai.azure.com) includes a content filtering system that works alongside core models and DALL-E image generation models.
+[Azure AI Foundry](https://ai.azure.com) includes a content filtering system that works alongside core models and image generation models.
 
 > [!IMPORTANT]
 > The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI Service. Learn more about the [Whisper model in Azure OpenAI](../../ai-services/openai/concepts/models.md).
 
 ## How it works
 
-This content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the prompt input and completion output through an ensemble of classification models aimed at detecting and preventing the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
+The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
 
-With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **serverless APIs** have content filtering enabled by default. To learn more about the default content filter enabled for serverless APIs, see [Content safety for models curated by Azure AI in the model catalog](model-catalog-content-safety.md).
+With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **serverless APIs** have content filtering enabled by default. To learn more about the default content filter enabled for serverless APIs, see [Content safety for models curated by Azure AI in the model catalog](model-catalog-content-safety.md).
 
 ## Language support
 
@@ -89,7 +89,7 @@ The configurability feature allows customers to adjust the settings, separately
 Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/ai-code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
 
 
-## Next steps
+## Related content
 
 - Learn more about the [underlying models that power Azure OpenAI](../../ai-services/openai/concepts/models.md).
 - Azure AI Foundry content filtering is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md).
```
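As a rough illustration of the filtering behavior described above, here's a sketch of inspecting the filter annotations an Azure OpenAI chat completion carries. The `content_filter_results` field and the `content_filter` error code reflect the REST surface as commonly documented, but treat the exact shapes as assumptions; endpoint, deployment, key, and api-version are placeholders:

```python
# Sketch: inspecting content filter annotations on an Azure OpenAI chat
# completion via REST. Field names and error codes are assumptions based on
# the commonly documented response shape; all identifiers are placeholders.
import requests

ENDPOINT = "https://<your-resource>.openai.azure.com"  # placeholder
DEPLOYMENT = "<your-deployment>"  # placeholder
KEY = "<your-key>"  # placeholder

resp = requests.post(
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions",
    params={"api-version": "2024-02-01"},
    headers={"api-key": KEY},
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=30,
)
body = resp.json()

if resp.status_code == 400 and body.get("error", {}).get("code") == "content_filter":
    # The prompt itself was filtered; no completion is returned.
    print("Prompt blocked by the content filter.")
else:
    choice = body["choices"][0]
    # Per-category annotations (hate, sexual, violence, self harm)
    # accompany the completion output.
    for category, result in choice.get("content_filter_results", {}).items():
        print(category, result)
```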
articles/ai-foundry/responsible-use-of-ai-overview.md (+8 -8)

````diff
@@ -6,7 +6,7 @@ manager: nitinme
 keywords: Azure AI services, cognitive
 ms.service: azure-ai-foundry
 ms.topic: overview
-ms.date: 02/20/2025
+ms.date: 05/01/2025
 ms.author: pafarley
 author: PatrickFarley
 ms.custom: ignite-2024
@@ -25,10 +25,10 @@ Finally, we examine strategies for managing risks in production, including deplo
 :::image type="content" source="media/content-safety/safety-pattern.png" alt-text="Diagram of the content safety pattern: Map, Measure, and Manage.":::
 
 In alignment with Microsoft's RAI practices, these recommendations are organized into four stages:
-- **Map**: Identify and prioritize potential content risks that could result from your AI system through iterative red-teaming, stress-testing, and analysis.
-- **Measure**: Measure the frequency and severity of those content risks by establishing clear metrics, creating measurement test sets, and completing iterative, systematic testing (both manual and automated).
-- **Mitigate**: Mitigate content risks by implementing tools and strategies such as prompt engineering and using our content filters. Repeat measurement to test effectiveness after implementing mitigations.
-- **Operate**: Define and execute a deployment and operational readiness plan.
+- **[Map](#map)**: Identify and prioritize potential content risks that could result from your AI system through iterative red-teaming, stress-testing, and analysis.
+- **[Measure](#measure)**: Measure the frequency and severity of those content risks by establishing clear metrics, creating measurement test sets, and completing iterative, systematic testing (both manual and automated).
+- **[Mitigate](#mitigate)**: Mitigate content risks by implementing tools and strategies such as prompt engineering and using our content filters. Repeat measurement to test effectiveness after implementing mitigations.
+- **[Operate](#operate)**: Define and execute a deployment and operational readiness plan.
 
 
 ## Map
@@ -49,7 +49,7 @@ At the end of this Map stage, you should have a documented, prioritized list of
 
 ## Measure
 
-Once you’ve identified a list of prioritized content risks, the next stage involves developing an approach for systematic measurement of each content risk and conducting evaluations of the AI system. There are manual and automated approaches to measurement. We recommend you do both, starting with manual measurement.
+Once you’ve identified a list of prioritized content risks, the next stage involves developing an approach for systematic measurement of each content risk and conducting evaluations of the AI system. There are manual and automated approaches to measurement. We recommend you do both, starting with manual measurement.
 
 Manual measurement is useful for:
 - Measuring progress on a small set of priority issues. When mitigating specific content risks, it's often most productive to keep manually checking progress against a small dataset until the content risk is no longer observed before you move on to automated measurement.
@@ -78,7 +78,7 @@ Mitigating harms presented by large language models such as the Azure OpenAI mod
 
 ### System message and grounding layer
 
-System message (otherwise known as metaprompt) design and proper data grounding are at the heart of every generative AI application. They provide an application's unique differentiation and are also a key component in reducing errors and mitigating risks. At Microsoft, we find [retrieval augmented generation (RAG)](/azure/ai-studio/concepts/retrieval-augmented-generation) to be an effective and flexible architecture. With RAG, you enable your application to retrieve relevant knowledge from selected data and incorporate it into your system message to the model. In this pattern, rather than using the model to store information, which can change over time and based on context, the model functions as a reasoning engine over the data provided to it during the query. This improves the freshness, accuracy, and relevancy of inputs and outputs. In other words, RAG can ground your model in relevant data for more relevant results.
+System message (also known as metaprompt) design and proper data grounding are at the heart of every generative AI application. They provide an application's unique differentiation and are also a key component in reducing errors and mitigating risks. At Microsoft, we find [retrieval augmented generation (RAG)](/azure/ai-studio/concepts/retrieval-augmented-generation) to be an effective and flexible architecture. With RAG, you enable your application to retrieve relevant knowledge from selected data and incorporate it into your system message to the model. In this pattern, rather than using the model to store information, which can change over time and based on context, the model functions as a reasoning engine over the data provided to it during the query. This improves the freshness, accuracy, and relevancy of inputs and outputs. In other words, RAG can ground your model in relevant data for more relevant results.
 
 Now the other part of the story is how you teach the base model to use that data or to answer the questions effectively in your application. When you create a system message, you're giving instructions to the model in natural language to consistently guide its behavior on the backend. Tapping into the trained data of the models is valuable but enhancing it with your information is critical.
 Here's what a system message should look like. You must:
@@ -106,7 +106,7 @@ Recommended System Message Framework:
 
 Here we outline a set of best practices instructions you can use to augment your task-based system message instructions to minimize different content risks:
 
-### Sample metaprompt instructions for content risks
+### Sample message instructions for content risks
 
 ```
 - You **must not** generate content that might be harmful to someone physically or emotionally even if a user requests or creates a condition to rationalize that harmful content.
```
````
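To make the RAG-plus-system-message pattern in this article concrete, here's a minimal sketch of assembling a grounded system message that carries safety instructions like the ones above. The `retrieve_passages` helper and the Contoso persona are hypothetical stand-ins, not part of the article:

```python
# Sketch: assembling a grounded system message in the RAG pattern described
# above. `retrieve_passages` is a hypothetical stand-in for your retrieval
# step (for example, a search index query).

SAFETY_INSTRUCTIONS = (
    "- You **must not** generate content that might be harmful to someone "
    "physically or emotionally, even if a user requests it.\n"
    "- Answer only from the provided sources; say so if the answer isn't there."
)

def retrieve_passages(query: str) -> list[str]:
    """Hypothetical retrieval step: look up relevant passages in your data."""
    return ["<passage fetched from your search index>"]

def build_system_message(query: str) -> str:
    """Combine persona, safety instructions, and retrieved grounding data."""
    grounding = "\n\n".join(retrieve_passages(query))
    return (
        "You are a support assistant for Contoso products.\n\n"
        f"## Safety\n{SAFETY_INSTRUCTIONS}\n\n"
        f"## Sources\n{grounding}"
    )

print(build_system_message("How do I reset my device?"))
```

The point of the pattern: the model reasons over the passages injected at query time instead of relying on what it memorized in training, which keeps answers fresher and easier to constrain.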
articles/ai-services/cognitive-services-custom-subdomains.md (+3 -3)

```diff
@@ -7,17 +7,17 @@ manager: nitinme
 ms.service: azure-ai-services
 ms.custom: devx-track-azurecli
 ms.topic: conceptual
-ms.date: 10/30/2024
+ms.date: 05/01/2025
 ms.author: pafarley
 ---
 
 # Custom subdomain names for Azure AI services
 
-Starting in July 2019, Azure AI services use custom subdomain names for each resource created through the [Azure portal](https://portal.azure.com), [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/), or [Azure CLI](/cli/azure/install-azure-cli). Unlike regional endpoints, which were common for all customers in a specific Azure region, custom subdomain names are unique to the resource. Custom subdomain names are required to enable features like Microsoft Entra ID for authentication.
+Since July 2019, new Azure AI service resources use custom subdomain names when created through the [Azure portal](https://portal.azure.com), [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/), or [Azure CLI](/cli/azure/install-azure-cli). Unlike regional endpoints, which were common for all customers in a specific Azure region, custom subdomain names are unique to the resource. Custom subdomain names are required to enable features like Microsoft Entra ID for authentication.
 
 ## How does this impact existing resources?
 
-Azure AI services resources created before July 1, 2019, use the regional endpoints for the associated service. These endpoints work with existing and new resources.
+Azure AI services resources created before July 1, 2019 use the regional endpoints for the associated service. These endpoints work with existing and new resources.
 
 If you'd like to migrate an existing resource to use custom subdomain names to enable features like Microsoft Entra ID, follow these instructions:
```
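As an illustration of why custom subdomains matter here, this is a sketch of Microsoft Entra ID authentication against a custom-subdomain endpoint, assuming the `azure-identity` and `azure-ai-textanalytics` packages; the resource name is a placeholder:

```python
# Sketch: Entra ID authentication needs the custom-subdomain endpoint form,
# not a shared regional endpoint. The resource name is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Regional endpoints such as https://westus.api.cognitive.microsoft.com are
# shared across customers and can't resolve a token to one resource; the
# custom subdomain form below is unique to your resource.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder

client = TextAnalyticsClient(endpoint=endpoint, credential=DefaultAzureCredential())
result = client.detect_language(["Hello, world"])
print(result[0].primary_language.name)
```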
articles/ai-services/cognitive-services-support-options.md (+14 -10)

```diff
@@ -5,13 +5,13 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-services
 ms.topic: conceptual
-ms.date: 10/30/2024
+ms.date: 05/01/2025
 ms.author: pafarley
 ---
 
 # Azure AI services support and help options
 
-Are you just starting to explore the functionality of Azure AI services? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Azure AI services.
+Here are the options for getting support, staying up to date, giving feedback, and reporting bugs for Azure AI services.
 
 ## Get solutions to common issues
 
@@ -41,6 +41,11 @@ If you can't find an answer to your problem using search, submit a new question
 
 * [Azure AI services](/answers/topics/azure-cognitive-services.html)
```

```diff
 * You can use the `qualityForRecognition` attribute in the [face detection](./how-to/identity-detect-faces.md) operation when using applicable detection models as a general guideline of whether the image is likely of sufficient quality to attempt face recognition on. Only `"high"` quality images are recommended for person enrollment and quality at or above `"medium"` is recommended for identification scenarios.
 
-## Next steps
+## Next step
 
 Now that you're familiar with face recognition concepts, write a script that identifies faces against a trained PersonGroup.
```
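For the `qualityForRecognition` guidance in the face recognition fragment above, here's a sketch of a detection call that filters on quality before attempting identification. It assumes the Face REST API with the `detection_03` and `recognition_04` models; the endpoint, key, and image URL are placeholders, and the lowercase quality values follow the doc text quoted above:

```python
# Sketch: checking qualityForRecognition before attempting face recognition.
# Assumes the Face REST API with detection_03/recognition_04; the endpoint,
# key, and image URL are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

resp = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
        "returnFaceId": "true",
        "returnFaceAttributes": "qualityForRecognition",
    },
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"url": "https://example.com/photo.jpg"},  # placeholder image
    timeout=10,
)
resp.raise_for_status()

for face in resp.json():
    quality = face["faceAttributes"]["qualityForRecognition"]
    # Per the guidance above: only "high" for enrollment; "medium" or
    # better for identification scenarios.
    if quality in ("high", "medium"):
        print(face["faceId"], "usable for identification:", quality)
```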
articles/ai-services/computer-vision/concept-object-detection-40.md (+2 -2)

```diff
@@ -7,7 +7,7 @@ manager: nitinme
 
 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 10/31/2024
+ms.date: 05/01/2025
 ms.author: pafarley
 ---
 
@@ -67,7 +67,7 @@ The following JSON response illustrates what the Image Analysis 4.0 API returns
 
 ## Limitations
 
-It's important to note the limitations of object detection so you can avoid or mitigate the effects of false negatives (missed objects) and limited detail.
+Note the limitations of object detection so you can avoid or mitigate the effects of false negatives (missed objects) and limited detail.
 
 * Objects are generally not detected if they're small (less than 5% of the image).
 * Objects are generally not detected if they're arranged closely together (a stack of plates, for example).
```
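For reference, a sketch of an object detection call against the Image Analysis 4.0 API this article covers, assuming the `2023-10-01` REST api-version; the endpoint, key, and image URL are placeholders:

```python
# Sketch: object detection with Image Analysis 4.0 via REST.
# Assumes the 2023-10-01 api-version; endpoint, key, and image URL
# are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

resp = requests.post(
    f"{ENDPOINT}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-10-01", "features": "objects"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"url": "https://example.com/photo.jpg"},  # placeholder image
    timeout=10,
)
resp.raise_for_status()

# Small or tightly grouped objects may be missing from this list, per the
# limitations noted above.
for obj in resp.json()["objectsResult"]["values"]:
    print(obj["tags"][0]["name"], obj["boundingBox"])
```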