articles/ai-foundry/ai-services/content-safety-overview.md (3 additions, 3 deletions)
@@ -7,18 +7,18 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ignite-2024
 ms.topic: overview
-ms.date: 02/20/2025
+ms.date: 05/01/2025
 ms.author: pafarley
 author: PatrickFarley
 ---

 # Content Safety in the Azure AI Foundry portal

-Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes various APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try out** page in [Azure AI Foundry portal](https://ai.azure.com) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
+Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in [Azure AI Foundry portal](https://ai.azure.com) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.

 ## Features

-You can use Azure AI Content Safety for many scenarios:
+You can use Azure AI Content Safety for the following scenarios:

 **Text content**:
 - Moderate text content: This feature scans and moderates text content, identifying and categorizing it based on different levels of severity to ensure appropriate responses.
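Aside (not part of this commit): the text moderation feature described above can also be exercised programmatically. A minimal sketch using the `azure-ai-contentsafety` Python SDK follows; the endpoint and key are placeholders, and the exact response shape can vary by SDK version.

```python
# pip install azure-ai-contentsafety
# Sketch: screen a piece of text and print the severity detected for each category.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a Content Safety (or Azure AI services) resource.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

response = client.analyze_text(AnalyzeTextOptions(text="Text you want to screen."))

# Each analyzed category (for example hate, self-harm, sexual, violence) is returned with a severity level.
for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```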
articles/ai-foundry/responsible-use-of-ai-overview.md (7 additions, 7 deletions)
@@ -6,7 +6,7 @@ manager: nitinme
 keywords: Azure AI services, cognitive
 ms.service: azure-ai-foundry
 ms.topic: overview
-ms.date: 02/20/2025
+ms.date: 05/01/2025
 ms.author: pafarley
 author: PatrickFarley
 ms.custom: ignite-2024
@@ -25,10 +25,10 @@ Finally, we examine strategies for managing risks in production, including deplo
 :::image type="content" source="media/content-safety/safety-pattern.png" alt-text="Diagram of the content safety pattern: Map, Measure, and Manage.":::

 In alignment with Microsoft's RAI practices, these recommendations are organized into four stages:
-- **Map**: Identify and prioritize potential content risks that could result from your AI system through iterative red-teaming, stress-testing, and analysis.
-- **Measure**: Measure the frequency and severity of those content risks by establishing clear metrics, creating measurement test sets, and completing iterative, systematic testing (both manual and automated).
-- **Mitigate**: Mitigate content risks by implementing tools and strategies such as prompt engineering and using our content filters. Repeat measurement to test effectiveness after implementing mitigations.
-- **Operate**: Define and execute a deployment and operational readiness plan.
+- **[Map](#map)**: Identify and prioritize potential content risks that could result from your AI system through iterative red-teaming, stress-testing, and analysis.
+- **[Measure](#measure)**: Measure the frequency and severity of those content risks by establishing clear metrics, creating measurement test sets, and completing iterative, systematic testing (both manual and automated).
+- **[Mitigate](#mitigate)**: Mitigate content risks by implementing tools and strategies such as prompt engineering and using our content filters. Repeat measurement to test effectiveness after implementing mitigations.
+- **[Operate](#operate)**: Define and execute a deployment and operational readiness plan.


 ## Map
@@ -78,7 +78,7 @@ Mitigating harms presented by large language models such as the Azure OpenAI mod

 ### System message and grounding layer

-System message (otherwise known as metaprompt) design and proper data grounding are at the heart of every generative AI application. They provide an application's unique differentiation and are also a key component in reducing errors and mitigating risks. At Microsoft, we find [retrieval augmented generation (RAG)](/azure/ai-studio/concepts/retrieval-augmented-generation) to be an effective and flexible architecture. With RAG, you enable your application to retrieve relevant knowledge from selected data and incorporate it into your system message to the model. In this pattern, rather than using the model to store information, which can change over time and based on context, the model functions as a reasoning engine over the data provided to it during the query. This improves the freshness, accuracy, and relevancy of inputs and outputs. In other words, RAG can ground your model in relevant data for more relevant results.
+System message (also known as metaprompt) design and proper data grounding are at the heart of every generative AI application. They provide an application's unique differentiation and are also a key component in reducing errors and mitigating risks. At Microsoft, we find [retrieval augmented generation (RAG)](/azure/ai-studio/concepts/retrieval-augmented-generation) to be an effective and flexible architecture. With RAG, you enable your application to retrieve relevant knowledge from selected data and incorporate it into your system message to the model. In this pattern, rather than using the model to store information, which can change over time and based on context, the model functions as a reasoning engine over the data provided to it during the query. This improves the freshness, accuracy, and relevancy of inputs and outputs. In other words, RAG can ground your model in relevant data for more relevant results.

 Now the other part of the story is how you teach the base model to use that data or to answer the questions effectively in your application. When you create a system message, you're giving instructions to the model in natural language to consistently guide its behavior on the backend. Tapping into the trained data of the models is valuable but enhancing it with your information is critical.
 Here's what a system message should look like. You must:
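Aside (not part of this commit): the RAG pattern in the paragraph above amounts to building the system message from retrieved data at query time. A minimal sketch, assuming a hypothetical `retrieve_passages` helper standing in for whatever search or vector index the application actually uses:

```python
# Sketch of RAG-style grounding: inject retrieved passages into the system message
# so the model reasons over supplied data rather than only its training data.
def retrieve_passages(query: str, top_k: int = 3) -> list[str]:
    # Hypothetical stand-in; a real app would query Azure AI Search, a vector store, etc.
    return ["<passage relevant to the query>", "<another relevant passage>"][:top_k]


def build_system_message(query: str, task_instructions: str) -> str:
    sources = "\n\n".join(retrieve_passages(query))
    return (
        f"{task_instructions}\n\n"
        "Answer using only the sources below. If they don't contain the answer, say you don't know.\n\n"
        f"## Sources\n{sources}"
    )


print(build_system_message("How do I reset my password?", "You are a help-desk assistant."))
```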
@@ -106,7 +106,7 @@ Recommended System Message Framework:

 Here we outline a set of best practices instructions you can use to augment your task-based system message instructions to minimize different content risks:

-### Sample metaprompt instructions for content risks
+### Sample message instructions for content risks

 ```
 - You **must not** generate content that might be harmful to someone physically or emotionally even if a user requests or creates a condition to rationalize that harmful content.
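Aside (not part of this commit): instructions like the sample above are typically concatenated with the task-specific system message and sent with every request. A rough sketch against the Azure OpenAI chat completions API; the endpoint, key, API version, and deployment name are placeholders.

```python
# Sketch: combine safety instructions with task instructions in the system message.
# Endpoint, key, API version, and deployment name below are placeholders.
from openai import AzureOpenAI

safety_instructions = (
    "- You **must not** generate content that might be harmful to someone physically "
    "or emotionally even if a user requests or creates a condition to rationalize "
    "that harmful content."
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[
        {"role": "system", "content": f"You are a help-desk assistant.\n\n{safety_instructions}"},
        {"role": "user", "content": "Help me draft a reply to an upset customer."},
    ],
)
print(response.choices[0].message.content)
```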