
Commit 66cadee

studio freshness

1 parent a360a6b

File tree

4 files changed: +12 -12 lines changed


articles/ai-foundry/ai-services/content-safety-overview.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -7,18 +7,18 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ignite-2024
 ms.topic: overview
-ms.date: 02/20/2025
+ms.date: 05/01/2025
 ms.author: pafarley
 author: PatrickFarley
 ---

 # Content Safety in the Azure AI Foundry portal

-Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes various APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try out** page in [Azure AI Foundry portal](https://ai.azure.com) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
+Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in [Azure AI Foundry portal](https://ai.azure.com) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.

 ## Features

-You can use Azure AI Content Safety for many scenarios:
+You can use Azure AI Content Safety for the following scenarios:

 **Text content**:
 - Moderate text content: This feature scans and moderates text content, identifying and categorizing it based on different levels of severity to ensure appropriate responses.
```
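The APIs this overview describes are easiest to see in code. Below is a minimal sketch of the text moderation call using the azure-ai-contentsafety Python package; the endpoint, key, and sample text are placeholders, not values from this commit:

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key; substitute your own resource values.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze one text sample; the service returns a severity per harm category.
response = client.analyze_text(AnalyzeTextOptions(text="Sample user input to moderate."))
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```

Each category (Hate, SelfHarm, Sexual, Violence) comes back with a severity level that an application can threshold against its own content policy.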

articles/ai-foundry/how-to/use-blocklists.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ignite-2024
 ms.topic: how-to
-ms.date: 02/20/2025
+ms.date: 05/01/2025
 ms.author: pafarley
 author: PatrickFarley
 ---
```

articles/ai-foundry/includes/use-blocklists.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ ms.reviewer: pafarley
 ms.author: pafarley
 ms.service: azure-ai-foundry
 ms.topic: include
-ms.date: 02/20/2025
+ms.date: 05/01/2025
 ms.custom: include
 ---

```

articles/ai-foundry/responsible-use-of-ai-overview.md

Lines changed: 7 additions & 7 deletions
```diff
@@ -6,7 +6,7 @@ manager: nitinme
 keywords: Azure AI services, cognitive
 ms.service: azure-ai-foundry
 ms.topic: overview
-ms.date: 02/20/2025
+ms.date: 05/01/2025
 ms.author: pafarley
 author: PatrickFarley
 ms.custom: ignite-2024
```
```diff
@@ -25,10 +25,10 @@ Finally, we examine strategies for managing risks in production, including deplo
 :::image type="content" source="media/content-safety/safety-pattern.png" alt-text="Diagram of the content safety pattern: Map, Measure, and Manage.":::

 In alignment with Microsoft's RAI practices, these recommendations are organized into four stages:
-- **Map**: Identify and prioritize potential content risks that could result from your AI system through iterative red-teaming, stress-testing, and analysis.
-- **Measure**: Measure the frequency and severity of those content risks by establishing clear metrics, creating measurement test sets, and completing iterative, systematic testing (both manual and automated).
-- **Mitigate**: Mitigate content risks by implementing tools and strategies such as prompt engineering and using our content filters. Repeat measurement to test effectiveness after implementing mitigations.
-- **Operate**: Define and execute a deployment and operational readiness plan.
+- **[Map](#map)**: Identify and prioritize potential content risks that could result from your AI system through iterative red-teaming, stress-testing, and analysis.
+- **[Measure](#measure)**: Measure the frequency and severity of those content risks by establishing clear metrics, creating measurement test sets, and completing iterative, systematic testing (both manual and automated).
+- **[Mitigate](#mitigate)**: Mitigate content risks by implementing tools and strategies such as prompt engineering and using our content filters. Repeat measurement to test effectiveness after implementing mitigations.
+- **[Operate](#operate)**: Define and execute a deployment and operational readiness plan.


 ## Map
```
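To make the **Measure** stage from the list above concrete: a measurement loop can be as simple as replaying a fixed test set through your system and tallying flagged outputs. The sketch below is an illustration only; the test prompts and the score_output stub are hypothetical stand-ins for whatever test set and scorer (human annotation or an automated classifier) you adopt:

```python
from collections import Counter
from typing import Callable

# Hypothetical test set of red-team prompts collected during the Map stage.
TEST_SET = [
    "prompt probing for violent content",
    "prompt probing for harassment content",
]

def score_output(text: str) -> int:
    """Stub severity scorer (0 = safe). Replace with human annotation
    or an automated classifier such as the Content Safety API."""
    return 0

def measure(generate: Callable[[str], str], threshold: int = 2) -> Counter:
    """Replay the test set through a model callable and tally flagged outputs."""
    tally = Counter()
    for prompt in TEST_SET:
        severity = score_output(generate(prompt))
        tally["flagged" if severity >= threshold else "ok"] += 1
    return tally

print(measure(lambda prompt: "model response placeholder"))
```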
```diff
@@ -78,7 +78,7 @@ Mitigating harms presented by large language models such as the Azure OpenAI mod

 ### System message and grounding layer

-System message (otherwise known as metaprompt) design and proper data grounding are at the heart of every generative AI application. They provide an application's unique differentiation and are also a key component in reducing errors and mitigating risks. At Microsoft, we find [retrieval augmented generation (RAG)](/azure/ai-studio/concepts/retrieval-augmented-generation) to be an effective and flexible architecture. With RAG, you enable your application to retrieve relevant knowledge from selected data and incorporate it into your system message to the model. In this pattern, rather than using the model to store information, which can change over time and based on context, the model functions as a reasoning engine over the data provided to it during the query. This improves the freshness, accuracy, and relevancy of inputs and outputs. In other words, RAG can ground your model in relevant data for more relevant results.
+System message (also known as metaprompt) design and proper data grounding are at the heart of every generative AI application. They provide an application's unique differentiation and are also a key component in reducing errors and mitigating risks. At Microsoft, we find [retrieval augmented generation (RAG)](/azure/ai-studio/concepts/retrieval-augmented-generation) to be an effective and flexible architecture. With RAG, you enable your application to retrieve relevant knowledge from selected data and incorporate it into your system message to the model. In this pattern, rather than using the model to store information, which can change over time and based on context, the model functions as a reasoning engine over the data provided to it during the query. This improves the freshness, accuracy, and relevancy of inputs and outputs. In other words, RAG can ground your model in relevant data for more relevant results.

 Now the other part of the story is how you teach the base model to use that data or to answer the questions effectively in your application. When you create a system message, you're giving instructions to the model in natural language to consistently guide its behavior on the backend. Tapping into the trained data of the models is valuable but enhancing it with your information is critical.
 Here's what a system message should look like. You must:
```
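The RAG pattern described in this hunk reduces to a few lines: retrieve relevant passages for the user's question and fold them into the system message before calling the model. Everything below (retrieve, build_system_message, the chat callable) is hypothetical scaffolding, not API surface from the docs being edited:

```python
def retrieve(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return the k most relevant passages
    from your selected data (for example, a vector index)."""
    return ["passage one ...", "passage two ...", "passage three ..."][:k]

def build_system_message(query: str) -> str:
    """Ground the model by embedding retrieved passages in the system message."""
    context = "\n".join(retrieve(query))
    return (
        "You are an assistant that answers only from the provided sources.\n"
        "If the sources don't contain the answer, say you don't know.\n"
        f"Sources:\n{context}"
    )

def answer(query: str, chat) -> str:
    """chat is a hypothetical model callable taking system and user messages."""
    return chat(system=build_system_message(query), user=query)
```

The design point is that the model reasons over the retrieved passages at query time rather than relying on what it memorized during training, which is what keeps answers fresh and grounded.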
````diff
@@ -106,7 +106,7 @@ Recommended System Message Framework:

 Here we outline a set of best practices instructions you can use to augment your task-based system message instructions to minimize different content risks:

-### Sample metaprompt instructions for content risks
+### Sample message instructions for content risks

 ```
 - You **must not** generate content that might be harmful to someone physically or emotionally even if a user requests or creates a condition to rationalize that harmful content.
````
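A common way to apply these sample instructions is to append them to the task-specific portion of the system message at request time. The sketch below assumes a generic chat-completions message format; the task text and user prompt are hypothetical, and the safety block quotes the instruction shown in the hunk above:

```python
TASK_INSTRUCTIONS = "You are a customer-support assistant for Contoso products."

SAFETY_INSTRUCTIONS = (
    "- You **must not** generate content that might be harmful to someone "
    "physically or emotionally even if a user requests or creates a condition "
    "to rationalize that harmful content."
)

# Compose the final system message: task behavior first, risk mitigations after.
system_message = f"{TASK_INSTRUCTIONS}\n\n{SAFETY_INSTRUCTIONS}"

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": "Help me reset my device."},
]
# messages is now ready to pass to the chat-completions client of your choice.
```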
