
Commit 80b2481

Merge pull request #6269 from MicrosoftDocs/main
Auto Publish – main to live - 2025-07-29 05:05 UTC
2 parents e0fd9e5 + edb3d3c commit 80b2481

41 files changed: +335, −316 lines


articles/ai-foundry/ai-services/content-safety-overview.md

Lines changed: 16 additions & 16 deletions

@@ -7,41 +7,41 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ignite-2024
 ms.topic: overview
-ms.date: 05/31/2025
+ms.date: 07/28/2025
 ms.author: pafarley
 author: PatrickFarley
 ---

 # Content Safety in the Azure AI Foundry portal

-Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
+Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that help you detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) lets you view, explore, and try out sample code for detecting harmful content across different modalities.

 ## Features

-You can use Azure AI Content Safety for the following scenarios:
+Use Azure AI Content Safety for the following scenarios:

-**Text content**:
-- Moderate text content: This feature scans and moderates text content, identifying and categorizing it based on different levels of severity to ensure appropriate responses.
-- Groundedness detection: This filter determines if the AI's responses are based on trusted, user-provided sources, ensuring that the answers are "grounded" in the intended material. Groundedness detection is helpful for improving the reliability and factual accuracy of responses.
-- Protected material detection for text: This feature identifies protected text material, such as known song lyrics, articles, or other content, ensuring that the AI doesn't output this content without permission.
-- Protected material detection for code: Detects code segments in the model's output that match known code from public repositories, helping to prevent uncredited or unauthorized reproduction of source code.
-- Prompt shields: This feature provides a unified API to address "Jailbreak" and "Indirect Attacks":
+### Text content
+- Moderate text content: Scans and moderates text content. It identifies and categorizes text based on different levels of severity to ensure appropriate responses.
+- Groundedness detection: Determines if the AI's responses are based on trusted, user-provided sources. This feature ensures that the answers are "grounded" in the intended material. Groundedness detection helps improve the reliability and factual accuracy of responses.
+- Protected material detection for text: Identifies protected text material, such as known song lyrics, articles, or other content. This feature ensures that the AI doesn't output this content without permission.
+- Protected material detection for code: Detects code segments in the model's output that match known code from public repositories. This feature helps prevent uncredited or unauthorized reproduction of source code.
+- Prompt shields: Provides a unified API to address "Jailbreak" and "Indirect Attacks":
   - Jailbreak Attacks: Attempts by users to manipulate the AI into bypassing its safety protocols or ethical guidelines. Examples include prompts designed to trick the AI into giving inappropriate responses or performing tasks it was programmed to avoid.
-  - Indirect Attacks: Also known as Cross-Domain Prompt Injection Attacks, indirect attacks involve embedding malicious prompts within documents that the AI might process. For example, if a document contains hidden instructions, the AI might inadvertently follow them, leading to unintended or unsafe outputs.
+  - Indirect Attacks: Also known as Cross-Domain Prompt Injection Attacks. Indirect attacks involve embedding malicious prompts within documents that the AI might process. For example, if a document contains hidden instructions, the AI might inadvertently follow them, leading to unintended or unsafe outputs.

-**Image content**:
+### Image content
 - Moderate image content: Similar to text moderation, this feature filters and assesses image content to detect inappropriate or harmful visuals.
-- Moderate multimodal content: This is designed to handle a combination of text and images, assessing the overall context and any potential risks across multiple types of content.
+- Moderate multimodal content: Designed to handle a combination of text and images. It assesses the overall context and any potential risks across multiple types of content.

-**Customize your own categories**:
-- Custom categories: Allows users to define specific categories for moderating and filtering content, tailoring safety protocols to unique needs.
-- Safety system message: Provides a method for setting up a "System Message" to instruct the AI on desired behavior and limitations, reinforcing safety boundaries and helping prevent unwanted outputs.
+### Custom filtering
+- Custom categories: Allows users to define specific categories for moderating and filtering content. Tailors safety protocols to unique needs.
+- Safety system message: Provides a method for setting up a "System Message" to instruct the AI on desired behavior and limitations. It reinforces safety boundaries and helps prevent unwanted outputs.

 [!INCLUDE [content-safety-harm-categories](../includes/content-safety-harm-categories.md)]

 ## Limitations

-Refer to the [Content Safety overview](/azure/ai-services/content-safety/overview) for supported regions, rate limits, and input requirements for all features. Refer to the [Language support](/azure/ai-services/content-safety/language-support) page for supported languages.
+For supported regions, rate limits, and input requirements for all features, see the [Content Safety overview](/azure/ai-services/content-safety/overview). For supported languages, see the [Language support](/azure/ai-services/content-safety/language-support) page.


 ## Next step
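The "Moderate text content" feature described in this diff returns a per-category severity score that your application then acts on. The sketch below is illustrative only: the four category names and the 0–7 severity scale mirror the service's text-analysis output, but the `is_allowed` helper and the hard-coded sample response are our own inventions, not SDK code.

```python
# Illustrative post-processing of a Content Safety text-analysis result.
# The real severities come from the analyze-text API; here we hard-code a
# sample response shape (category -> severity, 0-7 scale) as an assumption.

HARM_CATEGORIES = ("Hate", "SelfHarm", "Sexual", "Violence")

def is_allowed(severities: dict, max_severity: int = 2) -> bool:
    """Allow content only when every harm category is at or below the threshold."""
    return all(severities.get(cat, 0) <= max_severity for cat in HARM_CATEGORIES)

# Sample result as the service might return it (assumed shape):
sample = {"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 4}
print(is_allowed(sample))                   # False: Violence severity 4 exceeds 2
print(is_allowed(sample, max_severity=4))   # True
```

The threshold you pick (here severity 2) is a policy decision; stricter applications would block anything above Safe.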

articles/ai-foundry/concepts/content-filtering.md

Lines changed: 7 additions & 7 deletions

@@ -9,7 +9,7 @@ ms.custom:
 - build-2024
 - ignite-2024
 ms.topic: concept-article
-ms.date: 05/31/2025
+ms.date: 07/28/2025
 ms.reviewer: eur
 ms.author: pafarley
 author: PatrickFarley
@@ -20,17 +20,17 @@ author: PatrickFarley
 [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) includes a content filtering system that works alongside core models and image generation models.

 > [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI in Azure AI Foundry Models. Learn more about the [Whisper model in Azure OpenAI](../openai/concepts/models.md).
+> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure AI Foundry Models. Learn more about the [Whisper model in Azure OpenAI](../openai/concepts/models.md).

 ## How it works

-The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the model prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
+The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the model prompt input and completion output through a set of classification models designed to detect and prevent harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.

 With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later). Models available through **serverless API deployments** have content filtering enabled by default. To learn more about the default content filter enabled for serverless API deployments, see [Content safety for Models Sold Directly by Azure](model-catalog-content-safety.md).

 ## Language support

-The content filtering models have been trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality can vary. In all cases, you should do your own testing to ensure that it works for your application.
+The content filtering models are trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. The service can work in many other languages, but the quality can vary. In all cases, you should do your own testing to ensure that it works for your application.

 ## Content risk filters (input and output filters)

@@ -50,7 +50,7 @@ The following special filters work for both input and output of generative AI mo
 |Category|Description|
 |--------|-----------|
 |Safe | Content might be related to violence, self-harm, sexual, or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts, which are appropriate for most audiences. |
-|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature) and depictions at low intensity.|
+|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature), and depictions at low intensity.|
 | Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
 |High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse; includes endorsement, glorification, or promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, or nonconsensual power exchange or abuse.|

@@ -65,8 +65,8 @@ You can also enable special filters for generative AI scenarios:
 ### Other output filters

 You can also enable the following special output filters:
-- **Protected material for text**: Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that can be outputted by large language models.
-- **Protected material for code**: Protected material code describes source code that matches a set of source code from public repositories, which can be outputted by large language models without proper citation of source repositories.
+- **Protected material for text**: Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that a large language model might output.
+- **Protected material for code**: Protected material code describes source code that matches a set of source code from public repositories, which a large language model might output without proper citation of source repositories.
 - **Groundedness**: The groundedness detection filter detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.

 [!INCLUDE [create-content-filter](../includes/create-content-filter.md)]
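The Safe/Low/Medium/High table above implies a simple ordering: a configurable filter blocks content once its detected severity reaches the configured level. The sketch below is our own simplification of that threshold logic, not the service's implementation; the level names mirror the table, everything else is assumed.

```python
# Illustrative threshold logic for a configurable content filter.
# Ordering mirrors the severity table: safe < low < medium < high.
LEVELS = {"safe": 0, "low": 1, "medium": 2, "high": 3}

def filter_verdict(detected: str, block_at: str = "medium") -> str:
    """Block when the detected severity reaches the configured threshold."""
    if LEVELS[detected] >= LEVELS[block_at]:
        return "blocked"
    return "allowed"

print(filter_verdict("low"))                  # allowed: below the "medium" threshold
print(filter_verdict("high"))                 # blocked
print(filter_verdict("low", block_at="low"))  # blocked: stricter configuration
```

Configuring `block_at="low"` corresponds to the strictest setting; raising it trades safety for fewer false positives.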

articles/ai-foundry/foundry-models/how-to/use-blocklists.md

Lines changed: 5 additions & 5 deletions

@@ -7,27 +7,27 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ignite-2024
 ms.topic: how-to
-ms.date: 05/31/2025
+ms.date: 07/28/2025
 ms.author: pafarley
 author: PatrickFarley
 ---

 # How to use blocklists with Foundry Models in Azure AI Foundry services

-The configurable content filters are sufficient for most content moderation needs. However, you might need to create custom blocklists in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) as part of your content filtering configurations in order to filter terms specific to your use case. This article illustrates how to create custom blocklists as part of your content filters in [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
+The configurable content filters are sufficient for most content moderation needs. However, you might need to create custom blocklists in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) as part of your content filtering configurations to filter terms specific to your use case. This article shows how to create custom blocklists as part of your content filters in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).

 ## Prerequisites

-* An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. Read [Upgrade from GitHub Models to Foundry Models](../../model-inference/how-to/quickstart-github-models.md) if it's your case.
+* An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. For more information, see [Upgrade from GitHub Models to Foundry Models](../../model-inference/how-to/quickstart-github-models.md).

 * An Azure AI Foundry services resource. For more information, see [Create an Azure AI Foundry Services resource](../../../ai-services/multi-service-resource.md?context=/azure/ai-services/model-inference/context/context).

 * An Azure AI Foundry project [connected to your Azure AI Foundry services resource](../../model-inference/how-to/configure-project-connection.md).

-* A model deployment. See [Add and configure models to Azure AI Foundry services](../../model-inference/how-to/create-model-deployments.md) for adding models to your resource.
+* A model deployment. For more information, see [Add and configure models to Azure AI Foundry services](../../model-inference/how-to/create-model-deployments.md).

 > [!NOTE]
-> Blocklist (preview) is only supported for Azure OpenAI models.
+> Blocklist (preview) support is limited to Azure OpenAI models.

 [!INCLUDE [use-blocklists](../../includes/use-blocklists.md)]
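Conceptually, a blocklist adds exact-term filtering on top of the classifier-based filters described above. The toy matcher below illustrates that idea only; it is our own code, not the Azure AI Foundry blocklist implementation or its API.

```python
# Toy whole-word blocklist matcher (illustrative only; the real blocklist
# is configured in the portal and evaluated by the service).
import re

def blocklist_hits(text: str, terms: list) -> list:
    """Return the blocklist terms found as whole words in the text."""
    hits = []
    for term in terms:
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            hits.append(term)
    return hits

print(blocklist_hits("Contoso launch codename PHOENIX", ["phoenix", "falcon"]))
# ['phoenix']
```

Whole-word matching (the `\b` anchors) avoids flagging substrings such as "phoenixes"; a production blocklist needs more care around plurals, spacing tricks, and obfuscation.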
articles/ai-foundry/includes/find-endpoint.md

Lines changed: 2 additions & 0 deletions

@@ -9,6 +9,8 @@ ms.date: 05/13/2025
 ms.custom: include file
 ---

+Azure AI Foundry Models allows customers to consume the most powerful models from flagship model providers using a single endpoint and credentials. This means that you can switch between models and consume them from your application without changing a single line of code.
+
 Copy the **Azure AI Foundry project endpoint** in the **Overview** section of your project. You'll use it in a moment.

 :::image type="content" source="../media/how-to/projects/fdp-project-overview.png" alt-text="Screenshot shows the project overview for a Foundry project.":::

articles/ai-foundry/includes/get-started-fdp.md

Lines changed: 4 additions & 2 deletions

@@ -25,7 +25,7 @@ In this quickstart, you use [Azure AI Foundry](https://ai.azure.com/?cid=learnDo
 The Azure AI Foundry SDK is available in multiple languages, including Python, Java, JavaScript, and C#. This quickstart provides instructions for each of these languages.

 > [!TIP]
-> The rest of this article shows how to use a **[!INCLUDE [fdp](../includes/fdp-project-name.md)]**. Select **[!INCLUDE [hub](../includes/hub-project-name.md)]** at the top of this article if you want to use a [!INCLUDE [hub](../includes/hub-project-name.md)] instead. [Which type of project do I need?](../what-is-azure-ai-foundry.md#which-type-of-project-do-i-need)
+> The rest of this article shows how to create and use a **[!INCLUDE [fdp](../includes/fdp-project-name.md)]**. Select **[!INCLUDE [hub](../includes/hub-project-name.md)]** at the top of this article if you want to use a [!INCLUDE [hub](../includes/hub-project-name.md)] instead. [Which type of project do I need?](../what-is-azure-ai-foundry.md#which-type-of-project-do-i-need)

 ## Prerequisites

@@ -37,12 +37,14 @@ The Azure AI Foundry SDK is available in multiple languages, including Python, J
 ## Start with a project and model

 1. Sign in to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
-1. On the home page, search and then select the **gpt-4o** model.
+1. In the portal, you can explore a rich catalog of cutting-edge models from Microsoft, OpenAI, DeepSeek, Hugging Face, Meta, and more. For this tutorial, search and then
+select the **gpt-4o** model.

    :::image type="content" source="../media/quickstarts/start-building.png" alt-text="Screenshot shows how to start building an Agent in Azure AI Foundry portal.":::

 1. On the model details page, select **Use this model**.
 1. Fill in a name to use for your project and select **Create**.
+1. Review the deployment information, then select **Deploy**.
 1. Once your resources are created, you are in the chat playground.

 ## Set up your environment
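After deploying **gpt-4o** as in the steps above, one common way to call it from Python is the `openai` package's Azure client. This is a hedged sketch, not code from the quickstart: the environment variable names, the API version string, and the deployment name are placeholder assumptions you would replace with your project's values from the portal.

```python
# Hedged sketch of a chat call against a gpt-4o deployment on Azure.
# Requires `pip install openai` plus your own endpoint and key.
import os

def build_messages(user_text: str) -> list:
    """Assemble the minimal chat payload the completions API expects."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_text},
    ]

def ask_gpt4o(user_text: str) -> str:
    from openai import AzureOpenAI  # imported lazily so the sketch loads without the package

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder: from the Overview page
        api_key=os.environ["AZURE_OPENAI_API_KEY"],          # placeholder
        api_version="2024-10-21",                            # assumed GA API version
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # your deployment name, if you kept the default
        messages=build_messages(user_text),
    )
    return response.choices[0].message.content
```

Keyless authentication with `DefaultAzureCredential` is also possible; key-based access is shown only because it needs the least setup.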
