articles/ai-foundry/ai-services/content-safety-overview.md (16 additions, 16 deletions)
@@ -7,41 +7,41 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: overview
-ms.date: 05/31/2025
+ms.date: 07/28/2025
 ms.author: pafarley
 author: PatrickFarley
 ---

 # Content Safety in the Azure AI Foundry portal

-Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
+Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that help you detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) lets you view, explore, and try out sample code for detecting harmful content across different modalities.

 ## Features

-You can use Azure AI Content Safety for the following scenarios:
+Use Azure AI Content Safety for the following scenarios:

-**Text content**:
-- Moderate text content: This feature scans and moderates text content, identifying and categorizing it based on different levels of severity to ensure appropriate responses.
-- Groundedness detection: This filter determines if the AI's responses are based on trusted, user-provided sources, ensuring that the answers are "grounded" in the intended material. Groundedness detection is helpful for improving the reliability and factual accuracy of responses.
-- Protected material detection for text: This feature identifies protected text material, such as known song lyrics, articles, or other content, ensuring that the AI doesn't output this content without permission.
-- Protected material detection for code: Detects code segments in the model's output that match known code from public repositories, helping to prevent uncredited or unauthorized reproduction of source code.
-- Prompt shields: This feature provides a unified API to address "Jailbreak" and "Indirect Attacks":
+### Text content
+- Moderate text content: Scans and moderates text content. It identifies and categorizes text based on different levels of severity to ensure appropriate responses.
+- Groundedness detection: Determines if the AI's responses are based on trusted, user-provided sources. This feature ensures that the answers are "grounded" in the intended material. Groundedness detection helps improve the reliability and factual accuracy of responses.
+- Protected material detection for text: Identifies protected text material, such as known song lyrics, articles, or other content. This feature ensures that the AI doesn't output this content without permission.
+- Protected material detection for code: Detects code segments in the model's output that match known code from public repositories. This feature helps prevent uncredited or unauthorized reproduction of source code.
+- Prompt shields: Provides a unified API to address "Jailbreak" and "Indirect Attacks":
   - Jailbreak Attacks: Attempts by users to manipulate the AI into bypassing its safety protocols or ethical guidelines. Examples include prompts designed to trick the AI into giving inappropriate responses or performing tasks it was programmed to avoid.
-  - Indirect Attacks: Also known as Cross-Domain Prompt Injection Attacks, indirect attacks involve embedding malicious prompts within documents that the AI might process. For example, if a document contains hidden instructions, the AI might inadvertently follow them, leading to unintended or unsafe outputs.
+  - Indirect Attacks: Also known as Cross-Domain Prompt Injection Attacks. Indirect attacks involve embedding malicious prompts within documents that the AI might process. For example, if a document contains hidden instructions, the AI might inadvertently follow them, leading to unintended or unsafe outputs.

-**Image content**:
+### Image content
 - Moderate image content: Similar to text moderation, this feature filters and assesses image content to detect inappropriate or harmful visuals.
-- Moderate multimodal content: This is designed to handle a combination of text and images, assessing the overall context and any potential risks across multiple types of content.
+- Moderate multimodal content: Designed to handle a combination of text and images. It assesses the overall context and any potential risks across multiple types of content.

-**Customize your own categories**:
-- Custom categories: Allows users to define specific categories for moderating and filtering content, tailoring safety protocols to unique needs.
-- Safety system message: Provides a method for setting up a "System Message" to instruct the AI on desired behavior and limitations, reinforcing safety boundaries and helping prevent unwanted outputs.
+### Custom filtering
+- Custom categories: Allows users to define specific categories for moderating and filtering content. Tailors safety protocols to unique needs.
+- Safety system message: Provides a method for setting up a "System Message" to instruct the AI on desired behavior and limitations. It reinforces safety boundaries and helps prevent unwanted outputs.

-Refer to the [Content Safety overview](/azure/ai-services/content-safety/overview) for supported regions, rate limits, and input requirements for all features. Refer to the [Language support](/azure/ai-services/content-safety/language-support) page for supported languages.
+For supported regions, rate limits, and input requirements for all features, see the [Content Safety overview](/azure/ai-services/content-safety/overview). For supported languages, see the [Language support](/azure/ai-services/content-safety/language-support) page.
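The text moderation feature this file documents can be sketched in a few lines. This is an illustrative sketch only, not code from the changed article: the request body shape, category names, and sample response follow the public Content Safety `text:analyze` REST operation as I understand it, and the sample response is fabricated.

```python
# Hypothetical helpers around the Content Safety text:analyze operation.
# Shapes are assumptions based on the public REST API, not from this article.

def build_analyze_request(text: str) -> dict:
    """Body for POST {endpoint}/contentsafety/text:analyze?api-version=..."""
    return {
        "text": text,
        "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
        "outputType": "FourSeverityLevels",
    }

def max_severity(response: dict) -> int:
    """Highest severity across all returned category analyses (0 = safe)."""
    return max(
        (item["severity"] for item in response.get("categoriesAnalysis", [])),
        default=0,
    )

# Fabricated sample response in the documented shape.
sample = {
    "categoriesAnalysis": [
        {"category": "Hate", "severity": 0},
        {"category": "Violence", "severity": 4},
    ]
}

print(max_severity(sample))  # 4
```

A real call would send `build_analyze_request(...)` to your Content Safety endpoint with an `Ocp-Apim-Subscription-Key` header and then inspect the per-category severities as shown.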
articles/ai-foundry/concepts/content-filtering.md (7 additions, 7 deletions)
@@ -9,7 +9,7 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: concept-article
-ms.date: 05/31/2025
+ms.date: 07/28/2025
 ms.reviewer: eur
 ms.author: pafarley
 author: PatrickFarley
@@ -20,17 +20,17 @@ author: PatrickFarley
 [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) includes a content filtering system that works alongside core models and image generation models.

 > [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI in Azure AI Foundry Models. Learn more about the [Whisper model in Azure OpenAI](../openai/concepts/models.md).
+> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure AI Foundry Models. Learn more about the [Whisper model in Azure OpenAI](../openai/concepts/models.md).

 ## How it works

-The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the model prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
+The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the model prompt input and completion output through a set of classification models designed to detect and prevent harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.

 With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later). Models available through **serverless API deployments** have content filtering enabled by default. To learn more about the default content filter enabled for serverless API deployments, see [Content safety for Models Sold Directly by Azure](model-catalog-content-safety.md).

 ## Language support

-The content filtering models have been trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality can vary. In all cases, you should do your own testing to ensure that it works for your application.
+The content filtering models are trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. The service can work in many other languages, but the quality can vary. In all cases, you should do your own testing to ensure that it works for your application.

 ## Content risk filters (input and output filters)
@@ -50,7 +50,7 @@ The following special filters work for both input and output of generative AI models:
 |Category|Description|
 |--------|-----------|
 |Safe | Content might be related to violence, self-harm, sexual, or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts, which are appropriate for most audiences. |
-|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature) and depictions at low intensity.|
+|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature), and depictions at low intensity.|
 | Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
 |High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse; includes endorsement, glorification, or promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, or nonconsensual power exchange or abuse.|
@@ -65,8 +65,8 @@ You can also enable special filters for generative AI scenarios:
 ### Other output filters

 You can also enable the following special output filters:
-- **Protected material for text**: Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that can be outputted by large language models.
-- **Protected material for code**: Protected material code describes source code that matches a set of source code from public repositories, which can be outputted by large language models without proper citation of source repositories.
+- **Protected material for text**: Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that a large language model might output.
+- **Protected material for code**: Protected material code describes source code that matches a set of source code from public repositories, which a large language model might output without proper citation of source repositories.
 - **Groundedness**: The groundedness detection filter detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
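The Safe/Low/Medium/High severity table above drives a threshold decision: a configurable filter blocks content whose detected severity meets or exceeds the configured level. The helper below is a hypothetical illustration of that rule, not part of any Azure SDK.

```python
# Illustrative sketch of threshold-based filtering over the severity levels
# described in the table above. Names and logic are hypothetical.

LEVELS = ["safe", "low", "medium", "high"]

def is_blocked(detected: str, threshold: str = "medium") -> bool:
    """Block content whose severity is at or above the configured threshold."""
    return LEVELS.index(detected) >= LEVELS.index(threshold)

print(is_blocked("low"))         # False: below the default "medium" threshold
print(is_blocked("high"))        # True
print(is_blocked("low", "low"))  # True: a stricter filter blocks low and above
```

Lowering the threshold makes the filter stricter; raising it lets more borderline content through, which mirrors how the configurable filters trade safety against recall.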
articles/ai-foundry/foundry-models/how-to/use-blocklists.md (5 additions, 5 deletions)
@@ -7,27 +7,27 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: how-to
-ms.date: 05/31/2025
+ms.date: 07/28/2025
 ms.author: pafarley
 author: PatrickFarley
 ---

 # How to use blocklists with Foundry Models in Azure AI Foundry services

-The configurable content filters are sufficient for most content moderation needs. However, you might need to create custom blocklists in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) as part of your content filtering configurations in order to filter terms specific to your use case. This article illustrates how to create custom blocklists as part of your content filters in [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
+The configurable content filters are sufficient for most content moderation needs. However, you might need to create custom blocklists in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) as part of your content filtering configurations to filter terms specific to your use case. This article shows how to create custom blocklists as part of your content filters in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).

 ## Prerequisites

-* An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. Read [Upgrade from GitHub Models to Foundry Models](../../model-inference/how-to/quickstart-github-models.md) if it's your case.
+* An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. For more information, see [Upgrade from GitHub Models to Foundry Models](../../model-inference/how-to/quickstart-github-models.md).

 * An Azure AI Foundry services resource. For more information, see [Create an Azure AI Foundry Services resource](../../../ai-services/multi-service-resource.md?context=/azure/ai-services/model-inference/context/context).

 * An Azure AI Foundry project [connected to your Azure AI Foundry services resource](../../model-inference/how-to/configure-project-connection.md).

-* A model deployment. See [Add and configure models to Azure AI Foundry services](../../model-inference/how-to/create-model-deployments.md) for adding models to your resource.
+* A model deployment. For more information, see [Add and configure models to Azure AI Foundry services](../../model-inference/how-to/create-model-deployments.md).

 > [!NOTE]
-> Blocklist (preview) is only supported for Azure OpenAI models.
+> Blocklist (preview) support is limited to Azure OpenAI models.
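Conceptually, a custom blocklist screens text for exact terms specific to your use case before or after the model runs. The sketch below only mirrors that idea locally; the real blocklist is configured in the Azure AI Foundry portal, and the term set here is hypothetical.

```python
import re

# Local illustration of blocklist-style term matching. This is NOT the Azure
# blocklist implementation; it only demonstrates the concept of screening
# text against a set of custom terms.

def find_blocked_terms(text: str, blocklist: set[str]) -> list[str]:
    """Return blocklist terms that appear as whole words in the text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return sorted({w for w in words if w in blocklist})

blocklist = {"foo", "bar"}  # hypothetical custom terms
print(find_blocked_terms("Foo fighters raised the bar", blocklist))  # ['bar', 'foo']
```

Because matching is term-based rather than model-based, blocklists are a good fit for use-case-specific words that the trained classifiers don't cover.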
articles/ai-foundry/includes/find-endpoint.md (2 additions, 0 deletions)
@@ -9,6 +9,8 @@ ms.date: 05/13/2025
 ms.custom: include file
 ---

+Azure AI Foundry Models allows customers to consume the most powerful models from flagship model providers using a single endpoint and credentials. This means that you can switch between models and consume them from your application without changing a single line of code.
+
 Copy the **Azure AI Foundry project endpoint** in the **Overview** section of your project. You'll use it in a moment.

 :::image type="content" source="../media/how-to/projects/fdp-project-overview.png" alt-text="Screenshot shows the project overview for a Foundry project.":::
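The "single endpoint, switch models without changing code" claim added in this include can be illustrated with a request builder: only the model name varies between providers. The payload shape below follows common chat-completions conventions and is an assumption, not code from the article; the model names are examples.

```python
# Hypothetical sketch: one request shape for every model behind the shared
# Foundry endpoint. Only the "model" string changes between providers.

def build_chat_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Switching providers is a one-string change; nothing else in the app moves.
for model in ["gpt-4o", "DeepSeek-R1", "Llama-3.3-70B-Instruct"]:
    request = build_chat_request(model, "Hello!")
    print(request["model"])
```

In practice you'd send this body to the project endpoint you copied above, with your credential attached; the application code around it stays identical per model.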
articles/ai-foundry/includes/get-started-fdp.md (4 additions, 2 deletions)
@@ -25,7 +25,7 @@ In this quickstart, you use [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs)
 The Azure AI Foundry SDK is available in multiple languages, including Python, Java, JavaScript, and C#. This quickstart provides instructions for each of these languages.

 > [!TIP]
-> The rest of this article shows how to use a **[!INCLUDE [fdp](../includes/fdp-project-name.md)]**. Select **[!INCLUDE [hub](../includes/hub-project-name.md)]** at the top of this article if you want to use a [!INCLUDE [hub](../includes/hub-project-name.md)] instead. [Which type of project do I need?](../what-is-azure-ai-foundry.md#which-type-of-project-do-i-need)
+> The rest of this article shows how to create and use a **[!INCLUDE [fdp](../includes/fdp-project-name.md)]**. Select **[!INCLUDE [hub](../includes/hub-project-name.md)]** at the top of this article if you want to use a [!INCLUDE [hub](../includes/hub-project-name.md)] instead. [Which type of project do I need?](../what-is-azure-ai-foundry.md#which-type-of-project-do-i-need)

 ## Prerequisites
@@ -37,12 +37,14 @@ The Azure AI Foundry SDK is available in multiple languages, including Python, Java, JavaScript, and C#.
 ## Start with a project and model

 1. Sign in to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
-1. On the home page, search and then select the **gpt-4o** model.
+1. In the portal, you can explore a rich catalog of cutting-edge models from Microsoft, OpenAI, DeepSeek, Hugging Face, Meta, and more. For this tutorial, search for and then select the **gpt-4o** model.

    :::image type="content" source="../media/quickstarts/start-building.png" alt-text="Screenshot shows how to start building an Agent in Azure AI Foundry portal.":::

 1. On the model details page, select **Use this model**.
 1. Fill in a name to use for your project and select **Create**.
+1. Review the deployment information, and then select **Deploy**.
 1. Once your resources are created, you are in the chat playground.