Skip to content

Commit eeead59

Browse files
committed
foundry freshness
1 parent 4bce020 commit eeead59

File tree

5 files changed

+36
-36
lines changed

5 files changed

+36
-36
lines changed

articles/ai-foundry/ai-services/content-safety-overview.md

Lines changed: 16 additions & 16 deletions
Original file line numberDiff line numberDiff line change
@@ -7,41 +7,41 @@ ms.service: azure-ai-foundry
77
ms.custom:
88
- ignite-2024
99
ms.topic: overview
10-
ms.date: 05/31/2025
10+
ms.date: 07/28/2025
1111
ms.author: pafarley
1212
author: PatrickFarley
1313
---
1414

1515
# Content Safety in the Azure AI Foundry portal
1616

17-
Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
17+
Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that help you detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) lets you view, explore, and try out sample code for detecting harmful content across different modalities.
1818

1919
## Features
2020

21-
You can use Azure AI Content Safety for the following scenarios:
21+
Use Azure AI Content Safety for the following scenarios:
2222

23-
**Text content**:
24-
- Moderate text content: This feature scans and moderates text content, identifying and categorizing it based on different levels of severity to ensure appropriate responses.
25-
- Groundedness detection: This filter determines if the AI's responses are based on trusted, user-provided sources, ensuring that the answers are "grounded" in the intended material. Groundedness detection is helpful for improving the reliability and factual accuracy of responses.
26-
- Protected material detection for text: This feature identifies protected text material, such as known song lyrics, articles, or other content, ensuring that the AI doesn't output this content without permission.
27-
- Protected material detection for code: Detects code segments in the model's output that match known code from public repositories, helping to prevent uncredited or unauthorized reproduction of source code.
28-
- Prompt shields: This feature provides a unified API to address "Jailbreak" and "Indirect Attacks":
23+
### Text content
24+
- Moderate text content: Scans and moderates text content. It identifies and categorizes text based on different levels of severity to ensure appropriate responses.
25+
- Groundedness detection: Determines if the AI's responses are based on trusted, user-provided sources. This feature ensures that the answers are "grounded" in the intended material. Groundedness detection helps improve the reliability and factual accuracy of responses.
26+
- Protected material detection for text: Identifies protected text material, such as known song lyrics, articles, or other content. This feature ensures that the AI doesn't output this content without permission.
27+
- Protected material detection for code: Detects code segments in the model's output that match known code from public repositories. This feature helps prevent uncredited or unauthorized reproduction of source code.
28+
- Prompt shields: Provides a unified API to address "Jailbreak" and "Indirect Attacks":
2929
- Jailbreak Attacks: Attempts by users to manipulate the AI into bypassing its safety protocols or ethical guidelines. Examples include prompts designed to trick the AI into giving inappropriate responses or performing tasks it was programmed to avoid.
30-
- Indirect Attacks: Also known as Cross-Domain Prompt Injection Attacks, indirect attacks involve embedding malicious prompts within documents that the AI might process. For example, if a document contains hidden instructions, the AI might inadvertently follow them, leading to unintended or unsafe outputs.
30+
- Indirect Attacks: Also known as Cross-Domain Prompt Injection Attacks. Indirect attacks involve embedding malicious prompts within documents that the AI might process. For example, if a document contains hidden instructions, the AI might inadvertently follow them, leading to unintended or unsafe outputs.
3131

32-
**Image content**:
32+
### Image content
3333
- Moderate image content: Similar to text moderation, this feature filters and assesses image content to detect inappropriate or harmful visuals.
34-
- Moderate multimodal content: This is designed to handle a combination of text and images, assessing the overall context and any potential risks across multiple types of content.
34+
- Moderate multimodal content: Designed to handle a combination of text and images. It assesses the overall context and any potential risks across multiple types of content.
3535

36-
**Customize your own categories**:
37-
- Custom categories: Allows users to define specific categories for moderating and filtering content, tailoring safety protocols to unique needs.
38-
- Safety system message: Provides a method for setting up a "System Message" to instruct the AI on desired behavior and limitations, reinforcing safety boundaries and helping prevent unwanted outputs.
36+
### Custom filtering
37+
- Custom categories: Allows users to define specific categories for moderating and filtering content. Tailors safety protocols to unique needs.
38+
- Safety system message: Provides a method for setting up a "System Message" to instruct the AI on desired behavior and limitations. It reinforces safety boundaries and helps prevent unwanted outputs.
3939

4040
[!INCLUDE [content-safety-harm-categories](../includes/content-safety-harm-categories.md)]
4141

4242
## Limitations
4343

44-
Refer to the [Content Safety overview](/azure/ai-services/content-safety/overview) for supported regions, rate limits, and input requirements for all features. Refer to the [Language support](/azure/ai-services/content-safety/language-support) page for supported languages.
44+
For supported regions, rate limits, and input requirements for all features, see the [Content Safety overview](/azure/ai-services/content-safety/overview). For supported languages, see the [Language support](/azure/ai-services/content-safety/language-support) page.
4545

4646

4747
## Next step

articles/ai-foundry/concepts/content-filtering.md

Lines changed: 7 additions & 7 deletions
Original file line numberDiff line numberDiff line change
@@ -9,7 +9,7 @@ ms.custom:
99
- build-2024
1010
- ignite-2024
1111
ms.topic: concept-article
12-
ms.date: 05/31/2025
12+
ms.date: 07/28/2025
1313
ms.reviewer: eur
1414
ms.author: pafarley
1515
author: PatrickFarley
@@ -20,17 +20,17 @@ author: PatrickFarley
2020
[Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) includes a content filtering system that works alongside core models and image generation models.
2121

2222
> [!IMPORTANT]
23-
> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI in Azure AI Foundry Models. Learn more about the [Whisper model in Azure OpenAI](../openai/concepts/models.md).
23+
> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure AI Foundry Models. Learn more about the [Whisper model in Azure OpenAI](../openai/concepts/models.md).
2424
2525
## How it works
2626

27-
The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the model prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
27+
The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the model prompt input and completion output through a set of classification models designed to detect and prevent harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
2828

2929
With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later). Models available through **serverless API deployments** have content filtering enabled by default. To learn more about the default content filter enabled for serverless API deployments, see [Content safety for Models Sold Directly by Azure ](model-catalog-content-safety.md).
3030

3131
## Language support
3232

33-
The content filtering models have been trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality can vary. In all cases, you should do your own testing to ensure that it works for your application.
33+
The content filtering models are trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality can vary. In all cases, you should do your own testing to ensure that it works for your application.
3434

3535
## Content risk filters (input and output filters)
3636

@@ -50,7 +50,7 @@ The following special filters work for both input and output of generative AI mo
5050
|Category|Description|
5151
|--------|-----------|
5252
|Safe | Content might be related to violence, self-harm, sexual, or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts, which are appropriate for most audiences. |
53-
|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature) and depictions at low intensity.|
53+
|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature), and depictions at low intensity.|
5454
| Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
5555
|High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse; includes endorsement, glorification, or promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, or nonconsensual power exchange or abuse.|
5656

@@ -65,8 +65,8 @@ You can also enable special filters for generative AI scenarios:
6565
### Other output filters
6666

6767
You can also enable the following special output filters:
68-
- **Protected material for text**: Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that can be outputted by large language models.
69-
- **Protected material for code**: Protected material code describes source code that matches a set of source code from public repositories, which can be outputted by large language models without proper citation of source repositories.
68+
- **Protected material for text**: Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that a large language model might output.
69+
- **Protected material for code**: Protected material code describes source code that matches a set of source code from public repositories, which a large language models might output without proper citation of source repositories.
7070
- **Groundedness**: The groundedness detection filter detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
7171

7272
[!INCLUDE [create-content-filter](../includes/create-content-filter.md)]

articles/ai-foundry/foundry-models/how-to/use-blocklists.md

Lines changed: 5 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -7,27 +7,27 @@ ms.service: azure-ai-foundry
77
ms.custom:
88
- ignite-2024
99
ms.topic: how-to
10-
ms.date: 05/31/2025
10+
ms.date: 07/28/2025
1111
ms.author: pafarley
1212
author: PatrickFarley
1313
---
1414

1515
# How to use blocklists with Foundry Models in Azure AI Foundry services
1616

17-
The configurable content filters are sufficient for most content moderation needs. However, you might need to create custom blocklists in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) as part of your content filtering configurations in order to filter terms specific to your use case. This article illustrates how to create custom blocklists as part of your content filters in [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
17+
The configurable content filters are sufficient for most content moderation needs. However, you might need to create custom blocklists in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) as part of your content filtering configurations to filter terms specific to your use case. This article shows how to create custom blocklists as part of your content filters in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
1818

1919
## Prerequisites
2020

21-
* An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. Read [Upgrade from GitHub Models to Foundry Models](../../model-inference/how-to/quickstart-github-models.md) if it's your case.
21+
* An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. For more information, see [Upgrade from GitHub Models to Foundry Models](../../model-inference/how-to/quickstart-github-models.md).
2222

2323
* An Azure AI Foundry services resource. For more information, see [Create an Azure AI Foundry Services resource](../../../ai-services/multi-service-resource.md?context=/azure/ai-services/model-inference/context/context).
2424

2525
* An Azure AI Foundry project [connected to your Azure AI Foundry services resource](../../model-inference/how-to/configure-project-connection.md).
2626

27-
* A model deployment. See [Add and configure models to Azure AI Foundry services](../../model-inference/how-to/create-model-deployments.md) for adding models to your resource.
27+
* A model deployment. For more information, see [Add and configure models to Azure AI Foundry services](../../model-inference/how-to/create-model-deployments.md).
2828

2929
> [!NOTE]
30-
> Blocklist (preview) is only supported for Azure OpenAI models.
30+
> Blocklist (preview) support is limited to Azure OpenAI models.
3131
3232
[!INCLUDE [use-blocklists](../../includes/use-blocklists.md)]
3333

articles/ai-foundry/includes/use-blocklists.md

Lines changed: 7 additions & 7 deletions
Original file line numberDiff line numberDiff line change
@@ -6,30 +6,30 @@ ms.reviewer: pafarley
66
ms.author: pafarley
77
ms.service: azure-ai-foundry
88
ms.topic: include
9-
ms.date: 05/01/2025
9+
ms.date: 07/28/2025
1010
ms.custom: include
1111
---
1212

1313

1414
## Create a blocklist
1515

16-
1. Go to [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) and navigate to your project. Then select the **Guardrails + controls** page on the left nav and select the **Blocklists** tab.
16+
1. Go to [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) and navigate to your project. Then select the **Guardrails + controls** page in the left navigation and select the **Blocklists** tab.
1717

1818
:::image type="content" source="../media/content-safety/content-filter/select-blocklists.png" lightbox="../media/content-safety/content-filter/select-blocklists.png" alt-text="Screenshot of the Blocklists page tab.":::
1919

2020
1. Select **Create a blocklist**. Enter a name for your blocklist, add a description, and select an Azure OpenAI resource to connect it to. Then select **Create Blocklist**.
2121

22-
1. Select your new blocklist once it's created. On the blocklist's page, select **Add new term**.
22+
1. Select your new blocklist. On the blocklist's page, select **Add new term**.
2323

24-
1. Enter the term that should be filtered and select **Add term**. You can also use a regex. You can delete each term in your blocklist.
24+
1. Enter the term that you want to filter and select **Add term**. You can also use a regex. You can delete each term in your blocklist.
2525

2626
## Attach a blocklist to a content filter configuration
2727

28-
1. Once the blocklist is ready, go back to the **Guardrails + controls** page and select the **Content filters** tab. Create a new content filter configuration. This opens a wizard with several AI content safety components.
28+
1. After you create the blocklist, return to the **Guardrails + controls** page and select the **Content filters** tab. Create a new content filter configuration. A wizard opens with several AI content safety components.
2929

3030
:::image type="content" source="../media/content-safety/content-filter/create-content-filter.png" lightbox="../media/content-safety/content-filter/create-content-filter.png" alt-text="Screenshot of the Create content filter button.":::
3131

32-
1. On the **Input filter** and **Output filter** screens, toggle the **Blocklist** button on. You can then select a blocklist from the list.
32+
1. On the **Input filter** and **Output filter** screens, turn on the **Blocklist** toggle. You can then select a blocklist from the list.
3333
There are two types of blocklists: the custom blocklists you created, and prebuilt blocklists that Microsoft provides—in this case a Profanity blocklist (English).
3434

35-
1. You can now decide which of the available blocklists you want to include in your content filtering configuration. The last step is to review and finish the content filtering configuration by selecting **Next**. You can always go back and edit your configuration. Once its ready, select a **Create content filter**. The new configuration that includes your blocklists can now be applied to a deployment.
35+
1. Decide which of the available blocklists you want to include in your content filtering configuration. Review and finish the content filtering configuration by selecting **Next**. You can always go back and edit your configuration. When it's ready, select **Create content filter**. You can now apply the new configuration that includes your blocklists to a deployment.

articles/ai-foundry/responsible-use-of-ai-overview.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ manager: nitinme
66
keywords: Azure AI services, cognitive
77
ms.service: azure-ai-foundry
88
ms.topic: overview
9-
ms.date: 05/31/2025
9+
ms.date: 07/28/2025
1010
ms.author: pafarley
1111
author: PatrickFarley
1212
ms.custom: ignite-2024

0 commit comments

Comments
 (0)