Skip to content

Commit 1103034

Browse files
authored
Merge pull request #4044 from PatrickFarley/consaf-updates
add new qs pivots
2 parents 3c33c1e + 55dc31f commit 1103034

26 files changed

+1489
-1278
lines changed

articles/ai-foundry/toc.yml

Lines changed: 0 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -557,8 +557,6 @@ items:
557557
href: ai-services/content-safety-overview.md
558558
- name: Content safety for models deployed with serverless APIs
559559
href: concepts/model-catalog-content-safety.md
560-
- name: Use Azure AI Content Safety in AI Foundry portal
561-
href: /azure/ai-services/content-safety/how-to/foundry?context=/azure/ai-foundry/context/context
562560
- name: Content filtering
563561
href: concepts/content-filtering.md
564562
- name: Use blocklists

articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 10 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -160,6 +160,16 @@
160160
"redirect_url": "/azure/ai-services/content-safety/quickstart-custom-categories",
161161
"redirect_document_id": true
162162
},
163+
{
164+
"source_path_from_root": "/articles/ai-services/content-safety/how-to/foundry.md",
165+
"redirect_url": "/azure/ai-foundry/ai-services/content-safety-overview",
166+
"redirect_document_id": false
167+
},
168+
{
169+
"source_path_from_root": "/articles/ai-services/content-safety/studio-quickstart.md",
170+
"redirect_url": "/azure/ai-foundry/ai-services/content-safety-overview?context=/azure/ai-services/content-safety/context/context",
171+
"redirect_document_id": false
172+
},
163173
{
164174
"source_path_from_root": "/articles/ai-services/speech-service/how-to-custom-voice-create-voice.md",
165175
"redirect_url": "/azure/ai-services/speech-service/professional-voice-train-voice",

articles/ai-services/content-safety/how-to/foundry.md

Lines changed: 0 additions & 115 deletions
This file was deleted.
Lines changed: 33 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,33 @@
1+
---
2+
title: "Quickstart: Use a blocklist in the Foundry portal"
3+
author: PatrickFarley
4+
manager: nitinme
5+
ms.service: azure-ai-content-safety
6+
ms.custom:
7+
ms.topic: include
8+
ms.date: 04/10/2025
9+
ms.author: pafarley
10+
---
11+
12+
## Prerequisites
13+
14+
- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
15+
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).
16+
17+
18+
## Setup
19+
20+
Follow these steps to use the Content Safety **try it out** page:
21+
22+
1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
23+
1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
24+
25+
:::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
26+
27+
28+
### Use a blocklist
29+
30+
The **Use blocklist** tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
31+
32+
:::image type="content" source="/azure/ai-foundry/media/content-safety/blocklist-panel.png" alt-text="Screenshot of the Use blocklist panel.":::
33+
Lines changed: 39 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,39 @@
1+
---
2+
title: "Quickstart: Use custom categories in the Foundry portal"
3+
author: PatrickFarley
4+
manager: nitinme
5+
ms.service: azure-ai-content-safety
6+
ms.custom:
7+
ms.topic: include
8+
ms.date: 04/10/2025
9+
ms.author: pafarley
10+
---
11+
12+
13+
## Prerequisites
14+
15+
- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
16+
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).
17+
18+
19+
## Setup
20+
21+
Follow these steps to use the Content Safety **try it out** page:
22+
23+
1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
24+
1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
25+
26+
:::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
27+
28+
29+
## Use custom categories
30+
31+
This feature lets you create and train your own custom content categories and scan text for matches.
32+
33+
1. Select the **Custom categories** panel.
34+
1. Select **Add a new category** to open a dialog box. Enter your category name and a text description, and connect a blob storage container with text training data. Select **Create and train**.
35+
1. Select a category and enter your sample input text, and select **Run test**.
36+
The service returns the custom category result.
37+
38+
39+
For more information, see the [Custom categories conceptual guide](/azure/ai-services/content-safety/concepts/custom-categories).
Lines changed: 42 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,42 @@
1+
---
2+
title: "Quickstart: Use groundedness detection"
3+
author: PatrickFarley
4+
manager: nitinme
5+
ms.service: azure-ai-content-safety
6+
ms.custom:
7+
ms.topic: include
8+
ms.date: 04/10/2025
9+
ms.author: pafarley
10+
---
11+
12+
13+
## Prerequisites
14+
15+
- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
16+
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).
17+
18+
19+
## Setup
20+
21+
Follow these steps to use the Content Safety **try it out** page:
22+
23+
1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
24+
1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
25+
26+
:::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
27+
28+
29+
30+
## Use Groundedness detection
31+
32+
The Groundedness detection panel lets you detect whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
33+
34+
1. Select the **Groundedness detection** panel.
35+
1. Select a sample content set on the page, or input your own for testing.
36+
1. Optionally, enable the reasoning feature and select your Azure OpenAI resource from the dropdown.
37+
1. Select **Run test**.
38+
The service returns the groundedness detection result.
39+
40+
41+
For more information, see the [Groundedness detection conceptual guide](/azure/ai-services/content-safety/concepts/groundedness).
42+
Lines changed: 40 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,40 @@
1+
---
2+
title: "Quickstart: Analyze image content"
3+
description: In this quickstart, get started using Azure AI Content Safety to analyze image content for objectionable material.
4+
author: PatrickFarley
5+
manager: nitinme
6+
ms.service: azure-ai-content-safety
7+
ms.custom:
8+
ms.topic: include
9+
ms.date: 04/10/2025
10+
ms.author: pafarley
11+
---
12+
13+
## Prerequisites
14+
15+
- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
16+
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).
17+
18+
## Setup
19+
20+
Follow these steps to use the Content Safety **try it out** page:
21+
22+
1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
23+
1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
24+
25+
:::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
26+
27+
## Analyze images
28+
29+
The **Moderate image** page provides capability for you to quickly try out image moderation.
30+
31+
1. Select the **Moderate image content** panel.
32+
1. Select a sample image from the panels on the page, or upload your own image.
33+
1. Select **Run test**.
34+
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
35+
36+
## View and export code
37+
38+
You can use the **View Code** feature in either the **Analyze text content** or **Analyze image content** pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
39+
40+
:::image type="content" source="/azure/ai-foundry/media/content-safety/view-code-option.png" alt-text="Screenshot of the View code button.":::
Lines changed: 40 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,40 @@
1+
---
2+
title: "Quickstart: Use prompt shields in the Foundry portal"
3+
author: PatrickFarley
4+
manager: nitinme
5+
ms.service: azure-ai-content-safety
6+
ms.custom:
7+
ms.topic: include
8+
ms.date: 04/10/2025
9+
ms.author: pafarley
10+
---
11+
12+
13+
14+
## Prerequisites
15+
16+
- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
17+
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).
18+
19+
20+
## Setup
21+
22+
Follow these steps to use the Content Safety **try it out** page:
23+
24+
1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
25+
1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
26+
27+
:::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
28+
29+
30+
## Use Prompt Shields
31+
32+
The **Prompt Shields** panel lets you try out user input risk detection. Detect User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or break the rules set in the System Message. These attacks can vary from intricate role-play to subtle subversion of the safety objective.
33+
34+
1. Select the **Prompt Shields** panel.
35+
1. Select a sample text on the page, or input your own content for testing.
36+
1. Select **Run test**.
37+
The service returns the risk flag and type for each sample.
38+
39+
For more information, see the [Prompt Shields conceptual guide](/azure/ai-services/content-safety/concepts/jailbreak-detection).
40+
Lines changed: 38 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,38 @@
1+
---
2+
title: "Quickstart: Use protected material detection"
3+
author: PatrickFarley
4+
manager: nitinme
5+
ms.service: azure-ai-content-safety
6+
ms.custom:
7+
ms.topic: include
8+
ms.date: 04/10/2025
9+
ms.author: pafarley
10+
---
11+
12+
13+
## Prerequisites
14+
15+
- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
16+
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).
17+
18+
19+
## Setup
20+
21+
Follow these steps to use the Content Safety **try it out** page:
22+
23+
1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
24+
1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
25+
26+
:::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
27+
28+
29+
## Use Protected material detection
30+
31+
This feature scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content).
32+
33+
1. Select the **Protected material detection for text** or **Protected material detection for code** panel.
34+
1. Select a sample text on the page, or input your own for testing.
35+
1. Select **Run test**.
36+
The service returns the protected content result.
37+
38+
For more information, see the [Protected material conceptual guide](/azure/ai-services/content-safety/concepts/protected-material).

0 commit comments

Comments
 (0)