Skip to content

Commit ef0791e

Browse files
authored
Merge pull request #2564 from PatrickFarley/freshness-pass
Freshness pass
2 parents f2842fe + c92ff16 commit ef0791e

File tree

5 files changed

+126
-11
lines changed

5 files changed

+126
-11
lines changed

articles/ai-services/cognitive-services-data-loss-prevention.md

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -5,7 +5,7 @@ author: gclarkmt
55
ms.author: gregc
66
ms.service: azure-ai-services
77
ms.topic: how-to
8-
ms.date: 03/25/2024
8+
ms.date: 01/25/2025
99
ms.custom: template-concept
1010
---
1111

@@ -24,7 +24,7 @@ There are two parts to enable data loss prevention. First, the resource property
2424
>[!NOTE]
2525
>
2626
> * The `allowedFqdnList` property value supports a maximum of 1000 URLs.
27-
> * The property supports both IP addresses and fully qualified domain names i.e., `www.microsoft.com`, values.
27+
> * The property supports both IP addresses (IPv4 only) and fully qualified domain names (i.e., `www.microsoft.com`) as values.
2828
> * It can take up to 15 minutes for the updated list to take effect.
2929
3030
# [Azure CLI](#tab/azure-cli)
Lines changed: 115 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,115 @@
1+
---
2+
title: Use Content Safety in Azure AI Foundry portal
3+
titleSuffix: Azure AI services
4+
description: Learn how to use the Content Safety try it out page in Azure AI Foundry portal to experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
5+
ms.service: azure-ai-studio
6+
ms.custom:
7+
- ignite-2024
8+
ms.topic: how-to
9+
author: PatrickFarley
10+
manager: nitinme
11+
ms.date: 01/28/2025
12+
ms.author: pafarley
13+
---
14+
15+
# Use Content Safety in Azure AI Foundry portal
16+
17+
Azure AI Foundry includes a Content Safety **try it out** page that lets you use the core detection models and other content safety features.
18+
19+
## Prerequisites
20+
21+
- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
22+
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).
23+
24+
25+
## Setup
26+
27+
Follow these steps to use the Content Safety **try it out** page:
28+
29+
1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
30+
1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
31+
32+
:::image type="content" source="/azure/ai-studio/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
33+
34+
## Analyze text
35+
36+
1. Select the **Moderate text content** panel.
37+
1. Add text to the input field, or select sample text from the panels on the page.
38+
1. Select **Run test**.
39+
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
40+
41+
### Use a blocklist
42+
43+
The **Use blocklist** tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
44+
45+
:::image type="content" source="/azure/ai-studio/media/content-safety/blocklist-panel.png" alt-text="Screenshot of the Use blocklist panel.":::
46+
47+
## Analyze images
48+
49+
The **Moderate image** page provides capability for you to quickly try out image moderation.
50+
51+
1. Select the **Moderate image content** panel.
52+
1. Select a sample image from the panels on the page, or upload your own image.
53+
1. Select **Run test**.
54+
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
55+
56+
## View and export code
57+
58+
You can use the **View Code** feature in either the **Analyze text content** or **Analyze image content** pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
59+
60+
:::image type="content" source="/azure/ai-studio/media/content-safety/view-code-option.png" alt-text="Screenshot of the View code button.":::
61+
62+
## Use Prompt Shields
63+
64+
The **Prompt Shields** panel lets you try out user input risk detection. Detect User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or break the rules set in the System Message. These attacks can vary from intricate role-play to subtle subversion of the safety objective.
65+
66+
1. Select the **Prompt Shields** panel.
67+
1. Select a sample text on the page, or input your own content for testing.
68+
1. Select **Run test**.
69+
The service returns the risk flag and type for each sample.
70+
71+
For more information, see the [Prompt Shields conceptual guide](/azure/ai-services/content-safety/concepts/jailbreak-detection).
72+
73+
74+
75+
## Use Groundedness detection
76+
77+
The Groundedness detection panel lets you detect whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
78+
79+
1. Select the **Groundedness detection** panel.
80+
1. Select a sample content set on the page, or input your own for testing.
81+
1. Optionally, enable the reasoning feature and select your Azure OpenAI resource from the dropdown.
82+
1. Select **Run test**.
83+
The service returns the groundedness detection result.
84+
85+
86+
For more information, see the [Groundedness detection conceptual guide](/azure/ai-services/content-safety/concepts/groundedness).
87+
88+
89+
## Use Protected material detection
90+
91+
This feature scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content).
92+
93+
1. Select the **Protected material detection for text** or **Protected material detection for code** panel.
94+
1. Select a sample text on the page, or input your own for testing.
95+
1. Select **Run test**.
96+
The service returns the protected content result.
97+
98+
For more information, see the [Protected material conceptual guide](/azure/ai-services/content-safety/concepts/protected-material).
99+
100+
## Use custom categories
101+
102+
This feature lets you create and train your own custom content categories and scan text for matches.
103+
104+
1. Select the **Custom categories** panel.
105+
1. Select **Add a new category** to open a dialog box. Enter your category name and a text description, and connect a blob storage container with text training data. Select **Create and train**.
106+
1. Select a category and enter your sample input text, and select **Run test**.
107+
The service returns the custom category result.
108+
109+
110+
For more information, see the [Custom categories conceptual guide](/azure/ai-services/content-safety/concepts/custom-categories).
111+
112+
113+
## Next step
114+
115+
To use Azure AI Content Safety features with your Generative AI models, see the [Content filtering](/azure/ai-studio/concepts/content-filtering) guide.

articles/ai-services/content-safety/studio-quickstart.md

Lines changed: 4 additions & 6 deletions
Original file line numberDiff line numberDiff line change
@@ -8,13 +8,13 @@ manager: nitinme
88
ms.service: azure-ai-content-safety
99
ms.custom: build-2023, build-2023-dataai
1010
ms.topic: quickstart
11-
ms.date: 10/01/2024
11+
ms.date: 01/27/2025
1212
ms.author: pafarley
1313
---
1414

15-
# QuickStart: Azure AI Content Safety Studio
15+
# Quickstart: Azure AI Content Safety Studio
1616

17-
This article explains how you can get started with the Azure AI Content Safety service using Content Safety Studio in your browser.
17+
This article explains how to get started with the Azure AI Content Safety service using Content Safety Studio in your browser.
1818

1919
> [!CAUTION]
2020
> Some of the sample content provided by Content Safety Studio might be offensive. Sample images are blurred by default. User discretion is advised.
@@ -38,9 +38,7 @@ The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page
3838
1. Select the **Moderate text content** panel.
3939
1. Add text to the input field, or select sample text from the panels on the page.
4040
> [!TIP]
41-
> Text size and granularity
42-
>
43-
> See [Input requirements](./overview.md#input-requirements) for maximum text length limitations.
41+
> **Text size and granularity**: See [Input requirements](./overview.md#input-requirements) for maximum text length limitations.
4442
1. Select **Run test**.
4543

4644
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.

articles/ai-services/content-safety/toc.yml

Lines changed: 3 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -69,6 +69,8 @@ items:
6969
href: how-to/use-blocklist.md
7070
- name: Mitigate false results
7171
href: how-to/improve-performance.md
72+
- name: Use Content Safety in AI Foundry portal
73+
href: how-to/foundry.md
7274
- name: Containers (preview)
7375
items:
7476
- name: Content Safety containers overview
@@ -80,7 +82,7 @@ items:
8082
- name: Image analysis container
8183
href: how-to/containers/image-container.md
8284
- name: Embedded Content Safety (preview)
83-
href: how-to/embedded-content-safety.md
85+
href: how-to/embedded-content-safety.md
8486
- name: Encryption of data at rest
8587
href: how-to/encrypt-data-at-rest.md
8688
- name: Migrate from public preview to GA

articles/ai-studio/.openpublishing.redirection.ai-studio.json

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -212,8 +212,8 @@
212212
},
213213
{
214214
"source_path_from_root": "/articles/ai-studio/ai-services/how-to/content-safety.md",
215-
"redirect_url": "/azure/ai-foundry/model-inference/how-to/configure-content-filters",
216-
"redirect_document_id": false
215+
"redirect_url": "/azure/ai-services/content-safety/how-to/foundry",
216+
"redirect_document_id": true
217217
},
218218
{
219219
"source_path_from_root": "/articles/ai-studio/ai-services/concepts/quotas-limits.md",

0 commit comments

Comments
 (0)