|
| 1 | +--- |
| 2 | +title: Use Content Safety in Azure AI Foundry portal |
| 3 | +titleSuffix: Azure AI services |
| 4 | +description: Learn how to use the Content Safety try it out page in Azure AI Foundry portal to experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content. |
| 5 | +ms.service: azure-ai-studio |
| 6 | +ms.custom: |
| 7 | + - ignite-2024 |
| 8 | +ms.topic: how-to |
| 9 | +author: PatrickFarley |
| 10 | +manager: nitinme |
| 11 | +ms.date: 01/28/2025 |
| 12 | +ms.author: pafarley |
| 13 | +--- |
| 14 | + |
| 15 | +# Use Content Safety in Azure AI Foundry portal |
| 16 | + |
| 17 | +Azure AI Foundry includes a Content Safety **try it out** page that lets you use the core detection models and other content safety features. |
| 18 | + |
| 19 | +## Prerequisites |
| 20 | + |
| 21 | +- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services). |
| 22 | +- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices). |
| 23 | + |
| 24 | + |
| 25 | +## Setup |
| 26 | + |
| 27 | +Follow these steps to use the Content Safety **try it out** page: |
| 28 | + |
| 29 | +1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab. |
| 30 | +1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content. |
| 31 | + |
| 32 | +:::image type="content" source="/azure/ai-studio/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety."::: |
| 33 | + |
| 34 | +## Analyze text |
| 35 | + |
| 36 | +1. Select the **Moderate text content** panel. |
| 37 | +1. Add text to the input field, or select sample text from the panels on the page. |
| 38 | +1. Select **Run test**. |
| 39 | + The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works. |
| 40 | + |
| 41 | +### Use a blocklist |
| 42 | + |
| 43 | +The **Use blocklist** tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist. |
| 44 | + |
| 45 | +:::image type="content" source="/azure/ai-studio/media/content-safety/blocklist-panel.png" alt-text="Screenshot of the Use blocklist panel."::: |
| 46 | + |
| 47 | +## Analyze images |
| 48 | + |
| 49 | +The **Moderate image** page provides capability for you to quickly try out image moderation. |
| 50 | + |
| 51 | +1. Select the **Moderate image content** panel. |
| 52 | +1. Select a sample image from the panels on the page, or upload your own image. |
| 53 | +1. Select **Run test**. |
| 54 | + The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works. |
| 55 | + |
| 56 | +## View and export code |
| 57 | + |
| 58 | +You can use the **View Code** feature in either the **Analyze text content** or **Analyze image content** pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end. |
| 59 | + |
| 60 | +:::image type="content" source="/azure/ai-studio/media/content-safety/view-code-option.png" alt-text="Screenshot of the View code button."::: |
| 61 | + |
| 62 | +## Use Prompt Shields |
| 63 | + |
| 64 | +The **Prompt Shields** panel lets you try out user input risk detection. Detect User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or break the rules set in the System Message. These attacks can vary from intricate role-play to subtle subversion of the safety objective. |
| 65 | + |
| 66 | +1. Select the **Prompt Shields** panel. |
| 67 | +1. Select a sample text on the page, or input your own content for testing. |
| 68 | +1. Select **Run test**. |
| 69 | + The service returns the risk flag and type for each sample. |
| 70 | + |
| 71 | +For more information, see the [Prompt Shields conceptual guide](/azure/ai-services/content-safety/concepts/jailbreak-detection). |
| 72 | + |
| 73 | + |
| 74 | + |
| 75 | +## Use Groundedness detection |
| 76 | + |
| 77 | +The Groundedness detection panel lets you detect whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. |
| 78 | + |
| 79 | +1. Select the **Groundedness detection** panel. |
| 80 | +1. Select a sample content set on the page, or input your own for testing. |
| 81 | +1. Optionally, enable the reasoning feature and select your Azure OpenAI resource from the dropdown. |
| 82 | +1. Select **Run test**. |
| 83 | + The service returns the groundedness detection result. |
| 84 | + |
| 85 | + |
| 86 | +For more information, see the [Groundedness detection conceptual guide](/azure/ai-services/content-safety/concepts/groundedness). |
| 87 | + |
| 88 | + |
| 89 | +## Use Protected material detection |
| 90 | + |
| 91 | +This feature scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). |
| 92 | + |
| 93 | +1. Select the **Protected material detection for text** or **Protected material detection for code** panel. |
| 94 | +1. Select a sample text on the page, or input your own for testing. |
| 95 | +1. Select **Run test**. |
| 96 | + The service returns the protected content result. |
| 97 | + |
| 98 | +For more information, see the [Protected material conceptual guide](/azure/ai-services/content-safety/concepts/protected-material). |
| 99 | + |
| 100 | +## Use custom categories |
| 101 | + |
| 102 | +This feature lets you create and train your own custom content categories and scan text for matches. |
| 103 | + |
| 104 | +1. Select the **Custom categories** panel. |
| 105 | +1. Select **Add a new category** to open a dialog box. Enter your category name and a text description, and connect a blob storage container with text training data. Select **Create and train**. |
| 106 | +1. Select a category and enter your sample input text, and select **Run test**. |
| 107 | + The service returns the custom category result. |
| 108 | + |
| 109 | + |
| 110 | +For more information, see the [Custom categories conceptual guide](/azure/ai-services/content-safety/concepts/custom-categories). |
| 111 | + |
| 112 | + |
| 113 | +## Next step |
| 114 | + |
| 115 | +To use Azure AI Content Safety features with your Generative AI models, see the [Content filtering](/azure/ai-studio/concepts/content-filtering) guide. |
0 commit comments