---
title: include file
description: include file
author: PatrickFarley
ms.reviewer: pafarley
ms.author: pafarley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 11/25/2024
ms.custom: include
---


## Create a content filter in Azure AI Foundry

For any model deployment in [Azure AI Foundry](https://ai.azure.com), you can use the default content filter directly, but you might want more control. For example, you could make a filter stricter or more lenient, or enable more advanced capabilities like prompt shields and protected material detection.

> [!TIP]
> For guidance on using content filters in your Azure AI Foundry project, see [Azure AI Foundry content filtering](/azure/ai-studio/concepts/content-filtering).

Follow these steps to create a content filter:

1. Go to [Azure AI Foundry](https://ai.azure.com) and navigate to your project. Then select the **Safety + security** page from the left menu and select the **Content filters** tab.

    :::image type="content" source="../media/content-safety/content-filter/create-content-filter.png" alt-text="Screenshot of the button to create a new content filter." lightbox="../media/content-safety/content-filter/create-content-filter.png":::
1. Select **+ Create content filter**.
1. On the **Basic information** page, enter a name for your content filtering configuration. Select a connection to associate with the content filter. Then select **Next**.

    :::image type="content" source="../media/content-safety/content-filter/create-content-filter-basic.png" alt-text="Screenshot of the option to select or enter basic information such as the filter name when creating a content filter." lightbox="../media/content-safety/content-filter/create-content-filter-basic.png":::

    Now you can configure the input filters (for user prompts) and output filters (for model completion).
1. On the **Input filters** page, set the filter for the input prompt. For the first four content categories, three severity levels are configurable: low, medium, and high. You can use the sliders to set the severity threshold if your application or usage scenario requires different filtering than the default values.

    Some filters, such as Prompt shields and Protected material detection, let you determine whether the model should annotate and/or block content. Selecting **Annotate only** runs the respective model and returns annotations via the API response, but it doesn't filter content. In addition to annotating, you can also choose to block content.

    If your use case is approved for modified content filters, you have full control over content filtering configurations and can turn filtering partially or fully off, or enable annotate only for the content harm categories (violence, hate, sexual, and self-harm).

    Content is annotated by category and blocked according to the threshold you set. For the violence, hate, sexual, and self-harm categories, adjust the slider to block content of high, medium, or low severity.

    :::image type="content" source="../media/content-safety/content-filter/input-filter.png" alt-text="Screenshot of input filter screen.":::
1. On the **Output filters** page, configure the output filter, which is applied to all output content generated by your model. Configure the individual filters as before. This page also provides the **Streaming mode** option, which lets you filter content in near real time as the model generates it, reducing latency. When you're finished, select **Next**.

    Content is annotated by category and blocked according to the threshold. For the violence, hate, sexual, and self-harm categories, adjust the threshold to block harmful content of equal or higher severity.

    :::image type="content" source="../media/content-safety/content-filter/output-filter.png" alt-text="Screenshot of output filter screen.":::
1. Optionally, on the **Deployment** page, associate the content filter with a deployment. If a selected deployment already has a filter attached, you must confirm that you want to replace it. You can also associate the content filter with a deployment later. Select **Create**.

    :::image type="content" source="../media/content-safety/content-filter/create-content-filter-deployment.png" alt-text="Screenshot of the option to select a deployment when creating a content filter." lightbox="../media/content-safety/content-filter/create-content-filter-deployment.png":::

    Content filtering configurations are created at the hub level in the Azure AI Foundry portal. Learn more about configurability in the [Azure OpenAI Service documentation](/azure/ai-services/openai/how-to/content-filters).

1. On the **Review** page, review the settings and then select **Create filter**.

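The severity sliders described above follow a simple rule: content is blocked when its detected severity is at or above the configured threshold. The following sketch illustrates that rule; it's a hypothetical helper for intuition only, not part of any Azure SDK.

```python
# Illustrative sketch of the threshold rule the severity sliders configure.
# A filter set to "medium" blocks content detected at medium or high severity.
SEVERITY_ORDER = {"safe": 0, "low": 1, "medium": 2, "high": 3}

def is_blocked(detected_severity: str, threshold: str) -> bool:
    """Return True if content at detected_severity is blocked by a filter
    whose slider is set to threshold (blocks equal or higher severity)."""
    return SEVERITY_ORDER[detected_severity] >= SEVERITY_ORDER[threshold]

print(is_blocked("high", "medium"))  # True: high >= medium, so it's blocked
print(is_blocked("low", "medium"))   # False: low severity passes through
```

Lowering the threshold toward "low" makes the filter stricter; a filter turned off (where permitted) blocks nothing regardless of detected severity.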
### Use a blocklist as a filter

You can apply a blocklist as an input filter, an output filter, or both. Enable the **Blocklist** option on the **Input filter** and/or **Output filter** page. Select one or more blocklists from the dropdown, or use the built-in profanity blocklist. You can combine multiple blocklists in the same filter.

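Conceptually, a blocklist flags text that contains any of a set of custom terms, independently of the severity-based category filters. The sketch below shows simple case-insensitive whole-word matching as an intuition aid; the service's actual matching (which can also include regex-based blocklist items) is more sophisticated, and the terms here are made up.

```python
# Conceptual sketch of exact-term blocklist matching; not the service's
# actual algorithm. Matches whole words, case-insensitively.
import re

def blocklist_hits(text: str, blocklist: set[str]) -> set[str]:
    """Return the blocklist terms that appear as whole words in text."""
    words = set(re.findall(r"\w+", text.lower()))
    return {term for term in blocklist if term.lower() in words}

terms = {"foo", "bar"}  # hypothetical blocked terms
print(blocklist_hits("Foo fighters on the barricade", terms))  # {'foo'}
```

Note that whole-word matching is why "barricade" doesn't trigger the "bar" entry; substring matching would be stricter but prone to false positives.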
## Apply a content filter

The filter creation process gives you the option to apply the filter to the deployments you want. You can also change or remove content filters from your deployments at any time.

Follow these steps to apply a content filter to a deployment:

1. Go to [Azure AI Foundry](https://ai.azure.com) and select a project.
1. Select **Models + endpoints** on the left pane, choose one of your deployments, and then select **Edit**.

    :::image type="content" source="../media/content-safety/content-filter/deployment-edit.png" alt-text="Screenshot of the button to edit a deployment." lightbox="../media/content-safety/content-filter/deployment-edit.png":::

1. In the **Update deployment** window, select the content filter you want to apply to the deployment. Then select **Save and close**.

    :::image type="content" source="../media/content-safety/content-filter/apply-content-filter.png" alt-text="Screenshot of applying a content filter." lightbox="../media/content-safety/content-filter/apply-content-filter.png":::

    You can also edit and delete a content filter configuration if required. Before you delete a content filtering configuration, you need to unassign it from (or replace it on) any deployment on the **Deployments** tab.

Now you can go to the playground to test whether the content filter works as expected.
