
Commit cd10db7

update images

1 parent dab6433

File tree

3 files changed (+12, -30 lines)


articles/ai-services/openai/how-to/content-filters.md

Lines changed: 12 additions & 30 deletions
@@ -41,60 +41,41 @@ You can configure the following filter categories in addition to the default har

| Prompt Shields for indirect attacks | GA | On | User prompt | Filters or annotates indirect attacks (also referred to as indirect prompt attacks or cross-domain prompt injection attacks), a potential vulnerability where third parties place malicious instructions inside documents that the generative AI system can access and process. Requires [document formatting](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#embedding-documents-in-your-prompt). |
| Protected material - code | GA | On | Completion | Filters protected code, or gets the example citation and license information in annotations for code snippets that match any public code sources, powered by GitHub Copilot. For more information about consuming annotations, see the [content filtering concepts guide](/azure/ai-services/openai/concepts/content-filter#annotations-preview). |
| Protected material - text | GA | On | Completion | Identifies and blocks known text content from being displayed in the model output (for example, song lyrics, recipes, and selected web content). |
+| Groundedness* | Preview | Off | Completion | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungroundedness refers to instances where the LLMs produce information that is nonfactual or inaccurate with respect to what was present in the source materials. |
+
+*Requires embedding documents in your prompt. [Read more](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#embedding-documents-in-your-prompt).
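
These filters surface their results as annotations on the API response. As a minimal, hedged sketch of reading the prompt-side annotations with the `openai` Python package (assuming an Azure OpenAI chat deployment; the endpoint, key, and deployment name below are placeholders):

```python
# Minimal sketch: reading prompt-side content filter annotations from an
# Azure OpenAI chat completion. Endpoint, key, and deployment are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "Summarize the attached document."}],
)

# Azure returns filter annotations as extra fields alongside the standard
# response; `prompt_filter_results` holds per-prompt category results.
data = response.model_dump()
for item in data.get("prompt_filter_results", []):
    print(item.get("prompt_index"), item.get("content_filter_results"))
```

The exact fields present depend on which filters are enabled for the deployment and on the API version in use.
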

## Configure content filters with Azure AI Studio

The following steps show how to set up a customized content filtering configuration for your resource.

-1. Go to Azure AI Studio and navigate to the **Content Filters** tab (in the bottom left navigation, as designated by the red box below).
-
-:::image type="content" source="../media/content-filters/studio.png" alt-text="Screenshot of the AI Studio UI with Content Filters highlighted." lightbox="../media/content-filters/studio.png":::
-
+1. Go to Azure AI Studio and navigate to the **Safety + security** page on the left menu. Then select the **Content filter** tab.
1. Create a new customized content filtering configuration.

-:::image type="content" source="../media/content-filters/create-filter.png" alt-text="Screenshot of the content filtering configuration UI with create selected." lightbox="../media/content-filters/create-filter.png":::
-
This leads to the following configuration view, where you can choose a name for the custom content filtering configuration. After entering a name, you can configure the **input filters** (for user prompts) and **output filters** (for model completion).

+:::image type="content" source="../media/content-filters/input-filter.png" alt-text="Screenshot of input filter screen.":::
+
+:::image type="content" source="../media/content-filters/output-filter.png" alt-text="Screenshot of output filter screen.":::
+
For the first four content categories, three severity levels are configurable: low, medium, and high. You can use the sliders to set the severity threshold if you determine that your application or usage scenario requires different filtering than the default values.

Some filters, such as Prompt Shields and Protected material detection, let you determine whether the model should annotate and/or block content. Selecting **Annotate only** runs the respective model and returns annotations via the API response, but it doesn't filter content. In addition to annotating, you can also choose to block content.
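
For illustration, a hedged sketch of inspecting completion-side annotations (field names follow the annotations described in the concepts guide; the exact shape can vary by API version), where `response` is the chat completion from the earlier sketch:

```python
# Sketch: inspecting completion-side annotations, such as protected material.
# Assumes `response` is a chat completion from a deployment whose filters
# are set to annotate (and optionally block).
choice = response.model_dump()["choices"][0]
results = choice.get("content_filter_results", {})

# Harm categories report `filtered` plus a severity (safe/low/medium/high).
for category in ("hate", "sexual", "violence", "self_harm"):
    if category in results:
        print(category, results[category])

# Protected material reports `filtered` and `detected`; for code, citation
# details (source URL and license) may be attached in the annotation.
print("protected_material_code:", results.get("protected_material_code"))
```
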
If your use case was approved for modified content filters, you have full control over content filtering configurations and can turn filtering partially or fully off, or enable annotate-only mode for the content harm categories (violence, hate, sexual, and self-harm).
-
-| Content filter category | Status | Input type | Filters input | Filters output |
-|---------|---------|---------|---------|---------|
-| Violence | GA | text, image | ✅ | ✅ |
-| Hate | GA | text, image | ✅ | ✅ |
-| Sexual | GA | text, image | ✅ | ✅ |
-| Self-harm | GA | text, image | ✅ | ✅ |
-| Prompt shields for jailbreak attacks | GA | text | ✅ | |
-| Prompt shields for indirect attacks* | GA | text | ✅ | |
-| Protected material for text | GA | text | | ✅ |
-| Protected material for code | GA | code | | ✅ |
-| Groundedness* | Preview | text | | ✅ |
-
-*Requires embedding documents in your prompt. [Learn more](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#embedding-documents-in-your-prompt).
-
-:::image type="content" source="../media/content-filters/filter-view.png" alt-text="Screenshot of the content filtering configuration UI." lightbox="../media/content-filters/filter-view.png":::
+

1. You can create multiple content filtering configurations as per your requirements.

:::image type="content" source="../media/content-filters/multiple.png" alt-text="Screenshot of multiple content configurations in the Azure portal." lightbox="../media/content-filters/multiple.png":::
1. Next, to use a custom content filtering configuration, assign it to one or more deployments in your resource. To do this, go to the **Deployments** tab and select your deployment. Then select **Edit**.

-:::image type="content" source="../media/content-filters/edit-deployment.png" alt-text="Screenshot of the content filtering configuration with edit deployment highlighted." lightbox="../media/content-filters/edit-deployment.png":::
-
1. In the **Update deployment** window that appears, select your custom filter from the **Content filter** dropdown menu. Then select **Save and close** to apply the selected configuration to the deployment.

:::image type="content" source="../media/content-filters/select-filter.png" alt-text="Screenshot of edit deployment configuration with content filter selected." lightbox="../media/content-filters/select-filter.png":::
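
Once a configuration (default or custom) is applied to a deployment, a prompt that trips a filter set to block is rejected with an HTTP 400 error whose code is `content_filter`. A minimal, hedged sketch of handling that with the `openai` Python package (reusing the `client` from the earlier sketch):

```python
# Sketch: handling a request blocked by the deployment's content filter.
# Assumes `client` is the AzureOpenAI client from the earlier sketch.
import openai

user_input = "Example prompt that might trip a filter."

try:
    response = client.chat.completions.create(
        model="my-gpt-4o-deployment",  # placeholder deployment name
        messages=[{"role": "user", "content": user_input}],
    )
    print(response.choices[0].message.content)
except openai.BadRequestError as err:
    # Blocked prompts return HTTP 400 with error code "content_filter".
    if err.code == "content_filter":
        print("The request was blocked by the content filtering configuration.")
    else:
        raise
```
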

-1. You can also edit and delete a content filter configuration if required.
-
-:::image type="content" source="../media/content-filters/delete.png" alt-text="Screenshot of content filter configuration with edit and delete highlighted." lightbox="../media/content-filters/delete.png":::
-
+You can also edit and delete a content filter configuration if required.

Before you delete a content filtering configuration, you need to unassign it from any deployment (and replace it with another configuration) in the **Deployments** tab.

@@ -104,7 +85,8 @@ If you are encountering a content filtering issue, select the **Send Feedback**

When the dialog appears, select the appropriate content filtering issue. Include as much detail as possible about your content filtering issue, such as the specific prompt and the content filtering error you encountered. Don't include any private or sensitive information.
-For support, please [submit a support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+For support, please [submit a support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
## Follow best practices

We recommend informing your content filtering configuration decisions through an iterative identification (for example, red-team testing, stress testing, and analysis) and measurement process to address the potential harms that are relevant for a specific model, application, and deployment scenario. After you implement mitigations such as content filtering, repeat measurement to test effectiveness. Recommendations and best practices for Responsible AI for Azure OpenAI, grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI), can be found in the [Responsible AI Overview for Azure OpenAI](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
Two image files updated (129 KB and 177 KB; previews not shown).
