articles/ai-services/openai/how-to/content-filters.md (12 additions, 8 deletions)
@@ -1,22 +1,26 @@
 ---
-title: 'Use content filters (preview) with Azure OpenAI Service'
+title: 'Use content filters (preview) with Azure AI Foundry'
 titleSuffix: Azure OpenAI
-description: Learn how to use and configure the content filters that come with Azure OpenAI Service, including getting approval for gated modifications.
+description: Learn how to use and configure the content filters that come with Azure AI Foundry, including getting approval for gated modifications.
 #services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 10/04/2024
+ms.date: 12/05/2024
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false
 ms.custom: FY25Q1-Linter
-# customer intent: As a developer, I want to learn how to configure content filters with Azure OpenAI Service so that I can ensure that my applications comply with our Code of Conduct.
+# customer intent: As a developer, I want to learn how to configure content filters with Azure AI Foundry so that I can ensure that my applications comply with our Code of Conduct.
 ---
 
-# How to configure content filters with Azure OpenAI Service
+# How to configure content filters with Azure AI Foundry
 
-The content filtering system integrated into Azure OpenAI Service runs alongside the core models, including DALL-E image generation models. It uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels respectively (safe, low, medium, and high), and optional binary classifiers for detecting jailbreak risk, existing text, and code in public repositories. The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md). Jailbreak risk detection and protected text and code models are optional and off by default. For jailbreak and protected material text and code models, the configurability feature allows all customers to turn the models on and off. The models are by default off and can be turned on per your scenario. Some models are required to be on for certain scenarios to retain coverage under the [Customer Copyright Commitment](/legal/cognitive-services/openai/customer-copyright-commitment?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
+The content filtering system integrated into Azure AI Foundry runs alongside the core models, including DALL-E image generation models. It uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels (safe, low, medium, and high), plus optional binary classifiers for detecting jailbreak risk, existing text, and code in public repositories.
+
+The default content filtering configuration filters at the medium severity threshold for all four harm categories, for both prompts and completions. That means content detected at severity level medium or high is filtered, while content detected at severity level low or safe is not. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md).
+
+Jailbreak risk detection and the protected material text and code models are optional and off by default; the configurability feature lets all customers turn them on and off per your scenario. Some models must be on for certain scenarios to retain coverage under the [Customer Copyright Commitment](/legal/cognitive-services/openai/customer-copyright-commitment?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
 
 > [!NOTE]
 > All customers have the ability to modify the content filters and configure the severity thresholds (low, medium, high). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR). At this time, it is not possible to become a managed customer.
@@ -37,7 +41,7 @@ You can configure the following filter categories in addition to the default harm categories:
 
 |Filter category |Status |Default setting |Applied to prompt or completion? |Description |
 |---------|---------|---------|---------|---|
-|Prompt Shields for direct attacks (jailbreak) |GA| On | User prompt | Filters / annotates user prompts that might present a Jailbreak Risk. For more information about annotations, visit [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=python#annotations-preview). |
+|Prompt Shields for direct attacks (jailbreak) |GA| On | User prompt | Filters / annotates user prompts that might present a Jailbreak Risk. For more information about annotations, visit [Azure AI Foundry content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=python#annotations-preview). |
 |Prompt Shields for indirect attacks | GA| Off | User prompt | Filter / annotate Indirect Attacks, also referred to as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks, a potential vulnerability where third parties place malicious instructions inside of documents that the generative AI system can access and process. Requires: [Document embedding and formatting](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#embedding-documents-in-your-prompt). |
 | Protected material - code |GA| On | Completion | Filters protected code or gets the example citation and license information in annotations for code snippets that match any public code sources, powered by GitHub Copilot. For more information about consuming annotations, see the [content filtering concepts guide](/azure/ai-services/openai/concepts/content-filter#annotations-preview)|
 | Protected material - text | GA| On | Completion | Identifies and blocks known text content from being displayed in the model output (for example, song lyrics, recipes, and selected web content). |
@@ -60,5 +64,5 @@ We recommend informing your content filtering configuration decisions through an…
 ## Related content
 
 - Learn more about Responsible AI practices for Azure OpenAI: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
-- Read more about [content filtering categories and severity levels](../concepts/content-filter.md) with Azure OpenAI Service.
+- Read more about [content filtering categories and severity levels](../concepts/content-filter.md) with Azure AI Foundry.
 - Learn more about red teaming from our: [Introduction to red teaming large language models (LLMs) article](../concepts/red-teaming.md).
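
The severity annotations this article describes surface directly in API responses. Below is a minimal sketch of reading them, assuming a deployed chat model and the `requests` library; the endpoint, key, deployment name, and api-version are placeholder assumptions, not values from this change:

```python
# Minimal sketch: inspect per-category content filter results on a completion.
# Endpoint, key, deployment name, and api-version are placeholder assumptions.
import requests

endpoint = "https://YOUR-RESOURCE.openai.azure.com"
deployment = "gpt-35-turbo"  # your chat model deployment name
url = (f"{endpoint}/openai/deployments/{deployment}"
       "/chat/completions?api-version=2024-02-01")

response = requests.post(
    url,
    headers={"api-key": "YOUR-API-KEY", "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
).json()

# Each choice carries filtered/severity flags for categories such as hate,
# sexual, violence, and self_harm when the content filtering system is on.
for choice in response.get("choices", []):
    for category, result in choice.get("content_filter_results", {}).items():
        print(f"{category}: filtered={result.get('filtered')}, "
              f"severity={result.get('severity')}")
```

Prompt-side results typically appear under a separate `prompt_filter_results` field in the same response.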
articles/ai-services/openai/how-to/dall-e.md (3 additions, 3 deletions)
@@ -24,13 +24,13 @@ OpenAI's DALL-E models generate images based on user-provided text prompts. This…
 #### [DALL-E 3](#tab/dalle3)
 
 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
-- An Azure OpenAI resource created in the *Sweden Central* region. For more information, see [Create and deploy an Azure OpenAI Service resource](../how-to/create-resource.md).
-- Deploy a *dall-e-3* model with your Azure OpenAI resource.
+- An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability).
+- Deploy a *dall-e-3* model with your Azure OpenAI resource.
 
 #### [DALL-E 2 (preview)](#tab/dalle2)
 
 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
-- An Azure OpenAI resource created in the *East US* region. For more information, see [Create and deploy an Azure OpenAI Service resource](../how-to/create-resource.md).
+- An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For more information, see [Create and deploy an Azure OpenAI Service resource](../how-to/create-resource.md).
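
With either tab's prerequisites in place, generation itself is a single call to the deployment. A minimal sketch using the `requests` library follows; the endpoint, key, deployment name, and api-version are placeholder assumptions:

```python
# Minimal sketch: generate one image with a dall-e-3 deployment.
# Endpoint, key, deployment name, and api-version are placeholder assumptions.
import requests

endpoint = "https://YOUR-RESOURCE.openai.azure.com"
deployment = "dall-e-3"  # your DALL-E 3 deployment name
url = (f"{endpoint}/openai/deployments/{deployment}"
       "/images/generations?api-version=2024-02-01")

response = requests.post(
    url,
    headers={"api-key": "YOUR-API-KEY", "Content-Type": "application/json"},
    json={"prompt": "A watercolor lighthouse at dawn", "n": 1,
          "size": "1024x1024"},
)
response.raise_for_status()

# The response lists one entry per generated image, each with a URL.
print(response.json()["data"][0]["url"])
```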
articles/ai-services/openai/how-to/risks-safety-monitor.md (13 additions, 11 deletions)
@@ -1,30 +1,28 @@
 ---
-title: How to use Risks & Safety monitoring in Azure OpenAI Studio
+title: How to use Risks & Safety monitoring in Azure AI Foundry
 titleSuffix: Azure OpenAI Service
 description: Learn how to check statistics and insights from your Azure OpenAI content filtering activity.
 author: PatrickFarley
 ms.author: pafarley
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 10/03/2024
+ms.date: 12/05/2024
 manager: nitinme
 ---
 
-# Use Risks & Safety monitoring in Azure OpenAI Studio (preview)
+# Use Risks & Safety monitoring in Azure AI Foundry (preview)
 
-When you use an Azure OpenAI model deployment with a content filter, you may want to check the results of the filtering activity. You can use that information to further adjust your filter configuration to serve your specific business needs and Responsible AI principles.
+When you use an Azure OpenAI model deployment with a content filter, you may want to check the results of the filtering activity. You can use that information to further adjust your [filter configuration](/azure/ai-services/openai/how-to/content-filters) to serve your specific business needs and Responsible AI principles.
 
-[Azure OpenAI Studio](https://oai.azure.com/) provides a Risks & Safety monitoring dashboard for each of your deployments that uses a content filter configuration.
+[Azure AI Foundry](https://oai.azure.com/) provides a Risks & Safety monitoring dashboard for each of your deployments that uses a content filter configuration.
 
 ## Access Risks & Safety monitoring
 
 To access Risks & Safety monitoring, you need an Azure OpenAI resource in one of the supported Azure regions: East US, Switzerland North, France Central, Sweden Central, Canada East. You also need a model deployment that uses a content filter configuration.
 
-Go to [Azure OpenAI Studio](https://oai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. Select the **Deployments** tab on the left and then select your model deployment from the list. On the deployment's page, select the **Risks & Safety** tab at the top.
+Go to [Azure AI Foundry](https://oai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. Select a project, then select the **Models + endpoints** tab on the left and select your model deployment from the list. On the deployment's page, select the **Metrics** tab at the top, then select **Open in Azure Monitor** to view the full report in the Azure portal.
 
-## Content detection
-
-The **Content detection** pane shows information about content filter activity. Your content filter configuration is applied as described in the [Content filtering documentation](/azure/ai-services/openai/how-to/content-filters).
+## Configure metrics
 
 ### Report description
@@ -35,7 +33,9 @@ Content filtering data is shown in the following ways:
 - **Severity distribution by category**: This view shows the severity levels detected for each harm category, across the whole selected time range. This is not limited to _blocked_ content but rather includes all content that was flagged by the content filters.
 - **Severity rate distribution over time by category**: This view shows the rates of detected severity levels over time, for each harm category. Select the tabs to switch between supported categories.
 
+<!--
 :::image type="content" source="../media/how-to/content-detection.png" alt-text="Screenshot of the content detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/content-detection.png":::
+-->
 
 ### Recommended actions
@@ -56,7 +56,7 @@ To use Potentially abusive user detection, you need:
 ### Set up your Azure Data Explorer database
 
 In order to protect the data privacy of user information and manage the permission of the data, we support the option for our customers to bring their own storage to get the detailed potentially abusive user detection insights (including user GUID and statistics on harmful requests by category) stored in a compliant way and with full control. Follow these steps to enable it:
-1. In Azure OpenAI Studio, navigate to the model deployment that you'd like to set up user abuse analysis with, and select **Add a data store**.
+1. In Azure AI Foundry, navigate to the model deployment that you'd like to set up user abuse analysis with, and select **Add a data store**.
 1. Fill in the required information and select **Save**. We recommend you create a new database to store the analysis results.
 1. After you connect the data store, take the following steps to grant permission to write analysis results to the connected database:
     1. Go to your Azure OpenAI resource's page in the Azure portal, and choose the **Identity** tab.
@@ -81,14 +81,16 @@ The potentially abusive user detection relies on the user information that customers…
 - **Total abuse request ratio/count**
 - **Abuse ratio/count by category**
 
+<!--
 :::image type="content" source="../media/how-to/potentially-abusive-user.png" alt-text="Screenshot of the Potentially abusive user detection pane in the Risks & Safety monitoring page." lightbox="../media/how-to/potentially-abusive-user.png":::
+-->
 
 ### Recommended actions
 
 Combine this data with enriched signals to validate whether the detected users are truly abusive or not. If they are, then take responsive action such as throttling or suspending the user to ensure the responsible use of your application.
 
 ## Next steps
 
-Next, create or edit a content filter configuration in Azure OpenAI Studio.
+Next, create or edit a content filter configuration in Azure AI Foundry.
 
 - [Configure content filters with Azure OpenAI Service](/azure/ai-services/openai/how-to/content-filters)
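
The throttle-or-suspend guidance above lends itself to a simple first-pass rule before human review. A hypothetical sketch follows; the record fields and thresholds are illustrative assumptions, not the dashboard's schema:

```python
# Hypothetical sketch: turn potentially-abusive-user metrics into an action.
# Field names and thresholds are illustrative; the dashboard reports abuse
# request ratios/counts per user GUID, which you would feed in here.
from dataclasses import dataclass

@dataclass
class UserAbuseStats:
    user_guid: str
    total_requests: int
    abusive_requests: int

    @property
    def abuse_ratio(self) -> float:
        return self.abusive_requests / max(self.total_requests, 1)

def recommended_action(stats: UserAbuseStats,
                       ratio_threshold: float = 0.5,
                       count_threshold: int = 20) -> str:
    # Validate with enriched signals before acting; this is only a first pass.
    if stats.abuse_ratio >= ratio_threshold and stats.abusive_requests >= count_threshold:
        return "suspend"
    if stats.abuse_ratio >= ratio_threshold:
        return "throttle"
    return "monitor"

print(recommended_action(UserAbuseStats("user-123", 40, 30)))  # suspend
```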
articles/ai-services/openai/how-to/use-blocklists.md (11 additions, 25 deletions)
@@ -6,12 +6,12 @@ description: Learn how to use blocklists with Azure OpenAI Service
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 10/03/2024
+ms.date: 12/05/2024
 author: PatrickFarley
 ms.author: pafarley
 ---
 
-# Use a blocklist in Azure OpenAI
+# Use a blocklist with Azure OpenAI
 
 The configurable content filters are sufficient for most content moderation needs. However, you may need to filter terms specific to your use case.
@@ -25,6 +25,8 @@ The configurable content filters are sufficient for most content moderation needs.
 
 ## Use blocklists
 
+#### [Azure OpenAI API](#tab/api)
+
 You can create blocklists with the Azure OpenAI API. The following steps help you get started.
 
 ### Get your token
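
A rough sketch of that flow (get a token, then create or update a list) is shown below, assuming the Cognitive Services management-plane `raiBlocklists` route; the resource IDs, route, and api-version are assumptions to verify against the current API reference:

```python
# Rough sketch of the blocklist flow: acquire a token, then create (or
# update) a blocklist. The management-plane route and api-version below
# are assumptions -- check the current API reference before relying on them.
import json
import subprocess

import requests

# Get a bearer token for the Azure management plane (assumes Azure CLI login).
token = json.loads(subprocess.check_output(
    ["az", "account", "get-access-token"]))["accessToken"]

# Placeholder resource identifiers -- substitute your own.
sub, rg, account = "SUB-ID", "RESOURCE-GROUP", "AOAI-ACCOUNT"
blocklist = "myBlocklist"
url = (f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
       f"/providers/Microsoft.CognitiveServices/accounts/{account}"
       f"/raiBlocklists/{blocklist}?api-version=2024-10-01")  # assumed version

resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"description": "Terms specific to my use case"}},
)
# 201 = created a new list, 200 = updated an existing list.
print(resp.status_code)
```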
@@ -61,7 +63,7 @@ The response code should be `201` (created a new list) or `200` (updated an existing list).
 
 ### Apply a blocklist to a content filter
 
-If you haven't yet created a content filter, you can do so in the Studio in the Content Filters tab on the left hand side. In order to use the blocklist, make sure this Content Filter is applied to an Azure OpenAI deployment. You can do this in the Deployments tab on the left hand side.
+If you haven't yet created a content filter, you can do so in Azure AI Foundry. See [Content filtering](/azure/ai-services/openai/how-to/content-filters#create-a-content-filter-in-azure-ai-foundry).
 
 To apply a **completion** blocklist to a content filter, use the following cURL command:
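
A hypothetical sketch of what that request can look like follows, assuming a management-plane `raiPolicies` route; the route, api-version, and payload fields here are assumptions, and the article's cURL command is the authoritative form:

```python
# Hypothetical sketch: attach a completion blocklist to a content filter
# policy. Route, api-version, and payload fields are assumptions -- consult
# the cURL command in the full article for the documented request.
import requests

token = "YOUR-BEARER-TOKEN"  # from `az account get-access-token`
sub, rg, account = "SUB-ID", "RESOURCE-GROUP", "AOAI-ACCOUNT"
url = (f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
       f"/providers/Microsoft.CognitiveServices/accounts/{account}"
       f"/raiPolicies/myContentFilter?api-version=2024-10-01")  # assumed

payload = {"properties": {
    "basePolicyName": "Microsoft.Default",   # assumed field
    "completionBlocklists": [                # assumed field
        {"blocklistName": "myBlocklist", "blocking": True}]}}

resp = requests.put(url, headers={"Authorization": f"Bearer {token}"},
                    json=payload)
print(resp.status_code)  # expect 200
```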
@@ -132,9 +134,7 @@ The response code should be `200`.
 
 ### Analyze text with a blocklist
 
-Now you can test out your deployment that has the blocklist. The easiest way to do this is in the [Azure OpenAI Studio](https://oai.azure.com/portal/). If the content was blocked either in prompt or completion, you should see an error message saying the content filtering system was triggered.
-
-For instruction on calling the Azure OpenAI endpoints, visit the [Quickstart](/azure/ai-services/openai/quickstart).
+Now you can test out your deployment that has the blocklist. For instructions on calling the Azure OpenAI endpoints, visit the [Quickstart](/azure/ai-services/openai/quickstart).
 
 In the below example, a GPT-35-Turbo deployment with a blocklist is blocking the prompt. The response returns a `400` error.
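
A minimal sketch of detecting that block in client code, assuming the error surfaces with code `content_filter` (endpoint, key, and deployment are placeholders, and the exact payload shape may vary):

```python
# Minimal sketch: detect a blocklist/content-filter block on the prompt.
# The 400 error is expected to carry code "content_filter"; the exact
# payload shape may vary across api-versions.
import requests

url = ("https://YOUR-RESOURCE.openai.azure.com/openai/deployments/"
       "gpt-35-turbo/chat/completions?api-version=2024-02-01")
headers = {"api-key": "YOUR-API-KEY", "Content-Type": "application/json"}
body = {"messages": [{"role": "user", "content": "a term on your blocklist"}]}

resp = requests.post(url, headers=headers, json=body)
if resp.status_code == 400 and \
        resp.json().get("error", {}).get("code") == "content_filter":
    # The prompt was blocked before reaching the model; inform the user
    # rather than retrying, since the same input will be blocked again.
    print("Prompt blocked by the content filtering system.")
else:
    resp.raise_for_status()
```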
@@ -249,26 +249,12 @@ If the completion itself is blocked, the response returns `200`, as the completion…
 }
 ```
 
-## Use blocklists in Azure OpenAI Studio
-
-You can also create custom blocklists in the Azure OpenAI Studio as part of your content filtering configurations (public preview). Instructions on how to create custom content filters can be found [here](/azure/ai-services/openai/how-to/content-filters). The following steps show how to create custom blocklists as part of your content filters via Azure OpenAI Studio.
-
-1. Select Content Filters from the left menu. Select the Blocklists tab next to Content filters tab. Then select Create Blocklist.
-   :::image type="content" source="../media/content-filters/blocklist-select-create.png" alt-text="Screenshot of blocklist create selection." lightbox="../media/content-filters/blocklist-select-create.png":::
-1. Create a name for your blocklist, add a description and select on Create Blocklist.
-   :::image type="content" source="../media/content-filters/create-blocklist.png" alt-text="Screenshot of blocklist name and description." lightbox="../media/content-filters/create-blocklist.png":::
-1. Select your custom blocklist once it's created, and select Add new term.
-   :::image type="content" source="../media/content-filters/custom-blocklist-add.png" alt-text="Screenshot of custom blocklist add term." lightbox="../media/content-filters/custom-blocklist-add.png":::
-1. Add a term that should be filtered, and select Add term. You can also create a regex.
-   :::image type="content" source="../media/content-filters/custom-blocklist-add-item.png" alt-text="Screenshot of custom blocklist add item." lightbox="../media/content-filters/custom-blocklist-add-item.png":::
-1. You can delete each term in your blocklist.
-   :::image type="content" source="../media/content-filters/custom-blocklist-edit.png" alt-text="Screenshot of custom blocklist edit screen." lightbox="../media/content-filters/custom-blocklist-edit.png":::
-1. Once the blocklist is ready, navigate to the Content filters (Preview) section and create a new customized content filter configuration. This opens a wizard with several AI content safety components. You can find more information on how to configure the main filters and optional models [here](/azure/ai-services/openai/how-to/content-filters). Go to Add blocklist (Optional).
-1. You'll now see all available blocklists. There are two types of blocklists – the blocklists you created, and prebuilt blocklists that Microsoft provides, in this case a Profanity blocklist (English)
-1. You can now decide which of the available blocklists you would like to include in your content filtering configuration. In the below example, we apply CustomBlocklist1 that we just created. The last step is to review and finish the content filtering configuration by clicking on Next.
-   :::image type="content" source="../media/content-filters/filtering-configuration-manage.png" alt-text="Screenshot of filtering configuration management." lightbox="../media/content-filters/filtering-configuration-manage.png":::
-1. You can always go back and edit your configuration. Once it’s ready, select on Create content filter. The new configuration that includes your blocklists can now be applied to a deployment. Detailed instructions can be found [here](/azure/ai-services/openai/how-to/content-filters).
0 commit comments