You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: articles/ai-services/content-safety/concepts/jailbreak-detection.md
+1-1Lines changed: 1 addition & 1 deletion
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -86,7 +86,7 @@ This shield aims to safeguard against attacks that use information not directly
86
86
87
87
### Language availability
88
88
89
-
Prompt Shields have been specifically trained and tested on the following languages: Chinese, English, French, German, Italian, Japanese, Portuguese. However, the feature can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
89
+
Prompt Shields have been specifically trained and tested on the following languages: Chinese, English, French, German, Spanish, Italian, Japanese, Portuguese. However, the feature can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
Copy file name to clipboardExpand all lines: articles/ai-services/content-safety/language-support.md
+1-1Lines changed: 1 addition & 1 deletion
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -17,7 +17,7 @@ ms.author: pafarley
17
17
> [!IMPORTANT]
18
18
> The Azure AI Content Safety models for protected material, groundedness detection, and custom categories (standard) work with English only.
19
19
>
20
-
> Other Azure AI Content Safety models have been specifically trained and tested on the following languages: Chinese, English, French, German, Italian, Japanese, Portuguese. However, these features can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
20
+
> Other Azure AI Content Safety models have been specifically trained and tested on the following languages: Chinese, English, French, German, Spanish, Italian, Japanese, Portuguese. However, these features can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
|[Prompt Shields](/rest/api/contentsafety/text-operations/detect-text-jailbreak) (preview) | Scans text for the risk of a [User input attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md)|
51
-
|[Groundedness detection](/rest/api/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. [Quickstart](./quickstart-groundedness.md)|
52
-
|[Protected material text detection](/rest/api/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for [known text content](./concepts/protected-material.md) (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)|
53
-
| Custom categories API (preview) | Lets you create and train your own [custom content categories](./concepts/custom-categories.md) and scan text for matches. [Quickstart](./quickstart-custom-categories.md)|
54
-
| Custom categories (rapid) API (preview) | Lets you define [emerging harmful content patterns](./concepts/custom-categories.md) and scan text and images for matches. [How-to guide](./how-to/custom-categories-rapid.md)|
55
-
|[Analyze text](/rest/api/contentsafety/text-operations/analyze-text) API | Scans text for sexual content, violence, hate, and self harm with multi-severity levels. |
56
-
|[Analyze image](/rest/api/contentsafety/image-operations/analyze-image) API | Scans images for sexual content, violence, hate, and self harm with multi-severity levels. |
48
+
|Feature| Functionality | Concepts guide | Get started|
|[Prompt Shields](/rest/api/contentsafety/text-operations/detect-text-jailbreak) (preview) | Scans text for the risk of a User input attack on a Large Language Model. |[Prompt Shields concepts](/azure/ai-services/content-safety/concepts/jailbreak-detection)|[Quickstart](./quickstart-jailbreak.md)|
51
+
|[Groundedness detection](/rest/api/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. |[Groundedness detection concepts](/azure/ai-services/content-safety/concepts/groundedness)|[Quickstart](./quickstart-groundedness.md)|
52
+
|[Protected material text detection](/rest/api/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). |[Protected material concepts](/azure/ai-services/content-safety/concepts/protected-material)|[Quickstart](./quickstart-protected-material.md)|
53
+
| Custom categories API (preview) | Lets you create and train your own custom content categories and scan text for matches. |[Custom categories concepts](/azure/ai-services/content-safety/concepts/custom-categories)|[Quickstart](./quickstart-custom-categories.md)|
54
+
| Custom categories (rapid) API (preview) | Lets you define emerging harmful content patterns and scan text and images for matches.|[Custom categories concepts](/azure/ai-services/content-safety/concepts/custom-categories)|[How-to guide](./how-to/custom-categories-rapid.md)|
55
+
|[Analyze text](/rest/api/contentsafety/text-operations/analyze-text) API | Scans text for sexual content, violence, hate, and self harm with multi-severity levels. |[Harm categories](/azure/ai-services/content-safety/concepts/harm-categories)|[Quickstart](/azure/ai-services/content-safety/quickstart-text)|
56
+
|[Analyze image](/rest/api/contentsafety/image-operations/analyze-image) API | Scans images for sexual content, violence, hate, and self harm with multi-severity levels. |[Harm categories](/azure/ai-services/content-safety/concepts/harm-categories)|[Quickstart](/azure/ai-services/content-safety/quickstart-image)|
57
57
58
58
59
59
## Content Safety Studio
@@ -130,7 +130,7 @@ See the following list for the input requirements for each feature.
130
130
131
131
### Language support
132
132
133
-
Content Safety models have been specifically trained and tested in the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
133
+
Content Safety models have been specifically trained and tested in the following languages: English, German, Spanish, Japanese, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
134
134
135
135
Custom Categories currently only works well in English. You can try to use other languages with your own dataset, but the quality might vary across languages.
Copy file name to clipboardExpand all lines: articles/ai-services/content-safety/quickstart-custom-categories.md
+2Lines changed: 2 additions & 0 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -15,6 +15,8 @@ ms.author: pafarley
15
15
16
16
Follow this guide to use Azure AI Content Safety Custom category REST API to create your own content categories for your use case and train Azure AI Content Safety to detect them in new text content.
17
17
18
+
For more information on Custom categories, see the [Custom categories concept page](./concepts/custom-categories.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
19
+
18
20
> [!IMPORTANT]
19
21
> This feature is only available in certain Azure regions. See [Region availability](./overview.md#region-availability).
Copy file name to clipboardExpand all lines: articles/ai-services/content-safety/quickstart-groundedness.md
+5-4Lines changed: 5 additions & 4 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -15,6 +15,8 @@ ms.author: pafarley
15
15
16
16
Follow this guide to use Azure AI Content Safety Groundedness detection to check whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
17
17
18
+
For more information on Groundedness detection, see the [Groundedness detection concept page](./concepts/groundedness.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
19
+
18
20
## Prerequisites
19
21
20
22
* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
@@ -339,9 +341,8 @@ If you want to clean up and remove an Azure AI services subscription, you can de
Get started with the Content Studio, REST API, or client SDKs to do basic image moderation. The Azure AI Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out.
19
19
20
+
For more information on image moderation, see the [Harm categories concept page](./concepts/harm-categories.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
21
+
20
22
> [!NOTE]
21
23
>
22
24
> The sample data and code may contain offensive content. User discretion is advised.
@@ -60,9 +62,8 @@ If you want to clean up and remove an Azure AI services subscription, you can de
Copy file name to clipboardExpand all lines: articles/ai-services/content-safety/quickstart-jailbreak.md
+5-4Lines changed: 5 additions & 4 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -15,6 +15,8 @@ ms.author: pafarley
15
15
16
16
Follow this guide to use Azure AI Content Safety Prompt Shields to check your large language model (LLM) inputs for both User Prompt and Document attacks.
17
17
18
+
For more information on Prompt Shields, see the [Prompt Shields concept page](./concepts/jailbreak-detection.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
19
+
18
20
## Prerequisites
19
21
20
22
* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
@@ -94,9 +96,8 @@ If you want to clean up and remove an Azure AI services subscription, you can de
Copy file name to clipboardExpand all lines: articles/ai-services/content-safety/quickstart-protected-material.md
+7-4Lines changed: 7 additions & 4 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -15,6 +15,9 @@ ms.author: pafarley
15
15
16
16
Protected material text describes language that matches known text content (for example, song lyrics, articles, recipes, selected web content). This feature can be used to identify and block known text content from being displayed in language model output (English content only). For more information, see [Protected material concepts](./concepts/protected-material.md).
17
17
18
+
For more information on protected material detection, see the [Protected material detection concept page](./concepts/protected-material.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
19
+
20
+
18
21
## Prerequisites
19
22
20
23
* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
@@ -87,9 +90,9 @@ If you want to clean up and remove an Azure AI services subscription, you can de
Get started with the Content Safety Studio, REST API, or client SDKs to do basic text moderation. The Azure AI Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out.
19
19
20
+
For more information on text moderation, see the [Harm categories concept page](./concepts/harm-categories.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
21
+
22
+
20
23
> [!NOTE]
21
24
>
22
25
> The sample data and code may contain offensive content. User discretion is advised.
@@ -60,8 +63,8 @@ If you want to clean up and remove an Azure AI services subscription, you can de
0 commit comments