articles/ai-services/content-safety/concepts/custom-categories.md (2 additions, 2 deletions)
@@ -1,5 +1,5 @@
 ---
-title: "Custom categories in Azure AI Content Safety"
+title: "Custom categories in Azure AI Content Safety (preview)"
 titleSuffix: Azure AI services
 description: Learn about custom content categories and the different ways you can use Azure AI Content Safety to handle them on your platform.
 #services: cognitive-services
@@ -12,7 +12,7 @@ ms.date: 07/05/2024
 ms.author: pafarley
 ---
 
-# Custom categories
+# Custom categories (preview)
 
 Azure AI Content Safety lets you create and manage your own content moderation categories for enhanced moderation and filtering that matches your specific policies or use cases.

articles/ai-services/content-safety/how-to/custom-categories-rapid.md (2 additions, 2 deletions)
@@ -1,5 +1,5 @@
 ---
-title: "Use the custom categories (rapid) API"
+title: "Use the custom categories (rapid) API (preview)"
 titleSuffix: Azure AI services
 description: Learn how to use the custom categories (rapid) API to mitigate harmful content incidents quickly.
 #services: cognitive-services
@@ -13,7 +13,7 @@ ms.author: pafarley
 ---
 
 
-# Use the custom categories (rapid) API
+# Use the custom categories (rapid) API (preview)
 
 The custom categories (rapid) API lets you quickly respond to emerging harmful content incidents. You can define an incident with a few examples in a specific topic, and the service will start detecting similar content.
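As a rough illustration of the rapid (incidents) workflow the retitled article describes (define an incident, add a few sample texts, then scan new content against it), a Python sketch might look like the following. The endpoint paths, the `:addIncidentSamples` and `text:detectIncidents` actions, the request shapes, and the `2024-02-15-preview` API version are assumptions about the preview REST surface, not details taken from this change; the endpoint, key, and incident name are placeholders and should be checked against the published reference.

```python
import requests

# Placeholders: replace with your resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-content-safety-key>"
API_VERSION = "2024-02-15-preview"  # assumed preview version; verify in the REST reference

headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
incident = "my-emerging-incident"  # hypothetical incident name

# 1. Define the incident with a short description (assumed path).
requests.patch(
    f"{ENDPOINT}/contentsafety/text/incidents/{incident}",
    params={"api-version": API_VERSION},
    headers=headers,
    json={"incidentName": incident, "incidentDefinition": "Content promoting the <topic> scam"},
)

# 2. Add a few example texts so the service can start detecting similar content.
requests.post(
    f"{ENDPOINT}/contentsafety/text/incidents/{incident}:addIncidentSamples",
    params={"api-version": API_VERSION},
    headers=headers,
    json={"incidentSamples": [{"text": "Example of the harmful content to flag"}]},
)

# 3. Scan new text against the defined incident.
resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectIncidents",
    params={"api-version": API_VERSION},
    headers=headers,
    json={"text": "Text to check", "incidentNames": [incident]},
)
print(resp.json())
```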

articles/ai-services/content-safety/how-to/custom-categories.md (2 additions, 2 deletions)
@@ -1,5 +1,5 @@
 ---
-title: "Use the custom category API"
+title: "Use the custom category API (preview)"
 titleSuffix: Azure AI services
 description: Learn how to use the custom category API to create your own harmful content categories and train the Content Safety model for your use case.
 #services: cognitive-services
@@ -12,7 +12,7 @@ ms.date: 04/11/2024
 ms.author: pafarley
 ---
 
-# Use the custom categories (standard) API
+# Use the custom categories (standard) API (preview)
 
 The custom categories (standard) API lets you create your own content categories for your use case and train Azure AI Content Safety to detect them in new content.
 Follow this guide to use Azure AI Content Safety Custom category REST API to create your own content categories for your use case and train Azure AI Content Safety to detect them in new text content.
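For the standard workflow the retitled article covers (define a category from annotated samples, build it, then analyze new text against it), a minimal Python sketch is shown below. The resource paths, the `:build` and `text:analyzeCustomCategory` actions, the blob-storage sample file, and the API version are assumptions about the preview REST surface rather than details confirmed by this diff; all names and URLs are placeholders.

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-content-safety-key>"                                 # placeholder
API_VERSION = "2024-09-15-preview"  # assumed preview version

headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
category = "survival-advice"  # hypothetical category name

# 1. Create the category definition, pointing at annotated samples in Blob Storage.
requests.put(
    f"{ENDPOINT}/contentsafety/text/categories/{category}",
    params={"api-version": API_VERSION},
    headers=headers,
    json={
        "categoryName": category,
        "definition": "Text that gives practical survival or wilderness advice",
        "sampleBlobUrl": "https://<storage-account>.blob.core.windows.net/<container>/samples.jsonl",
    },
)

# 2. Trigger a build (training) run for the category; this completes asynchronously.
requests.post(
    f"{ENDPOINT}/contentsafety/text/categories/{category}:build",
    params={"api-version": API_VERSION},
    headers=headers,
)

# 3. After the build finishes, analyze new text against the trained category.
resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:analyzeCustomCategory",
    params={"api-version": API_VERSION},
    headers=headers,
    json={"text": "Text to classify", "categoryName": category, "version": 1},
)
print(resp.json())
```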

articles/ai-services/content-safety/quickstart-jailbreak.md (1 addition, 1 deletion)
@@ -11,7 +11,7 @@ ms.date: 03/15/2024
 ms.author: pafarley
 ---
 
-# Quickstart: Prompt Shields (preview)
+# Quickstart: Prompt Shields
 
 "Prompt Shields" in Azure AI Content Safety are specifically designed to safeguard generative AI systems from generating harmful or inappropriate content. These shields detect and mitigate risks associated with both User Prompt Attacks (malicious or harmful user-generated inputs) and Document Attacks (inputs containing harmful content embedded within documents). The use of "Prompt Shields" is crucial in environments where GenAI is employed, ensuring that AI outputs remain safe, compliant, and trustworthy.

articles/ai-services/content-safety/whats-new.md (2 additions, 2 deletions)
@@ -18,13 +18,13 @@ Learn what's new in the service. These items might be release notes, videos, blo
 
 ## July 2024
 
-### Custom categories (standard) API
+### Custom categories (standard) API public preview
 
 The custom categories (standard) API lets you create and train your own custom content categories and scan text for matches. See [Custom categories](./concepts/custom-categories.md) to learn more.
 
 ## May 2024
 
-### Custom categories (rapid) API
+### Custom categories (rapid) API public preview
 
 The custom categories (rapid) API lets you quickly define emerging harmful content patterns and scan text and images for matches. See [Custom categories](./concepts/custom-categories.md) to learn more.