|Feature| Functionality | Concepts guide | Get started|
|---|---|---|---|
|[Prompt Shields](/rest/api/contentsafety/text-operations/detect-text-jailbreak) (preview) | Scans text for the risk of a user input attack on a large language model. |[Prompt Shields concepts](/azure/ai-services/content-safety/concepts/jailbreak-detection)|[Quickstart](./quickstart-jailbreak.md)|
|[Groundedness detection](/rest/api/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the user. |[Groundedness detection concepts](/azure/ai-services/content-safety/concepts/groundedness)|[Quickstart](./quickstart-groundedness.md)|
|[Protected material text detection](/rest/api/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). |[Protected material concepts](/azure/ai-services/content-safety/concepts/protected-material)|[Quickstart](./quickstart-protected-material.md)|
|Custom categories API (preview) | Lets you create and train your own custom content categories and scan text for matches. |[Custom categories concepts](/azure/ai-services/content-safety/concepts/custom-categories)|[Quickstart](./quickstart-custom-categories.md)|
|Custom categories (rapid) API (preview) | Lets you define emerging harmful content patterns and scan text and images for matches. |[Custom categories concepts](/azure/ai-services/content-safety/concepts/custom-categories)|[How-to guide](./how-to/custom-categories-rapid.md)|
|[Analyze text](/rest/api/contentsafety/text-operations/analyze-text) API | Scans text for sexual content, violence, hate, and self-harm, with multiple severity levels. |[Harm categories](/azure/ai-services/content-safety/concepts/harm-categories)|[Quickstart](/azure/ai-services/content-safety/quickstart-text)|
|[Analyze image](/rest/api/contentsafety/image-operations/analyze-image) API | Scans images for sexual content, violence, hate, and self-harm, with multiple severity levels. |[Harm categories](/azure/ai-services/content-safety/concepts/harm-categories)|[Quickstart](/azure/ai-services/content-safety/quickstart-image)|
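
As a concrete illustration, the following Python sketch calls the Analyze text REST API listed above. It assumes you have a deployed Content Safety resource; the environment variable names are illustrative, and the `api-version` value and response shape shown here follow the public REST reference at the time of writing, so check the linked quickstart for current values.

```python
import os

import requests

# Illustrative environment variable names; point these at your own resource.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
key = os.environ["CONTENT_SAFETY_KEY"]

# Analyze text operation; api-version is an assumption based on the REST reference.
url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Content-Type": "application/json",
}
body = {"text": "Sample user input to screen for harmful content."}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()

# Each entry reports a harm category (Hate, SelfHarm, Sexual, Violence)
# together with the detected severity level.
for result in response.json().get("categoriesAnalysis", []):
    print(result["category"], result["severity"])
```

The Analyze image API follows the same pattern with an `image:analyze` operation and a base64-encoded image in the request body; see its quickstart for the exact payload.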