articles/ai-studio/ai-services/content-safety-overview.md

You can use Azure AI Content Safety for many scenarios:

|Severity level |Description |
|:--- |:--- |
|Medium |Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups; includes depictions of seeking and executing harmful instructions, fantasies, glorification, or promotion of harm at medium intensity. |
|High |Content that displays explicit and severe harmful instructions, actions, damage, or abuse; includes endorsement, glorification, or promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, or nonconsensual power exchange or abuse. |

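To illustrate how these levels are typically consumed in code, here is a hypothetical helper that maps the numeric severity score returned by the Content Safety text-analysis API onto the named levels. The 0–7 scale and the bucket boundaries below are assumptions drawn from the Content Safety severity documentation; verify them against the current API reference.

```python
def severity_level(severity: int) -> str:
    """Map a 0-7 Content Safety severity score to its level name.

    Assumes the four-level grouping with 0-1 -> Safe, 2-3 -> Low,
    4-5 -> Medium, 6-7 -> High; check the current API reference
    before relying on these boundaries.
    """
    if not 0 <= severity <= 7:
        raise ValueError(f"severity out of range: {severity}")
    return ("Safe", "Low", "Medium", "High")[severity // 2]
```
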
## Other Content Safety features

| Feature | Functionality | Concepts guide |
|:--- |:--- |:--- |
|[Groundedness detection](/rest/api/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. |[Groundedness detection concepts](/azure/ai-services/content-safety/concepts/groundedness)|
|[Protected material text detection](/rest/api/contentsafety/text-operations/detect-text-protected-material)| Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). |[Protected material concepts](/azure/ai-services/content-safety/concepts/protected-material)|
| Custom categories (standard) API (preview) | Lets you create and train your own custom content categories and scan text for matches. |[Custom categories concepts](/azure/ai-services/content-safety/concepts/custom-categories)|
| Custom categories (rapid) API (preview) | Lets you define emerging harmful content patterns and scan text and images for matches. |[Custom categories concepts](/azure/ai-services/content-safety/concepts/custom-categories)|

## Limitations
Refer to the [Content Safety overview](/azure/ai-services/content-safety/overview) for supported regions, rate limits, and input requirements for all features. Refer to the [Language support](/azure/ai-services/content-safety/language-support) page for supported languages.
## Next step
Get started using Azure AI Content Safety in Azure AI Studio by following the [How-to guide](./how-to/content-safety.md).

articles/ai-studio/ai-services/how-to/content-safety.md

The **Prompt Shields** panel lets you try out user input risk detection.
For more information, see the [Prompt Shields conceptual guide](/azure/ai-services/content-safety/concepts/jailbreak-detection).
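For orientation, here is a minimal sketch of the request and response shapes behind this panel. The route, the field names (`userPrompt`, `documents`, `attackDetected`, `userPromptAnalysis`, `documentsAnalysis`), and the response layout are assumptions based on my reading of the Prompt Shields REST reference; verify them before use. No service call is made here, so a canned response stands in for the real one.

```python
import json

SHIELD_ROUTE = "/contentsafety/text:shieldPrompt"  # assumed REST route

def build_shield_body(user_prompt: str, documents: list[str]) -> str:
    """Serialize the JSON body for a Prompt Shields request."""
    return json.dumps({"userPrompt": user_prompt, "documents": documents})

def attack_detected(response_body: str) -> bool:
    """True if either the user prompt or any attached document was flagged."""
    data = json.loads(response_body)
    if data.get("userPromptAnalysis", {}).get("attackDetected", False):
        return True
    return any(d.get("attackDetected", False)
               for d in data.get("documentsAnalysis", []))

# Canned sample response, standing in for the service's reply:
sample = '{"userPromptAnalysis": {"attackDetected": true}, "documentsAnalysis": []}'
print(attack_detected(sample))  # prints True
```
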
## Use Groundedness detection
The Groundedness detection panel lets you detect whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
1. Select the **Groundedness detection** panel.
1. Select a sample content set on the page, or input your own for testing.
1. Optionally, enable the reasoning feature and select your Azure OpenAI resource from the dropdown.
1. Select **Run test**.
The service returns the groundedness detection result.
For more information, see the [Groundedness detection conceptual guide](/azure/ai-services/content-safety/concepts/groundedness).
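The same check can be run against the preview REST API directly. Below is a hedged sketch of assembling the request body; the field names (`domain`, `task`, `text`, `groundingSources`, `reasoning`) come from my reading of the preview REST reference and should be confirmed there before use.

```python
def build_groundedness_body(llm_text: str, sources: list[str],
                            reasoning: bool = False) -> dict:
    """Assemble a detect-groundedness request body (assumed preview shape).

    Field names are assumptions based on the REST reference; enabling
    `reasoning` additionally requires an Azure OpenAI resource.
    """
    return {
        "domain": "Generic",           # or a task-specific domain
        "task": "Summarization",       # the LLM task being checked
        "text": llm_text,              # the LLM response to verify
        "groundingSources": sources,   # user-provided source material
        "reasoning": reasoning,
    }
```
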
## Use Protected material detection
This feature scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content).
1. Select the **Protected material detection for text** or **Protected material detection for code** panel.
1. Select a sample text on the page, or input your own for testing.
1. Select **Run test**.
The service returns the protected content result.
For more information, see the [Protected material conceptual guide](/azure/ai-services/content-safety/concepts/protected-material).
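Outside the panel, the result arrives as JSON. A minimal sketch of interpreting it follows; the `protectedMaterialAnalysis.detected` field name is an assumption based on the REST reference, so confirm it before relying on it. A canned sample response is used instead of a live call.

```python
import json

def protected_material_found(response_body: str) -> bool:
    """Read the detection flag from a detect-protected-material response.

    The `protectedMaterialAnalysis.detected` field name is an assumption;
    verify it against the current REST reference.
    """
    data = json.loads(response_body)
    return bool(data.get("protectedMaterialAnalysis", {}).get("detected", False))

# Canned sample response, standing in for the service's reply:
sample = '{"protectedMaterialAnalysis": {"detected": true}}'
print(protected_material_found(sample))  # prints True
```
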
## Use custom categories
This feature lets you create and train your own custom content categories and scan text for matches.
1. Select the **Custom categories** panel.
1. Select **Add a new category** to open a dialog box. Enter your category name and a text description, and connect a blob storage container with text training data. Select **Create and train**.
1. Select a category, enter your sample input text, and select **Run test**.
The service returns the custom category result.
For more information, see the [Custom categories conceptual guide](/azure/ai-services/content-safety/concepts/custom-categories).
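For context on roughly what **Create and train** submits, here is a hypothetical sketch of a custom-category definition payload. Every field name here (`categoryName`, `definition`, `sampleBlobUrl`) is an assumption about the preview API, not its confirmed schema; check the custom-categories REST reference before using it.

```python
def build_category_definition(name: str, definition: str,
                              training_blob_url: str) -> dict:
    """Sketch of a custom-category definition payload (hypothetical shape)."""
    return {
        "categoryName": name,                # your category's name
        "definition": definition,            # text description of the category
        "sampleBlobUrl": training_blob_url,  # blob container with training text
    }
```
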
## Next step
To use Azure AI Content Safety features with your Generative AI models, see the [Content filtering](../../concepts/content-filtering.md) guide.