You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: articles/ai-services/content-safety/whats-new.md
+9-10Lines changed: 9 additions & 10 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -8,7 +8,7 @@ manager: nitinme
8
8
ms.service: azure-ai-content-safety
9
9
ms.custom: build-2023
10
10
ms.topic: overview
11
-
ms.date: 02/27/2024
11
+
ms.date: 09/04/2024
12
12
ms.author: pafarley
13
13
---
14
14
@@ -20,11 +20,10 @@ Learn what's new in the service. These items might be release notes, videos, blo
20
20
21
21
### Custom categories (standard) API
22
22
23
-
The custom categories API lets you create and train your own custom content categories and scan text for matches. See [Custom categories](./concepts/custom-categories.md) to learn more.
23
+
The custom categories (standard) API lets you create and train your own custom content categories and scan text for matches. See [Custom categories](./concepts/custom-categories.md) to learn more.
24
24
25
25
## May 2024
26
26
27
-
28
27
### Custom categories (rapid) API
29
28
30
29
The custom categories (rapid) API lets you quickly define emerging harmful content patterns and scan text and images for matches. See [Custom categories](./concepts/custom-categories.md) to learn more.
@@ -33,11 +32,11 @@ The custom categories (rapid) API lets you quickly define emerging harmful conte
33
32
34
33
### Prompt Shields public preview
35
34
36
-
Previously known as **Jailbreak risk detection**, this updated feature detects User Prompt injection attacks, in which users deliberately exploit system vulnerabilities to elicit unauthorized behavior from large language model. Prompt Shields analyzes both direct user prompt attacks and indirect attacks that are embedded in input documents or images. See [Prompt Shields](./concepts/jailbreak-detection.md) to learn more.
35
+
Previously known as **Jailbreak risk detection**, this updated feature detects prompt attacks, in which users deliberately exploit system vulnerabilities to elicit unauthorized behavior from large language model. Prompt Shields analyzes both direct user prompt attacks and indirect attacks which are embedded in input documents or images. See [Prompt Shields](./concepts/jailbreak-detection.md) to learn more.
37
36
38
37
### Groundedness detection public preview
39
38
40
-
The Groundedness detection API detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungroundedness refers to instances where the LLMs produce information that is non-factual or inaccurate from what was present in the source materials. See [Groundedness detection](./concepts/groundedness.md) to learn more.
39
+
The Groundedness detection API detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungroundedness describes instances where the LLMs produce information that is non-factual or inaccurate according to what was present in the source materials. See [Groundedness detection](./concepts/groundedness.md) to learn more.
41
40
42
41
43
42
## January 2024
@@ -56,14 +55,14 @@ The Azure AI Content Safety service is now generally available through the follo
56
55
57
56
## November 2023
58
57
59
-
### Jailbreak risk and Protected material detection (preview)
58
+
### Jailbreak risk and protected material detection (preview)
60
59
61
-
The new Jailbreak risk detection and Protected material detection APIs let you mitigate some of the risks when using generative AI.
60
+
The new Jailbreak risk detection and protected material detection APIs let you mitigate some of the risks when using generative AI.
62
61
63
62
- Jailbreak risk detection scans text for the risk of a [jailbreak attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md)
64
63
- Protected material text detection scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)
65
64
66
-
Jailbreak risk and Protected material detection are only available in select regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).
65
+
Jailbreak risk and protected material detection are only available in select regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).
67
66
68
67
## October 2023
69
68
@@ -72,11 +71,11 @@ Jailbreak risk and Protected material detection are only available in select reg
72
71
The Azure AI Content Safety service is now generally available as a cloud service.
73
72
- The service is available in many more Azure regions. See the [Overview](./overview.md) for a list.
74
73
- The return formats of the Analyze APIs have changed. See the [Quickstarts](./quickstart-text.md) for the latest examples.
75
-
- The names and return formats of several APIs have changed. See the [Migration guide](./how-to/migrate-to-general-availability.md) for a full list of breaking changes. Other guides and quickstarts now reflect the GA version.
74
+
- The names and return formats of several other APIs have changed. See the [Migration guide](./how-to/migrate-to-general-availability.md) for a full list of breaking changes. Other guides and quickstarts now reflect the GA version.
76
75
77
76
### Azure AI Content Safety Java and JavaScript SDKs
78
77
79
-
The Azure AI Content Safety service is now available through Java and JavaScript SDKs. The SDKs are available on [Maven](https://central.sonatype.com/artifact/com.azure/azure-ai-contentsafety) and [npm](https://www.npmjs.com/package/@azure-rest/ai-content-safety). Follow a [quickstart](./quickstart-text.md) to get started.
78
+
The Azure AI Content Safety service is now available through Java and JavaScript SDKs. The SDKs are available on [Maven](https://central.sonatype.com/artifact/com.azure/azure-ai-contentsafety) and [npm](https://www.npmjs.com/package/@azure-rest/ai-content-safety) respectively. Follow a [quickstart](./quickstart-text.md) to get started.
0 commit comments