Skip to content

Commit 4902488

Browse files
author
Jill Grant
authored
Merge pull request #270361 from PatrickFarley/content-safety-updates
add preview tags
2 parents 1655da6 + fb053ea commit 4902488

File tree

5 files changed

+10
-10
lines changed

5 files changed

+10
-10
lines changed

articles/ai-services/content-safety/overview.md

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -47,9 +47,9 @@ There are different types of analysis available from this service. The following
4747
| :-------------------------- | :---------------------- |
4848
| Analyze text API | Scans text for sexual content, violence, hate, and self harm with multi-severity levels. |
4949
| Analyze image API | Scans images for sexual content, violence, hate, and self harm with multi-severity levels. |
50-
| Prompt Shields (new) | Scans text for the risk of a [User input attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md) |
51-
| Groundedness detection (new) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. [Quickstart](./quickstart-groundedness.md) |
52-
| Protected material text detection | Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)|
50+
| Prompt Shields (preview) | Scans text for the risk of a [User input attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md) |
51+
| Groundedness detection (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. [Quickstart](./quickstart-groundedness.md) |
52+
| Protected material text detection (preview) | Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)|
5353

5454
## Content Safety Studio
5555

articles/ai-services/content-safety/quickstart-groundedness.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -11,7 +11,7 @@ ms.date: 03/18/2024
1111
ms.author: pafarley
1212
---
1313

14-
# Quickstart: Groundedness detection
14+
# Quickstart: Groundedness detection (preview)
1515

1616
Follow this guide to use Azure AI Content Safety Groundedness detection to check whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
1717

articles/ai-services/content-safety/quickstart-jailbreak.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -11,7 +11,7 @@ ms.date: 03/15/2024
1111
ms.author: pafarley
1212
---
1313

14-
# Quickstart: Prompt Shields
14+
# Quickstart: Prompt Shields (preview)
1515

1616
Follow this guide to use Azure AI Content Safety Prompt Shields to check your large language model (LLM) inputs for both User Prompt and Document attacks.
1717

articles/ai-services/content-safety/toc.yml

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -23,11 +23,11 @@ items:
2323
href: quickstart-text.md
2424
- name: Image moderation
2525
href: quickstart-image.md
26-
- name: Prompt Shields (new)
26+
- name: Prompt Shields (preview)
2727
href: quickstart-jailbreak.md
28-
- name: Groundedness detection (new)
28+
- name: Groundedness detection (preview)
2929
href: quickstart-groundedness.md
30-
- name: Protected material detection
30+
- name: Protected material detection (preview)
3131
href: quickstart-protected-material.md
3232

3333
- name: Samples

articles/ai-services/content-safety/whats-new.md

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -18,11 +18,11 @@ Learn what's new in the service. These items might be release notes, videos, blo
1818

1919
## March 2024
2020

21-
### Prompt Shields
21+
### Prompt Shields public preview
2222

2323
Previously known as **Jailbreak risk detection**, this updated feature detects User Prompt injection attacks, in which users deliberately exploit system vulnerabilities to elicit unauthorized behavior from large language model. Prompt Shields analyzes both direct user prompt attacks and indirect attacks that are embedded in input documents or images. See [Prompt Shields](./concepts/jailbreak-detection.md) to learn more.
2424

25-
### Groundedness detection
25+
### Groundedness detection public preview
2626

2727
The Groundedness detection API detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungroundedness refers to instances where the LLMs produce information that is non-factual or inaccurate from what was present in the source materials. See [Groundedness detection](./concepts/groundedness.md) to learn more.
2828

0 commit comments

Comments
 (0)