Commit 6b50926
writer edits
1 parent d96bbc1 commit 6b50926

File tree: 4 files changed (+4, -6 lines)

articles/ai-services/content-safety/includes/groundedness-detection-overview.md (1 addition, 1 deletion)

@@ -18,7 +18,7 @@ The Groundedness detection feature detects whether the text responses of large l
 - **Groundedness and Ungroundedness in LLMs**: This refers to the extent to which the model's outputs are based on provided information or reflect reliable sources accurately. A grounded response adheres closely to the given information, avoiding speculation or fabrication. In groundedness measurements, source information is crucial and serves as the grounding source.
 
-## Use cases
+## User scenarios
 
 Groundedness detection supports text-based Summarization and QnA tasks to ensure that the generated summaries or answers are accurate and reliable.
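As a hedged illustration of the QnA scenario above, the sketch below builds a request payload for the Azure AI Content Safety groundedness detection REST endpoint. The `text:detectGroundedness` route, the preview `api-version`, and the payload field names are assumptions drawn from the preview API and should be verified against your service version.

```python
# Sketch: building a groundedness-detection request for a QnA task.
# The endpoint path, api-version, and payload field names are assumptions
# based on the preview REST API and may differ in newer versions.

def build_groundedness_request(endpoint: str, query: str, answer: str,
                               sources: list[str]) -> tuple[str, dict]:
    """Return the (url, payload) pair for a text:detectGroundedness call."""
    url = (
        f"{endpoint}/contentsafety/text:detectGroundedness"
        "?api-version=2024-02-15-preview"  # assumed preview version
    )
    payload = {
        "domain": "Generic",          # or "Medical"
        "task": "QnA",                # or "Summarization"
        "qna": {"query": query},      # only needed for QnA tasks
        "text": answer,               # the LLM output to check
        "groundingSources": sources,  # source material to ground against
        "reasoning": False,           # True returns an explanation (preview)
    }
    return url, payload

url, payload = build_groundedness_request(
    "https://my-resource.cognitiveservices.azure.com",
    query="When did the contract take effect?",
    answer="The contract took effect in 2021.",
    sources=["The contract was signed in March 2020 and took effect in 2021."],
)
# Send with any HTTP client, adding the Ocp-Apim-Subscription-Key header.
```

The resource name and the example strings are hypothetical; only the payload shape is the point of the sketch.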

articles/ai-services/openai/concepts/content-filter-protected-material.md (1 addition, 3 deletions)

@@ -10,9 +10,7 @@ ms.service: azure-ai-openai
 
 # Protected material detection filter
 
-The Protected material detection filter scans the output of large language models (LLMs) to identify and flag known protected material. This feature is designed to help organizations prevent the generation of content that closely matches copyrighted text or code.
-
-The Protected material detection filter scans the output of large language models to identify and flag known protected material. It is designed to help organizations prevent the generation of content that closely matches copyrighted text or code.
+The Protected material detection filter scans the output of large language models (LLMs) to identify and flag known protected material. It is designed to help organizations prevent the generation of content that closely matches copyrighted text or code.
 
 The Protected material text filter flags known text content (for example, song lyrics, articles, recipes, and selected web content) that might be output by large language models.

articles/ai-services/openai/concepts/content-filter.md (1 addition, 1 deletion)

@@ -32,7 +32,7 @@ The content filtering system integrated in the Azure OpenAI Service contains:
 * Neural multi-class classification models aimed at detecting and filtering harmful content; the models cover four categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, and high). Content detected at the 'safe' severity level is labeled in annotations but isn't subject to filtering and isn't configurable.
 * Other optional classification models aimed at detecting jailbreak risk and known content for text and code; these models are binary classifiers that flag whether user or model behavior qualifies as a jailbreak attack or a match to known text or source code. The use of these models is optional, but use of the protected material code model may be required for Customer Copyright Commitment coverage.
 
-## Risk categories
+## Filter categories
 
 The following table summarizes the risk categories supported by Azure OpenAI's content filtering system.
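The category-and-severity annotations described in the excerpt above surface in Azure OpenAI responses as a per-choice `content_filter_results` object. The sketch below summarizes such an annotation; the shape shown (severity categories with `filtered`/`severity`, binary detections with `filtered`/`detected`) mirrors the documented annotation format, but treat the exact field names as assumptions to verify against your API version.

```python
# Sketch: summarizing a content_filter_results annotation from an
# Azure OpenAI response. Field names follow the documented annotation
# shape but should be verified per API version.

SEVERITY_ORDER = ["safe", "low", "medium", "high"]

def summarize_filter_results(results: dict) -> dict:
    """Return the flagged categories and the highest severity seen."""
    flagged = [name for name, r in results.items() if r.get("filtered")]
    severities = [r["severity"] for r in results.values() if "severity" in r]
    highest = max(severities, key=SEVERITY_ORDER.index, default="safe")
    return {"flagged": flagged, "highest_severity": highest}

# Example annotation: the four severity categories plus optional
# binary detections (jailbreak, protected material).
annotation = {
    "hate": {"filtered": False, "severity": "safe"},
    "sexual": {"filtered": False, "severity": "safe"},
    "violence": {"filtered": True, "severity": "medium"},
    "self_harm": {"filtered": False, "severity": "safe"},
    "jailbreak": {"filtered": False, "detected": False},
    "protected_material_code": {"filtered": False, "detected": False},
}

summary = summarize_filter_results(annotation)
# summary == {"flagged": ["violence"], "highest_severity": "medium"}
```

Note that 'safe'-severity content appears in the annotation but, per the excerpt, is never filtered; only the `filtered` flag indicates an actual block.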

articles/ai-services/openai/toc.yml (1 addition, 1 deletion)

@@ -60,7 +60,7 @@ items:
   items:
   - name: Content filtering overview
     href: ./concepts/content-filter.md
-  - name: Content filtering risk categories
+  - name: Content filtering severity levels
     href: ./concepts/content-filter-severity-levels.md
   - name: Prompt shields
     href: ./concepts/content-filter-prompt-shields.md

0 commit comments
