Skip to content

Commit 125b822

Browse files
author
Jill Grant
authored
Merge pull request #856 from PatrickFarley/freshness-pass
freshness
2 parents 0daf952 + 77436dd commit 125b822

File tree

12 files changed

+103
-105
lines changed

12 files changed

+103
-105
lines changed

articles/ai-services/computer-vision/concept-face-recognition.md

Lines changed: 3 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -1,8 +1,7 @@
11
---
22
title: "Face recognition - Face"
33
titleSuffix: Azure AI services
4-
description: Learn the concept of Face recognition, its related operations, and the underlying data structures.
5-
#services: cognitive-services
4+
description: Learn the concept of Face recognition, its operations, and data structures, including PersonGroup creation, identification, and verification.
65
author: PatrickFarley
76
manager: nitinme
87

@@ -11,7 +10,7 @@ ms.subservice: azure-ai-face
1110
ms.custom:
1211
- ignite-2023
1312
ms.topic: conceptual
14-
ms.date: 02/14/2024
13+
ms.date: 10/16/2024
1514
ms.author: pafarley
1615
---
1716

@@ -50,7 +49,7 @@ Use the following tips to ensure that your input images give the most accurate r
5049

5150
[!INCLUDE [identity-input-technical](includes/identity-input-technical.md)]
5251
[!INCLUDE [identity-input-composition](includes/identity-input-composition.md)]
53-
* You can utilize the `qualityForRecognition` attribute in the [face detection](./how-to/identity-detect-faces.md) operation when using applicable detection models as a general guideline of whether the image is likely of sufficient quality to attempt face recognition on. Only `"high"` quality images are recommended for person enrollment and quality at or above `"medium"` is recommended for identification scenarios.
52+
* You can use the `qualityForRecognition` attribute in the [face detection](./how-to/identity-detect-faces.md) operation when using applicable detection models as a general guideline of whether the image is likely of sufficient quality to attempt face recognition on. Only `"high"` quality images are recommended for person enrollment and quality at or above `"medium"` is recommended for identification scenarios.
5453

5554
## Next steps
5655

articles/ai-services/computer-vision/concept-shelf-analysis.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -8,7 +8,7 @@ manager: nitinme
88

99
ms.service: azure-ai-vision
1010
ms.topic: conceptual
11-
ms.date: 02/14/2024
11+
ms.date: 10/16/2024
1212
ms.author: pafarley
1313
ms.custom: build-2023, build-2023-dataai
1414
---

articles/ai-services/computer-vision/how-to/image-retrieval.md

Lines changed: 9 additions & 6 deletions
Original file line numberDiff line numberDiff line change
@@ -1,15 +1,17 @@
11
---
2-
title: Do image retrieval using multimodal embeddings - Image Analysis 4.0
2+
title: Image retrieval using multimodal embeddings
33
titleSuffix: Azure AI services
4-
description: Learn how to call the image retrieval API to vectorize image and search terms.
4+
description: Learn how to use the image retrieval API to vectorize images and search terms, enabling text-based image searches without metadata.
55
#services: cognitive-services
66
author: PatrickFarley
77
manager: nitinme
88

99
ms.service: azure-ai-vision
1010
ms.topic: how-to
11-
ms.date: 02/20/2024
11+
ms.date: 10/16/2024
1212
ms.author: pafarley
13+
14+
#customer intent: As a developer, I want to use the image retrieval API to vectorize images and text so that I can perform text-based image searches.
1315
---
1416

1517
# Do image retrieval using multimodal embeddings (version 4.0)
@@ -35,7 +37,7 @@ You can try out the Multimodal embeddings feature quickly and easily in your bro
3537
> The Vision Studio experience is limited to 500 images. To use a larger image set, create your own search application using the APIs in this guide.
3638
3739
> [!div class="nextstepaction"]
38-
> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
40+
> [Try Vision Studio](https://portal.vision.cognitive.azure.com/gallery/imageanalysis)
3941
4042
## Call the Vectorize Image API
4143

@@ -124,6 +126,7 @@ def cosine_similarity(vector1, vector2):
124126

125127
---
126128

127-
## Next steps
129+
## Next step
128130

129-
[Image retrieval concepts](../concept-image-retrieval.md)
131+
> [!div class="nextstepaction"]
132+
> [Image retrieval concepts](../concept-image-retrieval.md)

articles/ai-services/content-moderator/overview.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -7,7 +7,7 @@ author: PatrickFarley
77
manager: nitinme
88
ms.service: azure-ai-content-moderator
99
ms.topic: overview
10-
ms.date: 01/18/2024
10+
ms.date: 10/16/2024
1111
ms.author: pafarley
1212
keywords: content moderator, Azure Content Moderator, online moderator, content filtering software, content moderation service, content moderation
1313
#Customer intent: As a developer of content management software, I want to find out whether Azure Content Moderator is the right solution for my moderation needs.

articles/ai-services/content-safety/concepts/groundedness.md

Lines changed: 10 additions & 12 deletions
Original file line numberDiff line numberDiff line change
@@ -7,15 +7,14 @@ author: PatrickFarley
77
manager: nitinme
88
ms.service: azure-ai-content-safety
99
ms.topic: conceptual
10-
ms.date: 03/15/2024
10+
ms.date: 10/16/2024
1111
ms.author: pafarley
1212
---
1313

1414
# Groundedness detection
1515

1616
The Groundedness detection API detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungroundedness refers to instances where the LLMs produce information that is non-factual or inaccurate from what was present in the source materials.
1717

18-
1918
## Key terms
2019

2120
- **Retrieval Augmented Generation (RAG)**: RAG is a technique for augmenting LLM knowledge with other data. LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data that was available at the time they were trained. If you want to build AI applications that can reason about private data or data introduced after a model’s cutoff date, you need to provide the model with that specific information. The process of bringing the appropriate information and inserting it into the model prompt is known as Retrieval Augmented Generation (RAG). For more information, see [Retrieval-augmented generation (RAG)](https://python.langchain.com/docs/tutorials/rag/).
@@ -49,11 +48,11 @@ Groundedness detection supports text-based Summarization and QnA tasks to ensure
4948

5049
The groundedness detection API includes a correction feature that automatically corrects any detected ungroundedness in the text based on the provided grounding sources. When the correction feature is enabled, the response includes a `corrected Text` field that presents the corrected text aligned with the grounding sources.
5150

52-
Below, see several common scenarios that illustrate how and when to apply these features to achieve the best outcomes.
51+
### Use cases
5352

53+
Below, see several common scenarios that illustrate how and when to apply these features to achieve the best outcomes.
5454

55-
### Summarization in medical contexts
56-
**Use case:**
55+
#### Summarization in medical contexts
5756

5857
You're summarizing medical documents, and it’s critical that the names of patients in the summaries are accurate and consistent with the provided grounding sources.
5958

@@ -74,8 +73,7 @@ Example API Request:
7473

7574
The correction feature detects that `Kevin` is ungrounded because it conflicts with the grounding source `Jane`. The API returns the corrected text: `"The patient name is Jane."`
7675

77-
### Question and answer (QnA) task with customer support data
78-
**Use case:**
76+
#### Question and answer (QnA) task with customer support data
7977

8078
You're implementing a QnA system for a customer support chatbot. It’s essential that the answers provided by the AI align with the most recent and accurate information available.
8179

@@ -99,8 +97,8 @@ Example API Request:
9997
The API detects that `5%` is ungrounded because it does not match the provided grounding source `4.5%`. The response includes the correction text: `"The interest rate is 4.5%."`
10098

10199

102-
### Content creation with historical data
103-
**Use case**:
100+
#### Content creation with historical data
101+
104102
You're creating content that involves historical data or events, where accuracy is critical to maintaining credibility and avoiding misinformation.
105103

106104
Example API Request:
@@ -116,11 +114,11 @@ Example API Request:
116114
}
117115
```
118116
**Expected outcome:**
117+
119118
The API detects the ungrounded date `1065` and corrects it to `1066` based on the grounding source. The response includes the corrected text: `"The Battle of Hastings occurred in 1066."`
120119

121120

122-
### Internal documentation summarization
123-
**Use case:**
121+
#### Internal documentation summarization
124122

125123
You're summarizing internal documents where product names, version numbers, or other specific data points must remain consistent.
126124

@@ -159,7 +157,7 @@ Currently, the Groundedness detection API supports English language content. Whi
159157

160158
See [Input requirements](../overview.md#input-requirements) for maximum text length limitations.
161159

162-
### Regions
160+
### Region availability
163161

164162
To use this API, you must create your Azure AI Content Safety resource in the supported regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).
165163

articles/ai-services/content-safety/concepts/harm-categories.md

Lines changed: 5 additions & 6 deletions
Original file line numberDiff line numberDiff line change
@@ -2,13 +2,12 @@
22
title: "Harm categories in Azure AI Content Safety"
33
titleSuffix: Azure AI services
44
description: Learn about the different content moderation flags and severity levels that the Azure AI Content Safety service returns.
5-
#services: cognitive-services
65
author: PatrickFarley
76
manager: nitinme
87
ms.service: azure-ai-content-safety
98
ms.custom: build-2023
109
ms.topic: conceptual
11-
ms.date: 01/20/2024
10+
ms.date: 10/16/2024
1211
ms.author: pafarley
1312
---
1413

@@ -23,9 +22,9 @@ Content Safety recognizes four distinct categories of objectionable content.
2322

2423
| Category | Description |API term |
2524
| --------- | ------------------- | -- |
26-
| Hate and Fairness | Hate and fairness-related harms refer to any content that attacks or uses discriminatory language with reference to a person or Identity group based on certain differentiating attributes of these groups. <br><br>This includes, but is not limited to:<ul><li>Race, ethnicity, nationality</li><li>Gender identity groups and expression</li><li>Sexual orientation</li><li>Religion</li><li>Personal appearance and body size</li><li>Disability status</li><li>Harassment and bullying</li></ul> | `Hate` |
25+
| Hate and Fairness | Hate and fairness harms refer to any content that attacks or uses discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups. <br><br>This includes, but is not limited to:<ul><li>Race, ethnicity, nationality</li><li>Gender identity groups and expression</li><li>Sexual orientation</li><li>Religion</li><li>Personal appearance and body size</li><li>Disability status</li><li>Harassment and bullying</li></ul> | `Hate` |
2726
| Sexual | Sexual describes language related to anatomical organs and genitals, romantic relationships and sexual acts, acts portrayed in erotic or affectionate terms, including those portrayed as an assault or a forced sexual violent act against one’s will. <br><br> This includes but is not limited to:<ul><li>Vulgar content</li><li>Prostitution</li><li>Nudity and Pornography</li><li>Abuse</li><li>Child exploitation, child abuse, child grooming</li></ul> | `Sexual` |
28-
| Violence | Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns and related entities. <br><br>This includes, but isn't limited to: <ul><li>Weapons</li><li>Bullying and intimidation</li><li>Terrorist and violent extremism</li><li>Stalking</li></ul> | `Violence` |
27+
| Violence | Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities. <br><br>This includes, but isn't limited to: <ul><li>Weapons</li><li>Bullying and intimidation</li><li>Terrorist and violent extremism</li><li>Stalking</li></ul> | `Violence` |
2928
| Self-Harm | Self-harm describes language related to physical actions intended to purposely hurt, injure, damage one’s body or kill oneself. <br><br> This includes, but isn't limited to: <ul><li>Eating Disorders</li><li>Bullying and intimidation</li></ul> | `SelfHarm` |
3029

3130
Classification can be multi-labeled. For example, when a text sample goes through the text moderation model, it could be classified as both Sexual content and Violence.
@@ -34,7 +33,7 @@ Classification can be multi-labeled. For example, when a text sample goes throug
3433

3534
Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.
3635

37-
**Text**: The current version of the text model supports the full 0-7 severity scale. The classifier detects amongst all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
36+
**Text**: The current version of the text model supports the full 0-7 severity scale. The classifier detects among all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
3837
- `[0,1]` -> `0`
3938
- `[2,3]` -> `2`
4039
- `[4,5]` -> `4`
@@ -46,7 +45,7 @@ Every harm category the service applies also comes with a severity level rating.
4645
- `4`
4746
- `6`
4847

49-
**Image with text**: The current version of the multimodal model supports the full 0-7 severity scale. The classifier detects amongst all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
48+
**Image with text**: The current version of the multimodal model supports the full 0-7 severity scale. The classifier detects among all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
5049
- `[0,1]` -> `0`
5150
- `[2,3]` -> `2`
5251
- `[4,5]` -> `4`

articles/ai-services/content-safety/concepts/jailbreak-detection.md

Lines changed: 28 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -8,7 +8,7 @@ manager: nitinme
88
ms.service: azure-ai-content-safety
99
ms.custom: build-2023
1010
ms.topic: conceptual
11-
ms.date: 09/25/2024
11+
ms.date: 10/16/2024
1212
ms.author: pafarley
1313
---
1414

@@ -19,6 +19,33 @@ Generative AI models can pose risks of being exploited by malicious actors. To m
1919
Prompt Shields is a unified API that analyzes LLM inputs and detects adversarial user input attacks.
2020

2121

22+
## User scenarios
23+
### AI content creation platforms: Detecting harmful prompts
24+
- Scenario: An AI content creation platform uses generative AI models to produce marketing copy, social media posts, and articles based on user-provided prompts. To prevent the generation of harmful or inappropriate content, the platform integrates "Prompt Shields."
25+
- User: Content creators, platform administrators, and compliance officers.
26+
- Action: The platform uses Azure AI Content Safety's "Prompt Shields" to analyze user prompts before generating content. If a prompt is detected as potentially harmful or likely to lead to policy-violating outputs (e.g., prompts asking for defamatory content or hate speech), the shield blocks the prompt and alerts the user to modify their input.
27+
- Outcome: The platform ensures all AI-generated content is safe, ethical, and compliant with community guidelines, enhancing user trust and protecting the platform's reputation.
28+
### AI-powered chatbots: Mitigating risk from user prompt attacks
29+
- Scenario: A customer service provider uses AI-powered chatbots for automated support. To safeguard against user prompts that could lead the AI to generate inappropriate or unsafe responses, the provider uses "Prompt Shields."
30+
- User: Customer service agents, chatbot developers, and compliance teams.
31+
- Action: The chatbot system integrates "Prompt Shields" to monitor and evaluate user inputs in real-time. If a user prompt is identified as potentially harmful or designed to exploit the AI (e.g., attempting to provoke inappropriate responses or extract sensitive information), the shield intervenes by blocking the response or redirecting the query to a human agent.
32+
- Outcome: The customer service provider maintains high standards of interaction safety and compliance, preventing the chatbot from generating responses that could harm users or breach policies.
33+
### E-learning platforms: Preventing inappropriate AI-generated educational content
34+
- Scenario: An e-learning platform employs GenAI to generate personalized educational content based on student inputs and reference documents. To avoid generating inappropriate or misleading educational content, the platform utilizes "Prompt Shields."
35+
- User: Educators, content developers, and compliance officers.
36+
- Action: The platform uses "Prompt Shields" to analyze both user prompts and uploaded documents for content that could lead to unsafe or policy-violating AI outputs. If a prompt or document is detected as likely to generate inappropriate educational content, the shield blocks it and suggests alternative, safe inputs.
37+
- Outcome: The platform ensures that all AI-generated educational materials are appropriate and compliant with academic standards, fostering a safe and effective learning environment.
38+
### Healthcare AI assistants: Blocking unsafe prompts and document inputs
39+
- Scenario: A healthcare provider uses AI assistants to offer preliminary medical advice based on user inputs and uploaded medical documents. To ensure the AI does not generate unsafe or misleading medical advice, the provider implements "Prompt Shields."
40+
- User: Healthcare providers, AI developers, and compliance teams.
41+
- Action: The AI assistant employs "Prompt Shields" to analyze patient prompts and uploaded medical documents for harmful or misleading content. If a prompt or document is identified as potentially leading to unsafe medical advice, the shield prevents the AI from generating a response and redirects the patient to a human healthcare professional.
42+
- Outcome: The healthcare provider ensures that AI-generated medical advice remains safe and accurate, protecting patient safety and maintaining compliance with healthcare regulations.
43+
### Generative AI for creative writing: Protecting against prompt manipulation
44+
- Scenario: A creative writing platform uses GenAI to assist writers in generating stories, poetry, and scripts based on user inputs. To prevent the generation of inappropriate or offensive content, the platform incorporates "Prompt Shields."
45+
- User: Writers, platform moderators, and content reviewers.
46+
- Action: The platform integrates "Prompt Shields" to evaluate user prompts for creative writing. If a prompt is detected as likely to produce offensive, defamatory, or otherwise inappropriate content, the shield blocks the AI from generating such content and suggests revisions to the user.
47+
48+
2249
## Types of input attacks
2350

2451
The types of input attacks that Prompt Shields detects are described in this table.

0 commit comments

Comments
 (0)