Commit b2baae4

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-ai-docs-pr into openai-updates
2 parents 2e3cf30 + 0ac592f commit b2baae4

475 files changed: +5761 −4583 lines


.openpublishing.publish.config.json

Lines changed: 5 additions & 4 deletions
@@ -3,13 +3,14 @@
 {
   "docset_name": "azure-ai",
   "build_source_folder": ".",
+  "build_output_subfolder": "azure-ai",
+  "locale": "en-us",
+  "monikers": [],
+  "moniker_ranges": [],
   "xref_query_tags": [
     "/dotnet",
     "/python"
   ],
-  "build_output_subfolder": "azure-ai",
-  "locale": "en-us",
-  "monikers": [],
   "open_to_public_contributors": true,
   "type_mapping": {
     "Conceptual": "Content",
@@ -172,4 +173,4 @@
   ],
   "branch_target_mapping": {},
   "targets": {}
-}
+}

articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 20 additions & 0 deletions
@@ -30,6 +30,11 @@
       "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/luis/luis-concept-data-conversion.md",
+      "redirect_url": "/azure/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/custom-vision-service/update-application-to-3.0-sdk.md",
       "redirect_url": "/azure/ai-services/custom-vision-service/overview",
@@ -405,6 +410,21 @@
       "redirect_url": "/azure/ai-services/speech-service/release-notes",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/get-started-speaker-recognition.md",
+      "redirect_url": "/azure/ai-services/speech-service/speaker-recognition-overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-recognize-intents-from-speech-csharp.md",
+      "redirect_url": "/azure/ai-services/speech-service/intent-recognition",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md",
+      "redirect_url": "/azure/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/anomaly-detector/how-to/postman.md",
       "redirect_url": "/azure/ai-services/anomaly-detector/overview",

articles/ai-services/anomaly-detector/index.yml

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ metadata:
   manager: nitinme
   ms.service: azure-ai-anomaly-detector
   ms.topic: landing-page
-  ms.date: 01/18/2024
+  ms.date: 09/20/2024
   ms.author: mbullwin


articles/ai-services/anomaly-detector/overview.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ author: mrbullwinkle
 manager: nitinme
 ms.service: azure-ai-anomaly-detector
 ms.topic: overview
-ms.date: 01/18/2024
+ms.date: 09/20/2024
 ms.author: mbullwin
 keywords: anomaly detection, machine learning, algorithms
 ---

articles/ai-services/computer-vision/includes/image-analysis-curl-quickstart-40.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ A successful response is returned in JSON, similar to the following example:

 ```json
 {
-    "modelVersion": "2024-02-01",
+    "modelVersion": "2023-10-01",
     "captionResult":
     {
         "text": "a man pointing at a screen",

articles/ai-services/content-safety/concepts/custom-categories.md

Lines changed: 2 additions & 2 deletions
@@ -51,7 +51,7 @@ This implementation works on text content and image content.

 #### [Custom categories (standard) API](#tab/standard)

-The Azure AI Content Safety custom category feature uses a multi-step process for creating, training, and using custom content classification models. Here's a look at the workflow:
+The Azure AI Content Safety custom categories feature uses a multi-step process for creating, training, and using custom content classification models. Here's a look at the workflow:

 ### Step 1: Definition and setup

@@ -73,7 +73,7 @@ You use the **analyzeCustomCategory** API to analyze text content and determine

 #### [Custom categories (rapid) API](#tab/rapid)

-To use the custom category (rapid) API, you first create an **incident** object with a text description. Then, you upload any number of image or text samples to the incident. The LLM on the backend will then use these to evaluate future input content. No training step is needed.
+To use the custom categories (rapid) API, you first create an **incident** object with a text description. Then, you upload any number of image or text samples to the incident. The LLM on the backend will then use these to evaluate future input content. No training step is needed.

 You can include your defined incident in a regular text analysis or image analysis request. The service will indicate whether the submitted content is an instance of your incident. The service can still do other content moderation tasks in the same API call.
articles/ai-services/content-safety/concepts/groundedness.md

Lines changed: 111 additions & 5 deletions
@@ -18,13 +18,14 @@ The Groundedness detection API detects whether the text responses of large langu

 ## Key terms

-- **Retrieval Augmented Generation (RAG)**: RAG is a technique for augmenting LLM knowledge with other data. LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data that was available at the time they were trained. If you want to build AI applications that can reason about private data or data introduced after a model’s cutoff date, you need to provide the model with that specific information. The process of bringing the appropriate information and inserting it into the model prompt is known as Retrieval Augmented Generation (RAG). For more information, see [Retrieval-augmented generation (RAG)](https://python.langchain.com/docs/use_cases/question_answering/).
+- **Retrieval Augmented Generation (RAG)**: RAG is a technique for augmenting LLM knowledge with other data. LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data that was available at the time they were trained. If you want to build AI applications that can reason about private data or data introduced after a model’s cutoff date, you need to provide the model with that specific information. The process of bringing the appropriate information and inserting it into the model prompt is known as Retrieval Augmented Generation (RAG). For more information, see [Retrieval-augmented generation (RAG)](https://python.langchain.com/docs/tutorials/rag/).
+- **Groundedness and Ungroundedness in LLMs**: This refers to the extent to which the model's outputs are based on provided information or reflect reliable sources accurately. A grounded response adheres closely to the given information, avoiding speculation or fabrication. In groundedness measurements, source information is crucial and serves as the grounding source.

-- **Groundedness and Ungroundedness in LLMs**: This refers to the extent to which the model’s outputs are based on provided information or reflect reliable sources accurately. A grounded response adheres closely to the given information, avoiding speculation or fabrication. In groundedness measurements, source information is crucial and serves as the grounding source.
+## Groundedness detection options

-## Groundedness detection features
+The following options are available for Groundedness detection in Azure AI Content Safety:

-- **Domain Selection**: Users can choose an established domain to ensure more tailored detection that aligns with the specific needs of their field. Currently the available domains are `MEDICAL` and `GENERIC`.
+- **Domain Selection**: Users can choose an established domain to ensure more tailored detection that aligns with the specific needs of their field. The currently available domains are `MEDICAL` and `GENERIC`.
 - **Task Specification**: This feature lets you select the task you're doing, such as QnA (question & answering) and Summarization, with adjustable settings according to the task type.
 - **Speed vs Interpretability**: There are two modes that trade off speed with result interpretability.
   - Non-Reasoning mode: Offers fast detection capability; easy to embed into online applications.
@@ -43,6 +44,111 @@ Groundedness detection supports text-based Summarization and QnA tasks to ensure
 - Medical QnA: For medical QnA, the function helps verify the accuracy of medical answers and advice provided by AI systems to healthcare professionals and patients, reducing the risk of medical errors.
 - Educational QnA: In educational settings, the function can be applied to QnA tasks to confirm that answers to academic questions or test prep queries are factually accurate, supporting the learning process.

+
+## Groundedness correction
+
+The groundedness detection API includes a correction feature that automatically corrects any detected ungroundedness in the text based on the provided grounding sources. When the correction feature is enabled, the response includes a `corrected Text` field that presents the corrected text aligned with the grounding sources.
+
+The following common scenarios illustrate how and when to apply these features to achieve the best outcomes.
+
+
+### Summarization in medical contexts
+**Use case:**
+
+You're summarizing medical documents, and it’s critical that the names of patients in the summaries are accurate and consistent with the provided grounding sources.
+
+Example API Request:
+
+```json
+{
+  "domain": "Medical",
+  "task": "Summarization",
+  "text": "The patient name is Kevin.",
+  "groundingSources": [
+    "The patient name is Jane."
+  ]
+}
+```
+
+**Expected outcome:**
+
+The correction feature detects that `Kevin` is ungrounded because it conflicts with the grounding source `Jane`. The API returns the corrected text: `"The patient name is Jane."`
+
+### Question and answer (QnA) task with customer support data
+**Use case:**
+
+You're implementing a QnA system for a customer support chatbot. It’s essential that the answers provided by the AI align with the most recent and accurate information available.
+
+Example API Request:
+
+```json
+{
+  "domain": "Generic",
+  "task": "QnA",
+  "qna": {
+    "query": "What is the current interest rate?"
+  },
+  "text": "The interest rate is 5%.",
+  "groundingSources": [
+    "As of July 2024, the interest rate is 4.5%."
+  ]
+}
+```
+**Expected outcome:**
+
+The API detects that `5%` is ungrounded because it does not match the provided grounding source `4.5%`. The response includes the correction text: `"The interest rate is 4.5%."`
+
+
+### Content creation with historical data
+**Use case:**
+You're creating content that involves historical data or events, where accuracy is critical to maintaining credibility and avoiding misinformation.
+
+Example API Request:
+
+```json
+{
+  "domain": "Generic",
+  "task": "Summarization",
+  "text": "The Battle of Hastings occurred in 1065.",
+  "groundingSources": [
+    "The Battle of Hastings occurred in 1066."
+  ]
+}
+```
+**Expected outcome:**
+The API detects the ungrounded date `1065` and corrects it to `1066` based on the grounding source. The response includes the corrected text: `"The Battle of Hastings occurred in 1066."`
+
+
+### Internal documentation summarization
+**Use case:**
+
+You're summarizing internal documents where product names, version numbers, or other specific data points must remain consistent.
+
+Example API Request:
+
+```json
+{
+  "domain": "Generic",
+  "task": "Summarization",
+  "text": "Our latest product is SuperWidget v2.1.",
+  "groundingSources": [
+    "Our latest product is SuperWidget v2.2."
+  ]
+}
+```
+
+**Expected outcome:**
+
+The correction feature identifies `SuperWidget v2.1` as ungrounded and updates it to `SuperWidget v2.2` in the response. The response returns the corrected text: `"Our latest product is SuperWidget v2.2."`
+
+## Best practices
+
+Adhere to the following best practices when setting up RAG systems to get the best performance out of the groundedness detection API:
+- When dealing with product names or version numbers, use grounding sources directly from internal release notes or official product documentation to ensure accuracy.
+- For historical content, cross-reference your grounding sources with trusted academic or historical databases to ensure the highest level of accuracy.
+- In a dynamic environment like finance, always use the most recent and reliable grounding sources to ensure your AI system provides accurate and timely information.
+- Always ensure that your grounding sources are accurate and up-to-date, particularly in sensitive fields like healthcare. This minimizes the risk of errors in the summarization process.
+
 ## Limitations

 ### Language availability
@@ -57,7 +163,7 @@ See [Input requirements](../overview.md#input-requirements) for maximum text len

 To use this API, you must create your Azure AI Content Safety resource in the supported regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).

-### TPS limitations
+### Rate limitations

 See [Query rates](/azure/ai-services/content-safety/overview#query-rates).

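
The correction workflow documented in this file can be exercised with a plain HTTP call. The sketch below is illustrative only: the endpoint path, the preview `api-version`, and the `"correction": true` flag are assumptions based on the preview groundedness API and should be checked against the current Azure AI Content Safety reference; the request body mirrors the medical example added above.

```python
import json
import urllib.request

# Assumed values; replace with your own Content Safety resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"

# Path and api-version are assumptions for the preview groundedness API.
url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"

body = {
    "domain": "Medical",
    "task": "Summarization",
    "text": "The patient name is Kevin.",
    "groundingSources": ["The patient name is Jane."],
    "correction": True,  # assumed flag that asks the service to return corrected text
}

request = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    },
    method="POST",
)

# Print the raw response; per the article, the corrected text is expected in
# the field it calls `corrected Text`, alongside the ungroundedness verdict.
with urllib.request.urlopen(request) as response:
    print(json.dumps(json.load(response), indent=2))
```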

articles/ai-services/content-safety/concepts/harm-categories.md

Lines changed: 18 additions & 10 deletions
@@ -35,21 +35,29 @@ Classification can be multi-labeled. For example, when a text sample goes throug
 Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.

 **Text**: The current version of the text model supports the full 0-7 severity scale. The classifier detects amongst all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
-- [0,1] -> 0
-- [2,3] -> 2
-- [4,5] -> 4
-- [6,7] -> 6
-
-**Image**: The current version of the image model supports the trimmed version of the full 0-7 severity scale. The classifier only returns severities 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
-- [0,1] -> 0
-- [2,3] -> 2
-- [4,5] -> 4
-- [6,7] -> 6
+- `[0,1]` -> `0`
+- `[2,3]` -> `2`
+- `[4,5]` -> `4`
+- `[6,7]` -> `6`
+
+**Image**: The current version of the image model supports the trimmed version of the full 0-7 severity scale. The classifier only returns severities 0, 2, 4, and 6.
+- `0`
+- `2`
+- `4`
+- `6`
+
+**Image with text**: The current version of the multimodal model supports the full 0-7 severity scale. The classifier detects amongst all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
+- `[0,1]` -> `0`
+- `[2,3]` -> `2`
+- `[4,5]` -> `4`
+- `[6,7]` -> `6`

 [!INCLUDE [severity-levels text](../includes/severity-levels-text.md)]

 [!INCLUDE [severity-levels image](../includes/severity-levels-image.md)]

+[!INCLUDE [severity-levels multimodal](../includes/severity-levels-multimodal.md)]
+

 ## Next steps