
Commit 1309a14

Merge branch 'release-preview-2-cu' into jan-4592-overview
2 parents 7447963 + 5da9d3e

2 files changed: +211 -15 lines


.vscode/settings.json

Lines changed: 2 additions & 1 deletion
@@ -1,5 +1,6 @@
 {
   "cSpell.words": [
     "DALL"
-  ]
+  ],
+  "DockerRun.DisableAutoGenerateConfig": true
 }

articles/ai-services/content-understanding/audio/overview.md

Lines changed: 209 additions & 14 deletions
@@ -2,13 +2,13 @@
 title: Azure AI Content Understanding audio overview
 titleSuffix: Azure AI services
 description: Learn about Azure AI Content Understanding audio solutions
-author: laujan
+author: jagoerge
 ms.author: lajanuar
 manager: nitinme
 ms.service: azure-ai-content-understanding
 ms.topic: overview
-ms.date: 03/18/2025
-ms.custom: ignite-2024-understanding-release
+ms.date: 05/06/2025
+ms.custom: release-preview-2-cu
 ---


@@ -38,11 +38,26 @@ Content Understanding serves as a cornerstone for Media Asset Management solutio

 * **Transcription**. Converts conversational audio into searchable and analyzable text-based transcripts in WebVTT format. Customizable fields can be generated from transcription data. Sentence-level and word-level timestamps are available upon request.

-* **`Diarization`**. Distinguishes between speakers in a conversation, attributing parts of the transcript to specific speakers.
+> [!NOTE]
+> Content Understanding supports the full set of [Azure AI Speech speech to text languages](https://learn.microsoft.com/azure/ai-services/speech-service/language-support?tabs=stt).
+> For languages with fast transcription support, and for files ≤ 300 MB and/or ≤ 2 hours, transcription time is reduced substantially.
+
+* **Diarization**. Distinguishes between speakers in a conversation, attributing parts of the transcript to specific speakers.

 * **Speaker role detection**. Identifies agent and customer roles within contact center call data.

-* **Language detection**. Automatically detects the language in the audio or uses specified language/locale hints.
+* **Multilingual transcription**. Generates multilingual transcripts, applying a language/locale per phrase. Unlike language detection, this feature is enabled when no language/locale is specified or when the language is set to `auto`.
+
+> [!NOTE]
+> The following locales are currently supported:
+> **Files ≤ 300 MB and/or ≤ 2 hours**: de-DE, en-AU, en-CA, en-GB, en-IN, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, and zh-CN.
+> **Files larger than 300 MB and/or longer than 2 hours**: en-US, es-ES, es-MX, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, pt-BR, zh-CN.
+
+* **Language detection**. Automatically detects the dominant language/locale, which is then used to transcribe the file. Specify multiple languages/locales to enable language detection.
+
+> [!NOTE]
+> For files larger than 300 MB and/or longer than 2 hours, and for locales that fast transcription doesn't support, the file is processed to generate a multilingual transcript based on the specified locales.
+> If language detection fails, the first language/locale specified is used to transcribe the file.

 ### Field extraction
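For orientation, the transcription options described in the hunk above (multilingual transcription, language detection, and the locale lists) are selected through the analyzer configuration rather than inferred from the audio file alone. The following is a minimal, hypothetical sketch of how such a configuration might look when creating a custom audio analyzer; the `baseAnalyzerId`, `config`, `locales`, and `returnDetails` property names are assumptions modeled on Content Understanding quickstart samples and are not confirmed by this commit, so check the current API reference before relying on them.

```json
{
  "description": "Hypothetical audio analyzer illustrating the locale options described above",
  "baseAnalyzerId": "prebuilt-audioAnalyzer",
  "config": {
    "returnDetails": true,
    "locales": ["en-US", "de-DE", "es-MX"]
  }
}
```

Per the bullets above, listing several locales corresponds to language detection, while leaving the language/locale unspecified (or set to `auto`) corresponds to multilingual transcription.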

@@ -59,15 +74,195 @@ Content Understanding offers advanced audio capabilities, including:

 * **Scenario adaptability**. Adapt the service to your requirements by generating custom fields and extracting relevant data.

-## Content Understanding audio analyzer templates
-
-Content Understanding offers customizable audio analyzer templates:
-
-* **Post-call analysis**. Analyze call recordings to generate conversation transcripts, call summaries, sentiment assessments, and more.
-
-* **Conversation analysis**. Generate transcriptions, summaries, and sentiment assessments from conversation audio recordings.
-
-Start with a template or create a custom analyzer to meet your specific business needs.
+## Content Understanding prebuilt audio analyzers
+
+The prebuilt analyzers let you extract valuable insights from audio content without the need to create an analyzer setup.
+
+All audio analyzers generate transcripts in standard WebVTT format, separated by speaker.
+
+> [!NOTE]
+> Prebuilt analyzers are set to use multilingual transcription with `returnDetails` enabled.
+
+Content Understanding offers the following prebuilt analyzers:
+
+**Post-call analysis (prebuilt-callCenter)**. Analyze call recordings to generate:
+- conversation transcripts with speaker role detection results
+- call summary
+- call sentiment
+- top five topics mentioned
+- list of companies mentioned
+- list of people (name and title/role) mentioned
+- list of relevant call categories
+
+**Example result:**
+```json
+{
+  "id": "bc36da27-004f-475e-b808-8b8aead3b566",
+  "status": "Succeeded",
+  "result": {
+    "analyzerId": "prebuilt-callCenter",
+    "apiVersion": "2025-05-01-preview",
+    "createdAt": "2025-05-06T22:53:28Z",
+    "stringEncoding": "utf8",
+    "warnings": [],
+    "contents": [
+      {
+        "markdown": "# Audio: 00:00.000 => 00:32.183\n\nTranscript\n```\nWEBVTT\n\n00:00.080 --> 00:00.640\n<v Agent>Good day.\n\n00:00.960 --> 00:02.240\n<v Agent>Welcome to Contoso.\n\n00:02.560 --> 00:03.760\n<v Agent>My name is John Doe.\n\n00:03.920 --> 00:05.120\n<v Agent>How can I help you today?\n\n00:05.440 --> 00:06.320\n<v Agent>Yes, good day.\n\n00:06.720 --> 00:08.160\n<v Agent>My name is Maria Smith.\n\n00:08.560 --> 00:11.280\n<v Agent>I would like to inquire about my current point balance.\n\n00:11.680 --> 00:12.560\n<v Agent>No problem.\n\n00:12.880 --> 00:13.920\n<v Agent>I am happy to help.\n\n00:14.240 --> 00:16.720\n<v Agent>I need your date of birth to confirm your identity.\n\n00:17.120 --> 00:19.600\n<v Agent>It is April 19th, 1988.\n\n00:20.000 --> 00:20.480\n<v Agent>Great.\n\n00:20.800 --> 00:24.160\n<v Agent>Your current point balance is 599 points.\n\n00:24.560 --> 00:26.160\n<v Agent>Do you need any more information?\n\n00:26.480 --> 00:27.200\n<v Agent>No, thank you.\n\n00:27.600 --> 00:28.320\n<v Agent>That was all.\n\n00:28.720 --> 00:29.280\n<v Agent>Goodbye.\n\n00:29.680 --> 00:30.320\n<v Agent>You're welcome.\n\n00:30.640 --> 00:31.840\n<v Agent>Goodbye at Contoso.\n```",
+        "fields": {
+          "Summary": {
+            "type": "string",
+            "valueString": "Maria Smith contacted Contoso to inquire about her current point balance. After confirming her identity with her date of birth, the agent, John Doe, informed her that her balance was 599 points. Maria did not require any further assistance, and the call concluded politely."
+          },
+          "Topics": {
+            "type": "array",
+            "valueArray": [
+              {
+                "type": "string",
+                "valueString": "Point balance inquiry"
+              },
+              {
+                "type": "string",
+                "valueString": "Identity confirmation"
+              },
+              {
+                "type": "string",
+                "valueString": "Customer service"
+              }
+            ]
+          },
+          "Companies": {
+            "type": "array",
+            "valueArray": [
+              {
+                "type": "string",
+                "valueString": "Contoso"
+              }
+            ]
+          },
+          "People": {
+            "type": "array",
+            "valueArray": [
+              {
+                "type": "object",
+                "valueObject": {
+                  "Name": {
+                    "type": "string",
+                    "valueString": "John Doe"
+                  },
+                  "Role": {
+                    "type": "string",
+                    "valueString": "Agent"
+                  }
+                }
+              },
+              {
+                "type": "object",
+                "valueObject": {
+                  "Name": {
+                    "type": "string",
+                    "valueString": "Maria Smith"
+                  },
+                  "Role": {
+                    "type": "string",
+                    "valueString": "Customer"
+                  }
+                }
+              }
+            ]
+          },
+          "Sentiment": {
+            "type": "string",
+            "valueString": "Positive"
+          },
+          "Categories": {
+            "type": "array",
+            "valueArray": [
+              {
+                "type": "string",
+                "valueString": "Business"
+              }
+            ]
+          }
+        },
+        "kind": "audioVisual",
+        "startTimeMs": 0,
+        "endTimeMs": 32183,
+        "transcriptPhrases": [
+          {
+            "speaker": "Agent",
+            "startTimeMs": 80,
+            "endTimeMs": 640,
+            "text": "Good day.",
+            "words": []
+          }, ...
+          {
+            "speaker": "Customer",
+            "startTimeMs": 5440,
+            "endTimeMs": 6320,
+            "text": "Yes, good day.",
+            "words": []
+          }, ...
+        ]
+      }
+    ]
+  }
+}
+```
+
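A result payload like the one above is what the service returns once an analyze request against `prebuilt-callCenter` completes. As a hedged sketch only: the request body is typically nothing more than a pointer to the audio file, roughly as shown below. The endpoint shape (for example `POST {endpoint}/contentunderstanding/analyzers/prebuilt-callCenter:analyze?api-version=2025-05-01-preview`), the `url` property name, and the sample blob URL are assumptions drawn from the Content Understanding quickstarts, not something this commit shows; only the analyzer ID and `api-version` value match the result payload above.

```json
{
  "url": "https://example.blob.core.windows.net/media/callcenter-sample.wav"
}
```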
+**Conversation analysis (prebuilt-audioAnalyzer)**. Analyze recordings to generate:
+- conversation transcripts
+- conversation summary
+
+**Example result:**
+```json
+{
+  "id": "9624cc49-b6b3-4ce5-be6c-e895d8c2484d",
+  "status": "Succeeded",
+  "result": {
+    "analyzerId": "prebuilt-audioAnalyzer",
+    "apiVersion": "2025-05-01-preview",
+    "createdAt": "2025-05-06T23:00:12Z",
+    "stringEncoding": "utf8",
+    "warnings": [],
+    "contents": [
+      {
+        "markdown": "# Audio: 00:00.000 => 00:32.183\n\nTranscript\n```\nWEBVTT\n\n00:00.080 --> 00:00.640\n<v Speaker 1>Good day.\n\n00:00.960 --> 00:02.240\n<v Speaker 1>Welcome to Contoso.\n\n00:02.560 --> 00:03.760\n<v Speaker 1>My name is John Doe.\n\n00:03.920 --> 00:05.120\n<v Speaker 1>How can I help you today?\n\n00:05.440 --> 00:06.320\n<v Speaker 1>Yes, good day.\n\n00:06.720 --> 00:08.160\n<v Speaker 1>My name is Maria Smith.\n\n00:08.560 --> 00:11.280\n<v Speaker 1>I would like to inquire about my current point balance.\n\n00:11.680 --> 00:12.560\n<v Speaker 1>No problem.\n\n00:12.880 --> 00:13.920\n<v Speaker 1>I am happy to help.\n\n00:14.240 --> 00:16.720\n<v Speaker 1>I need your date of birth to confirm your identity.\n\n00:17.120 --> 00:19.600\n<v Speaker 1>It is April 19th, 1988.\n\n00:20.000 --> 00:20.480\n<v Speaker 1>Great.\n\n00:20.800 --> 00:24.160\n<v Speaker 1>Your current point balance is 599 points.\n\n00:24.560 --> 00:26.160\n<v Speaker 1>Do you need any more information?\n\n00:26.480 --> 00:27.200\n<v Speaker 1>No, thank you.\n\n00:27.600 --> 00:28.320\n<v Speaker 1>That was all.\n\n00:28.720 --> 00:29.280\n<v Speaker 1>Goodbye.\n\n00:29.680 --> 00:30.320\n<v Speaker 1>You're welcome.\n\n00:30.640 --> 00:31.840\n<v Speaker 1>Goodbye at Contoso.\n```",
+        "fields": {
+          "Summary": {
+            "type": "string",
+            "valueString": "Maria Smith contacted Contoso to inquire about her current point balance. John Doe assisted her by confirming her identity using her date of birth and informed her that her balance was 599 points. Maria expressed no further inquiries, and the conversation concluded politely."
+          }
+        },
+        "kind": "audioVisual",
+        "startTimeMs": 0,
+        "endTimeMs": 32183,
+        "transcriptPhrases": [
+          {
+            "speaker": "Speaker 1",
+            "startTimeMs": 80,
+            "endTimeMs": 640,
+            "text": "Good day.",
+            "words": []
+          }, ...
+          {
+            "speaker": "Speaker 2",
+            "startTimeMs": 5440,
+            "endTimeMs": 6320,
+            "text": "Yes, good day.",
+            "words": []
+          }, ...
+        ]
+      }
+    ]
+  }
+}
+```
+
+You can also customize the prebuilt analyzers for more fine-grained control of the output by defining custom fields. Customization lets you use the full power of generative models to extract deep insights from the audio. For example, you can:
+- Generate other insights
+- Control the language of the field extraction output
+- Configure the transcription behavior
+- and more
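To make the customization list above concrete, here is a minimal, hypothetical sketch of a custom analyzer that builds on `prebuilt-callCenter` and adds two extra fields. The property names (`baseAnalyzerId`, `config`, `fieldSchema`, `fields`) and the field definitions are assumptions modeled on Content Understanding quickstart samples rather than anything confirmed by this commit; only the base analyzer ID and the `returnDetails` and locale settings echo the text above.

```json
{
  "description": "Hypothetical call center analyzer with two custom fields",
  "baseAnalyzerId": "prebuilt-callCenter",
  "config": {
    "returnDetails": true,
    "locales": ["en-US", "de-DE"]
  },
  "fieldSchema": {
    "fields": {
      "FollowUpRequired": {
        "type": "boolean",
        "description": "True if the agent promised any follow-up action."
      },
      "SummaryInGerman": {
        "type": "string",
        "description": "Two-sentence summary of the call, written in German."
      }
    }
  }
}
```

The two custom fields illustrate the first two bullets in the list: generating an additional insight (`FollowUpRequired`) and controlling the language of a generated field (`SummaryInGerman`).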

 ## Input requirements
 For a detailed list of supported audio formats, refer to our [Service limits and codecs](../service-limits.md) page.
