---
description: Learn about Azure AI Content Understanding audio solutions
author: laujan
ms.author: jagoerge
manager: nitinme
ms.service: azure-ai-content-understanding
ms.topic: overview
ms.date: 05/19/2025
---
# Content Understanding audio solutions (preview)
> [!IMPORTANT]
> * Azure AI Content Understanding is available in preview. Public preview releases provide early access to features that are in active development.
> * Features, approaches, and processes may change or have limited capabilities before general availability (GA).
> * For more information, *see* [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms).
Audio analyzers enable transcription and diarization of conversational audio, extracting structured fields such as summaries, sentiments, and key topics. Customize an audio analyzer template to your business needs using [Azure AI Foundry portal](https://ai.azure.com/) to start generating results.
Here are common scenarios for conversational audio data processing:
* Gain customer insights through summarization and sentiment analysis.
* Assess and verify call quality and compliance in call centers.
* Create automated summaries and metadata for podcast publishing.
## Audio analyzer capabilities
:::image type="content" source="../media/audio/overview/workflow-diagram-preview.png" lightbox="../media/audio/overview/workflow-diagram-preview.png" alt-text="Illustration of Content Understanding audio capabilities.":::
Content Understanding serves as a cornerstone for Speech Analytics solutions, enabling the following capabilities for audio files:
### Content extraction
Audio content extraction is the process of isolating and retrieving specific elements or features from an audio file. This process can include separating individual audio sources, identifying specific segments within a sound file, or detecting and categorizing various characteristics of the audio content.
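As a minimal sketch of how an analysis request might be assembled over REST, consider the following. The endpoint path, API version, and analyzer ID here are illustrative assumptions, not the documented contract; substitute the values for your own resource:

```python
import json

# Hypothetical values -- substitute your own resource endpoint and analyzer ID.
ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
ANALYZER_ID = "my-audio-analyzer"    # assumed custom analyzer name
API_VERSION = "2024-12-01-preview"   # assumed preview API version

def build_analyze_request(audio_url: str):
    """Assemble the URL, headers, and JSON body for a (hypothetical) analyze call."""
    url = (f"{ENDPOINT}/contentunderstanding/analyzers/"
           f"{ANALYZER_ID}:analyze?api-version={API_VERSION}")
    headers = {
        "Ocp-Apim-Subscription-Key": "YOUR-KEY",  # placeholder credential
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": audio_url})
    return url, headers, body

url, headers, body = build_analyze_request("https://example.com/call-recording.wav")
```

The request body points at a publicly reachable audio URL; sending the request and polling for the result are omitted here.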
#### Language handling
We support different options to handle language processing during transcription.
The following table provides an overview of the options controlled via the `locales` configuration:

|`locales` configuration|File size|Transcription behavior|Supported locales|Result latency|
|---|---|---|---|---|
|**multiple locales**|≤ 1 GB and/or ≤ 4 hours|Single language transcription (based on language detection)|All supported locales[^1]|• ≤ 300 MB and/or ≤ 2 hours: near real time<br>• > 300 MB and > 2 hours ≤ 4 hours: regular|
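For illustration, here's a minimal sketch of how the `locales` setting might appear in an analyzer configuration. The structure and key names are assumptions for illustration, not the exact analyzer schema:

```python
import json

# Structure assumed for illustration -- not the exact analyzer schema.
# An explicit locale list enables language detection across those locales;
# setting "auto" (or omitting locales) enables multilingual transcription.
single_language_config = {"locales": ["en-US", "de-DE", "fr-FR"]}
multilingual_config = {"locales": ["auto"]}

print(json.dumps(single_language_config, indent=2))
```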
[^1]: Content Understanding supports the full set of [Azure AI Speech speech to text languages](../../speech-service/language-support.md).
For languages with fast transcription support and for files ≤ 300 MB and/or ≤ 2 hours, transcription time is reduced substantially.
* **Transcription**. Converts conversational audio into searchable and analyzable text-based transcripts in WebVTT format. Customizable fields can be generated from transcription data. Sentence-level and word-level timestamps are available upon request.
* **Diarization**. Distinguishes between speakers in a conversation, attributing parts of the transcript to specific speakers.
* **Multilingual transcription**. Generates multilingual transcripts, applying a language/locale per phrase. Unlike language detection, this feature is enabled when no language/locale is specified or when the language is set to `auto`.
  > [!NOTE]
  > When multilingual transcription is used, a file with an unsupported locale still produces a result. This result is based on the closest locale but is most likely not correct. This behavior is by design. Make sure to configure locales when you aren't using multilingual transcription!
* **Language detection**. Automatically detects the dominant language/locale, which is used to transcribe the file. Set multiple languages/locales to enable language detection.
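Since transcripts are returned as WebVTT, speaker-attributed cues can be pulled out with a few lines of parsing. The sample transcript and the `<v Speaker>` voice-tag convention below are assumptions for illustration; inspect your actual analyzer output for the exact shape:

```python
import re

# Illustrative WebVTT transcript with diarized speakers (assumed format).
SAMPLE_VTT = """WEBVTT

00:00:00.000 --> 00:00:04.000
<v Speaker 1>Thanks for calling, how can I help?

00:00:04.500 --> 00:00:08.000
<v Speaker 2>Hi, I have a question about my bill.
"""

# One cue: "start --> end" timing line, then a voice-tagged text line.
CUE = re.compile(
    r"(\d{2}:\d{2}:\d{2}\.\d{3}) --> (\d{2}:\d{2}:\d{2}\.\d{3})\n<v ([^>]+)>(.+)"
)

def parse_cues(vtt: str) -> list[dict]:
    """Return one dict per cue: start/end timestamps, speaker, and text."""
    return [
        {"start": m[0], "end": m[1], "speaker": m[2], "text": m[3].strip()}
        for m in CUE.findall(vtt)
    ]

cues = parse_cues(SAMPLE_VTT)
```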
### Field extraction
Field extraction allows you to extract structured data from audio files, such as summaries, sentiments, and mentioned entities from call logs. You can begin by customizing a suggested analyzer template or creating one from scratch.
* **Customizable data extraction**. Tailor the output to your specific needs by modifying the field schema, allowing for precise data generation and extraction.
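A minimal sketch of what a custom field schema for call analysis might look like follows. The key names (`fields`, `type`, `method`, `enum`) are assumptions for illustration, not the exact schema; in practice you'd start from a suggested analyzer template in the Azure AI Foundry portal:

```python
# Illustrative field schema for a call-center audio analyzer.
# Key names ("fields", "type", "method", "enum") are assumed for illustration.
field_schema = {
    "fields": {
        "Summary": {"type": "string", "method": "generate",
                    "description": "A two-sentence summary of the call."},
        "Sentiment": {"type": "string", "method": "classify",
                      "enum": ["Positive", "Neutral", "Negative"]},
        "Topics": {"type": "array", "method": "generate",
                   "description": "Key topics mentioned in the call."},
    }
}
```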
For an end-to-end quickstart for Speech Analytics solutions, see the [Conversation knowledge mining solution accelerator](https://aka.ms/Conversational-Knowledge-Mining).
Gain actionable insights from large volumes of conversational data by identifying key themes, patterns, and relationships. By using Azure AI Foundry, Azure AI Content Understanding, Azure OpenAI Service, and Azure AI Search, this solution analyzes unstructured dialogue and maps it to meaningful, structured insights.
Capabilities such as topic modeling, key phrase extraction, speech-to-text transcription, and interactive chat enable users to explore data naturally and make faster, more informed decisions.
Analysts working with large volumes of conversational data can use this solution to extract insights through natural language interaction. It supports tasks like identifying customer support trends, improving contact center quality, and uncovering operational intelligence—enabling teams to spot patterns, act on feedback, and make informed decisions faster.
## Input requirements
For a detailed list of supported audio formats, *see* [Service limits and codecs](../service-limits.md).
## Supported languages and regions
For a complete list of supported regions, languages, and locales, see [Language and region support](../language-region-support.md).
## Data privacy and security
Developers using this service should review Microsoft's policies on customer data. For more information, *see* [Data, protection, and privacy](https://www.microsoft.com/trust-center/privacy).
## Next steps
* Try processing your audio content using Content Understanding in [**Azure AI Foundry portal**](https://aka.ms/cu-landing).
* Learn how to analyze audio content with [**analyzer templates**](../quickstart/use-ai-foundry.md).