Commit 5ab9fd5

Merge pull request #3180 from laujan/tonye-2746
Tonye 2746
2 parents 5dfc401 + 68e0293 commit 5ab9fd5

6 files changed: +167 −0 lines changed

Lines changed: 164 additions & 0 deletions
---
title: Azure AI Content Understanding Capabilities Overview
titleSuffix: Azure AI services
description: Learn about Azure AI Content Understanding capabilities.
author: laujan
ms.author: lajanuar
manager: nitinme
ms.service: azure-ai-content-understanding
ms.topic: overview
ms.date: 02/25/2025
ms.custom: 2025-understanding-release
---

# Content Understanding Capabilities (preview)

> [!IMPORTANT]
>
> * Azure AI Content Understanding is available in preview. Public preview releases provide early access to features that are in active development.
> * Features, approaches, and processes can change or have limited capabilities before general availability (GA).
> * For more information, *see* [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms).
Content Understanding provides an advanced approach to processing and interpreting vast amounts of unstructured data. It offers various capabilities that accelerate time-to-value, reducing the time required to derive meaningful insights. By generating outputs that seamlessly integrate into analytical workflows and Retrieval-Augmented Generation (RAG) applications, it enhances data-driven decision-making and boosts overall productivity.

## Overview of Key Capabilities in Content Understanding

### Multimodal Data Ingestion

Content Understanding delivers a unified solution for processing diverse data types - documents, text, images, audio, and video - through an intelligent pipeline that transforms unstructured content into structured, analyzable formats. This consolidated approach eliminates the complexity of managing separate Azure resources for speech, vision, and document processing.

The service employs a customizable dual-pipeline architecture that combines [content extraction](#content-extraction) and [field extraction](#field-extraction) capabilities. Content extraction provides foundational structuring of raw data, while field extraction applies schema-based analysis to derive specific insights. This integrated approach streamlines workflows, reduces operational overhead, and enables sophisticated analysis across multiple modalities through a single, cohesive interface.

### Content Extraction

Content extraction in Content Understanding transforms unstructured data into structured data, powering advanced AI processing capabilities. The structured data enables efficient downstream processing while maintaining contextual relationships in the source content.

Content extraction provides foundational data that grounds the generative capabilities of field extraction, offering essential context about the input content. It's invaluable for converting diverse data formats into a structured form, and excels in scenarios requiring:

* Document digitization, indexing, and retrieval by structure
* Audio/video transcription
* Metadata generation at scale

Content Understanding enhances its core extraction capabilities through optional add-on features that provide deeper content analysis. These add-ons can extract ancillary elements like layout information, speaker roles, and face grouping. While some add-ons incur added costs, they can be selectively enabled based on your specific requirements to optimize both functionality and cost-efficiency. The modular nature of these add-on features allows for customized processing pipelines tailored to your use case.
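The selective add-on model above can be sketched in a few lines of Python. Note this is an illustrative sketch only: the article doesn't show the service's request shape, so the property names (`mode`, `contentExtraction`, `addOns`) and the helper function are hypothetical, not the actual API contract.

```python
# Hypothetical sketch of a document content-extraction config. Property
# names and the helper are illustrative, not the service's real contract.

def build_document_config(addons):
    """Build a document config enabling only requested add-ons.

    Restricting add-ons to the ones this article names for documents
    (layout, barcode, formula) keeps functionality and cost predictable.
    """
    supported = {"layout", "barcode", "formula"}
    unknown = set(addons) - supported
    if unknown:
        raise ValueError(f"Unsupported add-ons: {sorted(unknown)}")
    return {
        "mode": "document",
        "contentExtraction": {"ocr": True},  # OCR is the core capability
        "addOns": sorted(addons),            # enable only what you need
    }

config = build_document_config({"layout", "barcode"})
print(config["addOns"])  # → ['barcode', 'layout']
```

The point of the sketch is the design choice: add-ons are opt-in per request, so a pipeline that only needs OCR never pays for layout or formula analysis.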
The following section details the content extraction capabilities and optional add-on features available for each supported modality. Select your target modality from the following tabs to view its specific capabilities.

# [Document](#tab/document)

|Content Extraction|Add-on Capabilities|
|-------------|-------------|
|&bullet; **`Optical Character Recognition (OCR)`**: Extracts printed and handwritten text from documents in various file formats, converting it into structured data. </br>| &bullet; **`Layout`**: Extracts layout information such as paragraphs, sections, and tables.</br> &bullet; **`Barcode`**: Identifies and decodes all barcodes in the documents.</br> &bullet; **`Formula`**: Recognizes mathematical equations in the documents. </br> |

# [Image](#tab/image)

> [!NOTE]
> Content extraction for images isn't fully supported. The image modality currently supports field extraction capabilities only.

# [Audio](#tab/audio)

|Content Extraction|Add-on Capabilities|
|-------------|-------------|
|&bullet; **`Transcription`**: Converts conversational audio into searchable and analyzable text-based transcripts in WebVTT format. Customizable fields can be generated from transcription data. Sentence-level and word-level timestamps are available upon request.</br> &bullet; **`Diarization`**: Distinguishes between speakers in a conversation, attributing parts of the transcript to specific speakers. </br> &bullet; **`Language detection`**: Automatically detects the language spoken in the audio to be processed.</br>| &bullet; **`Speaker role detection`**: Identifies speaker roles based on diarization results and replaces generic labels like "Speaker 1" with specific role names, such as "Agent" or "Customer." </br>|

# [Video](#tab/video)

|Content Extraction|Add-on Capabilities|
|-------------|-------------|
|&bullet; **`Transcription`**: Converts speech to structured, searchable text via Azure AI Speech, allowing users to specify recognition languages. </br>&bullet; **`Shot Detection`**: Identifies segments of the video aligned with shot boundaries where possible, allowing for precise editing and repackaging of content with breaks exactly on shot boundaries.</br> &bullet; **`Key Frame Extraction`**: Extracts key frames from videos to represent each shot completely, ensuring each shot has enough key frames to enable field extraction to work effectively.</br> | &bullet; **`Face Grouping`**: Groups faces appearing in a video, extracts one representative face image for each person, and provides segments where each person is present. The grouped face data is available as metadata and can be used to generate customized metadata fields. This feature is limited access and involves face identification and grouping; customers need to register for access at Face Recognition. |

----
### Field Extraction

Field extraction in Content Understanding uses generative AI models to define schemas that extract, infer, or abstract information from various data types into structured outputs. Because schemas are defined with natural-language field descriptions, this capability eliminates the need for complex prompt engineering and makes it easy for users to create standardized outputs.

Field extraction is optimized for scenarios requiring:

* Consistent metadata extraction across content types
* Workflow automation with structured output
* Compliance monitoring and validation

Its value lies in its ability to handle multiple content types (text, audio, video, images) while maintaining accuracy and scalability through AI-powered schema extraction and confidence scoring.
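A field schema of this kind can be sketched as plain data. This is an assumed shape for illustration only: the property names (`fields`, `method`, `enum`, and so on) are hypothetical stand-ins, not the service's documented contract, and the `call_insights` schema is invented for the example.

```python
# Illustrative field schema for a call-center audio analyzer. Property
# names are assumptions for the sketch; consult the API reference for
# the real contract.
call_schema = {
    "name": "call_insights",
    "fields": {
        "summary": {
            "type": "string",
            "method": "generate",  # derive a value from the content
            "description": "A two-sentence summary of the conversation.",
        },
        "sentiment": {
            "type": "string",
            "method": "classify",  # pick one of a fixed set of categories
            "enum": ["positive", "neutral", "negative"],
            "description": "Overall sentiment of the customer.",
        },
    },
}

# The natural-language descriptions stand in for prompt engineering:
# each field's description tells the generative model what to produce.
methods = sorted({f["method"] for f in call_schema["fields"].values()})
print(methods)  # → ['classify', 'generate']
```

Notice that no prompt appears anywhere: the field descriptions carry the intent, which is the accessibility claim the paragraph above makes.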
Each modality supports specific generation approaches optimized for that content type. Review the following tabs to understand the generation capabilities and methods available for your target modality.

# [Document](#tab/document)

|Supported generation methods|
|--------------|
|&bullet; **Extract**: In documents, users can extract field values from input content, such as dates from receipts or item details from invoices. |

:::image type="content" source="../media/capabilities/document-extraction.gif" alt-text="Illustration of Document extraction method workflow.":::

# [Image](#tab/image)

|Supported generation methods|
|--------------|
|&bullet; **Generate**: In images, users can derive values from the input content, such as generating titles, descriptions, and summaries for figures and charts. <br> &bullet; **Classify**: In images, users can categorize elements from the input content, such as identifying different types of charts like histograms and bar graphs.<br> |

:::image type="content" source="../media/capabilities/chart-analysis.gif" alt-text="Illustration of Image Generation and Classification workflow.":::

# [Audio](#tab/audio)

|Supported generation methods|
|--------------|
|&bullet; **Generate**: In audio, users can derive values from the input content, such as conversation summaries and topics. <br> &bullet; **Classify**: In audio, users can categorize values from the input content, such as determining the sentiment of a conversation (positive, neutral, or negative).<br> |

:::image type="content" source="../media/capabilities/audio-analysis.gif" alt-text="Illustration of Audio Generation and Classification workflow.":::

# [Video](#tab/video)

|Supported generation methods|
|--------------|
|&bullet; **Generate**: In video, users can derive values from the input content, such as summaries of video segments and product characteristics. <br> &bullet; **Classify**: In video, users can categorize values from the input content, such as determining the sentiment of conversations (positive, neutral, or negative). <br>|

:::image type="content" source="../media/capabilities/media-asset.gif" alt-text="Illustration of Video Generation and Classification workflow.":::

-------
Follow our quickstart guide [to build your first schema](../quickstart/use-ai-foundry.md#build-your-first-analyzer).

#### Grounding and Confidence Scores

Content Understanding ensures that the results from field and content extraction are precisely aligned with the input content. It also provides confidence scores for the extracted data, enhancing the reliability of automation and validation processes.
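One common way to put confidence scores to work in an automation pipeline is threshold-based triage: auto-accept high-confidence fields and route the rest to human review. A minimal sketch, assuming a result shape where each field carries a `value` and a `confidence` (the shape is illustrative, not the service's actual response format):

```python
# Minimal triage sketch: the {"value", "confidence"} result shape is
# assumed for illustration, not the service's actual response format.
def triage(fields, threshold=0.8):
    """Split extracted fields into auto-accepted and needs-review buckets."""
    accepted, review = {}, {}
    for name, result in fields.items():
        bucket = accepted if result["confidence"] >= threshold else review
        bucket[name] = result["value"]
    return accepted, review

fields = {
    "invoice_date": {"value": "2025-02-25", "confidence": 0.97},
    "total": {"value": "118.40", "confidence": 0.62},
}
accepted, review = triage(fields)
print(sorted(review))  # → ['total']
```

Tuning the threshold trades review workload against error rate, which is why per-field confidence (rather than a single document-level score) matters for validation workflows.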
### Analyzers

Analyzers are the core processing units in Content Understanding that define how your content should be processed and what insights should be extracted. Think of an analyzer as a custom pipeline that combines:

* Content extraction configurations - determining what foundational elements to extract.
* Field extraction schemas - specifying what insights to generate from the content.

Key benefits of analyzers include:

* **Consistency**: Analyzers ensure uniform processing across all content by applying the same extraction rules and schemas, delivering reliable and predictable results.

* **Scalability**: Once configured, analyzers can handle large volumes of content through API integration, making them ideal for production scenarios.

* **Reusability**: A single analyzer can be reused across multiple workflows and applications, reducing development overhead.

* **Customization**: Start with prebuilt templates, then fully customize your analyzers to match your specific business requirements and use cases.

For example, you might create an analyzer for processing customer service calls that combines audio transcription (content extraction) with sentiment analysis and topic classification (field extraction). This analyzer can then consistently process thousands of calls, providing structured insights for your customer experience analytics.
To get started, you can follow our guide for [building your first analyzer](../concepts/analyzer-templates.md).

### Best Practices

For guidance on optimizing your Content Understanding implementations, including schema design tips, see our detailed [Best practices guide](best-practices.md). This guide helps you maximize the value of Content Understanding while avoiding common pitfalls.

### Input requirements

For detailed information on supported input document formats, refer to our [Service quotas and limits](../service-limits.md) page.

### Supported languages and regions

For a detailed list of supported languages and regions, visit our [Language and region support](../language-region-support.md) page.

### Data privacy and security

Developers using Content Understanding should review Microsoft's policies on customer data. For more information, visit our [Data, protection, and privacy](https://www.microsoft.com/trust-center/privacy) page.

## Next steps

* Try processing your document content using Content Understanding in [Azure AI Foundry](https://ai.azure.com/).
* Learn to analyze content using [**analyzer templates**](../quickstart/use-ai-foundry.md).
* Review code sample: [**analyzer templates**](https://github.com/Azure-Samples/azure-ai-content-understanding-python/tree/main/analyzer_templates).
* Take a look at our [**glossary**](../glossary.md).

articles/ai-services/content-understanding/toc.yml

Lines changed: 3 additions & 0 deletions
@@ -36,6 +36,9 @@ items:
        href: quickstart/use-ai-foundry.md
      - name: Capabilities
        items:
+       - name: Overview
+         displayName: content understanding capabilities, document, text, images, video, audio, visual, structured, content, field, extraction
+         href: concepts/capabilities.md
      - name: Document
        displayName: document, text, images, video, audio, visual, structured, content, field, extraction
        href: document/overview.md
