> * Features, approaches, and processes can change or have limited capabilities before general availability (GA).
> * For more information, *see* [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms).
Content Understanding provides an advanced approach to processing and interpreting vast amounts of unstructured data. It offers various capabilities that accelerate time-to-value, reducing the time required to derive meaningful insights. By generating outputs that seamlessly integrate into analytical workflows and Retrieval-Augmented Generation (RAG) applications, it enhances data-driven decision-making and boosts overall productivity.
## Overview of Key Capabilities in Content Understanding
The service employs a customizable dual-pipeline architecture that combines content extraction and field extraction.
Content extraction in Content Understanding transforms unstructured data into structured data, powering advanced AI processing capabilities. The structured data enables efficient downstream processing while maintaining contextual relationships in the source content.
Content extraction provides foundational data that grounds the generative capabilities of Field Extraction, offering essential context about the input content. Users find content extraction invaluable for converting diverse data formats into a structured format. This capability excels in scenarios requiring:
* Document digitization, indexing, and retrieval by structure
* Audio/video transcription
* Metadata generation at scale
Content Understanding enhances its core extraction capabilities through optional add-on features that provide deeper content analysis. These add-ons can extract ancillary elements like layout information, speaker roles, and face grouping. While some add-ons may incur additional costs, they can be selectively enabled based on your specific requirements to optimize both functionality and cost-efficiency. The modular nature of these add-on features allows for customized processing pipelines tailored to your use case.
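As a sketch of what selective enablement can look like in practice, the helper below builds a per-request add-on configuration. The flag names and dictionary shape are illustrative assumptions for this article, not the service's actual request schema:

```python
# Hypothetical add-on configuration builder: enable only the add-ons a
# workflow needs, leaving costlier ones switched off. The add-on names
# mirror the document add-ons described above; the dict shape is an
# illustrative assumption, not the service's actual request format.
def build_addon_config(enabled_addons):
    """Return a flag dict with only the requested add-ons enabled."""
    supported = {"layout", "barcode", "formula"}
    unknown = set(enabled_addons) - supported
    if unknown:
        raise ValueError(f"unsupported add-ons: {sorted(unknown)}")
    return {addon: (addon in enabled_addons) for addon in sorted(supported)}

# Enable layout and barcode extraction, but skip formula recognition.
config = build_addon_config({"layout", "barcode"})
```

Because each add-on is an independent flag, cost-sensitive pipelines can start with everything off and enable features one at a time as requirements emerge.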
The following section details the content extraction capabilities and optional add-on features available for each supported modality. Select your target modality from the following tabs to view its specific capabilities.
# [Document](#tab/document)
|Content Extraction|Add-on Capabilities|
|-------------|-------------|
|•**`Optical Character Recognition (OCR)`**: Extracts printed and handwritten text from documents in various file formats, converting it into structured data. <br>|•**`Layout`**: Extracts layout information such as paragraphs, sections, and tables. <br> •**`Barcode`**: Identifies and decodes all barcodes in the documents. <br> •**`Formula`**: Recognizes all identified mathematical equations from the documents. <br> |
# [Image](#tab/image)
> [!NOTE]
> Content extraction for images is currently not fully supported; the image modality supports field extraction capabilities only.
# [Audio](#tab/audio)
|Content Extraction|Add-on Capabilities|
|-------------|-------------|
|•**`Transcription`**: Converts conversational audio into searchable and analyzable text-based transcripts in WebVTT format. Customizable fields can be generated from transcription data. Sentence-level and word-level timestamps are available upon request. <br> •**`Diarization`**: Distinguishes between speakers in a conversation, attributing parts of the transcript to specific speakers. <br> •**`Language detection`**: Automatically detects the language spoken in the audio to be processed. <br>|•**`Speaker role detection`**: Identifies speaker roles based on diarization results and replaces generic labels like "Speaker 1" with specific role names, such as "Agent" or "Customer." <br>|
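Because transcripts arrive in WebVTT format, downstream code can consume them with a small cue parser. A minimal sketch (the transcript snippet below is invented for illustration):

```python
import re

# Minimal WebVTT cue parser: pairs each "HH:MM:SS.mmm --> HH:MM:SS.mmm"
# timing line with the text line(s) that follow it. The transcript
# snippet is invented for illustration.
CUE_RE = re.compile(r"(\d{2}:\d{2}:\d{2}\.\d{3}) --> (\d{2}:\d{2}:\d{2}\.\d{3})")

def parse_vtt(vtt: str):
    cues, current = [], None
    for line in vtt.splitlines():
        match = CUE_RE.match(line.strip())
        if match:
            current = {"start": match.group(1), "end": match.group(2), "text": ""}
            cues.append(current)
        elif current is not None and line.strip():
            current["text"] = (current["text"] + " " + line.strip()).strip()
    return cues

transcript = """WEBVTT

00:00:01.000 --> 00:00:04.000
Agent: Thanks for calling, how can I help?

00:00:04.500 --> 00:00:07.250
Customer: I'd like to check my order status.
"""
cues = parse_vtt(transcript)
```

A parser like this keeps the sentence-level timestamps attached to each utterance, which is what later field extraction steps can ground against.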
# [Video](#tab/video)
|Content Extraction|Add-on Capabilities|
|-------------|-------------|
|•**`Transcription`**: Converts speech to structured, searchable text via Azure AI Speech, allowing users to specify recognition languages. <br>•**`Shot Detection`**: Identifies segments of the video aligned with shot boundaries where possible, allowing for precise editing and repackaging of content with breaks exactly on shot boundaries. <br> •**`Key Frame Extraction`**: Extracts key frames from videos to represent each shot completely, ensuring each shot has enough key frames to enable Field Extraction to work effectively. <br> |•**`Face Grouping`**: Groups faces appearing in a video, extracts one representative face image for each person, and provides the segments where each person is present. The grouped face data is available as metadata and can be used to generate customized metadata fields. This feature is limited access and involves face identification and grouping; customers need to register for access at Face Recognition. |
----
### Field Extraction
Field extraction in Content Understanding uses generative AI models to define schemas that extract, infer, or abstract information from various data types into structured outputs. Because schemas are defined with natural-language field descriptions, this capability eliminates the need for complex prompt engineering and makes it straightforward for users to create standardized outputs.
Field extraction is optimized for scenarios requiring:
* Consistent metadata extraction across content types
* Workflow automation with structured output
* Compliance monitoring and validation
The value lies in its ability to handle multiple content types (text, audio, video, images) while maintaining accuracy and scalability through AI-powered schema extraction and confidence scoring.
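A field schema in this style pairs each named field with a type and a natural-language description that stands in for a hand-tuned prompt. The exact schema format below is an illustrative assumption, not the service's actual wire format:

```python
# Illustrative field extraction schema: each field pairs a type with a
# natural-language description the generative model uses in place of a
# hand-crafted prompt. The dict shape is an assumption for illustration.
invoice_schema = {
    "fields": {
        "VendorName": {
            "type": "string",
            "description": "Name of the company issuing the invoice.",
        },
        "InvoiceDate": {
            "type": "date",
            "description": "Date the invoice was issued.",
        },
        "TotalAmount": {
            "type": "number",
            "description": "Final amount due, including tax.",
        },
    }
}

def validate_schema(schema):
    """Check that every field declares both a type and a description."""
    for name, spec in schema["fields"].items():
        if "type" not in spec or "description" not in spec:
            raise ValueError(f"field {name!r} is missing type or description")
    return True
```

The descriptions do the work a prompt would otherwise do, so the same schema can be reused unchanged across receipts, invoices, and other documents of the same shape.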
Each modality supports specific generation approaches optimized for that content type. Review the following tabs to understand the generation capabilities and methods available for your target modality.
# [Document](#tab/document)
|Supported generation methods|
|--------------|
|•**Extract**: In documents, users can extract field values from input content, such as dates from receipts or item details from invoices. |
:::image type="content" source="../media/capabilities/document-extraction.gif" alt-text="Illustration of Document extraction method workflow.":::
# [Image](#tab/image)
|Supported generation methods|
|--------------|
|•**Generate**: In images, users can derive values from the input content, such as generating titles, descriptions, and summaries for figures and charts. <br> •**Classify**: In images, users can categorize elements from the input content, such as identifying different types of charts like histograms, bar graphs, etc.<br> |
:::image type="content" source="../media/capabilities/chart-analysis.gif" alt-text="Illustration of Image Generation and Classification workflow.":::
# [Audio](#tab/audio)
|Supported generation methods|
|--------------|
|•**Generate**: In audio, users can derive values from the input content, such as conversation summaries and topics. <br> •**Classify**: In audio, users can categorize values from the input content, such as determining the sentiment of a conversation (positive, neutral, or negative).<br> |
:::image type="content" source="../media/capabilities/audio-analysis.gif" alt-text="Illustration of Audio Generation and Classification workflow.":::
# [Video](#tab/video)
Follow our quickstart guide [to build your first schema](../quickstart/use-ai-foundry.md).
#### Grounding and Confidence Scores
Content Understanding ensures that the results from field and content extraction are precisely aligned with the input content. It also provides confidence scores for the extracted data, enhancing the reliability of automation and validation processes.
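In an automation pipeline, confidence scores let you auto-accept high-confidence fields and route the rest to human review. A sketch, assuming a hypothetical result shape rather than the service's actual response format:

```python
# Route extracted fields by confidence: auto-accept above a threshold,
# queue the rest for human review. The result shape below is a
# hypothetical illustration, not the service's actual response format.
def triage_fields(fields, threshold=0.8):
    accepted, review = {}, {}
    for name, field in fields.items():
        target = accepted if field["confidence"] >= threshold else review
        target[name] = field["value"]
    return accepted, review

results = {
    "VendorName": {"value": "Contoso Ltd.", "confidence": 0.97},
    "TotalAmount": {"value": 1042.50, "confidence": 0.62},
}
accepted, review = triage_fields(results)
```

Tuning the threshold trades automation rate against review cost; grounding to the source content makes the review step fast because each flagged value can be checked against its location in the input.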
### Analyzers
Key benefits of analyzers include:
* **Reusability**: A single analyzer can be reused across multiple workflows and applications, reducing development overhead.
* **Customization**: Start with prebuilt templates, then fully customize analyzers to match your specific business requirements and use cases.
For example, you might create an analyzer for processing customer service calls that combines audio transcription (content extraction) with sentiment analysis and topic classification (field extraction). This analyzer can then consistently process thousands of calls, providing structured insights for your customer experience analytics.
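Downstream, the structured per-call output such an analyzer produces can be aggregated directly into experience metrics. A sketch, where the per-call records are a hypothetical illustration of field extraction output (sentiment plus topic):

```python
from collections import Counter

# Aggregate structured analyzer output across many calls into simple
# customer-experience metrics. The per-call records are a hypothetical
# illustration of field extraction output, not actual service responses.
def summarize_calls(calls):
    sentiment = Counter(call["sentiment"] for call in calls)
    topics = Counter(call["topic"] for call in calls)
    return {
        "sentiment": dict(sentiment),
        "top_topic": topics.most_common(1)[0][0],
    }

calls = [
    {"sentiment": "positive", "topic": "billing"},
    {"sentiment": "negative", "topic": "billing"},
    {"sentiment": "positive", "topic": "shipping"},
]
summary = summarize_calls(calls)
```

Because the analyzer emits the same schema for every call, this aggregation stays trivial no matter how many thousands of calls flow through it.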
Follow our quickstart guide to [build your first analyzer](../quickstart/use-ai-foundry.md#analyzer-templates).
### Best Practices
For guidance on optimizing your Content Understanding implementations, including schema design tips, see our detailed [Best practices guide](best-practices.md). This guide helps you maximize the value of Content Understanding while avoiding common pitfalls.
### Input requirements
For detailed information on supported input document formats, refer to our [Service quotas and limits](../service-limits.md) page.
For a detailed list of supported languages and regions, visit our Language and region support page.
Developers using Content Understanding should review Microsoft's policies on customer data. For more information, visit our [Data, protection, and privacy](https://www.microsoft.com/trust-center/privacy) page.
## Next steps
* Try processing your document content using Content Understanding in [Azure AI Foundry](https://ai.azure.com/).
* Learn to analyze content using [**analyzer templates**](../quickstart/use-ai-foundry.md).