**articles/ai-services/content-understanding/overview.md** (5 additions, 0 deletions)
@@ -85,6 +85,11 @@ Content Understanding now supports modified content filtering for approved custo
> * Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).
> * For more information, *see* [**Content filtering**](../openai/concepts/content-filter.md).
+
+## Face capabilities
+
+The Face capabilities feature in Content Understanding is a Limited Access service, and registration is required for access. The face grouping and identification features in Content Understanding are limited based on eligibility and usage criteria. The Face service is available only to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://aka.ms/facerecognition) to apply for access. For more information, see [Microsoft's Limited Access Policy](../../ai-services/cognitive-services-limited-access.md).
+
## Data privacy and security
Developers using the Content Understanding service should review Microsoft's policies on customer data. For more information, visit our [**Data, protection and privacy**](https://www.microsoft.com/trust-center/privacy) page.
**articles/ai-services/content-understanding/quickstart/use-ai-foundry.md** (14 additions, 2 deletions)
@@ -51,9 +51,16 @@ You can manage the users and their individual roles here:
:::image type="content" source="../media/quickstarts/cu-management-center.png" alt-text="Screenshot of Project users section of management center.":::
-## Build your first analyzer
+## Create your first task and analyzer
-Now that everything is configured to get started, we can walk through, step-by-step, how to build your first analyzer, starting with building the schema. The schema is the customizable framework that allows the analyzer to extract insights from your data. In this example, the schema is created to extract key data from an invoice document, but you can bring in any type of data and the steps remain the same. For a complete list of supported file types, see [input file limits](../service-limits.md#input-file-limits).
+Now that everything is configured, we can walk through, step by step, how to create a task and build your first analyzer. The type of task you create depends on the data you plan to bring in.
+
+* **Single-file task:** A single-file task uses Content Understanding Standard mode and lets you bring in one file to create your analyzer.
+* **Multi-file task:** A multi-file task uses Content Understanding Pro mode and lets you bring in multiple files to create your analyzer. You can also bring in a set of reference data that the service uses to perform multi-step reasoning and draw conclusions about your data. To learn more about the difference between Standard and Pro modes, see [Azure AI Content Understanding pro and standard modes](../concepts/standard-pro-modes.md).
+
+To create a single-file Content Understanding task, start by building your field schema. The schema is the customizable framework that allows the analyzer to extract insights from your data. In this example, the schema is created to extract key data from an invoice document, but you can bring in any type of data and the steps remain the same. For a complete list of supported file types, see [input file limits](../service-limits.md#input-file-limits).
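For orientation before the portal steps, here's a minimal sketch of the kind of field schema this walkthrough builds for an invoice. The field names, types, and JSON shape are illustrative assumptions rather than the service's literal format; the portal assembles the real schema for you as you complete the steps below.

```json
{
  "fields": {
    "VendorName": {
      "type": "string",
      "description": "Name of the vendor issuing the invoice"
    },
    "InvoiceDate": {
      "type": "date",
      "description": "Date the invoice was issued"
    },
    "TotalAmount": {
      "type": "number",
      "description": "Total amount due, including tax"
    }
  }
}
```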
1. Upload a sample file of an invoice document or any other data relevant to your scenario.
@@ -89,6 +96,11 @@ Now that everything is configured to get started, we can walk through, step-by-s
Now that you've successfully built your first Content Understanding analyzer, you're ready to start extracting insights from your data. Check out [Quickstart: Azure AI Content Understanding REST APIs](./use-rest-api.md) to use the REST API to call your analyzer.
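As a preview of that quickstart, calling an analyzer from the REST API is a single POST, for example `POST {endpoint}/contentunderstanding/analyzers/{analyzerId}:analyze?api-version=<version>`, with a body that points at the file to analyze. The route and body shown here are a hedged sketch; take the exact URL, API version, and authentication headers from the linked quickstart.

```json
{
  "url": "https://example.com/documents/invoice-sample.pdf"
}
```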
+# [Multi-file task (Pro mode)](#tab/pro)
+
+To create a multi-file Content Understanding task, start by building your field schema. The schema is the customizable framework that allows the analyzer to extract insights from your data. In this example, the schema is created to extract key data from an invoice document, but you can bring in any document-based data and the steps remain the same. For a complete list of supported file types, see [input file limits](../service-limits.md#input-file-limits).
Under the hood, two stages transform raw pixels into business-ready insights. The diagram below shows how extraction feeds generation, ensuring each downstream step has the context it needs.
@@ -186,21 +186,21 @@ Shape the output to match your business vocabulary. Use a `fieldSchema` object w
Content Understanding offers three ways to slice a video, letting you get the output you need for whole videos or short clips. You can use these options by setting the `SegmentationMode` property on a custom analyzer.
You describe the logic in natural language and the model creates segments to match. Set `segmentationDefinition` to a string describing how you'd like the video to be segmented. Custom segmentation allows segments of varying length, from seconds to minutes, depending on the prompt.
**Example:**
@@ -213,7 +213,7 @@ Content Understanding offers three ways to slice a video, letting you get the ou
}
```
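The body of the example above is trimmed in this view. As an illustrative sketch, a custom-segmentation configuration can look like the following; the property names follow the `segmentationMode` and `segmentationDefinition` settings described earlier, and the prompt string is an invented placeholder.

```json
{
  "segmentationMode": "custom",
  "segmentationDefinition": "Create one segment per news story, starting each segment when the anchor introduces the story."
}
```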
-## Face identification description add-on
+## Face identification and description add-on
> [!NOTE]
>
@@ -230,7 +230,7 @@ The face add-on enables grouping and identification as output from the content e
### Field Extraction – Face description
-The field extraction capability is enhanced by providing detailed descriptions of identified faces in the video. This capability includes attributes such as facial hair, emotions, and the presence of celebrities, which can be crucial for various analytical and indexing purposes. To enable face capabilities set `disableFaceBlurring=true` in the analyzer configuration.
+The field extraction capability is enhanced by providing detailed descriptions of identified faces in the video. This capability includes attributes such as facial hair, emotions, and the presence of celebrities, which can be crucial for various analytical and indexing purposes. To enable face description capabilities, set `disableFaceBlurring : true` in the analyzer configuration.
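For instance, a minimal analyzer configuration that opts into face description might look like the following sketch. Nesting the flag under a `config` object is an assumption based on the description above; check the analyzer reference for the exact placement.

```json
{
  "config": {
    "disableFaceBlurring": true
  }
}
```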
**articles/ai-services/content-understanding/whats-new.md** (7 additions, 0 deletions)
@@ -31,6 +31,13 @@ The `pro` mode is currently limited to documents as inputs, with support other t
Common challenges that the pro mode addresses are aggregating a schema across content from different input files, validating results across documents, and using external knowledge to generate an output schema.
Learn more about the [pro mode](concepts/standard-pro-modes.md).
+### AI Foundry experience
+
+With this release, the following updates are now available in the Content Understanding experience in Azure AI Foundry:
+
+* Added support for creating both `standard` mode and `pro` mode tasks in the existing Content Understanding experience. With pro mode, you can bring in your own reference data and create a task that performs multi-step reasoning on your data. Read more about the two task types in [Use Azure AI Content Understanding in the Azure AI Foundry](./quickstart/use-ai-foundry.md).
+* Try-out experiences are now available for general document analysis and invoice analysis. Try out these prebuilt features on your own data and start getting insights without having to create a custom task.
### Document classification and splitting
This release introduces a new [classification API](concepts/classifier.md). This API supports classifying and logically splitting a single file containing multiple documents with optional routing to field extraction analyzers. You can create a custom classifier to split and classify a file into multiple logical documents and route the individual documents to a downstream field extraction model in a single API call.
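As a hypothetical sketch of the idea, a classifier definition might map categories to downstream analyzers along these lines. The property names (`categories`, `analyzerId`, `splitMode`) and the category labels are illustrative assumptions; see the [classification API](concepts/classifier.md) concept page for the actual schema.

```json
{
  "categories": {
    "invoice": {
      "analyzerId": "my-invoice-analyzer"
    },
    "contract": {
      "analyzerId": "my-contract-analyzer"
    }
  },
  "splitMode": "auto"
}
```

With a definition like this, a single API call can split a combined file, label each logical document, and route invoices and contracts to their respective field extraction analyzers.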