Commit d16fc6d

Merge pull request #5128 from laujan/5091-5095-5100-kate-jp-joe

5091 5095 5100 kate jp joe

2 parents cc49d01 + d7bc455 commit d16fc6d
4 files changed: +34 −10 lines changed
articles/ai-services/content-understanding/overview.md

Lines changed: 5 additions & 0 deletions

@@ -85,6 +85,11 @@ Content Understanding now supports modified content filtering for approved custo
 > * Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).
 > * For more information, *see* [**Content filtering**](../openai/concepts/content-filter.md).
+
+## Face capabilities
+
+The Face capabilities feature in Content Understanding is a limited access service, and registration is required for access. The face grouping and identification features in Content Understanding are limited based on eligibility and usage criteria. The Face service is only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://aka.ms/facerecognition) to apply for access. For more information, see [Microsoft's Limited Access Policy](../../ai-services/cognitive-services-limited-access.md).
+
 ## Data privacy and security
 Developers using the Content Understanding service should review Microsoft's policies on customer data. For more information, visit our [**Data, protection and privacy**](https://www.microsoft.com/trust-center/privacy) page.

articles/ai-services/content-understanding/quickstart/use-ai-foundry.md

Lines changed: 14 additions & 2 deletions

@@ -51,9 +51,16 @@ You can manage the users and their individual roles here:

 :::image type="content" source="../media/quickstarts/cu-management-center.png" alt-text="Screenshot of Project users section of management center.":::

-## Build your first analyzer
+## Create your first task and analyzer

-Now that everything is configured to get started, we can walk through, step-by-step, how to build your first analyzer, starting with building the schema. The schema is the customizable framework that allows the analyzer to extract insights from your data. In this example, the schema is created to extract key data from an invoice document, but you can bring in any type of data and the steps remain the same. For a complete list of supported file types, see [input file limits](../service-limits.md#input-file-limits).
+Now that everything is configured to get started, we can walk through, step-by-step, how to create a task and build your first analyzer. The type of task that you create depends on what data you plan to bring in.
+
+* **Single-file task:** A single-file task uses Content Understanding Standard mode and allows you to bring in one file to create your analyzer.
+* **Multi-file task:** A multi-file task uses Content Understanding Pro mode and allows you to bring in multiple files to create your analyzer. You can also bring in a set of reference data that the service can use to perform multi-step reasoning and draw conclusions about your data. To learn more about the difference between Content Understanding Standard and Pro modes, see [Azure AI Content Understanding pro and standard modes](../concepts/standard-pro-modes.md).
+
+# [Single-file task (Standard mode)](#tab/standard)
+
+To create a single-file Content Understanding task, start by building your field schema. The schema is the customizable framework that allows the analyzer to extract insights from your data. In this example, the schema is created to extract key data from an invoice document, but you can bring in any type of data and the steps remain the same. For a complete list of supported file types, see [input file limits](../service-limits.md#input-file-limits).

 1. Upload a sample file of an invoice document or any other data relevant to your scenario.

@@ -89,6 +96,11 @@ Now that everything is configured to get started, we can walk through, step-by-s

 Now you successfully built your first Content Understanding analyzer and are ready to start extracting insights from your data. Check out [Quickstart: Azure AI Content Understanding REST APIs](./use-rest-api.md) to use the REST API to call your analyzer.

+# [Multi-file task (Pro mode)](#tab/pro)
+
+To create a multi-file Content Understanding task, start by building your field schema. The schema is the customizable framework that allows the analyzer to extract insights from your data. In this example, the schema is created to extract key data from an invoice document, but you can bring in any document-based data and the steps remain the same. For a complete list of supported file types, see [input file limits](../service-limits.md#input-file-limits).
+
 ## Next steps
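The field schema that the Foundry portal builds for you can also be written out as an analyzer definition. Below is a minimal, hypothetical Python sketch of that kind of payload for an invoice analyzer; the field names (`VendorName`, `Total`), the base analyzer ID, and the exact body shape are illustrative assumptions, not the official contract, so check [Quickstart: Azure AI Content Understanding REST APIs](./use-rest-api.md) for the authoritative format.

```python
import json

# Hypothetical sketch of an analyzer definition with a field schema.
# Field names and structure are illustrative, not the official API contract.
analyzer_definition = {
    "description": "Extract key data from invoices",
    "baseAnalyzerId": "prebuilt-documentAnalyzer",  # assumed prebuilt base
    "fieldSchema": {
        "fields": {
            "VendorName": {
                "type": "string",
                "description": "Name of the vendor issuing the invoice",
            },
            "Total": {
                "type": "number",
                "description": "Total amount due, including tax",
            },
        }
    },
}

# The REST quickstart's flow would send this JSON when creating the analyzer;
# here we only serialize it to confirm the payload is well-formed JSON.
payload = json.dumps(analyzer_definition, indent=2)
print(payload)
```

Whatever fields you define in the portal schema editor end up in a structure like this, so the same invoice example carries over between the Foundry and REST quickstarts.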

articles/ai-services/content-understanding/video/overview.md

Lines changed: 8 additions & 8 deletions

@@ -76,7 +76,7 @@ With the prebuilt video analyzer (prebuilt-videoAnalyzer), you can upload a vide
 WEBVTT

 00:03.600 --> 00:06.000
-<Speaker 1 Speaker>Get new years ready.
+<Speaker 1>Get new years ready.

 Key Frames
 - 00:00.600 ![](keyFrame.600.jpg)
@@ -90,7 +90,7 @@ With the prebuilt video analyzer (prebuilt-videoAnalyzer), you can upload a vide
 WEBVTT

 00:03.600 --> 00:06.000
-<Speaker 1 Speaker>Go team!
+<Speaker 1>Go team!

 Key Frames
 - 00:06.200 ![](keyFrame.6200.jpg)
@@ -108,7 +108,7 @@ We recently published a walk-through for RAG on Video using Content Understandin

 1. [Content extraction](#content-extraction-capabilities)
 1. [Field extraction](#field-extraction-and-segmentation)
-1. [Face identification](#face-identification-description-add-on)
+1. [Face identification](#face-identification-and-description-add-on)

 Under the hood, two stages transform raw pixels into business-ready insights. The diagram below shows how extraction feeds generation, ensuring each downstream step has the context it needs.

@@ -186,21 +186,21 @@ Shape the output to match your business vocabulary. Use a `fieldSchema` object w

 Content Understanding offers three ways to slice a video, letting you get the output you need for whole videos or short clips. You can use these options by setting the `segmentationMode` property on a custom analyzer.

-* **Whole-video** `SegmentationMode = NoSegmentation`
+* **Whole-video** `segmentationMode: noSegmentation`
   The service treats the entire video file as a single segment and extracts metadata across its full duration.

   **Example:**
   * Compliance checks that look for specific brand-safety issues anywhere in an ad
   * Full-length descriptive summaries

-* **Automatic segmentation** `SegmentationMode = Auto`
+* **Automatic segmentation** `segmentationMode: auto`
   The service analyzes the timeline and breaks it up for you, grouping successive shots into coherent scenes, capped at one minute each.

   **Example:**
   * Create storyboards from a show
   * Insert mid-roll ads at logical pauses

-* **Custom segmentation** `SegmentationMode = Custom`
+* **Custom segmentation** `segmentationMode: custom`
   You describe the logic in natural language and the model creates segments to match. Set `segmentationDefinition` with a string describing how you'd like the video to be segmented. Custom allows segments of varying length, from seconds to minutes, depending on the prompt.

 **Example:**
@@ -213,7 +213,7 @@ Content Understanding offers three ways to slice a video, letting you get the ou
 }
 ```
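The three options above differ only in the value of `segmentationMode`, plus `segmentationDefinition` for the custom case. As a hedged sketch, here is how the three settings might sit side by side in Python; only the segmentation-related properties are shown, and the sample prompt text is invented for illustration.

```python
# Illustrative sketches of the three segmentation settings. A real analyzer
# definition carries more fields (base analyzer, field schema, and so on).
whole_video = {"segmentationMode": "noSegmentation"}

auto_segments = {"segmentationMode": "auto"}

custom_segments = {
    "segmentationMode": "custom",
    # Natural-language description of the segments you want back (invented example).
    "segmentationDefinition": (
        "Split the video into individual news stories, starting a new "
        "segment each time the anchor introduces a new topic."
    ),
}

for cfg in (whole_video, auto_segments, custom_segments):
    print(cfg["segmentationMode"])
```

Only the custom mode needs `segmentationDefinition`; the other two modes ignore it.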

-## Face identification description add-on
+## Face identification and description add-on

 > [!NOTE]
 >
@@ -230,7 +230,7 @@ The face add-on enables grouping and identification as output from the content e

 ### Field Extraction – Face description

-The field extraction capability is enhanced by providing detailed descriptions of identified faces in the video. This capability includes attributes such as facial hair, emotions, and the presence of celebrities, which can be crucial for various analytical and indexing purposes. To enable face capabilities set `disableFaceBlurring=true` in the analyzer configuration.
+The field extraction capability is enhanced by providing detailed descriptions of identified faces in the video. This capability includes attributes such as facial hair, emotions, and the presence of celebrities, which can be crucial for various analytical and indexing purposes. To enable face description capabilities, set `disableFaceBlurring: true` in the analyzer configuration.

 **Examples:**
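To make the `disableFaceBlurring` switch concrete, here is a hypothetical Python sketch of a video analyzer configuration with face description enabled; only the flag itself comes from the text above, and the surrounding field names (including the `SpeakerMood` field) are placeholder assumptions.

```python
# Hypothetical video analyzer config enabling face descriptions.
# Only disableFaceBlurring is taken from the docs; the rest is placeholder.
video_analyzer = {
    "baseAnalyzerId": "prebuilt-videoAnalyzer",
    "config": {
        # Disabling face blurring unlocks the face description attributes
        # (facial hair, emotions, presence of celebrities).
        "disableFaceBlurring": True,
    },
    "fieldSchema": {
        "fields": {
            "SpeakerMood": {
                "type": "string",
                "description": "Apparent emotion of the main on-screen speaker",
            }
        }
    },
}

print(video_analyzer["config"]["disableFaceBlurring"])
```

Remember that the face add-on is gated behind the Limited Access registration described earlier, so this flag only has an effect for approved resources.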

articles/ai-services/content-understanding/whats-new.md

Lines changed: 7 additions & 0 deletions

@@ -31,6 +31,13 @@ The `pro` mode is currently limited to documents as inputs, with support other t
 Common challenges that the pro mode addresses are aggregating a schema across content from different input files, validating results across documents, and using external knowledge to generate an output schema.
 Learn more about the [pro mode](concepts/standard-pro-modes.md).

+### AI Foundry experience
+
+With this release, the following updates are now available in the Content Understanding experience in Azure AI Foundry:
+
+* Added support for creating both `standard` mode and `pro` mode tasks in the existing Content Understanding experience. With pro mode, you can now bring in your own reference data and create a task that executes multi-step reasoning on your data. Read more about the two task types in [Use Azure AI Content Understanding in the Azure AI Foundry](./quickstart/use-ai-foundry.md).
+* Try-out experiences are now available for general document analysis and invoice analysis. Try out these prebuilt features on your own data and start getting insights without having to create a custom task.
+
 ### Document classification and splitting

 This release introduces a new [classification API](concepts/classifier.md). This API supports classifying and logically splitting a single file containing multiple documents, with optional routing to field extraction analyzers. You can create a custom classifier to split and classify a file into multiple logical documents and route the individual documents to a downstream field extraction model in a single API call.
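To make the classify-split-route flow concrete, here is a deliberately simplified Python toy that mimics the idea locally: one pass assigns each logical document a category, then each document is routed to a per-category downstream analyzer. The split marker, category names, and analyzer IDs are all invented for illustration; see the [classification API](concepts/classifier.md) docs for the service's actual contract.

```python
# Toy illustration of classify -> split -> route. This mimics the flow of
# the classification API locally; it is not the service's real interface.

def classify_pages(pages):
    """Pretend classifier: splits pages into logical documents and labels each."""
    docs, current = [], []
    for page in pages:
        current.append(page)
        if page.endswith("[end]"):  # toy document-boundary marker
            category = "invoice" if "invoice" in current[0] else "contract"
            docs.append((category, list(current)))
            current = []
    return docs

# Routing table: category -> downstream field extraction analyzer ID (invented).
routes = {"invoice": "myInvoiceAnalyzer", "contract": "myContractAnalyzer"}

pages = ["invoice page 1", "invoice page 2 [end]", "contract page 1 [end]"]
for category, doc in classify_pages(pages):
    print(f"{category}: {len(doc)} page(s) -> {routes[category]}")
```

The service does all three steps in a single API call; the toy just shows why routing needs the classification result first.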
