articles/ai-services/content-understanding/video/overview.md

This format can drop straight into a vector store to enable agent or RAG workloads.
From there you can **customize the analyzer** for more fine-grained control of the output. You can define custom fields, segments, or enable face identification. Customization allows you to use the full power of generative models to extract deep insights from the visual and audio details of the video. For example, customization allows you to:
- Identify what products and brands are seen or mentioned in the video.
- Segment a news broadcast into chapters based on the topics or news stories discussed.
38
38
- Use face identification to label speakers as executives, for example, `CEO John Doe`, `CFO Jane Smith`.
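
For illustration, here's a minimal sketch of what such a customized analyzer definition could look like. It assumes a JSON analyzer body with `baseAnalyzerId`, `config`, and `fieldSchema` properties; the segmentation settings and field definitions are approximations of the capabilities described above, not an exact API contract:

```json
{
  "description": "Illustrative custom video analyzer for news broadcasts",
  "baseAnalyzerId": "prebuilt-videoAnalyzer",
  "config": {
    "returnDetails": true,
    "enableFace": true,
    "segmentationMode": "custom",
    "segmentationDefinition": "Split the broadcast into chapters, one per news story discussed."
  },
  "fieldSchema": {
    "fields": {
      "BrandsAndProducts": {
        "type": "string",
        "description": "Products and brands seen or mentioned in this segment."
      },
      "SpeakerRole": {
        "type": "string",
        "description": "Executive role of any identified speaker, for example CEO or CFO."
      }
    }
  }
}
```

Each bullet above maps to either a `config` switch or a `fieldSchema` field in a definition along these lines.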
## Why use Content Understanding for video?
Content understanding for video has broad potential uses.
## Prebuilt video analyzer example
With the prebuilt video analyzer (`prebuilt-videoAnalyzer`), you can upload a video and get an immediately usable knowledge asset. The service packages every clip into both richly formatted Markdown and JSON. This process allows your search index or chat agent to ingest the content without custom glue code.

Calling `prebuilt-videoAnalyzer` with no custom schema returns a document like the following (abridged) example:
```markdown
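<!-- Illustrative, abridged sketch only: the segment description, transcript lines, and
     key frame names below are invented examples, not actual service output. -->
# Video: 00:00.000 => 00:30.000
Width: 1280
Height: 720

## Segment 1: 00:00.000 => 00:06.000
A news anchor opens the broadcast and introduces the day's top story.

Transcript

WEBVTT

00:00.500 --> 00:05.200
<Speaker 1> Good evening, and welcome to the evening news.

Key Frames
- 00:00.600 ![](keyFrame.600.jpg)
- 00:01.200 ![](keyFrame.1200.jpg)
```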
The service operates in two stages. The first stage, content extraction, is all about extracting a first set of details: who is speaking, where the cuts are, and which faces recur. It creates a solid metadata backbone that later steps can reason over.
* **Transcription:** Converts conversational audio into searchable and analyzable text-based transcripts in WebVTT format. Sentence-level timestamps are available if `returnDetails=true` is set (see the configuration sketch after this list). Content Understanding supports the full set of Azure AI Speech speech-to-text languages. For more information on supported languages, see [Language and region support](../language-region-support.md#language-support). The following transcription details are important to consider:
* **Diarization:** Distinguishes between speakers in a conversation in the output, attributing parts of the transcript to specific speakers.
* **Multilingual transcription:** Generates multilingual transcripts, applying a language/locale to each phrase in the transcript. Phrases are output when `returnDetails=true` is set. Unlike language detection, this feature is enabled when no language/locale is specified or when the language is set to `auto`.
> [!NOTE]
> When multilingual transcription is used, a file with an unsupported locale still produces a result. This result is based on the closest locale, but it's most likely not correct.
> This is a known transcription behavior. Make sure to configure locales when you aren't using multilingual transcription.
* **Shot detection:** Identifies segments of the video aligned with shot boundaries where possible, allowing for precise editing and repackaging of content with breaks exactly on shot boundaries.
* **Key frame extraction:** Extracts key frames from videos to represent each shot completely, ensuring each shot has enough key frames to enable field extraction to work effectively.
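
As referenced in the transcription bullet above, here's a minimal configuration sketch for content extraction. It assumes a JSON `config` object with `returnDetails` and `locales` properties; treat these names as approximations of the options described in this section rather than the exact API surface:

```json
{
  "config": {
    "returnDetails": true,
    "locales": ["en-US", "es-ES"]
  }
}
```

To get multilingual transcription as described above, omit the locale list or set the language to `auto`; to pin transcription to known languages, list the locales explicitly.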
## Field extraction and segmentation
Next, the generative model layers meaning: tagging scenes, summarizing actions, and slicing footage into segments per your request. This is where prompts turn into structured data.

Shape the output to match your business vocabulary. Use a `fieldSchema` object to define the fields you want extracted. For example:
* **Media asset management:**
* **Video category:** Helps editors and producers organize content by classifying it as News, Sports, Interview, Documentary, Advertisement, and so on. Useful for metadata tagging and quicker content filtering and retrieval.
* **Color scheme:** Conveys mood and atmosphere, essential for narrative consistency and viewer engagement. Identifying color themes helps in finding matching clips for accelerated video editing.
* **Advertising:**

Content Understanding offers three ways to slice a video, letting you get the output you need.
Face identification description is an add-on that provides context to content extraction and field extraction using face information.
> [!NOTE]
>
> This feature is limited access and involves face identification and grouping; customers need to register for access at [Face Recognition](https://aka.ms/facerecognition). Face features incur added costs.
### Content extraction: grouping and identification
The face add-on enables grouping and identification as output from the content extraction section. To enable face capabilities, set `enableFace=true` in the analyzer configuration.
* **Grouping:** Groups faces appearing in a video to extract one representative face image for each person and provides segments where each one is present. The grouped face data is available as metadata and can be used to generate customized metadata fields when `returnDetails: true` is set for the analyzer.
* **Identification:** Labels individuals in the video with names based on a Face API person directory. Customers can enable this feature by supplying the name of a Face API person directory in the current resource in the `personDirectoryId` property of the analyzer.
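
Here's a minimal sketch of the face-related settings described in this section, again assuming a JSON `config` object; the exact placement of `personDirectoryId` and the directory name used here are assumptions for illustration:

```json
{
  "config": {
    "enableFace": true,
    "returnDetails": true,
    "personDirectoryId": "executive-team-directory"
  }
}
```

With `enableFace` and `returnDetails` set, grouping metadata is returned; supplying a `personDirectoryId` that names an existing Face API person directory in the same resource additionally turns on identification.
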
Specific limitations of video processing to keep in mind:
* **Frame sampling (~1 FPS):** The analyzer inspects about one frame per second. Rapid motions or single-frame events may be missed.
* **Frame resolution (512 × 512 px):** Sampled frames are resized to 512 pixels square. Small text or distant objects can be lost.
* **Speech:** Only spoken words are transcribed. Music, sound effects, and ambient noise are ignored.
## Input requirements
See [Language and region support](../language-region-support.md).

As with all Azure AI services, review Microsoft's [Data, protection, and privacy](https://www.microsoft.com/trust-center/privacy) documentation.
> [!IMPORTANT]
>
> If you process **Biometric Data** (for example, enable **Face Grouping** or **Face Identification**), you must meet all notice, consent, and deletion requirements under GDPR or other applicable laws. See [Data and Privacy for Face](/legal/cognitive-services/face/data-privacy-security).