articles/ai-services/content-understanding/glossary.md (+1 −1)
@@ -24,7 +24,7 @@ ms.author: lajanuar
|**Field schema**| A formal description of the fields to extract from the input. It specifies the name, description, value type, generation method, and more for each field. |
|**Generation method**| The process of determining the extracted value of a specified field. Content Understanding supports: <br/> •**Extract**: Directly extract values from the input content, such as dates from receipts or item details from invoices. <br/> •**Classify**: Classify content into predefined categories, such as call sentiment or chart type. <br/> •**Generate**: Generate values from input data, such as summarizing an audio conversation or generating scene descriptions from videos. |
|**Span**| A reference indicating the location of an element (for example, field, word) within the extracted Markdown content. A character offset and length represent a span. Different programming languages use various character encodings, which can affect the exact offset and length values for Unicode text. To avoid confusion, spans are only returned if the desired encoding is explicitly specified in the request. Some elements can map to multiple spans if they aren't contiguous in the markdown (for example, page). |
- | **Processing Location** | An API request parameter that defines the geographic region where your data is analyzed by Azure AI services. You can choose from three options: `geography`, `dataZone`, and `global` to control where processing occurs. This setting helps meet data residency requirements and optimize performance or scalability based on your needs. For more details, see the [API reference documentation](/rest/api/contentunderstanding/content-analyzers/analyze?view=rest-contentunderstanding-2025-05-01-preview&tabs=HTTP#uri-parameters).
+ | **Processing Location** | An API request parameter that defines the geographic region where Azure AI Services analyzes your data. You can choose from three options: `geography`, `dataZone`, and `global` to control where processing occurs. This setting helps meet data residency requirements and optimize performance or scalability based on your needs. For more information, *see* the Content Understanding API reference documentation.
|**Grounding source**| The specific regions in content where a value was generated. It has different representations depending on the file type: <br>•**Image** - A polygon in the image, often an axis-aligned rectangle (bounding box). <br>•**PDF/TIFF** - A polygon on a specific page, often a quadrilateral. <br>•**Audio** - A start and end time range. <br>•**Video** - A start and end time range with an optional polygon in each frame, often a bounding box.|
|**Person directory**| A structured way to store face data for recognition tasks. You can add individual faces to the directory and later search for visually similar faces. You can also create person profiles, associate faces to them, and match new face images to known individuals. This setup supports both flexible face matching and identity recognition across images and videos. |
|**Confidence score**| The level of certainty that the extracted data is accurate. |
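The **Span** entry above notes that offsets and lengths depend on the character encoding requested. A minimal sketch of why that matters — the markdown string, field value, and encoding names (`codePoint`, `utf16`, `utf8`) here are illustrative, not necessarily the service's actual parameter values:

```python
# Sketch: the same field value yields different span offsets depending on the
# unit of measurement. The sample text and encoding names are hypothetical,
# not actual Content Understanding output or API parameter values.

def span_of(markdown: str, value: str, encoding: str) -> tuple[int, int]:
    """Return (offset, length) of `value` in `markdown`, measured in the
    requested encoding's units."""
    start = markdown.index(value)  # index in Unicode code points
    prefix = markdown[:start]
    if encoding == "codePoint":
        return len(prefix), len(value)
    if encoding == "utf16":  # UTF-16 code units (2 bytes each)
        unit = lambda s: len(s.encode("utf-16-le")) // 2
    elif encoding == "utf8":  # UTF-8 bytes
        unit = lambda s: len(s.encode("utf-8"))
    else:
        raise ValueError(f"unknown encoding: {encoding}")
    return unit(prefix), unit(value)

markdown = "Total: 💶 1,234.56 EUR"
value = "1,234.56"
print(span_of(markdown, value, "codePoint"))  # (9, 8)
print(span_of(markdown, value, "utf16"))      # (10, 8) — 💶 is a surrogate pair
print(span_of(markdown, value, "utf8"))       # (12, 8) — 💶 is 4 UTF-8 bytes
```

This is why the service only returns spans when the desired encoding is explicitly specified: without it, a client counting in a different unit would slice the wrong characters.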
articles/ai-services/content-understanding/image/overview.md (+1 −1)
@@ -55,7 +55,7 @@ Get started with processing images with Content Understanding by following our [
## Next steps
- * For guidance on optimizing your Content Understanding implementations, including schema design tips, see our detailed [Best practices guide](best-practices.md).
+ * For guidance on optimizing your Content Understanding implementations, including schema design tips, see our detailed [Best practices guide](../concepts/best-practices.md).
* For detailed information on supported input image formats, refer to our [Service quotas and limits](../service-limits.md) page.
articles/ai-services/content-understanding/overview.md (+2 −2)
@@ -70,12 +70,11 @@ See [Quickstart](quickstart/use-ai-foundry.md) for more examples.
|Grounding source| Content Understanding identifies the specific regions in the content where the value was generated from. Source grounding allows users in automation scenarios to quickly verify the correctness of the field values, leading to higher confidence in the extracted data. |
|Confidence score | Content Understanding provides confidence scores from 0 to 1 to estimate the reliability of the results. High scores indicate accurate data extraction, enabling straight-through processing in automation workflows.|
-
## Responsible AI
Azure AI Content Understanding is designed to guard against processing harmful content, such as graphic violence and gore, hateful speech and bullying, exploitation, abuse, and more. For more information and a full list of prohibited content, *see* our [**Transparency note**](/legal/cognitive-services/content-understanding/transparency-note?toc=/azure/ai-services/content-understanding/toc.json&bc=/azure/ai-services/content-understanding/breadcrumb/toc.json) and our [**Code of Conduct**](https://aka.ms/AI-CoC).
- ### Modified Content Filtering
+ ### Modified content filtering
Content Understanding now supports modified content filtering for approved customers. For subscription IDs approved for modified content filtering, this setting affects Content Understanding output. By default, Content Understanding employs a content filtering system that identifies specific risk categories for potentially harmful content in both submitted prompts and generated outputs. Modified content filtering allows the system to annotate rather than block potentially harmful output, letting you decide how to handle it. For more information on content filter types, *see* [Content filtering: filter types](../openai/concepts/content-filter.md#content-filter-types).
@@ -93,6 +92,7 @@ Developers using the Content Understanding service should review Microsoft's pol
> If you're using Microsoft products or services to process Biometric Data, you're responsible for: (i) providing notice to data subjects, including with respect to retention periods and destruction; (ii) obtaining consent from data subjects; and (iii) deleting the Biometric Data, all as appropriate, and required under applicable Data Protection Requirements. "Biometric Data" has the meaning articulated in Article 4 of the GDPR and, if applicable, equivalent terms in other data protection requirements. For related information, see [Data and Privacy for Face](/legal/cognitive-services/face/data-privacy-security).
## Getting started
+
Our quickstart guides help you quickly start using the Content Understanding service:
* [**Azure AI Foundry portal Quickstart**](quickstart/use-ai-foundry.md)
articles/ai-services/content-understanding/quickstart/use-rest-api.md (+8 −8)
@@ -12,13 +12,13 @@ ms.date: 05/19/2025
# Quickstart: Azure AI Content Understanding REST APIs
- * This quickstart shows you how to use the [Content Understanding REST API](/rest/api/contentunderstanding/operation-groups?view=rest-contentunderstanding-2025-05-01-preview&preserve-view=true) to get structured data from multimodal content in document, image, audio, and video files.
+ * This quickstart shows you how to use the Content Understanding REST API to get structured data from multimodal content in document, image, audio, and video files.
* Try [Content Understanding with no code on Azure AI Foundry](https://ai.azure.com/explore/aiservices/vision/contentunderstanding)
## Prerequisites
- To get started, you need **An Active Azure Subscription**. If you don't have an Azure Account, [create one for free](https://azure.microsoft.com/free/).
+ To get started, you need **an active Azure subscription**. If you don't have an Azure account, [create one for free](https://azure.microsoft.com/free/).
* Once you have your Azure subscription, create an [Azure AI Services resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesAIServices) in the Azure portal. This multi-service resource enables access to multiple Azure AI services with a single set of credentials.
@@ -36,8 +36,8 @@ To get started, you need **An Active Azure Subscription**. If you don't have an
## Get started with a prebuilt analyzer
- Analyzers define how your content will be processed and the insights that will be extracted. We offer [pre-built analyzers](link to pre-built analyzer page) for common use cases. You can [customize pre-built analyzers](/how-to/create-a-custom-analyzer) to better fit your specific needs and use cases.
- This quickstart uses pre-built document, image, audio, and video analyzers to help you get started.
+ Analyzers define how your content is processed and the insights that are extracted. We offer [prebuilt analyzers](../concepts/prebuilt-analyzers.md) for common use cases. You can [customize prebuilt analyzers](/how-to/create-a-custom-analyzer.md) to better fit your specific needs and use cases.
+ This quickstart uses prebuilt document, image, audio, and video analyzers to help you get started.
### Send file for analysis
#### POST request
@@ -52,7 +52,7 @@ Before running the cURL command, make the following changes to the HTTP request:
# [Document](#tab/document)
1. Replace `{endpoint}` and `{key}` with the corresponding values from your Azure AI Services instance in the Azure portal.
- 2. Replace `{analyzerId}` with `prebuilt-documentAnalyzer`. This analyzer extracts text and layout elements such as paragraphs, sections, and tables from a document..
+ 2. Replace `{analyzerId}` with `prebuilt-documentAnalyzer`. This analyzer extracts text and layout elements such as paragraphs, sections, and tables from a document.
3. Replace `{fileUrl}` with a publicly accessible URL of the file to analyze—such as a path to an Azure Storage Blob with a shared access signature (SAS), or use the sample URL: `https://github.com/Azure-Samples/azure-ai-content-understanding-python/raw/refs/heads/main/data/invoice.pdf`.
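The three substitutions above can be sketched in Python as well; this is a hypothetical stdlib-only version, where the `:analyze` path segment and request body shape are assumptions inferred from this quickstart's cURL examples — confirm them against the REST reference before relying on them.

```python
# Hypothetical sketch of the "Send file for analysis" POST request.
# The ':analyze' URL shape and {"url": ...} body are assumptions based on
# this quickstart's cURL examples, not a verified API contract.
import json
import urllib.request

API_VERSION = "2025-05-01-preview"

def build_analyze_url(endpoint: str, analyzer_id: str) -> str:
    """Construct the analyze URL for a prebuilt or custom analyzer."""
    return (f"{endpoint.rstrip('/')}/contentunderstanding/analyzers/"
            f"{analyzer_id}:analyze?api-version={API_VERSION}")

def begin_analysis(endpoint: str, key: str,
                   analyzer_id: str, file_url: str) -> str:
    """POST the file URL for analysis and return the resultId to poll."""
    req = urllib.request.Request(
        build_analyze_url(endpoint, analyzer_id),
        data=json.dumps({"url": file_url}).encode("utf-8"),
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["resultId"]

# Example call (substitute your own endpoint and key):
# result_id = begin_analysis(
#     "https://<resource>.cognitiveservices.azure.com", "<key>",
#     "prebuilt-documentAnalyzer",
#     "https://github.com/Azure-Samples/azure-ai-content-understanding-python/raw/refs/heads/main/data/invoice.pdf")
```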
# [Image](#tab/image)
@@ -95,7 +95,7 @@ The response returns `resultId` that you can use to track the status of this asy
### Get analyze result
- Use the `resultId` from the `POST` response above and retrieve the result of the analysis.
+ Use the `resultId` from the [`POST` response](#post-response) and retrieve the result of the analysis.
1. Replace `{endpoint}` and `{key}` with the endpoint and key values from your Azure portal Azure AI Services instance.
2. Replace `{resultId}` with the `resultId` from the `POST` response.
@@ -108,7 +108,7 @@ curl -i -X GET "{endpoint}/contentunderstanding/analyzerResults/{resultId}?api-v
#### GET response
- The 200 (`OK`) JSON response includes a `status` field indicating the status of the operation. If the operation isn't complete, the value of `status` is `running` or `notStarted`. In such cases, you should send the GET request again, either manually or through a script. Wait an interval of one second or more between calls.
+ The 200 (`OK`) JSON response includes a `status` field indicating the status of the operation. If the operation isn't complete, the value of `status` is `running` or `notStarted`. In such cases, you should send the `GET` request again, either manually or through a script. Wait an interval of one second or more between calls.
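The poll-until-done flow described here can be sketched as a small retry loop. In this hypothetical helper the fetch call is injected so the logic is testable without the service; the in-progress statuses `running` and `notStarted` come from this quickstart, while treating anything else as terminal is an assumption.

```python
# Sketch of polling the GET analyzerResults endpoint until the operation
# completes, waiting at least one second between calls as advised above.
# `running`/`notStarted` are from this quickstart; treating any other
# status as terminal is an assumption.
import time
from typing import Callable

def poll_result(fetch: Callable[[], dict],
                interval_s: float = 1.0,
                max_attempts: int = 60) -> dict:
    """Call `fetch` until the returned JSON's `status` leaves the
    in-progress states, sleeping `interval_s` between attempts."""
    for _ in range(max_attempts):
        result = fetch()
        if result.get("status") not in ("running", "notStarted"):
            return result  # terminal state reached
        time.sleep(interval_s)
    raise TimeoutError("analysis did not complete within max_attempts")

# In production, `fetch` would issue the GET request shown in this
# quickstart:
#   GET {endpoint}/contentunderstanding/analyzerResults/{resultId}?api-version=...
```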
# [Document](#tab/document)
@@ -349,6 +349,6 @@ The 200 (`OK`) JSON response includes a `status` field indicating the status of
## Next steps
- * In this quickstart, you learned how to call the [REST API](/rest/api/contentunderstanding/content-analyzers/analyze?view=rest-contentunderstanding-2025-05-01-preview) using a pre-built analyzer. See how you can [create a custom analyzer](/how-to/create-a-custom-analyzer) to better fit your use case.
+ * In this quickstart, you learned how to call the [REST API](/rest/api/contentunderstanding/content-analyzers/analyze?view=rest-contentunderstanding-2025-05-01-preview) using a prebuilt analyzer. See how you can [create a custom analyzer](/how-to/create-a-custom-analyzer) to better fit your use case.