articles/ai-services/content-understanding/document/overview.md — 7 additions & 1 deletion
@@ -7,7 +7,7 @@ ms.author: lajanuar
 manager: nitinme
 ms.service: azure-ai-content-understanding
 ms.topic: overview
-ms.date: 02/19/2025
+ms.date: 05/01/2025
 ms.custom: ignite-2024-understanding-release
 ---
@@ -23,6 +23,12 @@ Content Understanding is a cloud-based [Azure AI Service](../../what-are-ai-serv
 Content Understanding enables organizations to streamline data collection and processing, enhance operational efficiency, optimize data-driven decision making, and empower innovation. With customizable analyzers, Content Understanding allows for easy extraction of content or fields from documents and forms, tailored to specific business needs.
 
+## April updates
+
+* **Invoice prebuilt template**: Extract predefined schemas from various invoice formats. The out-of-the-box schema can be customized by adding or removing fields to suit your specific needs.
+
+* **Generative and classify methods**: Added support for both generative and classification-based methods, enabling you to create generative fields such as summaries or categorize document details into multiple classes using the classify method.
+
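The two generation methods added above can be illustrated with a small field-schema sketch. The structure and key names below are hypothetical, chosen only to show the generate-versus-classify distinction; they are not the exact Content Understanding API shape.

```python
# Illustrative sketch of an analyzer field schema using the two
# generation methods described above. Key names are hypothetical,
# not the actual Content Understanding API schema.
field_schema = {
    "fields": {
        # A generative field: the service writes free-form text.
        "Summary": {
            "type": "string",
            "method": "generate",
            "description": "One-paragraph summary of the document.",
        },
        # A classify field: the service picks one of a fixed set of classes.
        "DocumentCategory": {
            "type": "string",
            "method": "classify",
            "enum": ["invoice", "receipt", "contract", "other"],
        },
    }
}

def classify_categories(schema: dict) -> int:
    """Total categories across all fields using the classify method
    (the quantity the 'Max classify field categories' limit counts)."""
    return sum(
        len(f.get("enum", []))
        for f in schema["fields"].values()
        if f.get("method") == "classify"
    )

print(classify_categories(field_schema))  # 4
```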
 ## Business use cases
 
 Document analyzers can process complex documents in various formats and templates:
-* The *Max fields* limit includes all named fields. For example, a list of strings counts as one field, while a group with string and number subfields counts as three fields. To extract beyond default limits, contact us at [email protected].
+* The *Max fields* limit includes all named fields. For example, a list of strings counts as one field, while a group with string and number subfields counts as three fields. To extend the limit for document fields up to 100, contact us at `[email protected]`.
 * The *Max classify field categories* limit is the total number of categories across all fields using the `classify` generation method.
 * The generation method currently applies only to basic fields.
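The field-counting rule above can be sketched in code. This is a minimal illustration of the stated rule (a list counts as one field, a group counts as itself plus one field per subfield), with illustrative field definitions; it is not the service's actual accounting logic.

```python
# Minimal sketch of the 'Max fields' counting rule described above:
# a list of strings counts as one field, while a group with string
# and number subfields counts as three fields (the group plus each
# subfield). Field definitions below are illustrative.
def count_fields(fields: dict) -> int:
    total = 0
    for definition in fields.values():
        total += 1  # every named field counts, including the group itself
        if definition.get("type") == "group":
            total += len(definition.get("properties", {}))  # plus each subfield
    return total

example = {
    "Tags": {"type": "list", "items": {"type": "string"}},  # counts as 1
    "Vendor": {"type": "group", "properties": {             # counts as 3
        "Name": {"type": "string"},
        "TaxId": {"type": "number"},
    }},
}
print(count_fields(example))  # 4
```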
@@ -83,7 +83,7 @@ The following limits apply as of version 2024-12-01-preview.
 | --- | --- | --- | --- | --- | --- |
 | Basic | No *boolean*| No *date*, *time*|*string*|*string*| No *date*, *time*|
 | List | N/A | No *date*, *time*|*string*|*string*| No *date*, *time*|
-| Group | N/A | No *date*, *time*| N/A | N/A| No *date*, *time*|
+| Group | N/A | No *date*, *time*|*string*|*string*| No *date*, *time*|
 | Table | No *boolean*| No *date*, *time*|*string*|*string*| No *date*, *time*|
articles/ai-services/content-understanding/whats-new.md — 2 additions & 1 deletion
@@ -41,4 +41,5 @@ The Content Understanding **2024-12-01-preview** REST API is now available. This
 * Added download code samples for quick setup.
 
 ## November 2024
-Welcome! The Azure AI Content Understanding API version `2024-12-01-preview` is now in public preview. This version allows you to generate a structured representation of content tailored to specific tasks from various modalities or formats. Content Understanding uses a defined schema to extract content suitable for processing by large language models and subsequent applications.
+
+Welcome! The Azure AI Content Understanding API version `2024-12-01-preview` is now in public preview. This version allows you to generate a structured representation of content tailored to specific tasks from various modalities or formats. Content Understanding uses a defined schema to extract content suitable for processing by large language models and subsequent applications.
articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md — 13 additions & 12 deletions
@@ -72,23 +72,24 @@ Customers that require long-term usage of provisioned, data zoned provisioned, a
 > Charges for deployments on a deleted resource will continue until the resource is purged. To prevent this, delete a resource’s deployment before deleting the resource. For more information, see [Recover or purge deleted Azure AI services resources](../../recover-purge-resources.md).
 
 ## How much throughput per PTU you get for each model
-The amount of throughput (measured in tokens per minute or TPM) a deployment gets per PTU is a function of the input and output tokens in a given minute.
-
-Generating output tokens requires more processing than input tokens. For the models specified in the table below, 1 output token counts as 3 input tokens towards your TPM-per-PTU limit. The service dynamically balances the input & output costs, so users do not have to set specific input and output limits. This approach means your deployment is resilient to fluctuations in the workload.
-
-To help with simplifying the sizing effort, the following table outlines the TPM-per-PTU for the specified models. To understand the impact of output tokens on the TPM-per-PTU limit, use the 3 input token to 1 output token ratio.
-
-For a detailed understanding of how different ratios of input and output tokens impact the throughput your workload needs, see the [Azure OpenAI capacity calculator](https://ai.azure.com/resource/calculator). The table also shows Service Level Agreement (SLA) Latency Target Values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1)
-
-|Topic|**gpt-4o**|**gpt-4o-mini**|**o1**|
-| --- | --- | --- | --- |
-|Global & data zone provisioned minimum deployment|15|15|15|
-|Global & data zone provisioned scale increment|5|5|5|
-|Latency Target Value |25 Tokens Per Second|33 Tokens Per Second|25 Tokens Per Second|
+The amount of throughput (measured in tokens per minute or TPM) a deployment gets per PTU is a function of the input and output tokens in a given minute. Generating output tokens requires more processing than input tokens. Starting with GPT 4.1 models and later, the system matches the global standard price ratio between input and output tokens. Cached tokens are deducted 100% from the utilization.
+
+For example, for `gpt-4.1:2025-04-14`, 1 output token counts as 4 input tokens towards your utilization limit, which matches the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). Older models use a different ratio; for a deeper understanding of how different ratios of input and output tokens impact the throughput your workload needs, see the [Azure OpenAI capacity calculator](https://ai.azure.com/resource/calculator).
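As a rough sizing aid, the 4:1 ratio described above can be sketched as an input-token-equivalent load: input tokens plus four times output tokens, with cached input tokens deducted. This is an approximation for intuition only, with an assumed formula and parameter names, not the service's exact metering; use the capacity calculator for real sizing.

```python
def estimated_token_load(input_tokens: int, output_tokens: int,
                         cached_input_tokens: int = 0,
                         output_ratio: int = 4) -> int:
    """Rough input-token-equivalent load for one minute of traffic.

    Assumption-based sketch: output_ratio=4 reflects the gpt-4.1
    example above (1 output token counts as 4 input tokens); older
    models such as gpt-4o used a 3:1 ratio. Cached tokens are
    deducted 100% from the utilization.
    """
    return (input_tokens - cached_input_tokens) + output_ratio * output_tokens

# 10,000 prompt tokens (2,000 served from cache) and 1,000 completion tokens:
print(estimated_token_load(10_000, 1_000, cached_input_tokens=2_000))  # 12000
```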