articles/ai-services/document-intelligence/faq.yml
Lines changed: 19 additions & 19 deletions
@@ -7,7 +7,7 @@ metadata:
 ms.service: azure-ai-document-intelligence
 ms.custom: references_regions
 ms.topic: faq
-ms.date: 05/13/2024
+ms.date: 05/23/2024
 ms.author: lajanuar
 title: Azure AI Document Intelligence FAQ
 summary: |
@@ -30,15 +30,15 @@ sections:
 There are no changes to pricing. The names Cognitive Services and Applied AI Services continue to be used in Azure billing, cost analysis, price lists, and price APIs.
-There are no breaking changes to APIs or client libraries (SDKs). REST APIs and SDK versions 2024-02-29-preview, 2023-10-31-preview, and later are renamed `document intelligence`.
+There are no breaking changes to APIs or client libraries. REST APIs and SDK versions 2024-02-29-preview, 2023-10-31-preview, and going forward are renamed `document intelligence`.
 Some platforms are still awaiting the renaming update. In Microsoft documentation, all mentions of Form Recognizer and Document Intelligence refer to the same Azure service.
 - question: |
 How is Document Intelligence related to document generative AI?
 answer: |
-You can use a document generative AI solution to chat with your documents, generate captivating content from those documents, and access Azure OpenAI Service models on your data. With Azure AI Document Intelligence and Azure OpenAI combined, you can build an enterprise application to seamlessly interact with your documents by using natural languages, easily find answers and gain valuable insights, and generate new and engaging content from your existing documents. Find more details in the [technical community blog](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/document-generative-ai-the-power-of-azure-ai-document/ba-p/3875015).
+You can use a document generative AI solution to chat with your documents, generate captivating content from those documents, and access Azure OpenAI Service models on your data. With Azure AI Document Intelligence and Azure OpenAI combined, you can build an enterprise application to seamlessly interact with your documents. Find more details in the [technical community blog](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/document-generative-ai-the-power-of-azure-ai-document/ba-p/3875015).
 - question: |
 How is Document Intelligence related to retrieval-augmented generation?
@@ -158,7 +158,7 @@ sections:
 What is a bounding box?
 answer: |
-A bounding box (`polygon` in v3.0 and later versions) is an abstract rectangle that surrounds text elements in a document or form. It's used as a reference point for object detection.
+A bounding box (`polygon` in v3.0 and later versions) is an abstract rectangle that surrounds text elements in a document used as a reference point for object detection.
 The bounding box specifies position by using an x and y coordinate plane presented in an array of four numerical pairs. Each pair represents a corner of the box in the following order: upper left, upper right, lower right, lower left.
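For reference, a minimal Python sketch of working with the corner pairs described above. It assumes the polygon is already available as a list of four (x, y) pairs in the documented order; the values are illustrative, not output from the service.

```python
# Minimal sketch: interpreting a bounding polygon for a text element.
# Assumes `polygon` is a list of four (x, y) pairs in the documented order:
# upper left, upper right, lower right, lower left. Values are illustrative.
polygon = [(1.0, 1.0), (4.2, 1.0), (4.2, 1.8), (1.0, 1.8)]

corners = dict(zip(["upper_left", "upper_right", "lower_right", "lower_left"], polygon))

# Derive a simple axis-aligned width and height from the corners.
width = corners["upper_right"][0] - corners["upper_left"][0]
height = corners["lower_left"][1] - corners["upper_left"][1]

print(corners)
print(f"width={width:.2f}, height={height:.2f}")
```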
@@ -189,35 +189,35 @@ sections:
 - Basic
-- **Cognitive Services User**: You need this role for a [Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Cognitive Services multiple-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource to read/write data and is **required to call the API**.
+- **Cognitive Services User**: You need this role for a [Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Azure Cognitive Services multiple-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource to use Document Intelligence Studio.
 - Advanced
-- **Contributor**: You need this role to create a resource group or a Document Intelligence resource. The Contributor role doesn't allow you to list keys for Cognitive Services and doesn't give you access to use the created resources or storage, it only allows a user to read/write the resource itself. To use Document Intelligence Studio, you still need the Cognitive Services User role.
+- **Contributor**: You need this role to create a resource group or a Document Intelligence resource.
 For custom model projects, here are the role requirements for user scenarios:
 - Basic
-- **Cognitive Services User**: You need this role for a [Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Cognitive Services multiple-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource to read/write data and is **required to call the API**. This role is also the minimum necessary to train a custom model or analyze with trained models.
+- **Cognitive Services User**: You need this role for a [Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Cognitive Services multiple-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource to train a custom model or analyze with trained models.
 - **Storage Blob Data Contributor**: You need this role for a storage account to create project and label data.
 - Advanced
 - **Storage Account Contributor**: You need this role for the storage account to set up cross-origin resource sharing (CORS) settings. It's a one-time effort if you reuse the same storage account.
-The Contributor role doesn't allow you to access data in your blob. To use Document Intelligence Studio, you still need the Storage Blob Data Contributor role.
+- **Contributor**: You need this role to create a resource group and resources.
-- **Contributor**: You need this role to create a resource group and resources. The Contributor role doesn't give you access to use the created resources or storage, it only allows a user to read/write the resource itself. To use Document Intelligence Studio, you still need basic roles.
+Having Contributor or Storage Account Contributor role doesn't give you access to use your Document Intelligence resource or storage account if local (key-based) authentication is disabled. You still need the basic roles (Cognitive Services User and Storage Data Blob Contributor) to use the functions on Document Intelligence Studio.
 For more information, see [Microsoft Entra built-in roles](../../role-based-access-control/built-in-roles.md) and the sections about Azure role assignments in the [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md).
 - question: |
 I have multiple pages in a document. Why are only two pages analyzed in Document Intelligence Studio?
 answer: |
-For free-tier (F0) resources, only the first two pages are analyzed whether you're using Document Intelligence Studio, the REST API, or SDKs.
+For free-tier (F0) resources, only the first two pages are analyzed whether you're using Document Intelligence Studio, the REST API, or client libraries.
 In Document Intelligence Studio, select the **Settings** (gear) button, select the **Resources** tab, and check the price tier that you're using to analyze the documents. If you want to analyze all pages in a document, change to a paid (S0) resource.
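As context for the role guidance above, here's a hedged Python sketch of calling the service with Microsoft Entra (keyless) authentication, the scenario where the Cognitive Services User role matters. It assumes the `azure-ai-formrecognizer` and `azure-identity` packages; the endpoint and file name are placeholders, not values from this article.

```python
# Hedged sketch: analyze a document with Microsoft Entra (keyless) authentication.
# Assumes the signed-in identity holds the Cognitive Services User role on the resource.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.identity import DefaultAzureCredential

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"  # placeholder

client = DocumentAnalysisClient(endpoint=endpoint, credential=DefaultAzureCredential())

with open("sample.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-read", document=f)

result = poller.result()
print(f"Analyzed {len(result.pages)} page(s).")
```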
@@ -262,9 +262,9 @@ sections:
 "Yes. Document Intelligence Studio has separate URL endpoints for sovereign cloud regions:"
-- "URL for the Azure US Government cloud (Azure Fairfax): [Document Intelligence Studio US Government](https://formrecognizer.appliedai.azure.us/studio)".
+- "URL for the Azure US Government cloud (Azure Fairfax): [Document Intelligence Studio US Government](https://formrecognizer.appliedai.azure.us/studio)."
-- "URL Microsoft Azure operated by 21Vianet (Azure in China): [Document Intelligence Studio China](https://formrecognizer.appliedai.azure.cn/studio)".
+- "URL Microsoft Azure operated by 21Vianet (Azure China): [Document Intelligence Studio China](https://formrecognizer.appliedai.azure.cn/studio)."

-Where can I find the supported API version for the latest programming language SDKs?
+Where can I find the supported API version for the latest programming language client libraries?
 answer: |
 This table provides links to the latest SDK versions and shows the relationship between supported Document Intelligence SDK and API versions:
@@ -325,7 +325,7 @@ sections:
 How can I specify a range of pages to be analyzed in a document?
 answer: |
-Use the `pages` parameter (supported in v2.1, v3.0, and later versions of the REST API) to specify pages for multiple-page PDF and TIFF documents. Accepted input includes the following ranges:
+Use the `pages` parameter (supported in v2.1, v3.0, and later versions of the REST API) and specify pages for multiple-page PDF and TIFF documents. Accepted input includes the following ranges:
 - Single pages. For example, if you specify `1, 2`, pages 1 and 2 are processed.
 - Finite ranges. For example, if you specify `2-5`, pages 2 to 5 are processed.
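A short, hedged Python sketch of passing such a range with the `pages` parameter follows. It assumes the `azure-ai-formrecognizer` package and key-based authentication; the endpoint, key, and file name are placeholders.

```python
# Hedged sketch: restrict analysis to a page range via the `pages` parameter.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource-name>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),  # placeholder
)

with open("multipage.pdf", "rb") as f:
    # `pages` accepts single pages such as "1, 2" or finite ranges such as "2-5".
    poller = client.begin_analyze_document("prebuilt-layout", document=f, pages="2-5")

result = poller.result()
print([page.page_number for page in result.pages])  # expect pages 2 through 5
```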
@@ -368,7 +368,7 @@ sections:
 - When the service analyzes Microsoft Word and HTML files that the read and layout models support, it counts pages in blocks of 3,000 characters each. For example, if your document contains 7,000 characters, the two pages with 3,000 characters each and one page with 1,000 characters add up to a total of three pages.
-- When you're using the read or layout model to analyze Microsoft Word, Excel, PowerPoint, and HTML files, embedded or linked images aren't supported. So the service doesn't count them as added images.
+- The read and layout models don't support analysis of embedded or linked images in Microsoft Word, Excel, PowerPoint, and HTML files. Therefore, service doesn't count them as added images.
 - Training a custom model is always free with Document Intelligence. You're charged only when the service uses a model to analyze a document.
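The 3,000-character page counting above amounts to a simple ceiling calculation; a tiny illustrative Python check (the helper name is hypothetical):

```python
import math

def billed_pages_for_text(character_count: int) -> int:
    # Hypothetical helper: Word and HTML files handled by the read and layout
    # models are counted in blocks of 3,000 characters each.
    return math.ceil(character_count / 3000)

print(billed_pages_for_text(7000))  # 3 pages: 3,000 + 3,000 + 1,000 characters
```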
@@ -449,7 +449,7 @@ sections:
 Document Intelligence doesn't have an explicit retrain operation. Each train operation generates a new model.
-If you find that your model needs retraining, add more samples to your training dataset and train a new model.
+If you find that your model needs to retrain, add more samples to your training dataset and train a new model.
 - question: |
 How many custom models can I compose into a single custom model?
@@ -554,7 +554,7 @@ sections:
 [Disconnected containers](../../ai-services/containers/disconnected-containers.md) enable you to use APIs that are disconnected from the internet. [Billing information](../../ai-services/containers/disconnected-container-faq.yml#how-does-billing-work) isn't sent via the internet. Instead, you're charged based on a purchased commitment tier. Currently, disconnected container usage is available for Document Intelligence custom and invoice models.
-The model capabilities provided in connected and disconnected containers are the same and are supported by Document Intelligence v2.1.
+The Document Intelligence v2.1 model capabilities for connected and disconnected containers are the same.
 - question: |
 What data do connected containers send to the cloud?
@@ -565,7 +565,7 @@ sections:
 For an example of the information that connected containers send to Microsoft for billing, see the [Azure AI container FAQ](../../ai-services/containers/disconnected-container-faq.yml#how-does-billing-work).
 - question: |
-Why am I receiving the error "Container isn't in a valid state. Subscription validation failed with status 'OutOfQuota' API key is out of quota"?
+Why am I receiving the error *Container isn't in a valid state. Subscription validation failed with status 'OutOfQuota' API key is out of quota*?
 answer: |
 Document Intelligence connected containers send billing information to Azure by using a Document Intelligence resource on your Azure account. You could get this message if the containers can't communicate with the billing endpoint.
@@ -624,7 +624,7 @@ sections:
 - question: |
 Where can I find more solutions to my Azure AI Document Intelligence questions?
 answer: |
-[Microsoft Q&A](/answers/topics/azure-form-recognizer.html) is the home for technical questions and answers at Microsoft. You can filter queries that are specific to Document Intelligence.
+[Microsoft Q & A](/answers/topics/azure-form-recognizer.html) is the home for technical questions and answers at Microsoft. You can filter queries that are specific to Document Intelligence.
 - question: |
 What should I do if the service doesn't recognize specific text, or recognizes it incorrectly, when I'm labeling documents?