**articles/applied-ai-services/form-recognizer/concept-analyze-document-response.md** (1 addition, 1 deletion)
@@ -64,7 +64,7 @@ Spans specify the logical position of each element in the overall reading order,
### Bounding Region
- Bounding regions describe the visual position of each element in the file. Since elements may not be visually contiguous (entities) or may cross pages (tables), the positions of most elements are described via an array of bounding regions. Each region specifies the page number (`1`-indexed) and bounding polygon. The bounding polygon is described as a sequence of points, clockwise from the left relative to the natural orientation of the element. For quadrilaterals, plot points are top-left, top-right, bottom-right, and bottom-left corners. Each point is represented by its x, y coordinate in the page unit specified by the unit property. In general, unit of measure for images is pixels while PDFs use inches.
+ Bounding regions describe the visual position of each element in the file. Since elements may not be visually contiguous or may cross pages (tables), the positions of most elements are described via an array of bounding regions. Each region specifies the page number (`1`-indexed) and bounding polygon. The bounding polygon is described as a sequence of points, clockwise from the left relative to the natural orientation of the element. For quadrilaterals, plot points are top-left, top-right, bottom-right, and bottom-left corners. Each point is represented by its x, y coordinate in the page unit specified by the unit property. In general, the unit of measure for images is pixels while PDFs use inches.
:::image type="content" source="media/bounding-regions.png" alt-text="Screenshot of detected bounding regions example.":::
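The bounding-region layout described above can be sketched with a few lines of plain Python. This is not SDK code: it simply walks a response-shaped dictionary whose field names follow the v3 REST response (`boundingRegions`, `pageNumber`, `polygon`, with the polygon as a flat `[x1, y1, x2, y2, ...]` list); the sample coordinates are invented for illustration.

```python
# Sketch: reading bounding regions from an analyze-response-shaped dict.
# Field names follow the v3 REST response; the sample values are made up.

def polygon_corners(polygon):
    """Group a flat [x1, y1, x2, y2, ...] list into (x, y) corner pairs.

    For quadrilaterals the order is top-left, top-right, bottom-right,
    bottom-left, clockwise from the element's natural orientation.
    """
    return list(zip(polygon[0::2], polygon[1::2]))

# A hypothetical table that crosses a page break, so it carries two regions.
element = {
    "boundingRegions": [
        {"pageNumber": 1, "polygon": [1.0, 8.5, 7.5, 8.5, 7.5, 10.0, 1.0, 10.0]},
        {"pageNumber": 2, "polygon": [1.0, 1.0, 7.5, 1.0, 7.5, 2.2, 1.0, 2.2]},
    ]
}

for region in element["boundingRegions"]:
    corners = polygon_corners(region["polygon"])
    print(f"page {region['pageNumber']}: {corners}")
```

Because a single element can carry several regions, client code should always iterate the array rather than assume one region per element.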
**articles/applied-ai-services/form-recognizer/concept-form-recognizer-studio.md** (1 addition, 1 deletion)
@@ -31,7 +31,7 @@ The following Form Recognizer service features are available in the Studio.
* **Layout**: Try out Form Recognizer's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://formrecognizer.appliedai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model).
- * **General Documents**: Try out Form Recognizer's General Documents feature to extract key-value pairs and entities. Start with the [Studio General Documents feature](https://formrecognizer.appliedai.azure.com/studio/document). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [General Documents overview](concept-general-document.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model).
+ * **General Documents**: Try out Form Recognizer's General Documents feature to extract key-value pairs. Start with the [Studio General Documents feature](https://formrecognizer.appliedai.azure.com/studio/document). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [General Documents overview](concept-general-document.md) to learn more and get started with the [Python SDK quickstart for General Documents](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model).
* **Prebuilt models**: Form Recognizer's prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model).
**articles/applied-ai-services/form-recognizer/quickstarts/includes/v3-csharp-sdk.md** (2 additions, 2 deletions)
@@ -20,7 +20,7 @@ recommendations: false
In this quickstart, you use the following features to analyze and extract data and values from forms and documents:
- * [**General document model**](#general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
+ * [**General document model**](#general-document-model)—Analyze and extract text, tables, structure, and key-value pairs.
* [**Layout model**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in documents, without the need to train a model.
@@ -121,7 +121,7 @@ dotnet run formrecognizer-quickstart.dll
## General document model
- Analyze and extract text, tables, structure, key-value pairs, and named entities.
+ Analyze and extract text, tables, structure, and key-value pairs.
**articles/applied-ai-services/form-recognizer/quickstarts/includes/v3-java-sdk.md** (2 additions, 2 deletions)
@@ -17,7 +17,7 @@ recommendations: false
In this quickstart, you'll use the following features to analyze and extract data and values from forms and documents:
- * [**General document**](#general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
+ * [**General document**](#general-document-model)—Analyze and extract text, tables, structure, and key-value pairs.
* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in documents, without the need to train a model.
@@ -138,7 +138,7 @@ To interact with the Form Recognizer service, you need to create an instance of
## General document model
- Extract text, tables, structure, key-value pairs, and named entities from documents.
+ Extract text, tables, structure, and key-value pairs from documents.
**articles/applied-ai-services/form-recognizer/quickstarts/includes/v3-javascript-sdk.md** (2 additions, 8 deletions)
@@ -17,7 +17,7 @@ recommendations: false
In this quickstart, you'll use the following features to analyze and extract data and values from forms and documents:
- * [**General document**](#general-document-model)—Analyze and extract key-value pairs, selection marks, and entities from documents.
+ * [**General document**](#general-document-model)—Analyze and extract key-value pairs and selection marks from documents.
* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in documents, without the need to train a model.
@@ -102,7 +102,7 @@ To interact with the Form Recognizer service, you need to create an instance of
## General document model
- Extract text, tables, structure, key-value pairs, and named entities from documents.
+ Extract text, tables, structure, and key-value pairs from documents.
> [!div class="checklist"]
>
@@ -184,12 +184,6 @@ Key-Value Pairs:
Value: "Common Stock, $0.00000625 par value per share" (0.748)
- Key : "Outstanding as of April 24, 2020"
Value: "7,583,440,247 shares" (0.838)
- Entities:
- - "$0.00000625" Quantity - Currency (0.8)
- - "MSFT" Organization - <none> (0.99)
- - "NASDAQ" Organization - StockExchange (0.99)
- - "2.125%" Quantity - Percentage (0.8)
- - "2021" DateTime - DateRange (0.8)
```
To view the entire output, visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/FormRecognizer/v3-javascript-sdk-general-document-output.md)
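The key-value output shown in the sample above can be reproduced from a raw response with a short sketch. This is plain Python rather than SDK code: it walks a dictionary shaped like the v3 REST response (`keyValuePairs`, each with `key`, optional `value`, their `content`, and a `confidence`), and the sample data below is invented for illustration.

```python
# Sketch: listing key-value pairs from a general-document-shaped response.
# Field names follow the v3 REST response; the sample data is made up.

sample_result = {
    "keyValuePairs": [
        {
            "key": {"content": "Trading Symbol"},
            "value": {"content": "MSFT"},
            "confidence": 0.99,
        },
        {
            "key": {"content": "Outstanding as of April 24, 2020"},
            "value": {"content": "7,583,440,247 shares"},
            "confidence": 0.838,
        },
    ]
}

def list_key_value_pairs(result):
    """Return (key, value, confidence) tuples; value may be None.

    A detected key does not always have a detected value, so the
    "value" field can be absent and is handled defensively here.
    """
    pairs = []
    for kv in result.get("keyValuePairs", []):
        key = kv["key"]["content"]
        value = (kv.get("value") or {}).get("content")
        pairs.append((key, value, kv["confidence"]))
    return pairs

for key, value, confidence in list_key_value_pairs(sample_result):
    print(f'- Key : "{key}"\n  Value: "{value}" ({confidence})')
```

Guarding against a missing `value` matters in practice: forms often contain labeled fields that were left blank.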
**articles/applied-ai-services/form-recognizer/quickstarts/includes/v3-python-sdk.md** (2 additions, 2 deletions)
@@ -17,7 +17,7 @@ recommendations: false
In this quickstart, you'll use the following features to analyze and extract data and values from forms and documents:
- * [**General document**](#general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
+ * [**General document**](#general-document-model)—Analyze and extract text, tables, structure, and key-value pairs.
* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in documents, without the need to train a model.
@@ -77,7 +77,7 @@ To interact with the Form Recognizer service, you need to create an instance of
## General document model
- Extract text, tables, structure, key-value pairs, and named entities from documents.
+ Extract text, tables, structure, and key-value pairs from documents.
* [**Layout**](https://formrecognizer.appliedai.azure.com/studio/layout): extract text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
* [**Read**](https://formrecognizer.appliedai.azure.com/studio/read): extract text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
**articles/applied-ai-services/form-recognizer/whats-new.md** (13 additions, 13 deletions)
@@ -360,7 +360,7 @@ Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* **AI quality improvements**
- * [**prebuilt-read**](concept-read.md). Enhanced support for single characters, handwritten dates, amounts, names, other entities commonly found in receipts and invoices and improved processing of digital PDF documents.
+ * [**prebuilt-read**](concept-read.md). Enhanced support for single characters, handwritten dates, amounts, names, and other key data commonly found in receipts and invoices, and improved processing of digital PDF documents.
* [**prebuilt-layout**](concept-layout.md). Support for better detection of cropped tables, borderless tables, and improved recognition of long spanning cells.
* [**prebuilt-document**](concept-general-document.md). Improved value and check box detection.
* [**custom-neural**](concept-custom-neural.md). Improved accuracy for table detection and extraction.
@@ -477,10 +477,10 @@ Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* Form Recognizer v3.0 preview release introduces several new features, capabilities and enhancements:
- * [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-strutured and **unstructured documents**.
+ * [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured and **unstructured documents**.
* [**W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
* [**Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
- * [**General document**](concept-general-document.md) pretrained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents.
+ * [**General document**](concept-general-document.md) pretrained model is now updated to support selection marks in addition to text, tables, structure, and key-value pairs from forms and documents.
* [**Invoice API**](language-support.md#invoice-model): the Invoice prebuilt model expands support to Spanish invoices.
* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
@@ -544,7 +544,7 @@ Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* **Form Recognizer v3.0 preview release version 4.0.0-beta.1 (2021-10-07) introduces several new features and capabilities:**
- * [**General document**](concept-general-document.md) model is a new API that uses a pretrained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents.
+ * [**General document**](concept-general-document.md) model is a new API that uses a pretrained model to extract text, tables, structure, and key-value pairs from forms and documents.
* [**Hotel receipt**](concept-receipt.md) model added to prebuilt receipt processing.
* [**Expanded fields for ID document**](concept-id-document.md): the ID model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
* [**Signature field**](concept-custom.md) is a new field type in custom forms to detect the presence of a signature in a form field.
@@ -556,15 +556,15 @@ Form Recognizer service is updated on an ongoing basis. Bookmark this page to st