articles/ai-services/document-intelligence/concept/accuracy-confidence.md (28 additions & 25 deletions)
@@ -6,7 +6,7 @@ author: laujan
manager: nitinme
ms.service: azure-ai-document-intelligence
ms.topic: conceptual
-ms.date: 02/21/2025
+ms.date: 03/03/2025
ms.author: lajanuar
---
@@ -50,7 +50,7 @@ After an analysis operation, review the JSON output. Examine the `confidence` va
> [!NOTE]
>
-> * **Custom neural and generative models** do not provide accuracy scores during training.
+> * **Custom neural and generative models** don't provide accuracy scores during training.
The output of a `build` (v3.0 and onward) or `train` (v2.1) custom model operation includes the estimated accuracy score. This score represents the model's ability to accurately predict the labeled value on a visually similar document. Accuracy is measured on a percentage scale from 0% (low) to 100% (high). It's best to target a score of 80% or higher. For more sensitive cases, like financial or medical records, we recommend a score close to 100%. You can also add a human review stage to validate results for critical automation workflows.
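As a rough illustration of checking that score programmatically, the following sketch reads per-field estimated accuracy from a saved copy of the get-model JSON response (v3.0 and onward). The `model.json` file name and the `docTypes`/`fieldConfidence` key names are assumptions; verify them against the payload your resource actually returns.

```python
import json

TARGET = 0.80  # target 80% or higher, per the guidance above

# Assumption: the get-model response was saved locally as model.json.
with open("model.json", encoding="utf-8") as f:
    model = json.load(f)

# Assumption: per-field estimated accuracy appears under docTypes -> <type> -> fieldConfidence.
for doc_type, details in model.get("docTypes", {}).items():
    print(f"Document type: {doc_type}")
    for field, accuracy in details.get("fieldConfidence", {}).items():
        note = "" if accuracy >= TARGET else "  <-- below target; consider adding labeled samples"
        print(f"  {field}: {accuracy:.2f}{note}")
```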
@@ -63,19 +63,22 @@ The output of a `build` (v3.0 and onward) or `train` (v2.1) custom model operati
Custom template models generate an estimated accuracy score when trained. Documents analyzed with a custom model produce a confidence score for extracted fields. When interpreting the confidence score from a custom model, you should consider all the confidence scores returned from the model. Let's start with a list of all the confidence scores.
-1. **Document type confidence score**: The document type confidence is an indicator of how closely the analyzed document resembles documents in the training dataset. When the document type confidence is low, it's indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is retrained, it should be better equipped to handle that class of variations.
-2. **Field level confidence**: Each labeled field extracted has an associated confidence score. This score reflects the model's confidence in the position of the extracted value. While evaluating confidence scores, you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the `OCR` results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field.
-3. **Word confidence score**: Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words and each word has an associated span and confidence score. Spans from the custom field extracted values match the spans of the extracted words.
-4. **Selection mark confidence score**: The pages array also contains an array of selection marks. Each selection mark has a confidence score representing the confidence of the selection mark and selection state detection. When a labeled field has a selection mark, the custom field selection combined with the selection mark confidence is an accurate representation of the overall confidence.
+* **Document type confidence score**: The document type confidence is an indicator of how closely the analyzed document resembles documents in the training dataset. When the document type confidence is low, it's indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is retrained, it should be better equipped to handle that class of variations.
+
+* **Field level confidence**: Each labeled field extracted has an associated confidence score. This score reflects the model's confidence in the position of the extracted value. While evaluating confidence scores, you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the `OCR` results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field (see the sketch after this list).
+
+* **Word confidence score**: Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words and each word has an associated span and confidence score. Spans from the custom field extracted values match the spans of the extracted words.
+
+* **Selection mark confidence score**: The pages array also contains an array of selection marks. Each selection mark has a confidence score representing the confidence of the selection mark and selection state detection. When a labeled field has a selection mark, the custom field selection combined with the selection mark confidence is an accurate representation of the overall confidence.
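To make the field-level guidance concrete, here's a minimal sketch that combines a labeled field's confidence with the confidence of the OCR words it covers, using the spans to match them. It assumes the v3.x analyze-result JSON shape described above (`documents`, `fields`, `spans`, `pages`, `words`, `span`, `confidence`) and a locally saved `analyze_result.json`; treat the key names as assumptions and adjust them to your actual output.

```python
import json

# Assumption: the analyze response was saved locally as analyze_result.json.
with open("analyze_result.json", encoding="utf-8") as f:
    result = json.load(f)["analyzeResult"]

# Collect every OCR word so we can match field spans against word spans.
words = [w for page in result.get("pages", []) for w in page.get("words", [])]

def composite_confidence(field: dict) -> float:
    """Return the minimum of the field confidence and its underlying word confidences."""
    scores = [field.get("confidence", 0.0)]
    for span in field.get("spans", []):
        start, end = span["offset"], span["offset"] + span["length"]
        for word in words:
            w = word["span"]
            # Keep words whose span falls entirely inside the field's span.
            if w["offset"] >= start and w["offset"] + w["length"] <= end:
                scores.append(word["confidence"])
    return min(scores)

for document in result.get("documents", []):
    print(f"{document['docType']} (document confidence {document['confidence']:.2f})")
    for name, field in document.get("fields", {}).items():
        print(f"  {name}: composite confidence {composite_confidence(field):.2f}")
```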
The following table demonstrates how to interpret both the accuracy and confidence scores to measure your custom model's performance.
| Accuracy | Confidence | Result |
|--|--|--|
-| High | High |• The model is performing well with the labeled keys and document formats. <br>• You have a balanced training dataset. |
-| High | Low |• The analyzed document appears different from the training dataset.<br>• The model would benefit from retraining with at least five more labeled documents. <br>• These results could also indicate a format variation between the training dataset and the analyzed document. <br>Consider adding a new model.|
-| Low | High |• This result is highly unlikely.<br>• For low accuracy scores, add more labeled data or split visually distinct documents into multiple models. |
-| Low | Low |• Add more labeled data.<br>• Split visually distinct documents into multiple models.|
+| High | High |• The model is performing well with the labeled keys and document formats. • You have a balanced training dataset. |
+| High | Low |• The analyzed document appears different from the training dataset. • The model would benefit from retraining with at least five more labeled documents. • These results could also indicate a format variation between the training dataset and the analyzed document. Consider adding a new model.|
+| Low | High |• This result is highly unlikely. • For low accuracy scores, add more labeled data or split visually distinct documents into multiple models. |
+| Low | Low |• Add more labeled data. • Split visually distinct documents into multiple models.|
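For workflows that act on these scores automatically, a simple confidence gate in front of a human review stage is one option. The thresholds and the `auto`/`review` routing labels below are illustrative assumptions, not service behavior; tune them to your own risk tolerance.

```python
FIELD_THRESHOLD = 0.80      # per-field confidence gate (illustrative)
DOCUMENT_THRESHOLD = 0.80   # overall document confidence gate (illustrative)

def route(document: dict) -> str:
    """Return 'auto' when every confidence clears its threshold, otherwise 'review'."""
    if document.get("confidence", 0.0) < DOCUMENT_THRESHOLD:
        return "review"
    for field in document.get("fields", {}).values():
        if field.get("confidence", 0.0) < FIELD_THRESHOLD:
            return "review"
    return "auto"
```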
## Ensure high model accuracy for custom models
@@ -97,35 +100,35 @@ Variances in the visual structure of your documents affect the accuracy of your
Here are some common questions that should help with interpreting the table, row, and cell scores:
-**Q:** Is it possible to see a high confidence score for cells, but a low confidence score for the row?<br>
+##### Can cells have high confidence scores while the row has a low confidence score?

-**A:** Yes. The different levels of table confidence (cell, row, and table) are meant to capture the correctness of a prediction at that specific level. A correctly predicted cell that belongs to a row with other possible misses would have high cell confidence, but the row's confidence should be low. Similarly, a correct row in a table with challenges with other rows would have high row confidence whereas the table's overall confidence would be low.
+Yes. The different levels of table confidence (cell, row, and table) are meant to capture the correctness of a prediction at that specific level. A correctly predicted cell that belongs to a row with other possible misses would have high cell confidence, but the row's confidence should be low. Similarly, a correct row in a table with challenges with other rows would have high row confidence whereas the table's overall confidence would be low.

-**Q:** What is the expected confidence score when cells are merged? Since a merge results in the number of columns identified to change, how are scores affected?<br>
+##### How does merging cells affect confidence scores, given the change in the number of identified columns?

-**A:** Regardless of the type of table, the expectation for merged cells is that they should have lower confidence values. Furthermore, the cell that is missing (because it was merged with an adjacent cell) should have a `NULL` value with lower confidence as well. How much lower these values might be depends on the training dataset, but the general trend of both merged and missing cells having lower scores should hold.
+Regardless of the type of table, the expectation for merged cells is that they should have lower confidence values. Furthermore, the cell that is missing (because it was merged with an adjacent cell) should have a `NULL` value with lower confidence as well. How much lower these values might be depends on the training dataset, but the general trend of both merged and missing cells having lower scores should hold.

-**Q:** What is the confidence score when a value is optional? Should you expect a cell with a `NULL` value and high confidence score if the value is missing?<br>
+##### What is the confidence score for optional values? Should you expect a cell with a `NULL` value to have a high confidence score since the value is absent?

-**A:** If your training dataset is representative of the optionality of cells, it helps the model know how often a value tends to appear in the training set, and thus what to expect during inference. This feature is used when computing the confidence of either a prediction or of making no prediction at all (`NULL`). You should expect an empty field with high confidence for missing values that are mostly empty in the training set too.
+If your training dataset is representative of the optionality of cells, it helps the model know how often a value tends to appear in the training set, and thus what to expect during inference. This feature is used when computing the confidence of either a prediction or of making no prediction at all (`NULL`). You should expect an empty field with high confidence for missing values that are mostly empty in the training set too.

-**Q:** How are confidence scores affected if a field is optional and not present or missed? Is the expectation that the confidence score answers that question?<br>
+##### Can confidence scores change if an optional field is absent? Do the confidence scores reflect this change?

-**A:** When a value is missing from a row, the cell has a `NULL` value and a confidence score assigned. A high confidence score here should mean that the model prediction (of there not being a value) is more likely to be correct. In contrast, a low score should signal more uncertainty from the model (and thus the possibility of an error, like the value being missed).
+When a value is missing from a row, the cell has a `NULL` value and a confidence score assigned. A high confidence score here should mean that the model prediction (of there not being a value) is more likely to be correct. In contrast, a low score should signal more uncertainty from the model (and thus the possibility of an error, like the value being missed).

-**Q:** What should be the expectation for cell confidence and row confidence when extracting a multi-page table with a row split across pages?<br>
+##### What are the expectations for cell and row confidence when extracting a multi-page table with a row split across pages?

-**A:** Expect the cell confidence to be high and row confidence to be potentially lower than rows that aren't split. The proportion of split rows in the training dataset can affect the confidence score. In general, a split row looks different from the other rows in the table (thus, the model is less certain that it's correct).
+Expect the cell confidence to be high and row confidence to be potentially lower than rows that aren't split. The proportion of split rows in the training dataset can affect the confidence score. In general, a split row looks different from the other rows in the table (thus, the model is less certain that it's correct).

-**Q:** For cross-page tables with rows that cleanly end and start at the page boundaries, is it correct to assume that confidence scores are consistent across pages?
+##### For tables spanning multiple pages, can we assume confidence scores remain consistent if rows end and start cleanly at page boundaries?

-**A:** Yes. Since rows look similar in shape and contents, regardless of where they are in the document (or on which page), their respective confidence scores should be consistent.
+Yes. Since rows look similar in shape and contents, regardless of where they are in the document (or on which page), their respective confidence scores should be consistent.

-**Q:** What is the best way to utilize the new confidence scores?<br>
+##### What is the best way to utilize the new confidence scores?

-**A:** Look at all levels of table confidence using a top-to-bottom approach: begin by checking a table's confidence as a whole, then drill down to the row level and look at individual rows, and finally look at cell-level confidences. Depending on the type of table, there are a couple of things to note:
+* Look at all levels of table confidence using a top-to-bottom approach: begin by checking a table's confidence as a whole, then drill down to the row level and look at individual rows, and finally look at cell-level confidences. Depending on the type of table, there are a couple of things to note:

-For **fixed tables**, cell-level confidence already captures quite a bit of information on the correctness of the extraction. This means that simply going over each cell and looking at its confidence can be enough to help determine the quality of the prediction.
+* For **fixed tables**, cell-level confidence already captures quite a bit of information on the correctness of the extraction. This means that simply going over each cell and looking at its confidence can be enough to help determine the quality of the prediction.
For **dynamic tables**, the levels are meant to build on top of each other, so the top-to-bottom approach is more important.
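As a closing sketch of that top-to-bottom pass, the function below walks one table-shaped custom field from table level down to rows and cells. The nesting it assumes (an array-typed field whose `valueArray` rows each carry a `confidence` and a `valueObject` of cells) is an assumption about the analyze-result JSON, so verify it against your own output; for fixed tables, the cell loop alone may be all you need.

```python
def inspect_table_field(name: str, table_field: dict, threshold: float = 0.80) -> None:
    """Print table, row, and cell confidences, flagging anything below the threshold."""
    def flag(score: float) -> str:
        return "" if score >= threshold else "  <-- inspect"

    table_conf = table_field.get("confidence", 0.0)
    print(f"Table '{name}': {table_conf:.2f}{flag(table_conf)}")

    # Assumption: each row is an object with its own confidence and a valueObject of cells.
    for index, row in enumerate(table_field.get("valueArray", [])):
        row_conf = row.get("confidence", 0.0)
        print(f"  Row {index}: {row_conf:.2f}{flag(row_conf)}")
        for column, cell in row.get("valueObject", {}).items():
            cell_conf = cell.get("confidence", 0.0)
            print(f"    {column}: {cell_conf:.2f}{flag(cell_conf)}")
```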