Commit d350adc

Merge pull request #77110 from PatrickFarley/read-api-ga
[Cog serv] Read api ga
2 parents c1244cd + 2c80d48 commit d350adc

8 files changed, +10 −31 lines changed


articles/cognitive-services/Computer-vision/Home.md

Lines changed: 1 addition & 1 deletion

@@ -43,7 +43,7 @@ You can analyze images to detect and provide insights about their visual feature

 You can use Computer Vision to extract text from an image into a machine-readable character stream using [optical character recognition (OCR)](concept-recognizing-text.md#ocr-optical-character-recognition-api). If needed, OCR corrects the rotation of the recognized text and provides the frame coordinates of each word. OCR supports 25 languages and automatically detects the language of the recognized text.

-You can also use the [Read API](concept-recognizing-text.md#read-api) to extract both printed and handwritten text from images and text-heavy documents. The Read API uses updated models and works for a variety of objects with different surfaces and backgrounds, such as receipts, posters, business cards, letters, and whiteboards. Currently, the Read API is in preview, and English is the only supported language.
+You can also use the [Read API](concept-recognizing-text.md#read-api) to extract both printed and handwritten text from images and text-heavy documents. The Read API uses updated models and works for a variety of objects with different surfaces and backgrounds, such as receipts, posters, business cards, letters, and whiteboards. Currently, English is the only supported language.

 ## Moderate content in images
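The paragraph above describes extracting printed and handwritten text with the Read API. As a minimal, hedged sketch of how a submit request looks after this change (no `mode` query parameter), reusing the endpoint and header shown in the quickstarts below; the subscription key is a placeholder and the image URL is only an example:

```python
import requests

# Placeholder: substitute your own key and, if needed, your region's endpoint.
subscription_key = "<Subscription key>"
vision_base_url = "https://westus.api.cognitive.microsoft.com/vision/v2.0/"
text_recognition_url = vision_base_url + "read/core/asyncBatchAnalyze"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
data = {"url": "https://upload.wikimedia.org/wikipedia/commons/d/dd/Cursive_Writing_on_Notebook_paper.jpg"}

# No 'mode' parameter is needed any longer; the service chooses the
# recognition model per line of text.
response = requests.post(text_recognition_url, headers=headers, json=data)
response.raise_for_status()

# The asynchronous operation's result URL comes back in a response header.
print(response.headers["Operation-Location"])
```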

articles/cognitive-services/Computer-vision/QuickStarts/CSharp-hand-text.md

Lines changed: 1 addition & 4 deletions

@@ -106,11 +106,8 @@ namespace CSHttpClientSample
             client.DefaultRequestHeaders.Add(
                 "Ocp-Apim-Subscription-Key", subscriptionKey);

-            // Request parameter.
-            string requestParameters = "mode=Handwritten";
-
             // Assemble the URI for the REST API method.
-            string uri = uriBase + "?" + requestParameters;
+            string uri = uriBase;

             HttpResponseMessage response;

articles/cognitive-services/Computer-vision/QuickStarts/java-hand-text.md

Lines changed: 0 additions & 3 deletions

@@ -94,9 +94,6 @@ public class Main {

             URIBuilder builder = new URIBuilder(uriBase);

-            // Request parameter.
-            builder.setParameter("mode", "Handwritten");
-
             // Prepare the URI for the REST API method.
             URI uri = builder.build();
             HttpPost request = new HttpPost(uri);

articles/cognitive-services/Computer-vision/QuickStarts/javascript-hand-text.md

Lines changed: 1 addition & 6 deletions

@@ -69,11 +69,6 @@ To create and run the sample, do the following steps:
         var uriBase =
             "https://westus.api.cognitive.microsoft.com/vision/v2.0/read/core/asyncBatchAnalyze";

-        // Request parameter.
-        var params = {
-            "mode": "Handwritten",
-        };
-
         // Display the image.
         var sourceImageUrl = document.getElementById("inputImage").value;
         document.querySelector("#sourceImage").src = sourceImageUrl;
@@ -83,7 +78,7 @@ To create and run the sample, do the following steps:
         //
         // Make the first REST API call to submit the image for processing.
         $.ajax({
-            url: uriBase + "?" + $.param(params),
+            url: uriBase,

             // Request headers.
             beforeSend: function(jqXHR){

articles/cognitive-services/Computer-vision/QuickStarts/python-hand-text.md

Lines changed: 1 addition & 4 deletions

@@ -73,12 +73,9 @@ text_recognition_url = vision_base_url + "read/core/asyncBatchAnalyze"
 image_url = "https://upload.wikimedia.org/wikipedia/commons/d/dd/Cursive_Writing_on_Notebook_paper.jpg"

 headers = {'Ocp-Apim-Subscription-Key': subscription_key}
-# Note: The request parameter changed for APIv2.
-# For APIv1, it is 'handwriting': 'true'.
-params = {'mode': 'Handwritten'}
 data = {'url': image_url}
 response = requests.post(
-    text_recognition_url, headers=headers, params=params, json=data)
+    text_recognition_url, headers=headers, json=data)
 response.raise_for_status()

 # Extracting handwritten text requires two API calls: One call to submit the
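The comment above notes that extraction takes two API calls. As a hedged continuation of the snippet in this diff, the second call polls the URL returned in the Operation-Location header; the status strings and the recognitionResults field are assumptions based on the v2.0 response shape, not something this change introduces:

```python
import time

# Continues from the POST above: 'requests', 'headers', and 'response' come
# from the quickstart snippet.
operation_url = response.headers["Operation-Location"]

# Poll until the asynchronous analysis finishes. The status values and the
# 'recognitionResults' field are assumed from the v2.0 response shape.
while True:
    poll = requests.get(operation_url, headers=headers)
    poll.raise_for_status()
    analysis = poll.json()
    if analysis.get("status") not in ("NotStarted", "Running"):
        break
    time.sleep(1)

if analysis.get("status") == "Succeeded":
    for page in analysis["recognitionResults"]:
        for line in page["lines"]:
            print(line["text"])
```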

articles/cognitive-services/Computer-vision/concept-recognizing-text.md

Lines changed: 3 additions & 3 deletions

@@ -20,12 +20,12 @@ Computer Vision provides a number of services that detect and extract printed or

 ## Read API

-The Read API detects text content in an image using our latest recognition models and converts the identified text into a machine-readable character stream. It is optimized for text-heavy images (such as documents that have been digitally scanned) and for images with a lot of visual noise. It executes asynchronously because larger documents can take several minutes to return a result.
+The Read API detects text content in an image using our latest recognition models and converts the identified text into a machine-readable character stream. It's optimized for text-heavy images (such as documents that have been digitally scanned) and for images with a lot of visual noise. It will determine which recognition model to use for each line of text, supporting images with both printed and handwritten text. The Read API executes asynchronously because larger documents can take several minutes to return a result.

 The Read operation maintains the original line groupings of recognized words in its output. Each line comes with bounding box coordinates, and each word within the line also has its own coordinates. If a word was recognized with low confidence, that information is conveyed as well. See the [Read API reference docs](https://westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/2afb498089f74080d7ef85eb) to learn more.

 > [!NOTE]
-> This feature is currently in preview and is only available for English text.
+> This feature is only available for English text.

 ### Image requirements

@@ -76,7 +76,7 @@ The Recognize Text API works with images that meet the following requirements:
 - The dimensions of the image must be between 50 x 50 and 4200 x 4200 pixels.
 - The file size of the image must be less than 4 megabytes (MB).

-## Improve results
+## Limitations

 The accuracy of text recognition operations depends on the quality of the images. The following factors may cause an inaccurate reading:
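To make the output structure described in the changed article concrete (line groupings, per-word bounding boxes, a low-confidence flag), here is a rough sketch that walks a parsed result; the field names are assumptions based on the v2.0 response and the linked reference docs, not something this article defines:

```python
# 'analysis' is the parsed JSON of a finished Read operation (see the
# quickstart sketch earlier). All field names here are assumptions.
def summarize(analysis):
    for page in analysis.get("recognitionResults", []):
        for line in page.get("lines", []):
            print("line:", line["text"], "box:", line["boundingBox"])
            for word in line.get("words", []):
                # Words recognized with low confidence carry an explicit flag.
                if word.get("confidence") == "Low":
                    print("  low confidence:", word["text"], word["boundingBox"])
```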

articles/cognitive-services/Computer-vision/quickstarts-sdk/csharp-hand-text-sdk.md

Lines changed: 2 additions & 7 deletions

@@ -49,10 +49,6 @@ To run the sample, do the following steps:
         // subscriptionKey = "0123456789abcdef0123456789ABCDEF"
         private const string subscriptionKey = "<Subscription key>";

-        // For printed text, change to TextRecognitionMode.Printed
-        private const TextRecognitionMode textRecognitionMode =
-            TextRecognitionMode.Handwritten;
-
         // localImagePath = @"C:\Documents\LocalImage.jpg"
         private const string localImagePath = @"<LocalImage>";

@@ -101,7 +97,7 @@ To run the sample, do the following steps:
             // Start the async process to read the text
             BatchReadFileHeaders textHeaders =
                 await computerVision.BatchReadFileAsync(
-                    imageUrl, textRecognitionMode);
+                    imageUrl);

             await GetTextAsync(computerVision, textHeaders.OperationLocation);
         }
@@ -122,7 +118,7 @@ To run the sample, do the following steps:
             // Start the async process to recognize the text
             BatchReadFileInStreamHeaders textHeaders =
                 await computerVision.BatchReadFileInStreamAsync(
-                    imageStream, textRecognitionMode);
+                    imageStream);

             await GetTextAsync(computerVision, textHeaders.OperationLocation);
         }
@@ -172,7 +168,6 @@ To run the sample, do the following steps:

 1. Replace `<Subscription Key>` with your valid subscription key.
 1. Change `computerVision.Endpoint` to the Azure region associated with your subscription keys, if necessary.
-1. Optionally set `textRecognitionMode` to `TextRecognitionMode.Printed`.
 1. Replace `<LocalImage>` with the path and file name of a local image.
 1. Optionally, set `remoteImageUrl` to a different image.
 1. Run the program.

articles/cognitive-services/Computer-vision/quickstarts-sdk/python-sdk.md

Lines changed: 1 addition & 3 deletions

@@ -215,18 +215,16 @@ You can get any handwritten or printed text from an image. This requires two calls

 ```Python
 # import models
-from azure.cognitiveservices.vision.computervision.models import TextRecognitionMode
 from azure.cognitiveservices.vision.computervision.models import TextOperationStatusCodes
 import time

 url = "https://azurecomcdn.azureedge.net/cvt-1979217d3d0d31c5c87cbd991bccfee2d184b55eeb4081200012bdaf6a65601a/images/shared/cognitive-services-demos/read-text/read-1-thumbnail.png"
-mode = TextRecognitionMode.handwritten
 raw = True
 custom_headers = None
 numberOfCharsInOperationId = 36

 # Async SDK call
-rawHttpResponse = client.batch_read_file(url, mode, custom_headers, raw)
+rawHttpResponse = client.batch_read_file(url, custom_headers, raw)

 # Get ID from returned headers
 operationLocation = rawHttpResponse.headers["Operation-Location"]
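The SDK snippet in this hunk ends after extracting the Operation-Location header. A hedged sketch of the follow-up call, assuming this SDK version pairs batch_read_file with get_read_operation_result and exposes recognition_results on the result object (neither is shown in this diff):

```python
# Assumptions: 'client', 'operationLocation', 'numberOfCharsInOperationId',
# TextOperationStatusCodes, and 'time' come from the snippet above; the
# retrieval method and result attributes are not part of this diff.
operationId = operationLocation[len(operationLocation) - numberOfCharsInOperationId:]

# Poll until the asynchronous read operation finishes.
while True:
    result = client.get_read_operation_result(operationId)
    if result.status not in ("NotStarted", "Running"):
        break
    time.sleep(1)

# Print each recognized line and its bounding box.
if result.status == TextOperationStatusCodes.succeeded:
    for text_result in result.recognition_results:
        for line in text_result.lines:
            print(line.text)
            print(line.bounding_box)
```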
