Commit 780ade5
Merge pull request #197562 from PatrickFarley/cogserv-allup
[cog svcs] acrolinx improvements
2 parents: d641f7c + d0addf2

2 files changed: +14, -14 lines

articles/cognitive-services/Computer-vision/faq.yml

Lines changed: 5 additions & 5 deletions
```diff
@@ -16,7 +16,7 @@ metadata:
   title: Computer Vision API Frequently Asked Questions
   summary: |
     > [!TIP]
-    > If you can't find answers to your questions in this FAQ, try asking the Computer Vision API community on [StackOverflow](https://stackoverflow.com/questions/tagged/project-oxford+or+microsoft-cognitive) or contact Help and Support on [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858)
+    > If you can't find answers to your questions in this FAQ, ask the Computer Vision API community on [StackOverflow](https://stackoverflow.com/questions/tagged/project-oxford+or+microsoft-cognitive) or contact Help and Support on [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858)
 
 
 
@@ -31,21 +31,21 @@ sections:
   - question: |
       The service is throwing an error because my image file is too large. How can I work around this?
     answer: |
-      The file size limit for most Computer Vision features is 4MB, but the client library SDKs can handle files up to 6MB. For Optical Character Recognition (OCR) that handles multi-page documents, the maximum file size is 50 MB. For more information, see the Image [Analysis inputs limits](overview-image-analysis.md#image-requirements) and [OCR input limits](overview-ocr.md#input-requirements).
+      The file size limit for most Computer Vision features is 4 MB, but the client library SDKs can handle files up to 6 MB. For Optical Character Recognition (OCR) that handles multi-page documents, the maximum file size is 50 MB. For more information, see the Image [Analysis inputs limits](overview-image-analysis.md#image-requirements) and [OCR input limits](overview-ocr.md#input-requirements).
 
   - question: |
       How can I process multi-page documents with OCR in a single call?
     answer: |
-      Optical Character Recognition, specifically the Read operation, supports multi-page documents as the API input. If you call the API with a 10-page document, you'll be billed for 10 pages, with each page counted as a billable transaction. Note that if you have the free (S0) tier, it can only process two pages at a time.
+      Optical Character Recognition, specifically the Read operation, supports multi-page documents as the API input. If you call the API with a 10-page document, you'll be billed for 10 pages, with each page counted as a billable transaction. If you have the free (S0) tier, it can only process two pages at a time.
 
   - question: |
       Can I send multiple images in a single API call to the Computer Vision service?
     answer: |
-      This function is not currently available.
+      This function isn't currently available.
   - question: |
       How many languages are supported for Image Analysis and OCR?
     answer: |
-      Please see the [Language support](language-support.md) page for the list of languages covered by Image Analysis and OCR.
+      See the [Language support](language-support.md) page for the list of languages covered by Image Analysis and OCR.
 
   - question: |
       Can I train Computer Vision API to use custom tags? For example, I would like to feed in pictures of cat breeds to 'train' the AI, then receive the breed value on an AI request.
```
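The file-size limits discussed in the FAQ (4 MB for most features over REST, 6 MB via the client library SDKs, 50 MB for multi-page OCR with Read) can be checked client-side before uploading. A minimal sketch; the helper name and the byte interpretation of "MB" are our assumptions, not part of the docs:

```python
import os

# Limits from the FAQ text, expressed in bytes. The docs don't specify
# decimal vs. binary megabytes, so treat these values as approximate.
REST_LIMIT = 4 * 1024 * 1024       # most Computer Vision features via REST
SDK_LIMIT = 6 * 1024 * 1024        # client library SDKs
OCR_READ_LIMIT = 50 * 1024 * 1024  # multi-page OCR (Read operation)

def check_image_size(path, limit=REST_LIMIT):
    """Return True if the file at `path` is within `limit` bytes."""
    return os.path.getsize(path) <= limit
```

Rejecting oversized files locally avoids a round trip that the service would fail anyway.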

articles/cognitive-services/Computer-vision/upgrade-api-versions.md

Lines changed: 9 additions & 9 deletions
```diff
@@ -20,7 +20,7 @@ ms.custom: non-critical
 This guide shows how to upgrade your existing container or cloud API code from Read v2.x to Read v3.x.
 
 ## Determine your API path
-Use the following table to determine the **version string** in the API path based on the Read 3.x version you are migrating to.
+Use the following table to determine the **version string** in the API path based on the Read 3.x version you're migrating to.
 
 |Product type| Version | Version string in 3.x API path |
 |:-----|:----|:----|
```
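The table's version strings slot into the API path mechanically, so path construction can be centralized. A sketch; the mapping values and function name are illustrative assumptions (the table rows aren't shown in this excerpt), so fill the dictionary in from the table for your product type:

```python
# Hypothetical mapping from Read version to the version string used in the
# 3.x API path; populate from the table above for your product type.
VERSION_STRINGS = {
    "3.0": "v3.0",
    "3.1": "v3.1",
    "3.2": "v3.2",
}

def read_analyze_url(endpoint, version, language=None):
    """Build a Read 3.x analyze URL; `language` is optional (see below)."""
    url = f"{endpoint}/vision/{VERSION_STRINGS[version]}/read/analyze"
    if language:
        url += f"?language={language}"
    return url
```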
```diff
@@ -39,7 +39,7 @@ Next, use the following sections to narrow your operations and replace the **ver
 |----------|-----------|
 |https://{endpoint}/vision/**v2.0/read/core/asyncBatchAnalyze** |https://{endpoint}/vision/<**version string**>/read/analyze[?language]|
 
-A new optional _language_ parameter is available. If you do not know the language of your document, or it may be multilingual, don't include it.
+A new optional _language_ parameter is available. If you don't know the language of your document, or it may be multilingual, don't include it.
 
 ### `Get Read Results`
 
```
```diff
@@ -65,8 +65,8 @@ Note the following changes to the json:
 * To get the root for page array, change the json hierarchy from `recognitionResults` to `analyzeResult`/`readResults`. The per-page line and words json hierarchy remains unchanged, so no code changes are required.
 * The page angle `clockwiseOrientation` has been renamed to `angle` and the range has been changed from 0 - 360 degrees to -180 to 180 degrees. Depending on your code, you may or may not have to make changes as most math functions can handle either range.
 
-The v3.0 API also introduces the following improvements you can optionally leverage:
-* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing. See documentation for more details.
+The v3.0 API also introduces the following improvements you can optionally use:
+* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing.
 * `version` tells you the version of the API used to generate results
 * A per-word `confidence` has been added. This value is calibrated so that a value 0.95 means that there is a 95% chance the recognition is correct. The confidence score can be used to select which text to send to human review.
 
```
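The two json changes described here (the page array root moving from `recognitionResults` to `analyzeResult`/`readResults`, and `clockwiseOrientation` becoming `angle` with the range shifting from 0-360 to -180..180 degrees) can be absorbed in a small adapter during migration. A sketch under those field names; the helper names are ours:

```python
def read_results_pages(response_json):
    """Return the per-page array from either a v2.x or v3.x Read response."""
    if "recognitionResults" in response_json:            # v2.x shape
        return response_json["recognitionResults"]
    return response_json["analyzeResult"]["readResults"]  # v3.x shape

def normalize_angle(page):
    """Map a v2.x 0-360 `clockwiseOrientation` or a v3.x -180..180 `angle`
    onto a single -180..180 range."""
    angle = page.get("angle", page.get("clockwiseOrientation", 0))
    if angle > 180:  # v2.x values above 180 wrap to the negative half
        angle -= 360
    return angle
```

With an adapter like this, downstream code sees one shape regardless of which API version produced the response.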

```diff
@@ -171,15 +171,15 @@ In v3.0, it has been adjusted:
 ## Service only
 
 ### `Recognize Text`
-`Recognize Text` is a *preview* operation which is being *deprecated in all versions of Computer Vision API*. You must migrate from `Recognize Text` to `Read` (v3.0) or `Batch Read File` (v2.0, v2.1). v3.0 of `Read` includes newer, better models for text recognition and additional features, so it is recommended. To upgrade from `Recognize Text` to `Read`:
+`Recognize Text` is a *preview* operation that is being *deprecated in all versions of Computer Vision API*. You must migrate from `Recognize Text` to `Read` (v3.0) or `Batch Read File` (v2.0, v2.1). v3.0 of `Read` includes newer, better models for text recognition and other features, so it's recommended. To upgrade from `Recognize Text` to `Read`:
 
 |Recognize Text 2.x |Read 3.x |
 |----------|-----------|
 |https://{endpoint}/vision/**v2.0/recognizeText[?mode]**|https://{endpoint}/vision/<**version string**>/read/analyze[?language]|
 
-The _mode_ parameter is not supported in `Read`. Both handwritten and printed text will automatically be supported.
+The _mode_ parameter isn't supported in `Read`. Both handwritten and printed text will automatically be supported.
 
-A new optional _language_ parameter is available in v3.0. If you do not know the language of your document, or it may be multilingual, don't include it.
+A new optional _language_ parameter is available in v3.0. If you don't know the language of your document, or it may be multilingual, don't include it.
 
 ### `Get Recognize Text Operation Result`
 
```
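The `Recognize Text` to `Read` upgrade in the table amounts to a mechanical URL rewrite: swap the path segment and drop the now-unsupported `mode` query parameter (both handwritten and printed text are handled automatically). A sketch; the function name is ours, and the version string comes from the earlier table:

```python
def migrate_recognize_text_url(old_url, version_string):
    """Rewrite a v2.0 recognizeText URL to the Read 3.x analyze path,
    dropping the unsupported `mode` parameter."""
    # Strip any query string (e.g. ?mode=Handwritten), then swap the path.
    base = old_url.split("?", 1)[0]
    return base.replace("/vision/v2.0/recognizeText",
                        f"/vision/{version_string}/read/analyze")
```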

```diff
@@ -203,8 +203,8 @@ Note the following changes to the json:
 * In v2.x, `Get Read Operation Result` will return the OCR recognition json when the status is `Succeeded`. In v3.x, this field is `succeeded`.
 * To get the root for page array, change the json hierarchy from `recognitionResult` to `analyzeResult`/`readResults`. The per-page line and words json hierarchy remains unchanged, so no code changes are required.
 
-The v3.0 API also introduces the following improvements you can optionally leverage. See the API reference for more details:
-* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing. See documentation for more details.
+The v3.0 API also introduces the following improvements you can optionally use. See the API reference for more details:
+* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing.
 * `version` tells you the version of the API used to generate results
 * A per-word `confidence` has been added. This value is calibrated so that a value 0.95 means that there is a 95% chance the recognition is correct. The confidence score can be used to select which text to send to human review.
 * `angle` general orientation of the text in clockwise direction, measured in degrees between (-180, 180].
```
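The calibrated per-word `confidence` described in the notes lends itself to routing uncertain words to human review. A sketch over the v3.x response shape (`lines` and `words` per the json hierarchy discussed above); the 0.9 threshold and function name are arbitrary examples, not values from the docs:

```python
def words_for_review(read_results, threshold=0.9):
    """Collect word texts whose calibrated confidence is below `threshold`."""
    flagged = []
    for page in read_results:
        for line in page.get("lines", []):
            for word in line.get("words", []):
                if word.get("confidence", 1.0) < threshold:
                    flagged.append(word["text"])
    return flagged
```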

0 commit comments
