Commit d6bb311

Merge pull request #222560 from PatrickFarley/minor-updates
[cog svcs] Minor updates
2 parents 73eb39c + 3c97cf8 commit d6bb311

25 files changed: +78 −90 lines changed

articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md

Lines changed: 1 addition & 1 deletion

@@ -32,7 +32,7 @@ You'll also need the following to use Form Recognizer containers:
 | **Familiarity with Docker** | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology). |
 | **Docker Engine installed** | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
 |**Form Recognizer resource** | A [**single-service Azure Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. To use the containers, you must have the associated key and endpoint URI. Both values are available on the Azure portal Form Recognizer **Keys and Endpoint** page: <ul><li>**{FORM_RECOGNIZER_KEY}**: one of the two available resource keys.<li>**{FORM_RECOGNIZER_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></li></ul>|
-| **Computer Vision API resource** | **To process business cards, ID documents, or Receipts, you'll need a Computer Vision resource.** <ul><li>You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image-with-docker-pull). The usual [billing](#billing) fees apply.</li> <li>If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`). If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of *5000*. </li></ul></br>Pass in both the key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:<ul><li>**{COMPUTER_VISION_KEY}**: one of the two available resource keys.</li><li> **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></ul> |
+| **Computer Vision API resource** | **To process business cards, ID documents, or Receipts, you'll need a Computer Vision resource.** <ul><li>You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image). The usual [billing](#billing) fees apply.</li> <li>If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`). If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of *5000*. </li></ul></br>Pass in both the key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:<ul><li>**{COMPUTER_VISION_KEY}**: one of the two available resource keys.</li><li> **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></ul> |

 |Optional|Purpose|
 |---------|----------|
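The table above notes that the **cognitive-services-recognize-text** and Form Recognizer containers can't both be started with the default port of *5000* on the same host. As an illustrative sketch only, the following Python helper builds a `docker run` argument list that remaps the host port while the container keeps listening on 5000 internally. The helper function and the port choice are hypothetical (not part of any SDK), and the image path is the Read OCR image shown in the Computer Vision container docs; substitute the image you actually deploy.

```python
# Hypothetical helper: compose `docker run` arguments for the recognize-text
# container on a non-default host port, to avoid the port-5000 clash with a
# Form Recognizer container running on the same host.

def recognize_text_run_args(key: str, endpoint: str, host_port: int = 5001) -> list[str]:
    """Build a docker run argument list; the container listens on 5000 internally."""
    image = "mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30"
    return [
        "docker", "run", "--rm", "-it",
        "-p", f"{host_port}:5000",   # remap host port so both containers can coexist
        image,
        "Eula=accept",               # EULA/billing settings required by the container
        f"Billing={endpoint}",       # {COMPUTER_VISION_ENDPOINT_URI}
        f"ApiKey={key}",             # {COMPUTER_VISION_KEY}
    ]

args = recognize_text_run_args("{COMPUTER_VISION_KEY}", "{COMPUTER_VISION_ENDPOINT_URI}")
print(" ".join(args))
```

With this port mapping, the Form Recognizer container would point its Computer Vision billing endpoint at `http://localhost:5001` instead of `http://localhost:5000`.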

articles/cognitive-services/Anomaly-Detector/whats-new.md

Lines changed: 2 additions & 2 deletions

@@ -25,7 +25,7 @@ We've also added links to some user-generated content. Those items will be marke
 | [JAVA](https://search.maven.org/artifact/com.azure/azure-ai-anomalydetector/3.0.0-beta.5/jar) | [MultivariateSample.java](https://github.com/Azure/azure-sdk-for-java/blob/e845677d919d47a2c4837153306b37e5f4ecd795/sdk/anomalydetector/azure-ai-anomalydetector/src/samples/java/com/azure/ai/anomalydetector/MultivariateSample.java)|
 | [JS/TS](https://www.npmjs.com/package/@azure-rest/ai-anomaly-detector/v/1.0.0-beta.1) |[sample_multivariate_detection.ts](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/anomalydetector/ai-anomaly-detector-rest/samples-dev/sample_multivariate_detection.ts)|

-* Check out this AI Show video to learn more about the GA version of Multivariate Anomaly Detection: [AI Show | Multivariate Anomaly Detection is Generally Available](https://learn.microsoft.com/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection).
+* Check out this AI Show video to learn more about the GA version of Multivariate Anomaly Detection: [AI Show | Multivariate Anomaly Detection is Generally Available](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection).

 ### Nov 2022

@@ -100,7 +100,7 @@ We've also added links to some user-generated content. Those items will be marke

 ## Videos

-* Nov 12, 2022 AI Show: [Multivariate Anomaly Detection is GA](https://learn.microsoft.com/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection) (Seth with Louise Han).
+* Nov 12, 2022 AI Show: [Multivariate Anomaly Detection is GA](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection) (Seth with Louise Han).
 * May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](/shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detection APIs with Tony Xing and Seth Juarez
 * April 20, 2021 AI Show Live | Episode 11| New to Anomaly Detector: Multivariate Capabilities - AI Show live recording with Tony Xing and Seth Juarez
 * May 18, 2020 [Inside Anomaly Detector](/shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez

articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md

Lines changed: 10 additions & 14 deletions

@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: how-to
-ms.date: 06/13/2022
+ms.date: 12/27/2022
 ms.author: pafarley
 ms.custom: seodec18, cog-serv-seo-aug-2020
 keywords: on-premises, OCR, Docker, container

@@ -23,9 +23,7 @@ Containers enable you to run the Computer Vision APIs in your own environment. C
 The Read container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).

 ## What's new
-The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you are an existing customer, please follow the [download instructions](#docker-pull-for-the-read-ocr-container) to get started.
-
-## Read 3.2 container
+The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you're an existing customer, follow the [download instructions](#get-the-container-image) to get started.

 The Read 3.2 OCR container is the latest GA model and provides:
 * New models for enhanced accuracy.

@@ -80,18 +78,16 @@ grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detect

 [!INCLUDE [Container requirements and recommendations](includes/container-requirements-and-recommendations.md)]

-## Get the container image with `docker pull`
+## Get the container image

-Container images for Read are available.
+The following container images for Read are available.

 | Container | Container Registry / Repository / Image Name | Tags |
 |-----------|------------|-----------------------------------------|
 | Read 3.2 GA | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30` | latest, 3.2, 3.2-model-2022-04-30 |

 Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image.

-### Docker pull for the Read OCR container
-
 ```bash
 docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30
 ```

@@ -104,10 +100,10 @@ docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-202

 Once the container is on the [host computer](#host-computer-requirements), use the following process to work with the container.

-1. [Run the container](#run-the-container-with-docker-run), with the required billing settings. More [examples](computer-vision-resource-container-config.md) of the `docker run` command are available.
+1. [Run the container](#run-the-container), with the required billing settings. More [examples](computer-vision-resource-container-config.md) of the `docker run` command are available.
 1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).

-## Run the container with `docker run`
+## Run the container

 Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.

@@ -121,7 +117,7 @@ Billing={ENDPOINT_URI} \
 ApiKey={API_KEY}
 ```

-This command:
+The above command:

 * Runs the Read OCR latest GA container from the container image.
 * Allocates 8 CPU core and 16 gigabytes (GB) of memory.

@@ -151,7 +147,7 @@ If you're using Azure Storage to store images for processing, you can create a [
 To find your connection string:

 1. Navigate to **Storage accounts** on the Azure portal, and find your account.
-2. Click on **Access keys** in the left navigation list.
+2. Select on **Access keys** in the left navigation list.
 3. Your connection string will be located below **Connection string**

 [!INCLUDE [Running multiple containers on the same host](../../../includes/cognitive-services-containers-run-multiple-same-host.md)]

@@ -168,7 +164,7 @@ Use the host, `http://localhost:5000`, for container APIs. You can view the Swag

 ### Asynchronous Read

-You can use the `POST /vision/v3.2/read/analyze` and `GET /vision/v3.2/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Computer Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifer to the HTTP GET request.
+You can use the `POST /vision/v3.2/read/analyze` and `GET /vision/v3.2/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Computer Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifier to the HTTP GET request.

 From the swagger UI, select the `Analyze` to expand it in the browser. Then select **Try it out** > **Choose file**. In this example, we'll use the following image:

@@ -301,7 +297,7 @@ You can use the following operation to synchronously read an image.

 `POST /vision/v3.2/read/syncAnalyze`

-When the image is read in its entirety, then and only then does the API return a JSON response. The only exception to this is if an error occurs. When an error occurs the following JSON is returned:
+When the image is read in its entirety, then and only then does the API return a JSON response. The only exception to this behavior is if an error occurs. If an error occurs, the following JSON is returned:

 ```json
 {
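The asynchronous Read flow edited in this file (POST `/vision/v3.2/read/analyze`, then GET `/vision/v3.2/read/operations/{operationId}`) can be sketched as follows. This is a minimal illustration, not a definitive client: it assumes the container is listening on `http://localhost:5000`, that the POST returns an `Operation-Location` header whose last path segment is the `operationId`, and that the in-progress status values are `notStarted` and `running` as in the cloud v3.2 API. `parse_operation_id` and `read_image` are hypothetical helper names.

```python
import json
import time
import urllib.request

BASE = "http://localhost:5000"  # assumed container endpoint

def parse_operation_id(operation_location: str) -> str:
    """Extract the operationId from an Operation-Location header value."""
    return operation_location.rstrip("/").rsplit("/", 1)[-1]

def read_image(image_bytes: bytes, poll_interval: float = 1.0) -> dict:
    """POST the image bytes, then poll the GET operation until it finishes."""
    req = urllib.request.Request(
        f"{BASE}/vision/v3.2/read/analyze",
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # expect 202 Accepted
        op_id = parse_operation_id(resp.headers["Operation-Location"])
    while True:
        with urllib.request.urlopen(f"{BASE}/vision/v3.2/read/operations/{op_id}") as poll:
            result = json.load(poll)
        if result.get("status") not in ("notStarted", "running"):
            return result  # succeeded or failed; caller inspects the payload
        time.sleep(poll_interval)
```

The synchronous `POST /vision/v3.2/read/syncAnalyze` operation would skip the polling loop entirely and return the JSON body of the single POST response.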

articles/cognitive-services/Computer-vision/concept-detecting-adult-content.md

Lines changed: 2 additions & 10 deletions

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: conceptual
-ms.date: 07/05/2022
+ms.date: 12/27/2022
 ms.author: pafarley
 ms.custom: seodec18, ignite-2022
 ---

@@ -33,14 +33,6 @@ The "adult" classification contains several different categories:

 ## Use the API

-#### [Version 3.2](#tab/3-2)
-
-You can detect adult content with the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties&mdash;`isAdultContent`, `isRacyContent`, and `isGoryContent`&mdash;in its JSON response. The method also returns corresponding properties&mdash;`adultScore`, `racyScore`, and `goreScore`&mdash;which represent confidence scores between zero and one for each respective category.
-
-#### [Version 4.0](#tab/4-0)
-
-You can detect adult content with the [Analyze Image](https://aka.ms/vision-4-0-ref) API. When you add the value of `Adult` to the **features** query parameter, the API returns three properties&mdash;`adult`, `racy`, and `gore`&mdash;in its JSON response. Each of these properties contains a boolean value and confidence scores between zero and one.
-
----
+You can detect adult content with the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties&mdash;`isAdultContent`, `isRacyContent`, and `isGoryContent`&mdash;in its JSON response. The method also returns corresponding properties&mdash;`adultScore`, `racyScore`, and `goreScore`&mdash;which represent confidence scores between zero and one for each respective category.

 - [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
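The Analyze Image 3.2 response described in this file carries the three boolean flags (`isAdultContent`, `isRacyContent`, `isGoryContent`) and the three confidence scores (`adultScore`, `racyScore`, `goreScore`). As a hedged sketch, the helper below parses them, assuming they are grouped under an `adult` object as in the v3.2 response body; the function name and the sample payload are hand-written for illustration, not part of any SDK.

```python
import json

def summarize_adult(response_json: str) -> dict:
    """Collapse the v3.2 'adult' section into a flag plus the three scores."""
    adult = json.loads(response_json)["adult"]
    return {
        "flagged": adult["isAdultContent"] or adult["isRacyContent"] or adult["isGoryContent"],
        "scores": {
            "adult": adult["adultScore"],
            "racy": adult["racyScore"],
            "gore": adult["goreScore"],
        },
    }

# Illustrative payload shaped like an Analyze Image 3.2 response.
sample = json.dumps({"adult": {
    "isAdultContent": False, "isRacyContent": True, "isGoryContent": False,
    "adultScore": 0.01, "racyScore": 0.81, "goreScore": 0.02}})
print(summarize_adult(sample))
```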

articles/cognitive-services/Computer-vision/concept-detecting-faces.md

Lines changed: 2 additions & 2 deletions

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: conceptual
-ms.date: 06/13/2022
+ms.date: 12/27/2022
 ms.author: pafarley
 ms.custom: seodec18
 ---

@@ -116,6 +116,6 @@ The next example demonstrates the JSON response returned for an image containing

 ## Use the API

-The face detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.
+The face detection feature is part of the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.

 * [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
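The guidance edited in this file is to include `Faces` in **visualFeatures** and then parse the `"faces"` section of the JSON response. A minimal parsing sketch follows; the `faceRectangle` field is an assumption about the response shape (only the `"faces"` section itself is stated here), and the helper name and sample payload are illustrative, not from any SDK.

```python
import json

def face_rectangles(response_json: str) -> list:
    """Return the bounding rectangle of each entry in the 'faces' section, if any."""
    return [face.get("faceRectangle")  # assumed field name for the bounding box
            for face in json.loads(response_json).get("faces", [])]

# Illustrative payload with a single detected face.
sample = json.dumps({"faces": [
    {"faceRectangle": {"left": 38, "top": 44, "width": 136, "height": 136}}]})
print(face_rectangles(sample))
```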

articles/cognitive-services/Computer-vision/concept-face-detection.md

Lines changed: 2 additions & 2 deletions

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: face-api
 ms.topic: conceptual
-ms.date: 07/20/2022
+ms.date: 12/27/2022
 ms.author: pafarley
 ---

@@ -31,7 +31,7 @@ Try out the capabilities of face detection quickly and easily using Vision Studi

 ## Face ID

-The face ID is a unique identifier string for each detected face in an image. Note that Face ID requires limited access approval by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+The face ID is a unique identifier string for each detected face in an image. Note that Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.

 ## Face landmarks

articles/cognitive-services/Computer-vision/concept-face-recognition.md

Lines changed: 4 additions & 2 deletions

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: face-api
 ms.topic: conceptual
-ms.date: 07/20/2022
+ms.date: 12/27/2022
 ms.author: pafarley
 ---

@@ -22,7 +22,9 @@ You can try out the capabilities of face recognition quickly and easily using Vi
 > [Try Vision Studio](https://portal.vision.cognitive.azure.com/)


-## Recognition operations
+## Face recognition operations
+
+[!INCLUDE [Gate notice](./includes/identity-gate-notice.md)]

 This section details how the underlying operations use the above data structures to identify and verify a face.