`articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md` (1 addition, 1 deletion)

@@ -32,7 +32,7 @@ You'll also need the following to use Form Recognizer containers:
|**Familiarity with Docker**| You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology). |
|**Docker Engine installed**| <ul><li>You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
|**Form Recognizer resource**| A [**single-service Azure Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. To use the containers, you must have the associated key and endpoint URI. Both values are available on the Azure portal Form Recognizer **Keys and Endpoint** page: <ul><li>**{FORM_RECOGNIZER_KEY}**: one of the two available resource keys.</li><li>**{FORM_RECOGNIZER_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></ul>|
-| **Computer Vision API resource** | **To process business cards, ID documents, or Receipts, you'll need a Computer Vision resource.** <ul><li>You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image-with-docker-pull). The usual [billing](#billing) fees apply.</li> <li>If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`). If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of *5000*. </li></ul></br>Pass in both the key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:<ul><li>**{COMPUTER_VISION_KEY}**: one of the two available resource keys.</li><li> **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></ul> |
+| **Computer Vision API resource** | **To process business cards, ID documents, or Receipts, you'll need a Computer Vision resource.** <ul><li>You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image). The usual [billing](#billing) fees apply.</li> <li>If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`). If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of *5000*. </li></ul></br>Pass in both the key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:<ul><li>**{COMPUTER_VISION_KEY}**: one of the two available resource keys.</li><li> **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></ul> |
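For context on the port caveat in that row, here's a minimal sketch of running the **cognitive-services-recognize-text** (Read) container on an alternate host port; the image tag, port choice, and `{PLACEHOLDER}` values are assumptions for illustration, not lines from this diff:

```bash
# Sketch: publish the Read container on host port 5001 so a Form Recognizer
# container on the same host can keep the default port 5000.
# The image tag and {PLACEHOLDER} values are illustrative assumptions.
docker run --rm -it -p 5001:5000 --memory 16g --cpus 8 \
  mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
  Eula=accept \
  Billing={COMPUTER_VISION_ENDPOINT_URI} \
  ApiKey={COMPUTER_VISION_KEY}
```

In that layout, the Form Recognizer container would be given `http://localhost:5001` as its Computer Vision billing endpoint.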

-* Check out this AI Show video to learn more about the GA version of Multivariate Anomaly Detection: [AI Show | Multivariate Anomaly Detection is Generally Available](https://learn.microsoft.com/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection).
+* Check out this AI Show video to learn more about the GA version of Multivariate Anomaly Detection: [AI Show | Multivariate Anomaly Detection is Generally Available](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection).

### Nov 2022

@@ -100,7 +100,7 @@ We've also added links to some user-generated content. Those items will be marke

## Videos

-* Nov 12, 2022 AI Show: [Multivariate Anomaly Detection is GA](https://learn.microsoft.com/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection) (Seth with Louise Han).
+* Nov 12, 2022 AI Show: [Multivariate Anomaly Detection is GA](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection) (Seth with Louise Han).
* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](/shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detection APIs with Tony Xing and Seth Juarez
* April 20, 2021 AI Show Live | Episode 11 | New to Anomaly Detector: Multivariate Capabilities - AI Show live recording with Tony Xing and Seth Juarez
* May 18, 2020 [Inside Anomaly Detector](/shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez

`articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md` (10 additions, 14 deletions)

@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: how-to
-ms.date: 06/13/2022
+ms.date: 12/27/2022
ms.author: pafarley
ms.custom: seodec18, cog-serv-seo-aug-2020
keywords: on-premises, OCR, Docker, container
@@ -23,9 +23,7 @@ Containers enable you to run the Computer Vision APIs in your own environment. C
The Read container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).

## What's new
-The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you are an existing customer, please follow the [download instructions](#docker-pull-for-the-read-ocr-container) to get started.
-
-## Read 3.2 container
+The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you're an existing customer, follow the [download instructions](#get-the-container-image) to get started.

The Read 3.2 OCR container is the latest GA model and provides:
* New models for enhanced accuracy.
@@ -80,18 +78,16 @@ grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detect

[!INCLUDE [Container requirements and recommendations](includes/container-requirements-and-recommendations.md)]

-## Get the container image with `docker pull`
+## Get the container image

-Container images for Read are available.
+The following container images for Read are available.
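As a hedged illustration of the pull step this section introduces (the registry path and tag mirror the GA release named in the What's new hunk above; verify against the article's image table before relying on it):

```bash
# Pull the latest GA Read OCR container image from the Microsoft Container Registry.
docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30
```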
Once the container is on the [host computer](#host-computer-requirements), use the following process to work with the container.

-1. [Run the container](#run-the-container-with-docker-run), with the required billing settings. More [examples](computer-vision-resource-container-config.md) of the `docker run` command are available.
+1. [Run the container](#run-the-container), with the required billing settings. More [examples](computer-vision-resource-container-config.md) of the `docker run` command are available.
1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).

-## Run the container with `docker run`
+## Run the container

Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.

@@ -121,7 +117,7 @@ Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
```

-This command:
+The above command:

* Runs the Read OCR latest GA container from the container image.
* Allocates 8 CPU cores and 16 gigabytes (GB) of memory.
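Assembled from the fragments in this hunk (8 cores, 16 GB, the `Billing`/`ApiKey` lines), a complete form of the command might look like the following sketch; it is not a verbatim copy of the elided source lines:

```bash
# Sketch reassembled from the hunk above: run the Read container, exposing it
# on port 5000 with 8 CPU cores and 16 GB of memory, and pass billing settings.
docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
  mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```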
@@ -151,7 +147,7 @@ If you're using Azure Storage to store images for processing, you can create a [
To find your connection string:

1. Navigate to **Storage accounts** on the Azure portal, and find your account.
-2. Click on **Access keys** in the left navigation list.
+2. Select **Access keys** in the left navigation list.
3. Your connection string will be located below **Connection string**.
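The same string can be fetched without the portal; a minimal sketch, assuming the Azure CLI is installed and signed in (the resource group and account names are placeholders):

```bash
# Print only the connection string for the given storage account.
az storage account show-connection-string \
  --resource-group {RESOURCE_GROUP} \
  --name {STORAGE_ACCOUNT_NAME} \
  --query connectionString --output tsv
```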

[!INCLUDE [Running multiple containers on the same host](../../../includes/cognitive-services-containers-run-multiple-same-host.md)]
@@ -168,7 +164,7 @@ Use the host, `http://localhost:5000`, for container APIs. You can view the Swag

### Asynchronous Read

-You can use the `POST /vision/v3.2/read/analyze` and `GET /vision/v3.2/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Computer Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifer to the HTTP GET request.
+You can use the `POST /vision/v3.2/read/analyze` and `GET /vision/v3.2/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Computer Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifier to the HTTP GET request.
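A minimal sketch of that two-step flow against a local container, assuming it listens on `http://localhost:5000` and that `sample.jpg` is a hypothetical local file:

```bash
# Step 1: submit the image; the Operation-Location response header carries the operation ID.
curl -i -X POST "http://localhost:5000/vision/v3.2/read/analyze" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @sample.jpg

# Step 2: poll for the result with the {operationId} returned above.
curl "http://localhost:5000/vision/v3.2/read/operations/{operationId}"
```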

From the swagger UI, select `Analyze` to expand it in the browser. Then select **Try it out** > **Choose file**. In this example, we'll use the following image:

@@ -301,7 +297,7 @@ You can use the following operation to synchronously read an image.

`POST /vision/v3.2/read/syncAnalyze`

-When the image is read in its entirety, then and only then does the API return a JSON response. The only exception to this is if an error occurs. When an error occurs the following JSON is returned:
+When the image is read in its entirety, then and only then does the API return a JSON response. The only exception to this behavior is if an error occurs. If an error occurs, the following JSON is returned:
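Under the same assumptions (local container on port 5000, hypothetical `sample.jpg`), the synchronous variant returns the full result in the response body:

```bash
# Synchronous read: the JSON result (or the error JSON below) is the response body.
curl -X POST "http://localhost:5000/vision/v3.2/read/syncAnalyze" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @sample.jpg
```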

`articles/cognitive-services/Computer-vision/concept-detecting-adult-content.md` (2 additions, 10 deletions)

@@ -9,7 +9,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: conceptual
-ms.date: 07/05/2022
+ms.date: 12/27/2022
ms.author: pafarley
ms.custom: seodec18, ignite-2022
---
@@ -33,14 +33,6 @@ The "adult" classification contains several different categories:

## Use the API

-#### [Version 3.2](#tab/3-2)
-
-You can detect adult content with the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties—`isAdultContent`, `isRacyContent`, and `isGoryContent`—in its JSON response. The method also returns corresponding properties—`adultScore`, `racyScore`, and `goreScore`—which represent confidence scores between zero and one for each respective category.
-
-#### [Version 4.0](#tab/4-0)
-
-You can detect adult content with the [Analyze Image](https://aka.ms/vision-4-0-ref) API. When you add the value of `Adult` to the **features** query parameter, the API returns three properties—`adult`, `racy`, and `gore`—in its JSON response. Each of these properties contains a boolean value and confidence scores between zero and one.
-
----
+You can detect adult content with the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties—`isAdultContent`, `isRacyContent`, and `isGoryContent`—in its JSON response. The method also returns corresponding properties—`adultScore`, `racyScore`, and `goreScore`—which represent confidence scores between zero and one for each respective category.
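A minimal sketch of that request against a cloud resource; the endpoint, key, and image URL are placeholders, not values from this diff:

```bash
# Request the Adult feature; the response's "adult" object carries the six
# properties described above (placeholders are illustrative).
curl -X POST "{COMPUTER_VISION_ENDPOINT_URI}/vision/v3.2/analyze?visualFeatures=Adult" \
  -H "Ocp-Apim-Subscription-Key: {COMPUTER_VISION_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/image.jpg"}'
```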

- [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)

`articles/cognitive-services/Computer-vision/concept-detecting-faces.md` (2 additions, 2 deletions)

@@ -9,7 +9,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: conceptual
-ms.date: 06/13/2022
+ms.date: 12/27/2022
ms.author: pafarley
ms.custom: seodec18
---
@@ -116,6 +116,6 @@ The next example demonstrates the JSON response returned for an image containing

## Use the API

-The face detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.
+The face detection feature is part of the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.
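A minimal sketch of that call plus the parsing step the paragraph mentions, assuming `jq` is available; the endpoint, key, and image URL are placeholders:

```bash
# Request Faces and extract just the "faces" array from the JSON response.
curl -s -X POST "{COMPUTER_VISION_ENDPOINT_URI}/vision/v3.2/analyze?visualFeatures=Faces" \
  -H "Ocp-Apim-Subscription-Key: {COMPUTER_VISION_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/people.jpg"}' | jq '.faces'
```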

* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)

`articles/cognitive-services/Computer-vision/concept-face-detection.md` (2 additions, 2 deletions)

@@ -9,7 +9,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: face-api
ms.topic: conceptual
-ms.date: 07/20/2022
+ms.date: 12/27/2022
ms.author: pafarley
---

@@ -31,7 +31,7 @@ Try out the capabilities of face detection quickly and easily using Vision Studi

## Face ID

-The face ID is a unique identifier string for each detected face in an image. Note that Face ID requires limited access approval by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+The face ID is a unique identifier string for each detected face in an image. Note that Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
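A minimal sketch of such a call, assuming a Face resource with the limited access approval noted above; the endpoint, key, and image URL are placeholders:

```bash
# Detect faces and return a face ID for each one found.
curl -X POST "{FACE_ENDPOINT_URI}/face/v1.0/detect?returnFaceId=true" \
  -H "Ocp-Apim-Subscription-Key: {FACE_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/portrait.jpg"}'
```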
0 commit comments