articles/ai-services/content-safety/includes/quickstarts/foundry-quickstart-multimodal.md — 1 addition, 1 deletion
@@ -12,7 +12,7 @@ ms.author: pafarley
 ## Prerequisites
 
 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](./overview.md#region-availability), and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](../../overview.md#region-availability), and supported pricing tier. Then select **Create**.
articles/ai-services/content-safety/includes/quickstarts/rest-quickstart-multimodal.md — 5 additions, 5 deletions
@@ -12,7 +12,7 @@ ms.author: pafarley
 ## Prerequisites
 
 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](./overview.md#region-availability), and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](../../overview.md#region-availability), and supported pricing tier. Then select **Create**.
 * The resource takes a few minutes to deploy. After it finishes, select **Go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. Copy the endpoint and either of the key values to a temporary location for later use.
 * One of the following installed:
   * [cURL](https://curl.haxx.se/) for REST API calls.
@@ -27,7 +27,7 @@ The following section walks through a sample multimodal moderation request with
 Choose a sample image to analyze, and download it to your device.
 
-See [Input requirements](./overview.md#input-requirements) for the image limitations. If your format is animated, the service extracts the first frame for analysis.
+See [Input requirements](../../overview.md#input-requirements) for the image limitations. If your format is animated, the service extracts the first frame for analysis.
 
 You can input your image by one of two methods: **local filestream** or **blob storage URL**.
 - **Local filestream** (recommended): Encode your image to base64. You can use a website like [codebeautify](https://codebeautify.org/image-to-base64-converter) to do the encoding. Then save the encoded string to a temporary location.
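If you'd rather not paste an image into a website, the encoding can also be done locally. A minimal sketch, assuming a bash shell with GNU coreutils and a hypothetical local file `sample.jpg`:

```bash
# Produce a single-line base64 string (-w 0 disables line wrapping; on macOS, use `base64 -i sample.jpg` instead).
base64 -w 0 sample.jpg > sample_base64.txt
```

The single-line output is what you paste into the `content` field of the request body described below.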
@@ -81,7 +81,7 @@ The parameters in the request body are defined in this table:
 | **content or blobUrl** | (Required) The content or blob URL of the image. It can be either base64-encoded bytes or a blob URL. If both are given, the request is refused. The maximum allowed size of the image is 7,200 x 7,200 pixels, and the maximum file size is 4 MB. The minimum size of the image is 50 x 50 pixels. | String |
 | **text** | (Optional) The text attached to the image. We support at most 1,000 characters (Unicode code points) in one text request. | String |
 | **enableOcr** | (Required) When set to true, the service performs OCR and analyzes the detected text together with the input image. At most 1,000 characters (Unicode code points) are recognized from the input image; the rest are truncated. | Boolean |
-| **categories** | (Optional) An array of category names. See the [Harm categories guide](./concepts/harm-categories.md) for a list of available category names. If no categories are specified, all four categories are used. We use multiple categories to get scores in a single request. | Enum |
+| **categories** | (Optional) An array of category names. See the [Harm categories guide](../../concepts/harm-categories.md) for a list of available category names. If no categories are specified, all four categories are used. We use multiple categories to get scores in a single request. | Enum |
 
 Open a command prompt window and run the cURL command.
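The full cURL command sits outside this diff's hunks, so here is a rough, non-authoritative sketch assembled from the parameter table above. The `imageWithText:analyze` route, the flat body shape, and every angle-bracket placeholder are assumptions; defer to the complete quickstart for the real values.

```bash
curl --location --request POST '<endpoint>/contentsafety/imageWithText:analyze?api-version=<api-version>' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "content": "<base64_encoded_image>",
  "text": "Text that accompanies the image",
  "enableOcr": true,
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"]
}'
```

Per the table, send either `content` or `blobUrl` but never both, and omit `categories` to have all four analyzed.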
@@ -118,5 +118,5 @@ The JSON fields in the output are defined here:
 
 | Name | Description | Type |
 | :------------- | :--------------- | ------ |
-| **categoriesAnalysis** | Each output class that the API predicts. Classification can be multi-labeled. For example, when an image is uploaded to the image moderation model, it could be classified as both sexual content and violence. [Harm categories](./concepts/harm-categories.md) | String |
-| **Severity** | The severity level of the flag in each harm category. [Harm categories](./concepts/harm-categories.md) | Integer |
+| **categoriesAnalysis** | Each output class that the API predicts. Classification can be multi-labeled. For example, when an image is uploaded to the image moderation model, it could be classified as both sexual content and violence. [Harm categories](../../concepts/harm-categories.md) | String |
+| **Severity** | The severity level of the flag in each harm category. [Harm categories](../../concepts/harm-categories.md) | Integer |
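For orientation, a response shaped by the fields above might look like the following. This is an illustrative sketch only: the severity numbers are invented, and the exact field names and casing should be taken from the live API rather than from this diff.

```json
{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 0 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 2 }
  ]
}
```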