
Commit bc7ebf5

fix links

1 parent 5501f57 commit bc7ebf5

File tree

2 files changed: +6 additions, −6 deletions


articles/ai-services/content-safety/includes/quickstarts/foundry-quickstart-multimodal.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ ms.author: pafarley
 ## Prerequisites

 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](./overview.md#region-availability), and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](../../overview.md#region-availability), and supported pricing tier. Then select **Create**.

 ## Setup

articles/ai-services/content-safety/includes/quickstarts/rest-quickstart-multimodal.md

Lines changed: 5 additions & 5 deletions
@@ -12,7 +12,7 @@ ms.author: pafarley
 ## Prerequisites

 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](./overview.md#region-availability), and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](../../overview.md#region-availability), and supported pricing tier. Then select **Create**.
 * The resource takes a few minutes to deploy. After it finishes, select **Go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. Copy the endpoint and either of the key values to a temporary location for later use.
 * One of the following installed:
   * [cURL](https://curl.haxx.se/) for REST API calls.
@@ -27,7 +27,7 @@ The following section walks through a sample multimodal moderation request with

 Choose a sample image to analyze, and download it to your device.

-See [Input requirements](./overview.md#input-requirements) for the image limitations. If your format is animated, the service will extract the first frame to do the analysis.
+See [Input requirements](../../overview.md#input-requirements) for the image limitations. If your format is animated, the service will extract the first frame to do the analysis.

 You can input your image by one of two methods: **local filestream** or **blob storage URL**.
 - **Local filestream** (recommended): Encode your image to base64. You can use a website like [codebeautify](https://codebeautify.org/image-to-base64-converter) to do the encoding. Then save the encoded string to a temporary location.
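As an alternative to the web-based converter mentioned in the snippet above, the encoding can also be done locally. A minimal sketch, assuming a Unix-like shell with the `base64` utility; `image.jpg` is a hypothetical file name standing in for the downloaded sample image:

```shell
# Stand-in for the downloaded sample image (hypothetical name and contents),
# created only so this sketch is self-contained.
printf 'sampledata' > image.jpg

# Encode to a single-line base64 string and save it for the request body.
# tr strips any line wrapping so the string can be pasted into JSON as-is.
base64 image.jpg | tr -d '\n' > image.b64
```

In a real run you would skip the `printf` line and encode the image you actually downloaded.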
@@ -81,7 +81,7 @@ The parameters in the request body are defined in this table:
 | **content or blobUrl** | (Required) The content or blob URL of the image. It can be either base64-encoded bytes or a blob URL. If both are given, the request is refused. The maximum allowed size of the image is 7,200 x 7,200 pixels, and the maximum file size is 4 MB. The minimum size of the image is 50 x 50 pixels. | String |
 | **text** | (Optional) The text attached to the image. We support at most 1000 characters (Unicode code points) in one text request. | String |
 | **enableOcr** | (Required) When set to true, our service will perform OCR and analyze the detected text with the input image at the same time. We will recognize at most 1000 characters (Unicode code points) from the input image; the rest will be truncated. | Boolean |
-| **categories** | (Optional) This is assumed to be an array of category names. See the [Harm categories guide](./concepts/harm-categories.md) for a list of available category names. If no categories are specified, all four categories are used. We use multiple categories to get scores in a single request. | Enum |
+| **categories** | (Optional) This is assumed to be an array of category names. See the [Harm categories guide](../../concepts/harm-categories.md) for a list of available category names. If no categories are specified, all four categories are used. We use multiple categories to get scores in a single request. | Enum |

 Open a command prompt window and run the cURL command.

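For orientation, the request described above might be sketched as follows. The endpoint path, API version, and JSON field nesting here are assumptions based on common Content Safety REST conventions, not details taken from this commit; check the service's REST reference for the exact multimodal route.

```shell
# Hypothetical resource values -- replace with your own endpoint and key.
ENDPOINT="https://your-resource.cognitiveservices.azure.com"
KEY="your-subscription-key"

# Request body per the parameter table: base64 image content, attached text,
# OCR enabled, and an explicit category list.
BODY='{
  "image": { "content": "<base64-encoded image>" },
  "text": "sample text to analyze",
  "enableOcr": true,
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"]
}'

# Network call; it fails without a real resource, hence the trailing guard.
curl -s -X POST "$ENDPOINT/contentsafety/imageWithText:analyze?api-version=2024-09-15-preview" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d "$BODY" || true
```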
@@ -118,5 +118,5 @@ The JSON fields in the output are defined here:

 | Name | Description | Type |
 | :------------- | :--------------- | ------ |
-| **categoriesAnalysis** | Each output class that the API predicts. Classification can be multi-labeled. For example, when an image is uploaded to the image moderation model, it could be classified as both sexual content and violence. [Harm categories](./concepts/harm-categories.md) | String |
-| **Severity** | The severity level of the flag in each harm category. [Harm categories](./concepts/harm-categories.md) | Integer |
+| **categoriesAnalysis** | Each output class that the API predicts. Classification can be multi-labeled. For example, when an image is uploaded to the image moderation model, it could be classified as both sexual content and violence. [Harm categories](../../concepts/harm-categories.md) | String |
+| **Severity** | The severity level of the flag in each harm category. [Harm categories](../../concepts/harm-categories.md) | Integer |
