Commit 5501f57

committed
add foundry qs
1 parent cda3df7 commit 5501f57

File tree

3 files changed: +159 −109 lines changed
Lines changed: 29 additions & 0 deletions
---
title: "Quickstart: Analyze multimodal content with the AI Foundry portal"
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.custom:
ms.topic: include
ms.date: 07/28/2025
ms.author: pafarley
---

## Prerequisites

* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](./overview.md#region-availability), and supported pricing tier. Then select **Create**.

## Setup

1. Go to the [Azure AI Foundry portal](https://ai.azure.com/) and sign in with the Azure account that has your Content Safety resource.
1. On the left nav, select **AI Services**. On the next page, select **Content Safety**.
1. Select the **Moderate multimodal content** pane.
1. Select your resource in the **Azure AI Services** dropdown menu.

## Analyze multimodal content

Choose one of the provided sample images, or upload your own. You can also enter text to be associated with the image.

When you select **Run test**, the service analyzes the graphic image content, any text that appears in the image, and the provided text associated with the image. If any content type triggers any of the harm category content filters, that information appears in the **Category and risk level detection results** pane.
Lines changed: 122 additions & 0 deletions
---
title: "Quickstart: Analyze multimodal content with the REST API"
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.custom:
ms.topic: include
ms.date: 07/28/2025
ms.author: pafarley
---

## Prerequisites

* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](./overview.md#region-availability), and supported pricing tier. Then select **Create**.
* The resource takes a few minutes to deploy. After it finishes, select **Go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. Copy the endpoint and either of the key values to a temporary location for later use.
* One of the following installed:
  * [cURL](https://curl.haxx.se/) for REST API calls.
  * [Python 3.x](https://www.python.org/)

## Analyze image with text

The following section walks through a sample multimodal moderation request with cURL.

### Prepare a sample image

Choose a sample image to analyze, and download it to your device.

See [Input requirements](./overview.md#input-requirements) for the image limitations. If the image is in an animated format, the service extracts the first frame for analysis.

You can provide your image by one of two methods: **local filestream** or **blob storage URL**.
- **Local filestream** (recommended): Encode your image to base64. You can use a website like [codebeautify](https://codebeautify.org/image-to-base64-converter) to do the encoding. Then save the encoded string to a temporary location.
- **Blob storage URL**: Upload your image to an Azure Blob Storage account. Follow the [blob storage quickstart](/azure/storage/blobs/storage-quickstart-blobs-portal) to learn how to do this. Then open Azure Storage Explorer and get the URL to your image. Save it to a temporary location.
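If you prefer the command line to a website, you can produce the base64 string locally. This is a minimal sketch for Linux (GNU coreutils); the file names are placeholders, and the first line only creates a stand-in file so the commands run end to end:

```shell
# Stand-in for your real image file (replace with your own download).
printf 'hello' > sample-image.png

# Encode the file to a single-line base64 string and save it for later.
base64 -w 0 sample-image.png > image_b64.txt
# On macOS, the equivalent is: base64 -i sample-image.png -o image_b64.txt
```

The `-w 0` flag disables line wrapping so the output can be pasted directly into a JSON string.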
### Analyze content

Paste the command below into a text editor, and make the following changes.

1. Replace `<endpoint>` with your resource endpoint URL.
1. Replace `<your_subscription_key>` with your key.
1. Populate the `"image"` field in the body with either a `"content"` field or a `"blobUrl"` field. For example: `{"image": {"content": "<base_64_string>"}}` or `{"image": {"blobUrl": "<your_storage_url>"}}`.
1. Optionally replace the value of the `"text"` field with the text you'd like to analyze.

```shell
curl --location '<endpoint>/contentsafety/imageWithText:analyze?api-version=2024-09-15-preview' \
  --header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
  --header 'Content-Type: application/json' \
  --data '{
    "image": {
      "content": "<base_64_string>"
    },
    "categories": ["Hate","Sexual","Violence","SelfHarm"],
    "enableOcr": true,
    "text": "I want to kill you"
  }'
```

> [!NOTE]
> If you're using a blob storage URL, the request body should look like this:
>
> ```json
> {
>   "image": {
>     "blobUrl": "<your_storage_url>"
>   }
> }
> ```

The following fields must be included in the URL:

| Name | Required? | Description | Type |
| :------- | :-------- | :--------------- | :------ |
| **API Version** | Required | The API version to use. The current version is `api-version=2024-09-15-preview`. Example: `<endpoint>/contentsafety/imageWithText:analyze?api-version=2024-09-15-preview` | String |

The parameters in the request body are defined in this table:

| Name | Description | Type |
| :--------------------- | :----------------------------------------------------------- | :------- |
| **content or blobUrl** | (Required) The content or blob URL of the image. It can be either base64-encoded bytes or a blob URL. If both are given, the request is refused. The maximum allowed size of the image is 7,200 x 7,200 pixels, and the maximum file size is 4 MB. The minimum size of the image is 50 x 50 pixels. | String |
| **text** | (Optional) The text attached to the image. We support at most 1,000 characters (Unicode code points) in one text request. | String |
| **enableOcr** | (Required) When set to true, the service performs OCR and analyzes the detected text together with the input image. At most 1,000 characters (Unicode code points) are recognized from the input image; the rest are truncated. | Boolean |
| **categories** | (Optional) An array of category names. See the [Harm categories guide](./concepts/harm-categories.md) for a list of available category names. If no categories are specified, all four categories are used. Multiple categories are scored in a single request. | Enum |

Open a command prompt window and run the cURL command.

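For large images, the base64 payload can exceed the shell's command-line length limit. One workaround is to assemble the request body in a file and pass it to cURL with `--data @body.json`. A minimal sketch, where `image_b64.txt` is assumed to hold your encoded image (the first line only writes a stand-in value so the snippet is self-contained):

```shell
# Stand-in for the real base64-encoded image produced in the earlier step.
printf 'aGVsbG8=' > image_b64.txt

# Write the request body to a file so the payload never appears on the command line.
cat > body.json <<EOF
{
  "image": { "content": "$(cat image_b64.txt)" },
  "categories": ["Hate","Sexual","Violence","SelfHarm"],
  "enableOcr": true,
  "text": "I want to kill you"
}
EOF
```

You can then replace the inline `--data '{...}'` in the cURL command above with `--data @body.json`.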
### Interpret the API response

You should see the image and text moderation results displayed as JSON data in the console. For example:

```json
{
  "categoriesAnalysis": [
    {
      "category": "Hate",
      "severity": 2
    },
    {
      "category": "SelfHarm",
      "severity": 0
    },
    {
      "category": "Sexual",
      "severity": 0
    },
    {
      "category": "Violence",
      "severity": 0
    }
  ]
}
```

The JSON fields in the output are defined here:

| Name | Description | Type |
| :------------- | :--------------- | :------ |
| **categoriesAnalysis** | Each output class that the API predicts. Classification can be multi-labeled. For example, when an image is uploaded to the image moderation model, it could be classified as both sexual content and violence. See [Harm categories](./concepts/harm-categories.md). | String |
| **severity** | The severity level of the flag in each harm category. See [Harm categories](./concepts/harm-categories.md). | Integer |
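In an automated pipeline you would typically act on the severity values rather than read them by hand. A minimal sketch using `jq` (assuming it's installed) to count flagged categories in the sample response above; the first command only recreates that sample so the snippet is self-contained:

```shell
# Save the sample response (in practice, capture the cURL output instead).
cat > response.json <<'EOF'
{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 2 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 0 }
  ]
}
EOF

# Count categories flagged above severity 0; a nonzero count means the content was flagged.
jq '[.categoriesAnalysis[] | select(.severity > 0)] | length' response.json
# → 1
```

Your application can compare each severity against its own threshold per category rather than treating any nonzero value as a block.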

articles/ai-services/content-safety/quickstart-multimodal.md

Lines changed: 8 additions & 109 deletions
@@ -6,9 +6,9 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
 ms.topic: quickstart
-ms.date: 03/26/2025
+ms.date: 07/28/2025
 ms.author: pafarley
-# zone_pivot_groups: programming-languages-content-safety
+zone_pivot_groups: programming-languages-content-safety-foundry-rest
 ---
 
 # Quickstart: Analyze multimodal content (preview)
@@ -20,115 +20,14 @@ For more information on how content is filtered, see the [Harm categories concep
 > [!IMPORTANT]
 > This feature is only available in certain Azure regions. See [Region availability](./overview.md#region-availability).
+::: zone pivot="programming-language-foundry-portal"
 
-## Prerequisites
+[!INCLUDE [Studio quickstart](./includes/quickstarts/foundry-quickstart-multimodal.md)]
 
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](./overview.md#region-availability), and supported pricing tier. Then select **Create**.
-* The resource takes a few minutes to deploy. After it finishes, Select **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. Copy the endpoint and either of the key values to a temporary location for later use.
-* One of the following installed:
-  * [cURL](https://curl.haxx.se/) for REST API calls.
-  * [Python 3.x](https://www.python.org/) installed
+::: zone-end
 
+::: zone pivot="programming-language-rest"
 
-## Analyze image with text
+[!INCLUDE [cURL quickstart](./includes/quickstarts/rest-quickstart-multimodal.md)]
 
-The following section walks through a sample multimodal moderation request with cURL.
-
-### Prepare a sample image
-
-Choose a sample image to analyze, and download it to your device.
-
-See [Input requirements](./overview.md#input-requirements) for the image limitations. If your format is animated, the service will extract the first frame to do the analysis.
-
-You can input your image by one of two methods: **local filestream** or **blob storage URL**.
-- **Local filestream** (recommended): Encode your image to base64. You can use a website like [codebeautify](https://codebeautify.org/image-to-base64-converter) to do the encoding. Then save the encoded string to a temporary location.
-- **Blob storage URL**: Upload your image to an Azure Blob Storage account. Follow the [blob storage quickstart](/azure/storage/blobs/storage-quickstart-blobs-portal) to learn how to do this. Then open Azure Storage Explorer and get the URL to your image. Save it to a temporary location.
-
-### Analyze content
-
-Paste the command below into a text editor, and make the following changes.
-
-1. Replace `<endpoint>` with your resource endpoint URL.
-1. Replace `<your_subscription_key>` with your key.
-1. Populate the `"image"` field in the body with either a `"content"` field or a `"blobUrl"` field. For example: `{"image": {"content": "<base_64_string>"}` or `{"image": {"blobUrl": "<your_storage_url>"}`.
-1. Optionally replace the value of the `"text"` field with your own text you'd like to analyze.
-
-```shell
-curl --location '<endpoint>/contentsafety/imageWithText:analyze?api-version=2024-09-15-preview ' \
---header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
---header 'Content-Type: application/json' \
---data '{
-  "image": {
-    "content": "<base_64_string>"
-  },
-  "categories": ["Hate","Sexual","Violence","SelfHarm"],
-  "enableOcr": true,
-  "text": "I want to kill you"
-}'
-```
-
-> [!NOTE]
-> If you're using a blob storage URL, the request body should look like this:
->
-> ```
-> {
->   "image": {
->     "blobUrl": "<your_storage_url>"
->   }
-> }
-> ```
-
-
-The below fields must be included in the URL:
-
-| Name |Required? | Description | Type |
-| :------- |-------- |:--------------- | ------ |
-| **API Version** |Required |This is the API version to be checked. Current version is: `api-version=2024-09-15`. Example: `<endpoint>/contentsafety/imageWithText:analyze?api-version=2024-09-15` | String |
-
-The parameters in the request body are defined in this table:
-
-| Name | Description | Type |
-| :--------------------- | :----------------------------------------------------------- | ------- |
-| **content or blobUrl** | (Required) The content or blob URL of the image. I can be either base64-encoded bytes or a blob URL. If both are given, the request is refused. The maximum allowed size of the image is 7,200 x 7,200 pixels, and the maximum file size is 4 MB. The minimum size of the image is 50 pixels x 50 pixels. | String |
-| **text** | (Optional) The text attached to the image. We support at most 1000 characters (unicode code points) in one text request. | String |
-| **enableOcr** | (Required) When set to true, our service will perform OCR and analyze the detected text with input image at the same time. We will recognize at most 1000 characters (unicode code points) from input image. The others will be truncated. | Boolean |
-| **categories** | (Optional) This is assumed to be an array of category names. See the [Harm categories guide](./concepts/harm-categories.md) for a list of available category names. If no categories are specified, all four categories are used. We use multiple categories to get scores in a single request. | Enum |
-
-Open a command prompt window and run the cURL command.
-
-
-### Interpret the API response
-
-
-You should see the image and text moderation results displayed as JSON data in the console. For example:
-
-```json
-{
-  "categoriesAnalysis": [
-    {
-      "category": "Hate",
-      "severity": 2
-    },
-    {
-      "category": "SelfHarm",
-      "severity": 0
-    },
-    {
-      "category": "Sexual",
-      "severity": 0
-    },
-    {
-      "category": "Violence",
-      "severity": 0
-    }
-  ]
-}
-```
-
-The JSON fields in the output are defined here:
-
-| Name | Description | Type |
-| :------------- | :--------------- | ------ |
-| **categoriesAnalysis** | Each output class that the API predicts. Classification can be multi-labeled. For example, when an image is uploaded to the image moderation model, it could be classified as both sexual content and violence. [Harm categories](./concepts/harm-categories.md)| String |
-| **Severity** | The severity level of the flag in each harm category. [Harm categories](./concepts/harm-categories.md) | Integer |
+::: zone-end
