You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
description: In this quickstart, get started with the Content Safety service using Content Safety Studio in your browser.
4
+
description: Learn how to get started with the Content Safety service using Content Safety Studio in your browser.
5
5
#services: cognitive-services
6
6
author: PatrickFarley
7
7
manager: nitinme
8
8
ms.service: azure-ai-content-safety
9
9
ms.custom: build-2023, build-2023-dataai
10
10
ms.topic: quickstart
11
-
ms.date: 02/14/2024
11
+
ms.date: 10/01/2024
12
12
ms.author: pafarley
13
13
---
14
14
15
15
# QuickStart: Azure AI Content Safety Studio
16
16
17
-
In this quickstart, get started with the Azure AI Content Safety service using Content Safety Studio in your browser.
17
+
This article explains how you can get started with the Azure AI Content Safety service using Content Safety Studio in your browser.
18
18
19
19
> [!CAUTION]
20
-
> Some of the sample content provided by Content Safety Studio may be offensive. Sample images are blurred by default. User discretion is advised.
20
+
> Some of the sample content provided by Content Safety Studio might be offensive. Sample images are blurred by default. User discretion is advised.
21
21
22
22
## Prerequisites
23
23
24
-
* An active Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
24
+
* An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
25
25
* A [Content Safety](https://aka.ms/acs-create) Azure resource.
26
-
* Assign `Cognitive Services User` role to your account. Go to the [Azure Portal](https://portal.azure.com/), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar, then select **+ Add role assignment**, choose the `Cognitive Services User` role and select the member of your account that you need to assign this role to, then review and assign. It might take few minutes for the assignment to take effect.
27
-
* Sign in to [Content Safety Studio](https://contentsafety.cognitive.azure.com) with your Azure subscription and Content Safety resource.
26
+
* Assign the *Cognitive Services User* role to your account. Go to the [Azure portal](https://portal.azure.com), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar, then select **+ Add role assignment**, choose the *Cognitive Services User* role, and select the member of your account that you need to assign this role to, then review and assign. It might take a few minutes for the assignment to take effect.
27
+
* Sign in to [Content Safety Studio](https://contentsafety.cognitive.azure.com) with your Azure subscription and Content Safety resource.
28
28
29
29
> [!IMPORTANT]
30
-
> * You must assign the `Cognitive Services User` role to your Azure account to use the studio experience. Go to the [Azure Portal](https://portal.azure.com/), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar, then select **+ Add role assignment**, choose the `Cognitive Services User` role and select the member of your account that you need to assign this role to, then review and assign. It might take few minutes for the assignment to take effect.
31
-
30
+
> You must assign the *Cognitive Services User* role to your Azure account to use the studio experience. Go to the [Azure portal](https://portal.azure.com), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar, then select **+ Add role assignment**, choose the *Cognitive Services User* role, and select the member of your account that you need to assign this role to, then review and assign. It might take few minutes for the assignment to take effect.
32
31
33
32
## Analyze text content
34
-
The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page provides capability for you to quickly try out text moderation.
35
33
36
-
:::image type="content" source="media/analyzetext.png" alt-text="Screenshot of Analyze Text panel.":::
34
+
The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page provides the capability for you to quickly try out text moderation.
35
+
36
+
:::image type="content" source="media/analyze-text.png" alt-text="Screenshot of Analyze Text panel.":::
37
37
38
38
1. Select the **Moderate text content** panel.
39
39
1. Add text to the input field, or select sample text from the panels on the page.
@@ -43,15 +43,15 @@ The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page
43
43
> See [Input requirements](./overview.md#input-requirements) for maximum text length limitations.
44
44
1. Select **Run test**.
45
45
46
-
The service returns all the categories that were detected, with the severity level for each(0-Safe, 2-Low, 4-Medium, 6-High). It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
46
+
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
47
47
48
-
The **Use blocklist** tab on the right lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
48
+
The **Use blocklist** tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
49
49
50
50
## Detect user input attacks
51
51
52
52
The **Prompt Shields** panel lets you try out user input risk detection. Detect User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or break the rules set in the System Message. These attacks can vary from intricate role-play to subtle subversion of the safety objective.
53
53
54
-
:::image type="content" source="media/jailbreak-panel.png" alt-text="Screenshot of content safety studio with user input risk detection panel selected.":::
54
+
:::image type="content" source="media/prompt-shields.png" alt-text="Screenshot of content safety studio with Prompt Shields panel selected.":::
55
55
56
56
1. Select the **Prompt Shields** panel.
57
57
1. Select a sample text on the page, or input your own content for testing. You can also upload a CSV file to do a batch test.
@@ -62,25 +62,26 @@ The service returns the risk flag and type for each sample.
62
62
For more information, see the [Prompt Shields conceptual guide](./concepts/jailbreak-detection.md).
63
63
64
64
## Analyze image content
65
+
65
66
The [Moderate image content](https://contentsafety.cognitive.azure.com/image) page provides capability for you to quickly try out image moderation.
66
67
67
-
:::image type="content" source="media/analyzeimage.png" alt-text="Screenshot of Analyze Image panel.":::
68
+
:::image type="content" source="media/analyze-image.png" alt-text="Screenshot of Analyze Image panel.":::
68
69
69
70
1. Select the **Moderate image content** panel.
70
71
1. Select a sample image from the panels on the page, or upload your own image. The maximum size for image submissions is 4 MB, and image dimensions must be between 50 x 50 pixels and 2,048 x 2,048 pixels. Images can be in JPEG, PNG, GIF, BMP, TIFF, or WEBP formats.
71
72
1. Select **Run test**.
72
73
73
-
The service returns all the categories that were detected, with the severity level for each(0-Safe, 2-Low, 4-Medium, 6-High). It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
74
+
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
74
75
75
76
## View and export code
76
-
You can use the **View Code** feature in both *Analyze text content* or *Analyze image content* page to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
77
77
78
-
:::image type="content" source="media/viewcode.png" alt-text="Screenshot of the View code.":::
78
+
You can use the **View Code** feature in either the *Analyze text content* or *Analyze image content* pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
79
79
80
+
:::image type="content" source="media/view-code.png" alt-text="Screenshot of the View code window.":::
80
81
81
82
## Monitor online activity
82
83
83
-
The [Monitor online activity](https://contentsafety.cognitive.azure.com/monitor)page lets you view your API usage and trends.
84
+
The [Monitor online activity](https://contentsafety.cognitive.azure.com/monitor)panel lets you view your API usage and trends.
84
85
85
86
:::image type="content" source="media/monitor.png" alt-text="Screenshot of Monitoring panel.":::
86
87
@@ -93,6 +94,7 @@ In the **Reject rate per category** chart, you can also adjust the severity thre
93
94
You can also edit blocklists if you want to change some terms, based on the **Top 10 blocked terms** chart.
94
95
95
96
## Manage your resource
97
+
96
98
To view resource details such as name and pricing tier, select the **Settings** icon in the top-right corner of the Content Safety Studio home page and select the **Resource** tab. If you have other resources, you can switch resources here as well.
97
99
98
100
:::image type="content" source="media/manage-resource.png" alt-text="Screenshot of Manage Resource.":::
@@ -104,9 +106,9 @@ If you want to clean up and remove an Azure AI services resource, you can delete
Next, get started using Azure AI Content Safety through the REST APIs or a client SDK, so you can seamlessly integrate the service into your application.
110
112
111
113
> [!div class="nextstepaction"]
112
-
> [Quickstart: REST API and client SDKs](./quickstart-text.md)
114
+
> [Quickstart: Analyze text content](./quickstart-text.md)
Copy file name to clipboardExpand all lines: articles/ai-services/custom-vision-service/get-started-build-detector.md
+1-1Lines changed: 1 addition & 1 deletion
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -84,7 +84,7 @@ To upload another set of images, return to the top of this section and repeat th
84
84
85
85
To train the detector model, select the **Train** button. The detector uses all of the current images and their tags to create a model that identifies each tagged object. This process can take several minutes.
86
86
87
-

87
+

88
88
89
89
The training process should only take a few minutes. During this time, information about the training process is displayed in the **Performance** tab.
0 commit comments