Skip to content

Commit 14415c9

Browse files
authored
Merge pull request #600 from cdpark/refresh-sept-patrickfarley-3
Feature 308280: Q&M: AI Services freshness for 180d target - Batch 3
2 parents e138f3b + a2b618a commit 14415c9

File tree

16 files changed

+47
-49
lines changed

16 files changed

+47
-49
lines changed
37.5 KB
Loading
73.3 KB
Loading
Binary file not shown.
Binary file not shown.
-86.1 KB
Loading
73.2 KB
Loading
Lines changed: 23 additions & 21 deletions
Original file line numberDiff line numberDiff line change
@@ -1,39 +1,39 @@
11
---
22
title: "Quickstart: Content Safety Studio"
33
titleSuffix: "Azure AI services"
4-
description: In this quickstart, get started with the Content Safety service using Content Safety Studio in your browser.
4+
description: Learn how to get started with the Content Safety service using Content Safety Studio in your browser.
55
#services: cognitive-services
66
author: PatrickFarley
77
manager: nitinme
88
ms.service: azure-ai-content-safety
99
ms.custom: build-2023, build-2023-dataai
1010
ms.topic: quickstart
11-
ms.date: 02/14/2024
11+
ms.date: 10/01/2024
1212
ms.author: pafarley
1313
---
1414

1515
# QuickStart: Azure AI Content Safety Studio
1616

17-
In this quickstart, get started with the Azure AI Content Safety service using Content Safety Studio in your browser.
17+
This article explains how you can get started with the Azure AI Content Safety service using Content Safety Studio in your browser.
1818

1919
> [!CAUTION]
20-
> Some of the sample content provided by Content Safety Studio may be offensive. Sample images are blurred by default. User discretion is advised.
20+
> Some of the sample content provided by Content Safety Studio might be offensive. Sample images are blurred by default. User discretion is advised.
2121
2222
## Prerequisites
2323

24-
* An active Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
24+
* An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
2525
* A [Content Safety](https://aka.ms/acs-create) Azure resource.
26-
* Assign `Cognitive Services User` role to your account. Go to the [Azure Portal](https://portal.azure.com/), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar, then select **+ Add role assignment**, choose the `Cognitive Services User` role and select the member of your account that you need to assign this role to, then review and assign. It might take few minutes for the assignment to take effect.
27-
* Sign in to [Content Safety Studio](https://contentsafety.cognitive.azure.com) with your Azure subscription and Content Safety resource.
26+
* Assign the *Cognitive Services User* role to your account. Go to the [Azure portal](https://portal.azure.com), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar, then select **+ Add role assignment**, choose the *Cognitive Services User* role, and select the member of your account that you need to assign this role to, then review and assign. It might take a few minutes for the assignment to take effect.
27+
* Sign in to [Content Safety Studio](https://contentsafety.cognitive.azure.com) with your Azure subscription and Content Safety resource.
2828

2929
> [!IMPORTANT]
30-
> * You must assign the `Cognitive Services User` role to your Azure account to use the studio experience. Go to the [Azure Portal](https://portal.azure.com/), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar, then select **+ Add role assignment**, choose the `Cognitive Services User` role and select the member of your account that you need to assign this role to, then review and assign. It might take few minutes for the assignment to take effect.
31-
30+
> You must assign the *Cognitive Services User* role to your Azure account to use the studio experience. Go to the [Azure portal](https://portal.azure.com), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar, then select **+ Add role assignment**, choose the *Cognitive Services User* role, and select the member of your account that you need to assign this role to, then review and assign. It might take few minutes for the assignment to take effect.
3231
3332
## Analyze text content
34-
The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page provides capability for you to quickly try out text moderation.
3533

36-
:::image type="content" source="media/analyzetext.png" alt-text="Screenshot of Analyze Text panel.":::
34+
The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page provides the capability for you to quickly try out text moderation.
35+
36+
:::image type="content" source="media/analyze-text.png" alt-text="Screenshot of Analyze Text panel.":::
3737

3838
1. Select the **Moderate text content** panel.
3939
1. Add text to the input field, or select sample text from the panels on the page.
@@ -43,15 +43,15 @@ The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page
4343
> See [Input requirements](./overview.md#input-requirements) for maximum text length limitations.
4444
1. Select **Run test**.
4545

46-
The service returns all the categories that were detected, with the severity level for each(0-Safe, 2-Low, 4-Medium, 6-High). It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
46+
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
4747

48-
The **Use blocklist** tab on the right lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
48+
The **Use blocklist** tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
4949

5050
## Detect user input attacks
5151

5252
The **Prompt Shields** panel lets you try out user input risk detection. Detect User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or break the rules set in the System Message. These attacks can vary from intricate role-play to subtle subversion of the safety objective.
5353

54-
:::image type="content" source="media/jailbreak-panel.png" alt-text="Screenshot of content safety studio with user input risk detection panel selected.":::
54+
:::image type="content" source="media/prompt-shields.png" alt-text="Screenshot of content safety studio with Prompt Shields panel selected.":::
5555

5656
1. Select the **Prompt Shields** panel.
5757
1. Select a sample text on the page, or input your own content for testing. You can also upload a CSV file to do a batch test.
@@ -62,25 +62,26 @@ The service returns the risk flag and type for each sample.
6262
For more information, see the [Prompt Shields conceptual guide](./concepts/jailbreak-detection.md).
6363

6464
## Analyze image content
65+
6566
The [Moderate image content](https://contentsafety.cognitive.azure.com/image) page provides capability for you to quickly try out image moderation.
6667

67-
:::image type="content" source="media/analyzeimage.png" alt-text="Screenshot of Analyze Image panel.":::
68+
:::image type="content" source="media/analyze-image.png" alt-text="Screenshot of Analyze Image panel.":::
6869

6970
1. Select the **Moderate image content** panel.
7071
1. Select a sample image from the panels on the page, or upload your own image. The maximum size for image submissions is 4 MB, and image dimensions must be between 50 x 50 pixels and 2,048 x 2,048 pixels. Images can be in JPEG, PNG, GIF, BMP, TIFF, or WEBP formats.
7172
1. Select **Run test**.
7273

73-
The service returns all the categories that were detected, with the severity level for each(0-Safe, 2-Low, 4-Medium, 6-High). It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
74+
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
7475

7576
## View and export code
76-
You can use the **View Code** feature in both *Analyze text content* or *Analyze image content* page to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
7777

78-
:::image type="content" source="media/viewcode.png" alt-text="Screenshot of the View code.":::
78+
You can use the **View Code** feature in either the *Analyze text content* or *Analyze image content* pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
7979

80+
:::image type="content" source="media/view-code.png" alt-text="Screenshot of the View code window.":::
8081

8182
## Monitor online activity
8283

83-
The [Monitor online activity](https://contentsafety.cognitive.azure.com/monitor) page lets you view your API usage and trends.
84+
The [Monitor online activity](https://contentsafety.cognitive.azure.com/monitor) panel lets you view your API usage and trends.
8485

8586
:::image type="content" source="media/monitor.png" alt-text="Screenshot of Monitoring panel.":::
8687

@@ -93,6 +94,7 @@ In the **Reject rate per category** chart, you can also adjust the severity thre
9394
You can also edit blocklists if you want to change some terms, based on the **Top 10 blocked terms** chart.
9495

9596
## Manage your resource
97+
9698
To view resource details such as name and pricing tier, select the **Settings** icon in the top-right corner of the Content Safety Studio home page and select the **Resource** tab. If you have other resources, you can switch resources here as well.
9799

98100
:::image type="content" source="media/manage-resource.png" alt-text="Screenshot of Manage Resource.":::
@@ -104,9 +106,9 @@ If you want to clean up and remove an Azure AI services resource, you can delete
104106
- [Azure portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
105107
- [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
106108

107-
## Next steps
109+
## Next step
108110

109111
Next, get started using Azure AI Content Safety through the REST APIs or a client SDK, so you can seamlessly integrate the service into your application.
110112

111113
> [!div class="nextstepaction"]
112-
> [Quickstart: REST API and client SDKs](./quickstart-text.md)
114+
> [Quickstart: Analyze text content](./quickstart-text.md)

articles/ai-services/custom-vision-service/get-started-build-detector.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -84,7 +84,7 @@ To upload another set of images, return to the top of this section and repeat th
8484

8585
To train the detector model, select the **Train** button. The detector uses all of the current images and their tags to create a model that identifies each tagged object. This process can take several minutes.
8686

87-
![The train button in the top right of the web page's header toolbar](./media/getting-started-build-a-classifier/train01.png)
87+
![The train button in the top right of the web page's header toolbar](./media/getting-started-build-a-classifier/train-1.png)
8888

8989
The training process should only take a few minutes. During this time, information about the training process is displayed in the **Performance** tab.
9090

0 commit comments

Comments
 (0)