Commit e425643

add images
1 parent 7adb327 commit e425643

5 files changed: 7 additions, 4 deletions

articles/ai-studio/ai-services/how-to/content-safety.md (7 additions, 4 deletions)
```diff
@@ -27,9 +27,8 @@ Follow these steps to use the Content Safety **try it out** page:
 1. Go to [AI Studio](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
 tbd image
 1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
-tbd image
-
-
+
+:::image type="content" source="../../media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
 ## Analyze text

 1. Select the **Moderate text content** panel.
```
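The adjustable severity thresholds mentioned above can be illustrated with a minimal local sketch. The function and threshold names below are hypothetical, not the Azure Content Safety SDK; the sketch only shows the general idea that a harm category is flagged when its severity score meets or exceeds the threshold you configured:

```python
# Minimal sketch of severity-threshold filtering (hypothetical names, not
# the actual service SDK). Each harm category receives a severity score;
# content is flagged for a category when that score meets or exceeds the
# threshold configured on the "Try it out" page.

DEFAULT_THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 2, "Violence": 2}

def filter_by_severity(category_severities, thresholds=DEFAULT_THRESHOLDS):
    """Return the categories whose severity meets or exceeds its threshold."""
    return sorted(
        category
        for category, severity in category_severities.items()
        if severity >= thresholds.get(category, 0)
    )

flagged = filter_by_severity({"Hate": 0, "Violence": 4, "Sexual": 1, "SelfHarm": 0})
print(flagged)  # categories at or above their configured thresholds
```

Raising a category's threshold makes the filter more permissive for that category; lowering it flags milder content.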
```diff
@@ -39,7 +38,9 @@ Follow these steps to use the Content Safety **try it out** page:

 ### Use a blocklist

-The **Use blocklist** tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
+The **Use blocklist** tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
+
+:::image type="content" source="../../media/content-safety/blocklist-panel.png" alt-text="Screenshot of the Use blocklist panel.":::

 ## Analyze images

```
```diff
@@ -54,6 +55,8 @@ The **Moderate image** page provides capability for you to quickly try out image

 You can use the **View Code** feature in either the **Analyze text content** or **Analyze image content** pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.

+:::image type="content" source="../../media/content-safety/view-code-option.png" alt-text="Screenshot of the View code button.":::
+
 ## Use Prompt Shields

 The **Prompt Shields** panel lets you try out user input risk detection. Detect User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or break the rules set in the System Message. These attacks can vary from intricate role-play to subtle subversion of the safety objective.
```
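A Prompt Shields check like the one described above can be sketched as request assembly only (no network call). The `text:shieldPrompt` path, the `api-version` value, and the header names below are assumptions based on the public Content Safety REST API, not taken from this commit:

```python
import json

# Assumed REST shape for a Prompt Shields call; verify against the current
# Azure AI Content Safety REST API reference before use.
API_VERSION = "2024-09-01"  # assumption, not from this commit

def build_shield_prompt_request(endpoint, key, user_prompt, documents=None):
    """Assemble (url, headers, body) for a hypothetical text:shieldPrompt call.

    `user_prompt` is the user input to screen for jailbreak-style attacks;
    `documents` carries any grounding documents to screen alongside it.
    """
    url = (
        f"{endpoint.rstrip('/')}/contentsafety/text:shieldPrompt"
        f"?api-version={API_VERSION}"
    )
    headers = {
        "Ocp-Apim-Subscription-Key": key,  # resource key auth (assumed header)
        "Content-Type": "application/json",
    }
    body = json.dumps({"userPrompt": user_prompt, "documents": documents or []})
    return url, headers, body
```

The returned tuple could then be handed to any HTTP client; the response would indicate whether an attack was detected in the user prompt or documents.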
Four image files added (binary): 38.2 KB, 33.1 KB, 263 KB, and 151 KB.
