
Commit 057e37f

foundry freshness

1 parent e73d5c6 commit 057e37f

4 files changed: +16 −16 lines changed

articles/ai-studio/ai-services/content-safety-overview.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -7,16 +7,16 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: overview
-ms.date: 11/09/2024
+ms.date: 02/20/2025
 ms.author: pafarley
 author: PatrickFarley
 ---
 
-# Content safety in the Azure AI Foundry portal
+# Content Safety in the Azure AI Foundry portal
 
 Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes various APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try out** page in [Azure AI Foundry portal](https://ai.azure.com) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
 
-## Features
+## Features
 
 You can use Azure AI Content Safety for many scenarios:
 
@@ -44,6 +44,6 @@ You can use Azure AI Content Safety for many scenarios:
 Refer to the [Content Safety overview](/azure/ai-services/content-safety/overview) for supported regions, rate limits, and input requirements for all features. Refer to the [Language support](/azure/ai-services/content-safety/language-support) page for supported languages.
 
 
-## Next step
+## Next step
 
 Get started using Azure AI Content Safety in [Azure AI Foundry portal](https://ai.azure.com) by following the [How-to guide](./how-to/content-safety.md).
```
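For context on the detection APIs this page describes, a minimal sketch of a text-analysis call with the `azure-ai-contentsafety` Python SDK might look like the following; the endpoint and key are placeholders for your own resource values:

```python
# Minimal sketch: analyze a string for harmful content with Azure AI Content Safety.
# Assumes the azure-ai-contentsafety package is installed; <resource> and <key>
# are placeholders for your own Content Safety resource values.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)

# Analyze a user-supplied string across the built-in harm categories.
response = client.analyze_text(AnalyzeTextOptions(text="Text to check."))

# Each entry reports a harm category (for example, Hate or Violence) and a severity score.
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```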

articles/ai-studio/how-to/use-blocklists.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: how-to
-ms.date: 11/07/2024
+ms.date: 02/20/2025
 ms.author: pafarley
 author: PatrickFarley
 ---
```

articles/ai-studio/includes/use-blocklists.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -6,7 +6,7 @@ ms.reviewer: pafarley
 ms.author: pafarley
 ms.service: azure-ai-foundry
 ms.topic: include
-ms.date: 12/05/2024
+ms.date: 02/20/2025
 ms.custom: include
 ---
 
@@ -17,19 +17,19 @@ ms.custom: include
 
 :::image type="content" source="../media/content-safety/content-filter/select-blocklists.png" lightbox="../media/content-safety/content-filter/select-blocklists.png" alt-text="Screenshot of the Blocklists page tab.":::
 
-2. Select **Create a blocklist**. Enter a name for your blocklist, add a description, and select an Azure OpenAI resource to connect it to. Then select **Create Blocklist**.
+1. Select **Create a blocklist**. Enter a name for your blocklist, add a description, and select an Azure OpenAI resource to connect it to. Then select **Create Blocklist**.
 
-3. Select your new blocklist once it's created. On the blocklist's page, select **Add new term**.
+1. Select your new blocklist once it's created. On the blocklist's page, select **Add new term**.
 
-4. Enter the term that should be filtered and select **Add term**. You can also use a regex. You can delete each term in your blocklist.
+1. Enter the term that should be filtered and select **Add term**. You can also use a regex. You can delete each term in your blocklist.
 
 ## Attach a blocklist to a content filter configuration
 
 1. Once the blocklist is ready, go back to the **Safety+ Security** page and select the **Content filters** tab. Create a new content filter configuration. This opens a wizard with several AI content safety components.
 
 :::image type="content" source="../media/content-safety/content-filter/create-content-filter.png" lightbox="../media/content-safety/content-filter/create-content-filter.png" alt-text="Screenshot of the Create content filter button.":::
 
-2. On the **Input filter** and **Output filter** screens, toggle the **Blocklist** button on. You can then select a blocklist from the list.
+1. On the **Input filter** and **Output filter** screens, toggle the **Blocklist** button on. You can then select a blocklist from the list.
 There are two types of blocklists: the custom blocklists you created, and prebuilt blocklists that Microsoft provides—in this case a Profanity blocklist (English).
 
-3. You can now decide which of the available blocklists you want to include in your content filtering configuration. The last step is to review and finish the content filtering configuration by selecting **Next**. You can always go back and edit your configuration. Once it's ready, select a **Create content filter**. The new configuration that includes your blocklists can now be applied to a deployment.
+1. You can now decide which of the available blocklists you want to include in your content filtering configuration. The last step is to review and finish the content filtering configuration by selecting **Next**. You can always go back and edit your configuration. Once it's ready, select **Create content filter**. The new configuration that includes your blocklists can now be applied to a deployment.
```
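The steps in this include file use the portal. If you'd rather script blocklist creation, a minimal sketch with the `azure-ai-contentsafety` Python SDK could look like the following; the endpoint, key, and blocklist name `my-blocklist` are illustrative placeholders, and this targets the Content Safety blocklist API directly rather than the portal's Azure OpenAI flow:

```python
# Minimal sketch: create a blocklist and add terms programmatically.
# Assumes the azure-ai-contentsafety package; <resource>, <key>, and the
# blocklist name "my-blocklist" are illustrative placeholders.
from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import (
    AddOrUpdateTextBlocklistItemsOptions,
    TextBlocklist,
    TextBlocklistItem,
)
from azure.core.credentials import AzureKeyCredential

client = BlocklistClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)

# Create (or update) the named blocklist.
client.create_or_update_text_blocklist(
    blocklist_name="my-blocklist",
    options=TextBlocklist(blocklist_name="my-blocklist", description="Terms to filter"),
)

# Add the terms that should be blocked.
client.add_or_update_blocklist_items(
    blocklist_name="my-blocklist",
    options=AddOrUpdateTextBlocklistItemsOptions(
        blocklist_items=[TextBlocklistItem(text="term to block")]
    ),
)
```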

articles/ai-studio/responsible-use-of-ai-overview.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,30 +1,30 @@
 ---
 title: Responsible AI for Azure AI Foundry
 titleSuffix: Azure AI Foundry
-description: Learn how to use AI responsibly with Azure AI Foundry.
+description: Learn how to use AI services and features responsibly with Azure AI Foundry.
 manager: nitinme
 keywords: Azure AI services, cognitive
 ms.service: azure-ai-foundry
 ms.topic: overview
-ms.date: 11/06/2024
+ms.date: 02/20/2025
 ms.author: pafarley
 author: PatrickFarley
 ms.custom: ignite-2024
 ---
 
 # Responsible AI for Azure AI Foundry
 
-This article aims to provide an overview of the resources available to help you use AI responsibly. Our recommended essential development steps are grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI), which sets policy requirements that our own engineering teams follow. Much of the content of the Standard follows a pattern, asking teams to Identify, Measure, and Mitigate potential content risks, and plan for how to Operate the AI system as well.
+This article provides an overview of the resources available to help you use AI responsibly. Our recommended essential development steps are grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI), which sets policy requirements that our own engineering teams follow. Much of the content of the Standard follows a pattern, asking teams to Identify, Measure, and Mitigate potential content risks, and plan for how to Operate the AI system as well.
 
 At Microsoft, our approach is guided by a governance framework rooted in AI principles, which establish product requirements and serve as our "north star." When we identify a business use case for generative AI, we first assess the potential risks of the AI system to pinpoint critical focus areas.
 
 Once we identify these risks, we evaluate their prevalence within the AI system through systematic measurement, helping us prioritize areas that need attention. We then apply appropriate mitigations and measure again to assess effectiveness.
 
 Finally, we examine strategies for managing risks in production, including deployment and operational readiness and setting up monitoring to support ongoing improvement after the application is live.
 
-:::image type="content" source="media/content-safety/safety-pattern.png" alt-text="Diagram of the content safety pattern.":::
+:::image type="content" source="media/content-safety/safety-pattern.png" alt-text="Diagram of the content safety pattern: Map, Measure, and Manage.":::
 
-In alignment with those Microsoft's RAI practices, these recommendations are organized into four stages:
+In alignment with Microsoft's RAI practices, these recommendations are organized into four stages:
 - **Map**: Identify and prioritize potential content risks that could result from your AI system through iterative red-teaming, stress-testing, and analysis.
 - **Measure**: Measure the frequency and severity of those content risks by establishing clear metrics, creating measurement test sets, and completing iterative, systematic testing (both manual and automated).
 - **Mitigate**: Mitigate content risks by implementing tools and strategies such as prompt engineering and using our content filters. Repeat measurement to test effectiveness after implementing mitigations.
```
