---
title: Responsible AI for Azure AI Foundry
titleSuffix: Azure AI Foundry
description: Learn how to use AI services and features responsibly with Azure AI Foundry.
manager: nitinme
keywords: Azure AI services, cognitive
ms.service: azure-ai-foundry
ms.topic: overview
ms.date: 02/20/2025
ms.author: pafarley
author: PatrickFarley
ms.custom: ignite-2024
---

# Responsible AI for Azure AI Foundry

This article provides an overview of the resources available to help you use AI responsibly. Our recommended essential development steps are grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI), which sets policy requirements that our own engineering teams follow. Much of the content of the Standard follows a pattern, asking teams to Identify, Measure, and Mitigate potential content risks, and to plan for how to Operate the AI system.

At Microsoft, our approach is guided by a governance framework rooted in AI principles, which establish product requirements and serve as our "north star." When we identify a business use case for generative AI, we first assess the potential risks of the AI system to pinpoint critical focus areas.

Once we identify these risks, we evaluate their prevalence within the AI system through systematic measurement, helping us prioritize areas that need attention. We then apply appropriate mitigations and measure again to assess effectiveness.

Finally, we examine strategies for managing risks in production, including deployment and operational readiness, as well as setting up monitoring to support ongoing improvement after the application is live.

:::image type="content" source="media/content-safety/safety-pattern.png" alt-text="Diagram of the content safety pattern: Map, Measure, and Manage.":::

In alignment with Microsoft's RAI practices, these recommendations are organized into four stages:
- **Map**: Identify and prioritize potential content risks that could result from your AI system through iterative red-teaming, stress-testing, and analysis.
- **Measure**: Measure the frequency and severity of those content risks by establishing clear metrics, creating measurement test sets, and completing iterative, systematic testing (both manual and automated).
- **Mitigate**: Mitigate content risks by implementing tools and strategies such as prompt engineering and using our content filters, as sketched below. Repeat measurement to test effectiveness after implementing mitigations.
- **Operate**: Define and execute a deployment and operational readiness plan, and set up monitoring to support ongoing improvement after the application is live.
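
As a concrete illustration of the Measure and Mitigate stages, the following is a minimal sketch that scores a test set of your application's responses with the Azure AI Content Safety text analysis API (the standalone service that also powers the built-in content filters), using the `azure-ai-contentsafety` Python package. The endpoint, key, test responses, and severity threshold are placeholders for this example, not recommendations.

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder resource details; use your own Content Safety resource.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Hypothetical measurement test set: responses your AI system produced
# for the red-team prompts identified during the Map stage.
test_responses = [
    "Sample response 1 from your application...",
    "Sample response 2 from your application...",
]

# Flag any response whose severity meets or exceeds this example threshold
# in any harm category (hate, sexual, violence, self-harm).
SEVERITY_THRESHOLD = 2

flagged = 0
for text in test_responses:
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    if any((c.severity or 0) >= SEVERITY_THRESHOLD
           for c in result.categories_analysis):
        flagged += 1

# A simple defect rate to track across mitigation iterations.
print(f"Defect rate: {flagged / len(test_responses):.1%}")
```

Repeating this measurement after each mitigation (for example, a tightened system prompt or a stricter content filter configuration) shows whether the change actually reduced the defect rate.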