You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: articles/ai-studio/responsible-use-of-ai-overview.md
+3-3Lines changed: 3 additions & 3 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -16,13 +16,13 @@ ms.custom: ignite-2024
16
16
17
17
This article aims to provide an overview of the resources available to help you use AI responsibly. Our recommended essential development steps are grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI), which sets policy requirements that our own engineering teams follow. Much of the content of the Standard follows a pattern, asking teams to Identify, Measure, and Mitigate potential content risks, and plan for how to Operate the AI system as well.
18
18
19
-
At Microsoft, our approach is guided by a governance framework rooted in AI principles, which establish product requirements and serve as our "north star." When we identify abusiness use case for generative AI, we first assess the potential risks of the AI system to pinpoint critical focus areas.
19
+
At Microsoft, our approach is guided by a governance framework rooted in AI principles, which establish product requirements and serve as our "north star." When we identify a business use case for generative AI, we first assess the potential risks of the AI system to pinpoint critical focus areas.
20
20
21
21
Once we identify these risks, we evaluate their prevalence within the AI system through systematic measurement, helping us prioritize areas that need attention. We then apply appropriate mitigations and measure again to assess effectiveness.
22
22
23
23
Finally, we examine strategies for managing risks in production, including deployment and operational readiness and setting up monitoring to support ongoing improvement after the application is live.
24
24
25
-
:::image type="content" source="media/content-safety/safety-pattern.png" alt-text="Diagram of the contenty safety pattern.":::
25
+
:::image type="content" source="media/content-safety/safety-pattern.png" alt-text="Diagram of the content safety pattern.":::
26
26
27
27
In alignment with those Microsoft's RAI practices, these recommendations are organized into four stages:
28
28
-**Map**: Identify and prioritize potential content risks that could result from your AI system through iterative red-teaming, stress-testing, and analysis.
@@ -74,7 +74,7 @@ At the end of this measurement stage, you should have a defined measurement appr
74
74
75
75
Mitigating harms presented by large language models such as the Azure OpenAI models requires an iterative, layered approach that includes experimentation and continual measurement. We recommend developing a mitigation plan that encompasses four layers of mitigations for the harms identified in the earlier stages of this process:
76
76
77
-
:::image type="content" source="media/content-safety/mitigation-layers.png" alt-text="Diagram of mitigation layers":::
77
+
:::image type="content" source="media/content-safety/mitigation-layers.png" alt-text="Diagram of mitigation layers.":::
0 commit comments