title: AI Red Teaming Agent
titleSuffix: Azure AI Foundry
description: This article provides a conceptual overview of the AI Red Teaming Agent.
ms.service: azure-ai-foundry
ms.topic: how-to
ms.date: 04/04/2025

:::image type="content" source="../media/evaluations/red-teaming-agent/map-measure-mitigate-ai-red-teaming.png" alt-text="Diagram of how to use AI Red Teaming Agent showing proactive to reactive and less costly to more costly." lightbox="../media/evaluations/red-teaming-agent/map-measure-mitigate-ai-red-teaming.png":::

The AI Red Teaming Agent can be used to run automated scans that simulate adversarial probing, helping teams accelerate the identification and evaluation of known risks at scale. This helps teams "shift left" from costly reactive incidents to proactive testing frameworks that catch issues before deployment. Manual AI red teaming is time and resource intensive: it relies on the creativity of safety and security experts to simulate adversarial probing, which can become a bottleneck for organizations trying to accelerate AI adoption. With the AI Red Teaming Agent, organizations can now leverage Microsoft's deep expertise to scale and accelerate their AI development with Trustworthy AI at the forefront.
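
To make the workflow concrete, the following is a minimal sketch of what an automated scan can look like in code. It assumes the preview red-teaming module of the `azure-ai-evaluation` Python package; the class names, parameters, and endpoint format shown (`RedTeam`, `RiskCategory`, `AttackStrategy`, `scan`) are assumptions that may differ in your SDK version, and `simple_target` is a hypothetical stand-in for the application under test.

```python
# Illustrative sketch only: class names, parameters, and the project endpoint
# placeholder below are assumptions based on the preview azure-ai-evaluation
# red-teaming module and may differ in your installed SDK version.
import asyncio

from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, RiskCategory, AttackStrategy


def simple_target(query: str) -> str:
    """Hypothetical stand-in for the AI system under test; replace with a call into your app."""
    return "I'm sorry, I can't help with that request."


async def main() -> None:
    # Configure the agent with the risk categories to probe and how many
    # attack objectives to generate per category.
    red_team = RedTeam(
        azure_ai_project="<your Azure AI Foundry project endpoint>",  # placeholder
        credential=DefaultAzureCredential(),
        risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness],
        num_objectives=5,
    )

    # Run an automated scan: adversarial prompts are generated, transformed by
    # the selected attack strategies, and sent to the target for evaluation.
    await red_team.scan(
        target=simple_target,
        scan_name="pre-deployment-scan",
        attack_strategies=[AttackStrategy.Base64, AttackStrategy.Flip],
    )


if __name__ == "__main__":
    asyncio.run(main())
```

In a setup like this, the scan output is typically summarized as an attack success rate that teams can review before deciding whether the system is ready to ship.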

We encourage teams to use the AI Red Teaming Agent to run automated scans throughout the design, development, and pre-deployment stages:

Learn more about the tools leveraged by the AI Red Teaming Agent:
- [Azure AI Risk and Safety Evaluations](./safety-evaluations-transparency-note.md)
- [PyRIT: Python Risk Identification Tool](https://github.com/Azure/PyRIT)

The most effective risk assessment strategies we've seen use automated tools to surface potential risks, which expert human teams then analyze for deeper insights. If your organization is just starting with AI red teaming, we encourage you to explore the resources created by our own AI red team at Microsoft to help you get started:

- [Planning red teaming for large language models (LLMs) and their applications](../openai/concepts/red-teaming.md)
- [Three takeaways from red teaming 100 generative AI products](https://www.microsoft.com/security/blog/2025/01/13/3-takeaways-from-red-teaming-100-generative-ai-products/)