articles/ai-foundry/concepts/ai-red-teaming-agent.md (2 additions, 3 deletions)
@@ -2,7 +2,6 @@
 title: AI Red Teaming Agent
 titleSuffix: Azure AI Foundry
 description: This article provides conceptual overview of the AI Red Teaming Agent.
-manager: scottpolly
 ms.service: azure-ai-foundry
 ms.topic: how-to
 ms.date: 04/04/2025
@@ -37,7 +36,7 @@ When thinking about AI-related safety risks developing trustworthy AI systems, M
 :::image type="content" source="../media/evaluations/red-teaming-agent/map-measure-mitigate-ai-red-teaming.png" alt-text="Diagram of how to use AI Red Teaming Agent showing proactive to reactive and less costly to more costly." lightbox="../media/evaluations/red-teaming-agent/map-measure-mitigate-ai-red-teaming.png":::

-AI Red Teaming Agent can be used to run automated scans and simulate adversarial probing to help accelerate the identification and evaluation of known risks at scale. This helps teams "shift left" from costly reactive incidents to more proactive testing frameworks that can catch issues before deployment. Manual AI red teaming process is time and resource intensive. It relies on the creativity of safety and security expertise to simulate adversarial probing. This process can create a bottleneck for many organizations to accelerate AI adoption. With the AI Red Teaming Agent, organizations can now leverage Microsoft’s deep expertise to scale and accelerate their AI development with Trustworthy AI at the forefront.
+The AI Red Teaming Agent runs automated scans and simulates adversarial probing to accelerate the identification and evaluation of known risks at scale. This helps teams "shift left" from costly reactive incidents to proactive testing frameworks that catch issues before deployment. Manual AI red teaming is time- and resource-intensive: it relies on the creativity of safety and security experts to simulate adversarial probing, and it can become a bottleneck for organizations trying to accelerate AI adoption. With the AI Red Teaming Agent, organizations can leverage Microsoft's deep expertise to scale and accelerate their AI development with Trustworthy AI at the forefront.

 We encourage teams to use the AI Red Teaming Agent to run automated scans throughout the design, development, and pre-deployment stage:
@@ -105,7 +104,7 @@ Learn more about the tools leveraged by the AI Red Teaming Agent.
 -[Azure AI Risk and Safety Evaluations](./safety-evaluations-transparency-note.md)

-The most effective strategies for risk assessment we’ve seen leverage automated tools to surface potential risks, which are then analyzed by expert human teams for deeper insights. If your organization is just starting with AI red teaming, we encourage you to explore the resources created by our own AI red team at Microsoft to help you get started.
+The most effective risk assessment strategies we've seen pair automated tools that surface potential risks with expert human teams who analyze the results for deeper insights. If your organization is just starting with AI red teaming, we encourage you to explore the resources created by our own AI red team at Microsoft to help you get started.

 -[Planning red teaming for large language models (LLMs) and their applications](../openai/concepts/red-teaming.md)
 -[Three takeaways from red teaming 100 generative AI products](https://www.microsoft.com/security/blog/2025/01/13/3-takeaways-from-red-teaming-100-generative-ai-products/)
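As a rough illustration of the automated scan described in the paragraph above, a minimal sketch using the red teaming support in the `azure-ai-evaluation` Python package might look like the following. The identifiers (`RedTeam`, `RiskCategory`, `AttackStrategy`, `scan`) and the project and scan-name placeholders reflect the preview SDK and are assumptions; they may differ from the version you have installed.

```python
# Illustrative sketch only: identifiers follow the azure-ai-evaluation
# red teaming preview and may differ from the released SDK.
import asyncio

from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, RiskCategory, AttackStrategy

# Placeholder project details -- substitute your own Azure AI Foundry project.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}


def simple_target(query: str) -> str:
    """Stand-in callback for the application or model endpoint under test."""
    return "Sorry, I can't help with that request."


async def main() -> None:
    red_team = RedTeam(
        azure_ai_project=azure_ai_project,
        credential=DefaultAzureCredential(),
        risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness],
        num_objectives=5,  # attack prompts generated per risk category
    )

    # Run an automated scan that probes the target with a couple of attack strategies.
    result = await red_team.scan(
        target=simple_target,
        scan_name="pre-deployment-scan",
        attack_strategies=[AttackStrategy.Flip, AttackStrategy.Base64],
    )
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

Scans like this are typically repeated across design, development, and pre-deployment, with the resulting attack success rates reviewed by human experts, as the article recommends.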
articles/ai-foundry/concepts/fine-tuning-overview.md (2 additions, 3 deletions)
@@ -2,7 +2,6 @@
 title: Fine-tune models with Azure AI Foundry
 titleSuffix: Azure AI Foundry
 description: This article explains what fine-tuning is and under what circumstances you should consider doing it.
-manager: scottpolly
 ms.service: azure-ai-foundry
 ms.custom:
 - build-2024
@@ -26,10 +25,10 @@ Fine-tuning excels at customizing language models for specific applications and
 -**Domain Specialization:** Adapt a language model for a specialized field like medicine, finance, or law – where domain specific knowledge and terminology is important. Teach the model to understand technical jargon and provide more accurate responses.
 -**Task Performance:** Optimize a model for a specific task like sentiment analysis, code generation, translation, or summarization. You can significantly improve the performance of a smaller model on a specific application, compared to a general purpose model.
 -**Style and Tone:** Teach the model to match your preferred communication style – for example, adapt the model for formal business writing, brand-specific voice, or technical writing.
--**Instruction Following:** Improve the model’s ability to follow specific formatting requirements, multi-step instructions, or structured outputs. In multi-agent frameworks, teach the model to call the right agent for the right task.
+-**Instruction Following:** Improve the model's ability to follow specific formatting requirements, multi-step instructions, or structured outputs. In multi-agent frameworks, teach the model to call the right agent for the right task.
 -**Compliance and Safety:** Train a fine-tuned model to adhere to organizational policies, regulatory requirements, or other guidelines unique to your application.
 -**Language or Cultural Adaptation:** Tailor a language model for a specific language, dialect, or cultural context that may not be well represented in the training data.

-Fine-tuning is especially valuable when a general-purpose model doesn’t meet your specific requirements – but you want to avoid the cost and complexity of training a model from scratch.
+Fine-tuning is especially valuable when a general-purpose model doesn't meet your specific requirements – but you want to avoid the cost and complexity of training a model from scratch.

 ## Serverless or Managed Compute?

 Before picking a model, it's important to select the fine-tuning product that matches your needs. Azure's AI Foundry offers two primary modalities for fine tuning: serverless and managed compute.
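To make the "when to fine-tune" guidance above concrete, here is a minimal, hedged sketch of starting a fine-tuning job against an Azure OpenAI endpoint using the OpenAI Python SDK. The endpoint, API version, base model name, and training file name are placeholders, not values taken from the article; the exact choices depend on your resource and on which models support fine-tuning in your region.

```python
# Illustrative sketch: endpoint, api_version, and model name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<api-key>",
    api_version="2024-10-21",
)

# Training data is JSONL, one chat example per line, for instance:
# {"messages": [{"role": "system", "content": "You are a support bot."},
#               {"role": "user", "content": "Where is my order?"},
#               {"role": "assistant", "content": "Let me look that up for you."}]}
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off the fine-tuning job on a base model that supports fine-tuning.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```

After the job completes, the fine-tuned model still has to be deployed before it can serve traffic; whether that deployment is serverless or runs on managed compute depends on the modality discussed in the section above.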