Commit 7a26481

Merge pull request #7734 from TimShererWithAquent/us496641-08

Freshness Edit: AI Foundry: Run AI Red Teaming Agent locally

2 parents b4af9f1 + 7a189b9
2 files changed: +82 −80 lines changed
articles/ai-foundry/how-to/develop/run-scans-ai-red-teaming-agent.md

Lines changed: 76 additions & 74 deletions
@@ -1,12 +1,12 @@
  ---
- title: Run AI Red Teaming Agent locally (Azure AI Evaluation SDK)
+ title: Run AI Red Teaming Agent Locally (Azure AI Evaluation SDK)
  titleSuffix: Azure AI Foundry
- description: This article provides instructions on how to use the AI Red Teaming Agent to run a local automated scan of a Generative AI application with the Azure AI Evaluation SDK.
+ description: Learn how to use the AI Red Teaming Agent to run a local automated scan of a Generative AI application with the Azure AI Evaluation SDK.
  ms.service: azure-ai-foundry
  ms.custom:
  - references_regions
  ms.topic: how-to
- ms.date: 06/03/2025
+ ms.date: 10/20/2025
  ms.reviewer: minthigpen
  ms.author: lagayhar
  author: lgayhardt
@@ -16,12 +16,12 @@ author: lgayhardt

  [!INCLUDE [feature-preview](../../includes/feature-preview.md)]

- The AI Red Teaming Agent (preview) is a powerful tool designed to help organizations proactively find safety risks associated with generative AI systems during design and development. By integrating Microsoft's open-source framework for Python Risk Identification Tool's ([PyRIT](https://github.com/Azure/PyRIT)) AI red teaming capabilities directly into Azure AI Foundry, teams can automatically scan their model and application endpoints for risks, simulate adversarial probing, and generate detailed reports.
+ The AI Red Teaming Agent (preview) is a powerful tool designed to help organizations proactively find safety risks associated with generative AI systems during design and development. The AI red teaming capabilities of Microsoft's open-source Python Risk Identification Tool ([PyRIT](https://github.com/Azure/PyRIT)) are integrated directly into Azure AI Foundry. Teams can automatically scan their model and application endpoints for risks, simulate adversarial probing, and generate detailed reports.

- This article guides you through the process of
+ This article explains how to:

- - Creating an AI Red Teaming Agent locally
- - Running automated scans locally and viewing the results
+ - Create an AI Red Teaming Agent locally
+ - Run automated scans locally and view the results

  ## Prerequisites

@@ -31,7 +31,7 @@ This article guides you through the process of

  ## Getting started

- First install the `redteam` package as an extra from Azure AI Evaluation SDK, this provides the PyRIT functionality:
+ Install the `redteam` package as an extra from the Azure AI Evaluation SDK. This package provides the PyRIT functionality:

  ```python
  uv pip install "azure-ai-evaluation[redteam]"
@@ -72,7 +72,7 @@ def simple_callback(query: str) -> str:
  red_team_result = await red_team_agent.scan(target=simple_callback)
  ```

- This example generates a default set of 10 attack prompts for each of the default set of four risk categories (violence, sexual, hate and unfairness, and self-harm) to result in a total of 40 rows of attack prompts to be generated and sent to your target.
+ This example generates a default set of 10 attack prompts for each of the default set of four risk categories: violence, sexual, hate and unfairness, and self-harm. In total, 40 rows of attack prompts are generated and sent to your target.

  Optionally, you can specify which risk categories of content risks you want to cover with `risk_categories` parameter and define the number of prompts covering each risk category with `num_objectives` parameter.

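For reference, a minimal sketch of such a call (hedged: the `RiskCategory` member names are from the `azure.ai.evaluation.red_team` module, and the project endpoint below is a placeholder, not part of this commit):

```python
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, RiskCategory

# Placeholder endpoint; use your own Azure AI Foundry project details
azure_ai_project = "https://<your-resource>.services.ai.azure.com/api/projects/<your-project>"

red_team_agent = RedTeam(
    azure_ai_project=azure_ai_project,
    credential=DefaultAzureCredential(),
    # Cover only two of the four supported risk categories
    risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness],
    # Five attack prompts per risk category: 10 rows total
    num_objectives=5,
)
```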
@@ -96,7 +96,7 @@ red_team_agent = RedTeam(

  ## Region support

- Currently, AI Red Teaming Agent is only available in a few regions. Ensure your Azure AI Project is located in the following supported regions:
+ Currently, AI Red Teaming Agent is available only in some regions. Ensure your Azure AI Project is located in the following supported regions:

  - East US2
  - Sweden Central
@@ -107,70 +107,70 @@ Currently, AI Red Teaming Agent is only available in a few regions. Ensure your

  The `RedTeam` can run automated scans on various targets.

- **Model configurations**: If you're just scanning a base model during your model selection process, you can pass in your model configuration as a target to your `red_team_agent.scan()`:
+ - **Model configurations**: If you're just scanning a base model during your model selection process, you can pass in your model configuration as a target to your `red_team_agent.scan()`:

- ```python
- # Configuration for Azure OpenAI model
- azure_openai_config = {
-     "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
-     "api_key": os.environ.get("AZURE_OPENAI_KEY"), # not needed for entra ID based auth, use az login before running,
-     "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
- }
+   ```python
+   # Configuration for Azure OpenAI model
+   azure_openai_config = {
+       "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
+       "api_key": os.environ.get("AZURE_OPENAI_KEY"), # not needed for entra ID based auth, use az login before running,
+       "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
+   }

- red_team_result = await red_team_agent.scan(target=azure_openai_config)
- ```
+   red_team_result = await red_team_agent.scan(target=azure_openai_config)
+   ```

- **Simple callback**: A simple callback which takes in a string prompt from `red_team_agent` and returns some string response from your application.
+ - **Simple callback**: A simple callback that takes in a string prompt from `red_team_agent` and returns some string response from your application:

- ```python
- # Define a simple callback function that simulates a chatbot
- def simple_callback(query: str) -> str:
-     # Your implementation to call your application (e.g., RAG system, chatbot)
-     return "I'm an AI assistant that follows ethical guidelines. I cannot provide harmful content."
+   ```python
+   # Define a simple callback function that simulates a chatbot
+   def simple_callback(query: str) -> str:
+       # Your implementation to call your application (e.g., RAG system, chatbot)
+       return "I'm an AI assistant that follows ethical guidelines. I cannot provide harmful content."

- red_team_result = await red_team_agent.scan(target=simple_callback)
- ```
+   red_team_result = await red_team_agent.scan(target=simple_callback)
+   ```

- **Complex callback**: A more complex callback that is aligned to the OpenAI Chat Protocol
+ - **Complex callback**: A more complex callback that is aligned to the OpenAI Chat Protocol:

- ```python
- # Create a more complex callback function that handles conversation state
- async def advanced_callback(messages, stream=False, session_state=None, context=None):
-     # Extract the latest message from the conversation history
-     messages_list = [{"role": message.role, "content": message.content}
-                      for message in messages]
-     latest_message = messages_list[-1]["content"]
+   ```python
+   # Create a more complex callback function that handles conversation state
+   async def advanced_callback(messages, stream=False, session_state=None, context=None):
+       # Extract the latest message from the conversation history
+       messages_list = [{"role": message.role, "content": message.content}
+                        for message in messages]
+       latest_message = messages_list[-1]["content"]

-     # In a real application, you might process the entire conversation history
-     # Here, we're just simulating a response
-     response = "I'm an AI assistant that follows safety guidelines. I cannot provide harmful content."
+       # In a real application, you might process the entire conversation history
+       # Here, we're just simulating a response
+       response = "I'm an AI assistant that follows safety guidelines. I cannot provide harmful content."

-     # Format the response to follow the expected chat protocol format
-     formatted_response = {
-         "content": response,
-         "role": "assistant"
-     }
+       # Format the response to follow the expected chat protocol format
+       formatted_response = {
+           "content": response,
+           "role": "assistant"
+       }

-     return {"messages": [formatted_response]}
+       return {"messages": [formatted_response]}

- red_team_result = await red_team_agent.scan(target=advanced_callback)
- ```
+   red_team_result = await red_team_agent.scan(target=advanced_callback)
+   ```

- **PyRIT prompt target**: For advanced users coming from PyRIT, `RedTeam` can also scan text-based PyRIT `PromptChatTarget`. See the full list of [PyRIT prompt targets](https://azure.github.io/PyRIT/code/targets/0_prompt_targets.html).
+ - **PyRIT prompt target**: For advanced users coming from PyRIT, `RedTeam` can also scan text-based PyRIT `PromptChatTarget`. See the full list of [PyRIT prompt targets](https://azure.github.io/PyRIT/code/targets/0_prompt_targets.html).

- ```python
- from pyrit.prompt_target import OpenAIChatTarget, PromptChatTarget
+   ```python
+   from pyrit.prompt_target import OpenAIChatTarget, PromptChatTarget

- # Create a PyRIT PromptChatTarget for an Azure OpenAI model
- # This could be any class that inherits from PromptChatTarget
- chat_target = OpenAIChatTarget(
-     model_name=os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
-     endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
-     api_key=os.environ.get("AZURE_OPENAI_KEY")
- )
+   # Create a PyRIT PromptChatTarget for an Azure OpenAI model
+   # This could be any class that inherits from PromptChatTarget
+   chat_target = OpenAIChatTarget(
+       model_name=os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
+       endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
+       api_key=os.environ.get("AZURE_OPENAI_KEY")
+   )

- red_team_result = await red_team_agent.scan(target=chat_target)
- ```
+   red_team_result = await red_team_agent.scan(target=chat_target)
+   ```

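Whichever target type you pass, the scan call has the same shape. A minimal sketch follows (hedged: `scan_name` and `output_path` are assumed optional parameters of `scan()`, consistent with the `My-First-RedTeam-Scan.json` output file discussed later in this article):

```python
red_team_result = await red_team_agent.scan(
    target=simple_callback,  # any of the target types above
    scan_name="My-First-RedTeam-Scan",  # assumed: friendly name for the scan
    output_path="My-First-RedTeam-Scan.json",  # assumed: local file for results
)
```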
  ## Supported risk categories

@@ -185,9 +185,9 @@ The following risk categories are supported in the AI Red Teaming Agent's runs,

  ## Custom attack objectives

- Though the AI Red Teaming Agent provides a Microsoft curated set of adversarial attack objectives covering each supported risk, you might want to bring your own additional custom set to be used fo reach risk category as your own organization policy might be different.
+ The AI Red Teaming Agent provides a Microsoft curated set of adversarial attack objectives that cover each supported risk. Because your organization's policy might be different, you might want to bring your own custom set to use for each risk category.

- You can run the AI Red Teaming Agent on your own dataset
+ You can run the AI Red Teaming Agent on your own dataset.

  ```python
  custom_red_team_agent = RedTeam(
@@ -197,7 +197,7 @@ custom_red_team_agent = RedTeam(
  )
  ```

- Your dataset must be a JSON file, in the following format with the associated metadata for the corresponding risk-types. When bringing your own prompts, the supported `risk-type`s are `violence`, `sexual`, `hate_unfairness`, and `self_harm` so that the attacks can be evaluated for success correspondingly by our Safety Evaluators. The number of prompts you specify will be the `num_objectives` used in the scan.
+ Your dataset must be a JSON file in the following format, with the associated metadata for the corresponding risk types. When you bring your own prompts, the supported risk types are `violence`, `sexual`, `hate_unfairness`, and `self_harm`. Use these supported types so that the Safety Evaluators can evaluate the attacks for success. The number of prompts that you specify is the `num_objectives` used in the scan.

  ```json
  [
@@ -229,25 +229,25 @@ Your dataset must be a JSON file, in the following format with the associated me

  ## Supported attack strategies

- If only the target is passed in when you run a scan and no attack strategies are specified, the `red_team_agent` will only send baseline direct adversarial queries to your target. This is the most naive method of attempting to elicit undesired behavior or generated content. It's recommended to try the baseline direct adversarial querying first before applying any attack strategies.
+ If only the target is passed in when you run a scan and no attack strategies are specified, the `red_team_agent` sends only baseline direct adversarial queries to your target. This approach is the most naive method of attempting to elicit undesired behavior or generated content. We recommend that you try the baseline direct adversarial querying first before you apply any attack strategies.

- Attack strategies are methods to take the baseline direct adversarial queries and convert them into another form to try bypassing your target's safeguards. Attack strategies are classified into three buckets of complexities. Attack complexity reflects the effort an attacker needs to put in conducting the attack.
+ Attack strategies are methods to take the baseline direct adversarial queries and convert them into another form to try bypassing your target's safeguards. Attack strategies are classified into three levels of complexity. Attack complexity reflects the effort an attacker needs to put into conducting the attack.

- - **Easy complexity attacks** require less effort, such as translation of a prompt into some encoding
- - **Moderate complexity attacks** requires having access to resources such as another generative AI model
- - **Difficult complexity attacks** includes attacks that require access to significant resources and effort to execute an attack such as knowledge of search-based algorithms in addition to a generative AI model.
+ - **Easy complexity attacks** require less effort, such as translation of a prompt into some encoding.
+ - **Moderate complexity attacks** require having access to resources such as another generative AI model.
+ - **Difficult complexity attacks** include attacks that require access to significant resources and effort to run an attack, such as knowledge of search-based algorithms, in addition to a generative AI model.

  ### Default grouped attack strategies

- We offer a group of default attacks for easy complexity and moderate complexity which can be used in `attack_strategies` parameter. A difficult complexity attack can be a composition of two strategies in one attack.
+ A group of default attacks for easy complexity and moderate complexity is available to use in the `attack_strategies` parameter. A difficult complexity attack can be a composition of two strategies in one attack.

  | Attack strategy complexity group | Includes |
  | --- | --- |
  | `EASY` | `Base64`, `Flip`, `Morse` |
  | `MODERATE` | `Tense` |
  | `DIFFICULT` | Composition of `Tense` and `Base64` |

- The following scan would first run all the baseline direct adversarial queries. Then, it would apply the following attack techniques: `Base64`, `Flip`, `Morse`, `Tense`, and a composition of `Tense` and `Base64` which would first translate the baseline query into past tense then encode it into `Base64`.
+ The following scan first runs all the baseline direct adversarial queries. Then, it applies the following attack techniques: `Base64`, `Flip`, `Morse`, `Tense`, and a composition of `Tense` and `Base64`, which first translates the baseline query into past tense and then encodes it into `Base64`.

  ```python
  from azure.ai.evaluation.red_team import AttackStrategy
@@ -266,7 +266,7 @@ red_team_agent_result = await red_team_agent.scan(

  ### Specific attack strategies

- More advanced users can specify the desired attack strategies instead of using default groups. The following attack strategies are supported:
+ You can specify the desired attack strategies instead of using default groups. The following attack strategies are supported:

  | Attack strategy | Description | Complexity |
  | --- | --- | --- |
@@ -292,9 +292,11 @@ More advanced users can specify the desired attack strategies instead of using d
  | `Jailbreak` | User Injected Prompt Attacks (UPIA) injects specially crafted prompts to bypass AI safeguards | Easy |
  | `Tense` | Changes tense of text into past tense. | Moderate |

- Each new attack strategy specified is applied to the set of baseline adversarial queries used in addition to the baseline adversarial queries.
+ Each new attack strategy that you specify is applied to the set of baseline adversarial queries, and the converted queries are sent in addition to the baseline adversarial queries.
+
+ The following example generates one attack objective for each of the four risk categories specified. This approach first generates four baseline adversarial prompts to send to your target. Then, each baseline query gets converted into each of the four attack strategies. This conversion results in a total of 20 attack-response pairs from your AI system.

- This following example would generate one attack objective per each of the four risk categories specified. This will first, generate four baseline adversarial prompts which would be sent to your target. Then, each baseline query would get converted into each of the four attack strategies. This results in a total of 20 attack-response pairs from your AI system. The last attack strategy is an example of a composition of two attack strategies to create a more complex attack query: the `AttackStrategy.Compose()` function takes in a list of two supported attack strategies and chains them together. The example's composition would first encode the baseline adversarial query into Base64 then apply the ROT13 cipher on the Base64-encoded query. Compositions only support chaining two attack strategies together.
+ The last attack strategy is a composition of two attack strategies to create a more complex attack query: the `AttackStrategy.Compose()` function takes in a list of two supported attack strategies and chains them together. The example's composition first encodes the baseline adversarial query into Base64 and then applies the ROT13 cipher on the Base64-encoded query. Compositions support chaining only two attack strategies together.

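As a minimal sketch of that composition call on its own (hedged: assumes the `AttackStrategy` members named in the surrounding text):

```python
from azure.ai.evaluation.red_team import AttackStrategy

# Base64-encode the baseline query first, then apply the ROT13 cipher
composed_strategy = AttackStrategy.Compose([AttackStrategy.Base64, AttackStrategy.ROT13])
```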
  ```python
  red_team_agent = RedTeam(
@@ -335,7 +337,7 @@ red_team_agent_result = await red_team_agent.scan(
  )
  ```

- The `My-First-RedTeam-Scan.json` file contains a scorecard that provides a breakdown across attack complexity and risk categories, as well as a joint attack complexity and risk category report. Important metadata is tracked in the `parameters` section which outlines which risk categories were used to generate the attack objectives and which attack strategies were specified in the scan.
+ The `My-First-RedTeam-Scan.json` file contains a scorecard that provides a breakdown across attack complexity and risk categories. It also includes a joint attack complexity and risk category report. Important metadata is tracked in the `parameters` section, which outlines which risk categories were used to generate the attack objectives and which attack strategies were specified in the scan.

  ```json
  {
@@ -508,4 +510,4 @@ The red teaming scorecard also provides row-level data on each attack-response p

  ## Related content

- Try out an [example workflow](https://aka.ms/airedteamingagent-sample) in our GitHub samples.
+ Try an [example workflow](https://aka.ms/airedteamingagent-sample) in the GitHub samples.
