Commit 4885bca: red teaming cloud update
1 parent: 95d821c

File tree: 1 file changed (+47, -9 lines)


articles/ai-foundry/how-to/develop/run-ai-red-teaming-cloud.md

Lines changed: 47 additions & 9 deletions
````diff
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
 - references_regions
 ms.topic: how-to
-ms.date: 06/03/2025
+ms.date: 07/23/2025
 ms.reviewer: minthigpen
 ms.author: lagayhar
 author: lgayhardt
````
````diff
@@ -31,7 +31,7 @@ If this is your first time running evaluations or AI red teaming runs on your Az
 
 ## Getting started
 
-First, install Azure AI Foundry SDK's project client which runs the AI Red Teaming Agent in the cloud
+First, install Azure AI Foundry SDK's project client, which runs the AI Red Teaming Agent in the cloud.
 
 ```python
 uv install azure-ai-projects azure-identity
````
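The article's snippets read several environment variables (`PROJECT_ENDPOINT`, plus `MODEL_ENDPOINT`, `MODEL_API_KEY`, and `MODEL_DEPLOYMENT_NAME` when targeting a deployment directly). A minimal pre-flight check could look like the sketch below; the helper is hypothetical and not part of the SDK, only the variable names come from the article:

```python
import os

# Variable names taken from the article's snippets; the MODEL_* entries
# are only needed when targeting a model deployment directly (Option 1).
REQUIRED_VARS = [
    "PROJECT_ENDPOINT",
    "MODEL_ENDPOINT",
    "MODEL_API_KEY",
    "MODEL_DEPLOYMENT_NAME",
]

def missing_vars(env=os.environ, required=tuple(REQUIRED_VARS)):
    """Return the names of required variables absent from the mapping."""
    return [name for name in required if name not in env]

# Against an empty mapping, every name is reported missing.
print(missing_vars({}))
```

Passing the mapping explicitly keeps the helper testable without touching the real process environment.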
````diff
@@ -46,15 +46,44 @@ Then, set your environment variables for your Azure AI Foundry resources
 import os
 
 endpoint = os.environ["PROJECT_ENDPOINT"] # Sample : https://<account_name>.services.ai.azure.com/api/projects/<project_name>
-model_endpoint = os.environ["MODEL_ENDPOINT"] # Sample : https://<account_name>.services.ai.azure.com
-model_api_key= os.environ["MODEL_API_KEY"]
-model_deployment_name = os.environ["MODEL_DEPLOYMENT_NAME"] # Sample : gpt-4o-mini
+
 ```
 
 ## Supported targets
 
 Running the AI Red Teaming Agent in the cloud currently only supports Azure OpenAI model deployments in your Azure AI Foundry project as a target.
 
+## Configure your target
+
+You can configure your target model deployment in two ways:
+
+### Option 1: Using Foundry project deployments
+
+If you're using model deployments that are part of your Azure AI Foundry project, set up the following environment variables:
+
+```python
+import os
+
+model_endpoint = os.environ["MODEL_ENDPOINT"] # Sample : https://<account_name>.openai.azure.com
+model_api_key = os.environ["MODEL_API_KEY"]
+model_deployment_name = os.environ["MODEL_DEPLOYMENT_NAME"] # Sample : gpt-4o-mini
+```
+
+### Option 2: Using Azure OpenAI/AI Services deployments
+
+If you want to use deployments from your Azure OpenAI or AI Services accounts, you first need to connect these resources to your Foundry project through connections.
+
+1. **Create a connection**: Follow the instructions in [Configure project connections](../configure-project-connection.md?pivots=ai-foundry-portal#add-a-connection) to connect your Azure OpenAI or AI Services resource to your Foundry project.
+
+2. **Get the connection name**: After connecting the account, you'll see the connection created with a generated name in your Foundry project.
+
+3. **Configure the target**: Use the format `"connectionName/deploymentName"` for your model deployment configuration:
+
+```python
+# Format: "connectionName/deploymentName"
+model_deployment_name = "my-openai-connection/gpt-4o-mini"
+```
+
 ## Create an AI red teaming run
 
 # [Python](#tab/python)
````
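The two target options differ in how the scan request authenticates: Option 1 passes the model endpoint and API key as headers, while Option 2's `connectionName/deploymentName` target resolves credentials through the connection and needs none. A hypothetical convenience helper (not part of `azure-ai-projects`; the `/` check is an assumption based on the naming format the article describes) could select the right headers:

```python
def build_headers(model_deployment_name, model_endpoint=None, model_api_key=None):
    """Pick request headers for a red teaming run.

    Hypothetical helper, not part of the azure-ai-projects SDK.
    """
    # Option 2 targets use "connectionName/deploymentName", so the
    # connection supplies credentials and no extra headers are needed.
    if "/" in model_deployment_name:
        return {}
    # Option 1 targets are bare deployment names; pass the endpoint
    # and API key explicitly, as the article's Python snippet does.
    return {"model-endpoint": model_endpoint, "api-key": model_api_key}

print(build_headers("my-openai-connection/gpt-4o-mini"))  # {}
```

The returned dictionary would then be handed to the `headers` argument of the create call shown in the diff.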
````diff
@@ -86,19 +115,28 @@ red_team_agent = RedTeam(
 )
 
 # Create and run the red teaming scan
-red_team_response = project_client.red_teams.create(red_team=red_team_agent, headers={"model-endpoint": model_endpoint, "api-key": model_api_key,})
+# If you configured target using Option 1, use:
+# headers = {"model-endpoint": model_endpoint, "api-key": model_api_key}
+# If you configured target using Option 2, use:
+# headers = {}
+
+# Choose one of the following based on your configuration option:
+headers = {"model-endpoint": model_endpoint, "api-key": model_api_key} # For Option 1
+# headers = {} # For Option 2
+
+red_team_response = project_client.red_teams.create(red_team=red_team_agent, headers=headers)
 ```
 
 # [cURL](#tab/curl)
 
 ```bash
-curl --request POST \ --url https://{{account}}.services.ai.azure.com/api/projects/{{project}}/redteams/runs:run \ --header 'content-type: application/json' \ --header 'authorization: Bearer {{ai_token}}' --data '{ "scanName": "sample_scan_magic_1", "riskCategories": [ "Violence" ], "attackStrategy": [ "Flip" ], "numTurns": 1, "target": { "type": "AzureOpenAIModel", "modelDeploymentName": "{{connectionName}}/{{deploymentName}}" }}'
+curl --request POST \ --url https://{{account}}.services.ai.azure.com/api/projects/{{project}}/redteams/runs:run \ --header 'content-type: application/json' \ --header 'authorization: Bearer {{ai_token}}' --data '{ "displayName": "Red Team Scan #1", "riskCategories": [ "Violence" ], "attackStrategy": [ "Flip" ], "numTurns": 1, "target": { "type": "AzureOpenAIModel", "modelDeploymentName": "{{connectionName}}/{{deploymentName}}" }}'
 ```
 
 - Replace `{{account}}`, `{{project}}` with Foundry Project account name and project name.
 - Replace `{{ai_token}}` with Bearer token with audience "<https://ai.azure.com>"
-- Replace `"{{connectionName}}"` with the Azure OpenAI model connection name connected to the Foundry project account.
-- Replace `"{{deploymentName}}"` with the Azure OpenAI deployment name of the AOAI connection account.
+- For Option 1 (Foundry project deployments): Replace `"{{connectionName}}/{{deploymentName}}"` with just `"{{deploymentName}}"` (your model deployment name).
+- For Option 2 (Azure OpenAI/AI Services deployments): Replace `"{{connectionName}}"` with the Azure OpenAI model connection name connected to the Foundry project account, and replace `"{{deploymentName}}"` with the Azure OpenAI deployment name of the Azure OpenAI connection account.
 
 ---
````
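The cURL tab's one-line request body is easier to get right when built programmatically. A stdlib-only sketch follows; the field names mirror the cURL payload, and the target value reuses the article's own `"my-openai-connection/gpt-4o-mini"` example (both are illustrative, not requirements):

```python
import json

# Field names mirror the cURL payload in the diff; the target string
# reuses the article's example "connectionName/deploymentName" value.
payload = {
    "displayName": "Red Team Scan #1",
    "riskCategories": ["Violence"],
    "attackStrategy": ["Flip"],
    "numTurns": 1,
    "target": {
        "type": "AzureOpenAIModel",
        "modelDeploymentName": "my-openai-connection/gpt-4o-mini",
    },
}

# Serialize to the JSON body that curl's --data flag would send.
body = json.dumps(payload)
print(body)
```

Serializing with `json.dumps` avoids the quoting mistakes that hand-written inline JSON in shell commands tends to invite.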
