articles/ai-foundry/how-to/develop/run-scans-ai-red-teaming-agent.md
12 additions & 11 deletions
````diff
@@ -43,7 +43,7 @@ You can instantiate the AI Red Teaming agent with your Azure AI Project and Azure credential
 ```python
 # Azure imports
 from azure.identity import DefaultAzureCredential
-from azure.ai.evaluation import RedTeam, RiskCategory
+from azure.ai.evaluation.red_team import RedTeam, RiskCategory
 
 # Azure AI Project Information
 azure_ai_project = {
````
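For context, here is a minimal sketch of how the updated `red_team` import path fits into instantiating `RedTeam` end to end. The project dictionary keys and placeholder values are assumptions in the common azure-ai-evaluation format, not copied from the article:

```python
# Azure imports
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, RiskCategory

# Azure AI Project information (placeholder values, assumed dictionary keys)
azure_ai_project = {
    "subscription_id": "<your-subscription-id>",
    "resource_group_name": "<your-resource-group>",
    "project_name": "<your-project-name>",
}

# Instantiate the AI Red Teaming agent
red_team_agent = RedTeam(
    azure_ai_project=azure_ai_project,    # required
    credential=DefaultAzureCredential(),  # required
)
```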
````diff
@@ -77,7 +77,7 @@ Currently, AI Red Teaming Agent is only available in a few regions. Ensure your
 
 ## Running an automated scan for safety risks
 
-Once your `RedTeam` is instantiated, you can run an automated scan with minimal configuration, only a target is required. The following would generate five direct adversarial queries for each of the four risk categories for a total of 20 attack and response pairs.
+Once your `RedTeam` is instantiated, you can run an automated scan with minimal configuration; only a target is required. The following would, by default, generate five baseline adversarial queries for each of the four risk categories for a total of 20 attack and response pairs.
````
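To illustrate that minimal configuration, here is a hedged sketch of a baseline scan with only a target supplied. The name `your_target` and the optional `scan_name` argument are illustrative assumptions rather than the article's exact example:

```python
# Baseline scan: no attack strategies specified, so only direct adversarial
# queries are sent to the target (five per risk category by default).
red_team_result = await red_team_agent.scan(
    target=your_target,         # a model configuration or callback that returns responses (assumed name)
    scan_name="baseline-scan",  # optional label for the run (assumed argument)
)
```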
````diff
-If only the target is passed in when you run a scan and no attack strategies are specified, the `red_team_agent` will only send direct adversarial queries to your target. This is the most naive method of attempting to elicit undesired behavior or generated content. It's recommended to try the baseline direct querying first before applying any attack strategies.
+If only the target is passed in when you run a scan and no attack strategies are specified, the `red_team_agent` will only send baseline direct adversarial queries to your target. This is the most naive method of attempting to elicit undesired behavior or generated content. It's recommended to try the baseline direct adversarial querying first before applying any attack strategies.
 
 Attack strategies are methods to take the baseline direct adversarial queries and convert them into another form to try to bypass your target's safeguards. Attack strategies are classified into three buckets of complexity. Attack complexity reflects the effort an attacker needs to put into conducting the attack.
````
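The next hunk's context line mentions default attack groups for easy and moderate complexity. Here is a hedged sketch of selecting strategies by complexity group; the `AttackStrategy.EASY` and `AttackStrategy.MODERATE` group values are assumptions about the enum, not something shown in this diff:

```python
from azure.ai.evaluation.red_team import AttackStrategy

# Assumed group values that expand to the default easy- and moderate-complexity attacks
red_team_result = await red_team_agent.scan(
    target=your_target,  # assumed target callback/configuration
    attack_strategies=[
        AttackStrategy.EASY,      # assumed: default group of low-effort attacks
        AttackStrategy.MODERATE,  # assumed: default group of moderate-effort attacks
    ],
)
```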
````diff
@@ -175,7 +175,7 @@ We offer a group of default attacks for easy complexity and moderate complexity
 The following scan would first run all the baseline direct adversarial queries. Then, it would apply the following attack techniques: `Base64`, `Flip`, `Morse`, `Tense`, and a composition of `Tense` and `Base64`, which would first translate the baseline query into past tense and then encode it into `Base64`.
 
 ```python
-from azure.ai.evaluation import AttackStrategy
+from azure.ai.evaluation.red_team import AttackStrategy
 
 # Run the red team scan with multiple attack strategies
 red_team_agent_result = await red_team_agent.scan(
````
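The scan call above is cut off in this hunk. Here is a hedged sketch of how the full call might look, assuming the `target` and `attack_strategies` parameter names and using `AttackStrategy.Compose()` for the `Tense` + `Base64` composition described in the prose:

```python
from azure.ai.evaluation.red_team import AttackStrategy

# Run the red team scan with multiple attack strategies
red_team_agent_result = await red_team_agent.scan(
    target=your_target,  # assumed target callback/configuration
    attack_strategies=[
        AttackStrategy.Base64,  # encode the baseline query in Base64
        AttackStrategy.Flip,    # flip the characters of the baseline query
        AttackStrategy.Morse,   # encode the baseline query in Morse code
        AttackStrategy.Tense,   # rewrite the baseline query in past tense
        AttackStrategy.Compose([AttackStrategy.Tense, AttackStrategy.Base64]),  # past tense, then Base64
    ],
)
```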
````diff
@@ -217,16 +217,16 @@ More advanced users can specify the desired attack strategies instead of using defaults
 |`Jailbreak`| User Injected Prompt Attacks (UPIA) injects specially crafted prompts to bypass AI safeguards | Easy |
 |`Tense`| Changes tense of text into past tense. | Moderate |
 
-Each new attack strategy specified will be applied to the set of baseline adversarial queries used.
+Each new attack strategy specified will be applied to the set of baseline adversarial queries used. If no attack strategies are specified, only baseline adversarial queries will be sent to your target.
 
-The following example would generate one attack objective for each of the four risk categories specified. That would generate four baseline adversarial prompts, which would then get converted into each of the three attack strategies to result in a total of 12 attack-response pairs from your AI system.
+The following example would generate one attack objective for each of the four risk categories specified. That would generate four baseline adversarial prompts, which would then get converted into each of the four attack strategies to result in a total of 16 attack-response pairs from your AI system. The last attack strategy is an example of a composition of two attack strategies to create a more complex attack query: the `AttackStrategy.Compose()` function takes in a list of two supported attack strategies and chains them together. The example's composition would first encode the baseline adversarial query into Base64 and then apply the ROT13 cipher on the Base64-encoded query. Compositions only support chaining two attack strategies together.
 
 ```python
 red_team_agent = RedTeam(
-    azure_ai_project=azure_ai_project, # required
-    credential=DefaultAzureCredential(), # required
-    risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness, RiskCategory.Sexual, RiskCategory.SelfHarm], # optional, defaults to all four
````
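The example is truncated at the end of this hunk. Here is a hedged sketch of how the rest might look; the `num_objectives` parameter, the specific strategies chosen, and the `AttackStrategy.ROT13` value are assumptions inferred from the surrounding prose rather than copied from the article:

```python
red_team_agent = RedTeam(
    azure_ai_project=azure_ai_project,    # required
    credential=DefaultAzureCredential(),  # required
    risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness, RiskCategory.Sexual, RiskCategory.SelfHarm],  # optional, defaults to all four
    num_objectives=1,  # assumed parameter: one attack objective per risk category
)

# Four baseline prompts x four strategies = 16 attack-response pairs
red_team_agent_result = await red_team_agent.scan(
    target=your_target,  # assumed target callback/configuration
    attack_strategies=[
        AttackStrategy.Flip,   # assumed easy-complexity choice
        AttackStrategy.Morse,  # assumed easy-complexity choice
        AttackStrategy.Tense,  # moderate-complexity strategy
        AttackStrategy.Compose([AttackStrategy.Base64, AttackStrategy.ROT13]),  # Base64, then ROT13 (assumed order semantics)
    ],
)
```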