Though the AI Red Teaming Agent (preview) can be run [locally](./develop/run-scans-ai-red-teaming-agent.md) during prototyping and development to help identify safety risks, running it in the cloud allows for pre-deployment AI red teaming runs on larger combinations of attack strategies and risk categories for a fuller analysis.
After your automated scan finishes running [locally](./develop/run-scans-ai-red-teaming-agent.md) or [remotely](./develop/run-ai-red-teaming-cloud.md), the results are also logged to the Azure AI Foundry project that you specified when you created your AI Red Teaming Agent.
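
For reference, the following is a minimal sketch of a local scan that logs its results to your project by passing the Azure AI Foundry project at creation time. It's based on the preview `azure-ai-evaluation` SDK; the project values and the callback target are placeholders, and class and parameter names (`RedTeam`, `RiskCategory`, `AttackStrategy`, `num_objectives`) can differ across preview versions, so see the local run article linked above for authoritative examples.

```python
# Sketch only: run a local AI Red Teaming Agent scan whose results are logged
# to the Azure AI Foundry project you pass in. Names and signatures follow the
# preview azure-ai-evaluation SDK and may differ by version.
import asyncio

from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, RiskCategory, AttackStrategy

# Placeholder values: the Azure AI Foundry project the scan results are logged to.
azure_ai_project = {
    "subscription_id": "<your-subscription-id>",
    "resource_group_name": "<your-resource-group>",
    "project_name": "<your-project-name>",
}

# Placeholder callback standing in for the AI system under test.
def simple_target(query: str) -> str:
    return "I can't help with that request."

async def main():
    red_team = RedTeam(
        azure_ai_project=azure_ai_project,
        credential=DefaultAzureCredential(),
        risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness],
        num_objectives=5,  # attack objectives generated per risk category
    )

    # When the scan completes, its report appears on the project's
    # Evaluations page, under the AI red teaming tab described below.
    await red_team.scan(
        target=simple_target,
        scan_name="my-first-red-team-scan",
        attack_strategies=[AttackStrategy.Flip, AttackStrategy.Base64],
    )

asyncio.run(main())
```
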
## View report of each scan
In your Azure AI Foundry project or hub-based project, navigate to the **Evaluations** page and select the **AI red teaming** tab to view the comprehensive report with a detailed drill-down of each scan.
:::image type="content" source="../media/evaluations/red-teaming-agent/ai-red-team.png" alt-text="Screenshot of AI Red Teaming tab in Azure AI Foundry project page." lightbox="../media/evaluations/red-teaming-agent/ai-red-team.png":::
Once you select a scan, you can view the report by risk category, which shows you the overall number of successful attacks and a breakdown of successful attacks per risk category:
:::image type="content" source="../media/evaluations/red-teaming-agent/ai-red-team-report-risk.png" alt-text="Screenshot of AI Red Teaming report view by risk category in Azure AI Foundry." lightbox="../media/evaluations/red-teaming-agent/ai-red-team-report-risk.png":::
Or by attack complexity classification:
:::image type="content" source="../media/evaluations/red-teaming-agent/ai-red-team-report-attack.png" alt-text="Screenshot of AI Red Teaming report view by attack complexity category in Azure AI Foundry." lightbox="../media/evaluations/red-teaming-agent/ai-red-team-report-attack.png":::
Drilling down further into the data tab provides a row-level view of each attack-response pair, enabling deeper insights into system issues and behaviors. For each attack-response pair, you can see additional information such as whether the attack was successful, which attack strategy was used, and its attack complexity. There's also an option for a human-in-the-loop reviewer to provide feedback by selecting the thumbs up or thumbs down icon.
:::image type="content" source="../../media/evaluations/red-teaming-agent/ai-red-team-data.png" alt-text="Screenshot of AI Red Teaming data page in Azure AI Foundry." lightbox="../media/evaluations/red-teaming-agent/ai-red-team-data.png":::
To view each conversation, select **View more** to open the full conversation for a more detailed analysis of the AI system's response.
:::image type="content" source="../media/evaluations/red-teaming-agent/ai-red-team-data-conversation.png" alt-text="Screenshot of AI Red Teaming data page with a conversation history opened in Azure AI Foundry." lightbox="../media/evaluations/red-teaming-agent/ai-red-team-data-conversation.png":::
## Related content
Try out an [example workflow](https://aka.ms/airedteamingagent-sample) in our GitHub samples.