Commit 234ac9d

Update ai-red-teaming-agent.md
1 parent a8bbcca commit 234ac9d

File tree: 1 file changed (+0 additions, −2 deletions)

1 file changed

+0
-2
lines changed

articles/ai-foundry/concepts/ai-red-teaming-agent.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -27,8 +27,6 @@ The AI Red Teaming Agent leverages Microsoft's open-source framework for Python
 
 Together these components (scanning, evaluating, and reporting) help teams understand how AI systems respond to common attacks, ultimately guiding a comprehensive risk management strategy.
 
-[!INCLUDE [uses-hub-only](../includes/uses-hub-only.md )]
-
 ## When to use the AI Red Teaming Agent's scans
 
 When thinking about AI-related safety risks developing trustworthy AI systems, Microsoft uses NIST's framework to mitigate risk effectively: Govern, Map, Measure, Manage. We'll focus on the last three parts in relation to the generative AI development lifecycle:
```
