AI Red Teaming playground labs for running AI Red Teaming trainings, including infrastructure.
A security scanner for your LLM agentic workflows
A collection of deliberately vulnerable servers for learning to pentest MCP servers.
Whistleblower is an offensive security tool for testing system prompt leakage and capability discovery in AI applications exposed through an API. Built for AI engineers, security researchers, and anyone who wants to know what's going on inside the LLM-based apps they use daily.
AspGoat is an intentionally vulnerable ASP.NET Core application for learning and practicing web application security.
Code scanner to check for issues in prompts and LLM calls
A comprehensive guide to adversarial testing and security evaluation of AI systems, helping organizations identify vulnerabilities before attackers exploit them.
Open-source LLM Prompt-Injection and Jailbreaking Playground
AI security and prompt injection payload toolkit
AI red teaming, jailbreaking, and all forms of adversarial attacks for security purposes
The ultimate OWASP MCP Top 10 security checklist and pentesting framework for Model Context Protocol (MCP), AI agents, and LLM-powered systems.
Comprehensive taxonomy of AI security vulnerabilities, LLM adversarial attacks, prompt injection techniques, and machine learning security research. Covers 71+ attack vectors, including model poisoning, agentic AI exploits, and privacy breaches.
A repository for your Garak runs, as well as a modern visualizer.
Project from the Devfest Nantes 2025 codelab "La guerre des prompts": a 2-hour workshop on learning to hack AIs and how to protect them using open-source frameworks.
Hackaprompt v1.0 AIRT Agents
Awesome LLM security tools, research, and documents
AI Agent Security Testing: 112 attacks across 14 categories, covering prompt injection, jailbreaks, MCP poisoning, agency hijacking, and more. Test any AI agent in 5 minutes.
Sandbox for testing LLM prompt injections, jailbreaks, and AI red teaming techniques, part of the SynAccel Mirage line
Autonomous AI Red Teaming laboratory validating the Microsoft AI Red Team Taxonomy using the PyRIT framework. Focused on Agentic AI security and strategic conversational persistence.