A curated list of links, references, books, videos, tutorials (Free or Paid), Exploit, CTFs, Hacking Practices, etc., which are related to GenAI, LLM, RAG, MCP, Agents, and Agentic AI security.
- GenAI Security Papers & Standards
- AI Security Books
- AI Security Videos
- Online Tutorials / Blogs / Presentations
- Online Courses (Paid/Free)
- AI Security Certifications
- Tools of Trade
- Security Practices and CTFs
- AI Red Teaming
- GenAI Security Attacks, Breaches & Incidents
- Regulatory Frameworks & Governance
- Newsletters & Communities
- Contributors
Important papers, standards, and checklists from organizations like OWASP, NIST, and others.
- OWASP Top 10 for LLM Applications 2025
- OWASP LLM AI Security and Governance Checklist
- OWASP Agentic AI Top 10
- NIST AI RMF Playbook
- NIST AI Risk Management Framework (AI RMF)
- NIST Adversarial Machine Learning
- Microsoft Failure Models in Machine Learning
- Microsoft Threat Modeling AI/ML
- OWASP GenAI Security Project
- MITRE ATLAS (Adversarial Threat Landscape for AI Systems)
- Google Secure AI Framework (SAIF)
- Anthropic Responsible Scaling Policy
- ENISA Multilayer Framework for Good Cybersecurity Practices for AI
- AI Value Creators
- AI Engineering by Chip Huyen
- Designing Machine Learning Systems
- Hands-On Large Language Models
- Nexus by Yuval Noah Harari
- The Developer's Playbook for Large Language Model Security: Building Secure AI Applications by Steve Wilson
- 10 Best AI Security Books (Practical DevSecOps)
- AI Security E-book 101 (Practical DevSecOps, PDF)
- AI Security and Red Teaming - Rob Maas (Wiley)
- Not with a Bug, But with a Sticker - Ram Shankar Siva Kumar & Hyrum Anderson
- Generative AI Security - Ken Huang, Yang Wang (Springer)
- Adversarial AI Attacks, Mitigations, and Defense Strategies - John Sotiropoulos (Packt)
- Intro to LLM Security - WhyLabs
- OWASP Top 10 for LLM Applications Explained - OWASP
- Hacking LLMs and Prompt Injection - LiveOverflow
- AI Red Teaming - DEFCON AI Village
- Securing LLM Applications - SANS Institute
Articles and guides covering LLM, RAG, and general GenAI security.
- LLM Security
- What are foundational models?
- A quick check on the AI Threat Model
- Security Incident Response using LLM
- OWASP: CheatSheet – A Practical Guide for Securely Using Third-Party MCP Servers 1.0
- AI Security Interview Questions (Practical DevSecOps)
- Emerging AI Security Roles (Practical DevSecOps)
- AI Security Engineer Roadmap (Practical DevSecOps)
- Prompt Injection Attacks and Defenses in LLM-Integrated Applications
- Simon Willison's Blog on Prompt Injection
- Embrace the Red - AI Security Blog by Johann Rehberger
- Trail of Bits - AI/ML Security Research
- Riding the RAG Trail: Access, Permissions and Context
- Securing Risks with RAG Architectures
- Mitigating Security Risks in Retrieval Augmented Generation (RAG)
- RAG: The Essential Guide
- Why RAG is revolutionising GenAI
- Invariant Labs: MCP Security Notification Tool
- Pillar Security: MCP Security Research
- Agentic Security Risks - OWASP
- Tool Poisoning Attacks in MCP
- Web LLM attacks - PortSwigger
- Prompt injection jailbreaking
- LLM Attacks - Comprehensive Attack Taxonomy
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications
- Stanford CS-324: Large Language Models
- Princeton COS 597G: Understanding Large Language Models
- Coursera: GenAI with LLM
- Coursera: Generative AI Engineering with LLMs Specialization
- Coursera: Generative AI for Cybersecurity Professionals (IBM)
- Coursera: AI for Cybersecurity (JHU)
- AttackIQ: The foundation of AI Security
- Certified AI Security Professional (CAISP) by Practical DevSecOps – Securing AI systems, models, and pipelines against adversarial threats, LLM vulnerabilities, AI supply chain risks, data poisoning, and AI-specific security frameworks. Hands-on, practitioner-level skills.
Tools for defending, scanning, and auditing GenAI systems.
- LLM Guard - Information extraction and security for LLMs.
- Model Scan - Scanning models for serialization attacks.
- Rebuff - Prompt injection detection.
- NB Defense - Notebook security.
- Protect AI's OSS Portfolio
- LLM Guard Playground
- AI/ML Exploits
- Garak - LLM Vulnerability Scanner
- PyRIT - Python Risk Identification Toolkit for GenAI (Microsoft)
- Counterfit - AI Security Testing (Microsoft)
- ART - Adversarial Robustness Toolbox (IBM)
- promptmap - Prompt Injection Testing
- Guardrails AI - Input/output validation for LLMs.
- NeMo Guardrails (NVIDIA) - Programmable guardrails for LLM applications.
- Vigil - LLM Prompt Injection Detection
- Lakera Guard - Real-time AI security for prompt injection and data leakage.
Practice your skills with these vulnerable applications and challenges.
- Gandalf - Lakera AI - LLM security challenge.
- Prompt Airlines - AI security challenges, CTF style.
- Certified AI/ML Pentester Exam
- Damn Vulnerable MCP Server - Deliberately vulnerable MCP implementation.
- Vulnerable MCP Servers Lab - Collection of vulnerable servers.
- FinBot Agentic AI CTF - Agentic Security CTF.
- OWASP WrongSecrets LLM exercise
- Huntr.com - World’s first bug bounty platform for AI/ML.
- HackAPrompt - Prompt hacking competition.
- Crucible by Dreadnode - AI/ML security challenges and CTFs.
- AI Goat - Vulnerable LLM CTF built on AWS.
Resources and methodologies for red teaming AI/GenAI systems.
- Microsoft AI Red Team
- Anthropic Red Teaming Research
- MITRE ATLAS Attack Navigator
- AI Red Teaming Guide - Humane Intelligence
- Google DeepMind: Evaluating Frontier Models for Dangerous Capabilities
Notable real-world incidents involving GenAI and LLM security.
- Check Point Researchers Expose Critical Claude Code Flaws - CVE-2025-59536 and CVE-2026-21852: Enabling Remote Command Execution and API Key Theft
- Anthropic: Chinese AI Firms Created 24,000 Fraudulent Accounts For ‘Distillation Attacks
- LiteLLM on PyPI Was Compromised, What the Attack Changed and What Defenders Should Do Now
- The Day Chevrolet’s AI Chatbot Tried to Sell a $70,000 SUV for $1
- Samsung bans use of generative AI tools like ChatGPT after April internal data leak
- AI-powered Bing Chat spills its secrets via prompt injection attack
- Air Canada Chatbot Provides Wrong Info (2024) - Airline held liable for chatbot hallucinating refund policy.
- Microsoft Tay Bot Manipulation (2016) - Twitter chatbot manipulated into generating offensive content.
- ChatGPT Data Leak Bug (2023) - Bug exposed chat history titles and payment info of other users.
- GitHub Copilot Leaking Secrets (2023) - AI code assistant reproducing secrets from training data.
- EU AI Act - EU regulation on artificial intelligence.
- NIST AI 100-1: Artificial Intelligence Risk Management Framework
- ISO/IEC 42001:2023 - AI Management System Standard
- White House Executive Order on Safe, Secure AI
- Singapore AI Governance Framework
- OWASP GenAI Slack Channel - Join #project-top10-for-llm channel.
- AI Village (DEF CON) - Community focused on AI security research.
- MLSecOps Community - Community for ML security operations.
- The AI Security Newsletter by Ken Huang
- Protect AI Newsletter
