Skip to content

jassics/awesome-genai-security

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

19 Commits
 
 
 
 

Repository files navigation

Awesome GenAI Security

A curated list of links, references, books, videos, tutorials (Free or Paid), Exploit, CTFs, Hacking Practices, etc., which are related to GenAI, LLM, RAG, MCP, Agents, and Agentic AI security.

Table of Contents


GenAI security banner

GenAI Security Papers & Standards

Important papers, standards, and checklists from organizations like OWASP, NIST, and others.

  1. OWASP Top 10 for LLM Applications 2025
  2. OWASP LLM AI Security and Governance Checklist
  3. OWASP Agentic AI Top 10
  4. NIST AI RMF Playbook
  5. NIST AI Risk Management Framework (AI RMF)
  6. NIST Adversarial Machine Learning
  7. Microsoft Failure Models in Machine Learning
  8. Microsoft Threat Modeling AI/ML
  9. OWASP GenAI Security Project
  10. MITRE ATLAS (Adversarial Threat Landscape for AI Systems)
  11. Google Secure AI Framework (SAIF)
  12. Anthropic Responsible Scaling Policy
  13. ENISA Multilayer Framework for Good Cybersecurity Practices for AI

AI Security Books

  1. AI Value Creators
  2. AI Engineering by Chip Huyen
  3. Designing Machine Learning Systems
  4. Hands-On Large Language Models
  5. Nexus by Yuval Noah Harari
  6. The Developer's Playbook for Large Language Model Security: Building Secure AI Applications by Steve Wilson
  7. 10 Best AI Security Books (Practical DevSecOps)
  8. AI Security E-book 101 (Practical DevSecOps, PDF)
  9. AI Security and Red Teaming - Rob Maas (Wiley)
  10. Not with a Bug, But with a Sticker - Ram Shankar Siva Kumar & Hyrum Anderson
  11. Generative AI Security - Ken Huang, Yang Wang (Springer)
  12. Adversarial AI Attacks, Mitigations, and Defense Strategies - John Sotiropoulos (Packt)

AI Security Videos

  1. Intro to LLM Security - WhyLabs
  2. OWASP Top 10 for LLM Applications Explained - OWASP
  3. Hacking LLMs and Prompt Injection - LiveOverflow
  4. AI Red Teaming - DEFCON AI Village
  5. Securing LLM Applications - SANS Institute

Online Tutorials / Blogs / Presentations

Articles and guides covering LLM, RAG, and general GenAI security.

  1. LLM Security
  2. What are foundational models?
  3. A quick check on the AI Threat Model
  4. Security Incident Response using LLM
  5. OWASP: CheatSheet – A Practical Guide for Securely Using Third-Party MCP Servers 1.0
  6. AI Security Interview Questions (Practical DevSecOps)
  7. Emerging AI Security Roles (Practical DevSecOps)
  8. AI Security Engineer Roadmap (Practical DevSecOps)
  9. Prompt Injection Attacks and Defenses in LLM-Integrated Applications
  10. Simon Willison's Blog on Prompt Injection
  11. Embrace the Red - AI Security Blog by Johann Rehberger
  12. Trail of Bits - AI/ML Security Research

RAG Security

  1. Riding the RAG Trail: Access, Permissions and Context
  2. Securing Risks with RAG Architectures
  3. Mitigating Security Risks in Retrieval Augmented Generation (RAG)
  4. RAG: The Essential Guide
  5. Why RAG is revolutionising GenAI

MCP & Agent Security

  1. Invariant Labs: MCP Security Notification Tool
  2. Pillar Security: MCP Security Research
  3. Agentic Security Risks - OWASP
  4. Tool Poisoning Attacks in MCP

LLM Attacks

  1. Web LLM attacks - PortSwigger
  2. Prompt injection jailbreaking
  3. LLM Attacks - Comprehensive Attack Taxonomy
  4. Not what you've signed up for: Compromising Real-World LLM-Integrated Applications

Online Courses (Paid/Free)

  1. Stanford CS-324: Large Language Models
  2. Princeton COS 597G: Understanding Large Language Models
  3. Coursera: GenAI with LLM
  4. Coursera: Generative AI Engineering with LLMs Specialization
  5. Coursera: Generative AI for Cybersecurity Professionals (IBM)
  6. Coursera: AI for Cybersecurity (JHU)
  7. AttackIQ: The foundation of AI Security

AI Security Certifications

  1. Certified AI Security Professional (CAISP) by Practical DevSecOps – Securing AI systems, models, and pipelines against adversarial threats, LLM vulnerabilities, AI supply chain risks, data poisoning, and AI-specific security frameworks. Hands-on, practitioner-level skills.

Tools of Trade

Tools for defending, scanning, and auditing GenAI systems.

Defensive / Scanning

  1. LLM Guard - Information extraction and security for LLMs.
  2. Model Scan - Scanning models for serialization attacks.
  3. Rebuff - Prompt injection detection.
  4. NB Defense - Notebook security.
  5. Protect AI's OSS Portfolio
  6. LLM Guard Playground

Offensive / Red Teaming

  1. AI/ML Exploits
  2. Garak - LLM Vulnerability Scanner
  3. PyRIT - Python Risk Identification Toolkit for GenAI (Microsoft)
  4. Counterfit - AI Security Testing (Microsoft)
  5. ART - Adversarial Robustness Toolbox (IBM)
  6. promptmap - Prompt Injection Testing

Guardrails & Firewalls

  1. Guardrails AI - Input/output validation for LLMs.
  2. NeMo Guardrails (NVIDIA) - Programmable guardrails for LLM applications.
  3. Vigil - LLM Prompt Injection Detection
  4. Lakera Guard - Real-time AI security for prompt injection and data leakage.

Security Practices and CTFs

Practice your skills with these vulnerable applications and challenges.

  1. Gandalf - Lakera AI - LLM security challenge.
  2. Prompt Airlines - AI security challenges, CTF style.
  3. Certified AI/ML Pentester Exam
  4. Damn Vulnerable MCP Server - Deliberately vulnerable MCP implementation.
  5. Vulnerable MCP Servers Lab - Collection of vulnerable servers.
  6. FinBot Agentic AI CTF - Agentic Security CTF.
  7. OWASP WrongSecrets LLM exercise
  8. Huntr.com - World’s first bug bounty platform for AI/ML.
  9. HackAPrompt - Prompt hacking competition.
  10. Crucible by Dreadnode - AI/ML security challenges and CTFs.
  11. AI Goat - Vulnerable LLM CTF built on AWS.

AI Red Teaming

Resources and methodologies for red teaming AI/GenAI systems.

  1. Microsoft AI Red Team
  2. Anthropic Red Teaming Research
  3. MITRE ATLAS Attack Navigator
  4. AI Red Teaming Guide - Humane Intelligence
  5. Google DeepMind: Evaluating Frontier Models for Dangerous Capabilities

GenAI Security Attacks, Breaches & Incidents

Notable real-world incidents involving GenAI and LLM security.

  1. Check Point Researchers Expose Critical Claude Code Flaws - CVE-2025-59536 and CVE-2026-21852: Enabling Remote Command Execution and API Key Theft
  2. Anthropic: Chinese AI Firms Created 24,000 Fraudulent Accounts For ‘Distillation Attacks
  3. LiteLLM on PyPI Was Compromised, What the Attack Changed and What Defenders Should Do Now
  4. The Day Chevrolet’s AI Chatbot Tried to Sell a $70,000 SUV for $1
  5. Samsung bans use of generative AI tools like ChatGPT after April internal data leak
  6. AI-powered Bing Chat spills its secrets via prompt injection attack
  7. Air Canada Chatbot Provides Wrong Info (2024) - Airline held liable for chatbot hallucinating refund policy.
  8. Microsoft Tay Bot Manipulation (2016) - Twitter chatbot manipulated into generating offensive content.
  9. ChatGPT Data Leak Bug (2023) - Bug exposed chat history titles and payment info of other users.
  10. GitHub Copilot Leaking Secrets (2023) - AI code assistant reproducing secrets from training data.

Regulatory Frameworks & Governance

  1. EU AI Act - EU regulation on artificial intelligence.
  2. NIST AI 100-1: Artificial Intelligence Risk Management Framework
  3. ISO/IEC 42001:2023 - AI Management System Standard
  4. White House Executive Order on Safe, Secure AI
  5. Singapore AI Governance Framework

Newsletters & Communities

  1. OWASP GenAI Slack Channel - Join #project-top10-for-llm channel.
  2. AI Village (DEF CON) - Community focused on AI security research.
  3. MLSecOps Community - Community for ML security operations.
  4. The AI Security Newsletter by Ken Huang
  5. Protect AI Newsletter

Contributors

About

Curated list of links, references, books videos, tutorials (Free or Paid), Exploit, CTFs, Hacking Practices etc. which are related to GenAI and LLM Security

Topics

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors