Skip to content

beyefendi/awesome-llm-security

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 

History

20 Commits
Β 
Β 
Β 
Β 
Β 
Β 

Repository files navigation

πŸ€– Awesome LLM Security Awesome

A curated list of awesome tools, documents, and projects about LLM Security.


πŸ“š Table of Contents


πŸ› οΈ Tools

🧰 Multi-Purpose Model Scanners

  • GitHub Repo stars promptfoo LLM red teaming and evaluation framework with CI/CD integration
  • GitHub Repo stars Garak LLM vulnerability scanner
  • GitHub Repo stars AI-Infra-Guard LLM vulnerability scanner with Web UI, REST APIs, and Dockerized
  • GitHub Repo stars LLM Guard Security toolkit for LLM interactions
  • GitHub stars Agentic Security Security toolkit for AI agents
  • GitHub Repo stars DeepTeam LLM red teaming framework (prompt injection, hallucination, data leaks, jailbreaks).
  • GitHub stars AI-Scanner AI model safety scanner built on NVIDIA garak
  • GitHub stars LLMmap Tool for mapping LLM vulnerabilities
  • GitHub stars LLaMator Framework for testing vulnerabilities of LLMs
  • GitHub Repo stars Plexiglass Security toolbox for testing and safeguarding LLMs
  • GitHub Repo stars Inkog AI agent security scanner (CLI + MCP server) detects prompt injection, SQLi via LLM.

πŸ€– MCP & Agent Scanners

  • GitHub stars AgentBench: Benchmark to evaluate LLMs as agents
  • GitHub Repo stars Agentic Radar Open-source CLI security scanner for agentic workflows
  • GitHub Repo stars MCP Scanner Scan MCP servers for potential threats & security findings
  • GitHub stars Awesome MCP Security Curated list of MCP security resources
  • GitHub Repo stars MCP Shield Security scanner for MCP servers
  • GitHub stars Invariant Trace analysis tool for AI agents
  • GitHub Repo stars MCP Safety Scanner Automated MCP safety auditing and remediation using Agents
  • GitHub Repo stars Agent Security Scanner MCP MCP server for scanning code for web vulnerabilities, prompt injection, and AI-hallucinated package detection
  • GitHub Repo stars Agent-threat-rules Open detection standard for AI agent threats. Like Sigma, but for prompt injection, tool poisoning, and MCP attacks
  • GitHub stars Tenuo Capability-based authorization for AI agents
  • GitHub stars Awesome LLM Agent Security LLM agent security resources, attacks, vulnerabilities
  • GitHub Repo stars Ziran Security testing framework for AI agents
  • GitHub Repo stars Cerberus Agentic AI runtime security platform
  • GitHub Repo stars clawguard Firewall for AI agents
  • GitHub Repo stars MCPs-audit OWASP Security Scanner for MCP Servers
  • GitHub Repo stars Agent Guard Runtime governance firewall for AI agents, policy enforcement, MCP tool scanning

πŸ§‘β€πŸ’» RAG Security


πŸ’£ Prompt Injection


πŸ—‘οΈ Autonomous Pentesting Frameworks


πŸ›‘οΈ Defensive & Guardrail Tools

  • GitHub Repo stars Guardrails: Add structured validation and policy enforcement for LLMs.
  • GitHub stars NeMo Guardrails: Protects against jailbreak and hallucinations with customizable rulesets
  • GitHub Repo stars PurpleLlama: Tools to assess and improve LLM security from META.
  • GitHub Repo stars PyRIT: Python Risk Identification Tool for generative AI
  • GitHub stars LLM-Guard: Tool for securing LLM interactions (replaced rebuff)
  • GitHub stars LangKit: Functions for jailbreak detection, prompt injection, and sensitive information detection
  • GitHub Repo stars Prompt Injection Defenses: Practical and proposed defenses against prompt injection.
  • GitHub Repo stars Vigil: Prompt injection detection toolkit and REST API for LLM security risk scoring.
  • GitHub stars Plexiglass: Security tool for LLM applications
  • GitHub Repo stars Last Layer: Low-latency pre-filter for prompt injection prevention.
  • GitHub Repo stars Veritensor: AI model scanner to detect Pickle/PyTorch malware, check licenses, and verify HF hashes.
  • GitHub Repo stars ShellWard: AI Agent security middleware
  • GitHub stars Tenuo: Capability tokens for AI agents with task-scoped TTLs, offline verification, and proof-of-possession binding
  • GitHub stars TrustGate: Generative Application Firewall for GenAI Applications
  • GitHub stars LLM Confidentiality: Tool for ensuring confidentiality in LLMs
  • GitHub stars LocalMod: Self-hosted content moderation API with prompt injection detection, toxicity filtering, PII detection, and NSFW filtering
  • GitHub Repo stars OpenClaw Security Suite: 11-tool defensive security suite for AI agent workspaces (prompt injection defense, integrity verification, secret scanning, supply chain analysis). Pure Python stdlib, zero dependencies, local-only execution.
  • Acgs-lite: Governance layer for AI agents that blocks unsafe actions before execution, enforces MACI separation of powers, and keeps tamper-evident audit trails. GitHub Repo stars
  • GitHub Repo stars Prompt Shield: GitHub Action for detecting indirect prompt injection in CI/CD pipelines. 4-layer defense architecture.
  • AIDEFEND: Practical knowledge base for AI security defenses
  • Aigis: Zero-dependency Python firewall for AI agents. 180+ patterns across OWASP LLM Top 10, StruQ-style structured prompts, goal-conditioned FSM, RAG context filter, MCP 3-stage scanning, MemoryGraft defence, judge-manipulation detection. Multi-layer: 4 walls + L4-L7 capability/AEP/safety/FSM. GitHub stars

πŸ•΅οΈ Benchmarks


🧩 Threat Modeling


πŸ§ͺ Playground


πŸ§ͺ PoC & Study Resources


πŸŽ₯ Courses


🌟 Miscellaneous


πŸ“° Blogs and Social Media


πŸ™ Acknowledgements

This repository is actively maintained as a fork of the original project. It includes pending contributions, removes broken links, and separates academic papers from other resources for better organization.

Contributions are always welcome. Please read the Contribution Guidelines before contributing.

Alternative: Awesome LLMSecOps


Star History Chart

About

Awesome LLM security tools, research, and documents

Topics

Resources

Contributing

Stars

Watchers

Forks

Contributors