Arcanum AI Security Resources Hub

Live Site: https://arcanum-sec.github.io/ai-sec-resources/

A comprehensive collection of AI/LLM security resources, including labs, competitions, bug bounties, and security tools, for learning and practicing AI security concepts.

Overview

The Arcanum AI Security Resources Hub serves as a centralized platform for AI security professionals, researchers, and enthusiasts to discover and access various resources for learning about and testing AI/LLM security vulnerabilities.

What's Included

Labs (23 Active)

Interactive training environments and challenges covering:

  • Prompt injection techniques
  • Jailbreaking methodologies
  • Indirect prompt injection
  • Data exfiltration attacks
  • Cross-user data leakage
  • Authentication bypass methods
  • RAG system vulnerabilities
  • And much more...

Competitions (5 Active)

Competitive platforms for testing AI security skills:

  • HackAPrompt 2.0 - World's largest AI red-teaming competition
  • Pangea AI Escape Room - Interactive escape room challenges
  • RedTeam Arena - Community-driven LLM red-teaming
  • Gray Swan AI Arena - AI safety and alignment competitions
  • LLM Hacker Challenge - Progressive difficulty challenges by All About AI

Bug Bounties (4 Programs)

Official vulnerability disclosure programs:

  • Anthropic Bug Bounty - Claude AI system vulnerabilities
  • OpenAI Bug Bounty - ChatGPT & GPT API security issues
  • Google Gemini Bug Bounty - Gemini AI model vulnerabilities
  • 0din.ai GenAI Bug Bounty - Mozilla's generative AI security program

Security Tools (7 Tools)

Essential tools for AI security testing:

  • P4RS3LT0NGV3 (Original & Extended) - Prompt injection payload generators
  • PyRIT - Microsoft's Python Risk Identification Tool
  • Garak - NVIDIA's comprehensive LLM vulnerability scanner
  • Promptfoo - LLM testing and red teaming framework
  • Spikee - Arcanum's AI security analysis platform
  • PyRIT-Ship - Burp Suite extension for AI vulnerability testing

Text Resources (3 Resources)

Research papers, taxonomies, and documentation:

  • Arcanum Prompt Injection Taxonomy - Comprehensive classification system for prompt injection attacks
  • AI Pentest Questionnaire - Structured penetration testing assessment guide for AI systems
  • AI Security Ecosystem - Enterprise AI deployment mapping for comprehensive pentesting scope identification

Getting Started

  1. Visit the live site: AI Security Resources Hub
  2. Browse through the different categories using the tab navigation
  3. Click on any resource to access the tool, lab, or competition
  4. Start with beginner-level resources and progress to advanced challenges

Local Development

To run this project locally:

# Clone the repository
git clone https://github.com/Arcanum-Sec/ai-sec-resources.git

# Navigate to the project directory
cd ai-sec-resources

# Serve the files using any web server
# For example, using Python's built-in server:
python -m http.server 8080

# Or using Node.js serve:
npx serve .
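
Since the site is a single static index.html, it is easy to smoke-test that a local server actually serves it. The sketch below is a hypothetical check (not part of the repository): it stands up Python's built-in HTTP server against a throwaway directory with a stand-in index.html, then fetches the page and confirms the content comes back.

```python
import http.server
import os
import tempfile
import threading
import urllib.request

# Throwaway directory with a stand-in index.html (in a real run,
# point `directory` at your ai-sec-resources checkout instead).
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "index.html"), "w") as f:
    f.write("<h1>AI Security Resources Hub</h1>")

def handler(*args, **kwargs):
    return http.server.SimpleHTTPRequestHandler(*args, directory=tmp, **kwargs)

# Port 0 lets the OS pick a free port, so the check never collides with 8080.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read().decode()
print("served OK" if "Resources Hub" in body else "missing content")
server.shutdown()
```

Running it prints "served OK" when the file is reachable, which mirrors what you should see in a browser at http://localhost:8080 after starting either server command above.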

Project Structure

ai-sec-resources/
├── index.html          # Main application file with all content
├── README.md           # This file
└── .git/              # Git repository data

Features

  • Responsive Design - Works on desktop, tablet, and mobile devices
  • Tab Navigation - Organized content across Labs, Competitions, Bug Bounties, and Tools
  • Search-Friendly - Easy to find specific resources
  • Visual Status Indicators - Live status indicators for each resource
  • External Links - Direct access to all platforms and tools
  • Progressive Difficulty - Resources organized by skill level

Contributing

We welcome contributions to expand and improve the resource collection! To contribute:

  1. Fork the repository
  2. Add new resources to the appropriate section in index.html
  3. Update the stats counters if adding new items
  4. Test your changes locally
  5. Submit a pull request with a clear description

Adding New Resources

When adding new resources, please ensure:

  • Accurate descriptions and feature lists
  • Working links to the actual resources
  • Appropriate difficulty level classification
  • Consistent formatting with existing entries
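
A quick structural check can catch malformed links before a pull request is opened. The helper below is a hypothetical sketch, not part of the repository's tooling: it only verifies that each candidate URL uses https and has a hostname, so it does not replace actually clicking through to confirm the resource works.

```python
from urllib.parse import urlparse

def looks_like_valid_link(url: str) -> bool:
    """Cheap structural check for resource links: https scheme plus a hostname."""
    parts = urlparse(url)
    return parts.scheme == "https" and bool(parts.netloc)

# Example candidate links for a new entry (illustrative values only)
new_entries = [
    "https://github.com/Arcanum-Sec/ai-sec-resources",
    "http://example.com/insecure",   # flagged: not https
    "not-a-url",                     # flagged: no scheme or host
]
for url in new_entries:
    print(url, "->", "ok" if looks_like_valid_link(url) else "fix me")
```

Pairing a check like this with a manual click-through keeps dead or mistyped links out of index.html.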

License

This project is open source and available under the MIT License.

Disclaimer

This resource hub is intended for educational and authorized security testing purposes only. Always ensure you have proper authorization before testing any AI systems or applications.

Contact

  • Project Maintainer: Arcanum Security
  • Issues: Please report issues via GitHub Issues
  • Website: Arcanum Security

Acknowledgments

Special thanks to all the security researchers, organizations, and content creators who have contributed to the AI security community by creating and maintaining these valuable resources.


If you find this resource hub useful, please consider starring the repository!
