Thank you for your interest in contributing.
This project aims to provide a practical and structured framework for identifying and mitigating risks associated with sovereign AI and agentic AI systems. Contributions that improve the clarity, completeness, and usability of the framework are welcome.
There are several ways to contribute to this project.
You may contribute improvements to:
- Threat taxonomy categories
- Attack trees
- Threat scenarios
- Sovereign exposure models
Contributions should focus on realistic security risks that organizations may encounter when deploying AI systems.
Contributors may propose new governance or security controls addressing risks such as:
- AI data sovereignty
- Agentic workflow security
- AI supply chain integrity
- AI model lifecycle governance
- Prompt injection mitigation
Controls should be practical and aligned with real-world enterprise security practices.
High-quality documentation is essential for a framework project.
Contributions may include:
- Improving clarity of existing documentation
- Adding examples or diagrams
- Expanding guidance for security teams
- Correcting inaccuracies
You may also contribute:
- Threat modeling templates
- Architecture examples
- Governance checklists
- Automation scripts that support threat modeling
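As an illustration of the last item, an automation script contribution might be a small linter that checks scenario files for the required sections. This is a hypothetical sketch: the function name `check_scenario` and the exact section headings are illustrative, not part of the framework.

```shell
# Hypothetical linter: verify that a scenario file contains each
# required section. Section names here are illustrative.
check_scenario() {
  file="$1"
  for section in "Threat description" "Attack path" "Business impact" "Mitigation controls"; do
    # Fail fast and report the first missing section
    if ! grep -q "$section" "$file"; then
      echo "$file: missing section: $section"
      return 1
    fi
  done
  echo "$file: OK"
}
```

A contributor could run it as `check_scenario docs/06-scenarios/example.md` before opening a pull request.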
To contribute, please follow these steps.
- Fork the repository
- Create a feature branch
Example:

```shell
git checkout -b feature/new-threat-scenario
```
- Make your changes
- Commit with clear messages
- Submit a pull request
Pull requests should clearly describe the purpose and scope of the change.
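The steps above can be sketched end to end. The following runs in a throwaway local repository purely for illustration; the branch name, file path, and commit message are examples, not requirements.

```shell
# Illustrative walk-through of the contribution workflow in a scratch
# repository (in practice you would fork and clone on GitHub first).
set -eu
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email "contributor@example.com"
git config user.name "Example Contributor"

# Create a feature branch named after the change
git checkout -q -b feature/new-threat-scenario

# Make your changes and commit with a clear message
mkdir -p docs/06-scenarios
echo "# Example scenario" > docs/06-scenarios/example.md
git add docs/06-scenarios/example.md
git commit -q -m "Add example threat scenario"

# Confirm the branch before opening a pull request
git branch --show-current
```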
Please follow these guidelines when submitting contributions.
Each pull request should address a single improvement or feature where possible.
Documentation should be written in clear and professional language.
When adding threat scenarios or controls, include:
- Description of the threat
- Potential attack path
- Business impact
- Suggested mitigation controls
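A new scenario file covering these four elements might be scaffolded as follows. The file name `example-scenario.md` and the heading wording are hypothetical; the `docs/06-scenarios/` path matches the repository structure described below.

```shell
# Hypothetical skeleton for a new threat scenario entry.
mkdir -p docs/06-scenarios
cat > docs/06-scenarios/example-scenario.md <<'EOF'
# Scenario: <short title>

## Threat description
Summarize the threat and the affected AI system or component.

## Attack path
Outline, step by step, how an attacker could realize the threat.

## Business impact
Describe the operational, legal, or reputational consequences.

## Mitigation controls
List the governance or security controls that reduce the risk.
EOF
```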
New contributions should align with the repository structure.
Examples:

```
docs/05-attack-trees/
docs/06-scenarios/
docs/07-controls/
docs/08-templates/
```
Maintaining consistent organization ensures the framework remains easy to use.
Do not include:
- Confidential data
- Proprietary system details
- Credentials or secrets
All examples should use hypothetical or sanitized scenarios.
If you discover a security vulnerability related to this repository, please do not open a public issue.
Instead, follow the process described in SECURITY.md.
Contributors are expected to maintain respectful and professional interactions.
Constructive collaboration helps improve the framework and supports the broader AI security community.
All meaningful contributions will be acknowledged through GitHub commit history and contributor listings.
Thank you for helping improve the security and governance of AI systems.