Contributing to the Sovereign AI Threat Model Framework

Thank you for your interest in contributing.

This project aims to provide a practical and structured framework for identifying and mitigating risks associated with sovereign AI and agentic AI systems. Contributions that improve the clarity, completeness, and usability of the framework are welcome.


Ways to Contribute

There are several ways to contribute to this project.

Threat Modeling Improvements

You may contribute improvements to:

  • Threat taxonomy categories
  • Attack trees
  • Threat scenarios
  • Sovereign exposure models

Contributions should focus on realistic security risks that organizations may encounter when deploying AI systems.


Security Control Enhancements

Contributors may propose new governance or security controls addressing risks such as:

  • AI data sovereignty
  • Agentic workflow security
  • AI supply chain integrity
  • AI model lifecycle governance
  • Prompt injection mitigation

Controls should be practical and aligned with real-world enterprise security practices.


Documentation Improvements

High-quality documentation is essential for a framework project.

Contributions may include:

  • Improving clarity of existing documentation
  • Adding examples or diagrams
  • Expanding guidance for security teams
  • Correcting inaccuracies

Templates and Tooling

You may also contribute:

  • Threat modeling templates
  • Architecture examples
  • Governance checklists
  • Automation scripts that support threat modeling
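An automation script in this category might, for example, check that a scenario document contains the elements this guide asks contributors to include. A minimal sketch (the function name and the exact heading strings are illustrative, not part of the framework):

```python
# Hypothetical helper: verify that a threat-scenario document contains the
# four elements this guide asks contributors to include.
REQUIRED_SECTIONS = (
    "Description of the threat",
    "Potential attack path",
    "Business impact",
    "Suggested mitigation controls",
)


def missing_sections(text: str) -> list[str]:
    """Return the required section headings absent from a scenario document."""
    lowered = text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]


if __name__ == "__main__":
    sample = (
        "## Description of the threat\n"
        "A tampered model artifact is introduced via the supply chain.\n"
        "## Potential attack path\n...\n"
        "## Business impact\n...\n"
    )
    # The sample omits the mitigation section:
    print(missing_sections(sample))  # -> ['Suggested mitigation controls']
```

A script like this could run in CI to flag incomplete scenario submissions before review.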

Contribution Process

To contribute, please follow these steps.

  1. Fork the repository
  2. Create a feature branch

Example:

git checkout -b feature/new-threat-scenario

  3. Make your changes
  4. Commit with clear messages
  5. Submit a pull request

Pull requests should clearly describe the purpose and scope of the change.


Contribution Guidelines

Please follow these guidelines when submitting contributions.

Keep Changes Focused

Each pull request should address a single improvement or feature where possible.


Maintain Clear Documentation

Documentation should be written in clear and professional language.

When adding threat scenarios or controls, include:

  • Description of the threat
  • Potential attack path
  • Business impact
  • Suggested mitigation controls
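A scenario entry covering these four elements might be structured as follows (the threat and headings shown are illustrative, not a required template):

```markdown
## Scenario: Poisoned model artifact in the supply chain

**Description of the threat:** An attacker publishes a tampered model
artifact to a registry the organization pulls from.

**Potential attack path:** Compromised registry account → altered model
weights → unreviewed deployment into an agentic workflow.

**Business impact:** Incorrect or attacker-controlled agent behavior;
regulatory and reputational exposure.

**Suggested mitigation controls:** Artifact signing and verification;
provenance checks before deployment; model registry access controls.
```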

Align With Framework Structure

New contributions should align with the repository structure.

Examples:

  • docs/05-attack-trees/
  • docs/06-scenarios/
  • docs/07-controls/
  • docs/08-templates/

Maintaining consistent organization ensures the framework remains easy to use.


Avoid Sensitive Data

Do not include:

  • Confidential data
  • Proprietary system details
  • Credentials or secrets

All examples should use hypothetical or sanitized scenarios.


Reporting Security Issues

If you discover a security vulnerability related to this repository, please do not open a public issue.

Instead, follow the process described in:

SECURITY.md


Code of Conduct

Contributors are expected to maintain respectful and professional interactions.

Constructive collaboration helps improve the framework and supports the broader AI security community.


Recognition

All meaningful contributions will be acknowledged through GitHub commit history and contributor listings.

Thank you for helping improve the security and governance of AI systems.