Updates to the AI Application Security templates #561
Merged
Conversation
These updates are to match the VRT update - bugcrowd/vulnerability-rating-taxonomy#464

Adding:
- P1 - AI Application Security - Training Data Poisoning - Backdoor Injection / Bias Manipulation
- P1 - AI Application Security - Model Extraction - API Query-Based Model Reconstruction
- P1 - AI Application Security - Sensitive Information Disclosure - Cross-Tenant PII Leakage/Exposure
- P1 - AI Application Security - Sensitive Information Disclosure - Key Leak
- P1 - AI Application Security - Remote Code Execution - Full System Compromise
- P2 - AI Application Security - Remote Code Execution - Sandboxed Container Code Execution
- P2 - AI Application Security - Prompt Injection - System Prompt Leakage
- P2 - AI Application Security - Vector and Embedding Weaknesses - Embedding Exfiltration / Model Extraction
- P3 - AI Application Security - Vector and Embedding Weaknesses - Semantic Indexing
- P2 - AI Application Security - Denial-of-Service (DoS) - Application-Wide
- P4 - AI Application Security - AI Safety - Misinformation / Wrong Factual Data
- P4 - AI Application Security - Insufficient Rate Limiting - Query Flooding / API Token Abuse
- P4 - AI Application Security - Denial-of-Service (DoS) - Tenant-Scoped
- P4 - AI Application Security - Adversarial Example Injection - AI Misclassification Attacks
- P3 - AI Application Security - Improper Output Handling - Cross-Site Scripting (XSS)
- P4 - AI Application Security - Improper Output Handling - Markdown/HTML Injection
- P5 - AI Application Security - Improper Input Handling - ANSI Escape Codes
- P5 - AI Application Security - Improper Input Handling - Unicode Confusables
- P5 - AI Application Security - Improper Input Handling - RTL Overrides

Removing:
- P1 - AI Application Security - Large Language Model (LLM) Security - LLM Output Handling
- P1 - AI Application Security - Large Language Model (LLM) Security - Prompt Injection
- P1 - AI Application Security - Large Language Model (LLM) Security - Training Data Poisoning
- P2 - AI Application Security - Large Language Model (LLM) Security - Excessive Agency/Permission Manipulation
abhinav-nain approved these changes on Jun 20, 2025