
Commit bfe79aa

Updates to the AI Application Security templates
These updates are to match the VRT update (bugcrowd/vulnerability-rating-taxonomy#464).

Adding:

- P1 - AI Application Security - Training Data Poisoning - Backdoor Injection / Bias Manipulation
- P1 - AI Application Security - Model Extraction - API Query-Based Model Reconstruction
- P1 - AI Application Security - Sensitive Information Disclosure - Cross-Tenant PII Leakage/Exposure
- P1 - AI Application Security - Sensitive Information Disclosure - Key Leak
- P1 - AI Application Security - Remote Code Execution - Full System Compromise
- P2 - AI Application Security - Remote Code Execution - Sandboxed Container Code Execution
- P2 - AI Application Security - Prompt Injection - System Prompt Leakage
- P2 - AI Application Security - Vector and Embedding Weaknesses - Embedding Exfiltration / Model Extraction
- P3 - AI Application Security - Vector and Embedding Weaknesses - Semantic Indexing
- P2 - AI Application Security - Denial-of-Service (DoS) - Application-Wide
- P4 - AI Application Security - AI Safety - Misinformation / Wrong Factual Data
- P4 - AI Application Security - Insufficient Rate Limiting - Query Flooding / API Token Abuse
- P4 - AI Application Security - Denial-of-Service (DoS) - Tenant-Scoped
- P4 - AI Application Security - Adversarial Example Injection - AI Misclassification Attacks
- P3 - AI Application Security - Improper Output Handling - Cross-Site Scripting (XSS)
- P4 - AI Application Security - Improper Output Handling - Markdown/HTML Injection
- P5 - AI Application Security - Improper Input Handling - ANSI Escape Codes
- P5 - AI Application Security - Improper Input Handling - Unicode Confusables
- P5 - AI Application Security - Improper Input Handling - RTL Overrides

Removing:

- P1 - AI Application Security - Large Language Model (LLM) Security - LLM Output Handling
- P1 - AI Application Security - Large Language Model (LLM) Security - Prompt Injection
- P1 - AI Application Security - Large Language Model (LLM) Security - Training Data Poisoning
- P2 - AI Application Security - Large Language Model (LLM) Security - Excessive Agency/Permission Manipulation
1 parent a8c535c commit bfe79aa

File tree

126 files changed: +891, -91 lines


submissions/description/ai_application_security/.gitkeep

Whitespace-only changes.

submissions/description/ai_application_security/adversarial_example_injection/.gitkeep

Whitespace-only changes.

submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/.gitkeep

Whitespace-only changes.

submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md renamed to submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/guidance.md

Lines changed: 6 additions & 0 deletions
Lines changed: 20 additions & 0 deletions

submissions/description/ai_application_security/llm_security/guidance.md renamed to submissions/description/ai_application_security/adversarial_example_injection/guidance.md

Lines changed: 6 additions & 0 deletions

submissions/description/ai_application_security/llm_security/llm_output_handling/template.md renamed to submissions/description/ai_application_security/adversarial_example_injection/template.md

Lines changed: 1 addition & 1 deletion

submissions/description/ai_application_security/ai_safety/.gitkeep

Whitespace-only changes.
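The renamed paths above follow a consistent convention: each VRT category segment is lowercased and slugified into a directory under `submissions/description/`. A minimal sketch of that mapping, using a hypothetical `template_dir` helper (not part of the repository) to illustrate how a category chain resolves to a template directory:

```python
import re

def slugify(segment: str) -> str:
    # Lowercase and collapse runs of non-alphanumeric characters into "_",
    # e.g. "AI Misclassification Attacks" -> "ai_misclassification_attacks".
    return re.sub(r"[^a-z0-9]+", "_", segment.lower()).strip("_")

def template_dir(*categories: str) -> str:
    # Hypothetical helper: build the submissions/description directory for a
    # VRT category chain, mirroring the renamed paths in this commit.
    return "/".join(["submissions", "description", *map(slugify, categories)])

print(template_dir("AI Application Security",
                   "Adversarial Example Injection",
                   "AI Misclassification Attacks"))
# -> submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks
```

Note that punctuation collapses too: "Denial-of-Service (DoS)" slugifies to `denial_of_service_dos` under this scheme.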

0 commit comments
