Skip to content

Latest commit

 

History

History
301 lines (227 loc) · 9.7 KB

File metadata and controls

301 lines (227 loc) · 9.7 KB

🔵 Technical Path - Security & Threat Modeling

📋 Overview

Duration: Weeks 9-12 Level: Intermediate to Advanced Focus: MITRE ATLAS, OWASP AI, security tools, Berryville methodology Prerequisites: Foundation Path, Regulatory Path (recommended)

This path develops technical competency in AI security, threat modeling, and security tooling essential for effective AI risk management.


🎯 Learning Objectives

By the end of this path, you will be able to:

  • ✅ Apply MITRE ATLAS for AI threat modeling
  • ✅ Implement OWASP AI security recommendations
  • ✅ Use security tools for AI/ML assessment
  • ✅ Apply Berryville Institute architectural risk analysis
  • ✅ Evaluate cloud AI security configurations

📅 Week 9: MITRE ATLAS Threat Modeling

Learning Objectives

  • Master the ATLAS matrix structure
  • Apply ATLAS to real-world threat modeling
  • Use ATLAS Navigator tool

Required Resources

Resource Type Time Cost Link
ATLAS Matrix Interactive 3 hrs Free atlas.mitre.org
ATLAS Navigator Tool 2 hrs Free ATLAS Navigator
ATLAS Case Studies Studies 2 hrs Free Case Studies
ATT&CK Fundamentals Training 2 hrs Free MITRE ATT&CK

Supplementary Resources

Resource Type Time Link
MITRE ATLAS GitHub Repository 1 hr GitHub
AI Red Team Resources Collection 2 hrs Various

Hands-On Exercises

  1. ATLAS Matrix Exploration

    • Document all 12 tactics with descriptions
    • Identify 5 techniques per tactic most relevant to your environment
    • Create quick-reference guide
  2. Threat Model Creation

    • Select an AI system (recommendation engine, chatbot, or classifier)
    • Using ATLAS Navigator, create threat model covering:
      • Likely attack paths
      • Relevant techniques
      • Potential mitigations
    • Export and document
  3. Case Study Analysis

    • Analyze 3 ATLAS case studies
    • For each: identify techniques used, impact, and lessons learned
    • Document defensive recommendations

Week 9 Checklist

  • Complete ATLAS matrix review
  • Master ATLAS Navigator tool
  • Analyze all case studies
  • Create threat model for real system
  • Document tactical quick-reference

📅 Week 10: OWASP AI Security Implementation

Learning Objectives

  • Apply OWASP ML Top 10 and LLM Top 10
  • Implement security controls for AI systems
  • Conduct AI security assessments

Required Resources

Resource Type Time Cost Link
OWASP AI Security Guide Guide 3 hrs Free OWASP AI
ML Top 10 Document 2 hrs Free ML Top 10
LLM Top 10 2025 Document 2 hrs Free LLM Top 10
OWASP Testing Guide Guide 2 hrs Free Testing Guide

Supplementary Resources

Resource Type Time Link
OWASP Cheat Sheets Reference 2 hrs Cheat Sheets
AI Security 101 Course 2 hrs Various

Hands-On Exercises

  1. ML Top 10 Assessment

    • For each ML Top 10 risk:
      • Document attack scenario
      • Identify detection methods
      • Define preventive controls
      • Create testing approach
  2. LLM Security Assessment

    • Select an LLM deployment (internal or hypothetical)
    • Assess against LLM Top 10
    • Document: risk, current controls, gaps, recommendations
  3. Security Control Mapping

    • Map OWASP AI controls to:
      • NIST AI RMF MANAGE function
      • ISO 42001 Annex A controls
    • Create integrated control framework

Week 10 Checklist

  • Study ML Top 10 in depth
  • Study LLM Top 10 in depth
  • Complete ML Top 10 assessment
  • Conduct LLM security assessment
  • Create control mapping document

📅 Week 11: Security Tools & Cloud AI

Learning Objectives

  • Use AI security testing tools
  • Evaluate cloud AI security configurations
  • Implement model monitoring

Required Resources

Resource Type Time Cost Link
Adversarial Robustness Toolbox Tool 3 hrs Free ART GitHub
Microsoft Counterfit Tool 2 hrs Free Counterfit
AWS AI/ML Security Guide 2 hrs Free AWS AI
Azure AI Security Guide 2 hrs Free Azure AI
GCP AI Security Guide 2 hrs Free GCP AI

Supplementary Resources

Resource Type Time Link
CSA AI Security Guidance Guide 2 hrs CSA
MLflow Security Docs 1 hr MLflow
Kubeflow Security Docs 1 hr Kubeflow

Hands-On Exercises

  1. Tool Exploration (Lab Environment)

    • Set up Adversarial Robustness Toolbox in test environment
    • Run sample adversarial attack simulations
    • Document: attack types, detection methods, defensive measures
  2. Cloud Security Assessment

    • Select one cloud platform (AWS/Azure/GCP)
    • Review AI service security configurations
    • Create security checklist for AI workloads covering:
      • Identity and access management
      • Data encryption
      • Network security
      • Logging and monitoring
  3. Model Monitoring Design

    • Design a model monitoring solution including:
      • Performance drift detection
      • Data drift detection
      • Anomaly detection on inputs/outputs
      • Security event logging
    • Document tool selection and architecture

Week 11 Checklist

  • Install and explore ART
  • Review Counterfit capabilities
  • Complete cloud security assessment
  • Design model monitoring solution
  • Document tool evaluation findings

📅 Week 12: Berryville & Advanced Risk Analysis

Learning Objectives

  • Apply Berryville architectural risk analysis
  • Integrate security into AI development lifecycle
  • Conduct comprehensive AI risk assessments

Required Resources

Resource Type Time Cost Link
Berryville ML Risk Framework Paper 2 hrs Free BIML
Architectural Risk Analysis Paper 2 hrs Free BIML ARA
BIML Taxonomy Document 2 hrs Free BIML
AI Threat Modeling Guide Guide 2 hrs Free Various

Supplementary Resources

Resource Type Time Link
STRIDE for AI Article 1 hr Various
AI Attack Surface Analysis Paper 2 hrs Academic sources

Hands-On Exercises

  1. BIML Risk Analysis

    • Select an ML system for analysis
    • Apply BIML methodology:
      • System characterization
      • Risk source identification
      • Attack surface analysis
      • Risk prioritization
    • Document findings and recommendations
  2. Integrated Threat Model

    • Combine learnings from ATLAS, OWASP, and BIML
    • Create comprehensive threat model for complex AI system
    • Include: threats, vulnerabilities, controls, residual risks
  3. Security Architecture Review

    • Design secure AI system architecture
    • Address: data pipeline, training, serving, monitoring
    • Document security controls at each stage
  4. Technical Path Assessment

    • Compile all technical artifacts
    • Self-assess against learning objectives
    • Identify areas for continued development

Week 12 Checklist

  • Study BIML methodology
  • Complete BIML risk analysis
  • Create integrated threat model
  • Design secure architecture
  • Complete technical self-assessment

📊 Technical Path Assessment

Knowledge Check

  1. What are the 12 ATLAS tactics?
  2. Name the OWASP ML Top 10 risks
  3. What is prompt injection and how do you mitigate it?
  4. Describe the Berryville architectural risk analysis approach
  5. What cloud security controls are critical for AI workloads?

Portfolio Deliverables

By completing this path, you should have:

  • ATLAS tactical quick-reference
  • AI system threat model (ATLAS Navigator export)
  • Case study analysis document
  • ML Top 10 assessment
  • LLM security assessment
  • OWASP-to-framework control mapping
  • Cloud security checklist
  • Model monitoring design
  • BIML risk analysis
  • Integrated threat model
  • Secure architecture design

➡️ Next Steps

Congratulations on completing the Technical Path!

Continue to: Advanced Path - Risk assessments, vendor management, and operational governance


📚 Additional Resources

Certifications

  • GIAC Machine Learning Security (GMLS)
  • OSCP (for broader security context)
  • Cloud security certifications (AWS/Azure/GCP)

Tools & Labs

  • HuggingFace model testing
  • AI Village resources
  • DEF CON AI Village archives

Research

  • arXiv ML security papers
  • USENIX Security proceedings
  • IEEE S&P AI security tracks

Last Updated: 2024 | Back to Learning Paths