Duration: Weeks 9-12
Level: Intermediate to Advanced
Focus: MITRE ATLAS, OWASP AI, security tools, Berryville methodology
Prerequisites: Foundation Path; Regulatory Path (recommended)
This path develops technical competency in AI security, threat modeling, and security tooling essential for effective AI risk management.
By the end of this path, you will be able to:
- ✅ Apply MITRE ATLAS for AI threat modeling
- ✅ Implement OWASP AI security recommendations
- ✅ Use security tools for AI/ML assessment
- ✅ Apply Berryville Institute architectural risk analysis
- ✅ Evaluate cloud AI security configurations
Module 1: MITRE ATLAS
By the end of this module, you will:
- Master the ATLAS matrix structure
- Apply ATLAS to real-world threat modeling
- Use the ATLAS Navigator tool
| Resource | Type | Time | Cost | Link |
|---|---|---|---|---|
| ATLAS Matrix | Interactive | 3 hrs | Free | atlas.mitre.org |
| ATLAS Navigator | Tool | 2 hrs | Free | ATLAS Navigator |
| ATLAS Case Studies | Studies | 2 hrs | Free | Case Studies |
| ATT&CK Fundamentals | Training | 2 hrs | Free | MITRE ATT&CK |
Supplementary resources:
| Resource | Type | Time | Link |
|---|---|---|---|
| MITRE ATLAS GitHub | Repository | 1 hr | GitHub |
| AI Red Team Resources | Collection | 2 hrs | Various |
Practical exercises:
- ATLAS Matrix Exploration
  - Document all 14 tactics with descriptions
  - Identify the 5 techniques per tactic most relevant to your environment
  - Create a quick-reference guide
- Threat Model Creation
  - Select an AI system (recommendation engine, chatbot, or classifier)
  - Using ATLAS Navigator, create a threat model covering:
    - Likely attack paths
    - Relevant techniques
    - Potential mitigations
  - Export and document the result
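Before reaching for Navigator, it can help to see that a threat model is just structured data. Below is a minimal, hypothetical sketch in Python; the technique IDs follow the ATLAS `AML.T####` naming pattern but are illustrative — verify the actual IDs and names on atlas.mitre.org before reuse.

```python
# Hypothetical threat model for a recommendation engine, shaped loosely
# like an ATLAS Navigator layer. Technique IDs are illustrative -- check
# them against atlas.mitre.org.
threat_model = {
    "system": "product-recommendation-engine",
    "attack_paths": [
        {
            "tactic": "ML Model Access",
            "technique": "AML.T0040",  # ML Model Inference API Access (verify)
            "entry_point": "public recommendation API",
            "mitigations": ["rate limiting", "API authentication"],
        },
        {
            "tactic": "ML Attack Staging",
            "technique": "AML.T0043",  # Craft Adversarial Data (verify)
            "entry_point": "user-supplied item metadata",
            "mitigations": [],  # gap: no control documented yet
        },
    ],
}

def unmitigated(model):
    """Return technique IDs with no documented mitigation."""
    return [p["technique"] for p in model["attack_paths"] if not p["mitigations"]]

print(unmitigated(threat_model))  # -> ['AML.T0043']
```

Keeping the model as data makes it easy to diff across reviews and to flag attack paths whose mitigation list is still empty.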
- Case Study Analysis
  - Analyze 3 ATLAS case studies
  - For each, identify the techniques used, the impact, and the lessons learned
  - Document defensive recommendations
Module checklist:
- Complete the ATLAS matrix review
- Master the ATLAS Navigator tool
- Analyze all case studies
- Create a threat model for a real system
- Document a tactical quick-reference
Module 2: OWASP AI Security
By the end of this module, you will:
- Apply the OWASP ML Top 10 and LLM Top 10
- Implement security controls for AI systems
- Conduct AI security assessments
| Resource | Type | Time | Cost | Link |
|---|---|---|---|---|
| OWASP AI Security Guide | Guide | 3 hrs | Free | OWASP AI |
| ML Top 10 | Document | 2 hrs | Free | ML Top 10 |
| LLM Top 10 2025 | Document | 2 hrs | Free | LLM Top 10 |
| OWASP Testing Guide | Guide | 2 hrs | Free | Testing Guide |
Supplementary resources:
| Resource | Type | Time | Link |
|---|---|---|---|
| OWASP Cheat Sheets | Reference | 2 hrs | Cheat Sheets |
| AI Security 101 | Course | 2 hrs | Various |
Practical exercises:
- ML Top 10 Assessment
  - For each ML Top 10 risk:
    - Document an attack scenario
    - Identify detection methods
    - Define preventive controls
    - Create a testing approach
- LLM Security Assessment
  - Select an LLM deployment (internal or hypothetical)
  - Assess it against the LLM Top 10
  - For each item, document the risk, current controls, gaps, and recommendations
- Security Control Mapping
  - Map OWASP AI controls to:
    - the NIST AI RMF MANAGE function
    - ISO 42001 Annex A controls
  - Create an integrated control framework
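The mapping exercise above can be captured in machine-readable form so gaps are easy to query. The rows below are illustrative placeholders showing the shape of the crosswalk, not an authoritative mapping — validate every entry against the OWASP, NIST AI RMF, and ISO/IEC 42001 source texts.

```python
# Illustrative OWASP-to-framework control map. Subcategory and clause
# references are placeholders -- verify each against the source documents.
CONTROL_MAP = [
    {"owasp": "LLM01 Prompt Injection",
     "nist_ai_rmf": "MANAGE (verify subcategory)",
     "iso_42001": "Annex A (verify control)"},
    {"owasp": "ML02 Data Poisoning Attack",
     "nist_ai_rmf": "MANAGE (verify subcategory)",
     "iso_42001": "Annex A (verify control)"},
]

def unmapped(control_map, framework):
    """List OWASP risks that still lack a mapping for a given framework."""
    return [row["owasp"] for row in control_map if not row.get(framework)]

print(unmapped(CONTROL_MAP, "iso_42001"))  # -> []
```

A structure like this doubles as the "integrated control framework" deliverable: add a row per OWASP risk and run `unmapped` per framework to find coverage holes.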
Module checklist:
- Study the ML Top 10 in depth
- Study the LLM Top 10 in depth
- Complete the ML Top 10 assessment
- Conduct the LLM security assessment
- Create the control mapping document
Module 3: Security Tools and Cloud AI Security
By the end of this module, you will:
- Use AI security testing tools
- Evaluate cloud AI security configurations
- Implement model monitoring
| Resource | Type | Time | Cost | Link |
|---|---|---|---|---|
| Adversarial Robustness Toolbox | Tool | 3 hrs | Free | ART GitHub |
| Microsoft Counterfit | Tool | 2 hrs | Free | Counterfit |
| AWS AI/ML Security | Guide | 2 hrs | Free | AWS AI |
| Azure AI Security | Guide | 2 hrs | Free | Azure AI |
| GCP AI Security | Guide | 2 hrs | Free | GCP AI |
Supplementary resources:
| Resource | Type | Time | Link |
|---|---|---|---|
| CSA AI Security Guidance | Guide | 2 hrs | CSA |
| MLflow Security | Docs | 1 hr | MLflow |
| Kubeflow Security | Docs | 1 hr | Kubeflow |
Practical exercises:
- Tool Exploration (Lab Environment)
  - Set up the Adversarial Robustness Toolbox (ART) in a test environment
  - Run sample adversarial attack simulations
  - Document the attack types, detection methods, and defensive measures
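To make the lab concrete, here is a from-scratch sketch of the Fast Gradient Sign Method (FGSM), the same class of evasion attack that ART's `FastGradientMethod` runs against real models. The logistic-regression "model" and its weights are toy values chosen for illustration.

```python
import numpy as np

# Toy logistic-regression "model" under attack (illustrative weights).
w = np.array([2.0, -1.5])
b = 0.1

def predict_proba(x):
    """Probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps=0.5):
    """Perturb x by eps in the sign of the loss gradient (FGSM)."""
    p = predict_proba(x)
    grad = (p - y) * w          # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0])        # benign input, classified positive
x_adv = fgsm(x, y=1.0)
print(predict_proba(x), predict_proba(x_adv))  # adversarial score is lower
```

In ART, the same experiment swaps the hand-written gradient for the library's estimator wrappers, which is what makes the attack practical against deep models.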
- Cloud Security Assessment
  - Select one cloud platform (AWS, Azure, or GCP)
  - Review its AI service security configurations
  - Create a security checklist for AI workloads covering:
    - Identity and access management
    - Data encryption
    - Network security
    - Logging and monitoring
- Model Monitoring Design
  - Design a model monitoring solution including:
    - Performance drift detection
    - Data drift detection
    - Anomaly detection on inputs and outputs
    - Security event logging
  - Document the tool selection and architecture
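For the data-drift component, one widely used metric is the Population Stability Index (PSI). A self-contained sketch follows; the 0.1/0.25 thresholds are common rules of thumb, not a standard.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins so log() and division stay defined.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
baseline = rng.normal(0, 1, 5000)   # training-time feature distribution
stable   = rng.normal(0, 1, 5000)   # live traffic, no drift
shifted  = rng.normal(1, 1, 5000)   # live traffic with a mean shift

print(psi(baseline, stable))    # small (< 0.1): no action
print(psi(baseline, shifted))   # large (> 0.25): investigate drift
```

A monitoring design would run this per feature on a schedule and raise a security event when the index crosses the chosen threshold.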
Module checklist:
- Install and explore ART
- Review Counterfit capabilities
- Complete the cloud security assessment
- Design the model monitoring solution
- Document tool evaluation findings
Module 4: Berryville Architectural Risk Analysis
By the end of this module, you will:
- Apply Berryville (BIML) architectural risk analysis
- Integrate security into the AI development lifecycle
- Conduct comprehensive AI risk assessments
| Resource | Type | Time | Cost | Link |
|---|---|---|---|---|
| Berryville ML Risk Framework | Paper | 2 hrs | Free | BIML |
| Architectural Risk Analysis | Paper | 2 hrs | Free | BIML ARA |
| BIML Taxonomy | Document | 2 hrs | Free | BIML |
| AI Threat Modeling Guide | Guide | 2 hrs | Free | Various |
Supplementary resources:
| Resource | Type | Time | Link |
|---|---|---|---|
| STRIDE for AI | Article | 1 hr | Various |
| AI Attack Surface Analysis | Paper | 2 hrs | Academic sources |
Practical exercises:
- BIML Risk Analysis
  - Select an ML system for analysis
  - Apply the BIML methodology:
    - System characterization
    - Risk source identification
    - Attack surface analysis
    - Risk prioritization
  - Document findings and recommendations
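The prioritization step can start as a simple likelihood x impact score over the findings. The scores and findings below are illustrative examples, not taken from the BIML documents.

```python
# Hypothetical BIML-style findings scored on 1-5 likelihood/impact scales.
findings = [
    {"risk": "training-data poisoning via open data pipeline", "likelihood": 3, "impact": 5},
    {"risk": "model extraction through public API",            "likelihood": 4, "impact": 3},
    {"risk": "insider tampering with model registry",          "likelihood": 2, "impact": 5},
]

# Rank by the product; a fuller scheme might break ties on impact.
ranked = sorted(findings, key=lambda f: f["likelihood"] * f["impact"], reverse=True)
for f in ranked:
    print(f["likelihood"] * f["impact"], f["risk"])  # 15, 12, 10
```

Even this crude ordering forces the analysis to state its assumptions: the scores, not the prose, decide what gets fixed first.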
- Integrated Threat Model
  - Combine learnings from ATLAS, OWASP, and BIML
  - Create a comprehensive threat model for a complex AI system
  - Include threats, vulnerabilities, controls, and residual risks
- Security Architecture Review
  - Design a secure AI system architecture
  - Address the data pipeline, training, serving, and monitoring stages
  - Document the security controls at each stage
- Technical Path Assessment
  - Compile all technical artifacts
  - Self-assess against the learning objectives
  - Identify areas for continued development
Module checklist:
- Study the BIML methodology
- Complete the BIML risk analysis
- Create the integrated threat model
- Design a secure architecture
- Complete the technical self-assessment
Knowledge check:
- What are the 14 ATLAS tactics?
- Name the OWASP ML Top 10 risks
- What is prompt injection, and how do you mitigate it?
- Describe the Berryville architectural risk analysis approach
- Which cloud security controls are critical for AI workloads?
By completing this path, you should have:
- ATLAS tactical quick-reference
- AI system threat model (ATLAS Navigator export)
- Case study analysis document
- ML Top 10 assessment
- LLM security assessment
- OWASP-to-framework control mapping
- Cloud security checklist
- Model monitoring design
- BIML risk analysis
- Integrated threat model
- Secure architecture design
Congratulations on completing the Technical Path!
Continue to: Advanced Path - Risk assessments, vendor management, and operational governance
Related certifications:
- GIAC Machine Learning Security (GMLS)
- OSCP (for broader security context)
- Cloud security certifications (AWS/Azure/GCP)
Additional resources:
- HuggingFace model testing
- AI Village resources
- DEF CON AI Village archives
- arXiv ML security papers
- USENIX Security proceedings
- IEEE S&P AI security tracks
Last Updated: 2024