A memory forensics triage assistant that combines Volatility3 with LLM analysis to help analysts quickly identify and prioritize suspicious artifacts in Windows memory dumps.
- Automated Volatility Execution: Runs a comprehensive set of Volatility3 plugins to extract raw forensic data
- Two-Stage LLM Analysis:
- Stage 1 (Triage LLM): Analyzes raw Volatility data and scores all artifacts (processes, connections, injections)
- Stage 2 (Analysis LLM): Synthesizes triage findings and powers interactive chat
- LLM-Based Scoring: Google Gemini (1M-token context) or Anthropic Claude evaluates suspiciousness using a deep understanding of malware behaviors
- Interactive Chat: Ask follow-up questions; the LLM is aware of the available Volatility plugins
- Hunting Checklist: Automatically generates actionable investigation steps
- Web Interface: Simple, intuitive interface for submitting dumps and reviewing results
[Web UI (templates/static)] ⇄ [FastAPI Backend (api.py)]
        │
        ├─► VolatilityRunner (runner.py)
        │     └─► Raw CSV Data (runs/*/raw/)
        │
        ├─► LLMAnalyzer (llm_interface.py)
        │     ├─ Stage 1: perform_triage()
        │     │    └─► Triage Findings (combined_findings.json)
        │     └─ Stage 2: analyze_initial() + chat()
        │          └─► Analysis + Interactive Chat
        │
        └─► ReportGenerator (reports.py)
              └─► Markdown Reports (*.md)
Component Details:
- FastAPI Backend: Orchestrates the analysis pipeline and serves the web interface
- VolatilityRunner: Executes Volatility3 plugins and saves raw CSV output
- LLMAnalyzer: Core LLM component with two-stage analysis
  - Stage 1 (perform_triage()): Analyzes raw Volatility data directly
  - Stage 2 (analyze_initial() + chat()): Synthesizes findings and powers interactive chat
- ReportGenerator: Creates markdown reports from triage findings
- TriageAnalyzer: Thin coordinator that invokes LLMAnalyzer.perform_triage()
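The way these components hand off to each other can be sketched in Python. The class and method names (VolatilityRunner, LLMAnalyzer.perform_triage(), analyze_initial(), ReportGenerator) come from this README; the call signatures and return values are assumptions, not the project's actual API:

```python
# Hedged sketch of the pipeline described above.  Names come from this
# README; signatures and return values are assumptions.

def run_pipeline(image_path, case_name, scenario, runner, analyzer, reporter):
    """Orchestrate: Volatility -> triage LLM -> analysis LLM -> reports."""
    raw_csv_dir = runner.run_all_plugins(image_path)          # raw CSV per plugin
    findings = analyzer.perform_triage(raw_csv_dir, scenario)  # Stage 1
    analysis = analyzer.analyze_initial(findings, scenario)    # Stage 2
    reporter.write_reports(case_name, findings, analysis)
    return findings, analysis
```

The point is the data flow: raw CSV feeds Stage 1, and only Stage 1's structured findings (not the raw dumps) feed Stage 2 and the chat.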
The easiest way to run the tool is using Docker:
# Pull from Docker Hub and run
docker run -d \
-p 8000:8000 \
-v $(pwd)/dumps:/dumps:ro \
-v $(pwd)/runs:/app/runs \
-e GOOGLE_API_KEY=your_key_here \
--name vol3-triage \
therealpotus/vol3-triage:latest
# Or use docker-compose (download compose file first)
wget https://raw.githubusercontent.com/vermi/vol3-triage/main/docker-compose.yml
echo "GOOGLE_API_KEY=your_key_here" > .env
docker-compose up -d

Place your memory dumps in ./dumps/ and access the web UI at http://localhost:8000/app
See Docker Deployment section below for detailed instructions.
- Docker (Recommended): No other dependencies needed
- OR Local Installation: Python 3.8+ and Google/Anthropic API key
Docker is the recommended deployment method - no Python setup required!
- Create directories for your dumps and outputs:
mkdir -p dumps runs
- Pull and run from Docker Hub:

docker run -d \
-p 8000:8000 \
-v $(pwd)/dumps:/dumps:ro \
-v $(pwd)/runs:/app/runs \
-e LLM_PROVIDER=google \
-e GOOGLE_API_KEY=your_google_api_key \
--name vol3-triage \
therealpotus/vol3-triage:latest
For Anthropic Claude:
docker run -d \
-p 8000:8000 \
-v $(pwd)/dumps:/dumps:ro \
-v $(pwd)/runs:/app/runs \
-e LLM_PROVIDER=anthropic \
-e ANTHROPIC_API_KEY=your_anthropic_api_key \
--name vol3-triage \
therealpotus/vol3-triage:latest
- Access the web UI: Open http://localhost:8000/app in your browser
- Place memory dumps in the dumps folder:
cp /path/to/your/memory.dmp ./dumps/
Then use the path /dumps/memory.dmp in the web interface
- Download the docker-compose.yml file:
wget https://raw.githubusercontent.com/vermi/vol3-triage/main/docker-compose.yml
- Create a .env file with your API key:
# For Google Gemini (recommended)
echo "LLM_PROVIDER=google" > .env
echo "GOOGLE_API_KEY=your_key_here" >> .env

# OR for Anthropic Claude
echo "LLM_PROVIDER=anthropic" > .env
echo "ANTHROPIC_API_KEY=your_key_here" >> .env
- Start the container:
docker-compose up -d
- View logs:
docker-compose logs -f
- Stop the container:
docker-compose down
API Keys:
- Google Gemini: Get your key from https://aistudio.google.com/apikey
- Anthropic Claude: Get your key from https://console.anthropic.com/
git clone https://github.com/vermi/vol3-triage.git
cd vol3-triage

python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

This includes Volatility3 and all required packages:
pip install -r requirements.txt

Copy the example file and add your API key:
cp .env.example .env
# Edit .env and add your API key

For Google Gemini (Recommended - 1M token context):
Add to your .env file:
LLM_PROVIDER=google
GOOGLE_API_KEY=your-google-api-key-here
Get your key from: https://aistudio.google.com/apikey
For Anthropic Claude (200K token context):
Add to your .env file:
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your-anthropic-api-key-here
Get your key from: https://console.anthropic.com/
Docker (Recommended):
# Using docker-compose
docker-compose up -d
# Or using docker run
docker run -d -p 8000:8000 -v $(pwd)/dumps:/dumps:ro -v $(pwd)/runs:/app/runs -e GOOGLE_API_KEY=your_key therealpotus/vol3-triage:latest

Local Installation:
# Linux/macOS
./run.sh
# Windows
run.bat

The server will start on http://localhost:8000.
- Open your browser to http://localhost:8000/app
- Fill in the analysis form:
  - Memory Image Path:
    - Docker users: /dumps/your-file.dmp (files in your ./dumps/ directory)
    - Local users: Full path to your memory dump file
  - Case Name: A short identifier for this analysis
  - Scenario Description: Brief context about the incident
- Click Run Triage
- Wait for analysis to complete (this may take several minutes)
- Review the results in the tabs:
- Summary: Overview of suspicious processes and connections
- Checklist: Actionable investigation steps
- LLM Analysis: AI-generated analysis and hypotheses
- Chat: Ask follow-up questions
You can also interact with the backend directly via API:
curl -X POST http://localhost:8000/api/analyze \
-H "Content-Type: application/json" \
-d '{
"image_path": "/path/to/memory.dmp",
"case_name": "case001",
"scenario_description": "Suspected ransomware infection"
}'

Check analysis status:

curl http://localhost:8000/api/analyze/{run_id}/status

Retrieve results:

curl http://localhost:8000/api/analyze/{run_id}/results

Ask a follow-up question:

curl -X POST http://localhost:8000/api/chat \
-H "Content-Type: application/json" \
-d '{
"run_id": "{run_id}",
"message": "What makes PID 1234 suspicious?"
}'

vol3-triage/
├── backend/
│ ├── __init__.py
│ ├── api.py # FastAPI backend
│ ├── config.py # Configuration and constants
│ ├── llm_interface.py # LLM integration (two-stage analysis)
│ ├── reports.py # Report generation
│ ├── runner.py # Volatility3 execution
│ └── triage.py # Triage coordinator
├── static/
│ ├── app.js # Frontend JavaScript
│ └── style.css # Frontend styles
├── templates/
│ └── index.html # Web UI
├── .github/
│ └── workflows/
│ └── docker-publish.yml # GitHub Actions CI/CD
├── dumps/ # Memory dump files (volume mount)
├── runs/ # Analysis output directory
├── .dockerignore # Docker build exclusions
├── .env.example # Environment variable template
├── docker-compose.yml # Docker Compose configuration
├── Dockerfile # Docker image definition
├── docker-build.sh # Manual Docker build script
├── LICENSE # License file
├── QUICKSTART.md # Quick start guide
├── README.md # This file
├── requirements.txt # Python dependencies
├── run.sh # Linux/macOS startup script
├── run.bat # Windows startup script
└── test_setup.py # Setup validation script
The tool runs a comprehensive set of Volatility3 plugins:
- windows.info.Info - System metadata (OS version, kernel, CPU count)
- windows.pslist.PsList - Standard process list
- windows.psscan.PsScan - Memory scan for processes (finds hidden processes)
- windows.cmdline.CmdLine - Command line arguments
- windows.netscan.NetScan - Network connections
- windows.dlllist.DllList - Loaded DLLs per process
- windows.handles.Handles - Open handles (files, registry, mutants)
- windows.ldrmodules.LdrModules - DLL loading anomaly detection
- windows.malware.malfind.Malfind - Code injection detection
- windows.malware.psxview.PsXView - Process visibility analysis
Raw CSV output is saved to runs/<timestamp>_<case>/raw/
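A minimal sketch of this plugin-to-CSV step, assuming the Volatility3 `vol` CLI is on PATH and supports the `-r csv` renderer; the project's actual runner.py may differ:

```python
# Sketch of the VolatilityRunner role: run each plugin and save its raw CSV
# output.  Assumes the "vol" CLI is installed and accepts "-r csv"; plugin
# names are taken from the list above (truncated here for brevity).
import subprocess
from pathlib import Path

PLUGINS = [
    "windows.pslist.PsList",
    "windows.psscan.PsScan",
    "windows.netscan.NetScan",
    "windows.malware.malfind.Malfind",
]

def run_plugins(image_path, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for plugin in PLUGINS:
        result = subprocess.run(
            ["vol", "-f", image_path, "-r", "csv", plugin],
            capture_output=True, text=True)
        csv_path = out / f"{plugin}.csv"
        csv_path.write_text(result.stdout)   # raw CSV for the triage LLM
        written.append(csv_path)
    return written
```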
The Triage LLM (Google Gemini 2.5 Pro by default, configurable) analyzes raw Volatility data directly:
What the Triage LLM Does:
- Comprehensive Analysis: Reviews all processes, connections, DLLs, handles, and files from raw CSV data
- Context-Aware Scoring: Evaluates suspiciousness (0-10 scale) based on deep understanding of:
- Process behaviors and parent-child relationships
- Network connection patterns (legitimate browsing vs C2 beaconing)
- Code injection indicators
- DLL loading anomalies
- Suspicious file locations and handles
- Hidden processes (psscan vs pslist discrepancies)
- Structured Output: Returns JSON with scored findings, reasons, and full process tree
- No Hard-Coded Rules: Uses LLM's training on malware behaviors instead of brittle heuristics
Key Advantages Over Heuristic Scoring:
- Better context understanding (e.g., powershell.exe from System32 vs Temp)
- Adapts to novel malware behaviors
- Provides human-readable explanations for each finding
- Reduces false positives through holistic analysis
Generates markdown reports:
- Initial Summary: Top suspicious processes, connections, and injections
- Hunting Checklist: Prioritized investigation steps
Stage 1: Triage LLM (Raw Data Analysis)
- Input: Raw Volatility CSV data with smart filtering to reduce token usage:
- Handles filtered to File, Key, Mutant, and Process types (removes Thread, Event, Section noise)
- Malfind filtered to remove hexdump and disasm columns
- LdrModules filtered to show only anomalous entries
- Processing: Analyzes processes, connections, handles, DLLs, malfind, ldrmodules, psxview
- Scoring: Assigns 0-10 suspicion scores based on behavior patterns
- Output: Structured JSON with:
- Top suspicious processes with reasons
- Top suspicious network connections
- Code injection indicators
- Suspicious files
- Complete process tree
- Expert narrative analysis
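The handle filtering described above can be sketched as a small CSV pass; the `Type` column name is an assumption about the Volatility handles CSV schema:

```python
# Sketch of the token-reduction filter: keep only File, Key, Mutant, and
# Process handle rows, dropping Thread/Event/Section noise before the data
# reaches the triage LLM.  The "Type" column name is an assumption.
import csv
import io

KEEP_TYPES = {"File", "Key", "Mutant", "Process"}

def filter_handles(raw_csv):
    """Return the CSV with non-interesting handle types removed."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get("Type") in KEEP_TYPES:
            writer.writerow(row)
    return out.getvalue()
```

The same pattern applies to the other filters: drop the hexdump/disasm columns from malfind rows, and keep only anomalous ldrmodules entries.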
Stage 2: Analysis LLM (Synthesis + Chat)
- Input:
- Triage LLM's structured findings
- Available Volatility plugins list
- Scenario description
- Provides:
- High-level situation assessment
- Attack pattern identification (C2, ransomware, cryptominer, etc.)
- Specific next-step recommendations
- Volatility commands using available plugins
- Evidence gap identification
- Interactive Q&A about findings
Follow-up questions are answered by the Analysis LLM using:
- Complete triage context from both stages
- Previous conversation history
- Available Volatility plugins for accurate command suggestions
- Low temperature (0.2) to reduce hallucinations
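A hedged sketch of how that chat context might be assembled. Only the ingredients (triage findings, conversation history, plugin list) and the 0.2 temperature come from this README; the prompt wording and function shape are invented:

```python
# Hypothetical chat-context assembly: pack triage findings, prior turns,
# and the plugin list into one prompt.  The 0.2 temperature is from this
# README; everything else is an assumption.

CHAT_TEMPERATURE = 0.2   # low temperature to reduce hallucinations

def build_chat_prompt(findings_json, history, plugins, question):
    turns = "\n".join(f"{role}: {msg}" for role, msg in history)
    return (
        "You are a memory forensics analyst.\n"
        f"Triage findings:\n{findings_json}\n"
        f"Available Volatility plugins: {', '.join(plugins)}\n"
        f"Conversation so far:\n{turns}\n"
        f"Analyst question: {question}\n"
    )
```

Grounding the prompt in the real plugin list is what lets the LLM suggest commands that actually exist rather than hallucinated plugin names.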
Edit backend/config.py to customize:
- Volatility3 plugins to run
- LLM model and parameters
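A hypothetical shape for backend/config.py, inferred from the knobs listed above; the actual variable names in the project may differ:

```python
# Hypothetical backend/config.py layout.  Plugin names come from the plugin
# list in this README; the variable names and the exact model string are
# assumptions.

VOLATILITY_PLUGINS = [          # plugins run by VolatilityRunner
    "windows.info.Info",
    "windows.pslist.PsList",
    "windows.psscan.PsScan",
    "windows.netscan.NetScan",
    "windows.malware.malfind.Malfind",
]

LLM_PROVIDER = "google"          # "google" or "anthropic"
TRIAGE_MODEL = "gemini-2.5-pro"  # model string is an assumption
CHAT_TEMPERATURE = 0.2           # low temperature for chat answers
```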
- Manual Command Execution: LLM suggests Volatility commands but cannot execute them automatically
- Windows Only: Focused on Windows memory dumps (Linux/macOS not yet supported)
- Local Only: Designed for local analysis, not multi-user deployment
- LLM Dependency: Requires Google or Anthropic API key and internet connection
- Symbol Table Dependency: Requires appropriate Volatility3 symbol tables for target OS
- Large memory dumps (>8GB) may take 15-30 minutes for Volatility execution
- Two-stage LLM analysis adds ~2-3 minutes but provides comprehensive coverage
- LLM may occasionally miss subtle indicators in very large datasets
- Limited to English-language LLM output
If you have a memory dump (benign or malicious), you can test:
# Start the server
./run.sh # or run.bat on Windows
# Open browser to http://localhost:8000/app or use the API

- Benign: Clean Windows 10/11 system memory dump
- Malicious: Public malware samples (e.g., from MalwareBazaar)
Verify your API key is set:
# For Google Gemini
echo $GOOGLE_API_KEY
# For Anthropic Claude
echo $ANTHROPIC_API_KEY

Check that Volatility3 symbol tables are available for your dump's OS version.
For very large dumps, consider increasing system memory or using a smaller test dump.
This is a research project. Contributions welcome for:
- Enhanced LLM prompts and analysis techniques
- Support for Linux/macOS memory dumps
- Performance optimizations
- Better error handling
- Additional Volatility plugin integrations
This project is for educational and research purposes.
- QUICKSTART.md - Quick start guide
- Volatility3 Documentation
- Volatility3 Plugins Reference
- Google Gemini API
- Anthropic Claude API
- Memory Forensics Resources
If you use this tool in research, please cite:
@misc{vol3-llm-triage,
  title={LLM-Guided Volatility3 Triage Tool},
  author={Justin Vermillion},
  year={2025},
  howpublished={\url{https://github.com/vermi/vol3-triage}}
}