This document provides comprehensive manual test procedures for the AI Pattern Detector system. It covers all features, components, and integration points.
## Prerequisites

- Python 3.8 or higher installed
- Virtual environment created and activated
- All dependencies installed:
  ```bash
  pip install -r ai_tools/requirements.txt
  ```
- Optional: Ollama installed and running (for AI features)
## Test Data

The scenarios use simulated traffic covering:

- Normal traffic patterns
- Attack traffic patterns (GTG-1002 style)
- Various endpoint patterns
- Different IP addresses
- Multiple user agents
## TS-1: Installation and Setup

Objective: Verify system installs and configures correctly
Steps:
- Clone the repository
- Create a virtual environment:
  ```bash
  python3 -m venv venv
  ```
- Activate the virtual environment
- Install dependencies:
  ```bash
  pip install -r ai_tools/requirements.txt
  ```
- Verify installation:
  ```bash
  python -c "import ai_tools; print('OK')"
  ```
Expected Results:
- All dependencies install without errors
- No import errors
- System ready for use
Test Data: None required
## TS-2: Dashboard Startup

Objective: Verify dashboard starts and displays correctly
Steps:
- Start the dashboard:
  ```bash
  streamlit run dashboard/app.py
  ```
- Wait for the browser to open
- Verify dashboard loads
- Check sidebar is visible
- Verify main content area displays
Expected Results:
- Dashboard opens at http://localhost:8501
- No errors in the console
- All UI elements visible
- Metrics show zero/initial values
Test Data: None required
## TS-3: Normal Traffic Simulation

Objective: Verify normal traffic generation and detection
Steps:
- Start dashboard
- Click "Start Simulation" button
- Observe traffic generation
- Monitor metrics panel
- Check threat timeline chart
- Review recent detections table
Expected Results:
- Simulation starts successfully
- Requests generate continuously
- Metrics update in real-time
- Threat scores remain low (< 30)
- Most detections marked as "normal"
- Timeline shows green/low-threat indicators
Test Data: Normal traffic patterns
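As a reference for what "normal traffic patterns" can look like, here is a minimal sketch of a generator; the endpoint names, IP range, and user-agent strings are illustrative assumptions, not the simulator's actual values.

```python
import random
import time

# Illustrative values only -- not the simulator's real endpoint list.
ENDPOINTS = ["/home", "/products", "/api/search", "/cart", "/login"]
USER_AGENTS = ["Mozilla/5.0 (Windows NT 10.0)", "Mozilla/5.0 (Macintosh)"]

def normal_request():
    """Build one request record at a human-plausible pace."""
    return {
        "timestamp": time.time(),
        "ip": f"192.168.1.{random.randint(2, 254)}",
        "endpoint": random.choice(ENDPOINTS),
        "user_agent": random.choice(USER_AGENTS),
    }

requests = [normal_request() for _ in range(5)]
```

Each record carries the fields (IP, endpoint, user agent) that the later detection scenarios inspect.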
## TS-4: Attack Traffic Simulation

Objective: Verify attack detection capabilities
Steps:
- Start dashboard
- Click "Start Simulation"
- Wait for normal traffic baseline (10-20 requests)
- Click "Trigger Attack" button
- Observe detection changes
- Monitor alert feed
- Check threat scores increase
- Verify pattern types detected
Expected Results:
- Attack simulation triggers successfully
- Threat scores increase significantly (> 50)
- Alerts appear in alert feed
- Pattern types include: superhuman_speed, systematic_enumeration, or behavioral_anomaly
- Threat level changes to "suspicious" or "malicious"
- Timeline shows orange/red indicators
Test Data: Attack traffic patterns
## TS-5: Superhuman Speed Detection

Objective: Verify superhuman speed pattern detection
Steps:
- Start dashboard
- Set the "Superhuman Speed Threshold" to 5 req/s in the sidebar
- Start simulation
- Trigger attack
- Monitor for speed detections
- Check alert messages mention speed
- Verify threat scores reflect speed detection
Expected Results:
- Speed detections appear in alerts
- Pattern type shows "superhuman_speed"
- Threat score increases (40+ points)
- Alert message describes speed anomaly
Test Data: Rapid request patterns (> 5 req/s)
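The speed check this scenario exercises can be sketched as a sliding-window rate calculation; the function and window size are illustrative assumptions, not the detector's actual implementation.

```python
def request_rate(timestamps, window=1.0):
    """Requests per second over the most recent `window` seconds."""
    if not timestamps:
        return 0.0
    latest = timestamps[-1]
    recent = [t for t in timestamps if latest - t <= window]
    return len(recent) / window

SPEED_THRESHOLD = 5.0  # req/s, as configured in the sidebar for this test

# 10 requests in ~0.5 s is far beyond human clicking speed
burst = [i * 0.05 for i in range(10)]
assert request_rate(burst) > SPEED_THRESHOLD  # would flag superhuman_speed
```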
## TS-6: Systematic Enumeration Detection

Objective: Verify enumeration pattern detection
Steps:
- Start dashboard
- Start simulation
- Trigger attack
- Monitor endpoint patterns
- Look for sequential endpoint access
- Check for enumeration alerts
Expected Results:
- Enumeration detections appear
- Pattern type shows "systematic_enumeration"
- Sequential endpoint patterns visible
- Threat score reflects enumeration (35+ points)
Test Data: Sequential endpoint patterns (/api/users/1, /api/users/2, etc.)
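A minimal sketch of how sequential-ID enumeration might be flagged; the regex-based approach and the `min_run` parameter are illustrative assumptions, not the detector's code.

```python
import re

def looks_like_enumeration(endpoints, min_run=3):
    """Flag a run of endpoints that differ only by a sequential numeric ID."""
    ids_by_prefix = {}
    for ep in endpoints:
        m = re.match(r"^(.*?)(\d+)$", ep)
        if m:
            ids_by_prefix.setdefault(m.group(1), []).append(int(m.group(2)))
    for ids in ids_by_prefix.values():
        run = 1
        for a, b in zip(ids, ids[1:]):
            run = run + 1 if b == a + 1 else 1  # count consecutive IDs
            if run >= min_run:
                return True
    return False

assert looks_like_enumeration(["/api/users/1", "/api/users/2", "/api/users/3"])
assert not looks_like_enumeration(["/home", "/cart", "/api/users/7"])
```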
## TS-7: Behavioral Anomaly Detection

Objective: Verify anomaly detection capabilities
Steps:
- Start dashboard
- Start simulation
- Allow normal traffic baseline
- Trigger attack
- Monitor for anomaly detections
- Check statistical pattern analysis
Expected Results:
- Anomaly detections appear
- Pattern type shows "behavioral_anomaly"
- Unusual patterns identified
- Threat score reflects anomalies (25+ points)
Test Data: Unusual request patterns (deep paths, unusual parameters)
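One simple way to flag the "deep paths" mentioned in the test data is a z-score on path depth relative to the normal-traffic baseline; this is an illustrative sketch, not the detector's actual statistics.

```python
import statistics

def depth(endpoint):
    """Number of path segments in an endpoint."""
    return endpoint.strip("/").count("/") + 1

def is_depth_anomaly(baseline, candidate, z_cutoff=3.0):
    """Flag endpoints whose path depth is far outside the baseline."""
    depths = [depth(e) for e in baseline]
    mean = statistics.mean(depths)
    stdev = statistics.pstdev(depths) or 1.0  # avoid division by zero
    return abs(depth(candidate) - mean) / stdev > z_cutoff

baseline = ["/home", "/cart", "/api/search", "/products", "/login"]
assert is_depth_anomaly(baseline, "/a/b/c/d/e/f/g/h/admin/config")
```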
## TS-8: AI Features - With Ollama

Objective: Verify AI-enhanced features when Ollama is available
Prerequisites: Ollama installed and running
Steps:
- Verify Ollama is running:
  ```bash
  ollama list
  ```
- Start the dashboard
- Check "Enable AI Analysis" checkbox in sidebar
- Verify Ollama status shows "Connected"
- Start simulation
- Trigger attack
- Click "AI Insights" button on an alert
- Verify AI explanation appears
- Check recommendations panel
- Test Security Assistant Q&A
Expected Results:
- Ollama status shows "Connected"
- AI Insights panel displays
- Natural language explanations provided
- Intent classification shown
- Recommendations appear
- Security Assistant answers questions
Test Data: Attack detections with AI analysis
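The "Connected"/"Unavailable" status can also be checked outside the dashboard with a quick probe of Ollama's local API (Ollama listens on port 11434 by default, and `GET /api/tags` lists installed models). The function below is a hedged sketch, not the dashboard's code.

```python
import json
import urllib.error
import urllib.request

def ollama_status(base_url="http://localhost:11434"):
    """Return "Connected" if the Ollama API answers, else "Unavailable"."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            json.load(resp)  # /api/tags returns the installed models as JSON
        return "Connected"
    except (urllib.error.URLError, OSError, ValueError):
        return "Unavailable"

print(ollama_status())
```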
## TS-9: AI Features - Without Ollama

Objective: Verify system works without Ollama
Prerequisites: Ollama NOT running or disabled
Steps:
- Ensure Ollama is not running
- Start dashboard
- Check "Enable AI Analysis" checkbox
- Verify Ollama status shows "Unavailable"
- Start simulation
- Trigger attack
- Verify detections still work
- Check alerts display (without AI explanations)
- Verify no errors occur
Expected Results:
- Ollama status shows "Unavailable"
- System continues to function
- Rule-based detection works
- No errors or crashes
- Basic threat explanations provided
- Recommendations panel may be empty or show basic recommendations
Test Data: Normal and attack traffic
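The graceful degradation this scenario verifies follows a common pattern: try the AI path, and fall back to rule-based output on any failure. A minimal sketch (the function names are hypothetical):

```python
def rule_based_explanation(detection):
    """Always-available fallback explanation."""
    return f"{detection['pattern_type']} detected (score {detection['threat_score']})"

def explain(detection, ai_explain=None):
    """Prefer the AI explanation, but never fail if it is unavailable."""
    if ai_explain is not None:
        try:
            return ai_explain(detection)
        except Exception:
            pass  # degrade silently to the rule-based path
    return rule_based_explanation(detection)

d = {"pattern_type": "superhuman_speed", "threat_score": 72}

def broken_ai(_):
    raise ConnectionError("Ollama not running")

# With a failing AI backend, the rule-based text is still produced.
assert explain(d, ai_explain=broken_ai) == explain(d)
```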
## TS-10: Configuration Changes

Objective: Verify configuration adjustments work
Steps:
- Start dashboard
- Adjust "Superhuman Speed Threshold" slider
- Adjust "Attack Intensity" slider
- Start simulation
- Observe detection sensitivity changes
- Reset to defaults
- Verify behavior returns to normal
Expected Results:
- Threshold changes affect detection sensitivity
- Lower threshold = more detections
- Higher threshold = fewer detections
- Attack intensity affects traffic mix
- Reset restores defaults
Test Data: Various threshold values
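The expected relationship, lower threshold means more detections, can be sanity-checked with a toy calculation over hypothetical rate samples:

```python
def count_speed_detections(rates, threshold):
    """Number of observations exceeding the speed threshold."""
    return sum(1 for r in rates if r > threshold)

observed_rates = [1.2, 3.8, 6.0, 9.5, 14.0]  # hypothetical req/s samples

low = count_speed_detections(observed_rates, threshold=3)    # 4 detections
high = count_speed_detections(observed_rates, threshold=10)  # 1 detection
assert low > high  # lower threshold = more detections
```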
## TS-11: Export Functionality

Objective: Verify detection export works
Steps:
- Start dashboard
- Start simulation
- Generate detections (normal + attack)
- Click "Export Detections" button
- Download CSV file
- Open CSV in spreadsheet application
- Verify data structure
- Check all fields present
Expected Results:
- CSV file downloads successfully
- File contains detection data
- Columns include: timestamp, threat_score, threat_level, pattern_type, endpoint, IP
- Data matches dashboard display
Test Data: Multiple detections
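The exported CSV structure can be sketched with the standard `csv` module; the field order mirrors the columns this test checks for (the lower-cased `ip` header and sample values are assumptions):

```python
import csv
import io

FIELDS = ["timestamp", "threat_score", "threat_level", "pattern_type",
          "endpoint", "ip"]

def export_detections(detections):
    """Serialize detection records to CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(detections)
    return buf.getvalue()

sample = [{
    "timestamp": "2025-01-01T12:00:00",
    "threat_score": 85,
    "threat_level": "malicious",
    "pattern_type": "systematic_enumeration",
    "endpoint": "/api/users/42",
    "ip": "10.0.0.9",
}]
csv_text = export_detections(sample)
assert csv_text.splitlines()[0] == ",".join(FIELDS)
```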
## TS-12: Reset Functionality

Objective: Verify reset clears history
Steps:
- Start dashboard
- Start simulation
- Generate detections
- Verify metrics show detections
- Click "Reset Detector" button
- Verify metrics reset to zero
- Check detection history cleared
Expected Results:
- Reset button clears all detections
- Metrics return to zero
- Charts reset
- Alert feed clears
- No errors occur
Test Data: Existing detections
## TS-13: Dashboard Performance

Objective: Verify dashboard handles high load
Steps:
- Start dashboard
- Start simulation
- Let run for extended period (5+ minutes)
- Monitor memory usage
- Check response times
- Verify no slowdowns
- Test with 1000+ detections
Expected Results:
- Dashboard remains responsive
- Memory usage stable
- No performance degradation
- Charts update smoothly
- No crashes or errors
Test Data: High-volume traffic
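One common way a dashboard keeps memory stable across 1000+ detections is a bounded history buffer; this sketch uses `collections.deque`, and the `maxlen` value is an assumption, not the tool's setting:

```python
from collections import deque

MAX_HISTORY = 1000          # assumed cap, not the tool's actual value
history = deque(maxlen=MAX_HISTORY)

for i in range(5000):       # simulate a long-running session
    history.append({"id": i, "threat_score": i % 100})

# Old entries are discarded automatically; memory stays flat.
assert len(history) == MAX_HISTORY
assert history[0]["id"] == 4000  # oldest retained entry
```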
## TS-14: Error Handling

Objective: Verify error handling and recovery
Steps:
- Start dashboard
- Start simulation
- Stop Ollama (if running) during operation
- Verify graceful degradation
- Restart Ollama
- Verify reconnection works
- Test invalid configurations
- Verify error messages display
Expected Results:
- Errors handled gracefully
- No crashes
- Error messages informative
- System recovers from errors
- Degradation works smoothly
Test Data: Error conditions
## TS-15: Cross-Browser Compatibility

Objective: Verify dashboard works in different browsers
Steps:
- Test in Chrome
- Test in Firefox
- Test in Safari
- Test in Edge
- Verify all features work
- Check visualizations render
- Test interactions
Expected Results:
- Dashboard works in all browsers
- Visualizations render correctly
- No browser-specific errors
- Consistent behavior
Test Data: Multiple browsers
## Test Scenario Summary

- TS-1: Installation and Setup
- TS-2: Dashboard Startup
- TS-3: Normal Traffic Simulation
- TS-4: Attack Traffic Simulation
- TS-5: Superhuman Speed Detection
- TS-6: Systematic Enumeration Detection
- TS-7: Behavioral Anomaly Detection
- TS-8: AI Features - With Ollama
- TS-9: AI Features - Without Ollama
- TS-10: Configuration Changes
- TS-11: Export Functionality
- TS-12: Reset Functionality
- TS-13: Dashboard Performance
- TS-14: Error Handling
- TS-15: Cross-Browser Compatibility
## Recording Results

For each test scenario, document:
- Test ID: TS-X
- Date: YYYY-MM-DD
- Tester: Name
- Environment: OS, Python version, Browser
- Result: PASS / FAIL / BLOCKED
- Notes: Any observations or issues
- Screenshots: If applicable
## Known Limitations

- AI features require Ollama to be running
- High request rates may impact performance
- Some browsers may have rendering differences
- Ollama connection status may take a moment to update
## Regression Testing

After code changes, re-run:
- TS-2: Dashboard Startup
- TS-3: Normal Traffic Simulation
- TS-4: Attack Traffic Simulation
- TS-8/TS-9: AI Features (both scenarios)
## Performance Benchmarks

- Dashboard startup: < 5 seconds
- Request processing: < 100ms per request
- Chart rendering: < 500ms
- AI analysis (if enabled): < 2 seconds per detection
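A tiny harness like the following can check a per-request budget against any stand-in processing function (the function here is a placeholder, not the real detector):

```python
import time

def process_request(request):
    """Placeholder for the detector's per-request analysis."""
    return {"endpoint": request["endpoint"], "threat_score": 0}

N = 1000
start = time.perf_counter()
for i in range(N):
    process_request({"endpoint": f"/api/items/{i}"})
elapsed_ms_per_request = (time.perf_counter() - start) * 1000 / N
assert elapsed_ms_per_request < 100  # "< 100ms per request" budget above
```

Swap in the real analysis call to measure the actual system.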
Last Updated: 2025-01-XX
Version: 1.0