Commit 8b3a190

feat: Implement security hardening by removing Docker socket access, enabling non-root execution, adding basic UI authentication, and documenting security architecture.
1 parent: 38f8d24

10 files changed: 333 additions, 352 deletions


README.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -68,9 +68,9 @@ A containerized secure proxy with advanced filtering capabilities, real-time mon
   - Traffic pattern visualization
   - Blocked request reporting
   - Exportable reports
-- **Enterprise Management**:
+- **Proxy Management**:
   - Configuration backup and restore
-  - Role-based access control
+  - Basic authentication for UI access
   - API for automation and integration
   - Health monitoring endpoints
```

SECURITY.md

Lines changed: 63 additions & 0 deletions
````diff
@@ -0,0 +1,63 @@
+# Security Policy
+
+## Security Architecture Decisions
+
+This document explains the security architecture of Secure Proxy Manager and the rationale behind key security decisions.
+
+### Docker Socket Removed
+
+**Previous State**: The backend container had access to `/var/run/docker.sock` to query container statistics.
+
+**Current State**: Docker socket access has been **removed** as of version 0.0.9.
+
+**Rationale**: Mounting the Docker socket into a web-facing container creates a critical vulnerability. If an attacker compromises the backend application, they gain root-level access to the host system. This risk was deemed unacceptable for a security-focused proxy manager.
+
+**Impact**: Container statistics (memory, CPU, uptime) now show "N/A" in the dashboard. Cache statistics are calculated from database logs instead.
+
+**Future**: We may implement a Prometheus metrics endpoint on the proxy container for safer metrics collection.
+
+---
+
+### Non-Root Container Execution
+
+The backend container now runs as a non-root user (`proxyuser`) to limit the blast radius of any potential container compromise.
+
+---
+
+### NET_ADMIN Capability
+
+The proxy container requires `NET_ADMIN` capability for:
+- Transparent proxy mode via iptables rules
+- NAT redirection of HTTP/HTTPS traffic
+
+**Risk**: This capability allows network configuration changes within the container. The proxy container does NOT have Docker socket access and is isolated from the backend.
+
+**Mitigation**: If transparent proxy mode is not needed, you can remove this capability from `docker-compose.yml`:
+
+```yaml
+proxy:
+  # Remove or comment out:
+  # cap_add:
+  #   - NET_ADMIN
+```
+
+---
+
+### Authentication
+
+The application uses HTTP Basic Authentication. While Basic Auth is simple, note:
+
+- Always use HTTPS in production to protect credentials in transit
+- Change the default `admin/admin` credentials immediately
+- Credentials are stored hashed in SQLite
+
+---
+
+## Reporting Vulnerabilities
+
+If you discover a security vulnerability, please report it responsibly by opening a private security advisory on GitHub.
+
+## Version History
+
+- **0.0.9**: Removed Docker socket mount, enabled non-root user, security hardening
+- **0.0.8**: Initial security headers and rate limiting
````
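The "stored hashed" note above can be illustrated with a standard-library sketch. The function names, the `salt$hash` storage format, and the PBKDF2 parameters are assumptions for illustration; this commit does not show the project's actual hashing code.

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 100_000) -> str:
    """Return a salt$hash string (hex) using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"{salt.hex()}${digest.hex()}"

def check_password(password: str, stored: str, *, iterations: int = 100_000) -> bool:
    """Recompute the hash from the stored salt and compare in constant time."""
    salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), iterations
    )
    return hmac.compare_digest(candidate.hex(), digest_hex)

stored = hash_password("admin")          # only this string goes into SQLite
print(check_password("admin", stored))   # True
print(check_password("wrong", stored))   # False
```

Storing only the salted hash means a leaked database does not directly reveal the Basic Auth credentials, which is why the policy can pair it with the "change `admin/admin` immediately" advice.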

backend/Dockerfile

Lines changed: 4 additions & 10 deletions
```diff
@@ -7,14 +7,9 @@ COPY requirements.txt .
 RUN pip install --no-cache-dir werkzeug==2.2.3 && \
     pip install --no-cache-dir -r requirements.txt
 
-# Install Docker CLI for container management
+# Install curl for healthcheck (Docker CLI removed for security)
 RUN apt-get update && \
-    apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release && \
-    mkdir -p /etc/apt/keyrings && \
-    curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg && \
-    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null && \
-    apt-get update && \
-    apt-get install -y docker-ce-cli && \
+    apt-get install -y --no-install-recommends curl && \
     apt-get clean && \
     rm -rf /var/lib/apt/lists/*
 
@@ -32,16 +27,15 @@ RUN mkdir -p /data /logs /config && \
 COPY --chown=proxyuser:proxyuser . .
 
 # Switch to non-root user for running the application
-# But we'll need to run as root for Docker CLI access
-# USER proxyuser
+USER proxyuser
 
 # Set secure environment variables
 ENV PYTHONDONTWRITEBYTECODE=1 \
     PYTHONUNBUFFERED=1
 
 # Healthcheck to ensure service is running properly
 HEALTHCHECK --interval=30s --timeout=5s --start-period=5s --retries=3 \
-  CMD curl -f http://localhost:5000/health || exit 1
+    CMD curl -f http://localhost:5000/health || exit 1
 
 # Run the API server with Gunicorn (production-ready WSGI server)
 # Bind to 0.0.0.0:5000 to match healthcheck and docker-compose expectations
```
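For context, uncommenting `USER proxyuser` only works because the earlier layers already create the user and hand it the writable paths. Those layers sit outside this hunk, so the fragment below is a sketch of the typical pattern; the exact `useradd` flags and the `/app` home directory are assumptions, not taken from the commit.

```dockerfile
# Sketch of the usual non-root setup (flags and paths are assumptions)
RUN groupadd -r proxyuser && useradd -r -g proxyuser proxyuser
RUN mkdir -p /data /logs /config && \
    chown -R proxyuser:proxyuser /data /logs /config
COPY --chown=proxyuser:proxyuser . .
USER proxyuser
```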

backend/app/app.py

Lines changed: 41 additions & 118 deletions
```diff
@@ -350,7 +350,7 @@ def get_status():
         "proxy_host": PROXY_HOST,
         "proxy_port": PROXY_PORT,
         "timestamp": datetime.now().isoformat(),
-        "version": "0.0.8"
+        "version": "0.0.9"
     }
 
     # Add today's request count
@@ -376,56 +376,13 @@ def get_status():
         logger.error(f"Error getting today's request count: {str(e)}")
         stats["requests_count"] = 0
 
-    # Add memory usage, CPU usage, and uptime
-    try:
-        container_name = os.environ.get('PROXY_CONTAINER_NAME', 'secure-proxy-proxy-1')
-        # Validate container name to prevent command injection
-        if not re.match(r'^[a-zA-Z0-9_-]+$', container_name):
-            raise ValueError(f"Invalid container name format: {container_name}")
-
-        # Get container stats using docker stats
-        stats_cmd = subprocess.run(
-            ['docker', 'stats', container_name, '--no-stream', '--format', '{{.MemPerc}}|{{.CPUPerc}}'],
-            capture_output=True, text=True, check=False
-        )
-
-        if stats_cmd.returncode == 0 and stats_cmd.stdout:
-            parts = stats_cmd.stdout.strip().split('|')
-            if len(parts) >= 2:
-                stats["memory_usage"] = parts[0].strip()
-                stats["cpu_usage"] = parts[1].strip()
-
-        # Get container uptime
-        uptime_cmd = subprocess.run(
-            ['docker', 'inspect', '--format', '{{.State.StartedAt}}', container_name],
-            capture_output=True, text=True, check=False
-        )
-
-        if uptime_cmd.returncode == 0 and uptime_cmd.stdout:
-            started_at = uptime_cmd.stdout.strip()
-            start_time = datetime.fromisoformat(started_at.replace('Z', '+00:00'))
-            now = datetime.now(start_time.tzinfo)
-            uptime_seconds = (now - start_time).total_seconds()
-
-            # Format uptime as days, hours, minutes
-            days, remainder = divmod(uptime_seconds, 86400)
-            hours, remainder = divmod(remainder, 3600)
-            minutes, _ = divmod(remainder, 60)
-
-            if days > 0:
-                uptime_str = f"{int(days)}d {int(hours)}h {int(minutes)}m"
-            elif hours > 0:
-                uptime_str = f"{int(hours)}h {int(minutes)}m"
-            else:
-                uptime_str = f"{int(minutes)}m"
-
-            stats["uptime"] = uptime_str
-    except Exception as e:
-        logger.error(f"Error getting system stats: {str(e)}")
-        # Provide default values if unable to get real stats
-        stats["memory_usage"] = "N/A"
-        stats["cpu_usage"] = "N/A"
-        stats["uptime"] = "N/A"
+    # Container stats (memory, CPU, uptime) - Docker CLI access removed for security
+    # These stats are not available without Docker socket access
+    # Future: Could be replaced with Prometheus metrics from proxy container
+    stats["memory_usage"] = "N/A"
+    stats["cpu_usage"] = "N/A"
+    stats["uptime"] = "N/A"
+    logger.debug("Container stats unavailable - Docker socket access removed for security")
 
     return jsonify({"status": "success", "data": stats})
@@ -2475,76 +2432,42 @@ def get_cache_statistics():
     avg_response_time = "N/A"
     cache_usage_percentage = 0
 
+    # NOTE: Docker exec to squidclient removed for security (no Docker socket access)
+    # Cache statistics are now calculated from database logs only
+    # Future: Could use HTTP-based squidclient or Prometheus metrics
+    logger.debug("Cache statistics from squidclient unavailable - using database fallback")
+
+    # Database-based calculations only
     try:
-        # Try to get real cache information using squidclient
-        result = subprocess.run(
-            ['docker', 'exec', container_name, 'squidclient', '-h', 'localhost', 'mgr:info'],
-            capture_output=True, text=True, check=False, timeout=5
-        )
-
-        if result.returncode == 0:
-            info_output = result.stdout
-
-            # Parse the relevant cache information from squidclient output
-            storage_pattern = r'Storage Swap size:\s+(\d+)\s+KB'
-            usage_pattern = r'Storage Swap capacity:\s+(\d+\.\d+)%'
-            hits_pattern = r'Request Hit Ratios:\s+5min: (\d+\.\d+)%'
-            response_time_pattern = r'Average HTTP Service Time:\s+(\d+\.\d+) seconds'
-
-            # Extract storage info
-            storage_match = re.search(storage_pattern, info_output)
-            if storage_match:
-                storage_kb = int(storage_match.group(1))
-                cache_usage = int(storage_kb / 1024)  # Convert KB to MB
-
-            # Extract usage percentage
-            usage_match = re.search(usage_pattern, info_output)
-            if usage_match:
-                cache_usage_percentage = float(usage_match.group(1))
+        # Calculate hit ratio from logs
+        cursor.execute("""
+            SELECT
+                COUNT(CASE WHEN status LIKE '%HIT%' THEN 1 END) as hits,
+                COUNT(*) as total,
+                AVG(CASE WHEN bytes > 0 THEN bytes ELSE NULL END) as avg_bytes
+            FROM proxy_logs
+            WHERE timestamp > datetime('now', '-1 day')
+        """)
+        result = cursor.fetchone()
+        if result and result['total'] > 0:
+            hit_ratio = round((result['hits'] / result['total']) * 100, 1)
 
-            # Extract hit ratio
-            hits_match = re.search(hits_pattern, info_output)
-            if hits_match:
-                hit_ratio = float(hits_match.group(1))
+        # Calculate response time - estimate based on bytes transferred
+        if result and result['avg_bytes'] is not None and result['avg_bytes'] > 0:
+            # Estimate: 1MB = 0.1 seconds (very rough approximation)
+            avg_bytes_mb = result['avg_bytes'] / (1024 * 1024)
+            avg_response_time = round(max(0.05, min(5.0, avg_bytes_mb * 0.1)), 3)
 
-            # Extract average response time
-            response_time_match = re.search(response_time_pattern, info_output)
-            if response_time_match:
-                avg_response_time = float(response_time_match.group(1))
-    except Exception as e:
-        logger.warning(f"Error getting direct cache statistics: {e}")
-
-        # Fall back to database calculations
-        try:
-            # Calculate hit ratio from logs
-            cursor.execute("""
-                SELECT
-                    COUNT(CASE WHEN status LIKE '%HIT%' THEN 1 END) as hits,
-                    COUNT(*) as total,
-                    AVG(CASE WHEN bytes > 0 THEN bytes ELSE NULL END) as avg_bytes
-                FROM proxy_logs
-                WHERE timestamp > datetime('now', '-1 day')
-            """)
-            result = cursor.fetchone()
-            if result and result['total'] > 0:
-                hit_ratio = round((result['hits'] / result['total']) * 100, 1)
-
-            # Calculate response time - estimate based on bytes transferred
-            if result and result['avg_bytes'] is not None and result['avg_bytes'] > 0:
-                # Estimate: 1MB = 0.1 seconds (very rough approximation)
-                avg_bytes_mb = result['avg_bytes'] / (1024 * 1024)
-                avg_response_time = round(max(0.05, min(5.0, avg_bytes_mb * 0.1)), 3)
-
-            # Estimate cache usage based on logs volume
-            cursor.execute("SELECT COUNT(*) as count FROM proxy_logs")
-            log_count = cursor.fetchone()['count']
-            if log_count > 0:
-                # Very rough estimation: assume each log entry corresponds to ~10KB in cache
-                estimated_cache_kb = log_count * 10
-                cache_usage = min(cache_size, int(estimated_cache_kb / 1024))  # Convert to MB, cap at cache_size
-                cache_usage_percentage = min(100, round((cache_usage / cache_size) * 100, 1))
-        except Exception as log_error:
-            logger.warning(f"Error getting cache metrics from logs: {log_error}")
+        # Estimate cache usage based on logs volume
+        cursor.execute("SELECT COUNT(*) as count FROM proxy_logs")
+        log_count = cursor.fetchone()['count']
+        if log_count > 0:
+            # Very rough estimation: assume each log entry corresponds to ~10KB in cache
+            estimated_cache_kb = log_count * 10
+            cache_usage = min(cache_size, int(estimated_cache_kb / 1024))  # Convert to MB, cap at cache_size
+            cache_usage_percentage = min(100, round((cache_usage / cache_size) * 100, 1))
+    except Exception as log_error:
+        logger.warning(f"Error getting cache metrics from logs: {log_error}")
 
     # Format the response
     return jsonify({
```
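The database fallback that replaces the `docker exec squidclient` path can be exercised in isolation against an in-memory SQLite table. The `proxy_logs` columns below are inferred from the query in the diff; the sample rows are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
cur = conn.cursor()

# Minimal proxy_logs shape inferred from the query in the commit
cur.execute("CREATE TABLE proxy_logs (timestamp TEXT, status TEXT, bytes INTEGER)")
cur.executemany(
    "INSERT INTO proxy_logs VALUES (datetime('now'), ?, ?)",
    [("TCP_HIT/200", 2048), ("TCP_MEM_HIT/200", 1024),
     ("TCP_MISS/200", 4096), ("TCP_MISS/403", 0)],
)

# Same aggregate the new code runs over the last day of logs
cur.execute("""
    SELECT
        COUNT(CASE WHEN status LIKE '%HIT%' THEN 1 END) as hits,
        COUNT(*) as total,
        AVG(CASE WHEN bytes > 0 THEN bytes ELSE NULL END) as avg_bytes
    FROM proxy_logs
    WHERE timestamp > datetime('now', '-1 day')
""")
row = cur.fetchone()
hit_ratio = round((row["hits"] / row["total"]) * 100, 1)
print(hit_ratio)  # 50.0 - two of the four sample entries are cache hits
```

Note the trade-off the comments in the diff acknowledge: this measures the hit ratio of logged requests, not Squid's authoritative `mgr:info` counters, so it is an approximation by design.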

docker-compose.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -33,7 +33,7 @@ services:
       - ./config:/config
       - ./data:/data
       - ./logs:/logs
-      - /var/run/docker.sock:/var/run/docker.sock # Mount Docker socket from host
+      # Docker socket mount removed for security - see SECURITY.md
     environment:
      - FLASK_ENV=production
      - PROXY_HOST=proxy
```
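A tiny regression check (hypothetical, not part of this commit) could keep the socket mount from quietly reappearing in `docker-compose.yml`, for example as a CI step:

```python
def has_docker_socket_mount(compose_text: str) -> bool:
    """True if any non-comment line mounts the host Docker socket."""
    for line in compose_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            continue  # the hardened file keeps the mount only as a comment
        if "/var/run/docker.sock" in stripped:
            return True
    return False

hardened = """
    volumes:
      - ./logs:/logs
      # Docker socket mount removed for security - see SECURITY.md
"""
print(has_docker_socket_mount(hardened))  # False
```

A text scan like this is deliberately crude; a stricter check could parse the YAML and inspect each service's `volumes` list.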

tests/e2e_test.py

Lines changed: 8 additions & 20 deletions
```diff
@@ -6,8 +6,7 @@
 with a focus on validating direct IP blocking and other security settings.
 
 Requirements:
-- rich (pip install rich)
-- requests (pip install requests)
+Install dependencies with: pip install -r requirements-test.txt
 """
 
 import os
@@ -21,24 +20,13 @@
 import urllib.parse
 from datetime import datetime
 
-try:
-    import requests
-    from rich.console import Console
-    from rich.table import Table
-    from rich.panel import Panel
-    from rich.progress import Progress, SpinnerColumn, TextColumn
-    from rich.syntax import Syntax
-    from rich.text import Text
-except ImportError:
-    print("Required packages not found. Installing them now...")
-    subprocess.run(["pip", "install", "rich", "requests"], check=True)
-    import requests
-    from rich.console import Console
-    from rich.table import Table
-    from rich.panel import Panel
-    from rich.progress import Progress, SpinnerColumn, TextColumn
-    from rich.syntax import Syntax
-    from rich.text import Text
+import requests
+from rich.console import Console
+from rich.table import Table
+from rich.panel import Panel
+from rich.progress import Progress, SpinnerColumn, TextColumn
+from rich.syntax import Syntax
+from rich.text import Text
 
 # Initialize rich console
 console = Console()
```
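Dropping the auto-install fallback means a missing dependency now fails with a raw `ImportError`. If a friendlier message is wanted without reintroducing the install-at-import side effect, a small guard could run before the imports; `require` is a hypothetical helper, not code from this commit:

```python
import importlib.util
import sys

def require(*modules: str) -> None:
    """Exit with an install hint if any test dependency is missing."""
    missing = [m for m in modules if importlib.util.find_spec(m) is None]
    if missing:
        sys.exit(
            f"Missing test dependencies: {', '.join(missing)}. "
            "Install with: pip install -r requirements-test.txt"
        )

# Stdlib names used here only so the demo check passes anywhere;
# the test script itself would call require("requests", "rich").
require("json", "sqlite3")
```

This keeps tests fail-fast and reproducible: the environment is prepared explicitly via `requirements-test.txt` rather than mutated as a side effect of importing a test module.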

tests/requirements-test.txt

Lines changed: 5 additions & 0 deletions
```diff
@@ -0,0 +1,5 @@
+# Test dependencies for secure-proxy-manager
+# Install with: pip install -r requirements-test.txt
+
+requests>=2.31.0
+rich>=13.0.0
```

ui/app.py

Lines changed: 0 additions & 3 deletions
```diff
@@ -1,7 +1,4 @@
-# Import workaround for Werkzeug compatibility issue
 from markupsafe import escape
-from flask import redirect, url_for
-
 from flask import Flask, render_template, request, redirect, url_for, flash, jsonify
 from flask_basicauth import BasicAuth
 import os
```
