Database Operations
This guide covers PostgreSQL and Redis operations, backup/restore procedures, migrations, and database maintenance for TMI.
TMI uses two database systems:
- PostgreSQL: Primary data storage for threat models, diagrams, users, and persistent data
- Redis: Session storage, caching, and real-time WebSocket coordination
This guide provides practical procedures for database operations teams.
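Before digging into either system, a quick smoke test of both backends can save time. A minimal sketch, assuming the placeholder hostnames and credentials used throughout this guide:

```shell
#!/bin/bash
# Quick health check for both TMI backends.
# Hostnames, passwords, and database name are placeholders - adjust for your environment.
set -u

PGPASSWORD="secure_password" psql -h postgres-host -U tmi_user -d tmi \
    -c "SELECT 1" >/dev/null 2>&1 \
    && echo "PostgreSQL: OK" || echo "PostgreSQL: FAILED"

redis-cli -h redis-host -a password ping >/dev/null 2>&1 \
    && echo "Redis: OK" || echo "Redis: FAILED"
```

This doubles as a cheap liveness probe when run from cron or a monitoring agent.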
# Connection test
psql -h postgres-host -U tmi_user -d tmi -c "SELECT 1"
# Database backup
pg_dump -h postgres-host -U tmi_user -d tmi -Fc -f tmi_backup_$(date +%Y%m%d).dump
# Database restore
pg_restore -h postgres-host -U tmi_user -d tmi tmi_backup_20251112.dump
# Check database status
psql -h postgres-host -U tmi_user -d tmi -c "\dt"
# View table sizes
psql -h postgres-host -U tmi_user -d tmi -c "
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC"

Ubuntu/Debian:
sudo apt update
sudo apt install postgresql postgresql-contrib
sudo systemctl start postgresql
sudo systemctl enable postgresql

CentOS/RHEL:
sudo yum install postgresql-server postgresql-contrib
sudo postgresql-setup initdb
sudo systemctl start postgresql
sudo systemctl enable postgresql

Docker:
docker run -d \
--name tmi-postgres \
-e POSTGRES_USER=tmi_user \
-e POSTGRES_PASSWORD=secure_password \
-e POSTGRES_DB=tmi \
-v postgres_data:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:15

-- Connect as superuser
sudo -u postgres psql
-- Create user and database
CREATE USER tmi_user WITH PASSWORD 'secure_password';
CREATE DATABASE tmi OWNER tmi_user;
GRANT ALL PRIVILEGES ON DATABASE tmi TO tmi_user;
-- Connect to tmi database
\c tmi
-- Grant schema privileges
GRANT USAGE ON SCHEMA public TO tmi_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO tmi_user;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO tmi_user;
-- Exit PostgreSQL
\q

TMI uses these PostgreSQL connection parameters:
# Environment variables
POSTGRES_HOST=postgres-host
POSTGRES_PORT=5432
POSTGRES_USER=tmi_user
POSTGRES_PASSWORD=secure_password
POSTGRES_DATABASE=tmi
POSTGRES_SSL_MODE=require  # Use 'require' for production

TMI uses golang-migrate for schema management. Migrations are in auth/migrations/.
Migration Versions:
- 000001_initial_schema.up.sql - Base tables
- 000002_add_indexes.up.sql - Performance indexes
- 000003_add_constraints.up.sql - Foreign keys
- 000004_add_refresh_tokens.up.sql - OAuth refresh tokens
- 000005_add_user_providers.up.sql - Multi-provider auth
- 000006_update_schema.up.sql - Schema refinements
- 000007_add_missing_indexes.up.sql - Additional indexes
- 000008_add_additional_constraints.up.sql - CHECK constraints
Running Migrations:
# From local machine (recommended for initial setup)
cd tmi
export POSTGRES_HOST=your-host
export POSTGRES_PORT=5432
export POSTGRES_USER=tmi_user
export POSTGRES_PASSWORD=your-password
export POSTGRES_DATABASE=tmi
export POSTGRES_SSL_MODE=require
# Build and run migrations
make build-migrate
./bin/migrate up
# Check migration status
./bin/migrate version

Automated Migrations (via TMI server):
The TMI server can run migrations automatically on startup; this is the recommended approach for production.
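golang-migrate records its state in a `schema_migrations` table (its default), so the applied version can also be confirmed directly in SQL:

```shell
# Show the current migration version and whether the last run left the
# database dirty (golang-migrate sets dirty=true when a migration fails midway)
psql -h postgres-host -U tmi_user -d tmi \
    -c "SELECT version, dirty FROM schema_migrations;"
```

A `dirty` value of `true` means a migration must be manually resolved and forced before new migrations will run.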
Validate database schema matches expected structure:
# Using check-db tool
go run cmd/check-db/main.go
# Or from built binary
make build-check-db
./bin/check-db

Output includes:
- Connection status
- Table existence verification
- Column validation (name, type, nullability)
- Index verification
- Constraint validation
- Row counts per table
-- List all tables
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name;
-- Show table structure
\d+ table_name
-- List all indexes
SELECT
schemaname,
tablename,
indexname,
indexdef
FROM pg_indexes
WHERE schemaname = 'public'
ORDER BY tablename, indexname;
-- Show foreign key constraints
SELECT
tc.table_name,
kcu.column_name,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY';
-- Show CHECK constraints
SELECT
tc.table_name,
tc.constraint_name,
cc.check_clause
FROM information_schema.table_constraints tc
JOIN information_schema.check_constraints cc
ON tc.constraint_name = cc.constraint_name
WHERE tc.constraint_type = 'CHECK'
ORDER BY tc.table_name;

1. Logical Backups (pg_dump):
# Full database backup (custom format - recommended)
pg_dump -h postgres-host -U tmi_user -d tmi -Fc \
-f tmi_backup_$(date +%Y%m%d_%H%M%S).dump
# SQL format backup
pg_dump -h postgres-host -U tmi_user -d tmi \
-f tmi_backup_$(date +%Y%m%d_%H%M%S).sql
# Compressed SQL backup
pg_dump -h postgres-host -U tmi_user -d tmi | \
gzip > tmi_backup_$(date +%Y%m%d_%H%M%S).sql.gz
# Schema-only backup
pg_dump -h postgres-host -U tmi_user -d tmi --schema-only \
-f tmi_schema.sql
# Data-only backup
pg_dump -h postgres-host -U tmi_user -d tmi --data-only \
-f tmi_data.sql

2. Physical Backups (pg_basebackup):
# Full cluster backup
pg_basebackup -h postgres-host -U postgres \
-D /backup/location -Ft -z -P
# Incremental backups require WAL archiving

3. Continuous Archiving (WAL):
Enable in postgresql.conf:
archive_mode = on
archive_command = 'cp %p /archive/location/%f'
wal_level = replica
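After enabling archiving, it is worth confirming that WAL segments are actually being shipped; `pg_stat_archiver` is the standard view for this:

```shell
# Verify WAL archiving is working: failed_count should stay at 0
# and last_archived_wal should advance over time
psql -h postgres-host -U postgres -d tmi -c "
SELECT archived_count,
       last_archived_wal,
       last_archived_time,
       failed_count,
       last_failed_wal
FROM pg_stat_archiver;"
```

A rising `failed_count` usually means the `archive_command` destination is unwritable or full.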
Create /usr/local/bin/backup-tmi.sh:
#!/bin/bash
# TMI Database Backup Script
BACKUP_DIR="/var/backups/postgresql/tmi"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
DB_NAME="tmi"
DB_USER="tmi_user"
DB_HOST="postgres-host"
RETENTION_DAYS=7
# Create backup directory
mkdir -p $BACKUP_DIR
# Perform backup
pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME -Fc \
-f "$BACKUP_DIR/tmi_$TIMESTAMP.dump"
# Check backup success
if [ $? -eq 0 ]; then
echo "$(date): Backup completed successfully: tmi_$TIMESTAMP.dump" \
>> /var/log/tmi/backup.log
else
echo "$(date): Backup FAILED" >> /var/log/tmi/backup.log
exit 1
fi
# Remove old backups
find $BACKUP_DIR -name "tmi_*.dump" -mtime +$RETENTION_DAYS -delete
# Optional: Upload to cloud storage
# aws s3 cp "$BACKUP_DIR/tmi_$TIMESTAMP.dump" s3://my-backups/tmi/

Schedule with cron:
# Edit crontab
crontab -e
# Add daily backup at 2 AM
0 2 * * * /usr/local/bin/backup-tmi.sh
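The systemd timer route looks like the following sketch; the unit names are illustrative, and the script path assumes the backup script from this section:

```ini
# /etc/systemd/system/tmi-backup.service (illustrative unit name)
[Unit]
Description=TMI database backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-tmi.sh

# /etc/systemd/system/tmi-backup.timer
[Unit]
Description=Daily TMI database backup

[Timer]
# Run daily at 2 AM; Persistent=true catches up on runs missed while powered off
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Activate with `sudo systemctl daemon-reload && sudo systemctl enable --now tmi-backup.timer`.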
# Or use systemd timer (see Maintenance Tasks)

From Custom Format Backup:
# Create new database (if needed)
createdb -h postgres-host -U postgres tmi_restore
# Restore from backup
pg_restore -h postgres-host -U tmi_user -d tmi_restore \
tmi_backup_20251112.dump
# Or restore to existing database (with clean)
pg_restore -h postgres-host -U tmi_user -d tmi --clean \
tmi_backup_20251112.dump

From SQL Backup:
# Restore from SQL file
psql -h postgres-host -U tmi_user -d tmi < tmi_backup_20251112.sql
# Restore from compressed backup
gunzip -c tmi_backup_20251112.sql.gz | \
psql -h postgres-host -U tmi_user -d tmi

Point-in-Time Recovery (requires WAL archiving):
# 1. Restore base backup
pg_basebackup -h postgres-host -U postgres -D /restore/location
# 2. Configure recovery (PostgreSQL 12+ replaced recovery.conf with
#    recovery settings in postgresql.conf plus a recovery.signal file)
cat >> /restore/location/postgresql.conf <<EOF
restore_command = 'cp /archive/location/%f %p'
recovery_target_time = '2025-11-12 10:30:00'
EOF
touch /restore/location/recovery.signal
# 3. Start PostgreSQL from the restore location

Configure connection pool in TMI:
database:
  postgres:
    max_open_conns: 25      # Maximum open connections
    max_idle_conns: 5       # Idle connections to keep
    conn_max_lifetime: 5m   # Maximum connection lifetime

Or via environment:
POSTGRES_MAX_OPEN_CONNS=25
POSTGRES_MAX_IDLE_CONNS=5
POSTGRES_CONN_MAX_LIFETIME=5m

-- Enable pg_stat_statements extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Find slow queries
SELECT
query,
mean_exec_time,  -- named mean_time on PostgreSQL 12 and older
calls,
total_exec_time,
min_exec_time,
max_exec_time
FROM pg_stat_statements
WHERE mean_exec_time > 100 -- Queries slower than 100ms
ORDER BY mean_exec_time DESC
LIMIT 20;
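Once a slow query is identified, a common remediation is adding an index without blocking writes; the table and column here are illustrative:

```shell
# CREATE INDEX CONCURRENTLY avoids taking a write lock on the table.
# It cannot run inside a transaction block, so issue it as a single -c call.
psql -h postgres-host -U tmi_user -d tmi \
    -c "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_threats_threat_model_id
        ON threats (threat_model_id);"
```

Re-check pg_stat_statements afterwards to confirm the query's mean execution time actually dropped.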
-- Reset statistics
SELECT pg_stat_statements_reset();

-- Check index usage
SELECT
schemaname,
relname AS tablename,
indexrelname AS indexname,
idx_scan AS scans,
idx_tup_read AS tuples_read,
idx_tup_fetch AS tuples_fetched
FROM pg_stat_user_indexes
WHERE schemaname = 'public'
ORDER BY idx_scan ASC;
-- Find unused indexes (0 scans)
SELECT
s.schemaname,
s.relname AS tablename,
s.indexrelname AS indexname,
i.indexdef
FROM pg_stat_user_indexes s
JOIN pg_indexes i
ON i.schemaname = s.schemaname
AND i.tablename = s.relname
AND i.indexname = s.indexrelname
WHERE s.schemaname = 'public'
AND s.idx_scan = 0
AND s.indexrelname NOT LIKE '%_pkey';
-- Find missing indexes (high sequential scans)
SELECT
schemaname,
relname AS tablename,
seq_scan,
seq_tup_read,
idx_scan,
seq_tup_read / seq_scan AS avg_seq_tup_per_scan
FROM pg_stat_user_tables
WHERE seq_scan > 0
AND schemaname = 'public'
ORDER BY seq_tup_read DESC
LIMIT 20;

-- Manual vacuum
VACUUM ANALYZE;
-- Vacuum specific table
VACUUM ANALYZE threat_models;
-- Check autovacuum status
SELECT
schemaname,
relname AS tablename,
last_vacuum,
last_autovacuum,
last_analyze,
last_autoanalyze,
n_dead_tup
FROM pg_stat_user_tables
WHERE schemaname = 'public'
ORDER BY n_dead_tup DESC;
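For tables that show persistently high n_dead_tup, autovacuum can be tuned per table instead of globally; the table name and thresholds here are illustrative:

```shell
# Trigger autovacuum at 5% dead tuples instead of the 20% default
# (and analyze at 2% instead of 10%) for a hot table
psql -h postgres-host -U postgres -d tmi -c "
ALTER TABLE threat_models SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_analyze_scale_factor = 0.02
);"
```

Per-table settings take effect on the next autovacuum cycle and show up in the table's reloptions.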
-- Vacuum full (requires table lock)
VACUUM FULL threat_models;

-- Current connection count
SELECT count(*) FROM pg_stat_activity;
-- Connections by state
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;
-- Active queries
SELECT
pid,
usename,
application_name,
client_addr,
state,
query,
now() - query_start AS duration
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY duration DESC;
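When the active-query view above surfaces a runaway query, it can be cancelled gracefully or, failing that, terminated; the pid comes from pg_stat_activity (12345 is a placeholder):

```shell
# First try a polite cancel of the running statement (keeps the connection)
psql -h postgres-host -U postgres -d tmi -c "SELECT pg_cancel_backend(12345);"

# If the backend ignores the cancel, terminate the whole connection
psql -h postgres-host -U postgres -d tmi -c "SELECT pg_terminate_backend(12345);"
```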
-- Long-running transactions
SELECT
pid,
now() - xact_start AS duration,
state,
query
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 minute'
ORDER BY duration DESC;

-- Database size
SELECT
pg_database.datname,
pg_size_pretty(pg_database_size(pg_database.datname)) AS size
FROM pg_database
ORDER BY pg_database_size(pg_database.datname) DESC;
-- Table sizes with indexes
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS total_size,
pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) AS table_size,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename) -
pg_relation_size(schemaname||'.'||tablename)) AS indexes_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

Problem: Cannot connect to PostgreSQL
# Check if PostgreSQL is running
systemctl status postgresql
# Check listening port
sudo netstat -nlp | grep 5432
# Test local connection
psql -U postgres -d postgres
# Test remote connection
psql -h postgres-host -U tmi_user -d tmi
# Check pg_hba.conf for connection rules
sudo cat /etc/postgresql/*/main/pg_hba.conf

Problem: Permission denied errors
-- Grant necessary privileges
GRANT CONNECT ON DATABASE tmi TO tmi_user;
GRANT USAGE ON SCHEMA public TO tmi_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO tmi_user;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO tmi_user;
-- Grant future table privileges
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO tmi_user;

Problem: Slow queries
-- Enable query logging
ALTER DATABASE tmi SET log_min_duration_statement = 1000; -- Log queries > 1s
-- Check slow queries
SELECT * FROM pg_stat_statements
WHERE mean_exec_time > 1000 -- named mean_time on PostgreSQL 12 and older
ORDER BY mean_exec_time DESC;
-- Analyze query plan
EXPLAIN ANALYZE
SELECT * FROM threats WHERE threat_model_id = 'uuid-here';

# Connection test
redis-cli -h redis-host -p 6379 -a password ping
# Memory usage
redis-cli -h redis-host -a password info memory
# Key count
redis-cli -h redis-host -a password DBSIZE
# Clear cache (use with caution!)
redis-cli -h redis-host -a password FLUSHDB
# Monitor commands in real-time
redis-cli -h redis-host -a password monitor
# Check slow commands
redis-cli -h redis-host -a password slowlog get 10

Ubuntu/Debian:
sudo apt update
sudo apt install redis-server
sudo systemctl start redis-server
sudo systemctl enable redis-server

CentOS/RHEL:
sudo yum install epel-release
sudo yum install redis
sudo systemctl start redis
sudo systemctl enable redis

Docker:
docker run -d \
--name tmi-redis \
-p 6379:6379 \
-v redis_data:/data \
redis:7 redis-server --appendonly yes

Edit /etc/redis/redis.conf:
# Bind to specific interfaces
bind 127.0.0.1 10.0.0.5
# Set password
requirepass your_redis_password
# Memory management
maxmemory 1gb
maxmemory-policy allkeys-lru
# Persistence
save 900 1
save 300 10
save 60 10000
# Disable dangerous commands (production)
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command CONFIG ""Restart Redis:
sudo systemctl restart redis-server

TMI uses structured key naming:
{namespace}:{type}:{identifier}:{sub-identifier}
Key Categories:
- Sessions: session:{user_id}:{session_id} (24h TTL)
- Cache: cache:{entity_type}:{entity_id} (2-15m TTL)
- Auth: auth:token:{token_id} (token expiry TTL)
- Rate Limiting: rate_limit:user:{user_id}:{action} (1m TTL)
- Locks: lock:{resource}:{id} (30s TTL)
See Database Schema Reference for complete Redis schema.
# Get cache hit rate
redis-cli -h redis-host -a password info stats | \
awk '/keyspace_hits|keyspace_misses/ {
split($0,a,":");
if ($1 ~ /hits/) hits=a[2];
if ($1 ~ /misses/) misses=a[2]
}
END {
total=hits+misses;
if (total > 0) {
rate=(hits/total)*100;
printf "Hit Rate: %.2f%% (Hits: %d, Misses: %d)\n", rate, hits, misses
}
}'
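To go beyond key counts and estimate memory per namespace, MEMORY USAGE (available since Redis 4.0) can be sampled over a handful of keys; hostnames and patterns match the examples in this guide:

```shell
# Sample up to 100 keys per namespace and sum their MEMORY USAGE estimates
for pattern in "cache:*" "session:*" "auth:*"; do
    total=0
    for key in $(redis-cli -h redis-host -a password --scan --pattern "$pattern" | head -100); do
        bytes=$(redis-cli -h redis-host -a password MEMORY USAGE "$key")
        total=$((total + ${bytes:-0}))
    done
    echo "$pattern: ~${total} bytes across sampled keys"
done
```

This is a rough sample, not a full accounting; use --bigkeys (shown below) to find individual outliers.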
# Key distribution by pattern
for pattern in "cache:*" "session:*" "auth:*"; do
count=$(redis-cli -h redis-host -a password --scan --pattern "$pattern" | wc -l)
echo "$pattern: $count keys"
done

# View cache keys
redis-cli -h redis-host -a password --scan --pattern "cache:threat_model:*"
# Get cached value
redis-cli -h redis-host -a password GET "cache:threat_model:uuid-here"
# Check TTL
redis-cli -h redis-host -a password TTL "cache:threat_model:uuid-here"
# Delete specific cache key
redis-cli -h redis-host -a password DEL "cache:threat_model:uuid-here"
# Delete cache by pattern (use with caution)
redis-cli -h redis-host -a password --scan --pattern "cache:threat_model:*" | \
xargs redis-cli -h redis-host -a password DEL

RDB Snapshots:
# Manual snapshot
redis-cli -h redis-host -a password SAVE
# Or background save
redis-cli -h redis-host -a password BGSAVE
# Check last save time
redis-cli -h redis-host -a password LASTSAVE
# Copy RDB file
cp /var/lib/redis/dump.rdb /backup/location/redis_$(date +%Y%m%d).rdb

AOF (Append-Only File):
# Enable AOF in redis.conf
appendonly yes
appendfilename "appendonly.aof"
# Manual AOF rewrite
redis-cli -h redis-host -a password BGREWRITEAOF
# Copy AOF file
cp /var/lib/redis/appendonly.aof /backup/location/

Automated Backup Script:
#!/bin/bash
# redis-backup.sh
REDIS_HOST="redis-host"
REDIS_PASSWORD="your-password"
BACKUP_DIR="/var/backups/redis"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p $BACKUP_DIR
# Record the last save time, then trigger a background save
LASTSAVE=$(redis-cli -h $REDIS_HOST -a $REDIS_PASSWORD LASTSAVE)
redis-cli -h $REDIS_HOST -a $REDIS_PASSWORD BGSAVE
# Wait for the save to complete (LASTSAVE changes when BGSAVE finishes)
while [ "$(redis-cli -h $REDIS_HOST -a $REDIS_PASSWORD LASTSAVE)" -eq "$LASTSAVE" ]; do
sleep 1
done
# Copy dump file
cp /var/lib/redis/dump.rdb "$BACKUP_DIR/redis_$TIMESTAMP.rdb"
# Clean old backups
find $BACKUP_DIR -name "redis_*.rdb" -mtime +7 -delete
echo "$(date): Redis backup completed" >> /var/log/tmi/redis-backup.log# Stop Redis
systemctl stop redis
# Restore RDB file
cp /backup/location/redis_20251112.rdb /var/lib/redis/dump.rdb
chown redis:redis /var/lib/redis/dump.rdb
# Start Redis
systemctl start redis

# Check memory usage
redis-cli -h redis-host -a password info memory
# Memory usage by key type
redis-cli -h redis-host -a password --bigkeys
# Check slow commands
redis-cli -h redis-host -a password config get slowlog-log-slower-than
redis-cli -h redis-host -a password slowlog get 10
# Monitor latency
redis-cli -h redis-host -a password --latency

# Check client connections
redis-cli -h redis-host -a password client list
# Kill specific client
redis-cli -h redis-host -a password client kill <ip:port>
# Set max clients
redis-cli -h redis-host -a password config set maxclients 10000

# Find memory usage by key pattern
for pattern in cache: session: auth:; do
count=$(redis-cli --scan --pattern "${pattern}*" | wc -l)
echo "$pattern: $count keys"
done
# Delete session keys left without an expiry (TTL of -1 means no TTL is set;
# TMI sessions should always expire)
redis-cli --scan --pattern "session:*" | \
while read key; do
ttl=$(redis-cli TTL "$key")
if [ "$ttl" -eq -1 ]; then
redis-cli DEL "$key"
fi
done

# Check if Redis is running
systemctl status redis
# Check listening port
sudo netstat -nlp | grep 6379
# Test connection
redis-cli -h redis-host ping
# Check logs
tail -f /var/log/redis/redis-server.log

-- Create read-only user
CREATE USER tmi_readonly WITH PASSWORD 'readonly_password';
GRANT CONNECT ON DATABASE tmi TO tmi_readonly;
GRANT USAGE ON SCHEMA public TO tmi_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO tmi_readonly;
-- Revoke write access
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM tmi_readonly;

# Generate self-signed certificate (development)
openssl req -new -x509 -days 365 -nodes \
-out /etc/postgresql/server.crt \
-keyout /etc/postgresql/server.key
# Set permissions
chmod 600 /etc/postgresql/server.key
chown postgres:postgres /etc/postgresql/server.*
# Configure postgresql.conf
ssl = on
ssl_cert_file = '/etc/postgresql/server.crt'
ssl_key_file = '/etc/postgresql/server.key'
# Restart PostgreSQL
systemctl restart postgresql

Configure TMI to require SSL:
POSTGRES_SSL_MODE=require

# Set strong password in redis.conf
requirepass your_strong_redis_password_here
# Disable dangerous commands
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command CONFIG "CONFIG-SECRET-NAME"
rename-command KEYS ""
# Bind to specific interface
bind 127.0.0.1
# Restart Redis
systemctl restart redis

- Monitoring and Health - Database monitoring and alerting
- Security Operations - Database security practices
- Performance and Scaling - Scaling databases
- Maintenance Tasks - Routine maintenance procedures
- Database Setup - Initial database deployment
- PostgreSQL Operations Guide - Detailed PostgreSQL documentation
- PostgreSQL Schema Reference - Complete schema documentation
- Redis Schema Reference - Redis key patterns and caching strategies