A modern, web-based inventory management system for homelab infrastructure with integrated Prometheus monitoring configuration export. Built with React and Flask, this application helps you track, organize, and monitor all your homelab devices in one place.
- Comprehensive Device Tracking: Track physical and virtual servers, network switches, wireless access points, IP cameras, IoT devices, and more
- Rich Device Information: Store device names, IP addresses, device functions, vendors, models, locations, serial numbers, and network assignments
- Multiple Device Types: Support for 13+ device types including Linux/FreeBSD servers, network equipment, cameras, and specialized devices
- Location Organization: Organize devices by physical or logical locations (e.g., Data Center, Office, Rack 1)
- Vendor & Model Management: Maintain a catalog of vendors and models for consistent device tracking
- Device History & Change Tracking: Automatic audit log for device create/update/delete and bulk import/delete, with a per-device history endpoint and UI timeline, plus a global paginated history view (50/100/250/500/all) under Other Actions
- Prometheus Export: Automatically generate Prometheus target configurations from your inventory
- Multiple Monitor Types: Support for Node Exporter, smartctl_exporter, SNMP, ICMP, HTTP/HTTPS, DNS, IPMI, NUT, and Docker monitoring
- Flexible Monitoring: Enable or disable monitoring per device, with support for multiple monitors per device
- Export Options: Write Prometheus configs directly to disk or download as a ZIP archive
- Organized Exports: Automatically organize Prometheus targets by monitor type and device category
- Network Discovery: Probe single IPs/hostnames, dashed IP ranges, or CIDR blocks
- ICMP + Reverse DNS: Reachability with RTT plus PTR lookup to prefill hostnames
- Inline Add: Add discovered devices one-by-one without leaving the results list
- Modern React UI: Responsive interface built with React and Tailwind CSS
- Dark Mode with Auto Theme: Light/dark themes with system-preference auto mode; all dialogs (Admin, Bulk Ops, Advanced Search, Device modals) are theme aware
- Mobile-Optimized: Touch-friendly controls, safe area support, and optimized touch targets
- Advanced Search: Multi-field search with expandable filters (vendor, model, location, monitoring status, PoE, IP address)
- Search & Filter: Real-time search across device fields with debouncing and filter by device type
- View Modes: Switch between full detail view and condensed list view
- Real-time Stats: Dashboard showing total devices, monitoring status, and device type breakdowns
- Device Cloning: Quickly duplicate devices with similar configurations
- Bulk Operations: Import/export devices in JSON or CSV format, bulk delete multiple devices
- Network Segmentation: Assign devices to specific networks (LAN, IoT, DMZ, GUEST, or ALL)
- Interface Types: Track network interface types from 10Base-T to 400G QSFP-DD, including Wi-Fi standards
- PoE Support: Track Power over Ethernet (PoE) powered devices and standards (802.3af/at/bt)
- Serial Number Tracking: Maintain serial number records for warranty and asset management
- Vendor Management: Add, edit, and remove vendors with automatic model relationship handling
- Model Management: Associate models with vendors, filter models by vendor, and track device counts
- Location Management: Create and organize locations with device count tracking
- SQLite Web Interface: Optional SQLite-web container for direct database access
- Import Devices: Import multiple devices from JSON array or CSV file with validation and error reporting
- Export Devices: Export all devices or filtered by type in JSON or CSV format
- Bulk Delete: Delete multiple devices at once by ID (up to 100 at a time)
- Error Handling: Detailed import/export results showing successful and failed operations
- Multi-field Search: Search across name, IP address, device function, serial number, networks, interface types, and PoE standards
- Advanced Filters: Filter by device type, vendor, model, location, monitoring status, PoE powered status, and IP address presence
- Filter Chips: Visual representation of active filters with easy removal
- Real-time Results: Instant search results with debouncing for optimal performance
- Health Checks: Basic and detailed health endpoints for monitoring application status
- System Metrics: CPU, memory, and disk usage metrics (requires psutil)
- Database Statistics: Device counts and database connectivity status
- Device History API: `/api/devices/<id>/history` for audit visibility
Frontend:
- React 18.2 with modern hooks
- Vite for fast development and optimized builds
- Tailwind CSS for responsive, utility-first styling
- Lucide React for beautiful icons
- Mobile-optimized with iOS-specific enhancements
Backend:
- Flask 3.0 RESTful API with modular blueprint architecture
- SQLAlchemy ORM for database operations
- Flask-Migrate/Alembic for database schema versioning (auto-migrate on startup enabled by default; disable via `AUTO_MIGRATE=false`)
- SQLite database (lightweight, file-based) with optimized indexes
- Flask-CORS for cross-origin resource sharing
- Flask-Limiter for API rate limiting
- Marshmallow for input validation and sanitization
- iputils-ping for ICMP discovery (bundled in backend image) with Python ping3 fallback
- PyYAML for Prometheus configuration generation
- psutil for system metrics (optional)
- pytest for testing framework
- Automated database backup system
- Standardized API response format
- Environment validation on startup
Infrastructure:
- Docker & Docker Compose for containerization
- Nginx for serving the frontend
- Multi-stage builds for optimized image sizes
- Volume mounts for persistent data storage
```
homelab-inventory/
├── backend/                      # Flask API, migrations, services, scripts, tests
├── frontend/                     # React UI + nginx reverse proxy for /api
├── docker-compose.yaml           # Default stack (frontend, backend, sqlite-web)
├── docker-compose.yaml.example
├── BACKUP_README.md              # Standalone backup guide
├── screenshots/                  # Light/dark galleries
├── targets/                      # Prometheus export directory (created at runtime)
└── README.md                     # This file
```

The `data/` directory (SQLite DB + backups) is created on first run.
Before you begin, ensure you have the following installed:
- Docker (version 20.10 or later)
- Docker Compose (version 2.0 or later)
- Git (for cloning the repository)
For local development:
- Node.js (version 20 or later recommended; Docker uses Node 22) and npm
- Python (version 3.13 or later) and pip
The repo ships with a ready-to-run Compose file that pulls the pre-built images published from this repository, so no local build is required.
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd homelab-inventory
   ```

2. Set up Docker Compose (optional clean copy):

   ```bash
   cp docker-compose.yaml.example docker-compose.yaml
   ```

   The included `docker-compose.yaml` already points to the published images if you just want to `up` and go.

3. Start the containers:

   ```bash
   docker compose up -d
   ```

   The default images come from GitHub Container Registry for this repo:

   - `ghcr.io/pyrodex/homelab-inventory/backend:latest`
   - `ghcr.io/pyrodex/homelab-inventory/frontend:latest`

   If you fork/publish your own images, update the `image:` fields accordingly.

4. Access the application:

   - Web UI: http://localhost:5000
   - SQLite Web Interface (optional): http://localhost:5001

Note: If you prefer to build the images locally, run `docker compose build` (or switch `image:` to `build:` in `docker-compose.yaml`) and push to your own registry if desired.
1. Navigate to the backend directory:

   ```bash
   cd backend
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Set environment variables:

   ```bash
   export FLASK_ENV=development
   export DATABASE_PATH=./homelab.db
   export PROMETHEUS_EXPORT_PATH=./prometheus_targets
   ```

5. Run the Flask application:

   ```bash
   python app.py
   ```

   The backend will be available at http://localhost:5000
1. Navigate to the frontend directory:

   ```bash
   cd frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Start the development server:

   ```bash
   npm run dev
   ```

   The frontend will be available at http://localhost:5173 (or the port Vite assigns)

4. Build for production:

   ```bash
   npm run build
   ```
| Variable | Description | Default |
|---|---|---|
| `FLASK_ENV` | Flask environment (development/production) | `production` |
| `DATABASE_PATH` | Path to SQLite database file | `homelab.db` |
| `PROMETHEUS_EXPORT_PATH` | Directory for Prometheus config exports | `/app/prometheus_targets` |
| `AUTO_MIGRATE` | Apply Alembic migrations on startup (true/false) | `true` |
| `RATELIMIT_STORAGE_URL` | Rate limit storage (use Redis/Memcached for prod) | `memory://` |
| `CORS_ORIGINS` | Comma-separated list of allowed CORS origins (use `*` for all) | `*` |
| `BACKUP_DIRECTORY` | Directory for database backups | `/app/data/backups` |
| `BACKUP_RETENTION_DAYS` | Number of days to keep backups before cleanup | `30` |
| `BACKUP_SCHEDULE` | Cron schedule for automatic backups | `0 2 * * *` |
| `SECRET_KEY` | Flask secret key (change in production!) | `dev-secret-key-change-in-production` |
| `LOG_LEVEL` | Logging level (DEBUG, INFO, WARNING, ERROR) | `INFO` |
Edit `docker-compose.yaml` to customize:

- Ports: Change `5000:80` for the frontend port mapping
- Volumes: Adjust the `./data` and `./targets` paths as needed
- Database Path: Modify the `DATABASE_PATH` environment variable
- Prometheus Export Path: Modify the `PROMETHEUS_EXPORT_PATH` environment variable
The application uses SQLite by default (`DATABASE_PATH`, default `homelab.db`). The database file is created if missing.
On a brand-new database, core tables are bootstrapped automatically (after migrations) on startup.
Schema management (migrations):

- Flask-Migrate/Alembic manage the schema. Migrations live in `backend/migrations/versions/`.
- Auto-migration runs on startup by default (`AUTO_MIGRATE=true`). Set `AUTO_MIGRATE=false` to skip.
- Generate migrations after model changes with `flask db migrate -m "description"` and apply them with `flask db upgrade`.
Performance Optimizations:
- Database indexes on frequently queried fields (name, device_type, ip_address, monitoring_enabled, etc.)
- Composite indexes for common query patterns
- Foreign key indexes for faster joins
Database Backups:
- Automated backup script included (`backend/scripts/backup_db.py`)
- Creates timestamped backups with automatic cleanup
- Configurable retention period (default: 30 days)
- Automatic daily backups via cron (configured in the Docker container)
- Uses SQLite's native backup API for consistency
- Manual backup: `docker exec homelab-inventory-backend python3 /app/scripts/backup_db.py`
- Backups stored in the `data/backups/` directory
- See the Database Backup & Restore section below for detailed instructions
Environment Validation:
- Application validates environment variables and paths on startup
- Automatically creates required directories if they don't exist
- Provides security warnings in production (default SECRET_KEY, open CORS)
- Fails fast in production if critical configuration is missing
To use a different database:

- Set the `DATABASE_PATH` environment variable
- Ensure the directory exists and is writable
- Run migrations if needed
- Restart the application
The application exports Prometheus target configurations in YAML format. Configure your Prometheus instance to scrape from the export directory:
```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'linux_node_exporter'
    file_sd_configs:
      - files:
          - '/path/to/prometheus_targets/linux_node_exporter/*.yaml'
```

- Click the "Add Device" button in the header
- Fill in required fields:
  - Device Name
  - Device Type
  - Address (IP address, hostname, or URL)
  - Device Function
  - Vendor (create new if needed)
  - Model (create new if needed)
  - Location (create new if needed)
  - Serial Number
- Configure optional settings:
  - Networks (LAN, IoT, DMZ, GUEST, or ALL)
  - Interface Types
  - PoE Standards (if applicable)
  - PoE Powered checkbox
  - Monitoring Enabled checkbox
- Add monitors (e.g., Node Exporter on port 9100)
- Click "Create Device"
- Open Other Actions (desktop) or the Actions menu (mobile), then select Discover.
- Enter targets:
  - Single IPs/hostnames (comma or newline separated)
  - IP range (e.g., `192.168.1.10-192.168.1.20`)
  - CIDR (e.g., `192.168.1.0/28`)
- Run discovery to see ICMP reachability (with RTT) and reverse DNS results.
- Click Add Device on any row to prefill and save a device while staying on the discovery results.
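Behind the scenes, a range or CIDR has to be expanded into individual probe targets before pinging. A rough sketch of that expansion using Python's `ipaddress` module (an illustration of the idea, not the project's actual parser):

```python
import ipaddress

def expand_targets(spec: str) -> list[str]:
    """Expand a single IP/hostname, a dashed IP range, or a CIDR
    block into a list of individual probe targets."""
    spec = spec.strip()
    if "/" in spec:
        # CIDR, e.g. 192.168.1.0/28 -> the usable host addresses
        return [str(ip) for ip in ipaddress.ip_network(spec, strict=False).hosts()]
    if "-" in spec and spec.count(".") >= 3:
        # Dashed range, e.g. 192.168.1.10-192.168.1.20 (inclusive)
        start_s, end_s = spec.split("-", 1)
        start = ipaddress.ip_address(start_s.strip())
        end = ipaddress.ip_address(end_s.strip())
        return [str(ipaddress.ip_address(i))
                for i in range(int(start), int(end) + 1)]
    return [spec]  # single IP or hostname, passed through as-is
```

Note that a `/28` yields 14 probe targets because the network and broadcast addresses are excluded.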
- Click the "Admin" button in the header
- Navigate between tabs:
- Vendors: Add, edit, or remove vendors
- Models:
  - Select a vendor from the dropdown to filter models (or "All Vendors" to see all)
  - Add models associated with the selected vendor
  - The models list automatically filters when a vendor is selected
- Locations: Add, edit, or remove locations
- Use inline editing for quick updates
- Write to Disk: Click "Write Prometheus Files" to generate configs in the export directory
- Download ZIP: Click "Download Config" to download a ZIP archive of Prometheus targets
The export organizes targets by monitor type:
- `linux_node_exporter/` - Linux servers with Node Exporter
- `freebsd_node_exporter/` - FreeBSD servers with Node Exporter
- `snmp/` - SNMP-monitored devices
- `icmp/` - ICMP-only devices
- And more...
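Each generated file follows Prometheus's `file_sd` format. An illustrative example (the exact filename and label set depend on your inventory):

```yaml
# targets/linux_node_exporter/targets.yaml (illustrative)
- targets:
    - '192.168.1.10:9100'
  labels:
    device_name: 'web-server-01'
    location: 'Rack 1'
```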
- Quick Search: Use the search box to find devices by name, IP, device function, serial number, networks, interface types, or PoE standards
- Advanced Search: Click the "Filters" button to expand advanced filtering options:
- Filter by device type, vendor, model, or location
- Filter by monitoring status (enabled/disabled)
- Filter by PoE powered status
- Filter by IP address presence
- Combine multiple filters for precise results
- Filter Chips: Active filters are displayed as removable chips above the results
- Dark Mode Friendly: Advanced search dropdown and chips adapt to dark mode
- Device Type Filter: Select a device type from the dropdown to filter devices
- View Modes: Toggle between full detail view and condensed list view
- Bulk Import:
  - Import devices from a JSON array or CSV file
  - CSV columns: name, device_type, ip_address, deviceFunction, vendor, model, location, serial_number, networks, interface_type, poe_powered, poe_standards, monitoring_enabled
  - JSON format: array of device objects matching the API schema
  - Results show successful imports and failed items with error details
- Bulk Export:
  - Export all devices or filter by device type
  - Choose JSON or CSV format
  - Files are automatically downloaded with descriptive filenames
- Bulk Delete:
  - Delete multiple devices by entering comma-separated IDs
  - Maximum 100 devices per operation
  - Confirmation dialog prevents accidental deletions
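For reference, a minimal import CSV using the columns listed above might look like this (the values are illustrative):

```csv
name,device_type,ip_address,deviceFunction,vendor,model,location,serial_number,networks,interface_type,poe_powered,poe_standards,monitoring_enabled
web-server-01,linux_server_physical,192.168.1.10,Web Server,ExampleVendor,ExampleModel,Rack 1,SN123456,LAN,1G RJ45,false,,true
```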
- Edit: Click the edit icon to modify device details
- Clone: Click the clone icon to duplicate a device (useful for similar devices)
- Toggle Monitoring: Click the check/X icon to enable/disable monitoring
- Delete: Click the trash icon to remove a device (with confirmation)
- View History: In the device modal, the History tab shows a per-device change log with field-level diffs; use Other Actions → History for a global, paginated history across all devices (50/100/250/500/all)
All API endpoints are prefixed with `/api`.

Devices:

- `GET /api/devices` - Get all devices (optional `?type=<device_type>` filter)
- `GET /api/devices/<id>` - Get device by ID
- `GET /api/devices/<id>/history` - Get device change history (supports `limit`/`offset`)
- `GET /api/devices/history` - Get change history across all devices (supports `limit`/`offset`, accepts `limit=all`)
- `POST /api/devices` - Create new device
- `PUT /api/devices/<id>` - Update device
- `DELETE /api/devices/<id>` - Delete device

Monitors:

- `POST /api/devices/<device_id>/monitors` - Add monitor to device
- `PUT /api/monitors/<id>` - Update monitor
- `DELETE /api/monitors/<id>` - Delete monitor

Vendors:

- `GET /api/vendors` - Get all vendors
- `POST /api/vendors` - Create vendor
- `PUT /api/vendors/<id>` - Update vendor
- `DELETE /api/vendors/<id>` - Delete vendor

Models:

- `GET /api/models` - Get all models (optional `?vendor_id=<id>` filter)
- `POST /api/models` - Create model
- `PUT /api/models/<id>` - Update model
- `DELETE /api/models/<id>` - Delete model

Locations:

- `GET /api/locations` - Get all locations
- `POST /api/locations` - Create location
- `PUT /api/locations/<id>` - Update location
- `DELETE /api/locations/<id>` - Delete location

Stats:

- `GET /api/stats` - Get dashboard statistics

Prometheus:

- `GET /api/prometheus/export?mode=write` - Write Prometheus configs to disk
- `GET /api/prometheus/export?mode=download` - Download Prometheus configs as a ZIP
Bulk Operations:

- `POST /api/bulk/devices/import` - Import devices from a JSON array or CSV file
  - JSON: send an array of device objects in the request body
  - CSV: send multipart/form-data with a `file` field
- `GET /api/bulk/devices/export?format=json&type=<device_type>` - Export devices
  - `format`: `json` or `csv`
  - `type`: optional device type filter
- `POST /api/bulk/devices/delete` - Bulk delete devices
  - Body: `{ "device_ids": [1, 2, 3, ...] }`
  - Maximum 100 devices per request

Search:

- `GET /api/search/devices` - Advanced search with multiple filters
  - Query parameters:
    - `q`: search term (searches across multiple fields)
    - `type`: device type filter
    - `vendor_id`: filter by vendor ID
    - `model_id`: filter by model ID
    - `location_id`: filter by location ID
    - `monitoring_enabled`: `true` or `false`
    - `poe_powered`: `true` or `false`
    - `has_ip`: `true` or `false`
  - Returns: `{ "results": [...], "count": N, "filters_applied": {...} }`
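The search query string can be composed client-side with the standard library; a minimal helper (the base URL and parameter values are illustrative):

```python
from urllib.parse import urlencode

def build_search_url(base: str, **filters) -> str:
    """Compose the advanced-search URL; drop unset filters and
    render booleans as the lowercase strings the API expects."""
    params = {k: (str(v).lower() if isinstance(v, bool) else v)
              for k, v in filters.items() if v is not None}
    return f"{base}/api/search/devices?{urlencode(params)}"
```

For example, `build_search_url("http://localhost:5000", q="camera", monitoring_enabled=True)` yields a URL with `q=camera&monitoring_enabled=true`.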
Health:

- `GET /api/health` - Basic health check
  - Returns: `{ "status": "healthy", "timestamp": "...", "service": "..." }`
- `GET /api/health/detailed` - Detailed health check with system metrics
  - Returns: database status, system metrics (CPU, memory, disk), device counts
  - Requires psutil for system metrics (gracefully degrades if unavailable)

Discovery:

- `POST /api/discovery` - Probe IPs/hostnames/ranges/CIDRs with ICMP + reverse DNS
  - Body supports `targets` (array or comma/newline string), `range` (`192.168.1.10-192.168.1.20`), and `cidr` (`192.168.1.0/28`)
  - Returns reachability summary, RTT (ms), resolved IP, and PTR hostname when available
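An illustrative discovery response (the field names here are a sketch based on the description above, not a verbatim API contract):

```json
{
  "success": true,
  "data": {
    "results": [
      {
        "target": "192.168.1.10",
        "reachable": true,
        "rtt_ms": 1.4,
        "resolved_ip": "192.168.1.10",
        "ptr_hostname": "nas.local"
      }
    ]
  }
}
```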
All API responses follow a standardized format:
Success Response:
```json
{
  "success": true,
  "data": {...},
  "message": "Device created successfully"
}
```

Error Response:

```json
{
  "success": false,
  "error": "Validation error",
  "details": {
    "name": ["Missing data for required field."],
    "device_type": ["Invalid device type."]
  },
  "error_code": "VALIDATION_ERROR"
}
```

```bash
# Create a device
curl -X POST http://localhost:5000/api/devices \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Web Server 01",
    "device_type": "linux_server_physical",
    "ip_address": "192.168.1.10",
    "deviceFunction": "Web Server",
    "vendor_id": 1,
    "model_id": 1,
    "location_id": 1,
    "serial_number": "SN123456",
    "networks": "LAN",
    "monitoring_enabled": true
  }'

# Response:
# {
#   "success": true,
#   "data": {
#     "id": 1,
#     "name": "Web Server 01",
#     ...
#   },
#   "message": "Device created successfully"
# }
```

Backend Development:

```bash
cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
export FLASK_ENV=development
export DATABASE_PATH=./homelab.db
export PROMETHEUS_EXPORT_PATH=./prometheus_targets

# Initialize database migrations (first time only)
flask db upgrade

# Run the application
python app.py
```

Frontend Development:

```bash
cd frontend
npm install
npm run dev
```
- Backend: Modular Flask architecture with blueprints, services, and models separation
- Routes organized by domain (devices, monitors, admin, bulk, search, health)
- Business logic in services layer
- Database models in separate module
- Input validation via Marshmallow schemas
- Custom exception handling
- Frontend: Component-based React architecture with service layer for API calls
- Components organized by feature (devices, admin, bulk, search, common)
- Centralized API service layer
- Utility functions for common operations
- Styling: Tailwind CSS utility classes with custom CSS for iOS optimizations
- Mobile-first responsive design
- Touch-friendly controls (44px minimum)
- Safe area support for notched devices
Using Pre-built Images (Recommended): The project uses GitHub Actions to automatically build and publish images to GitHub Container Registry. Simply use the pre-built images as shown in the Quick Start section.
Building Locally:

Frontend:

```bash
cd frontend
npm run build
```

Docker:

```bash
docker compose build --no-cache
```

Building and Publishing via CI/CD: Images are automatically built and published when you:

- Push to the `main` branch
- Create a git tag (e.g., `git tag v1.0.0 && git push --tags`)

See the CI/CD Pipeline section for more details.
The project includes a basic test structure using pytest:
Backend Tests:

```bash
cd backend
pytest
```

Test Structure:

- `tests/test_models.py` - Unit tests for database models
- `tests/test_api.py` - Integration tests for API endpoints
Adding Tests:
- Follow pytest conventions
- Use pytest-flask fixtures for Flask app context
- Test both success and error cases
- Mock external dependencies when appropriate
Future Testing Enhancements:
- Frontend component tests with React Testing Library
- E2E tests with Playwright or Cypress
- Performance and load testing
1. Clone and configure:

   ```bash
   git clone <repository-url>
   cd homelab-inventory
   cp docker-compose.yaml.example docker-compose.yaml
   ```

2. Review and customize `docker-compose.yaml` as needed:

   - Update image names to match your GitHub username/organization (if different)
   - Set the `SECRET_KEY` environment variable (important for production!)
   - Configure `CORS_ORIGINS` to restrict allowed origins
   - Adjust backup retention days if needed

3. Pull and start the containers:

   ```bash
   docker compose pull
   docker compose up -d
   ```

   The pre-built images will be pulled from GitHub Container Registry automatically.

4. Check health status:

   ```bash
   docker compose ps  # Should show "healthy" status for backend and frontend
   ```

5. Check logs:

   ```bash
   docker compose logs -f
   ```
The Docker Compose configuration includes health checks for both services:
- Backend: Checks the `/api/health` endpoint every 30 seconds
- Frontend: Verifies the web server is responding
- Services wait for the backend to be healthy before starting
- Unhealthy containers are automatically restarted
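A backend healthcheck of this shape typically looks like the following Compose fragment (illustrative; the shipped `docker-compose.yaml` is authoritative):

```yaml
services:
  backend:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/api/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```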
1. Pull the latest images:

   ```bash
   docker compose pull
   ```

2. Restart containers with the new images:

   ```bash
   docker compose up -d
   ```

Or in one command:

```bash
docker compose pull && docker compose up -d
```

Note: New images are automatically built and pushed to GitHub Container Registry via GitHub Actions when code is pushed to the `main` branch or when tags are created. See the CI/CD Pipeline section below.
The project includes a GitHub Actions workflow that automatically builds and publishes Docker images to GitHub Container Registry (ghcr.io).
The CI/CD pipeline automatically:

- Builds images for both backend and frontend on every push to `main`
- Tags images with:
  - `latest` for the main branch
  - Version tags (e.g., `v1.0.0`, `1.0`, `1`) when git tags are created
  - Branch names for feature branches
  - SHA-based tags for traceability
- Pushes to GitHub Container Registry at `ghcr.io/pyrodex/homelab-inventory/backend` and `ghcr.io/pyrodex/homelab-inventory/frontend` (update the namespace if you publish from a fork)
- Uses build cache for faster subsequent builds

The workflow runs automatically on:

- Push to the `main` branch
- Git tags (e.g., `v1.0.0`)
- Pull requests (builds but doesn't push)
- Manual workflow dispatch
The `docker-compose.yaml.example` file is configured to use the pre-built images from GitHub Container Registry. Simply copy it to `docker-compose.yaml` and run:

```bash
docker compose up -d
```

The images will be automatically pulled from GitHub Container Registry.

- Backend: `ghcr.io/pyrodex/homelab-inventory/backend:latest`
- Frontend: `ghcr.io/pyrodex/homelab-inventory/frontend:latest`

If you publish under a different namespace (e.g., your fork), update the image names in `docker-compose.yaml`.
- Check the Actions tab in your GitHub repository to see build status
- View published packages in the Packages section of your repository
- Images are public by default (for public repositories) or can be configured with appropriate permissions
For local development and testing, you can still build images locally:
```bash
docker compose -f docker-compose.yaml build
```

Update `docker-compose.yaml` to use `build:` instead of `image:` if you want to build locally or push to your own registry.
The Homelab Inventory application includes comprehensive automated database backup functionality to protect your data.
Note: For a standalone backup reference document, see `BACKUP_README.md`.
- Timestamped Backups: Each backup includes a timestamp in the filename (e.g., `homelab_backup_20241120_143022.db`)
- Automatic Cleanup: Removes backups older than the retention period (default: 30 days)
- SQLite Backup API: Uses SQLite's native backup API for consistency and reliability
- Statistics: Reports backup count, total size, oldest and newest backups
- Zero Configuration: Automatic daily backups enabled by default in Docker Compose setup
| Variable | Description | Default |
|---|---|---|
| `BACKUP_DIRECTORY` | Directory for backups | `/app/data/backups` |
| `BACKUP_RETENTION_DAYS` | Number of days to keep backups | `30` |
| `BACKUP_SCHEDULE` | Cron schedule for automatic backups | `0 2 * * *` (daily at 2 AM) |
Automatic Daily Backups (Docker Compose):
Backups are automatically configured when using Docker Compose! The container includes a cron daemon that runs daily backups automatically.
Configure via environment variables in your docker-compose.yaml:
```yaml
services:
  backend:
    environment:
      - BACKUP_DIRECTORY=/app/data/backups
      - BACKUP_RETENTION_DAYS=30
      - BACKUP_SCHEDULE=0 2 * * *  # Daily at 2 AM (cron format)
```

Cron Schedule Format Examples:

- `0 2 * * *` - Daily at 2:00 AM (default)
- `0 */6 * * *` - Every 6 hours
- `0 0 * * 0` - Weekly on Sunday at midnight
- `*/30 * * * *` - Every 30 minutes

The backup runs automatically in the background. Check logs with:

```bash
docker compose logs backend | grep -i backup
```

Run the backup script manually:

```bash
# Inside the Docker container
docker exec homelab-inventory-backend python3 /app/scripts/backup_db.py

# Or if running locally
python3 backend/scripts/backup_db.py
```

Alternative Manual Methods:

```bash
# Copy the database directly
cp data/homelab.db data/homelab.db.backup

# Or use Docker
docker exec homelab-inventory-backend cp /app/data/homelab.db /app/data/homelab.db.backup
```
1. Stop the application:

   ```bash
   docker compose stop backend
   ```

2. Back up the current database (for safety):

   ```bash
   cp data/homelab.db data/homelab.db.current
   ```

3. Restore from backup:

   ```bash
   cp data/backups/homelab_backup_YYYYMMDD_HHMMSS.db data/homelab.db
   ```

4. Set correct permissions:

   ```bash
   chmod 644 data/homelab.db
   ```

5. Start the application:

   ```bash
   docker compose start backend
   ```
The backup script logs:
- Backup creation success/failure
- Backup file size
- Cleanup operations
- Statistics (count, total size, oldest/newest)
Check logs:
docker compose logs backend | grep -i backup- Regular Backups: Schedule daily backups during low-traffic hours
- Offsite Storage: Copy backups to external storage or cloud storage
- Test Restores: Periodically test restoring from backups
- Monitor Disk Space: Ensure backup directory has sufficient space
- Retention Policy: Adjust `BACKUP_RETENTION_DAYS` based on your needs
Create `/etc/systemd/system/homelab-backup.service`:

```ini
[Unit]
Description=Homelab Inventory Database Backup
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/bin/docker exec homelab-inventory-backend python3 /app/scripts/backup_db.py
```

Create `/etc/systemd/system/homelab-backup.timer`:

```ini
[Unit]
Description=Daily backup for Homelab Inventory
Requires=homelab-backup.service

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable and start:

```bash
sudo systemctl enable homelab-backup.timer
sudo systemctl start homelab-backup.timer
```

Backup fails:
- Check database path is correct
- Verify write permissions on backup directory
- Check disk space availability
- Review application logs: `docker compose logs backend | grep -i backup`
Backups not being cleaned up:
- Verify `BACKUP_RETENTION_DAYS` is set correctly
- Check backup directory permissions
- Review script logs for errors
Cannot restore backup:
- Ensure application is stopped before restoring
- Verify backup file is not corrupted
- Check file permissions after restore
Backup Location:
Backups are stored in the `data/backups/` directory with timestamped filenames (e.g., `homelab_backup_20241120_143022.db`).
Ensure volumes are properly mounted:
- `./data:/app/data` - Database storage
- `./targets:/app/prometheus_targets` - Prometheus export directory
- Input Validation & Sanitization: All API endpoints use Marshmallow schemas to validate and sanitize input data
  - IP addresses and hostnames are validated
  - Port numbers are validated (1-65535)
  - String fields are sanitized to prevent injection attacks
  - Device types, monitor types, and other enums are validated against allowed values
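The IP and port checks described above can be sketched with the standard library alone (a simplified illustration, not the project's actual Marshmallow schemas):

```python
import ipaddress

def validate_port(port: int) -> bool:
    """Ports must fall within the valid TCP/UDP range."""
    return 1 <= port <= 65535

def validate_address(value: str) -> bool:
    """Accept an IP address outright; otherwise apply a loose
    hostname check (dot-separated labels of letters/digits/hyphens)."""
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return all(part and all(c.isalnum() or c == "-" for c in part)
                   for part in value.split("."))
```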
- Rate Limiting: Flask-Limiter is configured to prevent abuse
  - Default limits: 200 requests/hour, 50 requests/minute per IP
  - Write operations (POST/PUT/DELETE): 20-30 requests/minute
  - Bulk operations: 5-10 requests/minute (more restrictive)
  - Prometheus export: 10 requests/minute
  - Search operations: 60 requests/minute
  - Rate limit errors return HTTP 429 with descriptive messages
- CORS Configuration: Configurable CORS origins via environment variable
  - Development: Set `CORS_ORIGINS=*` to allow all origins
  - Production: Set `CORS_ORIGINS=https://yourdomain.com,https://www.yourdomain.com` to restrict
- Error Handling: Comprehensive error handling with proper HTTP status codes
  - Validation errors return 400 with detailed messages
  - Database errors are logged but don't expose sensitive information
  - Rate limit errors return 429
- Database: SQLite database should be stored in a secure location with proper file permissions
- Network: Use a reverse proxy (nginx, Traefik) with SSL/TLS for production
- Authentication: Consider adding authentication for production use (not currently implemented)
- Rate Limiting Storage: For production or multi-instance deployments, point `RATELIMIT_STORAGE_URL` at Redis/Memcached instead of the in-memory default
Contributions are welcome! Please follow these guidelines:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
- Follow existing code style
- Add comments for complex logic
- Update documentation as needed
- Test your changes thoroughly
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.
Database is read-only:
- Check file permissions on the database file and directory
- Ensure the Docker volume is mounted correctly
- Verify the user running the container has write permissions
Frontend not connecting to backend:
- Verify both containers are running: `docker compose ps`
- Check container health status: `docker compose ps` (should show "healthy")
- Check backend logs: `docker compose logs backend`
- Ensure CORS is properly configured
- Verify the backend health endpoint: `curl http://localhost:5000/api/health`
Prometheus export fails:
- Verify the export directory exists and is writable
- Check the `PROMETHEUS_EXPORT_PATH` environment variable
Port already in use:
- Change the port mapping in `docker-compose.yaml`
- Or stop the service using the port
Backup fails:
- Verify `BACKUP_DIRECTORY` exists and is writable
- Check disk space availability
- Review backup script logs: `docker compose logs backend | grep -i backup`
- Ensure the database file exists and is accessible
- See Database Backup & Restore section for detailed troubleshooting
Environment validation errors:
- Check application logs for specific validation errors
- Verify required directories exist or can be created
- Review environment variable values
- In production, ensure `SECRET_KEY` is set (not using the default)
API returns unexpected format:
- All endpoints now return a standardized format with a `success` field
- Check for the `data` field in success responses
- Check for the `error` and `details` fields in error responses
- Update client code if needed to handle the new format
- Check the logs: `docker compose logs`
docker compose logs - Review the Issues page
- Create a new issue with:
- Description of the problem
- Steps to reproduce
- Expected vs actual behavior
- Environment details (OS, Docker version, etc.)
- Input validation and sanitization (Marshmallow schemas)
- API rate limiting (Flask-Limiter)
- CORS configuration
- Modular backend architecture (blueprints)
- Database migrations (Flask-Migrate)
- Comprehensive error handling
- Basic test structure (pytest)
- Bulk import/export operations
- Advanced search and filtering
- Health check endpoints
- Performance optimizations (database indexes)
- iOS mobile optimizations
- Database backup automation
- Standardized API responses
- Enhanced UI error handling with field-specific validation
- Docker health checks
- Environment variable validation
- Device history and change tracking (audit log + API + UI)
- Dark mode theme with system auto option
- User authentication and authorization
- Multi-user support with role-based access
- Device templates and presets
- Integration with other monitoring systems (Grafana, etc.)
- Scheduled/recurring discovery runs with notifications
- Internationalization (i18n) support
- Caching layer (Redis) for improved performance
- API documentation (OpenAPI/Swagger)
- WebSocket support for real-time updates
- Built with React and Flask
- Icons provided by Lucide
- Styled with Tailwind CSS
- Database management with SQLAlchemy
Made with ❤️ for the homelab community