The definitive Software Factory for Prometheus Exporters.
Monitoring Hub is an automated Software Factory that transforms simple YAML manifests into production-ready monitoring tools. It focuses on Enterprise Standards, Multi-Architecture support, GPG Security, and Full Automation.
- Native Multi-Arch: Every tool is built for `x86_64` and `aarch64` (ARM64).
- Multi-Format Packages: RPM (RHEL 8/9/10), DEB (universal package for Ubuntu 20.04+/Debian 11+), and OCI containers.
- Universal DEB Packages: A single DEB per architecture, built on Debian 12 and compatible with all modern Ubuntu/Debian distributions. No version-specific packages are needed for static binaries.
- GPG-Signed Packages: All RPM and DEB packages are cryptographically signed for integrity verification.
- Security-First: Container images are scanned with Trivy, Python code with Bandit, and dependencies with pip-audit.
- Hardened Containers: All Docker images use the Red Hat UBI 9 Minimal base.
- Linux Standards (FHS): Packages include system users, standard paths (`/etc`, `/var/lib`), and systemd integration.
- Zero-Touch Automation: A version watcher opens PRs, triggers CI validation, and auto-merges when tests pass.
- Always Up-to-Date: Never worry about upstream releases again.
Adding a new tool takes less than 1 minute using our Docker-first CLI tool.
- Docker - The only requirement! No Python installation needed.
We provide `./devctl` - a unified, Docker-first CLI for all development tasks:

```bash
# Interactive mode (recommended)
./devctl create-exporter

# Or specify name and repository
./devctl create-exporter my_exporter --repo owner/repo --category System
```

This will automatically:

- Create the directory structure (`exporters/my_exporter/`).
- Generate a clean `manifest.yaml`.
- Generate a standard `README.md`.
Alternative: If you prefer using Make:

```bash
make create-exporter
```

Edit the generated file to match your specific needs (binary names, config files). See `manifest.reference.yaml` for the full schema and all available options.
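For orientation, a generated manifest might look roughly like the following. This is an illustrative sketch only: the `name`, `upstream`, `build.binary_name`, and `artifacts.docker` fields appear elsewhere in this README, but the `repo` field and the default values shown are assumptions — `manifest.reference.yaml` is the authoritative schema.

```yaml
# Illustrative sketch -- consult manifest.reference.yaml for real field names.
name: my_exporter
upstream:
  type: github              # assumed value for GitHub-hosted upstreams
  repo: owner/repo          # assumed field name
build:
  binary_name: my_exporter
artifacts:
  docker:
    base_image: registry.access.redhat.com/ubi9/ubi-minimal   # assumed default
    entrypoint: ["/usr/bin/my_exporter"]
```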
Place any configuration files or scripts in the assets/ folder and reference them in the manifest.
If the default templates don't fit your needs, you can provide your own Jinja2 templates. The engine will automatically detect these and use them instead of the global defaults while still providing all dynamic variables from the manifest.
- Custom RPM Spec: Place a template named `<exporter_name>.spec.j2` in `exporters/<exporter_name>/templates/`.
- Custom Dockerfile: Place a template named `Dockerfile.j2` in `exporters/<exporter_name>/templates/`.
This allows for complex packaging logic (e.g., custom %post scripts in RPM or multi-stage builds in Docker) while keeping the benefit of automated versioning.
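The override detection described above boils down to a path check. A minimal sketch in Python, assuming a per-exporter `templates/` directory overrides a global default under `core/templates/` (function name and default paths are illustrative, not the engine's actual API):

```python
from pathlib import Path

def resolve_template(exporter: str, template_name: str,
                     root: Path = Path(".")) -> Path:
    """Return the per-exporter template override if it exists,
    otherwise fall back to the global default template."""
    override = root / "exporters" / exporter / "templates" / template_name
    default = root / "core" / "templates" / template_name
    return override if override.exists() else default
```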
To install extra packages in your container, create `exporters/my_exporter/templates/Dockerfile.j2`:

```dockerfile
FROM {{ artifacts.docker.base_image }}

# Custom logic: install specific tools
RUN microdnf install -y curl && microdnf clean all

# Standard logic (using variables from the manifest)
COPY {{ build.binary_name }} /usr/bin/{{ name }}
ENTRYPOINT {{ artifacts.docker.entrypoint | tojson }}
```

Test your exporter locally using our Docker-based development environment:
```bash
# Test-build an exporter (generates artifacts + builds RPM + Docker image)
./devctl test-exporter node_exporter

# Test with specific options
./devctl test-exporter node_exporter --arch arm64 --el10
```

Or using Make:

```bash
make test-exporter EXPORTER=node_exporter
```

Build artifacts only:

```bash
./devctl build-exporter node_exporter
# Output: build/node_exporter/
```

List all available exporters:

```bash
./devctl list-exporters
```

If you need to debug or develop the core engine locally:

```bash
# Install dependencies locally
make install

# Run tests locally
make local-test

# Run linter locally
make local-lint
```

Note: For most development tasks, the Docker workflow (`./devctl`) is recommended, as it requires no Python installation and ensures consistency.
For proprietary or custom exporters not available on GitHub:

```bash
# Create the exporter directory structure
mkdir -p exporters/my_exporter/assets

# Place your binary in the assets directory
cp /path/to/my_exporter exporters/my_exporter/assets/
chmod +x exporters/my_exporter/assets/my_exporter

# Create the manifest interactively
./devctl create-exporter
```

Edit the generated `manifest.yaml` to use the local binary:

```yaml
upstream:
  type: local
  local_binary: assets/my_exporter
```

Build and test:

```bash
./devctl test-exporter my_exporter
```

Note: Local sources are not tracked by the automatic version watcher. Update the `version` field manually when your binary changes.
```text
monitoring-hub/
├── config/                     # Configuration files (Docker, linting, Python, docs)
├── requirements/               # Python dependencies (base, dev, docs)
├── scripts/                    # Utility scripts
├── core/                       # Core engine (builder, schema, templates, tests)
├── exporters/                  # Exporter manifests (one per directory)
├── docs/                       # MkDocs documentation
├── devctl                      # Docker-first development CLI
├── Makefile                    # Make commands (aliases to devctl)
└── manifest.reference.yaml     # Manifest schema reference
```
See config/README.md and requirements/README.md for detailed documentation.
The "Magic" happens in the `core/` engine:

- Smart Filter: Compares local manifests against the deployed `catalog.json` (state management) to rebuild only what changed.
- Modular Engine (`core/engine/`):
  - Builder: Downloads binaries and orchestrates the build.
  - Schema: Validates YAML manifests (`marshmallow`).
  - State Manager: Handles the incremental build logic.
  - Templater: Uses Jinja2 (with auto-escape enabled) to render `.spec` files and `Dockerfile`s.
- Publisher: A parallelized Matrix CI builds all targets and updates the YUM repository.
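The smart filter's core idea can be sketched as a version diff. This is a hypothetical helper for illustration; the real state manager compares richer metadata than a bare version string:

```python
def changed_exporters(local: dict, catalog: dict) -> set:
    """Return the names of exporters whose local manifest version differs
    from the deployed catalog, including exporters not yet in the catalog.
    Only these need to be rebuilt."""
    return {name for name, version in local.items()
            if catalog.get(name) != version}
```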
Monitoring Hub uses a granular catalog structure with atomic writes to eliminate race conditions in parallel builds:

```text
catalog/
├── node_exporter/
│   ├── rpm_amd64_el9.json         # Atomic: 1 job = 1 file
│   ├── rpm_amd64_el10.json
│   ├── rpm_arm64_el9.json
│   ├── deb_amd64_debian-12.json   # Universal DEB (Ubuntu 20.04+, Debian 11+)
│   ├── deb_arm64_debian-12.json   # Universal DEB (Ubuntu 20.04+, Debian 11+)
│   ├── docker.json
│   └── metadata.json              # Aggregated (generated on-demand)
└── catalog.json                   # Global index (backward compatible)
```
Key Benefits:

- No Race Conditions: Each GitHub Actions job writes exactly one file, so there are no concurrent writes to the same file.
- Format Versioning: All files carry `"format_version": "3.0"` for future compatibility.
- On-Demand Aggregation: Granular files are aggregated into `metadata.json` at read time by the portal.
- Atomic Publishing: Individual artifact failures don't corrupt the entire catalog.
- Faster Rebuilds: Only changed artifacts are regenerated.
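The atomic-write guarantee typically relies on the write-to-temp-then-rename pattern, where `os.replace()` is atomic on POSIX filesystems so readers never observe a half-written file. A minimal Python sketch, not the project's actual helper:

```python
import json
import os
import tempfile
from pathlib import Path

def write_artifact_json(path: Path, payload: dict) -> None:
    """Atomically write a catalog fragment: dump to a temp file in the
    same directory, then rename it over the target in one step."""
    path.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as fh:
            json.dump(payload, fh, indent=2)
        os.replace(tmp, path)  # atomic on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```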
V3 Scripts:

- `generate_artifact_metadata.py` - Generates atomic artifact JSON files (RPM/DEB/Docker)
- `aggregate_catalog_metadata.py` - Aggregates granular files into `metadata.json`
- `publish_artifact_metadata.sh` - Publishes atomic files to gh-pages
- `site_generator_v2.py` - Portal generation with V3 catalog support
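Conceptually, aggregation is just merging the per-artifact fragments into one document keyed by file stem (e.g. `rpm_amd64_el9`). A hypothetical sketch of what `aggregate_catalog_metadata.py` might do, not its actual implementation:

```python
import json
from pathlib import Path

def aggregate_exporter(exporter_dir: Path) -> dict:
    """Merge an exporter's granular artifact JSON files into a single
    metadata document, skipping any previously generated aggregate."""
    artifacts = {}
    for fragment in sorted(exporter_dir.glob("*.json")):
        if fragment.name == "metadata.json":  # skip the old aggregate
            continue
        artifacts[fragment.stem] = json.loads(fragment.read_text())
    return {"format_version": "3.0", "artifacts": artifacts}
```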
See docs/architecture/refactoring-v2-plan.md for complete V3 implementation details.
```bash
# RHEL/CentOS/Rocky/Alma 8, 9, 10
sudo rpm --import https://sckyzo.github.io/monitoring-hub/RPM-GPG-KEY-monitoring-hub
sudo dnf config-manager --add-repo https://sckyzo.github.io/monitoring-hub/el9/$(arch)/
sudo dnf install <exporter_name>
```

```bash
# Universal DEB packages compatible with Ubuntu 20.04+, Debian 11+
curl -fsSL https://sckyzo.github.io/monitoring-hub/apt/monitoring-hub.asc | \
  sudo gpg --dearmor -o /usr/share/keyrings/monitoring-hub.gpg

# Replace 'bookworm' with your distribution codename:
# - Ubuntu: focal (20.04), jammy (22.04), noble (24.04)
# - Debian: bullseye (11), bookworm (12), trixie (13)
echo "deb [signed-by=/usr/share/keyrings/monitoring-hub.gpg] \
  https://sckyzo.github.io/monitoring-hub/apt bookworm main" | \
  sudo tee /etc/apt/sources.list.d/monitoring-hub.list

sudo apt update && sudo apt install <exporter_name>
```

```bash
docker pull ghcr.io/sckyzo/monitoring-hub/<exporter_name>:latest
```

We welcome new exporters! Feel free to open a Pull Request following the guide above.
Monitoring Hub takes security seriously. We implement multiple layers of protection:
- Code Scanning: Automated Bandit security scanner on all PRs
- Dependency Scanning: pip-audit and Dependabot for vulnerability detection
- Container Scanning: Trivy scans all container images with SARIF upload to GitHub Security
- YAML Validation: yamllint ensures configuration file integrity
- Network Security: All HTTP requests include timeouts and exponential backoff retry logic
- SSL/TLS Resilience: Automatic retry on SSL errors and connection failures
- Template Security: Jinja2 templates use autoescape to prevent injection attacks
- Input Validation: Strict manifest schema validation with marshmallow
- Package Signing: GPG-signed RPM and DEB packages for integrity verification
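The exponential-backoff schedule used for retrying HTTP/SSL failures can be sketched as follows. This is an illustrative helper, not the project's actual retry code; the base delay and cap shown are assumptions:

```python
def backoff_delays(retries: int, base: float = 1.0, cap: float = 30.0) -> list:
    """Return an exponential backoff schedule (1s, 2s, 4s, ...)
    capped at `cap` seconds, one delay per retry attempt."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]
```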
If you discover a security vulnerability, please follow our Security Policy for responsible disclosure. Do not open public issues for security vulnerabilities.
For more details, see:
- SECURITY.md - Vulnerability reporting process
- Security Guidelines - Development security best practices
Distributed under the MIT License. See LICENSE for more information.