Project Goal

Monitoring Hub is an automated Software Factory that transforms simple YAML manifests into production-ready monitoring tools. It focuses on Enterprise Standards, Multi-Architecture support, GPG Security, and Full Automation.

Key Features

  • Native Multi-Arch: Every tool is built for x86_64 and aarch64 (ARM64).
  • Multi-Format Packages: RPM (RHEL 8/9/10), DEB (universal package for Ubuntu 20.04+/Debian 11+), and OCI containers.
  • Universal DEB Packages: Single DEB per architecture built on Debian 12, compatible with all modern Ubuntu/Debian distributions. No version-specific packages needed for static binaries.
  • GPG-Signed Packages: All RPM and DEB packages are cryptographically signed for integrity verification.
  • Security-First: Container images scanned with Trivy, Python code with Bandit, dependencies with pip-audit.
  • Hardened Containers: All Docker images use Red Hat UBI 9 Minimal base.
  • Linux Standards (FHS): Packages include system users, standard paths (/etc, /var/lib), and systemd integration.
  • Zero-Touch Automation: Version watcher opens PRs, triggers CI validation, and auto-merges when tests pass.
  • Always Up-to-Date: upstream releases are detected, packaged, and published without manual intervention.

Developer Guide: Adding an Exporter

Adding a new tool takes less than 1 minute using our Docker-first CLI tool.

Prerequisites

  • Docker - The only requirement! No Python installation needed.

1. Run the Creator Script

We provide ./devctl - a unified Docker-first CLI for all development tasks:

# Interactive mode (recommended)
./devctl create-exporter

# Or specify name and repository
./devctl create-exporter my_exporter --repo owner/repo --category System

This will automatically:

  • Create the directory structure (exporters/my_exporter/).
  • Generate a clean manifest.yaml.
  • Generate a standard README.md.

Alternative: If you prefer using Make:

make create-exporter

2. Customize manifest.yaml

Edit the generated file to match specific needs (binary names, config files). See manifest.reference.yaml for the full schema and all available options.
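As a hedged illustration (the authoritative schema lives in manifest.reference.yaml; the values below are invented), a minimal manifest might look like:

```yaml
# Illustrative only: field names are inferred from the variables used
# elsewhere in this README (name, build.binary_name, artifacts.docker.*,
# upstream.*). Consult manifest.reference.yaml for the real schema.
name: my_exporter
upstream:
  type: github            # assumption: a GitHub counterpart to the 'local' type
  repo: owner/repo
build:
  binary_name: my_exporter
artifacts:
  docker:
    entrypoint: ["/usr/bin/my_exporter"]
```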

3. Add Optional Assets

Place any configuration files or scripts in the assets/ folder and reference them in the manifest.

4. Template Overrides (Advanced)

If the default templates don't fit your needs, you can provide your own Jinja2 templates. The engine will automatically detect these and use them instead of the global defaults while still providing all dynamic variables from the manifest.

  • Custom RPM Spec: Place a template named <exporter_name>.spec.j2 in exporters/<exporter_name>/templates/.
  • Custom Dockerfile: Place a template named Dockerfile.j2 in exporters/<exporter_name>/templates/.

This allows for complex packaging logic (e.g., custom %post scripts in RPM or multi-stage builds in Docker) while keeping the benefit of automated versioning.

Example: Custom Dockerfile

To install extra packages in your container, create exporters/my_exporter/templates/Dockerfile.j2:

FROM {{ artifacts.docker.base_image }}

# Custom logic: Install specific tools
RUN microdnf install -y curl && microdnf clean all

# Standard logic (using variables from manifest)
COPY {{ build.binary_name }} /usr/bin/{{ name }}
ENTRYPOINT {{ artifacts.docker.entrypoint | tojson }}
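The override detection described above can be sketched as a simple path lookup. This helper is hypothetical (not the engine's actual code); only the directory conventions come from this guide:

```python
# Minimal sketch of template-override resolution: prefer
# exporters/<name>/templates/<file> when it exists, otherwise fall
# back to a global default template directory.
from pathlib import Path

def resolve_template(exporter: str, filename: str,
                     root: Path, defaults: Path) -> Path:
    """Return the per-exporter override if present, else the default."""
    override = root / "exporters" / exporter / "templates" / filename
    return override if override.exists() else defaults / filename
```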

5. Local Testing (Docker-First Workflow)

Test your exporter locally using our Docker-based development environment:

Quick Test (Recommended)

# Test build an exporter (generates artifacts + builds RPM + Docker image)
./devctl test-exporter node_exporter

# Test with specific options
./devctl test-exporter node_exporter --arch arm64 --el10

Or using Make:

make test-exporter EXPORTER=node_exporter

Individual Build Steps

Build artifacts only:

./devctl build-exporter node_exporter
# Output: build/node_exporter/

List all available exporters:

./devctl list-exporters

Advanced: Local Python Development

If you need to debug or develop the core engine locally:

# Install dependencies locally
make install

# Run tests locally
make local-test

# Run linter locally
make local-lint

Note: For most development tasks, the Docker workflow (./devctl) is recommended as it requires no Python installation and ensures consistency.

6. Custom/Proprietary Binaries

For proprietary or custom exporters not available on GitHub:

# Create exporter directory structure
mkdir -p exporters/my_exporter/assets

# Place your binary in the assets directory
cp /path/to/my_exporter exporters/my_exporter/assets/
chmod +x exporters/my_exporter/assets/my_exporter

# Create manifest interactively
./devctl create-exporter

Edit the generated manifest.yaml to use the local binary:

upstream:
  type: local
  local_binary: assets/my_exporter

Build and test:

./devctl test-exporter my_exporter

Note: Local sources are not tracked by the automatic version watcher. Update the version field manually when your binary changes.


Architecture

Project Structure

monitoring-hub/
├── config/              # Configuration files (Docker, linting, Python, docs)
├── requirements/        # Python dependencies (base, dev, docs)
├── scripts/             # Utility scripts
├── core/                # Core engine (builder, schema, templates, tests)
├── exporters/           # Exporter manifests (one per directory)
├── docs/                # MkDocs documentation
├── devctl               # Docker-first development CLI
├── Makefile             # Make commands (aliases to devctl)
└── manifest.reference.yaml  # Manifest schema reference

See config/README.md and requirements/README.md for detailed documentation.

How It Works

The "Magic" happens in the core/ engine:

  1. Smart Filter: Compares local manifests against the deployed catalog.json (State Management) to only rebuild what changed.
  2. Modular Engine (core/engine/):
    • Builder: Downloads binaries and orchestrates the build.
    • Schema: Validates YAML manifests (marshmallow).
    • State Manager: Handles the incremental build logic.
  3. Templater: Uses Jinja2 (with auto-escape enabled) to render .spec files and Dockerfiles.
  4. Publisher: A parallelized Matrix CI builds all targets and updates the YUM repository.
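The Smart Filter step can be illustrated with a short sketch. Assume catalog.json maps each exporter name to an entry with a version field (the exact schema is a guess based on this README); an exporter is rebuilt when its manifest version differs or it is not in the catalog yet:

```python
# Illustrative sketch of the "Smart Filter": compare local manifest
# versions against the deployed catalog.json and return only the
# exporters that actually need a rebuild.
import json
from pathlib import Path

def changed_exporters(manifests: dict, catalog_path: Path) -> list:
    """Return exporter names whose version differs from the deployed catalog."""
    deployed = json.loads(catalog_path.read_text()) if catalog_path.exists() else {}
    return [
        name for name, manifest in manifests.items()
        if deployed.get(name, {}).get("version") != manifest.get("version")
    ]
```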

V3 Catalog Architecture

Monitoring Hub uses a granular catalog structure with atomic writes to eliminate race conditions in parallel builds:

catalog/
├── node_exporter/
│   ├── rpm_amd64_el9.json           # Atomic: 1 job = 1 file
│   ├── rpm_amd64_el10.json
│   ├── rpm_arm64_el9.json
│   ├── deb_amd64_debian-12.json     # Universal DEB (Ubuntu 20.04+, Debian 11+)
│   ├── deb_arm64_debian-12.json     # Universal DEB (Ubuntu 20.04+, Debian 11+)
│   ├── docker.json
│   └── metadata.json                # Aggregated (generated on-demand)
└── catalog.json                     # Global index (backward compatible)

Key Benefits:

  • No Race Conditions: Each GitHub Actions job writes exactly 1 file - no concurrent writes to the same file
  • Format Versioning: All files have "format_version": "3.0" for future compatibility
  • On-Demand Aggregation: Granular files are aggregated into metadata.json at read-time by the portal
  • Atomic Publishing: Individual artifact failures don't corrupt the entire catalog
  • Faster Rebuilds: Only changed artifacts are regenerated

V3 Scripts:

  • generate_artifact_metadata.py - Generate atomic artifact JSON files (RPM/DEB/Docker)
  • aggregate_catalog_metadata.py - Aggregate granular files into metadata.json
  • publish_artifact_metadata.sh - Publish atomic files to gh-pages
  • site_generator_v2.py - Portal generation with V3 catalog support

See docs/architecture/refactoring-v2-plan.md for complete V3 implementation details.
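The read-time aggregation step can be sketched as follows. The merge logic and output schema here are assumptions (only the file layout and the "format_version": "3.0" field come from this README), not the actual aggregate_catalog_metadata.py implementation:

```python
# Rough sketch of on-demand aggregation: merge an exporter's granular
# per-artifact JSON files (rpm_*.json, deb_*.json, docker.json) into a
# single metadata document, skipping the aggregated output itself.
import json
from pathlib import Path

def aggregate_metadata(exporter_dir: Path) -> dict:
    """Combine every atomic artifact file into one metadata payload."""
    artifacts = {}
    for artifact_file in sorted(exporter_dir.glob("*.json")):
        if artifact_file.name == "metadata.json":
            continue  # never re-ingest the aggregated output
        artifacts[artifact_file.stem] = json.loads(artifact_file.read_text())
    return {"format_version": "3.0", "artifacts": artifacts}
```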

Distribution

YUM Repository (RPM)

# RHEL/CentOS/Rocky/Alma 8, 9, 10 (adjust 'el9' below to 'el8' or 'el10' to match your release)
sudo rpm --import https://sckyzo.github.io/monitoring-hub/RPM-GPG-KEY-monitoring-hub
sudo dnf config-manager --add-repo https://sckyzo.github.io/monitoring-hub/el9/$(arch)/
sudo dnf install <exporter_name>

APT Repository (DEB)

# Universal DEB packages compatible with Ubuntu 20.04+, Debian 11+
curl -fsSL https://sckyzo.github.io/monitoring-hub/apt/monitoring-hub.asc | \
  sudo gpg --dearmor -o /usr/share/keyrings/monitoring-hub.gpg

# Replace 'bookworm' with your distribution codename:
# - Ubuntu: focal (20.04), jammy (22.04), noble (24.04)
# - Debian: bullseye (11), bookworm (12), trixie (13)
echo "deb [signed-by=/usr/share/keyrings/monitoring-hub.gpg] \
  https://sckyzo.github.io/monitoring-hub/apt bookworm main" | \
  sudo tee /etc/apt/sources.list.d/monitoring-hub.list

sudo apt update && sudo apt install <exporter_name>

Container Registry (OCI)

docker pull ghcr.io/sckyzo/monitoring-hub/<exporter_name>:latest

Contributing

We welcome new exporters! Feel free to open a Pull Request following the guide above.


🔒 Security

Monitoring Hub takes security seriously. We implement multiple layers of protection:

Security Measures

  • Code Scanning: Automated Bandit security scanner on all PRs
  • Dependency Scanning: pip-audit and Dependabot for vulnerability detection
  • Container Scanning: Trivy scans all container images with SARIF upload to GitHub Security
  • YAML Validation: yamllint ensures configuration file integrity
  • Network Security: All HTTP requests include timeouts and exponential backoff retry logic
  • SSL/TLS Resilience: Automatic retry on SSL errors and connection failures
  • Template Security: Jinja2 templates use autoescape to prevent injection attacks
  • Input Validation: Strict manifest schema validation with marshmallow
  • Package Signing: GPG-signed RPM and DEB packages for integrity verification
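The timeout-and-backoff behavior described above can be sketched with the standard library alone. This helper is a hedged illustration, not the engine's actual retry code:

```python
# Minimal exponential-backoff sketch: retry a callable on failure,
# doubling the delay between attempts, and re-raise once retries
# are exhausted.
import time

def with_backoff(fn, retries=3, base_delay=0.5):
    """Call fn(), retrying on exception with exponentially growing delays."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```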

Reporting Vulnerabilities

If you discover a security vulnerability, please follow our Security Policy for responsible disclosure. Do not open public issues for security vulnerabilities.


License

Distributed under the MIT License. See LICENSE for more information.