Reaper is a lightweight Kubernetes container-less runtime that executes commands directly on cluster nodes without traditional container isolation. Think of it as a way to run host-native processes through Kubernetes' orchestration layer.
Reaper is an experimental, personal project built to explore what's possible with AI-assisted development. It is under continuous development with no stability guarantees — there is no assurance it will work correctly in your environment. No support of any kind is provided. Unless you fully understand what Reaper does and how it works, you probably don't want to run it. Use entirely at your own risk. That said, the code is open — feel free to read it and send PRs.
Reaper is a containerd shim that runs processes directly on the host system while integrating with Kubernetes' workload management. Unlike traditional container runtimes that provide isolation through namespaces and cgroups, Reaper intentionally runs processes with full host access.
What Reaper provides:
- Standard Kubernetes API (Pods, kubectl logs, kubectl exec)
- Process lifecycle management (start, stop, restart)
- Shared overlay filesystem for workload isolation from host changes
- Kubernetes volumes (ConfigMap, Secret, hostPath, emptyDir)
- Sensitive host file filtering (SSH keys, passwords, SSL keys)
- Interactive sessions (PTY support for `kubectl run -it` and `kubectl exec -it`)
- UID/GID switching with `securityContext`
- Per-pod configuration via Kubernetes annotations
- Custom Resource Definitions: ReaperPod (simplified workloads), ReaperOverlay (overlay lifecycle), ReaperDaemonJob (node-wide config tasks)
- Helm chart for one-command installation (`helm install`)
What Reaper does NOT provide:
- Container isolation (namespaces, cgroups)
- Resource limits (CPU, memory)
- Network isolation (uses host networking)
- Container image pulling
Use cases: privileged system utilities, cluster maintenance, legacy host-level applications, HPC workloads, development and debugging workflows.
Spin up a 3-node Kind cluster with Reaper pre-installed:
```shell
# Build from source (compiles inside Docker — no local Rust toolchain needed)
./scripts/setup-playground.sh

# Or use pre-built images from the latest release (no build)
./scripts/setup-playground.sh --release
```

Once ready:
```shell
kubectl apply -f - <<'YAML'
apiVersion: reaper.io/v1alpha1
kind: ReaperPod
metadata:
  name: hello
spec:
  command: ["/bin/sh", "-c", "echo Hello from $(hostname) && uname -a"]
YAML
```
```shell
kubectl logs hello
```

To tear down:

```shell
./scripts/setup-playground.sh --cleanup
```
Install the Helm chart into an existing cluster:

```shell
helm upgrade --install reaper deploy/helm/reaper/ \
  --namespace reaper-system --create-namespace \
  --wait --timeout 120s
```

To build from source:

```shell
git clone https://github.com/miguelgila/reaper && cd reaper
cargo build --release
```

For cross-compilation to Linux (from macOS), see docs/DEVELOPMENT.md.
```yaml
apiVersion: reaper.io/v1alpha1
kind: ReaperPod
metadata:
  name: my-task
spec:
  command: ["/bin/sh", "-c", "echo Hello from host && uname -a"]
```

```shell
kubectl apply -f my-task.yaml
kubectl logs my-task
kubectl get reaperpods
```

Mounting a ConfigMap as a volume:

```yaml
apiVersion: reaper.io/v1alpha1
kind: ReaperPod
metadata:
  name: config-reader
spec:
  command: ["/bin/sh", "-c", "cat /config/settings.yaml"]
  volumes:
    - name: config
      mountPath: /config
      configMap: "my-config"
      readOnly: true
```

Pinning a workload to labeled nodes with a nodeSelector:

```yaml
apiVersion: reaper.io/v1alpha1
kind: ReaperPod
metadata:
  name: compute-task
spec:
  command: ["/bin/sh", "-c", "echo Running on $(hostname)"]
  nodeSelector:
    workload-type: compute
```

You can also use standard Kubernetes Pods with `runtimeClassName: reaper-v2` directly. This gives access to the full Pod API (interactive sessions, DaemonSets, Deployments, etc.). See the Quick Start guide for details and Pod Compatibility for supported fields.
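As a sketch, a standard Pod routed through Reaper only needs `runtimeClassName: reaper-v2` (the pod name here is illustrative, and since Reaper does not pull container images, the `image` field is presumably just a placeholder required by the Pod API):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-task   # illustrative name
spec:
  runtimeClassName: reaper-v2
  containers:
    - name: main
      image: busybox   # required by the Pod API; Reaper does not pull images
      command: ["/bin/sh", "-c", "uname -a"]
```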
```
Kubernetes/containerd
    ↓ (ttrpc)
containerd-shim-reaper-v2 (shim binary)
    ↓ (exec: create/start/state/delete)
reaper-runtime (OCI runtime binary)
    ↓ (fork + spawn)
monitoring daemon → workload process
```
- Fork-first architecture: Daemon monitors workload, captures real exit codes
- Shared overlay filesystem: Writable layer per K8s namespace (host root is read-only)
- PTY support: Interactive containers with `kubectl run -it` and `kubectl exec -it`
For architecture details, see docs/SHIMV2_DESIGN.md and docs/OVERLAY_DESIGN.md.
Reaper reads configuration from /etc/reaper/reaper.conf on each node. Per-pod overrides are available via Kubernetes annotations:
```yaml
annotations:
  reaper.runtime/dns-mode: "kubernetes"
  reaper.runtime/overlay-name: "my-group"
```

See docs/CONFIGURATION.md for the full reference.
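As a sketch of where these land, annotations go under a Pod's `metadata` (the pod name and placeholder image below are illustrative, not from the Reaper docs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotated-task   # illustrative name
  annotations:
    reaper.runtime/dns-mode: "kubernetes"
    reaper.runtime/overlay-name: "my-group"
spec:
  runtimeClassName: reaper-v2
  containers:
    - name: main
      image: busybox   # placeholder; Reaper does not pull images
      command: ["/bin/sh", "-c", "sleep 60"]
```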
The examples/ directory contains runnable demos, each with a setup.sh that creates a Kind cluster with Reaper pre-installed:
| Example | Description |
|---|---|
| 01-scheduling | DaemonSets on all nodes vs. a labeled subset |
| 02-client-server | TCP server + clients across nodes via host networking |
| 03-client-server-runas | Client-server running as a shared non-root user |
| 04-volumes | Kubernetes volume mounts (ConfigMap, Secret, hostPath, emptyDir) |
| 05-kubemix | Jobs, DaemonSets, and Deployments on a 10-node cluster |
| 06-ansible-jobs | Sequential Jobs: install Ansible, then run a playbook |
| 07-ansible-complex | DaemonSet bootstrap + role-based Ansible playbooks |
| 08-mix-container-runtime-engines | Mixed runtimes: OpenLDAP (default) + SSSD (Reaper) |
| 09-reaperpod | ReaperPod CRD: simplified Reaper-native workloads |
| 10-slurm-hpc | Slurm HPC: containerized scheduler + Reaper worker daemons |
| 11-node-monitoring | Prometheus node_exporter (Reaper) + Prometheus server |
| Document | Description |
|---|---|
| CONFIGURATION.md | Node config, environment variables, pod annotations |
| COMPATIBILITY.md | Pod field compatibility reference |
| SHIMV2_DESIGN.md | Shim v2 protocol implementation |
| OVERLAY_DESIGN.md | Overlay filesystem design |
| DEVELOPMENT.md | Development setup, tooling, contributing |
| TESTING.md | Testing guide (unit, integration, coverage) |
| CONTRIBUTING.md | Contributing guidelines |
| examples/README.md | Runnable examples with Kind clusters |
Runtime (cluster nodes): Linux kernel with overlayfs (3.18+), Kubernetes with containerd, root access on nodes.
Playground: Docker, kind, kubectl, Helm.
Building from source: All of the above, plus Rust (version pinned in rust-toolchain.toml).
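A quick way to check the overlayfs prerequisite on a node is to inspect the kernel's registered filesystems (a sketch; `overlay` may also load on demand as a module):

```shell
# Check whether the kernel has overlayfs registered.
if grep -qw overlay /proc/filesystems; then
  echo "overlayfs: available"
else
  echo "overlayfs: not registered (it may still load via 'modprobe overlay')"
fi
```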
```shell
cargo test                            # Unit tests (fast)
./scripts/run-integration-tests.sh    # Full integration tests (Kubernetes)
```

See docs/TESTING.md for the complete guide.
- "write on closed stream 0" warning on interactive exit: Cosmetic race condition in containerd's CRI streaming handler during FIFO teardown. Does not affect workload exit code or pod status.
See docs/DEVELOPMENT.md and docs/CONTRIBUTING.md.
MIT