Manage UptimeRobot monitors as Kubernetes resources. Automatic drift detection, self-healing, and GitOps-ready.
- Declarative monitor configuration via CRDs
- Drift detection and automatic correction
- All monitor types: HTTPS, Keyword, Ping, Port, Heartbeat, DNS
- Maintenance window scheduling
- Alert contact management
- Adopt existing monitors - Migrate monitors created outside Kubernetes without losing history
- Prometheus metrics - API performance, reconciliation duration, error tracking
All images are:
- Signed with Cosign — Keyless signing via GitHub Actions OpenID Connect (OIDC)
- Scanned for vulnerabilities — Trivy scanning; critical/high severity blocks the build
- SBOM included — Software Bill of Materials (SBOM) in SPDX and CycloneDX formats
See SECURITY.md for verification instructions and deployment best practices.
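Signatures can be checked before deployment. A minimal sketch with standard cosign keyless-verification flags; the image path and identity regexp below are assumptions, so confirm the exact values in SECURITY.md:

```bash
# Verify the keyless Cosign signature produced by GitHub Actions OIDC
# (image reference and certificate identity are illustrative, not confirmed)
cosign verify \
  --certificate-identity-regexp 'https://github.com/joelp172/uptime-robot-operator/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  ghcr.io/joelp172/uptime-robot-operator:latest
```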
Install the operator:

```bash
kubectl apply -f https://github.com/joelp172/uptime-robot-operator/releases/latest/download/install.yaml
```

Create your first monitor:
```bash
# Store your API key
kubectl create secret generic uptimerobot-api-key \
  --namespace uptime-robot-system \
  --from-literal=apiKey=YOUR_API_KEY

# Configure account
kubectl apply -f - <<EOF
apiVersion: uptimerobot.com/v1alpha1
kind: Account
metadata:
  name: default
spec:
  isDefault: true
  apiKeySecretRef:
    name: uptimerobot-api-key
    key: apiKey
EOF
```
```bash
# Get your contact ID
kubectl get account default -o jsonpath='{.status.alertContacts[0].id}'

# Create contact (replace YOUR_CONTACT_ID)
kubectl apply -f - <<EOF
apiVersion: uptimerobot.com/v1alpha1
kind: Contact
metadata:
  name: default
spec:
  isDefault: true
  contact:
    id: "YOUR_CONTACT_ID"
EOF
```
```bash
# Create monitor
kubectl apply -f - <<EOF
apiVersion: uptimerobot.com/v1alpha1
kind: Monitor
metadata:
  name: my-website
spec:
  monitor:
    name: My Website
    url: https://example.com
    interval: 5m
EOF
```

| Document | Purpose |
|---|---|
| Installation | Install via kubectl or Helm |
| Getting Started | Create your first monitor (tutorial) |
| Security | Verify images and deployment best practices |
| Monitors | Configure monitor types and alerts |
| Metrics | Prometheus metrics and Grafana dashboards |
| Migration Guide | Adopt existing UptimeRobot resources |
| Maintenance Windows | Schedule planned downtime |
| Architecture | System architecture and data flows |
| Troubleshooting | Diagnose and fix common issues |
| API Reference | Complete CRD field reference |
| Development | Contributing and testing |
| Type | Use Case |
|---|---|
| HTTPS | HTTP/HTTPS endpoints |
| Keyword | Page content verification |
| Ping | ICMP availability |
| Port | TCP port connectivity |
| Heartbeat | Cron jobs and scheduled tasks |
| DNS | DNS record validation |
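Other types follow the same Monitor shape as the HTTPS quick-start example. A hedged sketch of a Keyword monitor; the `type`, `keywordType`, and `keywordValue` field names are assumptions here, so check the API Reference for the authoritative schema:

```yaml
apiVersion: uptimerobot.com/v1alpha1
kind: Monitor
metadata:
  name: homepage-keyword
spec:
  monitor:
    name: Homepage Keyword Check
    url: https://example.com
    type: Keyword          # assumed field selecting the monitor type
    keywordType: exists    # assumed: alert on presence/absence of the keyword
    keywordValue: "Welcome"
    interval: 5m
```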
The operator reconciles Monitor resources with UptimeRobot via the API. It detects drift (manual changes made directly in UptimeRobot) and corrects it to match your Kubernetes configuration. When you delete a Monitor resource, the operator removes the corresponding monitor from UptimeRobot (configurable via the prune field).
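For example, to keep the remote monitor (and its history) when the Kubernetes resource is deleted, pruning can be disabled. A sketch only; the exact location of the `prune` field within the spec is documented in the API Reference:

```yaml
apiVersion: uptimerobot.com/v1alpha1
kind: Monitor
metadata:
  name: my-website
spec:
  prune: false  # assumed placement: keep the UptimeRobot monitor on resource deletion
  monitor:
    name: My Website
    url: https://example.com
    interval: 5m
```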
The operator exposes custom Prometheus metrics for monitoring API performance, reconciliation behavior, and resource health:
- API Metrics: Request rate, latency percentiles, error rate, retry patterns
- Reconciliation Metrics: Duration, error rate by controller and error type
- Resource Metrics: Monitor counts by type/status, maintenance windows, monitor groups
See docs/metrics.md for complete documentation and a sample Grafana dashboard.
```bash
# Enable metrics endpoint (chart/manifests default to :8443; :8080 is also valid)
kubectl patch deployment -n uptime-robot-system uptime-robot-controller-manager \
  --type=json -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--metrics-bind-address=:8443"}]'

# Access metrics (:8443 is typically served over TLS; use plain http on :8080 if configured)
kubectl port-forward -n uptime-robot-system deployment/uptime-robot-controller-manager 8443:8443
curl -k https://localhost:8443/metrics | grep uptimerobot_
```

See CONTRIBUTING.md for development setup and PR guidelines.
Apache License 2.0 - see LICENSE