This project implements a Kubernetes operator that allows you to specify a TTL (Time To Live) for an object; once that time passes, the object is deleted. The operator dynamically deploys a controller for each Group-Version-Kind (GVK) you configure it to monitor. Each controller watches and manages resources of its assigned GVK, enabling scalable lease management across multiple resource types. The service account for each controller is granted a role that allows it to manage only the specified GVK.
- Deploys as an operator.
- Dynamically deploys a controller for each configured GVK.
- Each controller manages a single GVK, improving scalability.
- Leader election support for high availability.
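For illustration, here is a minimal sketch of the narrowly scoped RBAC that "manage the specified GVK only" implies, using the `Application` GVK from the example below. The resource name and verb list are assumptions; the operator's generated names and rules may differ:

```yaml
# Hypothetical sketch of a per-GVK role granted to a controller's
# service account: it can act on Applications and nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: application-controller   # hypothetical name
rules:
  - apiGroups: ["startpunkt.ullberg.us"]
    resources: ["applications"]
    verbs: ["get", "list", "watch", "update", "patch", "delete"]
```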
The operator is designed to be highly extensible and scalable. Once deployed, the operator watches for `LeaseController` resources and, for each GVK they specify, launches a dedicated controller.
Each controller:
- Watches for changes to resources of its GVK.
- Manages leases of objects associated with those resources.
- Ensures lease lifecycle (renewal, expiration) is handled appropriately.
```yaml
---
apiVersion: object-lease-controller.ullberg.io/v1
kind: LeaseController
metadata:
  name: application-controller
spec:
  group: "startpunkt.ullberg.us"
  version: "v1alpha2"
  kind:
    singular: "Application"
    plural: "Applications"
---
apiVersion: object-lease-controller.ullberg.io/v1
kind: LeaseController
metadata:
  name: deployment-controller
spec:
  group: ""
  version: "v1"
  kind:
    singular: "Deployment"
    plural: "Deployments"
```
```yaml
apiVersion: startpunkt.ullberg.us/v1alpha2
kind: Application
metadata:
  name: google
  annotations:
    object-lease-controller.ullberg.io/ttl: "30m"
spec:
  name: Google
  url: https://google.com
```
The `object-lease-controller.ullberg.io/ttl` annotation configures how long the object lives before it is deleted.
```sh
kubectl annotate pod test object-lease-controller.ullberg.io/ttl=1h30m
```
You can specify the time in hours, minutes, days, weeks, etc.
| Value   | Description       |
|---------|-------------------|
| `2d`    | 2 days            |
| `1h30m` | 1 hour 30 minutes |
| `5m`    | 5 minutes         |
| `1w`    | 1 week            |
| `3h`    | 3 hours           |
| `10s`   | 10 seconds        |
`object-lease-controller.ullberg.io/lease-start`: an RFC3339 UTC timestamp; the single source of truth for when the lease started.
Controller behavior:
- If `ttl` exists and `lease-start` is missing or invalid, the controller sets `lease-start` to now.
- To extend a lease, delete `lease-start`. The controller sets it to now on the next reconcile.
- You can set `lease-start` explicitly to backdate or align with an external clock.
Examples:
```sh
# Extend now by resetting the start
kubectl annotate pod test object-lease-controller.ullberg.io/lease-start- --overwrite

# Set a specific start time
kubectl annotate pod test object-lease-controller.ullberg.io/lease-start=2025-01-01T12:00:00Z --overwrite
```
`object-lease-controller.ullberg.io/expire-at`: set by the controller. RFC3339 UTC timestamp for when the object will expire. Safe for dashboards to read.
`object-lease-controller.ullberg.io/lease-status`: set by the controller. Human-readable status or validation errors.
Remove `ttl` to stop lease management. The controller clears `lease-start`, `expire-at`, and `lease-status`.
```sh
kubectl annotate pod test object-lease-controller.ullberg.io/ttl-
```
- Automatically manage leases for custom resources (e.g., Applications, Databases, Services)
- Enforce expiration policies
- Integrate with external systems for lease validation or renewal
```sh
make build
```
```sh
# Example: monitor multiple GVKs by running multiple controllers
./bin/lease-controller -group startpunkt.ullberg.us -kind Application -version v1alpha2 -leader-elect -leader-elect-namespace default
./bin/lease-controller -group another.group -kind AnotherKind -version v1beta1 -leader-elect -leader-elect-namespace default
```
```sh
cd object-lease-operator
make run
```
By adding a catalog source to the cluster, you can install and manage the operator through the regular OperatorHub interface.
NOTE: The catalog is currently a preview feature and is being finalized. (See #35)
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: object-lease-operator-catalog
  namespace: openshift-marketplace
spec:
  displayName: Object Lease Operator Catalog
  image: 'ghcr.io/ullbergm/object-lease-operator-catalog:latest'
  publisher: Magnus Ullberg
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 5m
```
Using ServiceMonitor objects, the Prometheus metrics are ingested into the OpenShift monitoring platform and can be used for monitoring, alerting, and reporting.
If you install the console plugin, a menu option is added that allows administrators to view all the leases configured in the cluster.
```sh
kubectl -n object-lease-operator-system apply -k object-lease-console-plugin/k8s
```
- Add `ttl` to start management. The controller sets `lease-start` if missing.
- Delete `lease-start` to extend from now.
- Optionally set `lease-start` to a specific RFC3339 UTC time.
- Delete `ttl` to stop management. The controller removes lease annotations.
- Reconcile filters only react to changes in `ttl` and `lease-start`.
- The controller computes `expire-at` from `lease-start + ttl` and requeues until expiry.
If you want OpenShift User Workload Monitoring (UWM) to scrape your ServiceMonitor, ensure UWM is enabled and your operator’s namespace participates. Example:
```sh
# Enable UWM cluster-wide
oc -n openshift-monitoring patch configmap cluster-monitoring-config \
  --type merge -p '{"data":{"config.yaml":"enableUserWorkload: true\n"}}'

# Label your namespace if needed
oc label namespace <ns> 'openshift.io/user-monitoring=true' --overwrite
```
Details on enabling UWM and namespace participation are documented by Red Hat.
That is all you need. Your counters and histogram are already registered on the default registry, so Prometheus will scrape them from `/metrics` on the ServiceMonitor endpoint.