# OpenAwareness Controller

A Kubernetes operator for managing Grafana Mimir Prometheus rules and Alertmanager configurations in a multi-tenant environment.
The OpenAwareness Controller provides Kubernetes-native management of:
- Prometheus recording and alerting rules via Grafana Mimir
- Alertmanager configurations for notification routing and templates
- Multi-tenant support for isolated alert management
- DevOps pipeline integration for automated alert configuration
This operator bridges the gap between Kubernetes-native PrometheusRule resources and Grafana Mimir's API, enabling GitOps-style management of alerting infrastructure.
## ClientConfig

Defines connection settings for Mimir or Prometheus instances.

```yaml
apiVersion: openawareness.syndlex/v1beta1
kind: ClientConfig
metadata:
  name: mimir-client
spec:
  address: "https://mimir.example.com"
  type: Mimir
```

## MimirAlertTenant

Manages Alertmanager configurations for a specific tenant in Grafana Mimir.
```yaml
apiVersion: openawareness.syndlex/v1beta1
kind: MimirAlertTenant
metadata:
  name: team-alerts
  annotations:
    openawareness.io/client-name: "mimir-client"
    openawareness.io/mimir-tenant: "devops-team"
spec:
  templateFiles:
    default_template: |
      {{ define "__alertmanager" }}AlertManager{{ end }}
      {{ define "__alertmanagerURL" }}{{ .ExternalURL }}/#/alerts?receiver={{ .Receiver | urlquery }}{{ end }}
  alertmanagerConfig: |
    global:
      smtp_smarthost: 'localhost:25'
      smtp_from: 'alerts@example.org'
    templates:
      - 'default_template'
    route:
      receiver: 'default-receiver'
      group_by: ['alertname', 'cluster', 'service']
      group_wait: 10s
      group_interval: 10s
      repeat_interval: 12h
      routes:
        - match:
            severity: critical
          receiver: 'critical-alerts'
    receivers:
      - name: 'default-receiver'
        email_configs:
          - to: 'team@example.org'
      - name: 'critical-alerts'
        email_configs:
          - to: 'oncall@example.org'
```

## PrometheusRule

The controller automatically syncs standard Kubernetes PrometheusRule resources to Grafana Mimir.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules
  annotations:
    openawareness.io/client-name: "mimir-client"
    openawareness.io/mimir-tenant: "devops-team"
spec:
  groups:
    - name: example
      rules:
        - alert: HighErrorRate
          expr: rate(http_errors_total[5m]) > 0.05
          labels:
            severity: critical
          annotations:
            summary: "High error rate detected"
```

## Prerequisites

- Go version v1.23.0+
- Docker version 17.03+
- kubectl version v1.11.3+
- Access to a Kubernetes v1.11.3+ cluster
- Grafana Mimir instance with API access
## Installation

Install the controller with Helm:

```shell
helm install openawareness oci://ghcr.io/syndlex/charts/openawareness-controller \
  --version 0.1.0 \
  --namespace openawareness-system \
  --create-namespace
```

Install with custom values:

```shell
# Create a values file
cat > my-values.yaml <<EOF
replicaCount: 2
resources:
  limits:
    cpu: 1000m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
EOF

# Install with custom values
helm install openawareness oci://ghcr.io/syndlex/charts/openawareness-controller \
  --version 0.1.0 \
  --namespace openawareness-system \
  --create-namespace \
  --values my-values.yaml
```

Upgrade to a new chart version:

```shell
helm upgrade openawareness oci://ghcr.io/syndlex/charts/openawareness-controller \
  --version <new-version> \
  --namespace openawareness-system
```

Install the CRDs:

```shell
make install
```

### Option A: Deploy to MicroK8s (recommended for local development)
The Makefile includes targets for deploying to MicroK8s with its local registry:

```shell
# Build, push to MicroK8s registry, and deploy in one command
make microk8s-deploy
```

This command will:

- Build the container image
- Push it to the MicroK8s registry at `localhost:32000`
- Deploy the controller to your MicroK8s cluster

### Option B: Deploy to an external registry

Build and push your image to an external registry:

```shell
make docker-build docker-push IMG=<your-registry>/openawareness-controller:tag
```

Deploy to the cluster:

```shell
make deploy IMG=<your-registry>/openawareness-controller:tag
```

Available Make targets for MicroK8s:

- `make microk8s-build` - Build the image for the MicroK8s registry
- `make microk8s-push` - Build and push the image to the MicroK8s registry
- `make microk8s-deploy` - Build, push, and deploy to the MicroK8s cluster
## Usage

First, create a ClientConfig to connect to your Mimir instance:

```shell
kubectl apply -f - <<EOF
apiVersion: openawareness.syndlex/v1beta1
kind: ClientConfig
metadata:
  name: mimir-client
spec:
  address: "https://your-mimir-instance.example.com"
  type: Mimir
EOF
```

Create a MimirAlertTenant to configure Alertmanager for your tenant:

```shell
kubectl apply -f config/samples/openawareness_v1beta1_mimiralerttenant.yaml
```

Create PrometheusRule resources with the appropriate annotations:

```shell
kubectl apply -f config/samples/promrule.yaml
```

The controller uses annotations to determine routing and tenant isolation:

- `openawareness.io/client-name`: References the ClientConfig to use for API calls
- `openawareness.io/mimir-tenant`: Specifies the Mimir tenant/namespace
The MimirAlertTenant CRD supports:

- **Template Files** (optional): Go templates for notification formatting
  - Define custom templates for email, Slack, PagerDuty, etc.
  - Reference templates in the `alertmanagerConfig`
- **Alertmanager Config** (required): Full Alertmanager configuration in YAML
  - Global settings (SMTP, Slack, PagerDuty, etc.)
  - Routing tree with matchers
  - Receivers with notification integrations
  - Inhibition rules

See the Grafana Mimir Alertmanager API documentation for detailed configuration options.
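The examples in this README do not show inhibition rules; as an illustrative sketch, an `alertmanagerConfig` with inhibition might look like the fragment below (field names follow the upstream Alertmanager config schema, and the matcher values are hypothetical):

```yaml
# Illustrative fragment only; matcher values are hypothetical.
alertmanagerConfig: |
  route:
    receiver: 'default-receiver'
  receivers:
    - name: 'default-receiver'
  inhibit_rules:
    # Suppress warning-level alerts while a critical alert with the
    # same alertname and cluster labels is firing.
    - source_matchers:
        - severity="critical"
      target_matchers:
        - severity="warning"
      equal: ['alertname', 'cluster']
```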
- Separate configuration from secrets: Keep sensitive data (API keys, webhook URLs) in Secrets
- Environment-specific configs: Use different values for dev/staging/prod
- Shared configuration: Reuse common settings across multiple alert tenants
- GitOps-friendly: Store configuration templates in Git and keep secrets elsewhere. Never commit credentials to Git!
```yaml
apiVersion: openawareness.syndlex/v1beta1
kind: MimirAlertTenant
metadata:
  name: devops-alerts
  annotations:
    openawareness.io/client-name: "mimir-client"
    openawareness.io/mimir-tenant: "devops-team"
spec:
  # Reference ConfigMaps and Secrets for template variables
  secretDataReferences:
    - name: alert-smtp-config
      kind: ConfigMap
    - name: alert-webhooks
      kind: Secret
      optional: true
  # Use Go template syntax with [[ ]] delimiters to inject values
  # Note: Alertmanager's own {{ }} templates are preserved
  alertmanagerConfig: |
    global:
      smtp_smarthost: '[[ .SMTP_HOST ]]'
      smtp_from: '[[ .SMTP_FROM ]]'
    receivers:
      - name: 'critical'
        email_configs:
          - to: '[[ .ONCALL_EMAIL ]]'
        [[- if .SLACK_WEBHOOK_URL ]]
        slack_configs:
          - api_url: '[[ .SLACK_WEBHOOK_URL ]]'
            # Alertmanager templates use {{ }}
            text: 'Alert: {{ .GroupLabels.alertname }}'
        [[- end ]]
```

Templating features:

- Variable substitution: `[[ .VARIABLE_NAME ]]` (uses `[[ ]]` to avoid conflicts with Alertmanager or Helm templates)
- Default values: `[[ .VAR | default "fallback" ]]`
- Conditional sections: `[[- if .VAR ]]...[[- end ]]`
- Multiple data sources: Reference multiple ConfigMaps and Secrets
- Optional references: Mark references as optional to avoid failures when they are missing
- Alertmanager templates preserved: Native Alertmanager `{{ }}` templates are passed through unchanged
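The variables used above (`[[ .SMTP_HOST ]]`, `[[ .ONCALL_EMAIL ]]`, `[[ .SLACK_WEBHOOK_URL ]]`) are resolved from the referenced objects. A sketch of what those data sources might contain, where all keys and values are illustrative placeholders:

```yaml
# Hypothetical backing objects for the secretDataReferences above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: alert-smtp-config
data:
  SMTP_HOST: "smtp.example.org:587"
  SMTP_FROM: "alerts@example.org"
  ONCALL_EMAIL: "oncall@example.org"
---
apiVersion: v1
kind: Secret
metadata:
  name: alert-webhooks
type: Opaque
stringData:
  SLACK_WEBHOOK_URL: "https://hooks.slack.com/services/T000/B000/XXXX"
```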
See the examples/templating directory for complete examples:
- Basic ConfigMap templating
- Using Secrets for sensitive data
- Environment-specific configurations
- Conditional receiver configurations
For detailed documentation, see examples/templating/README.md.
## Testing

Run unit tests with code coverage:

```shell
make test
```

Run e2e tests against a running Kubernetes cluster:

```shell
make test-e2e
```

This command will automatically:

- Check and switch to the microk8s context if available
- Install Mimir via Helm if not already installed
- Create a service alias for e2e test access
- Run the full e2e test suite

Prerequisites for e2e tests:

- A MicroK8s cluster configured as a kubectl context
- Helm 3.x installed
- kubectl available in PATH

Running specific e2e tests:

```shell
# Run only MimirAlertTenant tests
ginkgo --focus="MimirAlertTenant E2E" test/e2e

# Run with verbose output
ginkgo -v test/e2e
```

Manual setup (if needed):

```shell
# Switch to microk8s context
make ensure-microk8s-context

# Install Mimir
make ensure-mimir
```

See test/e2e/README.md for detailed e2e test documentation and troubleshooting.
Run, lint, and build locally:

```shell
make run
make lint
make build
```

## Architecture

The controller watches for:
- PrometheusRule resources and syncs them to Mimir as rule groups
- MimirAlertTenant resources and configures Alertmanager settings
- ClientConfig resources to manage Mimir API connections
Each controller:
- Uses finalizers to ensure proper cleanup
- Validates configurations before applying
- Provides structured logging for debugging
- Supports multi-tenancy through annotations
The controller supports multi-tenant deployments:
- Use the `openawareness.io/mimir-tenant` annotation to specify tenants
- Each tenant has isolated alert rules and Alertmanager configurations
- ClientConfigs can be shared across tenants or isolated per-tenant
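For example, two teams could share a single ClientConfig while keeping their rules in separate Mimir tenants (team and tenant names below are hypothetical):

```yaml
# Two PrometheusRule resources: same ClientConfig, different tenants.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: team-a-rules
  annotations:
    openawareness.io/client-name: "mimir-client"  # shared connection
    openawareness.io/mimir-tenant: "team-a"       # isolated tenant
spec:
  groups: []
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: team-b-rules
  annotations:
    openawareness.io/client-name: "mimir-client"
    openawareness.io/mimir-tenant: "team-b"
spec:
  groups: []
```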
The controller is designed for DevOps workflows:
- GitOps: Store PrometheusRules and MimirAlertTenants in Git
- CI/CD: Automatically deploy alert configurations via pipelines
- Alert Management: Version control your alerting strategy
- Team Isolation: Use tenants to separate team alerts
- Notification Routing: Configure per-team notification channels
## Troubleshooting

Check controller logs, installed CRDs, and resource status:

```shell
kubectl logs -n openawareness-controller-system deployment/openawareness-controller-controller-manager
kubectl get crd | grep openawareness
kubectl describe mimiralerttenant <name>
kubectl describe prometheusrule <name>
```

Common issues:

- Rules not appearing in Mimir: Check that the `openawareness.io/client-name` annotation references an existing ClientConfig
- Alertmanager config errors: Validate the YAML syntax in the `alertmanagerConfig` field
- Connection errors: Verify the Mimir endpoint in the ClientConfig and network connectivity
The Helm chart is automatically generated from the Kustomize manifests using helmify:

```shell
# Generate Helm chart
make helm

# Lint Helm chart
make helm-lint

# Package Helm chart
make helm-package

# Install chart locally for testing
make helm-install

# Uninstall chart
make helm-uninstall
```

Chart structure:

```
chart/openawareness-controller/
├── Chart.yaml           # Chart metadata
├── values.yaml          # Default values (auto-generated)
├── .helmignore          # Files to ignore
├── crds/                # CRD definitions
│   ├── clientconfigs-crd.yaml
│   └── mimiralerttenants-crd.yaml
└── templates/           # Kubernetes manifests
    ├── deployment.yaml
    ├── service.yaml
    ├── serviceaccount.yaml
    └── ...
```
When you modify Kubernetes manifests in config/, regenerate the Helm chart:

```shell
# Make changes to config/ files
vim config/manager/manager.yaml

# Regenerate chart
make helm

# Test the updated chart
make helm-lint
make helm-install
```

## Uninstall

Remove the Helm release:

```shell
helm uninstall openawareness --namespace openawareness-system
```

Delete sample resources:

```shell
kubectl delete -k config/samples/
```

Undeploy the controller:

```shell
make undeploy
```

Remove the CRDs:

```shell
make uninstall
```

## Contributing

Contributions are welcome! Please:
- Follow the Cline Rules
- Write tests for new features (TDD approach)
- Use conventional commits for commit messages
- Run linters and tests before submitting PRs
- Update documentation for new features
