- Node.js and npm installed on your system.
- Docker installed on your system.
- Helm installed on your system.
- If you need to install Helm, please follow the official installation guide.
- A Kubernetes cluster with KEDA installed.
  - ⚠️ Note: If you want to enable support for Prometheus metrics exposed by KEDA, refer to the optional section below before installing KEDA.
  - If you need to install KEDA directly in your cluster without any additional configuration, please follow the official installation guide.
- Prometheus installed in your Kubernetes cluster.
- If you need to install Prometheus in your cluster, please follow the official installation guide of the kube-prometheus-stack Helm chart on ArtifactHUB.
- A Kubernetes cluster with KEDA installed using the flags below, which enable Prometheus metrics export:

  ```shell
  helm install keda kedacore/keda \
    --namespace keda \
    --create-namespace \
    --set prometheus.operator.enabled=true \
    --set prometheus.operator.serviceMonitor.enabled=true \
    --set prometheus.operator.serviceMonitor.interval="10s" \
    --set prometheus.operator.serviceMonitor.additionalLabels.release="prometheus"
  ```
- Clone the plugins repository:

  ```shell
  git clone https://github.com/headlamp-k8s/plugins.git
  ```

- Switch to the `keda` branch:

  ```shell
  git checkout keda
  ```

- Navigate to the keda plugin directory:

  ```shell
  cd keda
  ```

- Install the required dependencies:

  ```shell
  npm install
  ```

- Start the plugin in development mode:

  ```shell
  npm run start
  ```

- Launch Headlamp. You should now see "KEDA" in the sidebar.
Before deploying the examples, build the message consumer Docker image using the provided Dockerfile:
```shell
# Navigate to the 'test-files' directory
cd test-files

# Build & push the image to ttl.sh for temporary hosting.
# Images on ttl.sh expire after 24 hours by default;
# to specify a different expiration time, add a tag suffix such as ":1h" for 1 hour.
IMAGE_NAME=$(uuidgen)
docker build -t ttl.sh/${IMAGE_NAME}:1h .
docker push ttl.sh/${IMAGE_NAME}:1h
```

Take note of your image name, as you'll need to update it in the deployment YAML files.
Update the image reference in the deployment YAML files. For example, replace

```yaml
image: some-registry/keda-message-consumer:latest
```

with

```yaml
image: ttl.sh/36733c85-1c5e-470b-bb86-f9b93b6d8d82:1h # Replace with your actual image name
```

Since the Helm stable repository was migrated to the Bitnami repository, add the Bitnami repo and use it during the installation:
```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
```

With RabbitMQ Helm chart version 7.0.0 or later, install RabbitMQ with:

```shell
helm install rabbitmq --set auth.username=user --set auth.password=PASSWORD bitnami/rabbitmq --wait
```

Notes:
- If you are running the RabbitMQ image on KinD, you will run into permission issues unless you set `volumePermissions.enabled=true`. Use the following command if you are using KinD:

  ```shell
  helm install rabbitmq --set auth.username=user --set auth.password=PASSWORD --set volumePermissions.enabled=true bitnami/rabbitmq --wait
  ```
- With RabbitMQ Helm chart version 6.x.x or earlier, the username and password should be specified with the `rabbitmq.username` and `rabbitmq.password` parameters (see https://hub.helm.sh/charts/bitnami/rabbitmq).

Install Redis using the Bitnami Redis Helm chart:

```shell
helm install redis --set auth.password=PASSWORD bitnami/redis --wait
```

Notes:
- If you are running the Redis image on KinD, you may need to set `volumePermissions.enabled=true`:

  ```shell
  helm install redis --set auth.password=PASSWORD --set volumePermissions.enabled=true bitnami/redis --wait
  ```
Verify that the pods are running:

```shell
kubectl get pods
```

```
NAME              READY   STATUS    RESTARTS   AGE
rabbitmq-0        1/1     Running   0          3m3s
redis-master-0    1/1     Running   0          2m45s
redis-replica-0   1/1     Running   0          2m45s
redis-replica-1   1/1     Running   0          2m30s
```

Deploy the authentication resources for both RabbitMQ and Redis:
```shell
kubectl apply -f authentication.yaml
```

This creates:
- A Secret with connection strings for both RabbitMQ and Redis
- TriggerAuthentication resources for both message brokers
The authentication resources provide the necessary credentials for KEDA to connect to both RabbitMQ and Redis and monitor their respective queues/lists.
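For orientation, a rough sketch of what such a manifest can look like (the Secret and TriggerAuthentication names, keys, and connection strings here are illustrative assumptions, not copied from the repo; `authentication.yaml` in `test-files` is the authoritative version):

```yaml
# Illustrative sketch only -- see authentication.yaml in test-files/ for the real manifest.
apiVersion: v1
kind: Secret
metadata:
  name: keda-secrets                # hypothetical name
type: Opaque
stringData:
  rabbitmq-connection: amqp://user:PASSWORD@rabbitmq.default:5672/
  redis-password: PASSWORD
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: rabbitmq-auth               # hypothetical name
spec:
  secretTargetRef:
    - parameter: host               # RabbitMQ scaler reads the connection string
      name: keda-secrets
      key: rabbitmq-connection
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: redis-auth                  # hypothetical name
spec:
  secretTargetRef:
    - parameter: password           # Redis scaler reads the password
      name: keda-secrets
      key: redis-password
```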
5. Testing KEDA ScaledObject
Deploy the consumer:

```shell
kubectl apply -f consumer-deployment.yaml
kubectl get deployments
```

You should see the message-consumer deployment with 1 running pod:

```
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
message-consumer   1/1     1            1           2m32s
```

After you deploy the ScaledObject, the deployment will show 0 running pods: no messages have been published to RabbitMQ or Redis yet, and since no minReplicaCount is set in the spec, KEDA scales the pods down to 0.

This consumer is designed to process messages from both RabbitMQ queues and Redis lists. It consumes one message per instance, sleeps for 1 second to simulate work, and then acknowledges completion of the message.
```shell
kubectl apply -f scaledObject.yaml
```

This ScaledObject configures autoscaling for the message-consumer deployment with dual triggers:

- RabbitMQ Trigger: Monitors the `hello` queue, scaling based on a target of 5 messages per replica
- Redis Trigger: Monitors the `tasks` list, scaling based on a target of 5 messages per replica
The ScaledObject is configured to:
- Scale to a minimum of 0 replicas when there are no messages in either queue/list
- Scale up to a maximum of 30 replicas when either queue/list is heavily loaded
- Use a polling interval of 5 seconds (faster than the default 30 seconds)
- Apply a cooldown period of 30 seconds before scaling down
These settings can be adjusted in the YAML Specification as needed.
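As a rough sketch of how these settings map onto KEDA's ScaledObject API (the trigger-authentication names and the Redis address below are assumptions; `scaledObject.yaml` in `test-files` is authoritative):

```yaml
# Illustrative sketch only -- see scaledObject.yaml in test-files/ for the real spec.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: message-consumer-scaledobject
spec:
  scaleTargetRef:
    name: message-consumer
  minReplicaCount: 0           # scale to zero when both sources are empty
  maxReplicaCount: 30
  pollingInterval: 5           # seconds; the default is 30
  cooldownPeriod: 30           # seconds before scaling back down
  triggers:
    - type: rabbitmq
      metadata:
        queueName: hello
        mode: QueueLength
        value: "5"             # target messages per replica
      authenticationRef:
        name: rabbitmq-auth    # hypothetical TriggerAuthentication name
    - type: redis
      metadata:
        address: redis-master.default:6379   # assumed service address
        listName: tasks
        listLength: "5"        # target messages per replica
      authenticationRef:
        name: redis-auth       # hypothetical TriggerAuthentication name
```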
Now that all components are set up, trigger the publisher job to send messages to both RabbitMQ and Redis:
```shell
kubectl apply -f publisher-job.yaml
```

This job will publish:

- 300 messages to the RabbitMQ `hello` queue
- 500 messages to the Redis `tasks` list
This will trigger scaling of the consumer deployment via the ScaledObject's dual triggers.
Monitor the scaling activities:
```shell
# Watch the deployments scale up
kubectl get deployments -w

# Check the pods being created
kubectl get pods

# View KEDA ScaledObject status
kubectl get scaledobject

# Check the specific ScaledObject details
kubectl describe scaledobject message-consumer-scaledobject
```

You should see the number of replicas for the message-consumer deployment increase as messages are added to either the RabbitMQ queue or the Redis list, and decrease as messages are processed from both sources.
You can monitor the message processing by checking the logs:
```shell
# View consumer logs
kubectl logs -l app=message-consumer -f

# Check RabbitMQ queue status
kubectl exec rabbitmq-0 -- rabbitmqctl list_queues

# Check Redis list length
kubectl exec redis-master-0 -- redis-cli -a PASSWORD llen tasks
```

When you're done testing, you can clean up the deployed resources:
```shell
kubectl delete -f publisher-job.yaml
kubectl delete -f scaledObject.yaml
kubectl delete -f consumer-deployment.yaml
```

6. Testing KEDA ScaledJob
```shell
kubectl apply -f scaledJob.yaml
```

This ScaledJob creates Kubernetes Jobs to process messages from both RabbitMQ and Redis in batch mode. It features dual triggers similar to the ScaledObject:

- RabbitMQ Trigger: Monitors the `hello` queue with a target of 20 messages per job
- Redis Trigger: Monitors the `tasks` list with a target of 20 messages per job
The ScaledJob is configured to:
- Scale to a minimum of 0 jobs when there are no messages in either queue/list
- Scale up to a maximum of 10 jobs when either queue/list is heavily loaded
- Process 20 messages per job with a 2-minute timeout
- Scale from 0 when there are 5+ messages in either queue/list (activation threshold)
- Use a polling interval of 10 seconds
Unlike the ScaledObject, which scales a Deployment, a ScaledJob creates separate Job instances that process a batch of messages and then terminate. Each job runs in batch mode with the `-batch` flag, consuming up to 20 messages before completing.
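A rough sketch of how such a ScaledJob can be expressed (the image reference, container name, and authentication names below are placeholders; `scaledJob.yaml` in `test-files` is authoritative):

```yaml
# Illustrative sketch only -- see scaledJob.yaml in test-files/ for the real spec.
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: message-consumer-scaledjob
spec:
  jobTargetRef:
    activeDeadlineSeconds: 120        # 2-minute timeout per job
    template:
      spec:
        containers:
          - name: consumer            # hypothetical container name
            image: ttl.sh/YOUR-IMAGE:1h   # your image from the build step
            args: ["-batch"]          # batch mode: consume up to 20 messages, then exit
        restartPolicy: Never
  pollingInterval: 10                 # seconds
  maxReplicaCount: 10                 # at most 10 concurrent jobs
  triggers:
    - type: rabbitmq
      metadata:
        queueName: hello
        mode: QueueLength
        value: "20"                   # messages per job
        activationValue: "5"          # scale from 0 at 5+ messages
      authenticationRef:
        name: rabbitmq-auth           # hypothetical name
    - type: redis
      metadata:
        address: redis-master.default:6379   # assumed service address
        listName: tasks
        listLength: "20"              # messages per job
        activationListLength: "5"     # scale from 0 at 5+ messages
      authenticationRef:
        name: redis-auth              # hypothetical name
```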
Now that all components are set up, trigger the publisher job to send messages to both message brokers:
```shell
kubectl apply -f publisher-job.yaml
```

This job will publish:

- 300 messages to the RabbitMQ `hello` queue
- 500 messages to the Redis `tasks` list
This will trigger the creation of batch processing jobs via the ScaledJob's dual triggers.
Monitor the scaling activities:
```shell
# View the jobs being created
kubectl get jobs

# Check the pods being created
kubectl get pods

# View KEDA ScaledJob status
kubectl get scaledjob

# Check the specific ScaledJob details
kubectl describe scaledjob message-consumer-scaledjob
```

You will see jobs created to process batches of messages from both RabbitMQ and Redis based on the ScaledJob configuration.
Monitor the batch processing:
```shell
# View job logs to see batch processing
kubectl logs -l job-name=message-consumer-scaledjob-xxxxx -f

# Check message broker status
kubectl exec rabbitmq-0 -- rabbitmqctl list_queues
kubectl exec redis-master-0 -- redis-cli -a PASSWORD llen tasks
```

When you're done testing, clean up the ScaledJob resources:

```shell
kubectl delete -f publisher-job.yaml
kubectl delete -f scaledJob.yaml
```

When you're completely done, you can clean up all the deployed resources:
```shell
# Clean up KEDA resources
kubectl delete -f publisher-job.yaml        # ignore if you've already done this
kubectl delete -f scaledJob.yaml            # ignore if you've already done this
kubectl delete -f scaledObject.yaml         # ignore if you've already done this
kubectl delete -f consumer-deployment.yaml  # ignore if you've already done this
kubectl delete -f authentication.yaml

# Uninstall message brokers
helm uninstall rabbitmq
helm uninstall redis
```

This testing setup demonstrates KEDA's ability to handle multiple message brokers simultaneously:
- Publisher Job sends messages to both RabbitMQ (queue) and Redis (list)
- KEDA Scalers monitor both message brokers independently
- Consumer Applications process messages from both sources
- Scaling decisions are made based on the combined load from both brokers
- ScaledObject: Maintains long-running pods that continuously process messages
- ScaledJob: Creates short-lived jobs that process batches of messages and terminate
- Dual Triggers: Both configurations monitor RabbitMQ and Redis simultaneously
- Independent Scaling: Each trigger can independently cause scaling events
This setup showcases KEDA's flexibility in handling complex, multi-broker scenarios commonly found in production environments.