Newcastle University | MSc Advanced Computer Science
Module: Cloud Computing (CSC8110)
Student: Aniket Vinod Nalawade | 250535354
A production-grade Kubernetes deployment demonstrating cloud-native application orchestration, monitoring, and performance testing. This project implements a complete microservices infrastructure using Kubernetes, Helm, Prometheus, and Grafana for deploying, scaling, and monitoring containerized applications.
This cloud-native platform demonstrates:
☸️ Kubernetes Orchestration - Deploy and manage containerized applications
📦 Helm Package Management - Declarative application deployment
📊 Prometheus Monitoring - Real-time metrics collection and storage
📈 Grafana Visualization - Interactive dashboards for system observability
🔐 RBAC Security - Role-based access control for dashboard authentication
🚀 Load Testing - Custom load generator for performance analysis
⚖️ Auto-scaling - Horizontal Pod Autoscaler for dynamic scaling
┌─────────────────────────────────────────────────────────────────┐
│ KUBERNETES CLUSTER │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ CONTROL PLANE │ │
│ │ - API Server │ │
│ │ - Scheduler │ │
│ │ - Controller Manager │ │
│ │ - etcd (Cluster State) │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌───────────────────────┴──────────────────────────────────┐ │
│ │ WORKER NODES │ │
│ ├──────────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ ┌─────────────────┐ ┌─────────────────┐ │ │
│ │ │ Kubernetes │ │ Java Benchmark │ │ │
│ │ │ Dashboard │ │ Application │ │ │
│ │ │ (Web UI) │ │ (Helm Chart) │ │ │
│ │ │ Port: 30000 │ │ Port: 30000 │ │ │
│ │ └─────────────────┘ └─────────────────┘ │ │
│ │ │ │
│ │ ┌─────────────────────────────────────────────────┐ │ │
│ │ │ MONITORING STACK │ │ │
│ │ ├─────────────────────────────────────────────────┤ │ │
│ │ │ │ │ │
│ │ │ ┌──────────────┐ ┌──────────────┐ │ │ │
│ │ │ │ Prometheus │◄───│ Node Exporter│ │ │ │
│ │ │ │ (Metrics DB) │ │ (DaemonSet) │ │ │ │
│ │ │ └──────┬───────┘ └──────────────┘ │ │ │
│ │ │ │ │ │ │
│ │ │ │ ┌──────────────┐ │ │ │
│ │ │ └────────────►│ Kube-State │ │ │ │
│ │ │ │ Metrics │ │ │ │
│ │ │ ┌─────────────┴──────────────┘ │ │ │
│ │ │ │ │ │ │
│ │ │ ▼ │ │ │
│ │ │ ┌──────────────┐ │ │ │
│ │ │ │ Grafana │ │ │ │
│ │ │ │ (Dashboard) │ │ │ │
│ │ │ │ Port: 32500 │ │ │ │
│ │ │ └──────────────┘ │ │ │
│ │ └─────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ ┌─────────────────┐ │ │
│ │ │ Load Generator │ │ │
│ │ │ (Python) │────► HTTP Requests │ │
│ │ │ Frequency: 2/s │ to /primecheck │ │
│ │ └─────────────────┘ │ │
│ │ │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
| Component | Technology | Purpose |
|---|---|---|
| Container Orchestration | Kubernetes (MicroK8s) | Cluster management, scheduling, scaling |
| Package Manager | Helm 3 | Application templating and deployment |
| Web Application | Java Benchmark App | CPU-intensive workload for testing |
| Metrics Collection | Prometheus | Time-series metrics database |
| Visualization | Grafana | Monitoring dashboards |
| Node Metrics | Node Exporter (DaemonSet) | Host-level metrics collection |
| Cluster Metrics | Kube-State-Metrics | Kubernetes resource metrics |
| Load Testing | Custom Python Script | HTTP request generation |
| Service Discovery | Kubernetes DNS | Internal service resolution |
| Access Control | RBAC (ServiceAccount) | Dashboard authentication |
Purpose: Deploy Kubernetes Dashboard and Java benchmark application using Helm
Manual Deployment (not MicroK8s addon):
# Apply official Kubernetes Dashboard manifests
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Verify deployment
kubectl get pods -n kubernetes-dashboardRBAC Configuration:
# dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboardToken Generation:
kubectl -n kubernetes-dashboard create token admin-userAccess Dashboard:
URL: http://localhost:30000
Authentication: Bearer Token (from above command)
Key Concepts:
- ✅ ServiceAccount: Kubernetes identity for pods
- ✅ ClusterRoleBinding: Grants cluster-admin permissions
- ✅ NodePort: Exposes dashboard on port 30000
- ✅ RBAC: Role-Based Access Control for security
Helm Chart Structure:
javabenchmarkapp/
├── Chart.yaml # Helm chart metadata
├── values.yaml # Configuration values
└── templates/
├── deployment.yaml # Pod deployment spec
├── service.yaml # NodePort service
├── _helpers.tpl # Template helpers
└── NOTES.txt # Post-install instructions
values.yaml Configuration:
replicaCount: 1
image:
repository: nclcloudcomputing/javabenchmarkapp
pullPolicy: IfNotPresent
tag: "latest"
service:
type: NodePort
port: 8080
nodePort: 30000
resources: {} # No resource limits (for baseline testing)Deployment Commands:
# Create Helm chart
helm create javabenchmarkapp
# Install application
helm install javabenchmarkapp ./javabenchmarkapp
# Verify deployment
kubectl get pods
kubectl get svc
# Test application
curl http://localhost:30000/primecheckApplication Endpoint:
/primecheck - CPU-intensive prime number calculation
Response: JSON with computation result
Key Features:
- ✅ Spring Boot Java application
- ✅ Packaged as Helm chart for repeatability
- ✅ NodePort service for external access
- ✅ ConfigMap for environment variables (
SPRING_PROFILES_ACTIVE=web)
Purpose: Deploy Prometheus and Grafana for cluster observability
Helm Deployment:
# Add Helm repositories
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# Install complete monitoring stack
helm install monitoring prometheus-community/kube-prometheus-stack
# Verify all components
kubectl get pods -A | grep monitoringComponents Deployed:
-
Prometheus Server
- Metrics database (time-series)
- Scrapes targets every 30s
- Stores data locally (PersistentVolume)
-
Alertmanager
- Handles alerts from Prometheus
- Routing and notification management
-
Node Exporter (DaemonSet)
- Runs on every cluster node
- Collects host-level metrics (CPU, memory, disk, network)
-
Kube-State-Metrics
- Kubernetes API listener
- Exports cluster state metrics (pods, deployments, nodes)
-
Grafana
- Visualization platform
- Pre-configured Prometheus data source
- Pre-loaded dashboards
grafana-nodeport.yaml:
apiVersion: v1
kind: Service
metadata:
name: grafana-nodeport
namespace: default
spec:
type: NodePort
ports:
- port: 80
targetPort: 3000
nodePort: 32500
protocol: TCP
selector:
app.kubernetes.io/name: grafanaApply Configuration:
kubectl apply -f grafana-nodeport.yaml
# Access Grafana
# URL: http://localhost:32500Login Credentials:
# Get admin password
kubectl get secret monitoring-grafana \
-o jsonpath="{.data.admin-password}" | base64 --decode
Username: admin
Password: [from above command]Prometheus Data Source:
Navigate to: Connections → Data Sources → Prometheus
URL: http://monitoring-kube-prometheus-prometheus:9090
Access: Server (default)
Pre-installed Dashboards:
- Kubernetes Cluster Monitoring - Overview of cluster health
- Node Exporter Full - Detailed node metrics
- Kubernetes Pods - Per-pod resource usage
- Kubernetes StatefulSets - StatefulSet metrics
- Prometheus Stats - Prometheus performance
Custom Metrics Visualization:
- CPU usage per pod
- Memory consumption trends
- Network I/O rates
- Disk utilization
- Pod restart counts
Key Features:
- ✅ Real-time metrics dashboard
- ✅ Historical data analysis
- ✅ Custom metric queries (PromQL)
- ✅ Alerting based on thresholds
- ✅ Multi-dashboard organization
Purpose: Generate synthetic load to test application performance and observe metrics
load_generator.py:
import os
import time
import requests
# Configuration from environment variables
TARGET = os.getenv("TARGET", "http://localhost:30000/primecheck")
FREQUENCY = float(os.getenv("FREQUENCY", "1")) # requests/second
print(f"Target URL: {TARGET}")
print(f"Request Frequency: {FREQUENCY} req/sec\n")
interval = 1.0 / FREQUENCY
total_requests = 0
failures = 0
total_time = 0.0
while True:
start_time = time.time()
total_requests += 1
try:
resp = requests.get(TARGET, timeout=10)
elapsed = (time.time() - start_time) * 1000 # ms
total_time += elapsed
print(f"[OK] Response time: {elapsed:.2f} ms")
except Exception as e:
failures += 1
print(f"[FAIL] Request failed: {str(e)}")
print(f"Total Requests: {total_requests}, Failures: {failures}")
time.sleep(interval)Key Features:
- ✅ Configurable target URL and request frequency
- ✅ Response time measurement (milliseconds)
- ✅ Failure tracking and logging
- ✅ Continuous operation (infinite loop)
- ✅ Environment variable configuration
Dockerfile:
FROM python:3.9-slim
WORKDIR /app
# Install dependencies
RUN pip install requests
# Copy load generator script
COPY load_generator.py .
# Run script
CMD ["python", "load_generator.py"]Build and Push:
# Build image
docker build -t load-generator:latest .
# Tag for local registry (MicroK8s)
docker tag load-generator:latest localhost:32000/load-generator:latest
# Push to MicroK8s registry
docker push localhost:32000/load-generator:latestload-generator.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: load-generator
spec:
replicas: 1
selector:
matchLabels:
app: load-generator
template:
metadata:
labels:
app: load-generator
spec:
containers:
- name: load-generator
image: localhost:32000/load-generator:latest
env:
- name: TARGET
value: "http://javabenchmarkapp:8080/primecheck" # Kubernetes DNS
- name: FREQUENCY
value: "2" # 2 requests per secondDeploy:
kubectl apply -f load-generator.yaml
# Monitor load generator logs
kubectl logs -f deployment/load-generator
# Expected output:
# Target URL: http://javabenchmarkapp:8080/primecheck
# Request Frequency: 2.0 req/sec
# [OK] Response time: 145.32 ms
# Total Requests: 1, Failures: 0
# [OK] Response time: 152.18 ms
# Total Requests: 2, Failures: 0Metrics to Monitor:
-
CPU Usage:
rate(container_cpu_usage_seconds_total{pod=~"javabenchmarkapp.*"}[5m]) -
Memory Usage:
container_memory_usage_bytes{pod=~"javabenchmarkapp.*"} -
HTTP Request Rate:
rate(http_requests_total[5m]) -
Response Time (if instrumented):
http_request_duration_seconds
Analysis:
- Observe CPU spike when load generator starts
- Monitor memory consumption under sustained load
- Identify performance bottlenecks
- Track pod restart events
Purpose: Automatically scale application based on CPU utilization
# MicroK8s includes metrics-server
microk8s enable metrics-server
# Verify
kubectl top nodes
kubectl top podsHPA Manifest:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: javabenchmarkapp-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: javabenchmarkapp
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50 # Target 50% CPUApply HPA:
kubectl apply -f hpa.yaml
# Watch HPA in action
kubectl get hpa -w
# Expected output as load increases:
# NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS
# javabenchmarkapp-hpa Deployment/javabenchmark 45%/50% 1 10 1
# javabenchmarkapp-hpa Deployment/javabenchmark 75%/50% 1 10 2
# javabenchmarkapp-hpa Deployment/javabenchmark 55%/50% 1 10 3Increase load frequency:
# Edit load-generator deployment
kubectl edit deployment load-generator
# Change FREQUENCY from "2" to "10"
env:
- name: FREQUENCY
value: "10" # 10 requests/secondObserve Scaling:
- CPU usage increases above 50%
- HPA triggers scale-up
- New pods are created
- Load distributes across pods
- CPU per pod decreases
- System reaches equilibrium
Scale-Down:
- After load decreases, HPA waits 5 minutes (default)
- Gradually reduces replicas to minReplicas
# Ubuntu/Linux VM or WSL2
Operating System: Ubuntu 20.04+
# MicroK8s Kubernetes cluster
sudo snap install microk8s --classic
microk8s enable dns storage registry
# Helm package manager
sudo snap install helm --classic
# kubectl CLI
sudo snap install kubectl --classic
# Docker (for building load generator)
sudo apt install docker.io# 1. Deploy Kubernetes Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# 2. Create admin user (RBAC)
kubectl apply -f Task\ 1/dashboard-adminuser.yaml
# 3. Get access token
kubectl -n kubernetes-dashboard create token admin-user
# 4. Access dashboard
# Open browser: http://localhost:30000
# Paste token from step 3
# 5. Deploy Java application via Helm
cd Task\ 1/javabenchmarkapp
helm install javabenchmarkapp .
# 6. Test application
curl http://localhost:30000/primecheck# 1. Add Helm repositories
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# 2. Install monitoring stack
helm install monitoring prometheus-community/kube-prometheus-stack
# 3. Expose Grafana
kubectl apply -f Task\ 2/grafana-nodeport.yaml
# 4. Get Grafana password
kubectl get secret monitoring-grafana \
-o jsonpath="{.data.admin-password}" | base64 --decode
# 5. Access Grafana
# Open browser: http://localhost:32500
# Login: admin / [password from step 4]
# 6. Explore pre-built dashboards
# Navigate to: Dashboards → Browse
# Select: "Kubernetes / Compute Resources / Pod"# 1. Build load generator image
cd Task\ 3
docker build -t load-generator:latest .
# 2. Push to MicroK8s registry
docker tag load-generator:latest localhost:32000/load-generator:latest
docker push localhost:32000/load-generator:latest
# 3. Deploy load generator
kubectl apply -f load-generator.yaml
# 4. Monitor logs
kubectl logs -f deployment/load-generator
# 5. Watch metrics in Grafana
# Open Grafana dashboard
# Observe CPU/Memory increase for javabenchmarkapp pod# 1. Enable metrics-server
microk8s enable metrics-server
# 2. Create HPA
kubectl apply -f Task\ 4/hpa.yaml
# 3. Watch HPA
kubectl get hpa -w
# 4. Increase load (edit load-generator FREQUENCY to 10)
kubectl edit deployment load-generator
# 5. Observe scaling
kubectl get pods -wCloud Computing/
├── Task 1/
│ ├── dashboard-adminuser.yaml # RBAC for K8s Dashboard
│ ├── javabenchmarkapp/ # Helm chart
│ │ ├── Chart.yaml # Chart metadata
│ │ ├── values.yaml # Configuration values
│ │ └── templates/
│ │ ├── deployment.yaml # Pod spec
│ │ ├── service.yaml # NodePort service
│ │ ├── _helpers.tpl # Template functions
│ │ └── NOTES.txt # Install notes
│ └── javabenchmarkapp-deployment.yaml # Alternative deployment
├── Task 2/
│ └── grafana-nodeport.yaml # Expose Grafana
├── Task 3/
│ ├── Dockerfile # Load generator image
│ ├── load_generator.py # Python load script
│ └── load-generator.yaml # K8s deployment
├── Task 4/
│ └── load-generator.yaml # Updated with higher frequency
├── CSC8110_Report_AniketNalawade.pdf # Technical documentation
└── README.md # This file
✅ Pod Management - Smallest deployable unit in Kubernetes
✅ Deployments - Declarative updates for pods
✅ ReplicaSets - Maintains desired pod count
✅ Services - Stable network endpoint for pods
✅ DaemonSets - One pod per node (Node Exporter)
✅ Charts - Kubernetes resource templates
✅ Values - Configuration separation from templates
✅ Templating - Go template language for dynamic configs
✅ Releases - Versioned deployments
✅ ClusterIP - Internal cluster communication (default)
✅ NodePort - External access via node port (30000-32767)
✅ LoadBalancer - Cloud provider integration (not used)
✅ Prometheus - Metrics collection and time-series DB
✅ Grafana - Visualization and dashboards
✅ Node Exporter - Host-level metrics
✅ Kube-State-Metrics - Kubernetes resource metrics
✅ RBAC - Role-Based Access Control
✅ ServiceAccounts - Pod identity
✅ ClusterRoleBinding - Permissions assignment
✅ Secrets - Sensitive data storage (Grafana password)
✅ Horizontal Pod Autoscaler (HPA) - CPU-based scaling
✅ Metrics Server - Resource metrics API
✅ Load Testing - Performance validation
Node-Level Metrics (Node Exporter):
# CPU usage per core
node_cpu_seconds_total
# Memory usage
node_memory_MemAvailable_bytes
node_memory_MemTotal_bytes
# Disk I/O
node_disk_io_time_seconds_total
# Network traffic
node_network_receive_bytes_total
node_network_transmit_bytes_total
Pod-Level Metrics:
# CPU usage
container_cpu_usage_seconds_total
# Memory usage
container_memory_usage_bytes
container_memory_working_set_bytes
# Network I/O
container_network_receive_bytes_total
container_network_transmit_bytes_total
Kubernetes Metrics (Kube-State-Metrics):
# Pod status
kube_pod_status_phase
# Deployment replicas
kube_deployment_status_replicas
kube_deployment_status_replicas_available
# Node status
kube_node_status_condition
1. Kubernetes Cluster Overview:
- Total nodes, pods, deployments
- Cluster CPU/memory utilization
- Pod distribution across nodes
2. Pod Resource Usage:
- CPU usage per pod (line chart)
- Memory consumption per pod (area chart)
- Network I/O per pod (rate)
3. Node Exporter Metrics:
- CPU utilization per core
- Memory usage breakdown (used, cached, buffered)
- Disk space utilization
- Network throughput
4. Custom Dashboard (Load Testing):
- Request rate to javabenchmarkapp
- Response time percentiles (p50, p95, p99)
- Error rate (if instrumented)
- Pod count over time (HPA scaling events)
✅ Kubernetes Administration - Cluster setup and management
✅ Container Orchestration - Deploying and scaling microservices
✅ Cloud-Native Architecture - 12-factor app principles
✅ Infrastructure as Code - YAML manifests for reproducibility
✅ CI/CD Readiness - Helm charts for automated deployment
✅ Monitoring & Alerting - Prometheus + Grafana stack
✅ Load Testing - Performance validation under stress
✅ Auto-scaling - Horizontal Pod Autoscaler configuration
✅ Linux CLI - kubectl, helm, docker commands
✅ YAML Configuration - Kubernetes resource manifests
✅ Networking - Service discovery, NodePort, ClusterIP
✅ Security - RBAC, ServiceAccounts, Secrets
✅ Log Analysis - kubectl logs, describe, events
✅ Resource Debugging - kubectl top, get, describe
✅ Metrics Analysis - Prometheus queries (PromQL)
✅ Dashboard Interpretation - Grafana visualization
Problem: Kubernetes Dashboard requires authenticated access
Solution: Created ServiceAccount with ClusterRoleBinding to cluster-admin
Learning: Understanding Kubernetes RBAC model
Problem: Internal services not accessible from outside cluster
Solution: NodePort services on ports 30000 (app) and 32500 (Grafana)
Learning: Kubernetes service types and port mapping
Problem: Prometheus needs to discover monitoring targets
Solution: kube-prometheus-stack auto-configures ServiceMonitors
Learning: Service discovery and scrape configurations
Problem: Python script needs to run inside Kubernetes
Solution: Dockerized script, pushed to MicroK8s registry
Learning: Container image building and local registry usage
Problem: HPA requires metrics-server for CPU/memory data
Solution: Enabled MicroK8s metrics-server addon
Learning: Kubernetes metrics API and HPA configuration
- CPU Usage: ~5-10% (idle Spring Boot app)
- Memory: ~250MB (JVM heap)
- Pods: 1 replica
- CPU Usage: ~30-40%
- Memory: ~300MB (stable)
- Response Time: 120-180ms avg
- Pods: 1 replica (below HPA threshold)
- CPU Usage: 70-80% initially
- HPA Trigger: Scaled from 1 to 3 replicas
- CPU After Scaling: ~25-30% per pod
- Response Time: 100-150ms avg (improved distribution)
- Scale-Up Time: ~60 seconds (pod creation + readiness)
- Scale-Down Delay: 5 minutes (HPA stabilization)
- Max Replicas Tested: 10 (from HPA config)
This Kubernetes deployment demonstrates skills directly applicable to:
- Cloud Providers - AWS EKS, Azure AKS, Google GKE management
- SaaS Companies - Multi-tenant application deployment
- Financial Services - Highly available trading platforms
- E-commerce - Black Friday auto-scaling infrastructure
- Media Streaming - Content delivery platform orchestration
- Managed Kubernetes - EKS, AKS, GKE
- Service Mesh - Istio, Linkerd (next step)
- GitOps - ArgoCD, Flux for CD
- Observability - Datadog, New Relic, Elastic Stack
- Policy Enforcement - OPA (Open Policy Agent)
- Cloud Engineer / DevOps Engineer
- Site Reliability Engineer (SRE)
- Kubernetes Administrator
- Platform Engineer
- Cloud Architect
Production-ready extensions:
- Ingress Controller - NGINX for L7 routing
- TLS/SSL - Cert-manager for HTTPS
- Persistent Storage - StatefulSets for databases
- Secret Management - Sealed Secrets, Vault integration
- Service Mesh - Istio for advanced traffic management
- CI/CD Pipeline - GitHub Actions + ArgoCD
- Multi-Cluster - Federation for disaster recovery
- Cost Optimization - Cluster autoscaler, pod preemption
- Advanced Monitoring - Distributed tracing (Jaeger)
- Chaos Engineering - Chaos Mesh for resilience testing
Full documentation available: CSC8110_Report_AniketNalawade.pdf
The report includes:
- Detailed architecture diagrams
- Step-by-step deployment procedures
- Grafana dashboard screenshots
- Performance analysis graphs
- RBAC configuration explanations
- Troubleshooting methodology
Author: Aniket Vinod Nalawade
Student ID: 250535354
Institution: Newcastle University
Program: MSc Advanced Computer Science
Module: CSC8110 - Cloud Computing
Academic Year: 2025/2026
☸️ Full Kubernetes Stack - Dashboard, monitoring, auto-scaling
📦 Helm Packaging - Production-ready application deployment
📊 Complete Observability - Prometheus + Grafana integration
🐍 Custom Load Generator - Python-based performance testing
🔐 RBAC Security - Proper authentication and authorization
⚖️ Auto-scaling - Horizontal Pod Autoscaler demonstration
🎯 Production Patterns - Industry-standard cloud-native practices
Built with ☸️ Kubernetes, 📦 Helm, and 📊 Prometheus
Cloud-native infrastructure for scalable microservices