
Kubernetes Microservices Orchestration & Monitoring - CSC8110

Newcastle University | MSc Advanced Computer Science
Module: Cloud Computing (CSC8110)
Student: Aniket Vinod Nalawade | 250535354

Kubernetes · Helm · Prometheus · Grafana · Docker

📋 Project Overview

A production-grade Kubernetes deployment demonstrating cloud-native application orchestration, monitoring, and performance testing. This project implements a complete microservices infrastructure using Kubernetes, Helm, Prometheus, and Grafana for deploying, scaling, and monitoring containerized applications.

🎯 What This System Does

This cloud-native platform demonstrates:

☸️ Kubernetes Orchestration - Deploy and manage containerized applications
📦 Helm Package Management - Declarative application deployment
📊 Prometheus Monitoring - Real-time metrics collection and storage
📈 Grafana Visualization - Interactive dashboards for system observability
🔐 RBAC Security - Role-based access control for dashboard authentication
🚀 Load Testing - Custom load generator for performance analysis
⚖️ Auto-scaling - Horizontal Pod Autoscaler for dynamic scaling


🏗️ System Architecture

Kubernetes Cluster Architecture

┌─────────────────────────────────────────────────────────────────┐
│                    KUBERNETES CLUSTER                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │               CONTROL PLANE                              │  │
│  │  - API Server                                            │  │
│  │  - Scheduler                                             │  │
│  │  - Controller Manager                                    │  │
│  │  - etcd (Cluster State)                                  │  │
│  └──────────────────────────────────────────────────────────┘  │
│                          │                                      │
│  ┌───────────────────────┴──────────────────────────────────┐  │
│  │               WORKER NODES                               │  │
│  ├──────────────────────────────────────────────────────────┤  │
│  │                                                          │  │
│  │  ┌─────────────────┐    ┌─────────────────┐             │  │
│  │  │  Kubernetes     │    │  Java Benchmark │             │  │
│  │  │  Dashboard      │    │  Application    │             │  │
│  │  │  (Web UI)       │    │  (Helm Chart)   │             │  │
│  │  │  Port: 30000    │    │  Port: 30000    │             │  │
│  │  └─────────────────┘    └─────────────────┘             │  │
│  │                                                          │  │
│  │  ┌─────────────────────────────────────────────────┐    │  │
│  │  │        MONITORING STACK                         │    │  │
│  │  ├─────────────────────────────────────────────────┤    │  │
│  │  │                                                 │    │  │
│  │  │  ┌──────────────┐    ┌──────────────┐          │    │  │
│  │  │  │ Prometheus   │◄───│ Node Exporter│          │    │  │
│  │  │  │ (Metrics DB) │    │ (DaemonSet)  │          │    │  │
│  │  │  └──────┬───────┘    └──────────────┘          │    │  │
│  │  │         │                                       │    │  │
│  │  │         │             ┌──────────────┐          │    │  │
│  │  │         └────────────►│ Kube-State   │          │    │  │
│  │  │                       │ Metrics      │          │    │  │
│  │  │         ┌─────────────┴──────────────┘          │    │  │
│  │  │         │                                       │    │  │
│  │  │         ▼                                       │    │  │
│  │  │  ┌──────────────┐                              │    │  │
│  │  │  │   Grafana    │                              │    │  │
│  │  │  │ (Dashboard)  │                              │    │  │
│  │  │  │ Port: 32500  │                              │    │  │
│  │  │  └──────────────┘                              │    │  │
│  │  └─────────────────────────────────────────────────┘    │  │
│  │                                                          │  │
│  │  ┌─────────────────┐                                    │  │
│  │  │  Load Generator │                                    │  │
│  │  │  (Python)       │────► HTTP Requests                 │  │
│  │  │  Frequency: 2/s │       to /primecheck               │  │
│  │  └─────────────────┘                                    │  │
│  │                                                          │  │
│  └──────────────────────────────────────────────────────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Technology Stack

| Component | Technology | Purpose |
|---|---|---|
| Container Orchestration | Kubernetes (MicroK8s) | Cluster management, scheduling, scaling |
| Package Manager | Helm 3 | Application templating and deployment |
| Web Application | Java Benchmark App | CPU-intensive workload for testing |
| Metrics Collection | Prometheus | Time-series metrics database |
| Visualization | Grafana | Monitoring dashboards |
| Node Metrics | Node Exporter (DaemonSet) | Host-level metrics collection |
| Cluster Metrics | Kube-State-Metrics | Kubernetes resource metrics |
| Load Testing | Custom Python Script | HTTP request generation |
| Service Discovery | Kubernetes DNS | Internal service resolution |
| Access Control | RBAC (ServiceAccount) | Dashboard authentication |

✨ Key Features by Task

🔹 Task 1: Kubernetes Dashboard & Application Deployment

Purpose: Deploy Kubernetes Dashboard and Java benchmark application using Helm

1.1 Kubernetes Dashboard Deployment

Manual Deployment (not MicroK8s addon):

# Apply official Kubernetes Dashboard manifests
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Verify deployment
kubectl get pods -n kubernetes-dashboard

RBAC Configuration:

# dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Token Generation:

kubectl -n kubernetes-dashboard create token admin-user

Access Dashboard:

URL: http://localhost:30000
Authentication: Bearer Token (from above command)

Key Concepts:

  • ServiceAccount: Kubernetes identity for pods
  • ClusterRoleBinding: Grants cluster-admin permissions
  • NodePort: Exposes dashboard on port 30000
  • RBAC: Role-Based Access Control for security

1.2 Java Benchmark Application Deployment

Helm Chart Structure:

javabenchmarkapp/
├── Chart.yaml              # Helm chart metadata
├── values.yaml             # Configuration values
└── templates/
    ├── deployment.yaml     # Pod deployment spec
    ├── service.yaml        # NodePort service
    ├── _helpers.tpl        # Template helpers
    └── NOTES.txt          # Post-install instructions

values.yaml Configuration:

replicaCount: 1

image:
  repository: nclcloudcomputing/javabenchmarkapp
  pullPolicy: IfNotPresent
  tag: "latest"

service:
  type: NodePort
  port: 8080
  nodePort: 30000

resources: {}  # No resource limits (for baseline testing)

Deployment Commands:

# Create Helm chart
helm create javabenchmarkapp

# Install application
helm install javabenchmarkapp ./javabenchmarkapp

# Verify deployment
kubectl get pods
kubectl get svc

# Test application
curl http://localhost:30000/primecheck

Application Endpoint:

/primecheck - CPU-intensive prime number calculation
Response: JSON with computation result

Key Features:

  • ✅ Spring Boot Java application
  • ✅ Packaged as Helm chart for repeatability
  • ✅ NodePort service for external access
  • ✅ ConfigMap for environment variables (SPRING_PROFILES_ACTIVE=web)
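The ConfigMap mentioned in the last bullet is not shown in the values.yaml excerpt above. A minimal sketch of one way it could be wired up — the resource name `javabenchmarkapp-config` is hypothetical; only `SPRING_PROFILES_ACTIVE=web` comes from the project description:

```yaml
# Hypothetical ConfigMap carrying the Spring profile; the name
# "javabenchmarkapp-config" is illustrative, not taken from the chart.
apiVersion: v1
kind: ConfigMap
metadata:
  name: javabenchmarkapp-config
data:
  SPRING_PROFILES_ACTIVE: "web"
---
# In the Deployment's container spec, the values would be injected with:
#   envFrom:
#   - configMapRef:
#       name: javabenchmarkapp-config
```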

🔹 Task 2: Monitoring Stack Deployment

Purpose: Deploy Prometheus and Grafana for cluster observability

2.1 Prometheus-Grafana Stack Installation

Helm Deployment:

# Add Helm repositories
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Install complete monitoring stack
helm install monitoring prometheus-community/kube-prometheus-stack

# Verify all components
kubectl get pods -A | grep monitoring

Components Deployed:

  1. Prometheus Server

    • Metrics database (time-series)
    • Scrapes targets every 30s
    • Stores data locally (PersistentVolume)
  2. Alertmanager

    • Handles alerts from Prometheus
    • Routing and notification management
  3. Node Exporter (DaemonSet)

    • Runs on every cluster node
    • Collects host-level metrics (CPU, memory, disk, network)
  4. Kube-State-Metrics

    • Kubernetes API listener
    • Exports cluster state metrics (pods, deployments, nodes)
  5. Grafana

    • Visualization platform
    • Pre-configured Prometheus data source
    • Pre-loaded dashboards

2.2 Exposing Grafana via NodePort

grafana-nodeport.yaml:

apiVersion: v1
kind: Service
metadata:
  name: grafana-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 32500
    protocol: TCP
  selector:
    app.kubernetes.io/name: grafana

Apply Configuration:

kubectl apply -f grafana-nodeport.yaml

# Access Grafana
# URL: http://localhost:32500

2.3 Grafana Configuration

Login Credentials:

# Get admin password
kubectl get secret monitoring-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode

Username: admin
Password: [from above command]
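The `base64 --decode` pipe can also be reproduced in Python, for example when the encoded value has been copied out of `kubectl get secret` by hand (the sample password below is illustrative only):

```python
import base64

def decode_secret_value(encoded: str) -> str:
    """Decode a base64-encoded value from a Kubernetes Secret."""
    return base64.b64decode(encoded).decode("utf-8")

# Illustrative only: encode a sample password the way Kubernetes
# stores Secret data, then decode it back.
sample = base64.b64encode(b"s3cr3t-admin-pw").decode("ascii")
print(decode_secret_value(sample))  # -> s3cr3t-admin-pw
```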

Prometheus Data Source:

Navigate to: Connections → Data Sources → Prometheus
URL: http://monitoring-kube-prometheus-prometheus:9090
Access: Server (default)

Pre-installed Dashboards:

  1. Kubernetes Cluster Monitoring - Overview of cluster health
  2. Node Exporter Full - Detailed node metrics
  3. Kubernetes Pods - Per-pod resource usage
  4. Kubernetes StatefulSets - StatefulSet metrics
  5. Prometheus Stats - Prometheus performance

Custom Metrics Visualization:

  • CPU usage per pod
  • Memory consumption trends
  • Network I/O rates
  • Disk utilization
  • Pod restart counts

Key Features:

  • ✅ Real-time metrics dashboard
  • ✅ Historical data analysis
  • ✅ Custom metric queries (PromQL)
  • ✅ Alerting based on thresholds
  • ✅ Multi-dashboard organization

🔹 Task 3: Load Generation & Performance Testing

Purpose: Generate synthetic load to test application performance and observe metrics

3.1 Load Generator Implementation

load_generator.py:

import os
import time
import requests

# Configuration from environment variables
TARGET = os.getenv("TARGET", "http://localhost:30000/primecheck")
FREQUENCY = float(os.getenv("FREQUENCY", "1"))  # requests/second

print(f"Target URL: {TARGET}")
print(f"Request Frequency: {FREQUENCY} req/sec\n")

interval = 1.0 / FREQUENCY

total_requests = 0
failures = 0
total_time = 0.0

while True:
    start_time = time.time()
    total_requests += 1

    try:
        resp = requests.get(TARGET, timeout=10)
        resp.raise_for_status()  # count HTTP errors (4xx/5xx) as failures
        elapsed = (time.time() - start_time) * 1000  # ms
        total_time += elapsed

        print(f"[OK] Response time: {elapsed:.2f} ms")

    except requests.RequestException as e:
        failures += 1
        print(f"[FAIL] Request failed: {e}")

    print(f"Total Requests: {total_requests}, Failures: {failures}")

    # Sleep only for the remainder of the interval, so the actual
    # request rate tracks FREQUENCY instead of drifting below it
    # by each response's round-trip time
    time.sleep(max(0.0, interval - (time.time() - start_time)))

Key Features:

  • ✅ Configurable target URL and request frequency
  • ✅ Response time measurement (milliseconds)
  • ✅ Failure tracking and logging
  • ✅ Continuous operation (infinite loop)
  • ✅ Environment variable configuration

3.2 Containerizing Load Generator

Dockerfile:

FROM python:3.9-slim

WORKDIR /app

# Install dependencies
RUN pip install requests

# Copy load generator script
COPY load_generator.py .

# Run script
CMD ["python", "load_generator.py"]

Build and Push:

# Build image
docker build -t load-generator:latest .

# Tag for local registry (MicroK8s)
docker tag load-generator:latest localhost:32000/load-generator:latest

# Push to MicroK8s registry
docker push localhost:32000/load-generator:latest

3.3 Deploying Load Generator to Kubernetes

load-generator.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: load-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: load-generator
  template:
    metadata:
      labels:
        app: load-generator
    spec:
      containers:
      - name: load-generator
        image: localhost:32000/load-generator:latest
        env:
        - name: TARGET
          value: "http://javabenchmarkapp:8080/primecheck"  # Kubernetes DNS
        - name: FREQUENCY
          value: "2"  # 2 requests per second

Deploy:

kubectl apply -f load-generator.yaml

# Monitor load generator logs
kubectl logs -f deployment/load-generator

# Expected output:
# Target URL: http://javabenchmarkapp:8080/primecheck
# Request Frequency: 2.0 req/sec
# [OK] Response time: 145.32 ms
# Total Requests: 1, Failures: 0
# [OK] Response time: 152.18 ms
# Total Requests: 2, Failures: 0

3.4 Observing Metrics in Grafana

Metrics to Monitor:

  1. CPU Usage:

    rate(container_cpu_usage_seconds_total{pod=~"javabenchmarkapp.*"}[5m])
    
  2. Memory Usage:

    container_memory_usage_bytes{pod=~"javabenchmarkapp.*"}
    
  3. HTTP Request Rate:

    rate(http_requests_total[5m])
    
  4. Response Time (if instrumented):

    http_request_duration_seconds
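The queries above can also be issued programmatically via Prometheus's HTTP API (`GET /api/v1/query?query=...`). A sketch of parsing the instant-query response format, using a canned payload instead of a live server (the pod name and value are illustrative):

```python
import json

# Canned response in the shape Prometheus's /api/v1/query returns
# for a vector result; the pod label and value are illustrative.
payload = json.loads("""
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {"pod": "javabenchmarkapp-7d9f"},
        "value": [1700000000, "0.42"]
      }
    ]
  }
}
""")

def extract_samples(response: dict) -> dict:
    """Map each series' pod label to its sampled value.

    Prometheus encodes each sample as [timestamp, "value-as-string"].
    """
    if response["status"] != "success":
        raise RuntimeError("query failed")
    return {
        series["metric"].get("pod", "<none>"): float(series["value"][1])
        for series in response["data"]["result"]
    }

print(extract_samples(payload))  # {'javabenchmarkapp-7d9f': 0.42}
```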
    

Analysis:

  • Observe CPU spike when load generator starts
  • Monitor memory consumption under sustained load
  • Identify performance bottlenecks
  • Track pod restart events

🔹 Task 4: Horizontal Pod Autoscaler (HPA)

Purpose: Automatically scale application based on CPU utilization

4.1 Enabling Metrics Server

# MicroK8s includes metrics-server
microk8s enable metrics-server

# Verify
kubectl top nodes
kubectl top pods

4.2 Creating HPA

HPA Manifest:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: javabenchmarkapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: javabenchmarkapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50  # Target 50% CPU
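One caveat: a `Utilization` target is computed as a percentage of the pods' CPU *requests*, so the target Deployment must declare `resources.requests.cpu` — with the `resources: {}` used for baseline testing in Task 1, the HPA would report `<unknown>` targets. A minimal sketch of the container resources needed (the figures are illustrative, not from the project):

```yaml
# In the javabenchmarkapp container spec (or the Helm values.yaml):
resources:
  requests:
    cpu: "250m"      # HPA utilization % is measured against this
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```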

Apply HPA:

kubectl apply -f hpa.yaml

# Watch HPA in action
kubectl get hpa -w

# Expected output as load increases:
# NAME                  REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS
# javabenchmarkapp-hpa  Deployment/javabenchmark   45%/50%   1         10        1
# javabenchmarkapp-hpa  Deployment/javabenchmark   75%/50%   1         10        2
# javabenchmarkapp-hpa  Deployment/javabenchmark   55%/50%   1         10        3

4.3 Load Testing HPA

Increase load frequency:

# Edit load-generator deployment
kubectl edit deployment load-generator

# Change FREQUENCY from "2" to "10"
env:
- name: FREQUENCY
  value: "10"  # 10 requests/second

Observe Scaling:

  1. CPU usage increases above 50%
  2. HPA triggers scale-up
  3. New pods are created
  4. Load distributes across pods
  5. CPU per pod decreases
  6. System reaches equilibrium
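The scale-up in steps 1–3 follows the documented HPA formula, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped to the min/max bounds. A quick sketch (the real controller also applies a ~10% tolerance band and stabilization windows, omitted here):

```python
from math import ceil

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Replica count the HPA converges toward, clamped to its bounds."""
    desired = ceil(current * current_util / target_util)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(1, 75, 50))  # 75% observed vs 50% target -> 2
print(desired_replicas(2, 80, 50))  # still hot after scaling    -> 4
print(desired_replicas(4, 20, 50))  # load drops                 -> 2
```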

Scale-Down:

  • After load decreases, HPA waits 5 minutes (default)
  • Gradually reduces replicas to minReplicas

🚀 How to Run the Project

Prerequisites

# Ubuntu/Linux VM or WSL2
Operating System: Ubuntu 20.04+

# MicroK8s Kubernetes cluster
sudo snap install microk8s --classic
microk8s enable dns storage registry

# Helm package manager
sudo snap install helm --classic

# kubectl CLI
sudo snap install kubectl --classic

# Docker (for building load generator)
sudo apt install docker.io

Step-by-Step Execution

Task 1: Dashboard & Application

# 1. Deploy Kubernetes Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# 2. Create admin user (RBAC)
kubectl apply -f Task\ 1/dashboard-adminuser.yaml

# 3. Get access token
kubectl -n kubernetes-dashboard create token admin-user

# 4. Access dashboard
# Open browser: http://localhost:30000
# Paste token from step 3

# 5. Deploy Java application via Helm
cd Task\ 1/javabenchmarkapp
helm install javabenchmarkapp .

# 6. Test application
curl http://localhost:30000/primecheck

Task 2: Monitoring Stack

# 1. Add Helm repositories
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# 2. Install monitoring stack
helm install monitoring prometheus-community/kube-prometheus-stack

# 3. Expose Grafana
kubectl apply -f Task\ 2/grafana-nodeport.yaml

# 4. Get Grafana password
kubectl get secret monitoring-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode

# 5. Access Grafana
# Open browser: http://localhost:32500
# Login: admin / [password from step 4]

# 6. Explore pre-built dashboards
# Navigate to: Dashboards → Browse
# Select: "Kubernetes / Compute Resources / Pod"

Task 3: Load Generation

# 1. Build load generator image
cd Task\ 3
docker build -t load-generator:latest .

# 2. Push to MicroK8s registry
docker tag load-generator:latest localhost:32000/load-generator:latest
docker push localhost:32000/load-generator:latest

# 3. Deploy load generator
kubectl apply -f load-generator.yaml

# 4. Monitor logs
kubectl logs -f deployment/load-generator

# 5. Watch metrics in Grafana
# Open Grafana dashboard
# Observe CPU/Memory increase for javabenchmarkapp pod

Task 4: Auto-scaling (Optional)

# 1. Enable metrics-server
microk8s enable metrics-server

# 2. Create HPA
kubectl apply -f Task\ 4/hpa.yaml

# 3. Watch HPA
kubectl get hpa -w

# 4. Increase load (edit load-generator FREQUENCY to 10)
kubectl edit deployment load-generator

# 5. Observe scaling
kubectl get pods -w

📊 Project Structure

Cloud Computing/
├── Task 1/
│   ├── dashboard-adminuser.yaml           # RBAC for K8s Dashboard
│   ├── javabenchmarkapp/                  # Helm chart
│   │   ├── Chart.yaml                     # Chart metadata
│   │   ├── values.yaml                    # Configuration values
│   │   └── templates/
│   │       ├── deployment.yaml            # Pod spec
│   │       ├── service.yaml               # NodePort service
│   │       ├── _helpers.tpl               # Template functions
│   │       └── NOTES.txt                  # Install notes
│   └── javabenchmarkapp-deployment.yaml   # Alternative deployment
├── Task 2/
│   └── grafana-nodeport.yaml              # Expose Grafana
├── Task 3/
│   ├── Dockerfile                         # Load generator image
│   ├── load_generator.py                  # Python load script
│   └── load-generator.yaml                # K8s deployment
├── Task 4/
│   └── load-generator.yaml                # Updated with higher frequency
├── CSC8110_Report_AniketNalawade.pdf      # Technical documentation
└── README.md                              # This file

💡 Key Kubernetes Concepts Demonstrated

Container Orchestration

Pod Management - Smallest deployable unit in Kubernetes
Deployments - Declarative updates for pods
ReplicaSets - Maintains desired pod count
Services - Stable network endpoint for pods
DaemonSets - One pod per node (Node Exporter)

Helm Package Management

Charts - Kubernetes resource templates
Values - Configuration separation from templates
Templating - Go template language for dynamic configs
Releases - Versioned deployments

Service Types

ClusterIP - Internal cluster communication (default)
NodePort - External access via node port (30000-32767)
LoadBalancer - Cloud provider integration (not used)

Monitoring & Observability

Prometheus - Metrics collection and time-series DB
Grafana - Visualization and dashboards
Node Exporter - Host-level metrics
Kube-State-Metrics - Kubernetes resource metrics

Security & Access Control

RBAC - Role-Based Access Control
ServiceAccounts - Pod identity
ClusterRoleBinding - Permissions assignment
Secrets - Sensitive data storage (Grafana password)

Scaling & Performance

Horizontal Pod Autoscaler (HPA) - CPU-based scaling
Metrics Server - Resource metrics API
Load Testing - Performance validation


📈 Metrics & Observability

Prometheus Metrics Collected

Node-Level Metrics (Node Exporter):

# CPU usage per core
node_cpu_seconds_total

# Memory usage
node_memory_MemAvailable_bytes
node_memory_MemTotal_bytes

# Disk I/O
node_disk_io_time_seconds_total

# Network traffic
node_network_receive_bytes_total
node_network_transmit_bytes_total

Pod-Level Metrics:

# CPU usage
container_cpu_usage_seconds_total

# Memory usage
container_memory_usage_bytes
container_memory_working_set_bytes

# Network I/O
container_network_receive_bytes_total
container_network_transmit_bytes_total

Kubernetes Metrics (Kube-State-Metrics):

# Pod status
kube_pod_status_phase

# Deployment replicas
kube_deployment_status_replicas
kube_deployment_status_replicas_available

# Node status
kube_node_status_condition
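The `*_total` metrics above are monotonically increasing counters; PromQL functions like `rate()` turn their samples into a per-second rate, roughly (v₂ − v₁) / (t₂ − t₁). A simplified illustration of that calculation (ignoring counter resets and range extrapolation, which real `rate()` handles):

```python
def simple_rate(samples: list[tuple[float, float]]) -> float:
    """Per-second increase between the first and last (timestamp, value)
    samples of a counter -- a simplified stand-in for PromQL's rate()."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    if t1 <= t0:
        raise ValueError("need samples spanning a positive interval")
    return (v1 - v0) / (t1 - t0)

# e.g. container_cpu_usage_seconds_total sampled 60s apart:
print(simple_rate([(0.0, 100.0), (60.0, 103.0)]))  # -> 0.05 (cores)
```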

Grafana Dashboard Examples

1. Kubernetes Cluster Overview:

  • Total nodes, pods, deployments
  • Cluster CPU/memory utilization
  • Pod distribution across nodes

2. Pod Resource Usage:

  • CPU usage per pod (line chart)
  • Memory consumption per pod (area chart)
  • Network I/O per pod (rate)

3. Node Exporter Metrics:

  • CPU utilization per core
  • Memory usage breakdown (used, cached, buffered)
  • Disk space utilization
  • Network throughput

4. Custom Dashboard (Load Testing):

  • Request rate to javabenchmarkapp
  • Response time percentiles (p50, p95, p99)
  • Error rate (if instrumented)
  • Pod count over time (HPA scaling events)

🎓 Skills Demonstrated

Cloud Computing

Kubernetes Administration - Cluster setup and management
Container Orchestration - Deploying and scaling microservices
Cloud-Native Architecture - 12-factor app principles
Infrastructure as Code - YAML manifests for reproducibility

DevOps Practices

CI/CD Readiness - Helm charts for automated deployment
Monitoring & Alerting - Prometheus + Grafana stack
Load Testing - Performance validation under stress
Auto-scaling - Horizontal Pod Autoscaler configuration

System Administration

Linux CLI - kubectl, helm, docker commands
YAML Configuration - Kubernetes resource manifests
Networking - Service discovery, NodePort, ClusterIP
Security - RBAC, ServiceAccounts, Secrets

Troubleshooting

Log Analysis - kubectl logs, describe, events
Resource Debugging - kubectl top, get, describe
Metrics Analysis - Prometheus queries (PromQL)
Dashboard Interpretation - Grafana visualization


🔬 Challenges Solved

1. RBAC Configuration for Dashboard

Problem: Kubernetes Dashboard requires authenticated access
Solution: Created ServiceAccount with ClusterRoleBinding to cluster-admin
Learning: Understanding Kubernetes RBAC model

2. Exposing Services Externally

Problem: Internal services not accessible from outside cluster
Solution: NodePort services on ports 30000 (app) and 32500 (Grafana)
Learning: Kubernetes service types and port mapping

3. Prometheus Target Discovery

Problem: Prometheus needs to discover monitoring targets
Solution: kube-prometheus-stack auto-configures ServiceMonitors
Learning: Service discovery and scrape configurations

4. Load Generator Containerization

Problem: Python script needs to run inside Kubernetes
Solution: Dockerized script, pushed to MicroK8s registry
Learning: Container image building and local registry usage

5. Metrics-Based Autoscaling

Problem: HPA requires metrics-server for CPU/memory data
Solution: Enabled MicroK8s metrics-server addon
Learning: Kubernetes metrics API and HPA configuration


📊 Performance Results

Baseline (No Load)

  • CPU Usage: ~5-10% (idle Spring Boot app)
  • Memory: ~250MB (JVM heap)
  • Pods: 1 replica

Under Load (2 req/s)

  • CPU Usage: ~30-40%
  • Memory: ~300MB (stable)
  • Response Time: 120-180ms avg
  • Pods: 1 replica (below HPA threshold)

Under Heavy Load (10 req/s)

  • CPU Usage: 70-80% initially
  • HPA Trigger: Scaled from 1 to 3 replicas
  • CPU After Scaling: ~25-30% per pod
  • Response Time: 100-150ms avg (improved distribution)

Scaling Behavior

  • Scale-Up Time: ~60 seconds (pod creation + readiness)
  • Scale-Down Delay: 5 minutes (HPA stabilization)
  • Max Replicas Tested: 10 (from HPA config)

🔗 Real-World Applications

This Kubernetes deployment demonstrates skills directly applicable to:

Industries

  • Cloud Providers - AWS EKS, Azure AKS, Google GKE management
  • SaaS Companies - Multi-tenant application deployment
  • Financial Services - Highly available trading platforms
  • E-commerce - Black Friday auto-scaling infrastructure
  • Media Streaming - Content delivery platform orchestration

Technologies

  • Managed Kubernetes - EKS, AKS, GKE
  • Service Mesh - Istio, Linkerd (next step)
  • GitOps - ArgoCD, Flux for CD
  • Observability - Datadog, New Relic, Elastic Stack
  • Policy Enforcement - OPA (Open Policy Agent)

Roles

  • Cloud Engineer / DevOps Engineer
  • Site Reliability Engineer (SRE)
  • Kubernetes Administrator
  • Platform Engineer
  • Cloud Architect

🚀 Future Enhancements

Production-ready extensions:

  • Ingress Controller - NGINX for L7 routing
  • TLS/SSL - Cert-manager for HTTPS
  • Persistent Storage - StatefulSets for databases
  • Secret Management - Sealed Secrets, Vault integration
  • Service Mesh - Istio for advanced traffic management
  • CI/CD Pipeline - GitHub Actions + ArgoCD
  • Multi-Cluster - Federation for disaster recovery
  • Cost Optimization - Cluster autoscaler, pod preemption
  • Advanced Monitoring - Distributed tracing (Jaeger)
  • Chaos Engineering - Chaos Mesh for resilience testing

📄 Technical Report

Full documentation available: CSC8110_Report_AniketNalawade.pdf

The report includes:

  • Detailed architecture diagrams
  • Step-by-step deployment procedures
  • Grafana dashboard screenshots
  • Performance analysis graphs
  • RBAC configuration explanations
  • Troubleshooting methodology

📞 About This Project

Author: Aniket Vinod Nalawade
Student ID: 250535354
Institution: Newcastle University
Program: MSc Advanced Computer Science
Module: CSC8110 - Cloud Computing
Academic Year: 2025/2026


🏆 Project Highlights

☸️ Full Kubernetes Stack - Dashboard, monitoring, auto-scaling
📦 Helm Packaging - Production-ready application deployment
📊 Complete Observability - Prometheus + Grafana integration
🐍 Custom Load Generator - Python-based performance testing
🔐 RBAC Security - Proper authentication and authorization
⚖️ Auto-scaling - Horizontal Pod Autoscaler demonstration
🎯 Production Patterns - Industry-standard cloud-native practices


Built with ☸️ Kubernetes, 📦 Helm, and 📊 Prometheus
Cloud-native infrastructure for scalable microservices
