This project demonstrates how to evolve a standard HTTP microservice architecture into a zero-trust, mTLS-secured architecture on Kubernetes. It is broken into three distinct cases.
| Case | Description | Cert Rotation | Code Change |
|---|---|---|---|
| Case 1 | Local apps, no Kubernetes | ❌ | ❌ |
| Case 2 | Kubernetes mTLS (cert-manager + CSI) | ✅ (Restart needed) | ❌ |
| Case 3 | Kubernetes mTLS with Auto-Reload & Toggling | ✅ (No restart) | ✅ (Small change) |
This repository demonstrates three phases of application development:

- Phase 1: Local Baseline (Case 1)
  - Standard local Python development.
  - The frontend and backend services communicate over plain, unencrypted HTTP.
  - This serves as our non-secure baseline.
- Phase 2: Lift-and-Shift to mTLS (Case 2)
  - The exact same application code is containerized and deployed to Kubernetes.
  - mTLS is enforced externally by `cert-manager` and the `cert-manager-csi-driver`, which injects certificates directly into the pod's filesystem.
  - Key Behavior: The application is unaware of mTLS and must be manually restarted (`kubectl rollout restart`) to pick up newly rotated certificates. This demonstrates a "lift-and-shift" approach with zero code changes.
- Phase 3: Cloud-Native Auto-Reload & Toggling (Case 3)
  - A minimal, cloud-native code change is introduced.
  - The Python applications are enhanced to:
    - Watch the certificate files on disk and reload them without a restart.
    - Read a `USE_MTLS` environment variable to toggle between mTLS and plain HTTP, allowing for final-state validation.
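The Phase 3 enhancement can be sketched in plain Python. This is a minimal illustration, not the project's actual code: the `CERT_DIR` default, the filenames, and the mtime-based change detection are assumptions to adjust to your CSI volume definition.

```python
import os
import ssl

# Assumed mount path and filenames -- match these to your CSI volume definition.
CERT_DIR = os.environ.get("CERT_DIR", "/var/run/secrets/mtls")
CERT_FILES = ("tls.crt", "tls.key", "ca.crt")

def mtls_enabled() -> bool:
    """Read the USE_MTLS toggle; anything other than 'true' means plain HTTP."""
    return os.environ.get("USE_MTLS", "false").strip().lower() == "true"

class CertWatcher:
    """Detects on-disk certificate rotation by comparing file mtimes."""

    def __init__(self, cert_dir: str = CERT_DIR):
        self.cert_dir = cert_dir
        self._seen = self._snapshot()

    def _snapshot(self) -> dict:
        snap = {}
        for name in CERT_FILES:
            path = os.path.join(self.cert_dir, name)
            snap[name] = os.stat(path).st_mtime if os.path.exists(path) else None
        return snap

    def rotated(self) -> bool:
        """Return True exactly once per on-disk change."""
        current = self._snapshot()
        if current != self._seen:
            self._seen = current
            return True
        return False

def build_server_context(cert_dir: str = CERT_DIR) -> ssl.SSLContext:
    """Rebuild the server SSLContext; call this whenever rotated() reports True."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(os.path.join(cert_dir, "tls.crt"),
                        os.path.join(cert_dir, "tls.key"))
    ctx.load_verify_locations(os.path.join(cert_dir, "ca.crt"))
    ctx.verify_mode = ssl.CERT_REQUIRED  # require a client certificate: mutual TLS
    return ctx
```

A periodic check of `CertWatcher.rotated()` (for example, once per request or on a timer) is enough to swap in a fresh context without restarting the pod.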
You will need the following tools installed:

- Minikube
- Docker
- kubectl
- Helm
- Python 3.9+
- Wireshark (for final validation)
- Start Minikube

  ```bash
  minikube start
  ```

- Point Docker to Minikube. This ensures your `docker build` commands build images inside Minikube's runtime.

  For PowerShell:

  ```powershell
  minikube docker-env | Invoke-Expression
  ```

  For macOS/Linux (bash/zsh):

  ```bash
  eval $(minikube docker-env)
  ```
This runs the apps locally as standard Python scripts.
- Run Backend (in Terminal 1)

  ```bash
  cd backend
  python app.py
  ```

- Run Frontend (in Terminal 2)

  ```bash
  cd frontend
  python app.py
  ```

- Run Test Client (in Terminal 3). From the project root directory:

  ```bash
  python test_client.py
  ```

Expected Output: You should see logs showing successful plaintext HTTP communication.
<img width="851" height="202" alt="image" src="https://github.com/user-attachments/assets/180638a3-1ce0-488b-beae-6885edd79b14" />
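The plaintext hop can be reproduced with nothing but the standard library. This is a minimal sketch, not the repository's actual apps: the `/process` endpoint and the echo behavior are illustrative assumptions, while the `{"request_id": "1"}` payload matches what later shows up in the packet capture.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class EchoHandler(BaseHTTPRequestHandler):
    """Stand-in for the backend: echoes the JSON body back over plain HTTP."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep demo output quiet

def post_json(url: str, payload: dict) -> dict:
    """POST a JSON payload and decode the JSON response (no TLS anywhere)."""
    req = Request(url, data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/process"
    print(post_json(url, {"request_id": "1"}))  # payload travels in cleartext
    server.shutdown()
```

Anything sniffing this connection sees the request line, headers, and JSON body verbatim, which is exactly what the Wireshark capture in Case 3 demonstrates.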
This deploys the original, unchanged application to Kubernetes and enforces mTLS.
- Install cert-manager & CSI Driver

  ```bash
  helm repo add jetstack https://charts.jetstack.io
  helm repo update

  # Install cert-manager
  helm install cert-manager jetstack/cert-manager \
    --namespace cert-manager \
    --create-namespace \
    --set installCRDs=true

  # Install the CSI driver
  helm install cert-manager-csi-driver jetstack/cert-manager-csi-driver \
    --namespace cert-manager
  ```

- Verify cert-manager Pods. Wait until all pods are in a `Running` state.

  ```bash
  kubectl get pods -n cert-manager
  ```
  <img width="647" height="93" alt="image" src="https://github.com/user-attachments/assets/397c411f-0ca7-47cb-9527-f07deb80ed33" />

- Build Docker Images (Case 2). Ensure you are in the project root.

  ```bash
  docker build -t py-backend:gunicorn .
  docker build -t py-mtls-frontend:phase1-persistent .
  ```
- Deploy mTLS Infrastructure & Apps

  ```bash
  kubectl apply -f ca-issuer.yaml
  kubectl apply -f backend.yaml
  kubectl apply -f frontend.yaml
  ```

  (Note: Ensure the YAMLs point to the `py-backend:gunicorn` and `py-mtls-frontend:phase1-persistent` images you just built.)
- Test the Application. In a new terminal, start port forwarding:

  ```bash
  kubectl port-forward svc/frontend-svc 8080:8080
  ```

  In another terminal, run the client:

  ```bash
  python test_client.py
  ```

  Expected Output: You should see successful mTLS communication.
- Demonstrate Certificate Rotation (Manual Restart). To force the pods to pick up new certs, you must restart them:

  ```bash
  kubectl rollout restart deployment frontend-deployment backend-deployment
  ```

  <img width="893" height="279" alt="image" src="https://github.com/user-attachments/assets/ed851272-294a-4c90-977e-e409ea157dea" />
  <img width="887" height="314" alt="image" src="https://github.com/user-attachments/assets/90e32b1c-1918-47d3-ac08-c1caaca9cbff" />
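For reference, the client's half of this mutual handshake corresponds to an SSL context roughly like the following. This is a sketch with assumed paths (the CSI driver's mount location for the frontend), not the project's actual code:

```python
import ssl

# Assumed paths: adjust to wherever the CSI driver mounts the client's certs.
CA_PATH = "/etc/tls/ca.crt"
CERT_PATH = "/etc/tls/tls.crt"
KEY_PATH = "/etc/tls/tls.key"

def build_client_context(ca_path: str = CA_PATH,
                         cert_path: str = CERT_PATH,
                         key_path: str = KEY_PATH) -> ssl.SSLContext:
    """Client side of mutual TLS: verify the server against our private CA
    and load our own cert/key to answer the server's CertificateRequest."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False           # demo certs may not carry a matching SAN
    ctx.verify_mode = ssl.CERT_REQUIRED  # still verify the *server's* certificate
    ctx.load_verify_locations(ca_path)   # trust only the demo CA, not system roots
    ctx.load_cert_chain(cert_path, key_path)
    return ctx
```

Without the `load_cert_chain` call, the connection fails at handshake time, which is exactly what the rogue-pod test later in this README demonstrates.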
This deploys the updated application code that can reload certs and be toggled for validation.
- Reset Environment

  ```bash
  kubectl delete deployment frontend-deployment --ignore-not-found=true
  kubectl delete deployment backend-deployment --ignore-not-found=true
  kubectl delete service frontend-svc --ignore-not-found=true
  kubectl delete service backend-svc --ignore-not-found=true
  kubectl delete certificate demo-ca --ignore-not-found=true
  kubectl delete issuer demo-ca --ignore-not-found=true
  kubectl delete clusterissuer selfsigned-ca --ignore-not-found=true
  ```
- Build Updated Auto-Reload Images (Case 3). Ensure you are in the project root.

  ```bash
  docker build --no-cache -t py-mtls-frontend:case3 .
  docker build --no-cache -t py-mtls-backend:case3 .
  ```
- Deploy Applications. (Note: Ensure your `frontend.yaml` and `backend.yaml` files are updated to use the `:case3` image tags.)

  ```bash
  kubectl apply -f ca-issuer.yaml
  kubectl apply -f backend.yaml
  kubectl apply -f frontend.yaml
  ```
Final Proof: Packet Capture (mTLS vs. HTTP)

Step 4.1: Get Backend Service IP. Copy the IP address returned by this command; you will need it for the capture filters.

```bash
kubectl get svc backend-svc -o jsonpath='{.spec.clusterIP}'
```

Step 4.2: Capture Plain HTTP (mTLS=false). First, we set the apps to `USE_MTLS=false` mode.

```bash
kubectl set env deploy/frontend-deployment USE_MTLS=false
kubectl set env deploy/backend-deployment USE_MTLS=false
kubectl rollout restart deploy/frontend-deployment
kubectl rollout restart deploy/backend-deployment
```

Wait 20-30 seconds for the pods to restart. In a new terminal, start the capture (replace `<BACKEND_IP>`):

```bash
minikube ssh -- sudo tcpdump -i any -w ~/plain.pcap "host <BACKEND_IP> and port 8080"
```
While `tcpdump` is running, generate traffic. In a new terminal, start port forwarding:

```bash
kubectl port-forward svc/frontend-svc 8080:8080
```

In another terminal, run the client:

```bash
python test_client.py
```

Let it run for 5-10 seconds, then stop both the `tcpdump` and the `test_client` (Ctrl+C).

Step 4.3: Capture Encrypted mTLS (mTLS=true). Now, we flip the flag to `USE_MTLS=true`.

```bash
kubectl set env deploy/frontend-deployment USE_MTLS=true
kubectl set env deploy/backend-deployment USE_MTLS=true
kubectl rollout restart deploy/frontend-deployment
kubectl rollout restart deploy/backend-deployment
```

Wait 20-30 seconds. In your `tcpdump` terminal, start a new capture:

```bash
minikube ssh -- sudo tcpdump -i any -w ~/mtls.pcap "host <BACKEND_IP> and port 8080"
```
While `tcpdump` is running, generate traffic again. In a new terminal, start port forwarding:

```bash
kubectl port-forward svc/frontend-svc 8080:8080
```

In another terminal, run the client:

```bash
python test_client.py
```

Let it run for 5-10 seconds, then stop both `tcpdump` and the `test_client`.

Step 4.4: Copy Capture Files. Copy the `.pcap` files from the Minikube node to your local machine:

```bash
minikube cp minikube:/home/docker/plain.pcap plain.pcap
minikube cp minikube:/home/docker/mtls.pcap mtls.pcap
```
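As an optional sanity check before opening Wireshark, the two captures can be eyeballed from Python. This is a rough heuristic sketch (it scans raw packet bytes rather than parsing protocols) and assumes tcpdump's classic pcap output format:

```python
import struct

def pcap_packets(path: str):
    """Yield raw captured packet bytes from a classic libpcap file (tcpdump -w)."""
    with open(path, "rb") as f:
        header = f.read(24)  # global header: magic, version, tz, sigfigs, snaplen, linktype
        if len(header) < 24:
            return
        magic = struct.unpack("<I", header[:4])[0]
        if magic in (0xA1B2C3D4, 0xA1B23C4D):      # microsecond / nanosecond variants
            endian = "<"
        elif magic in (0xD4C3B2A1, 0x4D3CB2A1):
            endian = ">"
        else:
            raise ValueError("not a classic pcap file (pcapng is not handled here)")
        while True:
            rec = f.read(16)  # record header: ts_sec, ts_usec, incl_len, orig_len
            if len(rec) < 16:
                break
            _, _, incl_len, _ = struct.unpack(endian + "IIII", rec)
            yield f.read(incl_len)

def looks_like_tls(path: str) -> bool:
    """Heuristic: TLS records begin with content type 0x16 (handshake), version 0x03xx."""
    return any(b"\x16\x03" in pkt for pkt in pcap_packets(path))

def looks_like_plain_http(path: str) -> bool:
    """Heuristic: cleartext HTTP leaves readable method and status lines in the payload."""
    return any(b"HTTP/1." in pkt or b"POST " in pkt for pkt in pcap_packets(path))
```

Expect `looks_like_plain_http("plain.pcap")` to hold for the first capture and `looks_like_tls("mtls.pcap")` for the second; Wireshark remains the authoritative view.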
Step 4.5: Analyze in Wireshark. Open both files in Wireshark. For `mtls.pcap`, you must right-click a packet -> Decode As... -> set port 8080 to "SSL" to see the TLS handshake.
| Mode | plain.pcap | mtls.pcap |
|---|---|---|
| Filter | `http` | `tls` |
| Result | Readable HTTP POST requests. | Client Hello, Server Hello, CertificateRequest. |
| Payload | JSON is visible: `{"request_id": "1"}` | Application Data (Encrypted). |
<img width="1920" height="740" alt="image" src="https://github.com/user-attachments/assets/d86f329e-285a-439a-b5cb-91d931ab9935" />
<img width="1920" height="797" alt="image" src="https://github.com/user-attachments/assets/577029a6-d253-4c02-a7d7-1c1d1ace01fd" />
These steps are not required for the demo but are useful for debugging and proving the system is working as expected.
This test proves that our mTLS setup enforces a "zero-trust" policy by blocking unauthorized clients.
Prerequisite: This test must be run while CASE 3 is deployed and running with USE_MTLS=true.
- Deploy the Rogue Pod. This pod does not have the CSI driver and therefore has no client certificates.

  ```bash
  kubectl run rogue-client --image=alpine --rm -it -- sh
  ```

- Install curl (Inside the Pod Shell). Once you are inside the pod's shell (`/ #`), install `curl`:

  ```bash
  apk update && apk add curl
  ```

- Attempt to Access the Backend. Try to communicate directly with the `backend-svc`. (Note: our backend service is running on port `8080`.)

  ```bash
  curl -v -k https://backend-svc:8080
  ```

- Analyze the Result (Proof of Failure). The command will fail during the SSL/TLS handshake. This is the correct and desired behavior. The `backend-svc` (running Gunicorn in mTLS mode) sent a `CertificateRequest`, but our "rogue-client" had no certificate to provide.

  This proves that only pods with valid, CA-signed client certificates (like our `frontend` pod) are authorized to communicate.

  Type `exit` to close the rogue pod's shell.
You can pull the certificates directly from the running pods to inspect their details (like expiration time, common name, etc.).
- Get the Exact Pod Names. Run this and copy the full names of your frontend and backend pods.

  ```bash
  kubectl get pods
  ```

- Extract Certificates. Paste the full pod names into the commands below.

  Extract the backend cert:

  ```bash
  kubectl exec <YOUR_BACKEND_POD_NAME_HERE> -- cat /var/run/secrets/mtls/tls.crt > backend-cert.crt
  ```

  Extract the frontend cert:

  ```bash
  kubectl exec <YOUR_FRONTEND_POD_NAME_HERE> -- cat /etc/tls/tls.crt > frontend-cert.crt
  ```

- Inspect with OpenSSL (Optional). You can now read the contents of the saved certificates. This is useful to confirm the `Common Name` or check the `Not After` timestamp to verify rotation.

  ```bash
  openssl x509 -in backend-cert.crt -text -noout
  openssl x509 -in frontend-cert.crt -text -noout
  ```
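If you would rather script the rotation check, the `Not After` string printed by openssl can be converted to a remaining lifetime with the standard library's `ssl.cert_time_to_seconds`. A small sketch; the timestamp strings below are illustrative, not from a real certificate:

```python
import ssl
from datetime import datetime, timezone
from typing import Optional

def seconds_until_expiry(not_after: str, now: Optional[datetime] = None) -> float:
    """Convert an openssl-style 'Not After' string (e.g. 'Jun  4 12:00:00 2030 GMT')
    into the number of seconds remaining from `now` (default: current UTC time)."""
    expiry_ts = ssl.cert_time_to_seconds(not_after)
    now_ts = (now or datetime.now(timezone.utc)).timestamp()
    return expiry_ts - now_ts
```

Running this against the extracted cert before and after a rotation should show the `Not After` timestamp jump forward, confirming that a fresh certificate was issued.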