- Section 04: Orchestrating Collections of Services with Kubernetes
- Table of Contents
- Installing Kubernetes
- Explain Kubernetes to me like I'm Five
- A Kubernetes Tour
- Important Kubernetes Terminology
- Notes on Config Files
- Creating a Pod
- Understanding a Pod Spec
- Common Kubectl Commands
- Introducing Deployments
- Creating a Deployment
- Common Commands Around Deployments
- Updating Deployments
- Preferred Method for Updating Deployments
- Networking With Services
- Creating a NodePort Service
- Accessing NodePort Services
- Setting Up Cluster IP Services
- Building a Deployment for the Event Bus
- Adding ClusterIP Services
- How to Communicate Between Services
- Updating Service Addresses
- Verifying Communication
- Adding Query, Moderation and Comments
- Testing Communication
- Load Balancer Services
- Load Balancers and Ingress
- Installing Ingress-Nginx
- Writing Ingress Config Files
- Hosts File Tweak
- Quick Note
- Deploying the React App
- Unique Route Paths
- Final Route Config
- Introducing Skaffold
- Skaffold Setup
- First Time Skaffold Startup
- A Few Notes on Skaffold
Kubernetes Setup
- Running Docker Desktop for Mac/Windows? Kubernetes is built in - just enable it in the Docker Desktop preferences. Yay, so easy

- Running Docker-Toolbox or Linux? kubernetes.io/docs/tasks/tools/install-minikube/
Kubernetes - Explained Like You're Five
Docker images: think of them as blueprints - for example, a blueprint for creating a cow.
Docker daemon: think of it as a corral for letting the cows run wild.
Docker swarm (and Kubernetes): think of it as a rancher that manages the cows.
Let's say you create many cows (docker containers) with the same blueprint (docker image) and let the cows do their thing in the corral (docker daemon).
You have all the dairy cows in one place, but it's getting pretty crowded; they're eating everything around them (resources), and you need to redistribute them to other areas or they will die.
You hire a rancher named Kubernetes and tell him about all the other corrals (nodes). The rancher checks each corral's capacity (resources). The rancher takes care of moving the cows around to more abundant areas when a corral runs low on food, and he also creates new cows for you if any cows die for any reason.
The rancher is responsible for running your cattle ranch as efficiently as possible and for scaling it, as long as you tell him all the locations he's allowed to move cows to. You can also tell him to grow the ranch only to a certain size, or to scale it dynamically to produce more milk based on the population's demand for dairy (auto-scaling).
```shell
kubectl version
```

Create Docker Image
| Keyword | Meaning |
|---|---|
| Kubernetes Cluster | A collection of nodes + a master to manage them |
| Node | A virtual machine that will run our containers |
| Pod | More or less a running container. Technically, a pod can run multiple containers (we won't do this) |
| Deployment | Monitors a set of pods, makes sure they are running, and restarts them if they crash |
| Service | Provides an easy-to-remember URL to access a running container |
- Tells Kubernetes about the different Deployments, Pods, and Services (referred to as 'Objects') that we want to create
- Written in YAML syntax
- Always store these files with our project source code - they are documentation!
- We can create Objects without config files - do not do this. Config files provide a precise definition of what your cluster is running.
- Kubernetes docs will tell you to run direct commands to create objects - only do this for testing purposes
- Blog posts will tell you to run direct commands to create objects - close the blog post!
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: posts
spec:
  containers:
    - name: posts
      image: chesterheng/posts:0.0.1
```

```shell
cd section-04/blog/posts/
docker build -t chesterheng/posts:0.0.1 .
cd ../infra/k8s/
kubectl apply -f posts.yaml
kubectl get pods
```

| Configuration Parameters | Notes |
|---|---|
| apiVersion: v1 | K8s is extensible - we can add in our own custom objects. This specifies the set of objects we want K8s to look at |
| kind: Pod | The type of object we want to create |
| metadata: | Config options for the object we are about to create |
| name: posts | When the pod is created, give it a name of 'posts' |
| spec: | The exact attributes we want to apply to the object we are about to create |
| containers: | We can create many containers in a single pod |
| - name: posts | Make a container with a name of 'posts' |
| image: chesterheng/posts:0.0.1 | The exact image we want to use |
| Docker World | K8s World |
|---|---|
| docker ps | kubectl get pods |
| docker exec -it [container id] [cmd] | kubectl exec -it [pod_name] [cmd] |
| docker logs [container id] | kubectl logs [pod_name] |
| K8s Commands | Explanation |
|---|---|
| kubectl get pods | Print out information about all of the running pods |
| kubectl exec -it [pod_name] [cmd] | Execute the given command in a running pod |
| kubectl logs [pod_name] | Print out logs from the given pod |
| kubectl delete pod [pod_name] | Deletes the given pod |
| kubectl apply -f [config file name] | Tells kubernetes to process the config |
| kubectl describe pod [pod_name] | Print out some information about the running pod |
```shell
cd section-04/blog/infra/k8s/
kubectl apply -f posts.yaml
kubectl get pods
kubectl exec -it posts sh
kubectl logs posts
kubectl delete pod posts
kubectl get pods
kubectl apply -f posts.yaml
kubectl get pods
kubectl describe pod posts
```

Whenever I create a Kubernetes deployment, will it automatically download the image from Docker Hub?
- It depends on the ImagePullPolicy of the Pod
- The default pull policy is IfNotPresent
- It will try to download the image if it’s not already present on the node
- If your image is qualified with a custom registry it will try to download it from this custom registry and may use an imagePullSecret to do so
- Refer to the Kubernetes docs on Images for more info.
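The bullet points above can be sketched directly in a pod spec. This is a hypothetical fragment for illustration - the `imagePullPolicy` values shown are the standard Kubernetes ones:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: posts
spec:
  containers:
    - name: posts
      image: chesterheng/posts:0.0.1
      # IfNotPresent is the default for images with a specific tag:
      # the node only pulls if the image is missing locally.
      # Other valid values: Always, Never.
      imagePullPolicy: IfNotPresent
```

Note that images tagged `:latest` (or with no tag at all) default to `Always` instead, which is why the rollout-restart approach in Method #2 below picks up newly pushed images.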
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
        - name: posts
          image: chesterheng/posts:0.0.1
```

```shell
cd section-04/blog/infra/k8s/
kubectl apply -f posts-depl.yaml
```

| Deployment Commands | Explanation |
|---|---|
| kubectl get deployments | List all the running deployments |
| kubectl describe deployment [depl name] | Print out details about a specific deployment |
| kubectl apply -f [config file name] | Create a deployment out of a config file |
| kubectl delete deployment [depl_name] | Delete a deployment |
| kubectl rollout restart deployment [depl_name] | Get a deployment to restart all pods. Will use latest version of an image if the pod spec has a tag of 'latest' |
```shell
cd section-04/blog/infra/k8s/
kubectl get deployments
kubectl get pods
kubectl delete pods posts-depl-75767489d-jrzxw
kubectl describe deployment posts-depl
kubectl delete deployment posts-depl
kubectl apply -f posts-depl.yaml
kubectl get deployments
kubectl get pods
```

Updating the Image Used By a Deployment - Method #1
- Step 1 - Make a change to your project code
- Step 2 - Rebuild the image, specifying a new image version

```shell
cd section-04/blog/posts/
docker build -t chesterheng/posts:0.0.5 .
```

- Step 3 - In the deployment config file, update the version of the image
- Step 4 - Run the command: kubectl apply -f [depl file name]

```shell
cd ../infra/k8s
kubectl apply -f posts-depl.yaml
kubectl get deployments
kubectl get pods
kubectl logs posts-depl-cf87458cd-lrn6f
```

Updating the Image Used By a Deployment - Method #2
- Step 1 - The deployment must be using the 'latest' tag in the pod spec section
  - image: chesterheng/posts:latest or
  - image: chesterheng/posts

```shell
cd section-04/blog/infra/k8s
kubectl apply -f posts-depl.yaml
kubectl get deployments
```

- Step 2 - Make an update to your code
- Step 3 - Build the image

```shell
cd section-04/blog/posts
docker build -t chesterheng/posts .
```

- Step 4 - Push the image to Docker Hub

```shell
docker login
docker push chesterheng/posts
```

- Step 5 - Run the command: kubectl rollout restart deployment [depl_name]

```shell
kubectl get deployments
kubectl rollout restart deployment posts-depl
kubectl get deployments
kubectl get pods
kubectl logs posts-depl-6947b4f9c-t5zx5
```

| Types of Services | Explanation |
|---|---|
| Cluster IP | Sets up an easy-to-remember URL to access a pod. Only exposes pods in the cluster |
| Node Port | Makes a pod accessible from outside the cluster. Usually only used for dev purposes |
| Load Balancer | Makes a pod accessible from outside the cluster. This is the right way to expose a pod to the outside world |
| External Name | Redirects an in-cluster request to a CNAME URL. Don't worry about this one |
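As a sketch of how little the config differs between the first two types: the NodePort spec shown below becomes a ClusterIP service just by changing (or omitting) the `type` field, since ClusterIP is the default. A hypothetical example for the posts pod - the service name here is made up for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: posts-clusterip-srv   # hypothetical name for illustration
spec:
  type: ClusterIP             # the default - this line can be omitted entirely
  selector:
    app: posts
  ports:
    - name: posts
      protocol: TCP
      port: 4000
      targetPort: 4000
```

Unlike a NodePort service, this allocates no port on the node itself; the service is reachable only from inside the cluster, at `http://posts-clusterip-srv:4000`.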
Cluster IP - communication between pods inside the cluster (node-to-node communication)
Node Port - makes a pod accessible from outside the cluster (outside-to-node communication)
```yaml
apiVersion: v1
kind: Service
metadata:
  name: posts-srv
spec:
  type: NodePort
  selector:
    app: posts
  ports:
    - name: posts
      protocol: TCP
      port: 4000
      targetPort: 4000
```

```shell
cd section-04/blog/infra/k8s/
kubectl apply -f posts-srv.yaml
kubectl get services
kubectl describe service posts-srv
kubectl get pods
kubectl get svc -A
curl localhost:30692/posts
```

Error
- curl: (7) Failed to connect to localhost port 30692: Connection refused
How to troubleshoot?

#1: Test whether your container exposes your app on http://localhost:4000

```shell
kubectl exec -it posts-depl-6947b4f9c-qbn4h sh
/app # apk add curl
/app # curl localhost:4000/posts
```

#2: Test whether your Kubernetes service is ok

```shell
kubectl exec -it posts-depl-6947b4f9c-qbn4h sh
/app # apk add curl
/app # curl http://posts-srv:4000/posts
```

#3: Check if the problem is coming from kube-proxy

```shell
kubectl get pods -n kube-system
kubectl describe pod vpnkit-controller -n kube-system
```

The node was low on resource: ephemeral-storage. Container vpnkit-controller was using 84Ki, which exceeds its request of 0.

```
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-b74mq                 1/1     Running   0          21d
coredns-5644d7b6d9-vrjfp                 1/1     Running   0          21d
etcd-docker-desktop                      1/1     Running   0          21d
kube-apiserver-docker-desktop            1/1     Running   0          21d
kube-controller-manager-docker-desktop   1/1     Running   0          21d
kube-proxy-k9wcv                         1/1     Running   0          21d
kube-scheduler-docker-desktop            1/1     Running   0          21d
storage-provisioner                      0/1     Evicted   0          21d
vpnkit-controller                        0/1     Evicted   0          21d
```

#4: Restart the evicted vpnkit-controller and storage-provisioner and resolve the docker-desktop bug

<!-- check why vpnkit-controller is evicted? -->

```shell
kubectl describe pod vpnkit-controller -n kube-system
kubectl delete po storage-provisioner vpnkit-controller -n kube-system
```

- Disable and re-enable the Kubernetes integration (otherwise the pods are not redeployed)
- If this doesn't work, you can still hit the big red button "Reset Kubernetes Cluster", but you'll have to redeploy your descriptors (deployment and service)
Goals Moving Forward
- Build an image for the Event Bus

```shell
cd section-04/blog/event-bus
docker build -t chesterheng/event-bus .
```

- Push the image to Docker Hub

```shell
cd section-04/blog/event-bus
docker push chesterheng/event-bus
```

- Create a deployment for Event Bus

```shell
cd section-04/blog/infra/k8s/
kubectl apply -f event-bus-depl.yaml
kubectl get pods
```

- Create a Cluster IP service for Event Bus and Posts
- Wire it all up!
Goals Moving Forward
- Build an image for the Event Bus
- Push the image to Docker Hub
- Create a deployment for Event Bus
- Create a Cluster IP service for Event Bus and Posts

```shell
cd section-04/blog/infra/k8s/
kubectl apply -f event-bus-depl.yaml
kubectl apply -f posts-depl.yaml
kubectl get services
```

- Wire it all up!
Goals Moving Forward
- Build an image for the Event Bus
- Push the image to Docker Hub
- Create a deployment for Event Bus
- Create a Cluster IP service for Event Bus and Posts
- Wire it all up! 
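The notes above don't show the Event Bus ClusterIP config itself, so here is a minimal sketch following the same pattern as posts-srv. The service name, labels, and port 4005 are assumptions - check event-bus-depl.yaml and the event-bus code for the actual label and the port it listens on:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: event-bus-srv
spec:
  type: ClusterIP
  selector:
    app: event-bus      # must match the pod labels in event-bus-depl.yaml
  ports:
    - name: event-bus
      protocol: TCP
      port: 4005        # assumed port - match whatever the event bus listens on
      targetPort: 4005
```

With a service like this in place, the posts pod can reach the event bus from inside the cluster at `http://event-bus-srv:4005` - that's the "wire it all up" step: replace any `http://localhost:...` addresses in the services' code with the ClusterIP service names.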