Welcome to this collection of Kubernetes commands and learnings. This README contains a compilation of useful commands for managing Kubernetes clusters and understanding its core concepts.
- Get all nodes in the cluster: `kubectl get nodes`
- Check the Kubernetes version: `kubectl version`
- Get detailed information about nodes: `kubectl get nodes -o wide`
- List all pods: `kubectl get pods`
- Get information about a specific pod: `kubectl get pods <pod>`
- Create a new pod with an image: `kubectl run nginx --image=nginx`
- Describe a pod: `kubectl describe pod <pod-name>`
- Check the image used in a pod: `kubectl describe pod <pod-name> | grep -i image`
- Delete a pod: `kubectl delete pod <pod>`
- Apply or create a pod from a YAML file: `kubectl apply -f <file>`
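For reference, a minimal pod definition file that `kubectl apply -f` could consume — a sketch with assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx        # assumed pod name
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx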
- Create a ReplicationController or ReplicaSet from a definition file: `kubectl create -f <definition-file.yaml>`
- Get ReplicaSets: `kubectl get replicaset`
- Check the API version for ReplicaSets: `kubectl explain replicaset | grep VERSION`
- Replace or update an existing definition: `kubectl replace -f <definition-file.yaml>`
- Scale replicas from the command line: `kubectl scale --replicas=<count> -f <definition-file.yaml>`
- Delete a ReplicaSet and its underlying pods: `kubectl delete replicaset <replicaset-name>`
- Describe a ReplicaSet: `kubectl describe replicaset <replicaset-name>`
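A minimal ReplicaSet definition file the commands above could operate on — a sketch; the names and replica count are assumptions:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset   # assumed name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp         # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx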
- Get deployments: `kubectl get deployments`
- Describe a deployment: `kubectl describe deployment <deployment-name>`
- Check deployment rollout status: `kubectl rollout status deployment/<deployment-name>`
- View deployment rollout history: `kubectl rollout history deployment/<deployment-name>`
- Change the image of a deployment: `kubectl set image deployment/<deployment-name> nginx=nginx:1.9.1`
- Undo deployment changes: `kubectl rollout undo deployment/myapp-deployment`
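A minimal deployment definition file matching the commands above — a sketch; the names are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1   # the image the `kubectl set image` example updates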
- Get services: `kubectl get services`
- Describe a service: `kubectl describe service <service-name>`
- Expose a pod as a service: `kubectl expose pod <pod> --port=6379 --name=redis-service --dry-run=client -o yaml`
- Create a namespace: `kubectl create namespace <name>`
- Set a specific namespace as the default: `kubectl config set-context $(kubectl config current-context) --namespace=<namespace-name>`
- Get pods in a specific namespace: `kubectl get pods --namespace=<namespace-name>`
- Get all pods in all namespaces: `kubectl get pods --all-namespaces`
- Filter pods by label: `kubectl get pods --selector app=App1`
- Taint a node: `kubectl taint nodes <node-name> key=value:taint-effect` (where taint-effect is one of `NoSchedule`, `PreferNoSchedule`, or `NoExecute`)
- Add tolerations to a pod: include them in the pod's `spec`
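A toleration matching the taint above might be added to the pod's spec like this — a sketch; the key/value pair is the placeholder from the taint command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:            # values must match the node's taint
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"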
- Label a node: `kubectl label nodes <node-name> <label-key>=<label-value>`
- Define node affinity in the pod's `spec`
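Node affinity in a pod's spec might be sketched like this, assuming the node was labeled with the placeholder key/value above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: label-key        # placeholder label key
            operator: In
            values:
            - label-value         # placeholder label value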
- Get DaemonSets: `kubectl get daemonsets`
- Describe a DaemonSet: `kubectl describe daemonset <daemonset-name>`
- Start the metrics server on Minikube: `minikube addons enable metrics-server`
- On other environments, deploy the metrics server manually
- Get logs (follow): `kubectl logs -f <pod-name> <container-name>`
- Check rollout status: `kubectl rollout status deployment/<deployment-name>`
- View rollout history: `kubectl rollout history deployment/<deployment-name>`
- Undo a change: `kubectl rollout undo deployment/<deployment-name>`
- Specify commands and arguments in a pod's spec
- Set environment variables: in the pod's spec, use `env` or `envFrom`
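A sketch of a pod spec combining both ideas — the image, command, and values are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep"]   # overrides the image's ENTRYPOINT
    args: ["5000"]       # overrides the image's CMD
    env:                 # plain key/value environment variable
    - name: APP_COLOR
      value: blue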
- Create a ConfigMap: `kubectl create configmap <config-name> --from-literal=<key>=<value>`
- Use the ConfigMap in the pod's spec
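One way to consume the ConfigMap in a pod's spec — a sketch assuming a ConfigMap named `app-config`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config   # assumed ConfigMap name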
- Create a Secret: `kubectl create secret generic <secret-name> --from-literal=<key>=<value>`
- Use Secrets in the pod's spec
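Similarly, a sketch of injecting the Secret as environment variables, assuming a Secret named `app-secret`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    envFrom:
    - secretRef:
        name: app-secret   # assumed Secret name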
This section provides an overview of Kubernetes maintenance, security practices, and TLS certificate management. Be sure to refer to the original script for detailed commands and explanations.
During maintenance, you may need to move all the pods from a specific node to other nodes. Here are the steps:
- Empty the node:
  `kubectl drain <node-name> --ignore-daemonsets`
  The `--ignore-daemonsets` flag ignores DaemonSets; draining also marks the node unschedulable, ensuring no new pods are scheduled on it. The command can fail if some pods are not managed by a controller. In that case you can add `--force`, but be cautious: force-deleting unmanaged pods loses them permanently.
- Make a node unschedulable:
  `kubectl cordon <node-name>`
- Make a node schedulable again:
  `kubectl uncordon <node-name>`
To upgrade your Kubernetes cluster, follow these steps:
- Upgrade the master node:
  - Update the package list: `apt update`
  - Install kubeadm with the desired version: `apt install kubeadm=1.20.0-00`
  - Upgrade the Kubernetes control plane: `kubeadm upgrade apply v1.20.0`
  - Update the kubelet version: `apt install kubelet=1.20.0-00`
  - Restart the kubelet: `systemctl restart kubelet`
- Upgrade each worker node:
  - Drain the node and make it unschedulable: `kubectl drain <node-name>`
  - SSH into the node
  - Update the package list: `apt update`
  - Upgrade kubeadm: `apt install kubeadm=1.20.0-00`
  - Upgrade the node configuration: `kubeadm upgrade node`
  - Update the kubelet version: `apt install kubelet=1.20.0-00`
  - Restart the kubelet after the upgrade: `systemctl restart kubelet`
  - Log out of the node
  - Make the node schedulable again: `kubectl uncordon <node-name>`
To take a backup of all resources into a definition file:
`kubectl get all --all-namespaces -o yaml > all-deploy-services.yaml`
For more advanced backup and restore, consider using a tool like Velero to back up your Kubernetes cluster.
Etcd is crucial for storing the state of your cluster. To take an etcd snapshot:
`etcdctl snapshot save snapshot.db`
To restore your cluster from a snapshot:
- Stop kube-apiserver: `service kube-apiserver stop`
- Restore the snapshot: `etcdctl snapshot restore snapshot.db --data-dir /var/lib/etcd-from-backup`
- Reload the service daemon: `systemctl daemon-reload`
- Restart the etcd service: `service etcd restart`
- Start kube-apiserver: `service kube-apiserver start`
Make sure `ETCDCTL_API=3` is set to use etcdctl for backup and restore tasks.
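On a kubeadm-provisioned cluster, etcd usually requires TLS flags for these commands; a sketch using kubeadm's default certificate paths (the paths and snapshot location are assumptions that vary by cluster):

```shell
# Save a snapshot, authenticating to the etcd server over TLS
ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key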
To pass basic username and password to kube-apiserver for authentication, use the --basic-auth-file flag or the --token-auth-file flag for static token files. However, using basic auth is not recommended.
Kubernetes uses TLS certificates to secure communication. You can generate CA certificates, client certificates, and server certificates for secure communication within the cluster.
For detailed steps on generating and managing certificates, refer to the original script.
You can create and manage certificate signing requests (CSR) to authenticate new users and distribute certificates.
For detailed steps on working with CSR, refer to the original script.
The kubeconfig file contains information about clusters, users, and contexts. It's used to configure access to Kubernetes clusters. The default kubeconfig file is located at $HOME/.kube/config.
To switch between configurations, you can use kubectl config use-context.
Remember to secure your TLS certificates and kubeconfig files to ensure the security of your Kubernetes cluster.
All resources are grouped into two categories: core and named.
Named API Groups -> Resources -> Verbs
To list API groups:
`curl https://localhost:6443 -k`
To extract group names:
`curl https://localhost:6443 -k | grep "name"`
To create a proxy to the kube-apiserver that authenticates with your kubeconfig certificates:
`kubectl proxy`
Then use the proxy's port to access the API:
`curl http://localhost:<proxy-server-port> -k`
Authorization modes include Node, ABAC, RBAC, Webhook, AlwaysAllow, and AlwaysDeny. Modes are set on the kube-apiserver:
`--authorization-mode=Node,RBAC,Webhook`
To identify the authorization modes configured on the cluster, describe the kube-apiserver pod and look for the `--authorization-mode` flag:
`kubectl describe pod kube-apiserver-controlplane -n kube-system`
To create roles and role bindings:
- Create a role from a definition file:
  `kubectl create -f role-definition-file.yaml`
  Or imperatively:
  `kubectl create role <role-name> --namespace=<namespace-name> --verb=list,create,delete --resource=pods`
- Link users to the role by creating a role binding:
  `kubectl create -f devuser-developer-binding.yaml`
  Or imperatively:
  `kubectl create rolebinding <role-binding-name> --namespace=<namespace-name> --role=<role-name> --user=<user-name>`
Set the namespace in the metadata of the definition file to scope the role and binding to it.
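Definition files equivalent to the imperative commands above might look like this — a sketch; the names and namespace are assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer            # assumed role name
  namespace: default
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devuser-developer-binding
  namespace: default
subjects:
- kind: User
  name: dev-user             # assumed user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io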
To check whether you are authorized to perform an action on a resource:
`kubectl auth can-i <verb> <resource>`
For example:
`kubectl auth can-i create deployments`
`kubectl auth can-i delete nodes`
As an admin, you can also check other users' permissions:
`kubectl auth can-i create deployment --as dev-user`
`kubectl auth can-i create pod --as john --namespace prod`
Apply a security context at the pod or container level:

```yaml
# Pod level (spec.securityContext)
securityContext:
  runAsUser: 1000
```

```yaml
# Container level (spec.containers[].securityContext)
securityContext:
  runAsUser: 1000
  capabilities:
    add: ["MAC_ADMIN"]
```

Use the kubectl exec command to execute commands within a pod:
`kubectl exec <pod-name> -- <command>`
Network policies control ingress and egress traffic within the cluster.
Key solutions supporting network policies include Kube-router, Calico, Romana, and Weave-net.
To use network policies:
- Apply labels to pods: `kubectl label pods <pod-name> name=payroll`
- Create network policies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-payroll
spec:
  podSelector:
    matchLabels:
      name: payroll
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: hr
```

Think from the perspective of the pod to understand network policies.
To manage storage, you can use Docker volumes, bind mounts, and more.
To create a Docker volume:
`docker volume create volume_name`
This creates a volume under /var/lib/docker/volumes/volume_name.
To mount a volume into a Docker container (Volume Mount - Type 1):
`docker run -v volume_name:/var/lib/mysql mysql`
Here /var/lib/mysql is the directory inside the container where MySQL writes its data; the volume persists it.
The layered structure is:
- Top: Read-Write - Container layer
- Mid: Persistent volumes
- Base: Read-Only - Image layers
To mount a host directory other than the default /var/lib/docker/volumes location, pass its full path. This is called Bind Mounting - Type 2:
`docker run -v /data/mysql:/var/lib/mysql mysql`
The newer, recommended way to mount volumes:
`docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql`
- source = location on the host
- target = location in the container
Storage drivers manage storage for images and containers. Volumes are not handled by storage drivers; they are managed by volume plugins such as Local, Azure File Storage, Convoy, gce-docker, and rexray/ebs.
`docker run -it --name mysql --volume-driver rexray/ebs --mount src=ebs-vol,target=/var/lib/mysql mysql`
The Container Storage Interface (CSI) supports multiple storage solutions and is recommended for dynamic provisioning.
Persistent Volumes (PV) are administrator-created resources.
To add a volume to an existing pod, there are two ways:
- Export its YAML and add `spec.volumes` and `spec.containers[].volumeMounts` to the file:
  `kubectl get po webapp -o yaml > webapp.yaml`
- Create a new YAML with the same image and add the properties above, generating the skeleton with `--dry-run=client -o yaml`
With a PV - Static Provisioning:
The administrator creates a PV; the user then creates a PVC with a definition file.
- When the PVC is deleted, the PV remains.
- To delete the PV along with the PVC, set `persistentVolumeReclaimPolicy: Delete`.
Once you create a PVC, use it in a pod definition file by specifying the claim name under `persistentVolumeClaim` in the `volumes` section.
The same is true for ReplicaSets or Deployments: add this to the pod template section. After deleting a PVC whose PV has the Retain policy, the PV's status becomes Released.
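A PV the administrator might create for this static-provisioning flow — a sketch; the capacity and hostPath are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain   # default; use Delete to remove the PV with the PVC
  hostPath:
    path: /tmp/data    # assumed host directory; not for production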
- Storage Classes are used for dynamic provisioning of storage.
- If there is no provisioner, the provisioning is not dynamic.
Use Storage Classes for dynamic provisioning:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-name
provisioner: example.com/aws-ebs
```

Create Persistent Volume Claims (PVCs):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: sc-name
```

Use PVCs in Pod definitions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  volumes:
  - name: myvolume
    persistentVolumeClaim:
      claimName: myclaim
  containers:
  - name: mycontainer
    image: busybox
    volumeMounts:
    - mountPath: /data
      name: myvolume
```

Remember: behind the scenes a storage class still creates a PV, just automatically.
`volumeBindingMode` - this field controls when volume binding and dynamic provisioning should occur.
To run a command in a container running in a pod:
`kubectl exec <pod-name> -- <command>`
`kubectl exec webapp -- cat /log/app.log`
In the case of multiple containers, specify the container with `-c`:
`kubectl exec my-pod -c container1 -- ls`
- Create a new network namespace on a Linux host:
  `ip netns add <namespace-name>`
- View network interfaces on the host:
  `ip link`
- Move an interface into a specific namespace:
  `ip link set <interface> netns <namespace>`
- Identify the network interface configured for cluster connectivity:
  `ip a | grep -B2 <IP>`
- Identify the interface/bridge created by a container runtime:
  `ip link | grep <container-runtime>`
- Check the MAC address of a worker node in a kubeadm cluster:
  `arp <node-name>`
- Get the IP address of the default gateway:
  `ip route show default`
- Inspect the listening ports of Kubernetes components:
  `netstat -nplt`
  `netstat -nplt | grep kube-scheduler`
  Note: 2379 is the etcd port to which all control plane components connect; 2380 is only for etcd peer-to-peer connectivity.
- The CNI config directory is passed to the kubelet:
  `--cni-conf-dir=/etc/cni/net.d`
- The kubelet executes the CNI script:
  `./net-script.sh add <container> <namespace>`
- Inspect the kubelet process and look for the `--network-plugin` flag:
  `ps aux | grep kubelet`
- View the installed CNI plugin binaries:
  `ls /opt/cni/bin`
- Check which CNI plugin is configured for use:
  `ls /etc/cni/net.d/`
- View a plugin's configuration:
  `cat /etc/cni/net.d/<cni-plugin>`
- Identify the bridge network/interface created by Weave:
  `ip link`
- View the pod IP address range configured by Weave:
  `ip addr`
- Check the default gateway configured on a pod:
  `ip route`
- Get the IP address of Weave:
  `ip addr show weave`
Set the kube-proxy proxy mode:
`kube-proxy --proxy-mode [userspace | iptables | ipvs]`
See the service IP range:
`kube-apiserver --service-cluster-ip-range <ipNet>`
See all rules in the iptables NAT table:
`iptables -L -t nat`
`iptables -L -t nat | grep db-service`
Inspect the kube-proxy log:
`cat /var/log/kube-proxy.log`
Kube-DNS table structure:
| Hostname | Namespaces | Type | Root | IP Address |
|---|---|---|---|---|
| web-service | apps | svc | cluster.local | 10.107.37.188 |
| 10-244-2-5 | apps | pod | cluster.local | 10.244.2.5 |
DNS server name: CoreDNS
To see the CoreDNS Corefile:
`cat /etc/coredns/Corefile`
To find the Corefile in the cluster if it's not in the default location:
`kubectl -n kube-system describe deployments.apps coredns | grep -A2 Args | grep Corefile`
Tip: while troubleshooting, check the namespace or the FQDN (Fully Qualified Domain Name) used in environment variable values.
From pod1, perform an nslookup on the mysql service and redirect the output to a file:
`kubectl exec -it pod1 -- nslookup mysql.payroll > /root/CKA/nslookup.out`
In this example, mysql.payroll is a service name qualified with its namespace.
The Corefile configuration is passed as a ConfigMap object:
`kubectl get configmap -n kube-system`
The IP address of the CoreDNS service is written to each pod's /etc/resolv.conf as the nameserver, together with the search domains. The kubelet handles these configurations:
`cat /var/lib/kubelet/config.yaml`
To check the FQDN for a service:
`host <service-name>`
To see the configuration of the CoreDNS pod:
`kubectl describe pod <coredns-pod-name> -n kube-system`
How Ingress works:
- Deploy - deploy a solution such as nginx, traefik, haproxy, or Istio as an 'Ingress Controller'.
- Configure - create Ingress resources.
An Ingress controller is not enabled by default.
To create an Ingress resource imperatively:
`kubectl create ingress <ingress-name> --rule="host/path=service:port"`
For example:
`kubectl create ingress ingress-test --rule="wear.my-online-store.com/wear*=wear-service:80"`
To follow the logs of a failing pod:
`kubectl logs <pod-name> -f`
To see the logs of the previous instance of a pod:
`kubectl logs <pod-name> -f --previous`
When a worker node stops communicating with the master, it may be due to a crash, and its status is set to Unknown.
- To describe a node and check its status:
  `kubectl describe node <node-name>`
- To check CPU and processes:
  `top`
- To check disks:
  `df -h`
- To check the status of the kubelet:
  `service kubelet status`
- If the kubelet is stopped, start it:
  `service kubelet start`
- To check kubelet logs:
  `sudo journalctl -u kubelet`
- To check kubelet certificates:
  `openssl x509 -in /var/lib/kubelet/worker-1.crt -text`
- Kubelet configuration file:
  `/var/lib/kubelet/config.yaml`
- To check the master server address and port configured on the worker node:
  `/etc/kubernetes/kubelet.conf`
This README provides an overview of various Kubernetes commands and concepts, helping you navigate and manage your Kubernetes clusters effectively.