Q1. Task: Given an existing Kubernetes cluster running version 1.18.8, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.19.0. You are also expected to upgrade kubelet and kubectl on the master node.
kubectl config use-context mk8s
Note: Be sure to drain the master node before upgrading it and uncordon it after the upgrade. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service or any other addons.
Answer
kubectl drain <node-to-drain> --ignore-daemonsets
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.19.0-00
kubeadm version
kubeadm upgrade plan
kubeadm upgrade apply v1.19.0
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.19.0-00 kubectl=1.19.0-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon <node-to-drain>
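A short verification sketch once the node is uncordoned (assuming the upgrade completed cleanly):
```bash
# The control-plane node should now report VERSION v1.19.0 and STATUS Ready
kubectl get nodes
# Control-plane pods should all be Running
kubectl get pods -n kube-system
```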
Q2. Task: From the pods with label name=cpu-user, find the pod running the highest-CPU workload and write the name of the pod consuming the most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
kubectl config use-context k8s
Answer
kubectl top pods -l name=cpu-user -A
echo '<pod-name>' >> /opt/KUTR00401/KUTR00401.txt
kubectl top pod --sort-by='cpu' --no-headers | head -1
or
kubectl top pods -l name=cpu-user --sort-by=cpu
echo '<top-pod-name>' >> /opt/KUTR00401/KUTR00401.txt
or
kubectl top node --sort-by='cpu' --no-headers | head -1
kubectl top pod --sort-by='memory' --no-headers | head -1
kubectl top pod --sort-by='cpu' --no-headers | tail -1
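The variants above can be combined into a single hedged one-liner (a sketch assuming the task's label name=cpu-user; awk simply prints the NAME column):
```bash
# Write the name of the highest-CPU pod with label name=cpu-user straight to the file
kubectl top pods -l name=cpu-user --sort-by=cpu --no-headers \
  | head -1 | awk '{print $1}' > /opt/KUTR00401/KUTR00401.txt
```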
Q3. Task: Check how many nodes are Ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt
kubectl config use-context k8s
Answer
kubectl get nodes
kubectl get nodes --no-headers | grep -w Ready | wc -l
kubectl describe nodes | grep -w Ready | wc -l
kubectl describe nodes | grep -i taint | grep -i noschedule | wc -l
echo 3 > /opt/KUSC00402/kusc00402.txt
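The count can also be scripted end-to-end; a minimal sketch, assuming taints appear on "Taints:" lines in kubectl describe output (verify the arithmetic by eye before writing the file). A JSONPATH alternative follows below.
```bash
# Ready nodes minus nodes carrying a NoSchedule taint
READY=$(kubectl get nodes --no-headers | grep -cw Ready)
NOSCHED=$(kubectl describe nodes | grep Taints | grep -c NoSchedule)
echo $((READY - NOSCHED)) > /opt/KUSC00402/kusc00402.txt
```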
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{"\n"}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" | wc -l
Q4. Task: Monitor the logs of pod foobar and:
- Extract log lines corresponding to error unable-to-access-website
- Write them to /opt/KUTR00101/foobar
kubectl config use-context k8s
Answer
kubectl logs foobar | grep 'unable-to-access-website' > /opt/KUTR00101/foobar
Q5. Task: Create a pod as follows:
- Name: nginx-kusc00401
- Image: nginx
- Node selector: disk=ssd
kubectl config use-context k8s
Answer
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd
kubectl create -f pod.yaml
kubectl get pods
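An alternative sketch generates the skeleton imperatively first (assumes a kubectl recent enough for --dry-run=client), then the nodeSelector is added by hand:
```bash
# Generate a starting manifest; then edit it to add spec.nodeSelector: {disk: ssd}
kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod.yaml
```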
Q6. Task: List all persistent volumes sorted by capacity, saving the full kubectl output to /opt/KUCC00102/volume_list. Use kubectl's own functionality for sorting the output, and do not manipulate it any further.
Answer
# List PersistentVolumes sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage
kubectl get pv --sort-by=.spec.capacity.storage > /opt/KUCC00102/volume_list
Q7. Task: Create a file /opt/KUCC00302/kucc00302.txt that lists all pods that implement service baz in namespace development. The format of the file should be one pod name per line.
Note: selector: name=foo
Answer
kubectl describe service baz -n development
kubectl get pods -l name=foo -n development -o custom-columns=NAME:.metadata.name --no-headers > /opt/KUCC00302/kucc00302.txt
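A hedged double-check before handing in the file (assumes the service's selector really is name=foo, as the note states):
```bash
# Confirm the selector on the service, then eyeball the file contents
kubectl get service baz -n development -o jsonpath='{.spec.selector}'
cat /opt/KUCC00302/kucc00302.txt
```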
Q8. Task: Create a pod as follows:
- Name: non-persistent-redis
- Container image: redis
- Volume with name: cache-control
- Mount path: /data/redis
- The pod should launch in the staging namespace and the volume should not be persistent.
Answer
vi volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  namespace: staging
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: cache-control
      mountPath: /data/redis
  volumes:
  - name: cache-control
    emptyDir: {}
kubectl create -f volume.yaml
kubectl get pods -n staging
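A quick hedged check that the emptyDir volume mounted where expected (the pod must be Running first):
```bash
# /data/redis should exist inside the container
kubectl exec -n staging non-persistent-redis -- ls -ld /data/redis
```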
Q9. Answer
# Inspect the running pods, then delete nginx-dev and nginx-prod
kubectl get pods -o wide
kubectl delete pods nginx-dev nginx-prod
Q10. Task: A Kubernetes worker node named wk8s-node-0 is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
kubectl config use-context wk8s
You can ssh to the failed node using:
ssh wk8s-node-0
You can assume elevated privileges on the node with the following command:
sudo -i
Answer
kubectl get nodes
kubectl describe node wk8s-node-0   # view the faulty node's info
ssh wk8s-node-0   # enter the node
ps aux | grep kubelet
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet
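A hedged follow-up sketch that makes the fix permanent in one step and confirms the node recovers:
```bash
# enable --now starts the kubelet and marks it to start on boot
sudo systemctl enable --now kubelet
# back on the control plane, watch the node transition to Ready
kubectl get node wk8s-node-0 -w
```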
Q11. Task: Given a partially functioning Kubernetes cluster, identify symptoms of failure on the cluster. Determine the node and the failing service, and take actions to bring up the failed service and restore the health of the cluster. Ensure that any changes are made permanently.
You can ssh to the relevant nodes:
[student@node-1]$ ssh <nodename>
You can assume elevated privileges on the node with the following command:
[student@nodename]$ sudo -i
Answer
cat /var/lib/kubelet/config.yaml   # confirm staticPodPath: /etc/kubernetes/manifests
systemctl restart kubelet
systemctl enable kubelet
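A hedged triage sequence for finding the failing service in the first place (a sketch; in this scenario the culprit is typically the kubelet on one node):
```bash
# Find the unhealthy node, then inspect the kubelet on it
kubectl get nodes
ssh <failing-node>
sudo -i
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20
```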
Q12. Task: Set the node named ek8s-node-1 as unavailable and reschedule all of the pods running on it.
[student@node-1]$ ssh ek8s
kubectl config use-context ek8s
Answer
```bash
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --delete-local-data --ignore-daemonsets --force
```
Q13. Task: Configure the kubelet systemd-managed service, on the node labelled with name=wk8s-node-1, to launch a pod containing a single container of image httpd named webtool automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.
You can ssh to the appropriate node using:
[student@node-1]$ ssh wk8s-node-1
You can assume elevated privileges on the node with the following command:
[student@wk8s-node-1]$ sudo -i
Answer
ssh wk8s-node-1
sudo -i
vi /etc/kubernetes/manifests/webtool.yaml   # the kubelet's staticPodPath points at this directory
systemctl restart kubelet
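A minimal sketch of the static-pod spec the kubelet will pick up (hypothetical file contents for /etc/kubernetes/manifests/webtool.yaml):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webtool
spec:
  containers:
  - name: webtool
    image: httpd
```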
Answer
kubectl run busybox --image=busybox --restart=Never --rm -it -- /bin/sh -c 'echo hello world'
Q15. Task: List all the pods in all namespaces and write the output to /opt/pods-list.yaml.
Answer
```bash
kubectl get pods --all-namespaces > /opt/pods-list.yaml
```
Q16. Task: Create a pod named kucc4 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified):
nginx + redis + memcached + consul
kubectl config use-context k8s
Answer
Multi-container pod question: generate a skeleton, then add the remaining containers.
kubectl run kucc4 --image=nginx --dry-run=client -o yaml > kucc4.yaml
vi kucc4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kucc4
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
kubectl create -f kucc4.yaml
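A hedged verification that all four containers made it into the pod:
```bash
# Should print: nginx redis memcached consul
kubectl get pod kucc4 -o jsonpath='{.spec.containers[*].name}'
```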
Q17. Task: You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
- Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types: Deployment, StatefulSet, DaemonSet.
- Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
- Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
kubectl config use-context ek8s
Answer
kubectl create serviceaccount cicd-token -n app-team1   # create the ServiceAccount
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets   # create the ClusterRole
kubectl create rolebinding cicd-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1   # bind it, limited to app-team1
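A hedged way to confirm the binding grants exactly the intended permission:
```bash
# Expect "yes" for create and "no" for anything else, e.g. delete
kubectl auth can-i create deployments -n app-team1 --as=system:serviceaccount:app-team1:cicd-token
kubectl auth can-i delete deployments -n app-team1 --as=system:serviceaccount:app-team1:cicd-token
```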