@@ -43,12 +43,12 @@ kind: ClusterRole
metadata:
  name: kubeadm:get-nodes
rules:
- - apiGroups:
-   - ""
-   resources:
-   - nodes
-   verbs:
-   - get
+ - apiGroups:
+   - ""
+   resources:
+   - nodes
+   verbs:
+   - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@@ -59,16 +59,16 @@ roleRef:
  kind: ClusterRole
  name: kubeadm:get-nodes
subjects:
- - apiGroup: rbac.authorization.k8s.io
-   kind: Group
-   name: system:bootstrappers:kubeadm:default-node-token
+ - apiGroup: rbac.authorization.k8s.io
+   kind: Group
+   name: system:bootstrappers:kubeadm:default-node-token
```
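
Applying and checking the RBAC objects above is straightforward; the sketch below assumes the manifests were saved to a file named `kubeadm-get-nodes.yaml` (the file name is only an illustration) and that you have cluster-admin credentials on a control plane node:

```bash
# Apply the ClusterRole and ClusterRoleBinding shown above
kubectl apply -f kubeadm-get-nodes.yaml

# Verify that the bootstrap token group can now read Node objects
# ("system:bootstrap:abcdef" is a hypothetical bootstrap token user)
kubectl auth can-i get nodes \
  --as="system:bootstrap:abcdef" \
  --as-group="system:bootstrappers:kubeadm:default-node-token"
```
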
## `ebtables` or some similar executable not found during installation

If you see the following warnings while running `kubeadm init`

- ```sh
+ ```console
[preflight] WARNING: ebtables not found in system path
[preflight] WARNING: ethtool not found in system path
```
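
If those warnings appear, installing the missing packages is usually all that is needed. A rough sketch (package names assumed from common distributions; adjust for yours):

```bash
# Debian/Ubuntu
sudo apt-get install -y ebtables ethtool

# CentOS/Fedora/RHEL
sudo yum install -y ebtables ethtool
```
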
@@ -82,18 +82,18 @@ Then you may be missing `ebtables`, `ethtool` or a similar executable on your no

If you notice that `kubeadm init` hangs after printing out the following line:

- ```sh
+ ```console
[apiclient] Created API client, waiting for the control plane to become ready
```

This may be caused by a number of problems. The most common are:

- network connection problems. Check that your machine has full network connectivity before continuing.
- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to
-   configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
+   configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
- control plane containers are crashlooping or hanging. You can check this by running `docker ps`
-   and investigating each container by running `docker logs`. For other container runtimes see
-   [Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/).
+   and investigating each container by running `docker logs`. For other container runtimes see
+   [Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/).

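For the crashloop check in the last bullet above, a short sketch using `crictl` on a containerd node (the runtime endpoint path is an assumption; adjust it for your runtime):

```bash
# List all control plane containers, including exited ones
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube

# Inspect the logs of a suspicious container (replace CONTAINER_ID)
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINER_ID
```
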
## kubeadm blocks when removing managed containers

@@ -204,21 +204,21 @@ in kube-apiserver logs. To fix the issue you must follow these steps:

1. Backup and delete `/etc/kubernetes/kubelet.conf` and `/var/lib/kubelet/pki/kubelet-client*` from the failed node.
1. From a working control plane node in the cluster that has `/etc/kubernetes/pki/ca.key` execute
-    `kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`.
-    `$NODE` must be set to the name of the existing failed node in the cluster.
-    Modify the resulting `kubelet.conf` manually to adjust the cluster name and server endpoint,
-    or pass `kubeconfig user --config` (it accepts `InitConfiguration`). If your cluster does not have
-    the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally.
+    `kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`.
+    `$NODE` must be set to the name of the existing failed node in the cluster.
+    Modify the resulting `kubelet.conf` manually to adjust the cluster name and server endpoint,
+    or pass `kubeconfig user --config` (it accepts `InitConfiguration`). If your cluster does not have
+    the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally.
1. Copy this resulting `kubelet.conf` to `/etc/kubernetes/kubelet.conf` on the failed node.
1. Restart the kubelet (`systemctl restart kubelet`) on the failed node and wait for
-    `/var/lib/kubelet/pki/kubelet-client-current.pem` to be recreated.
+    `/var/lib/kubelet/pki/kubelet-client-current.pem` to be recreated.
1. Manually edit the `kubelet.conf` to point to the rotated kubelet client certificates, by replacing
-    `client-certificate-data` and `client-key-data` with:
+    `client-certificate-data` and `client-key-data` with:

-    ```yaml
-    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
-    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
-    ```
+    ```yaml
+    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
+    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
+    ```

1. Restart the kubelet.
1. Make sure the node becomes `Ready`.
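
A condensed sketch of the first two steps above (the node name `node-1` and the backup directory are placeholders, not from the page):

```bash
# On the failed node: back up and remove the stale kubelet credentials
sudo mkdir -p /root/kubelet-conf-backup
sudo mv /etc/kubernetes/kubelet.conf /root/kubelet-conf-backup/
sudo mv /var/lib/kubelet/pki/kubelet-client* /root/kubelet-conf-backup/

# On a working control plane node that still has /etc/kubernetes/pki/ca.key
NODE=node-1   # name of the failed node
sudo kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf
```
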
@@ -241,7 +241,7 @@ Error from server (NotFound): the server could not find the requested resource

In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster:

- ```sh
+ ```console
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
```

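One generic check (not part of the page's own steps) is to test whether the kubelet port (10250) on the node shown in the error is reachable from the machine where `kubectl` runs; the IP below is taken from the example error:

```bash
# Check TCP reachability of the kubelet API port on the affected node
nc -vz -w 3 10.19.0.41 10250
```
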
@@ -306,15 +306,17 @@ This version of Docker can prevent the kubelet from executing into the etcd cont
To work around the issue, choose one of these options:

- Roll back to an earlier version of Docker, such as 1.13.1-75
-   ```
-   yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64
-   ```
+
+   ```
+   yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64
+   ```

- Install one of the more recent recommended versions, such as 18.06:
-   ```bash
-   sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
-   yum install docker-ce-18.06.1.ce-3.el7.x86_64
-   ```
+
+   ```bash
+   sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
+   yum install docker-ce-18.06.1.ce-3.el7.x86_64
+   ```

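Whichever option you pick, it can help to confirm which Docker version actually ended up installed before restarting the kubelet; a minimal check:

```bash
# Show the running Docker engine version
docker version --format '{{.Server.Version}}'
```
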
## Not possible to pass a comma separated list of values to arguments inside a `--component-extra-args` flag