If your network provider does not support the portmap CNI plugin, you may need to use the
[NodePort feature of services](/docs/concepts/services-networking/service/#type-nodeport)
or use `HostNetwork=true`.
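In a Pod spec, the latter corresponds to the `hostNetwork` field. A minimal sketch (the Pod name and image are illustrative, not from this page):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-networked-pod   # illustrative name
spec:
  hostNetwork: true          # the Pod shares the node's network namespace
  containers:
  - name: app
    image: nginx             # example image
    ports:
    - containerPort: 80      # reachable directly on the node's IP
```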
## Pods are not accessible via their Service IP
add-on provider to get the latest status of their support for hairpin mode.
- If you are using VirtualBox (directly or via Vagrant), you will need to
ensure that `hostname -i` returns a routable IP address. By default, the first
interface is connected to a non-routable host-only network. A workaround
is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
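The `/etc/hosts` change amounts to mapping the node's hostname to the routable IP of the second interface. A sketch, with a made-up hostname and IP (on a real VM, `HOSTS_FILE` would be `/etc/hosts` and the command would need root):

```shell
# Hostname and routable IP are illustrative; on a Vagrant VM they come from
# the second (host-only/private) NIC.
NODE_NAME=k8s-node-1
ROUTABLE_IP=192.168.50.10

# Append the mapping only if it is not already present, so the step is
# idempotent and can run on every provision. A temp file keeps the demo safe.
HOSTS_FILE=$(mktemp)
grep -q "$NODE_NAME" "$HOSTS_FILE" || printf '%s %s\n' "$ROUTABLE_IP" "$NODE_NAME" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
rm -f "$HOSTS_FILE"
```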
By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the
`/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
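As a quick diagnostic, you can check whether the certificate behind that symlink has already expired. A sketch; the `check_cert_expiry` helper is ours, and the demo below runs on a throwaway self-signed certificate so it works anywhere:

```shell
# check_cert_expiry: print a PEM certificate's notAfter date and return
# non-zero if it has already expired.
check_cert_expiry() {
  openssl x509 -in "$1" -noout -enddate &&
  openssl x509 -in "$1" -noout -checkend 0
}

# On a node you would run:
#   check_cert_expiry /var/lib/kubelet/pki/kubelet-client-current.pem
# The demo uses a throwaway self-signed cert valid for one day.
tmpcert=$(mktemp)
openssl req -x509 -newkey rsa:2048 -keyout /dev/null -nodes \
  -subj "/CN=demo" -days 1 -out "$tmpcert" 2>/dev/null
check_cert_expiry "$tmpcert" && echo "certificate still valid"
rm -f "$tmpcert"
```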
If this rotation process fails, you might see errors such as `x509: certificate has expired or is not yet valid`
in kube-apiserver logs. To fix the issue, you must follow these steps:
The following error might indicate that something was wrong in the pod network:

```
Error from server (NotFound): the server could not find the requested resource
```

- If you're using flannel as the pod network inside Vagrant, then you will have to
specify the default interface name for flannel.

Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts
are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.

This may lead to problems with flannel, which defaults to the first interface on a host.
This leads to all hosts thinking they have the same public IP address. To prevent this,
pass the `--iface eth1` flag to flannel so that the second interface is chosen.
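As a sketch of where that flag lives (the exact manifest layout varies between flannel releases, so treat this as an assumption to verify against your deployed manifest), it is appended to the kube-flannel container's args in the flannel DaemonSet:

```yaml
# Excerpt of a flannel DaemonSet spec; only the args list is the point here.
containers:
- name: kube-flannel
  image: flannel/flannel:v0.24.0   # illustrative tag
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1                   # use Vagrant's second, non-NATed interface
```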
## Non-public IP used for containers
In some situations `kubectl logs` and `kubectl run` commands may return with the
following errors in an otherwise functional cluster:
```console
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
```

- This may be due to Kubernetes using an IP that cannot communicate with other IPs on
the seemingly same subnet, possibly by policy of the machine provider.
- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally
as an anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's
`InternalIP` instead of the public one.

Use `ip addr show` to check for this scenario instead of `ifconfig`, because `ifconfig` will
not display the offending alias IP address. Alternatively, an API endpoint specific to
DigitalOcean allows querying the anchor IP from within the droplet:
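A hedged sketch of both checks. The metadata endpoint path is an assumption based on DigitalOcean's droplet metadata service, and `list_ipv4` is our own helper for picking interface/address pairs out of `ip -o -4 addr show` output; the demo runs on canned output with made-up addresses:

```shell
# On a droplet, the metadata service can report the anchor IP (endpoint path
# is an assumption to verify against DigitalOcean's docs):
#
#   curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
#
# list_ipv4 extracts "<iface> <address>" pairs from `ip -o -4 addr show`
# output; ifconfig would hide such alias/anchor addresses.
list_ipv4() {
  awk '{ sub(/\/.*/, "", $4); print $2, $4 }'
}

# Demo on canned output resembling a droplet with an anchor IP on eth0:
sample='2: eth0    inet 203.0.113.10/20 brd 203.0.113.255 scope global eth0
2: eth0    inet 10.19.0.41/16 brd 10.19.255.255 scope global eth0'
printf '%s\n' "$sample" | list_ipv4
```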
Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop.
[A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.
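One workaround listed on the loop plugin page linked above is to point the kubelet at a `resolv.conf` that does not contain the local stub listener. A sketch for hosts running systemd-resolved (the path is environment-dependent):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Use the real upstream resolvers instead of the 127.0.0.53 stub listener
# that systemd-resolved places in /etc/resolv.conf, which causes the loop.
resolvConf: /run/systemd/resolve/resolv.conf
```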
If you encounter the following error:

```
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:110: decoding init error from pipe caused \"read parent: connection reset by peer\""
```
This issue appears if you run CentOS 7 with Docker 1.13.1.84.
This version of Docker can prevent the kubelet from executing into the etcd container.
To work around the issue, choose one of these options:
to pick up the node's IP address properly and has knock-on effects to the proxy
load balancers.
The following error can be seen in kube-proxy Pods:
```
server.go:610] Failed to retrieve node IP: host IP unknown; known addresses: []
proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
```
A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane
nodes regardless of their conditions, keeping it off of other nodes until their initial guarding conditions abate:
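Such a patch can be sketched as follows. This is a hedged example, not the page's exact command: the toleration key below assumes a cluster using the `node-role.kubernetes.io/control-plane` taint (older clusters use `node-role.kubernetes.io/master`), and the `kubectl` step is shown but not executed:

```shell
# JSON patch that lets kube-proxy tolerate the control-plane taint.
PATCH='{"spec":{"template":{"spec":{"tolerations":[{"key":"node-role.kubernetes.io/control-plane","operator":"Exists","effect":"NoSchedule"}]}}}}'

# On a live cluster you would apply it with (not executed here):
echo "kubectl -n kube-system patch daemonset kube-proxy -p '$PATCH'"
```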
On the primary control-plane Node (created using `kubeadm init`), pass the following
file using `--config`:
```yaml
# ...
```
## `kubeadm upgrade plan` prints out `context deadline exceeded` error message
This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in
the case of running an external etcd. This is not a critical bug and happens because
older versions of kubeadm perform a version check on the external etcd cluster.
You can proceed with `kubeadm upgrade apply ...`.
This issue is fixed as of version 1.19.
can be used insecurely by passing the `--kubelet-insecure-tls` to it. This is not recommended.
If you want to use TLS between the metrics-server and the kubelet there is a problem,
since kubeadm deploys a self-signed serving certificate for the kubelet. This can cause the following errors
on the side of the metrics-server:
```
x509: certificate signed by unknown authority
x509: certificate is valid for IP-foo not IP-bar
```

Only applicable to upgrading a control plane node with a kubeadm binary v1.28.3,
where the node is currently managed by kubeadm versions v1.28.0, v1.28.1 or v1.28.2.
Here is the error message you may encounter:
```
442
487
[upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition
[upgrade/etcd] Waiting for previous etcd to become available
```