If your network provider does not support the portmap CNI plugin, you may need to use the
[NodePort feature of services](/docs/concepts/services-networking/service/#type-nodeport)
or use `HostNetwork=true`.
-->
## `HostPort` services do not work
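To illustrate the NodePort alternative described above, here is a minimal sketch that exposes a hypothetical Deployment named `web` on a node port instead of relying on `hostPort` in the Pod spec; the Deployment name and port are placeholders.

```bash
# Expose the (hypothetical) Deployment "web" through a NodePort Service.
kubectl expose deployment web --type=NodePort --port=80

# Print the node port that was allocated (30000-32767 by default).
kubectl get service web -o jsonpath='{.spec.ports[0].nodePort}'
```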
add-on provider to get the latest status of their support for hairpin mode.
- If you are using VirtualBox (directly or via Vagrant), you will need to
  ensure that `hostname -i` returns a routable IP address. By default, the first
  interface is connected to a non-routable host-only network. A workaround
  is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11).
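A quick way to check for this situation on the VM looks roughly like the following; the interface name `eth1` and the address `192.168.56.10` are placeholders for whatever routable host-only address your Vagrantfile assigns.

```bash
# The hostname should resolve to a routable address, not the 10.0.2.15 NAT address.
hostname -i

# Inspect the host-only interface (the name is an assumption and may differ).
ip addr show eth1

# Hypothetical workaround: pin the hostname to the routable address in /etc/hosts.
echo "192.168.56.10 $(hostname)" | sudo tee -a /etc/hosts
```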
By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the
`/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
If this rotation process fails, you might see errors such as `x509: certificate has expired or is not yet valid`
in kube-apiserver logs. To fix the issue, you must follow these steps:
-->
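Before following the recovery steps, it can help to confirm that the rotated client certificate really is the problem; the following is only a diagnostic sketch.

```bash
# Confirm the symlink exists and see which certificate it currently points to.
ls -l /var/lib/kubelet/pki/kubelet-client-current.pem

# Print the expiry date of the certificate stored in the combined PEM file.
sudo awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' /var/lib/kubelet/pki/kubelet-client-current.pem \
  | openssl x509 -noout -enddate
```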
```
Error from server (NotFound): the server could not find the requested resource
```

<!--
- If you're using flannel as the pod network inside Vagrant, then you will have to
  specify the default interface name for flannel.

  Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts
  are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.

  This may lead to problems with flannel, which defaults to the first interface on a host.
  This leads to all hosts thinking they have the same public IP address. To prevent this,
  pass the `--iface eth1` flag to flannel so that the second interface is chosen.
-->
- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel (see the sketch below).
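A minimal sketch of how that flag can be added, assuming flannel was deployed as the usual `kube-flannel-ds` DaemonSet in the `kube-system` namespace (the DaemonSet name and namespace may differ depending on the manifest you applied):

```bash
# Edit the flannel DaemonSet and append the interface flag to the
# kube-flannel container's args, for example:
#   args:
#   - --ip-masq
#   - --kube-subnet-mgr
#   - --iface=eth1
kubectl -n kube-system edit daemonset kube-flannel-ds
```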
<!--
## Non-public IP used for containers

In some situations `kubectl logs` and `kubectl run` commands may return with the
following errors in an otherwise functional cluster:
-->
## Non-public IP used for containers

```
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6
```

<!--
- This may be due to Kubernetes using an IP that cannot communicate with other IPs on
  the seemingly same subnet, possibly by policy of the machine provider.
- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally
  as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's
  `InternalIP` instead of the public one.

  Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will
  not display the offending alias IP address. Alternatively, an API endpoint specific to
  DigitalOcean allows you to query for the anchor IP from the droplet:
-->
- This may be due to Kubernetes using an IP that cannot communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.
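To spot the extra address on a node, a check along these lines can be used; the DigitalOcean metadata path below is an assumption based on their droplet metadata service and should be verified against their documentation.

```bash
# List all addresses on eth0; an alias "anchor" IP may appear here
# even though ifconfig does not show it.
ip addr show eth0

# Assumed DigitalOcean droplet metadata query for the anchor IP
# (the endpoint path may differ; check DigitalOcean's metadata docs).
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
```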
<!--
## `coredns` pods have `CrashLoopBackOff` or `Error` state

If you have nodes that are running SELinux with an older version of Docker, you might experience a scenario
where the `coredns` pods are not starting. To solve that, you can try one of the following options:
- Upgrade to a [newer version of Docker](/docs/setup/production-environment/container-runtimes/#docker).

Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop.
[A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.
-->
Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop.
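To confirm that a detected loop (rather than something else) is crashing CoreDNS, the previous container logs of the CoreDNS Pods are the first place to look; a rough sketch:

```bash
# Find the CoreDNS Pods (they carry the k8s-app=kube-dns label).
kubectl -n kube-system get pods -l k8s-app=kube-dns

# Inspect the logs of the previously crashed container of one of them;
# the loop plugin reports a detected loop explicitly before exiting.
kubectl -n kube-system logs <coredns-pod-name> --previous
```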
<!--
## `kubeadm upgrade plan` prints out `context deadline exceeded` error message

This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in
the case of running an external etcd. This is not a critical bug and happens because
older versions of kubeadm perform a version check on the external etcd cluster.
You can proceed with `kubeadm upgrade apply ...`.
-->
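For example, the upgrade can be continued directly despite the failed check; the version below is only a placeholder for your actual target release.

```bash
# Proceed with the upgrade; the etcd version-check error is benign here.
# Replace the version with the release you are upgrading to.
kubeadm upgrade apply v1.19.3
```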