content/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
7 additions & 4 deletions
@@ -21,7 +21,7 @@ card:
<!--
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">This page shows how to install the `kubeadm` toolbox.
-For information how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
+For information on how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
-   [Configure cgroup driver used by kubelet on control-plane node](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node)
-
-- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
+- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to
+  configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
+- control plane containers are crashlooping or hanging. You can check this by running `docker ps`
+  and investigating each container by running `docker logs`. For other container runtime see
+  [Debugging Kubernetes nodes with crictl](/docs/tasks/debug-application-cluster/crictl/).
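Not part of the diff above, but as a rough sketch of what matching the drivers looks like in practice: the kubelet's cgroup driver can be pinned through a `KubeletConfiguration`, for example passed to `kubeadm init --config`. The `systemd` value below is an assumption; it must match whatever driver the container runtime actually uses.

```yaml
# Minimal sketch: pin the kubelet's cgroup driver so it matches the container runtime.
# "systemd" is assumed here; use "cgroupfs" if that is what the runtime is configured with.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```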
## kubeadm blocks when removing managed containers
@@ -273,7 +248,7 @@ services](/docs/concepts/services-networking/service/#nodeport) or use `HostNetw
<!--
## Pods are not accessible via their Service IP

-- Many network add-ons do not yet enable [hairpin mode](/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)
+- Many network add-ons do not yet enable [hairpin mode](/docs/tasks/debug-application-cluster/debug-service/#a-pod-fails-to-reach-itself-via-the-service-ip)
  which allows pods to access themselves via their Service IP. This is an issue related to
  [CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network
  add-on provider to get the latest status of their support for hairpin mode.
@@ -286,7 +261,7 @@ services](/docs/concepts/services-networking/service/#nodeport) or use `HostNetw
This mode allows Pods to be accessed via their Service IP. This is an issue related to [CNI](https://github.com/containernetworking/cni/issues/476).
Please contact the network add-on provider to get the latest status of their support for hairpin mode.
@@ -378,6 +353,51 @@ Error from server (NotFound): the server could not find the requested resource
This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `-iface eth1` flag to flannel so that the second interface is chosen.
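As an illustrative aside (not taken from this diff), the interface flag is normally added to the flannel container's arguments in the kube-flannel DaemonSet; the excerpt below assumes the stock kube-flannel manifest layout and a hypothetical image tag.

```yaml
# Sketch of the kube-flannel DaemonSet container spec with --iface added,
# so flannel binds to the second interface (eth1) instead of the first one.
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.12.0   # hypothetical version tag
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
```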
+By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the `/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
+If this rotation process fails you might see errors such as `x509: certificate has expired or is not yet valid`
+in kube-apiserver logs. To fix the issue you must follow these steps:
-The workaround is to tell `kubelet` which IP to use using `-node-ip`. When using Digital Ocean, it can be the public one (assigned to `eth0`) or the private one (assigned to `eth1`) should you want to use the optional private network. The [`KubeletExtraArgs` section of the kubeadm `NodeRegistrationOptions` structure](https://github.com/kubernetes/kubernetes/blob/release-1.13/cmd/kubeadm/app/apis/kubeadm/v1beta1/types.go) can be used for this.
+The workaround is to tell `kubelet` which IP to use using `--node-ip`.
+When using DigitalOcean, it can be the public one (assigned to `eth0`) or
+the private one (assigned to `eth1`) should you want to use the optional
+private network. The `kubeletExtraArgs` section of the kubeadm
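For illustration only (the address and API version below are assumptions, not part of this diff), a kubeadm `InitConfiguration` that passes `--node-ip` to the kubelet through `kubeletExtraArgs` could look like:

```yaml
# Sketch: tell the kubelet which node IP to register with.
# 10.130.0.2 is a placeholder for the droplet's private address on eth1.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "10.130.0.2"
```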
## The NodeRegistration.Taints field is omitted when marshalling kubeadm configuration
-
-*Note: This [issue](https://github.com/kubernetes/kubeadm/issues/1358) only applies to tools that marshal kubeadm types (e.g. to a YAML configuration file). It will be fixed in kubeadm API v1beta2.*
-
-By default, kubeadm applies the `node-role.kubernetes.io/master:NoSchedule` taint to control-plane nodes.
-If you prefer kubeadm to not taint the control-plane node, and set `InitConfiguration.NodeRegistration.Taints` to an empty slice,
-the field will be omitted when marshalling. When the field is omitted, kubeadm applies the default taint.
-
-There are at least two workarounds:
-
-1. Use the `node-role.kubernetes.io/master:PreferNoSchedule` taint instead of an empty slice. [Pods will get scheduled on masters](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/), unless other nodes have capacity.
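A minimal sketch of that first workaround, using the field names from the text above (the kubeadm API version is assumed to be v1beta1, where the issue applies):

```yaml
# Sketch: set an explicit PreferNoSchedule taint instead of an empty Taints slice,
# so the field is not omitted when marshalling and the default NoSchedule taint is not re-applied.
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  taints:
  - key: "node-role.kubernetes.io/master"
    effect: "PreferNoSchedule"
```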