@@ -149,14 +149,18 @@ For self-registration, the kubelet is started with the following options:
<!--
- `--kubeconfig` - Path to credentials to authenticate itself to the API server.
- - `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}} to read metadata about itself.
+ - `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}}
+   to read metadata about itself.
- `--register-node` - Automatically register with the API server.
- - `--register-with-taints` - Register the node with the given list of {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
+ - `--register-with-taints` - Register the node with the given list of
+   {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).

-   No-op if `register-node` is false.
+   No-op if `register-node` is false.
- `--node-ip` - IP address of the node.
- - `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
- - `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
+ - `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node
+   in the cluster (see label restrictions enforced by the
+   [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
+ - `--node-status-update-frequency` - Specifies how often kubelet posts its node status to the API server.
-->
- `--kubeconfig` - 用于向 API 服务器执行身份认证所用的凭据的路径。
- `--cloud-provider` - 与某{{< glossary_tooltip text="云驱动" term_id="cloud-provider" >}}
@@ -167,16 +171,16 @@ For self-registration, the kubelet is started with the following options:
- `--node-ip` - 节点 IP 地址。
- `--node-labels` - 在集群中注册节点时要添加的{{< glossary_tooltip text="标签" term_id="label" >}}。
  (参见 [NodeRestriction 准入控制插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)所实施的标签限制)。
- - `--node-status-update-frequency` - 指定 kubelet 向控制面发送状态的频率。
+ - `--node-status-update-frequency` - 指定 kubelet 向 API 服务器发送其节点状态的频率。

<!--
When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
- [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
- kubelets are only authorized to create/modify their own Node resource.
+ [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
+ are enabled, kubelets are only authorized to create/modify their own Node resource.
-->
- 启用 [Node 鉴权模式](/zh-cn/docs/reference/access-authn-authz/node/) 和
- [NodeRestriction 准入插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction) 时,
- 仅授权 `kubelet` 创建或修改其自己的节点资源。
+ 当 [Node 鉴权模式](/zh-cn/docs/reference/access-authn-authz/node/) 和
+ [NodeRestriction 准入插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction) 被启用后,
+ 仅授权 kubelet 创建/修改自己的 Node 资源。

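The `--register-with-taints` value discussed above is a comma-separated list of `<key>=<value>:<effect>` entries. A minimal sketch of how such a list decomposes (the taint keys and values here are invented for illustration, and this is not kubelet's actual parsing code):

```shell
# Hypothetical value for --register-with-taints; this only illustrates the
# comma-separated <key>=<value>:<effect> format, not kubelet's real parser.
taints="dedicated=gpu:NoSchedule,node-role.example.com/etcd=:NoExecute"
IFS=',' read -ra items <<< "$taints"
for t in "${items[@]}"; do
  key="${t%%=*}"        # text before the first '='
  rest="${t#*=}"        # text after the first '='
  value="${rest%%:*}"   # may be empty, as in key=:Effect
  effect="${rest##*:}"  # text after the last ':'
  echo "taint key=$key value=$value effect=$effect"
done
```

An empty value is legal, which is why the second entry above yields `value=` with effect `NoExecute`.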
{{< note >}}
<!--
@@ -297,8 +301,10 @@ You can use `kubectl` to view a Node's status and other details:
kubectl describe node <节点名称>
```

- <!-- Each section is described in detail below. -->
- 下面对每个部分进行详细描述。
+ <!--
+ Each section of the output is described below.
+ -->
+ 下面对输出的每个部分进行详细描述。

<!--
### Addresses
@@ -310,9 +316,11 @@ The usage of these fields varies depending on your cloud provider or bare metal
这些字段的用法取决于你的云服务商或者物理机配置。

<!--
- * HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet `-hostname-override` parameter.
- * ExternalIP: Typically the IP address of the node that is externally routable (available from outside the cluster).
- * InternalIP: Typichostnameally the IP address of the node that is routable only within the cluster.
+ * HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet
+   `--hostname-override` parameter.
+ * ExternalIP: Typically the IP address of the node that is externally routable (available from
+   outside the cluster).
+ * InternalIP: Typically the IP address of the node that is routable only within the cluster.
-->
* HostName:由节点的内核报告。可以通过 kubelet 的 `--hostname-override` 参数覆盖。
* ExternalIP:通常是节点的可外部路由(从集群外可访问)的 IP 地址。
@@ -443,7 +451,7 @@ for more details.
<!--
### Capacity and Allocatable {#capacity}

- Describes the resources available on the node: CPU, memory and the maximum
+ Describes the resources available on the node: CPU, memory, and the maximum
number of pods that can be scheduled onto the node.
-->
### 容量(Capacity)与可分配(Allocatable) {#capacity}
@@ -632,7 +640,7 @@ the same time:

<!--
The reason these policies are implemented per availability zone is because one
- availability zone might become partitioned from the master while the others remain
+ availability zone might become partitioned from the control plane while the others remain
connected. If your cluster does not span multiple cloud provider availability zones,
then the eviction mechanism does not take per-zone unavailability into account.
-->
@@ -675,8 +683,8 @@ that the scheduler won't place Pods onto unhealthy nodes.
<!--
## Resource capacity tracking {#node-capacity}

- Node objects track information about the Node's resource capacity (for example: the amount
- of memory available, and the number of CPUs).
+ Node objects track information about the Node's resource capacity: for example, the amount
+ of memory available and the number of CPUs.
Nodes that [self register](#self-registration-of-nodes) report their capacity during
registration. If you [manually](#manual-node-administration) add a Node, then
you need to set the node's capacity information when you add it.
@@ -690,11 +698,11 @@ Node 对象会跟踪节点上资源的容量(例如可用内存和 CPU 数量

<!--
The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} ensures that
- there are enough resources for all the pods on a node. The scheduler checks that the sum
- of the requests of containers on the node is no greater than the node capacity.
- The sum of requests includes all containers started by the kubelet, but excludes any
+ there are enough resources for all the Pods on a Node. The scheduler checks that the sum
+ of the requests of containers on the node is no greater than the node's capacity.
+ That sum of requests includes all containers managed by the kubelet, but excludes any
containers started directly by the container runtime, and also excludes any
- process running outside of the kubelet's control.
+ processes running outside of the kubelet's control.
-->
Kubernetes {{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}}
保证节点上有足够的资源供其上的所有 Pod 使用。
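The scheduling rule described in the comment above ("sum of container requests must not exceed node capacity") can be sketched with invented numbers; CPU millicores are assumed, and this is only an illustration of the arithmetic, not kube-scheduler's real code:

```shell
# Illustration only: the check "sum of container requests <= node capacity",
# with made-up CPU values in millicores. Not the real kube-scheduler logic.
node_capacity=4000          # node has 4 CPUs (4000m)
requests=(500 1500 1000)    # CPU requests of containers already on the node
new_pod=1200                # request of the pod being considered

sum=0
for r in "${requests[@]}"; do
  sum=$((sum + r))
done

if [ $((sum + new_pod)) -le "$node_capacity" ]; then
  echo "fits"
else
  echo "does not fit"
fi
```

Here the existing requests total 3000m, so a 1200m pod would push the sum past the 4000m capacity and the pod would not be placed on this node.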
@@ -704,7 +712,7 @@ Kubernetes {{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}}

{{< note >}}
<!--
- If you want to explicitly reserve resources for non-Pod processes, follow this tutorial to
+ If you want to explicitly reserve resources for non-Pod processes, see
[reserve resources for system daemons](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
-->
如果要为非 Pod 进程显式保留资源。
@@ -749,7 +757,7 @@ kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的所
[Pod 终止流程](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。

<!--
- The graceful node shutdown feature depends on systemd since it takes advantage of
+ The Graceful node shutdown feature depends on systemd since it takes advantage of
[systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to
delay the node shutdown with a given duration.
-->
@@ -768,9 +776,10 @@ enabled by default in 1.21.

<!--
Note that by default, both configuration options described below,
- `ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` are set to zero,
+ `shutdownGracePeriod` and `shutdownGracePeriodCriticalPods` are set to zero,
thus not activating the graceful node shutdown functionality.
- To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
+ To activate the feature, the two kubelet config settings should be configured appropriately and
+ set to non-zero values.
-->
注意,默认情况下,下面描述的两个配置选项,`shutdownGracePeriod` 和
`shutdownGracePeriodCriticalPods` 都是被设置为 0 的,因此不会激活节点体面关闭功能。
@@ -780,19 +789,25 @@ To activate the feature, the two kubelet config settings should be configured ap
During a graceful shutdown, kubelet terminates pods in two phases:

1. Terminate regular pods running on the node.
- 2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
+ 2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
+    running on the node.
-->
在体面关闭节点过程中,kubelet 分两个阶段来终止 Pod:

1. 终止在节点上运行的常规 Pod。
2. 终止在节点上运行的[关键 Pod](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。

<!--
- Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
- * `ShutdownGracePeriod`:
-   * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
- * `ShutdownGracePeriodCriticalPods`:
-   * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `ShutdownGracePeriod`.
+ Graceful node shutdown feature is configured with two
+ [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
+ * `shutdownGracePeriod`:
+   * Specifies the total duration that the node should delay the shutdown by. This is the total
+     grace period for pod termination for both regular and
+     [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
+ * `shutdownGracePeriodCriticalPods`:
+   * Specifies the duration used to terminate
+     [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
+     during a node shutdown. This value should be less than `shutdownGracePeriod`.
-->
节点体面关闭的特性对应两个
[`KubeletConfiguration`](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) 选项:
@@ -805,8 +820,8 @@ Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/
的持续时间。该值应小于 `shutdownGracePeriod`。

<!--
- For example, if `ShutdownGracePeriod=30s`, and
- `ShutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
+ For example, if `shutdownGracePeriod=30s`, and
+ `shutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
30 seconds. During the shutdown, the first 20 (30-10) seconds would be reserved
for gracefully terminating normal pods, and the last 10 seconds would be
reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
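For reference, the two settings in the 30s/10s example above live in the kubelet configuration file. A minimal sketch (field names as used by `KubeletConfiguration`; consult the KubeletConfiguration API reference for the authoritative schema):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total shutdown delay; regular pods get shutdownGracePeriod minus
# shutdownGracePeriodCriticalPods to terminate (here 30s - 10s = 20s).
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```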
@@ -877,13 +892,11 @@ these pods will be stuck in terminating status on the shutdown node forever.
<!--
To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
or `NoSchedule` effect to a Node marking it out-of-service.
- If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/
- command-line-tools-reference/feature-gates/) is enabled on
- `kube-controller-manager`, and a Node is marked out-of-service with this taint, the
- pods on the node will be forcefully deleted if there are no matching tolerations on
- it and volume detach operations for the pods terminating on the node will happen
- immediately. This allows the Pods on the out-of-service node to recover quickly on a
- different node.
+ If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+ is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the
+ pods on the node will be forcefully deleted if there are no matching tolerations on it and volume
+ detach operations for the pods terminating on the node will happen immediately. This allows the
+ Pods on the out-of-service node to recover quickly on a different node.
-->
为了缓解上述情况,用户可以手动将具有 `NoExecute` 或 `NoSchedule` 效果的
`node.kubernetes.io/out-of-service` 污点添加到节点上,标记其无法提供服务。
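Applying that taint is typically done with `kubectl`; a sketch follows. The node name `worker-1` and the taint value `nodeshutdown` are placeholders, and the command is echoed rather than executed so the snippet works without a cluster:

```shell
# Sketch: mark a shut-down node out-of-service with the NoExecute effect.
# "worker-1" and the value "nodeshutdown" are placeholders; the command is
# printed instead of run so no cluster is needed.
node="worker-1"
taint="node.kubernetes.io/out-of-service=nodeshutdown:NoExecute"
echo kubectl taint nodes "$node" "$taint"
```

Remember to remove the taint again once the pods have moved and the node has recovered, as the note below describes.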
@@ -906,11 +919,11 @@ During a non-graceful shutdown, Pods are terminated in the two phases:
<!--
{{< note >}}
- Before adding the taint `node.kubernetes.io/out-of-service`, it should be verified
- that the node is already in shutdown or power off state (not in the middle of
- restarting).
+ that the node is already in shutdown or power off state (not in the middle of
+ restarting).
- The user is required to manually remove the out-of-service taint after the pods are
- moved to a new node and the user has checked that the shutdown node has been
- recovered since the user was the one who originally added the taint.
+ moved to a new node and the user has checked that the shutdown node has been
+ recovered since the user was the one who originally added the taint.
{{< /note >}}
-->
{{< note >}}
@@ -1108,6 +1121,14 @@ must be set to false.
同时使用 `--fail-swap-on` 命令行参数或者将 `failSwapOn`
[配置](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)设置为 false。

+ {{< warning >}}
+ <!--
+ When the memory swap feature is turned on, Kubernetes data such as the content
+ of Secret objects that were written to tmpfs now could be swapped to disk.
+ -->
+ 当内存交换功能被启用后,Kubernetes 数据(如写入 tmpfs 的 Secret 对象的内容)可以被交换到磁盘。
+ {{< /warning >}}
+

<!--
A user can also optionally configure `memorySwap.swapBehavior` in order to
specify how a node will use swap memory. For example,