
Commit 1a9b798

Merge pull request #37643 from chengxiangdong/sync_nodes

[zh] Update nodes.md

2 parents adf6dda + 09281a0

File tree: 1 file changed (+68, -47)

content/zh-cn/docs/concepts/architecture/nodes.md
@@ -149,14 +149,18 @@ For self-registration, the kubelet is started with the following options:
 
 <!--
 - `--kubeconfig` - Path to credentials to authenticate itself to the API server.
-- `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}} to read metadata about itself.
+- `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}}
+  to read metadata about itself.
 - `--register-node` - Automatically register with the API server.
-- `--register-with-taints` - Register the node with the given list of {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
+- `--register-with-taints` - Register the node with the given list of
+  {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
- No-op if `register-node` is false.
+  No-op if `register-node` is false.
 - `--node-ip` - IP address of the node.
-- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
-- `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
+- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node
+  in the cluster (see label restrictions enforced by the
+  [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
+- `--node-status-update-frequency` - Specifies how often kubelet posts its node status to the API server.
 -->
 - `--kubeconfig` - 用于向 API 服务器执行身份认证所用的凭据的路径。
 - `--cloud-provider` - 与某{{< glossary_tooltip text="云驱动" term_id="cloud-provider" >}}
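
Taken together, the self-registration flags in this hunk form a kubelet command line. A minimal illustrative sketch (all values here, such as paths, IPs, labels, and taints, are hypothetical):

```shell
kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --cloud-provider=external \
  --register-node=true \
  --register-with-taints="dedicated=experimental:NoSchedule" \
  --node-ip=192.0.2.10 \
  --node-labels="topology.example.com/rack=r17" \
  --node-status-update-frequency=10s
```

The label key uses a non-`kubernetes.io` prefix because, as noted above, the NodeRestriction admission plugin restricts which labels a kubelet may set on its own Node.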
@@ -167,16 +171,16 @@ For self-registration, the kubelet is started with the following options:
 - `--node-ip` - 节点 IP 地址。
 - `--node-labels` - 在集群中注册节点时要添加的{{< glossary_tooltip text="标签" term_id="label" >}}。
   (参见 [NodeRestriction 准入控制插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)所实施的标签限制)。
-- `--node-status-update-frequency` - 指定 kubelet 向控制面发送状态的频率。
+- `--node-status-update-frequency` - 指定 kubelet 向 API 服务器发送其节点状态的频率。
 
 <!--
 When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
-[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
-kubelets are only authorized to create/modify their own Node resource.
+[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
+are enabled, kubelets are only authorized to create/modify their own Node resource.
 -->
-启用 [Node 鉴权模式](/zh-cn/docs/reference/access-authn-authz/node/)和
-[NodeRestriction 准入插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)时,
-仅授权 `kubelet` 创建或修改其自己的节点资源。
+[Node 鉴权模式](/zh-cn/docs/reference/access-authn-authz/node/)和
+[NodeRestriction 准入插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)被启用后,
+仅授权 kubelet 创建/修改自己的 Node 资源。
 
 {{< note >}}
 <!--
@@ -297,8 +301,10 @@ You can use `kubectl` to view a Node's status and other details:
 kubectl describe node <节点名称>
 ```
 
-<!-- Each section is described in detail below. -->
-下面对每个部分进行详细描述。
+<!--
+Each section of the output is described below.
+-->
+下面对输出的每个部分进行详细描述。
 
 <!--
 ### Addresses
@@ -310,9 +316,11 @@ The usage of these fields varies depending on your cloud provider or bare metal
 这些字段的用法取决于你的云服务商或者物理机配置。
 
 <!--
-* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet `-hostname-override` parameter.
-* ExternalIP: Typically the IP address of the node that is externally routable (available from outside the cluster).
-* InternalIP: Typically the IP address of the node that is routable only within the cluster.
+* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet
+  `--hostname-override` parameter.
+* ExternalIP: Typically the IP address of the node that is externally routable (available from
+  outside the cluster).
+* InternalIP: Typically the IP address of the node that is routable only within the cluster.
 -->
 * HostName:由节点的内核报告。可以通过 kubelet 的 `--hostname-override` 参数覆盖。
 * ExternalIP:通常是节点的可外部路由(从集群外可访问)的 IP 地址。
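
The HostName/ExternalIP/InternalIP entries above live in the Node object's `.status.addresses` field; one way to inspect them against a running cluster (the node name is a placeholder):

```shell
kubectl get node <node-name> -o jsonpath='{.status.addresses}'
```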
@@ -443,7 +451,7 @@ for more details.
 <!--
 ### Capacity and Allocatable {#capacity}
 
-Describes the resources available on the node: CPU, memory and the maximum
+Describes the resources available on the node: CPU, memory, and the maximum
 number of pods that can be scheduled onto the node.
 -->
 ### 容量(Capacity)与可分配(Allocatable) {#capacity}
@@ -632,7 +640,7 @@ the same time:
 
 <!--
 The reason these policies are implemented per availability zone is because one
-availability zone might become partitioned from the master while the others remain
+availability zone might become partitioned from the control plane while the others remain
 connected. If your cluster does not span multiple cloud provider availability zones,
 then the eviction mechanism does not take per-zone unavailability into account.
 -->
@@ -675,8 +683,8 @@ that the scheduler won't place Pods onto unhealthy nodes.
 <!--
 ## Resource capacity tracking {#node-capacity}
 
-Node objects track information about the Node's resource capacity (for example: the amount
-of memory available, and the number of CPUs).
+Node objects track information about the Node's resource capacity: for example, the amount
+of memory available and the number of CPUs.
 Nodes that [self register](#self-registration-of-nodes) report their capacity during
 registration. If you [manually](#manual-node-administration) add a Node, then
 you need to set the node's capacity information when you add it.
@@ -690,11 +698,11 @@ Node 对象会跟踪节点上资源的容量(例如可用内存和 CPU 数量
 
 <!--
 The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} ensures that
-there are enough resources for all the pods on a node. The scheduler checks that the sum
-of the requests of containers on the node is no greater than the node capacity.
-The sum of requests includes all containers started by the kubelet, but excludes any
+there are enough resources for all the Pods on a Node. The scheduler checks that the sum
+of the requests of containers on the node is no greater than the node's capacity.
+That sum of requests includes all containers managed by the kubelet, but excludes any
 containers started directly by the container runtime, and also excludes any
-process running outside of the kubelet's control.
+processes running outside of the kubelet's control.
 -->
 Kubernetes {{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}}
 保证节点上有足够的资源供其上的所有 Pod 使用。
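
The capacity and allocatable figures the scheduler compares against are stored on the Node object itself; an illustrative query (node name is a placeholder):

```shell
kubectl get node <node-name> \
  -o jsonpath='capacity: {.status.capacity}{"\n"}allocatable: {.status.allocatable}{"\n"}'
```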
@@ -704,7 +712,7 @@ Kubernetes {{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}}
 
 {{< note >}}
 <!--
-If you want to explicitly reserve resources for non-Pod processes, follow this tutorial to
+If you want to explicitly reserve resources for non-Pod processes, see
 [reserve resources for system daemons](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
 -->
 如果要为非 Pod 进程显式保留资源。
@@ -749,7 +757,7 @@ kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的所
 [Pod 终止流程](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
 
 <!--
-The graceful node shutdown feature depends on systemd since it takes advantage of
+The Graceful node shutdown feature depends on systemd since it takes advantage of
 [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to
 delay the node shutdown with a given duration.
 -->
@@ -768,9 +776,10 @@ enabled by default in 1.21.
 
 <!--
 Note that by default, both configuration options described below,
-`ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` are set to zero,
+`shutdownGracePeriod` and `shutdownGracePeriodCriticalPods` are set to zero,
 thus not activating the graceful node shutdown functionality.
-To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
+To activate the feature, the two kubelet config settings should be configured appropriately and
+set to non-zero values.
 -->
 注意,默认情况下,下面描述的两个配置选项,`shutdownGracePeriod` 和
 `shutdownGracePeriodCriticalPods` 都是被设置为 0 的,因此不会激活节点体面关闭功能。
@@ -780,19 +789,25 @@ To activate the feature, the two kubelet config settings should be configured ap
 During a graceful shutdown, kubelet terminates pods in two phases:
 
 1. Terminate regular pods running on the node.
-2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
+2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
+   running on the node.
 -->
 在体面关闭节点过程中,kubelet 分两个阶段来终止 Pod:
 
 1. 终止在节点上运行的常规 Pod。
 2. 终止在节点上运行的[关键 Pod](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
 
 <!--
-Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
-* `ShutdownGracePeriod`:
-  * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
-* `ShutdownGracePeriodCriticalPods`:
-  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `ShutdownGracePeriod`.
+Graceful node shutdown feature is configured with two
+[`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
+* `shutdownGracePeriod`:
+  * Specifies the total duration that the node should delay the shutdown by. This is the total
+    grace period for pod termination for both regular and
+    [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
+* `shutdownGracePeriodCriticalPods`:
+  * Specifies the duration used to terminate
+    [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
+    during a node shutdown. This value should be less than `shutdownGracePeriod`.
 -->
 节点体面关闭的特性对应两个
 [`KubeletConfiguration`](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) 选项:
@@ -805,8 +820,8 @@ Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/
 的持续时间。该值应小于 `shutdownGracePeriod`。
 
 <!--
-For example, if `ShutdownGracePeriod=30s`, and
-`ShutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
+For example, if `shutdownGracePeriod=30s`, and
+`shutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
 30 seconds. During the shutdown, the first 20 (30-10) seconds would be reserved
 for gracefully terminating normal pods, and the last 10 seconds would be
 reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
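
The 30s/10s example above corresponds to a kubelet configuration file fragment like the following sketch (the fields are the `KubeletConfiguration` options named in this hunk; treat the values as illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```

With these values, regular pods get the first 20 seconds of the shutdown window and critical pods the final 10.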
@@ -877,13 +892,11 @@ these pods will be stuck in terminating status on the shutdown node forever.
 <!--
 To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
 or `NoSchedule` effect to a Node marking it out-of-service.
-If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/
-command-line-tools-reference/feature-gates/) is enabled on
-`kube-controller-manager`, and a Node is marked out-of-service with this taint, the
-pods on the node will be forcefully deleted if there are no matching tolerations on
-it and volume detach operations for the pods terminating on the node will happen
-immediately. This allows the Pods on the out-of-service node to recover quickly on a
-different node.
+If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the
+pods on the node will be forcefully deleted if there are no matching tolerations on it and volume
+detach operations for the pods terminating on the node will happen immediately. This allows the
+Pods on the out-of-service node to recover quickly on a different node.
 -->
 为了缓解上述情况,用户可以手动将具有 `NoExecute` 或 `NoSchedule` 效果的
 `node.kubernetes.io/out-of-service` 污点添加到节点上,标记其无法提供服务。
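
Applying and later removing the out-of-service taint described above might look like this (node name and taint value are illustrative; the effect can be `NoExecute` or `NoSchedule`):

```shell
# Only after verifying the node is actually shut down or powered off:
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute

# Once the pods have recovered elsewhere and the node is back, remove the taint:
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
```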
@@ -906,11 +919,11 @@ During a non-graceful shutdown, Pods are terminated in the two phases:
 <!--
 {{< note >}}
 - Before adding the taint `node.kubernetes.io/out-of-service` , it should be verified
-that the node is already in shutdown or power off state (not in the middle of
-restarting).
+  that the node is already in shutdown or power off state (not in the middle of
+  restarting).
 - The user is required to manually remove the out-of-service taint after the pods are
-moved to a new node and the user has checked that the shutdown node has been
-recovered since the user was the one who originally added the taint.
+  moved to a new node and the user has checked that the shutdown node has been
+  recovered since the user was the one who originally added the taint.
 {{< /note >}}
 -->
 {{< note >}}
@@ -1108,6 +1121,14 @@ must be set to false.
 同时使用 `--fail-swap-on` 命令行参数或者将 `failSwapOn`
 [配置](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)设置为 false。
 
+{{< warning >}}
+<!--
+When the memory swap feature is turned on, Kubernetes data such as the content
+of Secret objects that were written to tmpfs now could be swapped to disk.
+-->
+当内存交换功能被启用后,Kubernetes 数据(如写入 tmpfs 的 Secret 对象的内容)可以被交换到磁盘。
+{{< /warning >}}
+
 <!--
 A user can also optionally configure `memorySwap.swapBehavior` in order to
 specify how a node will use swap memory. For example,
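
The swap-related settings this section mentions can be sketched as a `KubeletConfiguration` fragment (the `NodeSwap` feature gate is an assumption based on the Kubernetes version this page documents; the `swapBehavior` value shown is illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap
```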
