@@ -570,26 +570,29 @@ controller deletes the node from its list of nodes.
 The third is monitoring the nodes' health. The node controller is
 responsible for:

-- In the case that a node becomes unreachable, updating the NodeReady condition
-  of within the Node's `.status`. In this case the node controller sets the
-  NodeReady condition to `ConditionUnknown`.
+- In the case that a node becomes unreachable, updating the `Ready` condition
+  in the Node's `.status` field. In this case the node controller sets the
+  `Ready` condition to `Unknown`.

 - If a node remains unreachable: triggering
   [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/)
   for all of the Pods on the unreachable node. By default, the node controller
-  waits 5 minutes between marking the node as `ConditionUnknown` and submitting
+  waits 5 minutes between marking the node as `Unknown` and submitting
   the first eviction request.

-The node controller checks the state of each node every `--node-monitor-period` seconds.
+By default, the node controller checks the state of each node every 5 seconds.
+This period can be configured using the `--node-monitor-period` flag on the
+`kube-controller-manager` component.
 -->
 第三个是监控节点的健康状况。节点控制器负责:

-- 在节点不可达的情况下,在 Node 的 `.status` 中更新 NodeReady 状况。
+- 在节点不可达的情况下,在 Node 的 `.status` 中更新 `Ready` 状况。
   在这种情况下,节点控制器将 NodeReady 状况更新为 `Unknown`。

 - 如果节点仍然无法访问:对于不可达节点上的所有 Pod 触发
   [API-发起的逐出](/zh/docs/concepts/scheduling-eviction/api-eviction/)。
   默认情况下,节点控制器在将节点标记为 `Unknown` 后等待 5 分钟提交第一个驱逐请求。

-- 节点控制器每隔 `--node-monitor-period` 秒检查每个节点的状态。
+- 默认情况下,节点控制器每 5 秒检查一次节点状态,可以使用 `kube-controller-manager`
+  组件上的 `--node-monitor-period` 参数来配置周期。
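The monitoring loop this hunk documents (mark the `Ready` condition `Unknown` when a node is unreachable, then wait before evicting) can be sketched as a small simulation. This is an illustrative model only, not the actual kube-controller-manager code; the function name and signature are invented for the example, and the two timing constants are the defaults quoted in the text above.

```python
import datetime

# Defaults quoted in the documentation: the node controller checks nodes
# every 5 seconds and waits 5 minutes after marking a node `Unknown`
# before submitting the first eviction request.
NODE_MONITOR_PERIOD = datetime.timedelta(seconds=5)
POD_EVICTION_TIMEOUT = datetime.timedelta(minutes=5)

def check_node(ready_status, unreachable_since, now):
    """One monitoring pass: return (new Ready status, should evict Pods).

    `ready_status` mimics the Node's `Ready` condition ("True", "False",
    or "Unknown"); `unreachable_since` is None while the node is reachable.
    """
    if unreachable_since is None:
        # Reachable node: Ready stays True, no eviction.
        return "True", False
    # Unreachable node: the controller sets Ready to Unknown ...
    new_status = "Unknown"
    # ... and triggers eviction only once the timeout has elapsed.
    should_evict = now - unreachable_since >= POD_EVICTION_TIMEOUT
    return new_status, should_evict

now = datetime.datetime(2021, 1, 1, 12, 0, 0)
print(check_node("True", None, now))                                 # healthy node
print(check_node("True", now - datetime.timedelta(minutes=1), now))  # recently lost
print(check_node("True", now - datetime.timedelta(minutes=6), now))  # past the timeout
```

In a real controller this check would run every `NODE_MONITOR_PERIOD` against each node's heartbeat rather than a pre-supplied `unreachable_since` timestamp.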
<!--
### Rate limits on eviction
@@ -606,11 +609,11 @@ from more than 1 node per 10 seconds.
 <!--
 The node eviction behavior changes when a node in a given availability zone
 becomes unhealthy. The node controller checks what percentage of nodes in the zone
-are unhealthy (NodeReady condition is `ConditionUnknown` or `ConditionFalse`) at
+are unhealthy (the `Ready` condition is `Unknown` or `False`) at
 the same time:
 -->
 当一个可用区域(Availability Zone)中的节点变为不健康时,节点的驱逐行为将发生改变。
-节点控制器会同时检查可用区域中不健康(NodeReady 状况为 `Unknown` 或 `False`)
+节点控制器会同时检查可用区域中不健康(`Ready` 状况为 `Unknown` 或 `False`)
 的节点的百分比:
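The zone check this hunk describes reduces to computing the fraction of nodes whose `Ready` condition is `Unknown` or `False`. A minimal sketch, with a hypothetical helper name (the real controller works from Node objects, not bare strings):

```python
def zone_unhealthy_fraction(ready_conditions):
    """Fraction of nodes in a zone whose `Ready` condition is Unknown or False."""
    unhealthy = sum(1 for status in ready_conditions if status in ("Unknown", "False"))
    return unhealthy / len(ready_conditions)

# Example: 2 of 4 nodes in the zone are unhealthy.
print(zone_unhealthy_fraction(["True", "Unknown", "False", "True"]))  # 0.5
```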
<!--
@@ -713,7 +716,7 @@ If you want to explicitly reserve resources for non-Pod processes, follow this t
 -->
 ## 节点拓扑 {#node-topology}

-{{< feature-state state="alpha" for_k8s_version="v1.16" >}}
+{{< feature-state state="beta" for_k8s_version="v1.18" >}}

 <!--
 If you have enabled the `TopologyManager`
@@ -766,7 +769,7 @@ enabled by default in 1.21.
 <!--
 Note that by default, both configuration options described below,
 `ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` are set to zero,
-thus not activating Graceful node shutdown functionality.
+thus not activating the graceful node shutdown functionality.
 To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
 -->
 注意,默认情况下,下面描述的两个配置选项,`ShutdownGracePeriod` 和
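As a sketch of the activation step described above, a kubelet configuration file could set both options to non-zero values. The duration values below are arbitrary examples, not recommendations:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node delays its shutdown (example value).
shutdownGracePeriod: 30s
# Portion of that time reserved for terminating critical Pods (example value).
shutdownGracePeriodCriticalPods: 10s
```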