A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
either because the command does not trigger the inhibitor locks mechanism used by
the kubelet, or because of a user error, that is, the `ShutdownGracePeriod` and
`ShutdownGracePeriodCriticalPods` settings are not configured properly. Please refer
to the [Graceful Node Shutdown](#graceful-node-shutdown) section above for more details.
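As a point of reference for the misconfiguration mentioned above, both settings live in the
kubelet configuration file. The sketch below assumes a `KubeletConfiguration` file is in use;
the durations shown are illustrative placeholders, not recommended values.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node delays shutdown to let pods terminate gracefully.
shutdownGracePeriod: "30s"
# Portion of shutdownGracePeriod reserved for critical pods; must be less
# than or equal to shutdownGracePeriod.
shutdownGracePeriodCriticalPods: "10s"
```

If both fields are left at their zero defaults, graceful node shutdown is effectively disabled,
which is one form of the user error described above.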
When a node is shut down but not detected by kubelet's Node Shutdown Manager, the pods
that are part of a {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} will be stuck in terminating status on
the shutdown node and cannot move to a new running node. This is because the kubelet on
the shutdown node is not available to delete the pods, so the StatefulSet cannot
create a new pod with the same name. If there are volumes used by the pods, the
VolumeAttachments will not be deleted from the original shutdown node, so the volumes
used by these pods cannot be attached to a new running node. As a result, the
application running on the StatefulSet cannot function properly. If the original
shutdown node comes up, the pods will be deleted by the kubelet and new pods will be
created on a different running node. If the original shutdown node does not come up,
these pods will be stuck in terminating status on the shutdown node forever.
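To make the volume part of this concrete, a VolumeAttachment is the object that records which
node a volume is attached to. The sketch below shows what such an object might look like while
it still points at the shut-down node; the object name, CSI driver, node name, and PV name are
all placeholders chosen for illustration.

```yaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttachment
metadata:
  name: csi-attachment-example     # placeholder; real names are generated
spec:
  attacher: csi.example.com        # placeholder CSI driver name
  nodeName: shutdown-node          # still references the shut-down node
  source:
    persistentVolumeName: pv-example   # placeholder PV name
status:
  attached: true                   # the volume stays attached until this object is cleaned up
```

While such an object exists for the old node, the volume cannot be attached to a replacement
node, which is why the StatefulSet's pods cannot start elsewhere.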
To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either a `NoExecute`
or `NoSchedule` effect to a Node, marking it out-of-service.
If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled on the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}, and a Node is marked out-of-service with this taint, the
pods on the node will be forcefully deleted if they have no matching tolerations, and volume
detach operations for the pods terminating on the node will happen immediately. This allows the
Pods on the out-of-service node to recover quickly on a different node.
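For illustration, the sketch below shows the `spec.taints` stanza as it might appear on the Node
object after it has been marked out-of-service; the node name (`example-node`) and the taint
value (`nodeshutdown`) are placeholders chosen for this example.

```yaml
apiVersion: v1
kind: Node
metadata:
  name: example-node               # placeholder node name
spec:
  taints:
    # Marks the node out-of-service so pods without a matching toleration are
    # forcefully deleted and their volumes are detached promptly.
    - key: node.kubernetes.io/out-of-service
      value: nodeshutdown          # value chosen for illustration
      effect: NoExecute
```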