---
layout: blog
title: "Kubernetes 1.28: Non-Graceful Node Shutdown Moves to GA"
date: 2023-08-15T10:00:00-08:00
slug: kubernetes-1-28-non-graceful-node-shutdown-GA
---

**Authors:** Xing Yang (VMware) and Ashutosh Kumar (Elastic)

**Translator:** Xin Li (DaoCloud)

The Kubernetes Non-Graceful Node Shutdown feature is now GA in Kubernetes v1.28.
It was introduced as
[alpha](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown)
in Kubernetes v1.24, and promoted to
[beta](https://kubernetes.io/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/)
in Kubernetes v1.26.
This feature allows stateful workloads to restart on a different node if the
original node is shut down unexpectedly or ends up in a non-recoverable state,
such as a hardware failure or an unresponsive OS.

## What is a Non-Graceful Node Shutdown

In a Kubernetes cluster, a node can be shut down in a planned, graceful way or
unexpectedly, because of reasons such as a power outage or something else external.
A node shutdown can lead to workload failure if the node is not drained
before the shutdown. A node shutdown can be either graceful or non-graceful.

The [Graceful Node Shutdown](https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/)
feature allows the kubelet to detect a node shutdown event, properly terminate
the pods on that node, and release resources before the actual shutdown.

When a node is shut down but not detected by the kubelet's Node Shutdown Manager,
this becomes a non-graceful node shutdown.
A non-graceful node shutdown is usually not a problem for stateless apps; however,
it is a problem for stateful apps.
A stateful application cannot function properly if its pods are stuck on the
shut-down node and are not restarted on a running node.

In the case of a non-graceful node shutdown, you can manually add an `out-of-service` taint to the Node:

```
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
```

This taint triggers the pods on the node to be forcefully deleted if there are no
matching tolerations on the pods. Persistent volumes attached to the shut-down node
will be detached, and new pods will be created successfully on a different running
node.
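
For illustration, you can watch the failover as it happens; the `app=web` label
selector below is an assumed example, not something defined by this feature:

```
# Watch pods of an affected workload get force-deleted and then recreated on a
# healthy node ("app=web" is a hypothetical label used only for illustration):
kubectl get pods -l app=web -o wide --watch
```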

**Note:** Before applying the `out-of-service` taint, you must verify that the node is
already in a shutdown or powered-off state (not in the middle of restarting).

Once all the workload pods linked to the out-of-service node have been moved to
a new running node, and the shut-down node has been recovered, you should remove
the taint from the affected node.
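
A minimal sketch of that recovery step, using the trailing-`-` form of
`kubectl taint` to delete the taint (`<node-name>` is a placeholder, as above):

```
# Check that the node has come back and is Ready again:
kubectl get node <node-name>

# Remove the out-of-service taint; the trailing "-" deletes it:
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
```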

## What’s new in stable

With the promotion of the Non-Graceful Node Shutdown feature to stable, the
feature gate `NodeOutOfServiceVolumeDetach` is locked to true on
`kube-controller-manager` and cannot be disabled.
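
For example, explicitly turning the gate off is now rejected when the controller
manager starts; the sketch below paraphrases the behavior rather than quoting an
exact error message:

```
# In v1.28 this is refused at startup because the gate is locked to true;
# the process exits with an error saying the feature is locked
# (other required flags omitted for brevity):
kube-controller-manager --feature-gates=NodeOutOfServiceVolumeDetach=false ...
```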

The metrics `force_delete_pods_total` and `force_delete_pod_errors_total` in the
Pod GC Controller have been enhanced to account for all forceful pod deletions.
A reason is added to the metric to indicate whether the pod was forcefully deleted
because it was terminated, orphaned, terminating with the `out-of-service` taint,
or terminating and unscheduled.

A "reason" is also added to the metric `attachdetach_controller_forced_detaches`
in the Attach Detach Controller to indicate whether the force detach was caused by
the `out-of-service` taint or a timeout.
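
As a rough sketch of how you could inspect these counters, assuming you can reach
the kube-controller-manager metrics endpoint (secure port 10257 by default) with a
token that is authorized to scrape it:

```
# Scrape the controller manager's /metrics and filter for the counters above;
# <control-plane-host> and $TOKEN are placeholders for your environment:
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://<control-plane-host>:10257/metrics" \
  | grep -E 'force_delete_pod|attachdetach_controller_forced_detaches'
```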

## What’s next?

This feature requires a user to manually add a taint to the node to trigger
workload failover, and to remove the taint after the node has recovered.
In the future, we plan to find ways to automatically detect and fence nodes
that are shut down or failed, and to automatically fail workloads over to another node.

## How can I learn more?

Check out the additional documentation on this feature
[here](https://kubernetes.io/docs/concepts/architecture/nodes/#non-graceful-node-shutdown).

## How to get involved?

We offer a huge thank you to all the contributors who helped with the design,
implementation, and review of this feature and helped move it from alpha through beta to stable:
* Michelle Au ([msau42](https://github.com/msau42))
* Derek Carr ([derekwaynecarr](https://github.com/derekwaynecarr))
* Danielle Endocrimes ([endocrimes](https://github.com/endocrimes))
* Baofa Fan ([carlory](https://github.com/carlory))
* Tim Hockin ([thockin](https://github.com/thockin))
* Ashutosh Kumar ([sonasingh46](https://github.com/sonasingh46))
* Hemant Kumar ([gnufied](https://github.com/gnufied))
* Yuiko Mouri ([YuikoTakada](https://github.com/YuikoTakada))
* Mrunal Patel ([mrunalp](https://github.com/mrunalp))
* David Porter ([bobbypage](https://github.com/bobbypage))
* Yassine Tijani ([yastij](https://github.com/yastij))
* Jing Xu ([jingxu97](https://github.com/jingxu97))
* Xing Yang ([xing-yang](https://github.com/xing-yang))

This feature is a collaboration between SIG Storage and SIG Node.
For those interested in getting involved with the design and development of any
part of the Kubernetes Storage system, join the Kubernetes Storage Special
Interest Group (SIG).
For those interested in getting involved with the design and development of the
components that support the controlled interactions between pods and host
resources, join the Kubernetes Node SIG.
