content/en/docs/concepts/architecture/nodes.md (2 additions, 2 deletions)
shutdown node comes up, the pods will be deleted by kubelet and new pods will be
created on a different running node. If the original shutdown node does not come up,
these pods will be stuck in terminating status on the shutdown node forever.
To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
or `NoSchedule` effect to a Node, marking it out-of-service.
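As a sketch, the taint described above could be applied with `kubectl taint`; the node name `node-1` and the taint value `nodeshutdown` below are placeholders, and the effect may be either `NoExecute` or `NoSchedule`:

```shell
# Mark the shut-down node (assumed here to be named "node-1") as out-of-service.
# The taint key is node.kubernetes.io/out-of-service; the value is arbitrary.
kubectl taint nodes node-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute

# Once the node has recovered (or been decommissioned), remove the taint again.
# A trailing "-" on the taint spec tells kubectl to delete it.
kubectl taint nodes node-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
```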
If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the
pods on the node will be forcefully deleted if there are no matching tolerations on it and volume