pages/kubernetes/how-to/migrate-node-pool.mdx
5 lines changed: 0 additions & 5 deletions
@@ -31,17 +31,12 @@ Migrating workloads can be required to change the commercial type of Instance fo
  <Message type="tip">
    Ensure that the new node pool is properly labeled if necessary.
  </Message>
-
2. Run `kubectl get nodes` to check that the new nodes are in a `Ready` state.
-
3. Cordon the nodes in the old node pool to prevent new pods from being scheduled there. For each node, run: `kubectl cordon <node-name>`
-
4. Drain the nodes to evict the pods gracefully.
    - For each node, run: `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data`
    - The `--ignore-daemonsets` flag is used because DaemonSets manage pods across all nodes and will reschedule them automatically.
    - The `--delete-emptydir-data` flag is necessary if your pods use `emptyDir` volumes; use it carefully, as it deletes the data stored in those volumes.
    - Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) for further information.
-
5. Run `kubectl get pods -o wide` after draining to verify that the pods have been rescheduled to the new node pool.
-
6. [Delete the old node pool](/kubernetes/how-to/delete-node-pool/) once you confirm that all workloads are running smoothly on the new node pool.
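
For reference, steps 3–5 can also be scripted across the whole old pool rather than run node by node. The sketch below is a minimal, non-authoritative example: it assumes the old pool's nodes can be matched with a label selector, and the selector `k8s.scaleway.com/pool-name=old-pool` is an assumption — run `kubectl get nodes --show-labels` to find the key/value that actually identifies your old pool.

```bash
# Sketch of steps 3-5 applied to every node in the old pool.
# The selector below is an assumption: adjust it to the label that
# identifies your old pool (see `kubectl get nodes --show-labels`).
OLD_POOL_SELECTOR="k8s.scaleway.com/pool-name=old-pool"

# Step 3: cordon every node in the old pool so no new pods are scheduled there.
for node in $(kubectl get nodes -l "$OLD_POOL_SELECTOR" -o jsonpath='{.items[*].metadata.name}'); do
  kubectl cordon "$node"
done

# Step 4: drain each node, tolerating DaemonSet-managed pods and emptyDir volumes.
for node in $(kubectl get nodes -l "$OLD_POOL_SELECTOR" -o jsonpath='{.items[*].metadata.name}'); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

# Step 5: check that the evicted pods have been rescheduled onto the new pool.
kubectl get pods --all-namespaces -o wide
```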