---
meta:
  title: Migrating Kubernetes workloads to a new node pool
  description: Learn how to migrate existing Kubernetes workloads to a new node pool.
content:
  h1: Migrating Kubernetes workloads to a new node pool
  paragraph: Learn how to migrate existing Kubernetes workloads to a new node pool.
tags: kubernetes kapsule kosmos
dates:
  validation: 2025-06-23
  posted: 2025-06-23
categories:
  - containers
---

This documentation provides step-by-step instructions on how to migrate Kubernetes workloads from one node pool to another within a Kubernetes Kapsule cluster.
Migrating workloads may be necessary when you want to change the commercial Instance type of your pool, or to scale your infrastructure.

<Macro id="requirements" />

- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- Created a [Kubernetes Kapsule cluster](/kubernetes/how-to/create-cluster/)
- An existing node pool that you want to migrate

<Message type="important">
  Always ensure that your **data is backed up** before performing any operations that could affect it.
</Message>

1. Create the new node pool with the desired configuration, either [from the console](/kubernetes/how-to/create-node-pool/) or by using the Scaleway CLI or API, as shown below.
   <Message type="tip">
     Ensure that the new node pool is properly labeled if necessary.
   </Message>
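   A pool can be created with the Scaleway CLI (`scw`). The command below is a minimal sketch; the cluster ID, pool name, node type, and size are example values to replace with your own:

   ```bash
   # Create a new pool in an existing Kapsule cluster (example values)
   scw k8s pool create \
     cluster-id=11111111-1111-1111-1111-111111111111 \
     name=pool-new \
     node-type=GP1-S \
     size=3
   ```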

2. Run `kubectl get nodes` to check that the new nodes are in a `Ready` state.
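   For example, nodes can be filtered by their pool label. The `k8s.scaleway.com/pool-name` label key and the pool name below are assumptions to verify against your own nodes:

   ```bash
   # List only the nodes of the new pool and check that they are Ready
   # (pool label key and value are assumed; verify with: kubectl get nodes --show-labels)
   kubectl get nodes -l k8s.scaleway.com/pool-name=pool-new
   ```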

3. Cordon the nodes in the old node pool to prevent new pods from being scheduled there. For each node, run: `kubectl cordon <node-name>`
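   To cordon every node of the old pool in one pass, a small loop such as the following can be used. The pool label is the same assumption as above; replace `pool-old` with the name of your old pool:

   ```bash
   # Mark all nodes of the old pool as unschedulable
   # (pool label key and value are assumed; adapt to your cluster)
   for node in $(kubectl get nodes -l k8s.scaleway.com/pool-name=pool-old -o name); do
     kubectl cordon "$node"
   done
   ```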

4. Drain the nodes to evict the pods gracefully.
   - For each node, run: `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data`
   - The `--ignore-daemonsets` flag is required because DaemonSet-managed pods cannot be evicted; the drain proceeds and leaves them in place.
   - The `--delete-emptydir-data` flag is necessary if your pods use `emptyDir` volumes, but use this option carefully, as it deletes the data stored in these volumes.
   - Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) for further information.
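   As a sketch, the same loop pattern used for cordoning can drain all nodes of the old pool, again assuming the pool label and name:

   ```bash
   # Evict the pods from every node of the old pool, one node at a time
   # (pool label key and value are assumed; adapt to your cluster)
   for node in $(kubectl get nodes -l k8s.scaleway.com/pool-name=pool-old -o name); do
     kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
   done
   ```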

5. After draining, run `kubectl get pods -o wide` to verify that the pods have been rescheduled to the new node pool.
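   For example, you can list all pods with the node they run on, or filter for pods still scheduled on one of the old nodes (`<old-node-name>` is a placeholder for one of your old node names):

   ```bash
   # Show every pod together with the node it runs on
   kubectl get pods --all-namespaces -o wide

   # List any pods still scheduled on a given old node (should return nothing)
   kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<old-node-name>
   ```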

6. Delete the old node pool once you have confirmed that all workloads are running smoothly on the new node pool.
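   The old pool can be removed from the console or with the Scaleway CLI. The command below is a sketch with an example pool ID; deleting a pool permanently removes its nodes:

   ```bash
   # Delete the old pool (example pool ID; this removes its nodes for good)
   scw k8s pool delete 22222222-2222-2222-2222-222222222222
   ```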