pages/kubernetes/how-to/manage-node-pools.mdx
8 additions & 5 deletions
@@ -1,9 +1,9 @@
 ---
 meta:
-  title: How to manage Kubetnetes Kapsule node pools
+  title: How to manage Kubernetes Kapsule node pools
   description: Learn how to manage Kubernetes Kapsule node pools from the Scaleway console.
 content:
-  h1: How to migrate Kubetnetes workloads to a new node pools
+  h1: How to manage Kubernetes node pools
   paragraph: Learn how to manage Kubernetes Kapsule node pools from the Scaleway console.
 tags: kubernetes kapsule kosmos
 dates:
@@ -46,11 +46,11 @@ This documentation provides step-by-step instructions on how to manage Kubernete
 1. Navigate to **Kubernetes** under the **Containers** section of the [Scaleway console](https://console.scaleway.com/) side menu. The Kubernetes dashboard displays.
 2. Click the Kapsule cluster name you want to manage. The cluster information page displays.
 3. Click the **Pools** tab to display the pool configuration of the cluster.
-4. Click <Icon name="more" /> > **Delete** next to the node pool you want to edit.
+4. Click <Icon name="more" /> > **Edit** next to the node pool you want to edit.
 5. Configure the pool:
     - Update pool tags
     - Configure autoscaling
-    - Enable or disable the [autoheal feature](/kubernetes/concepts/#autoheal)
+    - Enable or disable the [autoheal feature](/kubernetes/reference-content/using-kapsule-autoheal-feature/)
 6. Click **Update pool** to update the pool configuration.

 ## How to migrate existing workloads to a new Kubernetes Kapsule node pool
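For readers who prefer the CLI over the console, the pool edit described in the hunk above can also be sketched with `scw`. This is only a minimal sketch: the pool and cluster IDs are placeholders, and the argument names (`tags.0`, `autoscaling`, `min-size`, `max-size`, `autohealing`) are assumptions to confirm against `scw k8s pool update --help`.

```bash
# List the pools of a cluster to find the ID of the pool to edit (placeholder cluster ID).
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111

# Update tags, autoscaling bounds, and autoheal on an existing pool (placeholder pool ID).
# Argument names mirror the Kapsule API fields; verify them with `scw k8s pool update --help`.
scw k8s pool update 22222222-2222-2222-2222-222222222222 \
  tags.0=env=prod \
  autoscaling=true min-size=2 max-size=5 \
  autohealing=true
```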
@@ -59,12 +59,15 @@ This documentation provides step-by-step instructions on how to manage Kubernete
 Always ensure that your **data is backed up** before performing any operations that could affect it.
 </Message>

-1. Create the new node pool with the desired configuration either [from the console](/kubernetes/how-to/create-node-pool/) or by using `kubectl`.
+1. Create the new node pool with the desired configuration either [from the console](#how-to-create-a-new-kubernetes-kapsule-node-pool) or by using `scw`.
    <Message type="tip">
      Ensure that the new node pool is properly labeled if necessary.
    </Message>
 2. Run `kubectl get nodes` to check that the new nodes are in a `Ready` state.
 3. Cordon the nodes in the old node pool to prevent new pods from being scheduled there. For each node, run: `kubectl cordon <node-name>`
+   <Message type="tip">
+     You can use a selector on the pool name label to cordon or drain multiple nodes at the same time if your app allows it (e.g. `kubectl cordon -l k8s.scaleway.com/pool-name=mypoolname`).
+   </Message>
 4. Drain the nodes to evict the pods gracefully.
     - For each node, run: `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data`
     - The `--ignore-daemonsets` flag is used because daemon sets manage pods across all nodes and will automatically reschedule them.
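Taken together, the migration steps in the hunk above can be strung into a short shell sketch. Pool names, the cluster ID, and the `node-type`/`size` values are illustrative assumptions; the `k8s.scaleway.com/pool-name` label selector comes from the tip added in the diff, and the `scw k8s pool create` argument names should be checked against `scw k8s pool create --help`.

```bash
#!/usr/bin/env bash
set -euo pipefail

OLD_POOL="mypoolname"          # existing pool to retire (illustrative name)
NEW_POOL="mypoolname-new"      # replacement pool (illustrative name)
CLUSTER_ID="11111111-1111-1111-1111-111111111111"  # placeholder cluster ID

# 1. Create the replacement pool with the desired configuration.
scw k8s pool create cluster-id="$CLUSTER_ID" name="$NEW_POOL" node-type=DEV1-M size=3

# 2. Check that the new nodes have registered and are in a Ready state.
kubectl get nodes -l k8s.scaleway.com/pool-name="$NEW_POOL"

# 3. Cordon, then drain, every node of the old pool using the pool-name label selector.
kubectl cordon -l k8s.scaleway.com/pool-name="$OLD_POOL"
kubectl drain -l k8s.scaleway.com/pool-name="$OLD_POOL" \
  --ignore-daemonsets --delete-emptydir-data

# 4. Once workloads have rescheduled onto the new pool, delete the old pool from the
#    console or with `scw k8s pool delete <pool-id>`.
```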