
Commit 4d1245d

bene2k1 and nox-404 authored
Apply suggestions from code review
Co-authored-by: Nox <[email protected]>
1 parent d015d80 commit 4d1245d

File tree

1 file changed: +8 -5 lines


pages/kubernetes/how-to/manage-node-pools.mdx

Lines changed: 8 additions & 5 deletions
```diff
@@ -1,9 +1,9 @@
 ---
 meta:
-  title: How to manage Kubetnetes Kapsule node pools
+  title: How to manage Kubernetes Kapsule node pools
   description: Learn how to manage Kubernetes Kapsule node pools from the Scaleway console.
 content:
-  h1: How to migrate Kubetnetes workloads to a new node pools
+  h1: How to manage Kubernetes node pools
   paragraph: Learn how to manage Kubernetes Kapsule node pools from the Scaleway console.
 tags: kubernetes kapsule kosmos
 dates:
@@ -46,11 +46,11 @@ This documentation provides step-by-step instructions on how to manage Kubernete
 1. Navigate to **Kubernetes** under the **Containers** section of the [Scaleway console](https://console.scaleway.com/) side menu. The Kubernetes dashboard displays.
 2. Click the Kapsule cluster name you want to manage. The cluster information page displays.
 3. Click the **Pools** tab to display the pool configuration of the cluster.
-4. Click <Icon name="more" /> > **Delete** next to the node pool you want to edit.
+4. Click <Icon name="more" /> > **Edit** next to the node pool you want to edit.
 5. Configure the pool:
     - Update pool tags
     - Configure autoscaling
-    - Enable or disable the [autoheal feature](/kubernetes/concepts/#autoheal)
+    - Enable or disable the [autoheal feature](/kubernetes/reference-content/using-kapsule-autoheal-feature/)
 6. Click **Update pool** to update the pool configuration.

 ## How to migrate existing workloads to a new Kubernets Kapsule node pool
@@ -59,12 +59,15 @@ This documentation provides step-by-step instructions on how to manage Kubernete
    Always ensure that your **data is backed up** before performing any operations that could affect it.
    </Message>

-1. Create the new node pool with the desired configuration either [from the console](/kubernetes/how-to/create-node-pool/) or by using `kubectl`.
+1. Create the new node pool with the desired configuration either [from the console](#how-to-create-a-new-kubernetes-kapsule-node-pool) or by using `scw`.
    <Message type="tip">
      Ensure that the new node pool is properly labeled if necessary.
    </Message>
 2. Run `kubectl get nodes` to check that the new nodes are in a `Ready` state.
 3. Cordon the nodes in the old node pool to prevent new pods from being scheduled there. For each node, run: `kubectl cordon <node-name>`
+   <Message type="tip">
+     You can use a selector on the pool name label to cordon or drain multiple nodes at the same time if your app allows it (ex. `kubectl cordon -l k8s.scaleway.com/pool-name=mypoolname`)
+   </Message>
 4. Drain the nodes to evict the pods gracefully.
    - For each node, run: `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data`
    - The `--ignore-daemonsets` flag is used because daemon sets manage pods across all nodes and will automatically reschedule them.
```
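For reference, a minimal sketch of the end-to-end migration flow this patch documents, combining the new `scw`-based pool creation with the label-selector tip added in the last hunk. The cluster ID, pool names, and node type below are placeholders, and the `scw k8s pool create` argument syntax is an assumption based on the Scaleway CLI's `key=value` convention; verify it with `scw k8s pool create --help` for your CLI version.

```sh
# Hypothetical values - replace with your own cluster ID, pool names, and node type.
CLUSTER_ID=11111111-1111-1111-1111-111111111111
NEW_POOL=new-pool
OLD_POOL=old-pool

# 1. Create the replacement pool (argument syntax assumed; check `scw k8s pool create --help`).
scw k8s pool create cluster-id=$CLUSTER_ID name=$NEW_POOL node-type=DEV1-M size=3

# 2. Wait until every node in the new pool reports Ready.
kubectl get nodes -l k8s.scaleway.com/pool-name=$NEW_POOL

# 3. Cordon all nodes in the old pool at once via the pool-name label.
kubectl cordon -l k8s.scaleway.com/pool-name=$OLD_POOL

# 4. Drain them so pods reschedule onto the new pool.
kubectl drain -l k8s.scaleway.com/pool-name=$OLD_POOL \
  --ignore-daemonsets --delete-emptydir-data
```

Draining by selector evicts pods from several nodes in parallel, which is why the tip added in this commit hedges with "if your app allows it"; if your workloads cannot tolerate that, drain the old nodes one at a time with `kubectl drain <node-name>` as in step 4 of the documented procedure.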

0 commit comments
