
Commit 7243cbd

bene2k1, jcirinoscl, and wynox-404 authored
docs(ins): add warning for kapsule nodes (#5166)
* docs(ins): add warning for kapsule nodes
* Apply suggestions from code review
* feat(k8s): add delete pool
* Apply suggestions from code review
* feat(k8s): update doc
* docs(k8s): update
* feat(k8s): update navigation
* Apply suggestions from code review
* Apply suggestions from code review
* feat(k8s): update wording
* Apply suggestions from code review
* feat(k8s): add links to cli
* feat(k8s): update wording

Co-authored-by: Jessica <[email protected]>
Co-authored-by: Nox <[email protected]>
1 parent 45a007e commit 7243cbd

File tree

3 files changed: +108 −0 lines changed


menu/navigation.json

Lines changed: 4 additions & 0 deletions
@@ -1875,6 +1875,10 @@
       "label": "Connect to a cluster with kubectl",
       "slug": "connect-cluster-kubectl"
     },
+    {
+      "label": "Manage Kapsule node pools",
+      "slug": "manage-node-pools"
+    },
     {
       "label": "Deploy an image from Container Registry",
       "slug": "deploy-image-from-container-registry"

pages/instances/api-cli/migrating-instances.mdx

Lines changed: 4 additions & 0 deletions
@@ -26,6 +26,10 @@ To do so, you need the Instance’s ID and a valid API key.
   Network interface names may vary across commercial families (e.g. ENT1 vs. POP2). Ensure that any hardcoded interface names in your configurations or scripts are updated to avoid migration issues.
 </Message>

+<Message type="important">
+  Do **not** manually change the commercial type of **Kubernetes Kapsule nodes** using the API or CLI. Kubernetes Kapsule nodes **must be managed** through Kubernetes. Modifying node types outside of the recommended method can lead to instability or unexpected behavior.
+  To change the commercial type of your nodes, create a new node pool with the desired Instance type and [migrate your workloads](/kubernetes/how-to/manage-node-pools/#how-to-migrate-existing-workloads-to-a-new-kubernetes-kapsule-node-pool) to the new pool.
+</Message>
 <Tabs id="updateinstance">
   <TabsTab label="CLI">

Lines changed: 100 additions & 0 deletions
@@ -0,0 +1,100 @@
---
meta:
  title: How to manage Kubernetes Kapsule node pools
  description: Learn how to manage Kubernetes Kapsule node pools from the Scaleway console.
content:
  h1: How to manage Kubernetes node pools
  paragraph: Learn how to manage Kubernetes Kapsule node pools from the Scaleway console.
tags: kubernetes kapsule kosmos
dates:
  validation: 2025-06-23
  posted: 2025-06-23
categories:
  - containers
---

This documentation provides step-by-step instructions on how to manage Kubernetes Kapsule node pools using the Scaleway console.

<Macro id="requirements" />

- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- Created a [Kubernetes Kapsule cluster](/kubernetes/how-to/create-cluster/)

## How to create a new Kubernetes Kapsule node pool

<Message type="tip">
  Kubernetes Kapsule supports using both **fully isolated** and **controlled isolation** node pools within the same cluster. [Learn more.](/kubernetes/reference-content/secure-cluster-with-private-network/#what-is-the-difference-between-controlled-isolation-and-full-isolation)
</Message>

1. Navigate to **Kubernetes** under the **Containers** section of the [Scaleway console](https://console.scaleway.com/) side menu. The Kubernetes dashboard displays.
2. Click the name of the Kapsule cluster you want to manage. The cluster information page displays.
3. Click the **Pools** tab to display the pool configuration of the cluster.
4. Click **Add pool** to launch the pool creation wizard.
5. Configure the pool:
   - Choose the **Availability Zone** for the pool.
   - Choose the commercial Instance type for the pool.
   - Configure the system volume.
   - Configure the pool options.
   - Enter the pool's details.
6. Click **Add pool**. The pool is added to your basket. Repeat the steps above to configure additional pools.
7. Click **Review** once you have configured the desired pools. A summary of your configuration displays.
8. Verify your configuration and click **Submit** to add the pool(s) to your Kapsule cluster.

<Message type="note">
  Alternatively, you can use the Scaleway CLI to [create node pools](https://cli.scaleway.com/k8s/#create-a-new-pool-in-a-cluster).
</Message>
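A CLI-based pool creation might look like the sketch below. The cluster ID, pool name, node type, sizes, and zone are all hypothetical placeholders; argument names follow the `scw` v2 `key=value` style, so check `scw k8s pool create --help` for the authoritative list.

```shell
# Sketch only: replace the cluster ID, pool name, node type, and zone
# with your own values before running.
scw k8s pool create \
  cluster-id=11111111-2222-3333-4444-555555555555 \
  name=my-new-pool \
  node-type=DEV1-M \
  size=3 \
  autoscaling=true \
  min-size=1 \
  max-size=5 \
  zone=fr-par-1
```

Enabling autoscaling at creation time, as shown here, lets the cluster add or remove nodes between `min-size` and `max-size` without further manual changes.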

## How to edit an existing Kubernetes Kapsule node pool

1. Navigate to **Kubernetes** under the **Containers** section of the [Scaleway console](https://console.scaleway.com/) side menu. The Kubernetes dashboard displays.
2. Click the name of the Kapsule cluster you want to manage. The cluster information page displays.
3. Click the **Pools** tab to display the pool configuration of the cluster.
4. Click <Icon name="more" /> > **Edit** next to the node pool you want to edit.
5. Configure the pool:
   - Update the pool tags
   - Configure autoscaling
   - Enable or disable the [autoheal feature](/kubernetes/reference-content/using-kapsule-autoheal-feature/)
6. Click **Update pool** to update the pool configuration.

<Message type="note">
  Alternatively, you can use the Scaleway CLI to [update a node pool](https://cli.scaleway.com/k8s/#update-a-pool-in-a-cluster).
</Message>
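A CLI-based update might look like the following sketch. The pool ID and all values are hypothetical placeholders; verify the available arguments with `scw k8s pool update --help`.

```shell
# Sketch only: list your pool IDs first, e.g.
#   scw k8s pool list cluster-id=<cluster-id>
# then update the pool's autoscaling bounds and tags.
scw k8s pool update 11111111-2222-3333-4444-555555555555 \
  autoscaling=true \
  min-size=2 \
  max-size=6 \
  tags.0=production
```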

## How to migrate existing workloads to a new Kubernetes Kapsule node pool

<Message type="important">
  Always ensure that your **data is backed up** before performing any operations that could affect it.
</Message>

1. Create the new node pool with the desired configuration, either [from the console](#how-to-create-a-new-kubernetes-kapsule-node-pool) or by using the Scaleway CLI tool `scw`.
   <Message type="tip">
     Ensure that the new node pool is properly labeled, if necessary.
   </Message>
2. Run `kubectl get nodes` to check that the new nodes are in a `Ready` state.
3. Cordon the nodes in the old node pool to prevent new pods from being scheduled there. For each node, run `kubectl cordon <node-name>`.
   <Message type="tip">
     You can use a selector on the pool name label to cordon or drain multiple nodes at the same time, if your application allows it (e.g. `kubectl cordon -l k8s.scaleway.com/pool-name=mypoolname`).
   </Message>
4. Drain the nodes to evict the pods gracefully.
   - For each node, run `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data`.
   - The `--ignore-daemonsets` flag is required because DaemonSets manage pods across all nodes and will automatically reschedule them.
   - The `--delete-emptydir-data` flag is necessary if your pods use `emptyDir` volumes. Use this option carefully, as it deletes the data stored in these volumes.
   - Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) for further information.
5. After draining, run `kubectl get pods -o wide` to verify that the pods have been rescheduled to the new node pool.
6. [Delete the old node pool](#how-to-delete-an-existing-kubernetes-kapsule-node-pool) once you have confirmed that all workloads are running smoothly on the new node pool.
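
The cordon/drain steps above can be sketched end to end as follows, assuming the old pool is named `old-pool` (a hypothetical name) and its nodes carry the `k8s.scaleway.com/pool-name` label shown in the tip:

```shell
# Check that the new nodes are in a Ready state (step 2)
kubectl get nodes

# Cordon every node in the old pool so no new pods are scheduled there (step 3)
kubectl cordon -l k8s.scaleway.com/pool-name=old-pool

# Drain each node in the old pool, evicting pods gracefully (step 4)
for node in $(kubectl get nodes -l k8s.scaleway.com/pool-name=old-pool -o name); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

# Verify that pods have been rescheduled onto the new pool (step 5)
kubectl get pods -o wide
```

Draining node by node, as in the loop, keeps evictions gradual; if your application tolerates it, `kubectl drain -l ...` can drain the whole pool in one command instead.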

## How to delete an existing Kubernetes Kapsule node pool

1. Navigate to **Kubernetes** under the **Containers** section of the [Scaleway console](https://console.scaleway.com/) side menu. The Kubernetes dashboard displays.
2. Click the name of the Kapsule cluster you want to manage. The cluster information page displays.
3. Click the **Pools** tab to display the pool configuration of the cluster.
4. Click <Icon name="more" /> > **Delete** next to the node pool you want to delete.
5. Click **Delete pool** in the pop-up to confirm deletion of the pool.
   <Message type="important">
     This action will permanently destroy your pool and all its data.
   </Message>

<Message type="note">
  Alternatively, you can use the Scaleway CLI to [delete a node pool](https://cli.scaleway.com/k8s/#delete-a-pool-in-a-cluster).
</Message>
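The CLI equivalent is a single command; the pool ID below is a hypothetical placeholder, and `scw k8s pool delete --help` shows any additional options.

```shell
# Sketch only: permanently deletes the pool and its nodes.
# Make sure workloads have been migrated off this pool first.
scw k8s pool delete 11111111-2222-3333-4444-555555555555
```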
