
Commit 58ac1b3

docs(ins): add warning for kapsule nodes
1 parent af8b3f1 commit 58ac1b3

File tree: 4 files changed (95 additions, 0 deletions)


menu/navigation.json

Lines changed: 8 additions & 0 deletions

@@ -1875,6 +1875,14 @@
       "label": "Connect to a cluster with kubectl",
       "slug": "connect-cluster-kubectl"
     },
+    {
+      "label": "Create a new Kapsule node pool",
+      "slug": "create-node-pool"
+    },
+    {
+      "label": "Migrate a Kapsule node pool",
+      "slug": "migrate-node-pool"
+    },
     {
       "label": "Deploy an image from Container Registry",
       "slug": "deploy-image-from-container-registry"

pages/instances/api-cli/migrating-instances.mdx

Lines changed: 4 additions & 0 deletions

@@ -26,6 +26,10 @@ To do so, you need the Instance’s ID and a valid API key.
   Network interface names may vary across commercial families (e.g. ENT1 vs. POP2). Ensure that any hardcoded interface names in your configurations or scripts are updated to avoid migration issues.
 </Message>

+<Message type="important">
+  Do **not** manually change the commercial type of **Kubernetes Kapsule nodes** using the API or CLI. Kubernetes Kapsule nodes **must be managed** through Kubernetes. Modifying node types outside of the recommended method can lead to instability or unexpected behavior.
+  To change the commercial type of your nodes, create a new node pool with the desired Instance type and [migrate your workloads](/kubernetes/how-to/migrate-node-pool/) to the new pool.
+</Message>
 <Tabs id="updateinstance">
   <TabsTab label="CLI">
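Before touching any Instance directly, it helps to confirm which Kapsule pool it belongs to and to start at the pool level rather than the Instance level. A minimal sketch of that safer workflow, assuming a configured `scw` CLI; the cluster ID below is a placeholder:

```bash
# List the node pools of the Kapsule cluster to identify the pool
# whose commercial Instance type you want to change (placeholder ID):
scw k8s pool list cluster-id=11111111-2222-3333-4444-555555555555

# Do NOT change the commercial type of the underlying Instances with
# `scw instance server update`; create a new pool with the desired
# node type and migrate workloads to it instead (see the how-tos below).
```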

pages/kubernetes/how-to/create-node-pool.mdx

Lines changed: 36 additions & 0 deletions

@@ -0,0 +1,36 @@
---
meta:
  title: Create a new Kubernetes Kapsule node pool
  description: Learn how to add a new node pool to an existing Kubernetes Kapsule cluster.
content:
  h1: Create a new Kubernetes Kapsule node pool
  paragraph: Learn how to add a new node pool to an existing Kubernetes Kapsule cluster.
tags: kubernetes kapsule kosmos
dates:
  validation: 2025-06-23
  posted: 2025-06-23
categories:
  - containers
---

This documentation provides step-by-step instructions on how to create a new node pool for an existing Kubernetes Kapsule cluster. A CLI-based sketch follows the steps below.

<Macro id="requirements" />

- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- Created a [Kubernetes Kapsule cluster](/kubernetes/how-to/create-cluster/)

1. Navigate to **Kubernetes** under the **Containers** section of the [Scaleway console](https://console.scaleway.com/) side menu. The Kubernetes dashboard displays.
2. Click the name of the Kapsule cluster you want to manage. The cluster information page displays.
3. Click the **Pools** tab to display the pool configuration of the cluster.
4. Click **Add pool** to launch the pool creation wizard.
5. Configure the pool:
    - Choose the **Availability Zone** for the pool.
    - Choose the commercial type of Instance for the pool.
    - Configure the system volume.
    - Configure the pool options.
    - Enter the pool's details.
6. Click **Add pool**. The pool is added to your basket. Repeat the steps above to configure additional pools.
7. Click **Review** once you have configured the desired pools. A summary of your configuration displays.
8. Verify your configuration and click **Submit** to add the pool(s) to your Kapsule cluster.
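For readers who prefer the CLI over the console wizard, the same result can likely be achieved with the Scaleway CLI. A minimal sketch, assuming a configured `scw` CLI; the cluster ID, pool name, node type, and size are placeholders:

```bash
# Add a node pool to an existing Kapsule cluster (placeholder values).
# node-type sets the commercial Instance type used by the pool's nodes.
scw k8s pool create \
  cluster-id=11111111-2222-3333-4444-555555555555 \
  name=new-pool \
  node-type=PRO2-M \
  size=3

# Watch the new nodes register with the cluster and reach Ready:
kubectl get nodes --watch
```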
pages/kubernetes/how-to/migrate-node-pool.mdx

Lines changed: 47 additions & 0 deletions

@@ -0,0 +1,47 @@
---
meta:
  title: Migrating Kubernetes workloads to a new node pool
  description: Learn how to migrate existing Kubernetes workloads to a new node pool.
content:
  h1: Migrating Kubernetes workloads to a new node pool
  paragraph: Learn how to migrate existing Kubernetes workloads to a new node pool.
tags: kubernetes kapsule kosmos
dates:
  validation: 2025-06-23
  posted: 2025-06-23
categories:
  - containers
---

This documentation provides step-by-step instructions on how to migrate Kubernetes workloads from one node pool to another within a Kubernetes Kapsule cluster.
Migrating workloads may be necessary to change the commercial type of Instance used by your pool, or to scale your infrastructure.

<Macro id="requirements" />

- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- Created a [Kubernetes Kapsule cluster](/kubernetes/how-to/create-cluster/)
- An existing node pool that you want to migrate

<Message type="important">
  Always ensure that your **data is backed up** before performing any operations that could affect it.
</Message>

1. Create the new node pool with the desired configuration, either [from the console](/kubernetes/how-to/create-node-pool/) or through the Scaleway API or CLI.
    <Message type="tip">
      Ensure that the new node pool is properly labeled, if necessary.
    </Message>
2. Run `kubectl get nodes` to check that the new nodes are in a `Ready` state.
3. Cordon the nodes in the old node pool to prevent new pods from being scheduled there. For each node, run: `kubectl cordon <node-name>` (steps 3 to 5 are consolidated in the sketch after this list).
4. Drain the nodes to evict the pods gracefully.
    - For each node, run: `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data`
    - The `--ignore-daemonsets` flag is used because DaemonSet pods cannot be evicted: the DaemonSet controller schedules them on every eligible node and recreates them automatically.
    - The `--delete-emptydir-data` flag is necessary if your pods use `emptyDir` volumes. Use this option carefully, as it deletes the data stored in these volumes.
    - Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) for further information.
5. After draining, run `kubectl get pods -o wide` to verify that the pods have been rescheduled to the new node pool.
6. Delete the old node pool once you have confirmed that all workloads are running smoothly on the new node pool.
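Steps 3 to 5 can be scripted once the old pool's nodes are identifiable. A minimal sketch, assuming the nodes carry a `k8s.scaleway.com/pool-name` label (this label name is an assumption; verify it with `kubectl get nodes --show-labels`) and the old pool is named `old-pool`:

```bash
# Cordon every node of the old pool so no new pods are scheduled there
# (the pool-name label is an assumption; check with --show-labels).
kubectl cordon -l k8s.scaleway.com/pool-name=old-pool

# Drain the nodes one at a time to evict pods gracefully.
for node in $(kubectl get nodes -l k8s.scaleway.com/pool-name=old-pool -o name); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

# Confirm the pods were rescheduled onto the new pool's nodes.
kubectl get pods --all-namespaces -o wide
```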
