Commit 71f1ece

committed
feat(k8s): tuto
1 parent 654879a commit 71f1ece

File tree

1 file changed: +18 −6 lines changed
  • tutorials/migrate-k8s-persistent-volumes-to-multi-az


tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx

Lines changed: 18 additions & 6 deletions
@@ -3,7 +3,7 @@ meta:
   title: Migrating persistent volumes in a multi-zone Scaleway Kapsule cluster
   description: This tutorial provides information about how to migrate existing Persistent Volumes in a Scaleway Kapsule multi-zone cluster to enhance availability and fault tolerance.
 content:
-  h1: Migrating persistent volumes in a multi-zone Scaleway Kapsul cluster
+  h1: Migrating persistent volumes in a multi-zone Scaleway Kapsule cluster
   paragraph: This tutorial provides information about how to migrate existing Persistent Volumes in a Scaleway Kapsule multi-zone cluster to enhance availability and fault tolerance.
 tags: kapsule elastic-metal migration persistent-volumes
 categories:
@@ -29,7 +29,7 @@ This tutorial provides a generalized approach to migrating Persistent Volumes (P
 
 <Message type="important">
   **Backing up your data is crucial before making any changes.**
-  Ensure you have a backup strategy in place. You can use tools like Velero for Kubernetes backups or manually copy data to another storage solution. Always verify the integrity of your backups before proceeding.
+  Ensure you have a backup strategy in place. You can use tools like [Velero](/tutorials/k8s-velero-backup/) for Kubernetes backups or manually copy data to another storage solution. Always verify the integrity of your backups before proceeding.
 </Message>
 
 ## Identify existing Persistent Volumes
@@ -50,6 +50,13 @@ This tutorial provides a generalized approach to migrating Persistent Volumes (P
    scw instance volume list
    ```
 
+3. To find the `VOLUME_ID` associated with a PV, correlate it with the output of the following command:
+
+   ```sh
+   scw instance volume list
+   ```
+   Match the PV's details with the corresponding volume in the Scaleway Instance list to identify the correct `VOLUME_ID`.
+
 ## Create snapshots of your existing Persistent Volumes
 
 Use the Scaleway CLI to create snapshots of your volumes.
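The correlation described in the new step 3 can be mechanized. This is a minimal sketch, assuming the Scaleway CSI driver records the volume reference as `<zone>/<uuid>` in the PV's `spec.csi.volumeHandle` (verify this on your own cluster; `my-pv` and the sample handle below are placeholders):

```shell
#!/bin/sh
# Hypothetical helper: extract the Scaleway VOLUME_ID from a CSI volumeHandle.
# Assumption: handles look like "<zone>/<volume-uuid>".
volume_id_from_handle() {
  # Strip everything up to and including the first "/".
  echo "${1#*/}"
}

# On a live cluster the handle would come from kubectl, for example:
#   handle=$(kubectl get pv my-pv -o jsonpath='{.spec.csi.volumeHandle}')
handle="fr-par-1/11111111-2222-3333-4444-555555555555"
volume_id_from_handle "$handle"   # prints 11111111-2222-3333-4444-555555555555
```

The printed ID can then be matched directly against the `ID` column of `scw instance volume list`.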
@@ -91,12 +98,17 @@ Repeat this for each zone required.
 
 Modify your `PersistentVolumeClaims` to reference the newly created volumes.
 
-1. Delete the existing PVC (PVCs are immutable and cannot be updated directly):
+1. Before deleting the existing PVC, scale down your application to prevent data loss:
+   ```sh
+   kubectl scale statefulset my-app --replicas=0
+   ```
+
+2. Delete the existing PVC (PVCs are immutable and cannot be updated directly):
    ```sh
    kubectl delete pvc my-app-pvc
    ```
 
-2. Create a new PVC with a multi-zone compatible `StorageClass`:
+3. Create a new PVC with a multi-zone compatible `StorageClass`:
    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
@@ -111,15 +123,15 @@ Modify your `PersistentVolumeClaims` to reference the newly created volumes.
        storage: 10Gi
    ```
 
-3. Apply the updated PVCs:
+4. Apply the updated PVCs:
    ```sh
    kubectl apply -f my-app-pvc.yaml
    ```
 
 ## Reconfigure the StatefulSet to use multi-zone volumes
 
 1. Edit the `StatefulSet` definition to use the newly created Persistent Volume Claims.
-   Example `StatefulSet` configuration:
+   Example configuration:
 
    ```yaml
    apiVersion: apps/v1
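Once the StatefulSet references the new claims, the workload scaled down in the earlier step still has zero replicas. A minimal verification sketch, reusing the `my-app`/`my-app-pvc` placeholder names from the diff (the live `kubectl` calls are shown as comments; only the local phase check executes here):

```shell
#!/bin/sh
# Sketch: bring the workload back up and confirm the new claim is usable.
# On a live cluster you would run:
#   kubectl scale statefulset my-app --replicas=1
#   kubectl rollout status statefulset/my-app
#   phase=$(kubectl get pvc my-app-pvc -o jsonpath='{.status.phase}')

pvc_is_bound() {
  # A PVC can back a pod only when its reported phase is exactly "Bound".
  [ "$1" = "Bound" ]
}

phase="Bound"   # sample value standing in for the kubectl query above
if pvc_is_bound "$phase"; then
  echo "PVC is Bound; pods can be rescheduled onto the new volume"
fi
```

If the claim stays `Pending`, check that the new `StorageClass` exists and that the pre-created volume's zone matches a zone where the cluster has nodes.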
