tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx

dates:
  posted: 2025-01-30
---

Historically, Scaleway Kapsule clusters were single-zone, meaning workloads and their associated storage were confined to a single location. With the introduction of multi-zone support, distributing workloads across multiple zones can enhance availability and fault tolerance. This tutorial provides a generalized approach to migrating Persistent Volumes (PVs) from one zone to another in a Scaleway Kapsule multi-zone cluster, applicable to various applications.

<Macro id="requirements" />

- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- [Created a Kapsule cluster](/kubernetes/how-to/create-cluster/) with multi-zone support enabled
- Familiarity with Kubernetes Persistent Volumes, `StatefulSets`, and Storage Classes.

<Message type="important">
  **Backing up your data is crucial before making any changes.**

  Ensure you have a backup strategy in place. You can use tools like [Velero](/tutorials/k8s-velero-backup/) for Kubernetes backups or manually copy data to another storage solution. Always verify the integrity of your backups before proceeding.
</Message>
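
For example, a one-off Velero backup of the namespace that holds your workload could look like the following sketch. It assumes Velero is already installed in the cluster with a configured backup storage location; the backup and namespace names are illustrative:

```sh
# Back up all resources (including PVCs) in the workload's namespace
velero backup create my-app-backup --include-namespaces my-app-namespace

# Check that the backup completed successfully before making any changes
velero backup describe my-app-backup
```
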
## Identify existing Persistent Volumes

1. Use `kubectl` to interact with your cluster and list the Persistent Volumes in your cluster:
    ```sh
    kubectl get pv
    ```
2. Identify the volumes attached to your StatefulSet and note their `PersistentVolumeClaim` (PVC) names and `StorageClass`. To find the `VOLUME_ID` associated with a PV, correlate the PV's details with the output of the following command:
    ```sh
    scw instance volume list
    ```
3. Match the PV's details with the corresponding volume in the Scaleway Instance list to identify the correct `VOLUME_ID`.

    **Example output:**
    ```plaintext
    NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM   STORAGECLASS   ZONE
    ```
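
If the volumes were provisioned through the Scaleway CSI driver, the volume ID is also recorded on the PV object itself, so you can read it directly instead of matching entries by hand. A minimal sketch, assuming a CSI-provisioned PV (the PV name is illustrative):

```sh
# Print the CSI volume handle, which contains the Scaleway VOLUME_ID
kubectl get pv pvc-0123abcd-4567-89ef-0123-456789abcdef -o jsonpath='{.spec.csi.volumeHandle}'
```
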

<Message type="tip">
  Choose zones based on your distribution strategy. Check Scaleway's [zone availability](/account/reference-content/products-availability/) for optimal placement.
</Message>
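
If you prefer to create the replacement volumes from the command line rather than the console, a sketch with the `scw` CLI could look like this; the volume name, type, size, and zone are placeholders for your own values:

```sh
# Create a Block Storage volume in the target zone (all values are illustrative)
scw instance volume create name=my-app-volume volume-type=b_ssd size=10GB zone=fr-par-2
```
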
## Update Persistent Volume Claims (PVCs)

<Message type="important">
  Deleting a PVC can lead to data loss if not managed correctly. Ensure your application is scaled down or data is backed up.
</Message>
Modify your `PersistentVolumeClaims` to reference the newly created volumes.

1. Scale down the StatefulSet:
    ```sh
    kubectl scale statefulset my-app --replicas=0
    ```
2. Delete the existing PVC (PVCs are immutable and cannot be updated directly):
    ```sh
    kubectl delete pvc my-app-pvc
    ```
3. Create a new PVC with a multi-zone compatible `StorageClass`. Here is an example YAML file:
    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-app-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: scw-bssd-multi-zone
      resources:
        requests:
          storage: 10Gi
    ```
4. Apply the updated PVCs:
    ```sh
    kubectl apply -f my-app-pvc.yaml
    ```
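
Before moving on, it is worth confirming that the new claim reaches the `Bound` state. Note that with a `WaitForFirstConsumer` storage class, the claim only binds once a pod that uses it is scheduled:

```sh
# The STATUS column should eventually read "Bound"
kubectl get pvc my-app-pvc
```
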
## Reconfigure the StatefulSet to use multi-zone volumes

1. Edit the `StatefulSet` definition to use the newly created Persistent Volume Claims. Here is an example configuration:
    ```yaml
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: my-app
    spec:
      # ... serviceName, replicas, selector, and pod template as in your existing definition ...
      volumeClaimTemplates:
        - metadata:
            name: my-app-pvc
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: scw-bssd-multi-zone
            resources:
              requests:
                storage: 10Gi
    ```
2. Apply the `StatefulSet` changes:
    ```sh
    kubectl apply -f my-app-statefulset.yaml
    ```

## Verify the migration

1. Check that your pods are running and spread across the expected zones:
    ```sh
    kubectl get pods -o wide
    ```
2. Ensure that the new Persistent Volumes are bound and correctly distributed across the zones:
    ```sh
    kubectl get pv
    ```
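
To map the node names from the previous output to zones, you can display the standard topology label as an extra column:

```sh
# -L adds the value of the given node label as a column
kubectl get nodes -L topology.kubernetes.io/zone
```
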
## Considerations for volume expansion

If you need to **resize the Persistent Volume**, ensure that the `StorageClass` supports volume expansion.

1. Check if the feature is enabled:
    ```sh
    kubectl get storageclass scw-bssd-multi-zone -o yaml | grep allowVolumeExpansion
    ```
2. If `allowVolumeExpansion: true` is present, you can modify your PVC:
    ```yaml
    spec:
      resources:
        requests:
          storage: 20Gi
    ```
3. Then apply the change:
    ```sh
    kubectl apply -f my-app-pvc.yaml
    ```
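
Alternatively, if you prefer not to edit the manifest file, the same resize can be requested in place with a patch; this sketch assumes the claim name used throughout this tutorial:

```sh
# Request the new size directly on the live PersistentVolumeClaim
kubectl patch pvc my-app-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```
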

## Troubleshooting

- **Persistent Volume not bound:** Ensure that the `StorageClass` and zone settings are correct.
- **Application not scaling:** Check the StatefulSet configuration and PVC settings.
- **Data integrity issues:** Verify the integrity of your backups before proceeding with any changes.
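
When a claim stays `Pending`, the events at the bottom of the `describe` output usually name the cause, such as a missing `StorageClass` or a zone mismatch. As a starting point (the PV name is a placeholder):

```sh
# Inspect binding and attachment events for the claim and its volume
kubectl describe pvc my-app-pvc
kubectl describe pv <pv-name>
```
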

## Conclusion

You have successfully migrated your Persistent Volumes to a multi-zone Kapsule setup. Your `StatefulSet` is now distributed across multiple zones, improving resilience and availability. For further optimization, consider implementing [Pod anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) rules to ensure an even distribution of workloads across zones.
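
As a sketch of such a rule, the following fragment, placed under the pod template's `spec` in the `StatefulSet` (the `app: my-app` label is illustrative and must match your pod labels), asks the scheduler to avoid packing replicas into a single zone:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app
          # Prefer spreading replicas across zones
          topologyKey: topology.kubernetes.io/zone
```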