Commit bd610ae

Review updates for StatefulSet StartOrdinal blog post
1 parent 0043f19 commit bd610ae

1 file changed (+94, −92 lines)


content/en/blog/_posts/2022-12-16-statefulset-migration.md

Lines changed: 94 additions & 92 deletions
@@ -1,7 +1,7 @@
 ---
 layout: blog
-title: "Kubernetes 1.26: StatefulSet Migration"
-date: 2022-12-16
+title: "Kubernetes 1.26: StatefulSet Start Ordinal Simplifies Migration"
+date: 2023-01-03
 slug: statefulset-migration
 ---

@@ -16,13 +16,13 @@ used.
 ## Background
 
 StatefulSets ordinals provide sequential identities for pod replicas. When using
-[OrderedReady Pod Management](/docs/tutorials/stateful-application/basic-stateful-set/#orderedready-pod-management),
+[`OrderedReady` Pod management](/docs/tutorials/stateful-application/basic-stateful-set/#orderedready-pod-management),
 Pods are created from ordinal index `0` up to `N-1`.
 
 With Kubernetes today, orchestrating a StatefulSet migration across clusters is
 challenging. Backup and restore solutions exist, but these require the
 application to be scaled down to zero replicas prior to migration. In today's
-fully connected world, planned downtime and unavailability may not allow you to
+fully connected world, even planned application downtime may not allow you to
 meet your business goals. You could use
 [Cascading Delete](/docs/tutorials/stateful-application/basic-stateful-set/#cascading-delete)
 or
@@ -31,8 +31,9 @@ to migrate individual pods, however this is error prone and tedious to manage.
 You lose the self-healing benefit of the StatefulSet controller when your Pods
 fail or are evicted.
 
-This feature enables a StatefulSet to be responsible for a range of ordinals
-within a logical range of `[0, N)`. With it, you can scale down a range
+Kubernetes v1.26 enables a StatefulSet to be responsible for a range of ordinals
+within a half-open interval `[0, N)` (the ordinals 0, 1, ... N-1).
+With it, you can scale down a range
 (`[0, k)`) in a source cluster, and scale up the complementary range (`[k, N)`)
 in a destination cluster, while maintaining application availability. This
 enables you to retain *at most one* semantics and
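
For readers reviewing this hunk: the `[k, N)` range described above is expressed through the StatefulSet's `.spec.ordinals.start` field, referenced in the context line of the next hunk. A minimal sketch of a destination-side manifest, assuming the `StatefulSetStartOrdinal` feature gate is enabled; the name, image, and counts below are illustrative and are not taken from this commit:

```
# Hypothetical destination-cluster StatefulSet: it owns ordinals 5..5, so the
# only Pod it creates is web-5. Requires the StatefulSetStartOrdinal feature
# gate (alpha in Kubernetes v1.26).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                # illustrative name
spec:
  serviceName: web
  replicas: 1              # size of the ordinal range owned by this cluster
  ordinals:
    start: 5               # first ordinal managed here
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx     # placeholder workload
```

With `replicas: 1` and `ordinals.start: 5`, the controller manages only the Pod named `web-5`, which is the same shape as the patch applied in step 8 of the demo below.
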
@@ -63,7 +64,7 @@ StatefulSet with a customized `.spec.ordinals.start`.
 ## Try it for yourself
 
 In this demo, you'll use the `StatefulSetStartOrdinal` feature to migrate a
-StatefulSet from one cluster to another. For this demo, the
+StatefulSet from one Kubernetes cluster to another. For this demo, the
 [redis-cluster](https://github.com/bitnami/charts/tree/main/bitnami/redis-cluster)
 Bitnami Helm chart is used to install Redis.
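
One of the prerequisites at the top of the next hunk is that `StatefulSetStartOrdinal` feature gate support is enabled on both clusters. As one hedged example (the commit does not say how the demo clusters are created), a kind cluster could enable the gate through its cluster configuration:

```
# kind-config.yaml (hypothetical file name)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  StatefulSetStartOrdinal: true   # alpha feature gate in Kubernetes v1.26
```

Each cluster would then be created with something like `kind create cluster --name source --config kind-config.yaml` (and again with `--name destination`). Clusters managed by other tooling enable the gate through their own component flags, typically `--feature-gates=StatefulSetStartOrdinal=true`.
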

@@ -77,124 +78,125 @@ Pre-requisites: Two clusters named `source` and `destination`.
   support is enabled
 * The same default `StorageClass` is installed on both clusters. This
   `StorageClass` should provision underlying storage that is accessible from
-  both clusters
+  both clusters.
 
 1. Create a demo namespace on both clusters.
 
-```
-kubectl create ns kep-3335
-```
+   ```
+   kubectl create ns kep-3335
+   ```
 
 2. Deploy a `ServiceExport` on both clusters.
 
-```
-kind: ServiceExport
-apiVersion: multicluster.x-k8s.io/v1alpha1
-metadata:
-  namespace: kep-3335
-  name: redis-redis-cluster-headless
-```
+   ```
+   kind: ServiceExport
+   apiVersion: multicluster.x-k8s.io/v1alpha1
+   metadata:
+     namespace: kep-3335
+     name: redis-redis-cluster-headless
+   ```
 
 3. Deploy a Redis cluster on `source`.
 
-```
-helm repo add bitnami https://charts.bitnami.com/bitnami
-helm install redis --namespace kep-3335 \
-  bitnami/redis-cluster \
-  --set persistence.size=1Gi
-```
+   ```
+   helm repo add bitnami https://charts.bitnami.com/bitnami
+   helm install redis --namespace kep-3335 \
+     bitnami/redis-cluster \
+     --set persistence.size=1Gi
+   ```
 
 4. On `source`, check the replication status.
 
-```
-kubectl exec -it redis-redis-cluster-0 -- /bin/bash -c \
-  "redis-cli -c -h redis-redis-cluster -a $(kubectl get secret redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d) CLUSTER NODES;"
-```
+   ```
+   kubectl exec -it redis-redis-cluster-0 -- /bin/bash -c \
+     "redis-cli -c -h redis-redis-cluster -a $(kubectl get secret redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d) CLUSTER NODES;"
+   ```
 
-```
-2ce30362c188aabc06f3eee5d92892d95b1da5c3 10.104.0.14:6379@16379 myself,master - 0 1669764411000 3 connected 10923-16383
-7743661f60b6b17b5c71d083260419588b4f2451 10.104.0.16:6379@16379 slave 2ce30362c188aabc06f3eee5d92892d95b1da5c3 0 1669764410000 3 connected
-961f35e37c4eea507cfe12f96e3bfd694b9c21d4 10.104.0.18:6379@16379 slave a8765caed08f3e185cef22bd09edf409dc2bcc61 0 1669764411000 1 connected
-7136e37d8864db983f334b85d2b094be47c830e5 10.104.0.15:6379@16379 slave 2cff613d763b22c180cd40668da8e452edef3fc8 0 1669764412595 2 connected
-a8765caed08f3e185cef22bd09edf409dc2bcc61 10.104.0.19:6379@16379 master - 0 1669764411592 1 connected 0-5460
-2cff613d763b22c180cd40668da8e452edef3fc8 10.104.0.17:6379@16379 master - 0 1669764410000 2 connected 5461-10922
-```
+   ```
+   2ce30362c188aabc06f3eee5d92892d95b1da5c3 10.104.0.14:6379@16379 myself,master - 0 1669764411000 3 connected 10923-16383
+   7743661f60b6b17b5c71d083260419588b4f2451 10.104.0.16:6379@16379 slave 2ce30362c188aabc06f3eee5d92892d95b1da5c3 0 1669764410000 3 connected
+   961f35e37c4eea507cfe12f96e3bfd694b9c21d4 10.104.0.18:6379@16379 slave a8765caed08f3e185cef22bd09edf409dc2bcc61 0 1669764411000 1 connected
+   7136e37d8864db983f334b85d2b094be47c830e5 10.104.0.15:6379@16379 slave 2cff613d763b22c180cd40668da8e452edef3fc8 0 1669764412595 2 connected
+   a8765caed08f3e185cef22bd09edf409dc2bcc61 10.104.0.19:6379@16379 master - 0 1669764411592 1 connected 0-5460
+   2cff613d763b22c180cd40668da8e452edef3fc8 10.104.0.17:6379@16379 master - 0 1669764410000 2 connected 5461-10922
+   ```

 5. On `destination`, deploy Redis with zero replicas.
 
-```
-helm install redis --namespace kep-3335 \
-  bitnami/redis-cluster \
-  --set persistence.size=1Gi \
-  --set cluster.nodes=0 \
-  --set redis.extraEnvVars\[0\].name=REDIS_NODES,redis.extraEnvVars\[0\].value="redis-redis-cluster-headless.kep-3335.svc.cluster.local" \
-  --set existingSecret=redis-redis-cluster
-```
+   ```
+   helm install redis --namespace kep-3335 \
+     bitnami/redis-cluster \
+     --set persistence.size=1Gi \
+     --set cluster.nodes=0 \
+     --set redis.extraEnvVars\[0\].name=REDIS_NODES,redis.extraEnvVars\[0\].value="redis-redis-cluster-headless.kep-3335.svc.cluster.local" \
+     --set existingSecret=redis-redis-cluster
+   ```
 
 6. Scale down replica `redis-redis-cluster-5` in the source cluster.
 
-```
-kubectl patch sts redis-redis-cluster -p '{"spec": {"replicas": 5}}'
-```
+   ```
+   kubectl patch sts redis-redis-cluster -p '{"spec": {"replicas": 5}}'
+   ```
 
 7. Migrate dependencies from `source` to `destination`.
 
-The following commands copy resources from `source` to `destination`. Details
-that are not relevant in the `destination` cluster are removed (e.g. `uid`,
-`resourceVersion`, `status`).
+   The following commands copy resources from `source` to `destination`. Details
+   that are not relevant in the `destination` cluster are removed (e.g. `uid`,
+   `resourceVersion`, `status`).
 
-Source Cluster
+   #### Source Cluster
 
-Note: If using a `StorageClass` with `reclaimPolicy: Delete` configured, you
-should patch the PVs in `source` with `reclaimPolicy: Retain` prior to
-deletion to retain the underlying storage used in `destination`. See
-[Change the Reclaim Policy of a PersistentVolume](/docs/tasks/administer-cluster/change-pv-reclaim-policy/)
-for more details.
+   Note: If using a `StorageClass` with `reclaimPolicy: Delete` configured, you
+   should patch the PVs in `source` with `reclaimPolicy: Retain` prior to
+   deletion to retain the underlying storage used in `destination`. See
+   [Change the Reclaim Policy of a PersistentVolume](/docs/tasks/administer-cluster/change-pv-reclaim-policy/)
+   for more details.
 
-```
-kubectl get pvc redis-data-redis-redis-cluster-5 -o yaml | yq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.annotations, .metadata.finalizers, .status)' > /tmp/pvc-redis-data-redis-redis-cluster-5.yaml
-kubectl get pv $(yq '.spec.volumeName' /tmp/pvc-redis-data-redis-redis-cluster-5.yaml) -o yaml | yq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.annotations, .metadata.finalizers, .spec.claimRef, .status)' > /tmp/pv-redis-data-redis-redis-cluster-5.yaml
-kubectl get secret redis-redis-cluster -o yaml | yq 'del(.metadata.uid, .metadata.resourceVersion)' > /tmp/secret-redis-redis-cluster.yaml
-```
+   ```
+   kubectl get pvc redis-data-redis-redis-cluster-5 -o yaml | yq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.annotations, .metadata.finalizers, .status)' > /tmp/pvc-redis-data-redis-redis-cluster-5.yaml
+   kubectl get pv $(yq '.spec.volumeName' /tmp/pvc-redis-data-redis-redis-cluster-5.yaml) -o yaml | yq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.annotations, .metadata.finalizers, .spec.claimRef, .status)' > /tmp/pv-redis-data-redis-redis-cluster-5.yaml
+   kubectl get secret redis-redis-cluster -o yaml | yq 'del(.metadata.uid, .metadata.resourceVersion)' > /tmp/secret-redis-redis-cluster.yaml
+   ```
 
-Destination Cluster
+   #### Destination Cluster
 
-Note: For the PV/PVC, this procedure only works if the underlying storage system
-that your PVs use can support being copied into `destination`. Storage
-that is associated with a specific node or topology may not be supported.
-Additionally, some storage systems may store additional metadata about
-volumes outside of a PV object, and may require a more specialized
-sequence to import a volume.
+   Note: For the PV/PVC, this procedure only works if the underlying storage system
+   that your PVs use can support being copied into `destination`. Storage
+   that is associated with a specific node or topology may not be supported.
+   Additionally, some storage systems may store additional metadata about
+   volumes outside of a PV object, and may require a more specialized
+   sequence to import a volume.
 
-```
-kubectl create -f /tmp/pv-redis-data-redis-redis-cluster-5.yaml
-kubectl create -f /tmp/pvc-redis-data-redis-redis-cluster-5.yaml
-kubectl create -f /tmp/secret-redis-redis-cluster.yaml
-```
+   ```
+   kubectl create -f /tmp/pv-redis-data-redis-redis-cluster-5.yaml
+   kubectl create -f /tmp/pvc-redis-data-redis-redis-cluster-5.yaml
+   kubectl create -f /tmp/secret-redis-redis-cluster.yaml
+   ```

-8. Scale up replica `redis-redis-cluster-5` in the destination cluster.
+8. Scale up replica `redis-redis-cluster-5` in the destination cluster, with a
+   start ordinal of 5:
 
-```
-kubectl patch sts redis-redis-cluster -p '{"spec": {"ordinals": {"start": 5}, "replicas": 1}}'
-```
+   ```
+   kubectl patch sts redis-redis-cluster -p '{"spec": {"ordinals": {"start": 5}, "replicas": 1}}'
+   ```
 
 9. On the source cluster, check the replication status.
 
-```
-kubectl exec -it redis-redis-cluster-0 -- /bin/bash -c \
-  "redis-cli -c -h redis-redis-cluster -a $(kubectl get secret redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d) CLUSTER NODES;"
-```
-
-You should see that the new replica's address has joined the Redis cluster.
-
-```
-2cff613d763b22c180cd40668da8e452edef3fc8 10.104.0.17:6379@16379 myself,master - 0 1669766684000 2 connected 5461-10922
-7136e37d8864db983f334b85d2b094be47c830e5 10.108.0.22:6379@16379 slave 2cff613d763b22c180cd40668da8e452edef3fc8 0 1669766685609 2 connected
-2ce30362c188aabc06f3eee5d92892d95b1da5c3 10.104.0.14:6379@16379 master - 0 1669766684000 3 connected 10923-16383
-961f35e37c4eea507cfe12f96e3bfd694b9c21d4 10.104.0.18:6379@16379 slave a8765caed08f3e185cef22bd09edf409dc2bcc61 0 1669766683600 1 connected
-a8765caed08f3e185cef22bd09edf409dc2bcc61 10.104.0.19:6379@16379 master - 0 1669766685000 1 connected 0-5460
-7743661f60b6b17b5c71d083260419588b4f2451 10.104.0.16:6379@16379 slave 2ce30362c188aabc06f3eee5d92892d95b1da5c3 0 1669766686613 3 connected
-```
+   ```
+   kubectl exec -it redis-redis-cluster-0 -- /bin/bash -c \
+     "redis-cli -c -h redis-redis-cluster -a $(kubectl get secret redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d) CLUSTER NODES;"
+   ```
+
+   You should see that the new replica's address has joined the Redis cluster.
+
+   ```
+   2cff613d763b22c180cd40668da8e452edef3fc8 10.104.0.17:6379@16379 myself,master - 0 1669766684000 2 connected 5461-10922
+   7136e37d8864db983f334b85d2b094be47c830e5 10.108.0.22:6379@16379 slave 2cff613d763b22c180cd40668da8e452edef3fc8 0 1669766685609 2 connected
+   2ce30362c188aabc06f3eee5d92892d95b1da5c3 10.104.0.14:6379@16379 master - 0 1669766684000 3 connected 10923-16383
+   961f35e37c4eea507cfe12f96e3bfd694b9c21d4 10.104.0.18:6379@16379 slave a8765caed08f3e185cef22bd09edf409dc2bcc61 0 1669766683600 1 connected
+   a8765caed08f3e185cef22bd09edf409dc2bcc61 10.104.0.19:6379@16379 master - 0 1669766685000 1 connected 0-5460
+   7743661f60b6b17b5c71d083260419588b4f2451 10.104.0.16:6379@16379 slave 2ce30362c188aabc06f3eee5d92892d95b1da5c3 0 1669766686613 3 connected
+   ```
 
 ## What's Next?
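
A companion to the note under "Source Cluster" in step 7 above: that note recommends switching the PVs to `reclaimPolicy: Retain` before anything is deleted in `source`. Following the linked task page, and reusing the `yq` lookup style the demo already uses, the patch could look like this sketch (the shell variable name is illustrative, not part of the commit):

```
# Find the PV bound to the copied PVC, then keep its underlying storage even
# after the PVC is deleted in the source cluster.
PV_NAME=$(yq '.spec.volumeName' /tmp/pvc-redis-data-redis-redis-cluster-5.yaml)
kubectl patch pv "$PV_NAME" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```
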
