* `operator: true` ensures that Portworx is deployed as an operator.
* `licenses:` requires a valid license activation code that includes DR (the trial license does not include DR).
* `DR_BUCKET` is the name of the bucket created earlier.

You will need to request a valid activation code if you do not have a DR license. If the `$DR_BUCKET` bucket does not exist, it will be created automatically in us-east-1. If you create it manually, it can be in any region.

## Deploy the template
It is a best practice to include your initials or name in the name of the deployment, to make it easier for others to see who owns the deployment in the AWS console.

2. Open a browser tab for each cluster and go to `http://<ip1>:30333` and `http://<ip2>:30333` (don't worry, they will not work at this stage).

3. Connect to the deployment in two terminals, and in the second one connect to the second master:

```
ssh master-2
```

4. In each cluster, show they are independent clusters:

```
kubectl get nodes
pxctl status
```

5. In cluster 1, show the ClusterPair object:

```
kubectl get clusterpair -n kube-system
storkctl get clusterpair -n kube-system
kubectl describe clusterpair -n kube-system
kubectl edit clusterpair -n kube-system
:set nowrap
```

`storkctl get clusterpair` gives us a human-readable summary of the ClusterPair's status. Talk about how this means that Kubernetes cluster 1 can authenticate with Kubernetes cluster 2, and Portworx cluster 1 can authenticate with Portworx cluster 2. With both of these in place, we can migrate both objects and volumes from cluster 1 to cluster 2 - not just an application or its data, but both at the same time. Furthermore, we can migrate an entire namespace or list of namespaces, so we can migrate an entire application stack. `kubectl describe clusterpair` gives us additional debugging information if the pairing is unsuccessful.

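A ClusterPair is normally generated with `storkctl generate clusterpair` rather than written by hand. As a rough, hedged sketch (the object name and field values below are assumptions, and the exact schema varies by Stork version), it looks something like:

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ClusterPair
metadata:
  name: remotecluster        # assumed name
  namespace: kube-system
spec:
  config: {}                 # kubeconfig-style connection details for the destination Kubernetes cluster
  options:
    ip: <destination-node-ip>
    port: "9001"
    token: <destination-cluster-token>   # obtained with pxctl cluster token show
```

The `config` section is what lets Kubernetes cluster 1 authenticate with cluster 2, and the `options` token is what lets Portworx cluster 1 authenticate with Portworx cluster 2.
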
6. In cluster 1, show that we have a SchedulePolicy and MigrationSchedule:

```
kubectl get schedulepolicy
kubectl get migrationschedule -n kube-system
storkctl get schedulepolicy
storkctl get migrationschedule -n kube-system
```

`storkctl get migrationschedule` gives us a more human-readable output.

7. Show the SchedulePolicy and MigrationSchedule YAML:

```
cat /assets/async-dr.yaml
```

Mention that the SchedulePolicy is globally-scoped, but the MigrationSchedule is in the `kube-system` namespace which means we can use it to migrate any namespace. If we were to create it in any other namespace, we would only be able to use it to migrate that namespace.
In the MigrationSchedule, note three main parameters:

* `clusterPair` - a reference to the ClusterPair object we just saw - defines **where** we are migrating
* `namespaces` - an array of namespaces to be migrated - defines **what** we are migrating
* `schedulePolicyName` - a reference to the SchedulePolicy - defines **when** we are migrating

Also mention the `startApplications` parameter - when set to `false`, the migration will patch the application specs, e.g. Deployments, StatefulSets and operator-based applications, to prevent them from starting on the target cluster. However, they will be annotated with the original number of application replicas, as we shall see shortly.

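As an illustrative sketch of how these parameters fit together (the names and interval here are assumptions, not the actual contents of `/assets/async-dr.yaml`):

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: async-dr-policy              # assumed name
policy:
  interval:
    intervalMinutes: 1               # one Migration every 60 seconds
---
apiVersion: stork.libopenstorage.org/v1alpha1
kind: MigrationSchedule
metadata:
  name: petclinic-migration          # assumed name
  namespace: kube-system             # created here so it can migrate any namespace
spec:
  template:
    spec:
      clusterPair: remotecluster     # where - assumed ClusterPair name
      namespaces:
      - petclinic                    # what
      includeResources: true
      includeVolumes: true
      startApplications: false       # keep migrated apps scaled to 0 on the target
  schedulePolicyName: async-dr-policy   # when
```
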
8. In each cluster, show there are no apps running yet:

```
kubectl get ns
```

9. In cluster 1, provision the Petclinic app:

```
kubectl apply -f /assets/petclinic/petclinic.yml
```

Talk about how it is a stateless Java app backed by a MySQL database, which itself is backed by a Portworx volume:

```
kubectl get pvc,deploy -n petclinic
```

Wait for it to be ready (it takes a minute or so):

```
kubectl get pod -n petclinic
```

10. Refresh the first tab in your browser. Click Find Owners, then Add Owner, populate the form with some dummy data, and click Add Owner. Then click Find Owners, then Find Owner, and show that your entry appears at the bottom of the list.

11. Refer back to the MigrationSchedule and how creating it triggers the creation of a Migration object every 60 seconds (in our case). Show the Migration objects:

```
kubectl get migrations -n kube-system
storkctl get migrations -n kube-system
```

Do not continue until at least one Migration has started and succeeded since creating your dummy data.
12. We will now fail over the application to the second cluster. Rather than actually failing cluster 1, it is recommended to simply suspend the MigrationSchedule to prevent any further migrations from taking place:

```
storkctl suspend migrationschedule -n kube-system
```

Check there are no migrations currently in progress, or wait until the last one has completed:

```
storkctl get migrations -n kube-system
```

13. On cluster 2, show that the namespace and its contents have been migrated:

```
kubectl get all,pvc -n petclinic
```

Note that the Deployments have been migrated, but they are scaled down to 0. Take a look at them:

```
kubectl edit deploy -n petclinic
```

Show that the `replicas` parameter has been set to `0` as part of the migration. Show that the original number of replicas has been saved in the `migrationReplicas` annotation.

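What you should expect to see in each migrated Deployment is roughly the following (an abridged sketch - the exact annotation prefix may vary by Stork version, so look for `migrationReplicas`):

```
metadata:
  annotations:
    stork.libopenstorage.org/migrationReplicas: "1"   # original replica count, saved by the migration
spec:
  replicas: 0        # scaled down because the apps were migrated without being started
```
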
14. On cluster 2, scale up the application:

```
storkctl activate migration -n petclinic
```

Talk about how `storkctl` will find all the apps, i.e. Deployments, StatefulSets and operator-based applications, look for those annotations, and scale everything back up to the original replica counts.

15. On cluster 2, show the pods starting:

```
kubectl get pod -n petclinic
```

It will take another minute or so to start.

16. Refresh the browser tab for the second cluster. Click Find Owners, then Find Owner, and show the data is still there.

Deploys a cluster with Portworx, MinIO and Petclinic

# Supported Environments

* Any

# Requirements

## Deploy the template

It is a best practice to include your initials or name in the name of the deployment, to make it easier for others to see who owns the deployment in the AWS console.

2. Open a browser tab and go to `http://<ip>:30333`.

3. Connect to the deployment in a terminal.

4. Show it is a Kubernetes and Portworx cluster:

```
kubectl get nodes
pxctl status
```

5. Go to your browser. Click Find Owners, then Add Owner, populate the form with some dummy data, and click Add Owner. Then click Find Owners, then Find Owner, and show that your entry appears at the bottom of the list.

6. In the terminal, show the BackupLocation YAML that is to be applied:

```
cat /assets/backup-restore/backupLocation.yml
```

Mention that the BackupLocation is in the `petclinic` namespace, which means we can use it to back up only that namespace. If we were to create it in the `kube-system` namespace, we would be able to back up any namespace. Talk about it being an S3 target with standard S3 parameters. Note the `sync: true` parameter and say we will come back to it later.

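A hedged sketch of what such a BackupLocation might look like (the name, bucket and endpoint below are assumptions - see `/assets/backup-restore/backupLocation.yml` for the real values):

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: BackupLocation
metadata:
  name: minio                    # assumed name
  namespace: petclinic           # scoped to back up only this namespace
location:
  type: s3
  sync: true                     # we will come back to this parameter later
  path: petclinic-backups        # assumed bucket name
  s3Config:
    accessKeyID: <access-key>
    secretAccessKey: <secret-key>
    endpoint: http://minio.minio.svc:9000   # assumed in-cluster MinIO endpoint
    region: us-east-1
```
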
8. In the terminal, show the ApplicationBackup YAML that is to be applied:

```
cat /assets/backup-restore/applicationBackup.yml
```

Mention that the ApplicationBackup is in the `petclinic` namespace, which means we can use it to back up only that namespace. If we were to create it in the `kube-system` namespace, we would be able to back up any namespace.

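A hedged sketch of an ApplicationBackup (names are assumptions - see `/assets/backup-restore/applicationBackup.yml` for the real values):

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: petclinic-backup         # assumed name
  namespace: petclinic
spec:
  backupLocation: minio          # assumed BackupLocation name
  namespaces:
  - petclinic
```
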
Watch for the ApplicationBackup objects to be recreated automatically:

```
watch storkctl get applicationbackups -n petclinic
```

Go back to the `sync: true` parameter we discussed earlier. This triggers Stork to communicate with the S3 bucket defined in the BackupLocation and pull the metadata associated with the backup we took earlier. Once it has retrieved that metadata, Stork will create an ApplicationBackup object to represent it. Wait for that object to appear in the output, then copy its name to the clipboard.

13. Edit `/assets/backup-restore/applicationRestore.yml`. Talk about the `backupLocation` parameter referencing the BackupLocation we just created. Paste the name of the ApplicationBackup object we just found into the `backupName` parameter. Save and exit.
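For orientation, an ApplicationRestore typically looks roughly like this (a hedged sketch; the names are assumptions and the real file is `/assets/backup-restore/applicationRestore.yml`):

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationRestore
metadata:
  name: petclinic-restore        # assumed name
  namespace: petclinic
spec:
  backupLocation: minio          # the BackupLocation created earlier (assumed name)
  backupName: <paste the ApplicationBackup name here>
```
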