Commit 3f88583

Merge pull request #640 from PureStorage-OpenConnect/peakdocs

Peakdocs

2 parents c494e38 + 8e7cb9a commit 3f88583

File tree: 6 files changed, +411 -38 lines changed
assets/app-migration.yml

Lines changed: 1 addition & 6 deletions

@@ -4,13 +4,8 @@ metadata:
   name: appmigration
   namespace: kube-system
 spec:
-  # This should be the name of the cluster pair created above
   clusterPair: remotecluster
-  # If set to false this will migrate only the Portworx volumes. No PVCs, apps, etc will be migrated
   includeResources: true
-  # If set to false, the deployments and stateful set replicas will be set to 0 on the destination.
-  # There will be an annotation with "stork.openstorage.org/migrationReplicas" on the destination to store the replica count from the source.
   startApplications: true
-  # List of namespaces to migrate
   namespaces:
-  - default
+  - petclinic

docs/templates/async-dr/README.md

Lines changed: 147 additions & 12 deletions

@@ -1,6 +1,7 @@
+<!-- If you update this, you probably also want to update the Migration document -->
 # Async-DR
 
-Deploys 2 clusters with Portworx, sets up and configures a cluster pairing, configures an async DR schedule with a loadbalancer in front of the setup.
+Deploys 2 clusters with Portworx, sets up and configures a ClusterPair, and configures an async DR schedule with a loadbalancer in front of the setup.
 
 # Supported Environments
 
@@ -12,32 +13,166 @@ No other environments are currently supported.
 
 ## Create a bucket for use by DR
 
-You will need to create an S3 bucket for use by the DR migrations. You will add the name of this bucket to defaults.yml (per the below instructions).
+Async-DR requires two things to be configured:
 
-## Update defaults.yml
+* An S3 bucket
+* A DR licence (the trial licence does not include DR)
 
-Update your `defaults.yml` with the following:
+These can both be specified in `defaults.yml`:
 
 ```
 env:
-  operator: true
   licenses: "XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX"
   DR_BUCKET: "<YOUR BUCKET NAME>"
 ```
 
-* `operator: true` ensures that Portworx is deployed as an operator.
-* `licenses:` requires a valid license activation code that includes DR (the trial license does not include DR!!).
-* `DR_BUCKET` is the name of the bucket created earlier.
-
+You will need to request a valid activation code if you do not have a DR licence. If the `$DR_BUCKET` bucket does not exist, it will be created automatically in us-east-1. If you create it manually, it can be in any region.
 
 ## Deploy the template
 
 It is a best practice to use your initials or name as part of the name of the deployment in order to make it easier for others to see the ownership of the deployment in the AWS console.
 
 ```
-px-deploy create -C aws -t async-dr -n <my-deployment-name>
+px-deploy create -t async-dr -n <my-deployment-name>
+```
+
+# Demo Workflow
+
+1. Obtain the external IPs for each cluster:
+
+```
+px-deploy status -n <my-deployment-name>
+```
+
+2. Open a browser tab for each cluster and go to http://<ip1>:30333 and http://<ip2>:30333 (don't worry, they will not work at this stage).
+
+3. Connect to the deployment in two terminals, and in the second one connect to the second master:
+
+```
+ssh master-2
+```
+
+4. In each cluster, show they are independent clusters:
+
+```
+kubectl get nodes
+pxctl status
+```
+
+5. In cluster 1, show the ClusterPair object:
+
+```
+kubectl get clusterpair -n kube-system
+storkctl get clusterpair -n kube-system
+kubectl describe clusterpair -n kube-system
+kubectl edit clusterpair -n kube-system
+:set nowrap
+```
+
+`storkctl get clusterpair` gives us a human-readable output of the status of the ClusterPair. Talk about how this means that Kubernetes cluster 1 can authenticate with Kubernetes cluster 2, and Portworx cluster 1 can authenticate with Portworx cluster 2, and that with both of these in place we are able to migrate both objects and volumes from cluster 1 to cluster 2. This means we can migrate not just an application or its data, but both at the same time. Furthermore, we can migrate an entire namespace or list of namespaces, so we can migrate an entire application stack. `kubectl describe clusterpair` will give us additional debugging information if the pairing is unsuccessful.
+
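For context when walking through the `kubectl edit` output: a ClusterPair generally looks something like the following. This is a hedged sketch only (all values illustrative); the real object carries kubeconfig-style data and a Portworx token for the destination cluster.

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ClusterPair
metadata:
  name: remotecluster
  namespace: kube-system
spec:
  # kubeconfig-style connection details for the destination Kubernetes cluster
  config:
    clusters: {}
    users: {}
    current-context: ""
  # connection details for the destination Portworx cluster
  options:
    ip: "<destination node IP>"
    port: "9001"
    token: "<destination Portworx cluster token>"
```

The `config` section is what lets Kubernetes cluster 1 authenticate with cluster 2, and the `options` token is what lets Portworx cluster 1 authenticate with Portworx cluster 2.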
+6. In cluster 1, show that we have a SchedulePolicy and a MigrationSchedule:
+
+```
+kubectl get schedulepolicy
+kubectl get migrationschedule -n kube-system
+storkctl get schedulepolicy
+storkctl get migrationschedule -n kube-system
+```
+
+`storkctl get migrationschedule` gives us a more human-readable output.
+
+7. Show the SchedulePolicy and MigrationSchedule YAML:
+
+```
+cat /assets/async-dr.yaml
+```
+
+Mention that the SchedulePolicy is globally scoped, but the MigrationSchedule is in the `kube-system` namespace, which means we can use it to migrate any namespace. If we were to create it in any other namespace, we would only be able to use it to migrate that namespace.
+
+In the MigrationSchedule, note three main parameters:
+
+* `clusterPair` - a reference to the ClusterPair object we just saw - defines **where** we are migrating
+* `namespaces` - an array of namespaces to be migrated - defines **what** we are migrating
+* `schedulePolicyName` - a reference to the SchedulePolicy - defines **when** we are migrating
+
+Also mention the `startApplications` parameter - when set to `false`, this will patch the application specs, eg Deployments, StatefulSets and operator-based applications, to prevent them from starting on the target cluster. However, they will be annotated with the original number of application replicas, as we shall see shortly.
+
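If you do not have the asset file to hand, the pair of objects is shaped roughly as follows (a sketch with illustrative names and a 1-minute interval; the real spec is in `/assets/async-dr.yaml`):

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: asyncpolicy              # illustrative name
policy:
  interval:
    intervalMinutes: 1           # trigger a Migration every minute
---
apiVersion: stork.libopenstorage.org/v1alpha1
kind: MigrationSchedule
metadata:
  name: asyncmigration           # illustrative name
  namespace: kube-system         # kube-system scope = can migrate any namespace
spec:
  template:
    spec:
      clusterPair: remotecluster     # where
      namespaces:                    # what
      - petclinic
      includeResources: true
      startApplications: false
  schedulePolicyName: asyncpolicy    # when
```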
+8. In each cluster, show there are no apps running yet:
+
+```
+kubectl get ns
+```
+
+9. In cluster 1, provision the Petclinic app:
+
+```
+kubectl apply -f /assets/petclinic/petclinic.yml
+```
+
+Talk about how it is a stateless Java app backed by a MySQL database, which itself is backed by a Portworx volume:
+
+```
+kubectl get pvc,deploy -n petclinic
+```
+
+Wait for it to be ready (it takes a minute or so):
+
+```
+kubectl get pod -n petclinic
+```
+
+10. Refresh the first tab in your browser. Click Find Owners, then Add Owner, populate the form with some dummy data and click Add Owner. Click Find Owners and then Find Owner, and show that the entry is at the bottom of the list.
+
+11. Refer back to the MigrationSchedule and how creating it will trigger the creation of a Migration object every 60 seconds (in our case). Show the Migration objects:
+
+```
+kubectl get migrations -n kube-system
+storkctl get migrations -n kube-system
+```
+
+Do not continue until at least one Migration has started and succeeded since creating your dummy data.
+
+12. We will now fail over the application to the second cluster. Rather than actually failing cluster 1, it is recommended to just suspend the MigrationSchedule to prevent any further migrations from taking place:
+
+```
+storkctl suspend migrationschedule -n kube-system
+```
+
+Check there are no migrations currently in progress, or wait until the last one has completed:
+
+```
+storkctl get migrations -n kube-system
+```
+
+13. On cluster 2, show that the namespace and its contents have been migrated:
+
+```
+kubectl get all,pvc -n petclinic
+```
+
+Note that the Deployments have been migrated, but they are scaled down to 0. Take a look at them:
+
+```
+kubectl edit deploy -n petclinic
+```
+
+Show that the `replicas` parameter has been set to `0` as part of the migration. Show that the original number of replicas has been saved in the `migrationReplicas` annotation.
+
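In the editor, the relevant fragment of each migrated Deployment looks roughly like this (replica count illustrative; the annotation key is the one Stork uses to store the source replica count):

```
# Fragment of a migrated Deployment on the destination cluster
metadata:
  annotations:
    stork.openstorage.org/migrationReplicas: "1"  # original replica count from the source
spec:
  replicas: 0   # scaled down by the migration until the app is activated
```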
+14. On cluster 2, scale up the application:
+
+```
+storkctl activate migration -n petclinic
+```
+
+Talk about how `storkctl` is going to find all the apps, ie Deployments, StatefulSets and operator-based applications, look for those annotations and then scale everything back up to its original replica count.
+
+15. On cluster 2, show the pods starting:
+
+```
+kubectl get pod -n petclinic
 ```
 
-## Sample Demo Workflow
+It will take another minute or so to start.
 
-TBD
+16. Refresh the browser tab for the second cluster. Click Find Owners and Find Owner, and show the data is still there.

Lines changed: 126 additions & 0 deletions
@@ -0,0 +1,126 @@
+# Stork Backups
+
+Deploys a cluster with Portworx, MinIO and Petclinic.
+
+# Supported Environments
+
+* Any
+
+# Requirements
+
+## Deploy the template
+
+It is a best practice to use your initials or name as part of the name of the deployment in order to make it easier for others to see the ownership of the deployment in the AWS console.
+
+```
+px-deploy create -t backup-restore -n <my-deployment-name>
+```
+
+# Demo Workflow
+
+1. Obtain the external IP for the cluster:
+
+```
+px-deploy status -n <my-deployment-name>
+```
+
+2. Open a browser tab and go to http://<ip>:30333.
+
+3. Connect to the deployment in a terminal.
+
+4. Show it is a Kubernetes and Portworx cluster:
+
+```
+kubectl get nodes
+pxctl status
+```
+
+5. Go to your browser. Click Find Owners, then Add Owner, populate the form with some dummy data and click Add Owner. Click Find Owners and then Find Owner, and show that the entry is at the bottom of the list.
+
+6. In the terminal, show the BackupLocation YAML that is to be applied:
+
+```
+cat /assets/backup-restore/backupLocation.yml
+```
+
+Mention that the BackupLocation is in the `petclinic` namespace, which means we can use it to back up only that namespace. If we were to create it in the `kube-system` namespace, we would be able to back up any namespace. Talk about it being an S3 target with standard S3 parameters. Note the `sync: true` parameter and say we will come back to it later.
+
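As a rough guide to what is in that file, a MinIO-backed BackupLocation looks something like this. All names, credentials and the endpoint below are illustrative; the real values are in `/assets/backup-restore/backupLocation.yml`.

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: BackupLocation
metadata:
  name: backuplocation           # illustrative name
  namespace: petclinic           # scoped to backups of this namespace
location:
  type: s3
  sync: true                     # pull existing backup metadata from the bucket
  path: "backup-bucket"          # bucket name (illustrative)
  s3Config:
    accessKeyID: "minio"                   # illustrative credentials
    secretAccessKey: "minio123"
    endpoint: "http://minio.example:9000"  # illustrative MinIO endpoint
    region: us-east-1
    disableSSL: true
```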
+7. Apply the BackupLocation object:
+
+```
+kubectl apply -f /assets/backup-restore/backupLocation.yml
+```
+
+8. In the terminal, show the ApplicationBackup YAML that is to be applied:
+
+```
+cat /assets/backup-restore/applicationBackup.yml
+```
+
+Mention that the ApplicationBackup is in the `petclinic` namespace, which means we can use it to back up only that namespace. If we were to create it in the `kube-system` namespace, we would be able to back up any namespace.
+
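The ApplicationBackup itself is a small object; it is shaped roughly like this (a sketch with illustrative names; the real spec is in `/assets/backup-restore/applicationBackup.yml`):

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: petclinic-backup          # illustrative name
  namespace: petclinic
spec:
  backupLocation: backuplocation  # references the BackupLocation in this namespace
  namespaces:                     # what to back up
  - petclinic
```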
+9. Apply the ApplicationBackup object:
+
+```
+kubectl apply -f /assets/backup-restore/applicationBackup.yml
+```
+
+10. Show the ApplicationBackup object:
+
+```
+kubectl get applicationbackup -n petclinic
+storkctl get applicationbackup -n petclinic
+```
+
+Do not continue until the ApplicationBackup has succeeded.
+
+11. Delete the `petclinic` namespace:
+
+```
+kubectl delete ns petclinic
+```
+
+Refresh the browser to prove the application no longer exists.
+
+12. Recreate the `petclinic` namespace, along with the BackupLocation object:
+
+```
+kubectl create ns petclinic
+kubectl apply -f /assets/backup-restore/backupLocation.yml
+```
+
+Watch for the ApplicationBackup objects to be recreated automatically:
+
+```
+watch storkctl get applicationbackups -n petclinic
+```
+
+Go back to the `sync: true` parameter we discussed earlier. This triggers Stork to communicate with the S3 bucket defined in the BackupLocation and pull the metadata associated with the backup we took earlier. Once it has retrieved that metadata, it will create an ApplicationBackup object to abstract it. Wait for that object to appear in the output. Copy the name of the object to the clipboard.
+
+13. Edit `/assets/backup-restore/applicationRestore.yml`. Talk about the `backupLocation` parameter referencing the BackupLocation we just created. Paste the name of the ApplicationBackup object we just found into the `backupName` parameter. Save and exit.
+
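After editing, the ApplicationRestore should be shaped roughly like this (a sketch with illustrative names; the real file is `/assets/backup-restore/applicationRestore.yml`):

```
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationRestore
metadata:
  name: petclinic-restore         # illustrative name
  namespace: petclinic
spec:
  backupLocation: backuplocation  # the BackupLocation we recreated
  backupName: <synced ApplicationBackup name>  # paste the copied name here
```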
+14. Apply the ApplicationRestore object:
+
+```
+kubectl apply -f /assets/backup-restore/applicationRestore.yml
+```
+
+15. Monitor the status of the restore:
+
+```
+watch storkctl get applicationrestores -n petclinic
+```
+
+16. Show the application has been restored:
+
+```
+kubectl get all,pvc -n petclinic
+```
+
+17. Show the pods starting:
+
+```
+kubectl get pod -n petclinic
+```
+
+18. Refresh the browser tab. Click Find Owners and Find Owner, and show the data is still there.
