Commit 456e0d7
Updated after some testing.
1 parent f08ef95 commit 456e0d7

1 file changed: 147 additions, 48 deletions

EKS/Trident-Protect/README.md
A simple sample for setting up your application to be backed up by Trident Protect.

## Prerequisites:
The following items should already be deployed before installing Trident Protect.

- EKS cluster. If you don't already have one, refer to the [FSx for NetApp ONTAP as persistent storage](https://github.com/NetApp/FSx-ONTAP-samples-scripts/tree/main/EKS/FSxN-as-PVC-for-EKS) GitHub repo for an example of how to not only deploy an EKS cluster, but also deploy an FSx for ONTAP file system with Trident installed and its backend and storage classes configured. If you follow it, it will provide the rest of the prerequisites listed below.

- Trident installed. Please refer to this [Trident installation documentation](https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy-helm.html) for the easiest way to do that.

- Configure a Trident backend. Refer to the NetApp Trident documentation for guidance on creating [TridentBackendConfig resources](https://docs.netapp.com/us-en/trident/trident-use/backend-kubectl.html).

- Install the Trident CSI drivers for SAN and NAS type storage. Refer to NetApp documentation for [installation instructions](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx-storage-backend.html).

- Configure a Trident StorageClass for SAN and/or NAS type storage. Refer to NetApp documentation for [instructions](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx-storageclass-pvc.html).

- kubectl installed. Refer to [this documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) on how to install it.

- helm installed. Refer to [this documentation](https://helm.sh/docs/intro/install/) on how to install it.
1414

## Preparation
The following are the steps required before you can use Trident Protect to back up your EKS application.

1. [Configure Trident Backend](#1-make-sure-trident-backend-is-configured-correctly)
1. [Configure Storage Classes for Trident storage types](#2-make-sure-trident-csi-drivers-for-nas-and-san-are-installed)
1. [Install the Kubernetes External Snapshotter](#3-install-the-kubernetes-external-snapshotter)
1. [Create VolumeSnapshotClasses for your storage provider](#4-create-volumesnapshotclasses-for-your-storage-provider)
1. [Install Trident Protect](#5-install-trident-protect)
1. [Create S3 Bucket](#6-create-private-s3-bucket-for-backup-data-and-metadata)
1. [Create Kubernetes secret for S3 bucket](#7-create-a-kubernetes-secret-for-the-s3-bucket)

### 1. Make sure Trident Backend is configured correctly

Run the following kubectl commands to check if TridentBackendConfigs for ontap-san and ontap-nas exist and are configured correctly. These commands should output the name of any matching TridentBackendConfigs:

#### SAN Backend
```bash
kubectl get tbc -n trident -o jsonpath='{.items[?(@.spec.storageDriverName=="ontap-san")].metadata.name}'
```

#### NAS Backend
```bash
kubectl get tbc -n trident -o jsonpath='{.items[?(@.spec.storageDriverName=="ontap-nas")].metadata.name}'
```

If no matching TridentBackendConfig resources are found, you may need to create one. Refer to the prerequisites section above for more information on how to do that.
### 2. Make Sure Trident CSI Drivers for NAS and SAN are Installed
Run the following kubectl commands to check that a StorageClass exists for both SAN and NAS type storage.

#### SAN StorageClass
This checks for StorageClasses in Kubernetes that use 'ontap-san' as their backend type. It outputs the name of any matching StorageClass:
```bash
kubectl get storageclass -o jsonpath='{.items[?(@.parameters.backendType=="ontap-san")].metadata.name}'
```

#### NAS StorageClass
This checks for StorageClasses that use 'ontap-nas' as their backend type. It outputs the name of any matching StorageClass:
```bash
kubectl get storageclass -o jsonpath='{.items[?(@.parameters.backendType=="ontap-nas")].metadata.name}'
```

If one or both are not found, you may need to create them. Refer to the prerequisites section above for more information on how to do that.

### 3. Install the Kubernetes External Snapshotter
Run the following commands to install the Kubernetes External Snapshotter. For more information please consult the official [external-snapshotter documentation](https://github.com/kubernetes-csi/external-snapshotter).

```bash
kubectl kustomize https://github.com/kubernetes-csi/external-snapshotter/client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize https://github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller | kubectl create -f -
kubectl kustomize https://github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/csi-snapshotter | kubectl create -f -
```

### 4. Create VolumeSnapshotClasses for your storage provider
Trident Protect requires a VolumeSnapshotClass to be created for the storage CSI driver you are using. You can use the following command to see if you already have one defined:
```bash
kubectl get VolumeSnapshotClass
```
If you don't have one defined you'll need to create one. Here is an example of a yaml file that defines a VolumeSnapshotClass for the Trident CSI driver:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: trident-csi-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: csi.trident.netapp.io
deletionPolicy: Delete
```

Here is an example of a yaml file that defines a VolumeSnapshotClass for the EBS CSI driver:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ebs-csi-snapclass
driver: ebs.csi.aws.com
deletionPolicy: Delete
```

After creating the yaml file with the VolumeSnapshotClass for your CSI driver, run the following command to create the VolumeSnapshotClass:

```bash
kubectl apply -f <VolumeSnapshotClass.yaml>
```
### 5. Install Trident Protect
Execute the following commands to install Trident Protect. For more info please consult the official [Trident Protect documentation](https://docs.netapp.com/us-en/trident/trident-protect/trident-protect-installation.html).

```bash
helm repo add netapp-trident-protect https://netapp.github.io/trident-protect-helm-chart
helm install trident-protect-crds netapp-trident-protect/trident-protect-crds --create-namespace --namespace trident-protect
helm install trident-protect netapp-trident-protect/trident-protect --set autoSupport.enabled=false --set clusterName=trident-protect-cluster --namespace trident-protect
```
Note that the above commands install the latest version. If you want to install a specific version, add the `--version` option and provide the version you want to use. Please use version `100.2410.1` or later.

### 6. Create Private S3 Bucket for Backup Data and Metadata

```bash
aws s3 mb s3://<bucket_name> --region <aws_region>
```
Replace:
- `<bucket_name>` with the name you want to assign to the bucket. Note it must be a globally unique name.
- `<aws_region>` with the AWS region in which you want the bucket to reside.

120+
### 7. Create a Kubernetes secret for the S3 bucket
121+
If required, create a service account within AWS IAM that has rights to read and write to the S3 bucketd create. Then create an access key.
122+
Once you have the Access Key Id and Secret Access Key, create a Kubernetes secret with the following command:
123+
124+
```markdown
125+
kubectl create secret generic -n trident-protect s3 --from-literal=accessKeyID=<AccessKeyID> --from-literal=secretAccessKey=<secretAccessKey>
126+
```
127+
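The IAM setup described above can be sketched with the AWS CLI. This is a minimal sketch, assuming you have permission to create IAM users; `create_s3_backup_user`, the user name, and the policy name are illustrative and not part of Trident Protect:

```bash
# Hypothetical helper: creates an IAM user with read/write access to the
# backup bucket, then generates an access key for it.
create_s3_backup_user() {
  local user="$1" bucket="$2"

  # Create the IAM user that Trident Protect will use to access the bucket.
  aws iam create-user --user-name "$user"

  # Attach an inline policy granting read/write access to the bucket.
  aws iam put-user-policy --user-name "$user" --policy-name "${user}-s3-access" \
    --policy-document "{
      \"Version\": \"2012-10-17\",
      \"Statement\": [{
        \"Effect\": \"Allow\",
        \"Action\": [\"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucket\"],
        \"Resource\": [\"arn:aws:s3:::${bucket}\", \"arn:aws:s3:::${bucket}/*\"]
      }]
    }"

  # The output of this command contains the AccessKeyId and SecretAccessKey
  # used in the kubectl create secret command above.
  aws iam create-access-key --user-name "$user"
}
```

For example, `create_s3_backup_user trident-protect-s3-user <bucket_name>`.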

## Configure Trident Protect to backup your application
Perform these steps to configure Trident Protect to backup your application:
- [Define Trident Vault](#define-a-trident-vault-to-store-the-backup)
### Define a Trident Vault to store the backup

```yaml
apiVersion: protect.trident.netapp.io/v1
kind: AppVault
metadata:
  name: <APP VAULT NAME>
  namespace: trident-protect
spec:
  providerType: AWS
  providerConfig:
    s3:
      bucketName: <APP VAULT BUCKET NAME>
      endpoint: <S3 ENDPOINT>
  providerCredentials:
    accessKeyID:
      valueFromSecret:
        key: accessKeyID
        name: s3
    secretAccessKey:
      valueFromSecret:
        key: secretAccessKey
        name: s3
```

Replace:
- `<APP VAULT NAME>` with the name you want assigned to the Trident Vault.
- `<APP VAULT BUCKET NAME>` with the name of the bucket you created in step 6 above.
- `<S3 ENDPOINT>` with the hostname of the S3 endpoint. For example: `s3.us-west-2.amazonaws.com`.

Now run the following command to create the Trident Vault:

If you want to avoid storing AWS credentials explicitly in Kubernetes secrets, an alternative is to use an IAM role for the service account:
- Create a Kubernetes service account in the trident-protect namespace and associate it with the IAM role
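If your cluster is managed with eksctl, the service-account association above can be sketched as follows. This is a minimal sketch; `create_trident_protect_irsa`, the service account name, and the policy ARN are illustrative assumptions:

```bash
# Hypothetical helper: creates a Kubernetes service account in the
# trident-protect namespace and associates it with an IAM role that has the
# given policy attached (IAM Roles for Service Accounts), using eksctl.
create_trident_protect_irsa() {
  local cluster="$1" policy_arn="$2"
  eksctl create iamserviceaccount \
    --cluster "$cluster" \
    --namespace trident-protect \
    --name trident-protect-sa \
    --attach-policy-arn "$policy_arn" \
    --approve
}
```

For example, `create_trident_protect_irsa <name_of_cluster> arn:aws:iam::<account_id>:policy/<s3_backup_policy>`.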
### Create a Trident Application
To back up your application, you first define it as a Trident Application. Do that by creating a file named `trident-application.yaml` with the following contents:

```yaml
apiVersion: protect.trident.netapp.io/v1
kind: Application
metadata:
  name: <APP NAME>
  namespace: <APP NAMESPACE>
spec:
  includedNamespaces:
    - namespace: <APP NAMESPACE>
```

Replace:
- `<APP NAME>` with the name you want assigned to the application.
- `<APP NAMESPACE>` with the namespace where the application resides.

Then run the following command to create the Trident Application:

```bash
kubectl apply -f trident-application.yaml
```

### Run Backup for Application
To perform an on-demand backup of the application, first create a backup configuration file named `trident-backup.yaml` with the following contents:

```yaml
apiVersion: protect.trident.netapp.io/v1
kind: Backup
metadata:
  namespace: <APP NAMESPACE>
  name: <APP BACKUP NAME>
spec:
  applicationRef: <APP NAME>
  appVaultRef: <APP VAULT NAME>
```

Replace:
- `<APP NAMESPACE>` with the namespace where the application resides.
- `<APP BACKUP NAME>` with the name you want assigned to the backup. This has to be unique.
- `<APP NAME>` with the name of the application defined in the step above.
- `<APP VAULT NAME>` with the name of the Trident Vault created in the step above.

Then run the following command to start the backup:

```bash
kubectl apply -f trident-backup.yaml
```

To check the status of the backup, run the following command:

```bash
kubectl get backup -n <APP NAMESPACE> <APP BACKUP NAME> -o jsonpath='{.status.state}'
```

- If the status is `Completed`, the backup completed successfully.
- If the status is `Running`, run the command again in a few minutes to check the status.
- If the status is `Failed`, check the error message:

```bash
kubectl get backup -n <APP NAMESPACE> <APP BACKUP NAME> -o jsonpath='{.status.error}'
```
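Since a backup can take a while, the status check above can be scripted to poll until the backup reaches a terminal state. A minimal sketch; `wait_for_backup` is a hypothetical helper, not part of Trident Protect:

```bash
# Hypothetical helper: polls the Backup resource until it reaches a terminal
# state (Completed or Failed), sleeping between checks while it is Running.
wait_for_backup() {
  local namespace="$1" backup="$2" state
  while true; do
    state=$(kubectl get backup -n "$namespace" "$backup" -o jsonpath='{.status.state}')
    case "$state" in
      Completed)
        echo "Backup $backup completed successfully"
        return 0 ;;
      Failed)
        echo "Backup $backup failed:"
        kubectl get backup -n "$namespace" "$backup" -o jsonpath='{.status.error}'
        return 1 ;;
      *)
        sleep 60 ;;
    esac
  done
}
```

For example, `wait_for_backup <APP NAMESPACE> <APP BACKUP NAME>` after applying `trident-backup.yaml`.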

## Perform a restore of the backup
There are two ways to restore a backup:
- [Restore backup to a different namespace](#restore-backup-to-a-different-namespace)
- [Restore backup to the same namespace](#restore-backup-to-the-same-namespace)

### Restore backup to a different namespace
To restore the backup you created above to a different namespace, first create a restore configuration file named `trident-restore.yaml` with the following contents:

```yaml
apiVersion: protect.trident.netapp.io/v1
kind: BackupRestore
metadata:
  name: <APP RESTORE NAME>
  namespace: <DESTINATION NAMESPACE>
spec:
  appArchivePath: <APP ARCHIVE PATH>
  appVaultRef: <APP VAULT NAME>
  namespaceMapping:
    - source: <SOURCE NAMESPACE>
      destination: <DESTINATION NAMESPACE>
```

Replace:
- `<APP RESTORE NAME>` with the name you want to assign the restore configuration.
- `<DESTINATION NAMESPACE>` with the namespace where you want the application to be restored to.
- `<APP VAULT NAME>` with the name of the Trident Vault that contains the backup you want to restore from.
- `<SOURCE NAMESPACE>` with the namespace where the application was backed up from.
- `<APP ARCHIVE PATH>` with the path to the backup archive. You can get this by running the following command:
```bash
kubectl get backup -n <APP NAMESPACE> <APP BACKUP NAME> -o jsonpath='{.status.appArchivePath}'
```

Run the following command to start the restore:

```bash
kubectl apply -f trident-restore.yaml
```

You can check the status of the restore by running the following command:

```bash
kubectl get backuprestore -n <DESTINATION NAMESPACE> <APP RESTORE NAME> -o jsonpath='{.status.state}'
```

### Restore backup to the same namespace
Run the restore by first creating an in-place restore configuration file named `backupinplacerestore.yaml` with the following contents:

```yaml
apiVersion: protect.trident.netapp.io/v1
kind: BackupInplaceRestore
metadata:
  name: <APP BACKUP RESTORE NAME>
  namespace: <APP NAMESPACE>
spec:
  appArchivePath: <APP ARCHIVE PATH>
  appVaultRef: <APP VAULT NAME>
  storageClassMapping:
    - source: <SOURCE STORAGE CLASS>
      destination: <DESTINATION STORAGE CLASS>
```

Replace:
- `<APP BACKUP RESTORE NAME>` with the name you want to assign the restore configuration.
- `<APP NAMESPACE>` with the namespace where the application was backed up from.
- `<APP VAULT NAME>` with the name of the Trident Vault that contains the backup you want to restore from.
- `<SOURCE STORAGE CLASS>` with the storage class of the PVCs you want to migrate from.
- `<DESTINATION STORAGE CLASS>` with the storage class of the PVCs you want to migrate to.
- `<APP ARCHIVE PATH>` with the path to the backup archive. You can get this by running the following command:

```bash
kubectl get backup -n <APP NAMESPACE> <APP BACKUP NAME> -o jsonpath='{.status.appArchivePath}'
```

Note that in the above example, not only are we restoring to the same namespace, but we are also migrating the PVCs from one storage class to another. If you don't want to do that, you can remove the `storageClassMapping` section from the yaml file.

Once the yaml file is created, run the following command to start the restore:

```bash
kubectl apply -f backupinplacerestore.yaml
```

To verify the application restore was successful, run the following command:

```bash
kubectl get backupinplacerestore -n <APP NAMESPACE> <APP BACKUP RESTORE NAME> -o jsonpath='{.status.state}'
```
