Commit b47ff72
Updated the README.md file to be clearer.
1 parent 9775ec4 commit b47ff72

2 files changed: +116 −36 lines


Solutions/FSxN-as-PVC-for-EKS/README.md

Lines changed: 105 additions & 25 deletions
@@ -14,8 +14,11 @@ A Unix based system with the following installed:
 - Deploy an EKS cluster
 - Deploy an FSx for NetApp ONTAP File System
 - Create security groups
+- Create policies and roles
+- Create secrets in AWS SecretsManager
 - Create a VPC and subnets
 - Create a NAT Gateway
+- Create an Internet Gateway
 - Create an EC2 instance

 ## Installation Overview
@@ -27,16 +30,18 @@ The overall process is as follows:
 - Run 'terraform apply -auto-approve' to:
   - Create a new VPC with public and private subnets.
   - Deploy an FSx for NetApp ONTAP File System.
+  - Create a secret in AWS SecretsManager to hold the FSxN password.
   - Deploy an EKS cluster.
   - Deploy an EC2 Linux based instance.
+  - Create policies, roles, and security groups to protect the new environment.
 - SSH to the Linux based instance to complete the setup:
   - Install the FSx for NetApp ONTAP Trident CSI driver.
   - Configure the Trident CSI driver.
   - Create a Kubernetes storage class.
   - Deploy a sample application to test the storage with.

 ## Detailed Instructions
-## Clone the "NetApp/FSx-ONTAP-samples-scripts" repo from GitHub
+### Clone the "NetApp/FSx-ONTAP-samples-scripts" repo from GitHub
 Execute the following commands to clone the repo and change into the directory where the
 terraform files are located:
 ```bash
@@ -46,6 +51,7 @@ cd FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS/terraform
 ### Make any desired changes to the variables.tf file.
 Variables that can be changed include:
 - aws_region - The AWS region where you want to deploy the resources.
+- aws_secrets_region - The region where the fsx_password_secret will be created.
 - fsx_name - The name you want applied to the FSx for NetApp ONTAP File System. Must not already exist.
 - fsx_password_secret_name - A base name of the AWS SecretsManager secret that will hold the FSxN password.
   A random string will be appended to this name to ensure uniqueness.
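If you would rather not edit variables.tf, the same variables can also be overridden on the command line. A sketch, using variable names from the list above (the values shown are placeholders, not recommendations):

```shell
# Override defaults at apply time instead of editing variables.tf.
terraform apply -auto-approve \
    -var="aws_region=us-west-2" \
    -var="fsx_name=eksfs" \
    -var="fsx_password_secret_name=fsx-eks-secret"
```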
@@ -73,14 +79,15 @@ the following is an example of the last part of the output of a successful deployment:
 ```bash
 Outputs:

-eks-cluster-name = "fsx-eks-mWFem72Z"
-eks-jump-server = "Instance ID: i-0bcf0ed9adeb55814, Public IP: 35.92.238.240"
-fsx-id = "fs-04794c394fa5a85de"
-fsx-password-secret-arn = "arn:aws:secretsmanager:us-west-2:759995470648:secret:fsx-eks-secret20240618170506480900000001-u8IQEp"
-fsx-password-secret-name = "fsx-eks-secrete-3f55084"
+eks-cluster-name = "fsx-eks-DB0H69vL"
+eks-jump-server = "Instance ID: i-0e99a61431a39d327, Public IP: 54.244.16.198"
+fsx-id = "fs-0887a493cXXXXXXXX"
+fsx-management-ip = "198.19.255.174"
+fsx-password-secret-arn = "arn:aws:secretsmanager:us-west-2:759995400000:secret:fsx-eks-secret-3b8bde97-Fst5rj"
+fsx-password-secret-name = "fsx-eks-secret-3b8bde97"
 fsx-svm-name = "ekssvm"
 region = "us-west-2"
-vpc-id = "vpc-043a3d602b64e2f56"
+vpc-id = "vpc-03ed6b1867d76e1a9"
 ```
 You will use the values in the commands below, so it is probably a good idea to copy the output somewhere
 so you can easily reference it later.
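Rather than copying the output by hand, the values can also be pulled into shell variables with `terraform output` — a sketch, assuming you are still in the terraform directory and the output names shown above:

```shell
# Re-read any Terraform output later and capture it into a variable.
FSX_ID=$(terraform output -raw fsx-id)
EKS_CLUSTER=$(terraform output -raw eks-cluster-name)
echo "FSx ID: ${FSX_ID}, EKS cluster: ${EKS_CLUSTER}"
```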
@@ -89,7 +96,7 @@ Note that an FSxN File System was created, with a vserver (a.k.a. SVM). The default username
 for the FSxN File System is 'fsxadmin'. And the default username for the vserver is 'vsadmin'. The
 password for both of these users is the same and is what is stored in the AWS SecretsManager secret
 shown above. Note that since Terraform was used to create the secret, the password is stored in
-plain text therefore it is **HIGHLY** recommended that you change the password to something else
+plain text and therefore it is **HIGHLY** recommended that you change the password to something else
 by first changing the passwords via the AWS Management Console and then updating the password in
 the AWS SecretsManager secret. You can update the 'username' key in the secret if you want, but
 it must be a vserver admin user, not a system level user. This secret is used by Astra
@@ -113,11 +120,9 @@ There are various ways to configure the AWS cli. If you are unsure how to do it,
 refer to the AWS documentation for instructions:
 [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html)

-**NOTE:** When asked for a default region, use the region you specified in the variables.tf file.
-
 ### Allow access to the EKS cluster for your user id
 AWS's EKS clusters have a secondary form of permissions. As such, you have to add an "access-entry"
-to your EKS configuration and associate it with Cluster Admin policy to be able to view and
+to your EKS configuration and associate it with the Cluster Admin policy to be able to view and
 configure the EKS cluster. The first step to do this is to find out your IAM ARN.
 You can do that via this command:
 ```bash
@@ -186,8 +191,8 @@ trident-operator-67d6fd899b-jrnt2 1/1 Running 0 20h
 For the example below we are going to set up an iSCSI LUN for a MySQL
 database. To help facilitate that, we are going to set up Astra Trident as a backend provider.
 Since we are going to be creating an iSCSI LUN, we are going to use its `ontap-san` driver.
-Astra Trident has varioius different drivers that you might be interested in. You can read
-more about the drivers it supports in the
+Astra Trident has several different drivers to choose from. You can read more about the
+drivers it supports in the
 [Astra Trident documentation.](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details)

 As you go through the steps below, you will notice that most of the files have "-san" in their
@@ -214,7 +219,7 @@ After making the following substitutions in the commands below:

 Execute the following commands to configure Trident to use the FSxN file system that was
 created earlier using the `terraform apply` command:
-```bash
+```
 cd ~/FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS
 mkdir temp
 export FSX_ID=<fsx-id>
@@ -244,6 +249,20 @@ The output should look similar to this:
 NAME                    BACKEND NAME            BACKEND UUID                           PHASE   STATUS
 backend-fsx-ontap-san   backend-fsx-ontap-san   7a551921-997c-4c37-a1d1-f2f4c87fa629   Bound   Success
 ```
+If the status is Failed, then you can add the "--output=json" flag to the `kubectl get tridentbackendconfig`
+command to get more information as to why it failed. Specifically, look at the "message" field in the output.
+The following command will get just the status messages:
+```bash
+kubectl get tridentbackendconfig -n trident --output=json | jq '.items[] | .status.message'
+```
+Once you have resolved any issues, you can remove the failed backend by running:
+```bash
+kubectl delete -n trident -f temp/backend-tbc-ontap-san.yaml
+```
+Then, you can re-run the `kubectl create -n trident -f temp/backend-tbc-ontap-san.yaml` command.
+If the issue was with one of the variables that was substituted in, then you will need to
+rerun the `envsubst` command to create a new `temp/backend-tbc-ontap-san.yaml` file
+before running the `kubectl create -n trident -f temp/backend-tbc-ontap-san.yaml` command.

 ### Create a Kubernetes storage class
 The next step is to create a Kubernetes storage class by executing:
@@ -283,7 +302,31 @@ kubectl get pvc
 The output should look similar to this:
 ```bash
 NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
-mysql-volume-san   Bound    pvc-15e834eb-daf8-4a96-a2d5-4044442fbe90   50Gi       RWO            fsx-basic-san   <unset>                 13s
+mysql-volume-san   Bound    pvc-1aae479e-4b27-4310-8bb2-71255134edf0   50Gi       RWO            fsx-basic-san   <unset>                 114m
+```
+
+If you want to see what was created on the FSxN file system, you can log into it and take a look.
+You will want to log in as the 'fsxadmin' user, using the password stored in the AWS SecretsManager secret.
+You can find the IP address of the FSxN file system in the output from the `terraform apply` command.
+Here is an example of logging in and listing all the LUNs on the system:
+```bash
+ubuntu@ip-10-0-4-125:~/FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS$ ssh -l fsxadmin 198.19.255.174
+(fsxadmin@198.19.255.174) Password:
+
+Last login time: 6/21/2024 15:30:27
+FsxId0887a493c777c5122::> lun show
+Vserver   Path                                                         State    Mapped   Type     Size
+--------- ------------------------------------------------------------ -------- -------- -------- --------
+ekssvm    /vol/trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0/lun0
+                                                                       online   mapped   linux    50GB
+
+FsxId0887a493c777c5122::> volume show
+Vserver   Volume       Aggregate    State      Type Size       Available  Used%
+--------- ------------ ------------ ---------- ---- ---------- ---------- -----
+ekssvm    ekssvm_root  aggr1        online     RW   1GB        972.4MB    0%
+ekssvm    trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
+                       aggr1        online     RW   55GB       54.90GB    0%
+3 entries were displayed.
 ```

 ### Deploy a MySQL database using the storage created above
@@ -297,10 +340,11 @@ kubectl get pods
 ```
 The output should look similar to this:
 ```bash
-NAME                         READY   STATUS    RESTARTS   AGE
-csi-snapshotter-0            3/3     Running   0          22h
-mysql-fsx-695b497757-pvn7q   1/1     Running   0          20h
+NAME                             READY   STATUS    RESTARTS   AGE
+mysql-fsx-san-79cdb57b58-m2lgr   1/1     Running   0          31s
 ```
+Note that it might take a minute or two for the pod to get to the Running status.
+
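If you would rather block until the pod is ready than poll `kubectl get pods`, something like the following should work. This is an optional alternative, not part of the original steps; the label matches the `app=mysql-fsx-san` selector used later in this guide:

```shell
# Wait up to 3 minutes for the MySQL pod to report Ready.
kubectl wait --for=condition=Ready pod -l app=mysql-fsx-san --timeout=180s
```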
 To see how the MySQL was configured, check out the `manifests/mysql-san.yaml` file.

 ### Populate the MySQL database with data
@@ -316,7 +360,7 @@ kubectl exec -it $(kubectl get pod -l "app=mysql-fsx-san" --namespace=default -o

 After you have logged in, here is a session showing an example of creating a database, then creating a table, then inserting
 some values into the table:
-```sql
+```
 mysql> create database fsxdatabase;
 Query OK, 1 row affected (0.01 sec)
@@ -347,6 +391,9 @@ mysql> select * from fsx;
 | netapp04   | 1024GB   | us-west-1 |
 +------------+----------+-----------+
 6 rows in set (0.00 sec)
+
+mysql> quit
+Bye
 ```

 ## Create a snapshot of the MySQL data
@@ -355,7 +402,7 @@ These snapshots take almost no additional space on the backend storage and pose

 ### Install the Kubernetes Snapshot CRDs and Snapshot Controller:
 The first step is to install the Snapshot CRDs and the Snapshot Controller.
-To do that run the following commands:
+Do that by running these commands:
 ```bash
 git clone https://github.com/kubernetes-csi/external-snapshotter
 cd external-snapshotter/
@@ -364,9 +411,8 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
 kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
 cd ..
 ```
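An optional sanity check that the snapshot CRDs registered — this is not part of the original instructions, and the CRD names come from the external-snapshotter project:

```shell
# These three CRDs should exist after the kustomize install above;
# an error here means the install did not take.
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
    volumesnapshotclasses.snapshot.storage.k8s.io \
    volumesnapshotcontents.snapshot.storage.k8s.io
```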
-
-### Create a snapshot of the MySQL data
-Now, add the volume snapshot class by executing:
+### Create a snapshot class based on the CRD installed
+Create a snapshot class by executing:
 ```bash
 kubectl create -f manifests/volume-snapshot-class.yaml
 ```
@@ -377,7 +423,8 @@ volumesnapshotclass.snapshot.storage.k8s.io/fsx-snapclass created
 Note that this snapshot class works for both LUNs and NFS volumes, so there aren't different versions
 of this file based on the storage type you are testing with.

-The final step is to create a snapshot of the data by executing:
+### Create a snapshot of the MySQL data
+Now you can create a snapshot by executing:
 ```bash
 kubectl create -f manifests/volume-snapshot-san.yaml
 ```
@@ -392,9 +439,19 @@ kubectl get volumesnapshot
 The output should look like:
 ```bash
 NAME                       READYTOUSE   SOURCEPVC          SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
-mysql-volume-san-snap-01   true         mysql-volume-san                           50Gi          fsx-snapclass   snapcontent-b3491f26-47e3-484c-aae0-69d45087d6c7   4s             5s
+mysql-volume-san-snap-01   true         mysql-volume-san                           50Gi          fsx-snapclass   snapcontent-bdce9310-9698-4b37-9f9b-d1d802e44f17   2m18s          2m18s
 ```

+You can log onto the FSxN file system to see that the snapshot was created there:
+```bash
+FsxId0887a493c777c5122::> snapshot show -volume trident_pvc_*
+                                                                  ---Blocks---
+Vserver  Volume    Snapshot                                 Size   Total% Used%
+-------- --------- ------------------------------------- -------- ------ -----
+ekssvm   trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
+                   snapshot-bdce9310-9698-4b37-9f9b-d1d802e44f17
+                                                             140KB     0%    0%
+```
+
 ## Clone the MySQL data to a new storage persistent volume
 Now that you have a snapshot of the data, you can use it to create a read/write version
 of it. This can be used as a new storage volume for another MySQL database. This operation
@@ -406,6 +463,29 @@ The first step is to create a Persistent Volume Claim from the snapshot by executing:
 ```bash
 kubectl create -f manifests/pvc-from-san-snapshot.yaml
 ```
+To check that it worked, execute:
+```bash
+kubectl get pvc
+```
+The output should look similar to this:
+```bash
+NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
+mysql-volume-san         Bound    pvc-1aae479e-4b27-4310-8bb2-71255134edf0   50Gi       RWO            fsx-basic-san   <unset>                 125m
+mysql-volume-san-clone   Bound    pvc-ceb1b2c2-de35-4011-8d6e-682b6844bf02   50Gi       RWO            fsx-basic-san   <unset>                 2m22s
+```
+
+To check it on the FSxN side, you can run:
+```bash
+FsxId0887a493c777c5122::> volume clone show
+                                                  Parent  Parent        Parent
+Vserver FlexClone     Vserver Volume        Snapshot             State     Type
+------- ------------- ------- ------------- -------------------- --------- ----
+ekssvm  trident_pvc_ceb1b2c2_de35_4011_8d6e_682b6844bf02
+                      ekssvm  trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
+                              snapshot-bdce9310-9698-4b37-9f9b-d1d802e44f17
+                                                           online    RW
+```
 ## Create a new MySQL database using the cloned volume
 Now that you have a new storage volume, you can create a new MySQL database that uses it by executing:
 ```bash

Solutions/FSxN-as-PVC-for-EKS/terraform/variables.tf

Lines changed: 11 additions & 11 deletions
@@ -3,24 +3,19 @@ variable "aws_region" {
   description = "aws region where you want the resources deployed."
 }

+variable "aws_secrets_region" {
+  default     = "us-west-2"
+  description = "The region where you want the FSxN secret stored within AWS Secrets Manager."
+}
+
 variable "fsx_name" {
   default     = "eksfs"
   description = "The name you want assigned to the FSxN file system."
 }

 variable "fsx_password_secret_name" {
   default     = "fsx-eks-secret"
-  description = "The basename of the secret to create within the AWS Secrets Manager that will contain the FSxN password. A random string will be appended to the end of the secreate name to ensure no name conflict."
-}
-
-variable "aws_secrets_region" {
-  default     = "us-west-2"
-  description = "The region where you want the secret stored within AWS Secrets Manager."
-}
-
-variable "trident_version" {
-  default     = "v24.2.0-eksbuild.1"
-  description = "The version of Astra Trident to 'add-on' to the EKS cluster."
+  description = "The base name of the secret to create within the AWS Secrets Manager that will contain the FSxN password. A random string will be appended to the end of the secret name to ensure no name conflict."
 }

 variable "fsxn_throughput_capacity" {
@@ -48,6 +43,11 @@ variable "secure_ips" {
 # Don't change any variables below this line.
 ################################################################################

+variable "trident_version" {
+  default     = "v24.2.0-eksbuild.1"
+  description = "The version of Astra Trident to 'add-on' to the EKS cluster."
+}
+
 variable "kubernetes_version" {
   default     = 1.29
   description = "kubernetes version"
