@@ -8,7 +8,7 @@ for it. It will leverage NetApp's Astra Trident to provide the interface between
88
99## Prerequisites
1010
11- A Linux based EC2 instance with the following installed:
11+ A Unix-based system with the following installed:
1212- HashiCorp's Terraform
1313- AWS CLI, authenticated with an account that has privileges necessary to:
1414 - Deploy an EKS cluster
@@ -77,15 +77,26 @@ eks-cluster-name = "fsx-eks-mWFem72Z"
7777eks-jump-server = " Instance ID: i-0bcf0ed9adeb55814, Public IP: 35.92.238.240"
7878fsx-id = " fs-04794c394fa5a85de"
7979fsx-password-secret-arn = " arn:aws:secretsmanager:us-west-2:759995470648:secret:fsx-eks-secret20240618170506480900000001-u8IQEp"
80- fsx-password-secret-name = " fsx-eks-secret20240618170506480900000001 "
80+ fsx-password-secret-name = " fsx-eks-secrete-3f55084 "
8181fsx-svm-name = " ekssvm"
8282region = " us-west-2"
8383vpc-id = " vpc-043a3d602b64e2f56"
84- zz_update_kubeconfig_command = " aws eks update-kubeconfig --name fsx-eks-mWFem72Z --region us-west-2"
8584```
8685You will use the values in the commands below, so it's probably a good idea to copy the output somewhere
8786so you can easily reference it later.
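For example, one simple way to keep the values handy (just a sketch; the file name is arbitrary) is to re-print the outputs to a file from the directory where you ran ` terraform apply `:
```bash
# Run from the directory where 'terraform apply' was run; 'terraform output'
# re-prints just the output values shown above.
terraform output > ~/fsx-eks-outputs.txt
```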
8887
88+ Note that an FSxN File System was created, along with a vserver (a.k.a. SVM). The default
89+ username for the FSxN File System is 'fsxadmin', and the default username for the vserver is
90+ 'vsadmin'. The password for both of these users is the same and is stored in the AWS
91+ SecretsManager secret shown above. Since Terraform was used to create the secret, the password
92+ is stored in plain text in the Terraform state file, so it is ** HIGHLY** recommended that you
93+ change the password by first changing it via the AWS Management Console and then updating the
94+ password stored in the AWS SecretsManager secret. You can update the 'username' key in the
95+ secret if you want, but it must be a vserver admin user, not a system level user, since the
96+ secret is used by Astra Trident, which always logs in via the vserver management LIF. If you
97+ want to create a separate secret for the 'fsxadmin' user, feel free to do so.
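For example, once the passwords have been changed via the AWS Management Console, the secret could be updated with something along these lines (a sketch; it assumes the secret stores 'username' and 'password' keys as described above, and that you substitute your own secret name and new password):
```bash
# Hypothetical values; use the fsx-password-secret-name value from the terraform output.
aws secretsmanager put-secret-value \
    --secret-id <fsx-password-secret-name> \
    --secret-string '{"username":"vsadmin","password":"<new-password>"}'
```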
99+
89100### SSH to the jump server to complete the setup
90101Use the following command to 'ssh' to the jump server:
91102``` bash
@@ -98,28 +109,33 @@ referenced in the variables.tf file.
98109in the output from the ` terraform apply ` command.
99110
100111### Configure the 'aws' CLI
101- Run the following command to configure the 'aws' command:
102- ``` bash
103- aws configure
104- ```
105- It will prompt you for an access key and secret. See above for the required permissions.
106- It will also prompt you for a default region and output format. I would recommend setting
107- the region to the same region you set in the variables.tf file. It doesn't matter what
108- you set the default output format to.
112+ There are various ways to configure the AWS CLI. If you are unsure how to do it, please
113+ refer to this URL for instructions:
114+ [ Configuring the AWS CLI] ( https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html )
115+
116+ ** NOTE:** When asked for a default region, use the region you specified in the variables.tf file.
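For example, one common method is to provide credentials and a default region through environment variables, which the AWS CLI picks up automatically (a sketch; substitute your own values, and see the link above for the other options):
```bash
# Hypothetical placeholder values; the credentials must have the permissions
# listed in the prerequisites section.
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=us-west-2   # the region you set in the variables.tf file
```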
109117
110118### Allow access to the EKS cluster for your user id
111119AWS's EKS clusters have a secondary form of permissions. As such, you have to add an "access-entry"
112120to your EKS configuration and associate it with Cluster Admin policy to be able to view and
113121configure the EKS cluster. The first step to do this is to find out your IAM ARN.
114122You can do that via this command:
115123``` bash
116- aws iam get-user --output=text --query User.Arn
124+ user_ARN=$( aws sts get-caller-identity --query Arn --output text)
125+ echo $user_ARN
117126```
118- To make the next few commands easy, create variables that hold the AWS region, EKS cluster name,
119- and the user ARN:
127+ Note that if you are using an SSO to authenticate with AWS, then the actual ARN
128+ you need to add is slightly different from what is output from the above command.
129+ The following command will take the output from the above command and format it correctly:
130+ ``` bash
131+ user_ARN=$( aws sts get-caller-identity | jq -r ' .Arn' | awk -F: ' {split($6, parts, "/"); printf "arn:aws:iam::%s:role/aws-reserved/sso.amazonaws.com/%s\n", $5, parts[2]}' )
132+ echo $user_ARN
133+ ```
134+ The above command will leverage a standard AWS role that is created when configuring AWS to use an SSO.
135+
136+ To make the next few commands easy, create variables that hold the AWS region and EKS cluster name:
120137``` bash
121138aws_region=< AWS_REGION>
122- user_ARN=$( aws iam get-user --output=text --query User.Arn)
123139cluster_name=< EKS_CLUSTER_NAME>
124140```
125141Of course, replace <AWS_REGION> with the region where the resources were deployed. And replace
@@ -133,15 +149,14 @@ aws eks associate-access-policy --cluster-name $cluster_name --principal-arn $us
133149```
134150
135151### Configure kubectl to use the EKS cluster
136- You'll notice at the bottom of the output from the ` terraform apply ` command a
137- "zz_update_kubeconfig_command" variable. The output of that variable shows the
138- command to run to configure kubectl to use the AWS EKS cluster.
139-
140- Here's an example based on the "terraform apply" output shown above:
152+ AWS makes it easy to configure 'kubectl' to use the EKS cluster. You can do that by running this command:
141153``` bash
142- aws eks update-kubeconfig --name fsx-eks-mWFem72Z --region us-west-2
154+ aws eks update-kubeconfig --name $cluster_name --region $aws_region
143155```
144- Run the following command to confirm you can communicate with the EKS cluster:
156+ Of course, the above assumes the cluster_name and aws_region variables are still set from
157+ running the commands above.
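If they are not still set, you can recreate them; for example, using the sample ` terraform apply ` output shown earlier:
```bash
# Example values taken from the sample output above; substitute your own.
aws_region=us-west-2
cluster_name=fsx-eks-mWFem72Z
```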
158+
159+ To confirm you are able to communicate with the EKS cluster, run the following command:
145160``` bash
146161kubectl get nodes
147162```
@@ -152,17 +167,6 @@ ip-10-0-1-84.us-west-2.compute.internal Ready <none> 76m v1.29.3-eks-a
152167ip-10-0-2-117.us-west-2.compute.internal Ready < none> 76m v1.29.3-eks-ae9a62a
153168```
154169
155- ### Install the Kubernetes Snapshot CRDs and Snapshot Controller:
156- Run these commands to install the CRDs:
157- ``` bash
158- git clone https://github.com/kubernetes-csi/external-snapshotter
159- cd external-snapshotter/
160- kubectl kustomize client/config/crd | kubectl create -f -
161- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
162- kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
163- cd ..
164- ```
165-
166170### Confirm Astra Trident is up and running
167171Astra Trident should have been added to your EKS Cluster as part of the terraform deployment.
168172Confirm that it is up and running by running this command:
@@ -180,19 +184,21 @@ trident-operator-67d6fd899b-jrnt2 1/1 Running 0 20h
180184
181185### Configure the Trident CSI backend to use FSx for NetApp ONTAP
182186For the example below we are going to set up an iSCSI LUN for a MySQL
183- database. Because of that, we are going to setup a Trident backend
184- to use the ` ontap-san ` driver. You can read more about the different driver types in the
185- [ Astra Trident documentation] ( https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details ) documentation.
187+ database. Because of that, we are going to set up Astra Trident as a backend provider
188+ and configure it to use its ` ontap-san ` driver. You can read more about
189+ the different types of drivers it supports in the
190+ [ Astra Trident documentation] ( https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details ).
186192
187193As you go through the steps below, you will notice that most of the files have "-san" in their
188194name. If you want to see an example of using NFS instead of iSCSI, then there are equivalent
189- files that have "-nas" in the name. You can even create two mysql databases, one using iSCSI LUN
195+ files that have "-nas" in their name. You can even create two mysql databases, one using an iSCSI LUN
190196and the other using NFS.
191197
192198The first step is to define a backend provider and, in the process, give it the information
193199it needs to make changes (e.g. create volumes, and LUNs) to the FSxN file system.
194200
195- In the command below you're going to need the FSxN ID, the FSX SVM Name , and the
201+ In the command below you're going to need the FSxN ID, the FSxN SVM name, and the
196202secret ARN. All of that information can be obtained from the output
197203from the ` terraform apply ` command. If you have lost that output, you can always log back
198204into the server where you ran ` terraform apply ` and simply run it again. It should
@@ -204,6 +210,10 @@ used to create the environment with earlier. This copy will not have the terrafo
204210state information, nor your changes to the variables.tf file, but it does have
205211other files you'll need to complete the setup.
206212
213+ After making the following substitutions in the commands below:
214+ - \< fsx-id> with the FSxN ID.
215+ - \< fsx-svm-name> with the name of the SVM that was created.
216+ - \< secret-arn> with the ARN of the AWS SecretsManager secret that holds the FSxN password.
207217Execute the following commands to configure Trident to use the ` ontap-san ` driver.
208218``` bash
209219cd ~ /FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS
@@ -214,17 +224,17 @@ export SECRET_ARN=<secret-arn>
214224envsubst < manifests/backend-tbc-ontap-san.tmpl > temp/backend-tbc-ontap-san.yaml
215225kubectl create -n trident -f temp/backend-tbc-ontap-san.yaml
216226```
217- Of course replace:
218- - \< fsx-id> with the FSxN ID.
219- - \< fsx-svm-name> with the name of the SVM that was created.
220- - \< secret-arn> with the ARN of the AWS SecretsManager secret that holds the FSxN password.
221227
222228To get more information regarding how the backend was configured, look at the
223229` temp/backend-tbc-ontap-san.yaml ` file.
224230
225231As mentioned above, if you want to use NFS storage instead of iSCSI, you can use the
226232` manifests/backend-tbc-ontap-nas.tmpl ` file instead of the ` manifests/backend-tbc-ontap-san.tmpl `
227- file.
233+ file. The last two commands should look like:
234+ ``` bash
235+ envsubst < manifests/backend-tbc-ontap-nas.tmpl > temp/backend-tbc-ontap-nas.yaml
236+ kubectl create -n trident -f temp/backend-tbc-ontap-nas.yaml
237+ ```
228238
229239To confirm that the backend has been appropriately configured, run this command:
230240``` bash
@@ -297,22 +307,23 @@ To see how the MySQL was configured, check out the `manifests/mysql-san.yaml` fi
297307### Populate the MySQL database with data
298308
299309Now to confirm that the database is able to read and write to the persistent storage you need
300- to put some data in the database. Do that by first logging into the MySQL instance.
301- It will prompt for a password. In the yaml file used to create the database, you'll see
302- that we set that to ` Netapp1! `
310+ to put some data in the database. Do that by first logging into the MySQL instance using the
311+ command below. It will prompt for a password. In the yaml file used to create the database,
312+ you'll see that we set that to ` Netapp1! `.
303313``` bash
304314kubectl exec -it $( kubectl get pod -l " app=mysql-fsx-san" --namespace=default -o jsonpath=' {.items[0].metadata.name}' ) -- mysql -u root -p
305315```
306316** NOTE:** Replace "mysql-fsx-san" with "mysql-fsx-nas" if you are creating a NFS based MySQL server.
307317
308318After you have logged in, here is a session showing an example of creating a database, then creating a table, then inserting
309319some values into the table:
310- ``` bash
320+ ``` sql
311321mysql> create database fsxdatabase;
312322Query OK, 1 row affected (0 .01 sec)
313323
314324mysql> use fsxdatabase;
315325Database changed
326+
316327mysql> create table fsx (filesystem varchar (20 ), capacity varchar (20 ), region varchar (20 ));
317328Query OK, 0 rows affected (0 .04 sec)
318329
@@ -324,7 +335,7 @@ Records: 6 Duplicates: 0 Warnings: 0
324335```
325336
326337And, to confirm everything is there, here is an SQL statement to retrieve the data:
327- ``` bash
338+ ``` sql
328339mysql> select * from fsx;
329340+ -- ----------+----------+-----------+
330341| filesystem | capacity | region |
@@ -342,8 +353,21 @@ mysql> select * from fsx;
342353## Create a snapshot of the MySQL data
343354Of course, one of the benefits of FSxN is the ability to take space efficient snapshots of the volumes.
344355These snapshots take almost no additional space on the backend storage and pose no performance impact.
345- So, let' s create one for the SQL volume. The first step is to add the volume snapshot store class
346- by executing:
356+
357+ ### Install the Kubernetes Snapshot CRDs and Snapshot Controller
358+ The first step is to install the Snapshot CRDs and the Snapshot Controller.
359+ To do that, run the following commands:
360+ ``` bash
361+ git clone https://github.com/kubernetes-csi/external-snapshotter
362+ cd external-snapshotter/
363+ kubectl kustomize client/config/crd | kubectl create -f -
364+ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
365+ kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
366+ cd ..
367+ ```
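If you would like to verify that the snapshot CRDs were installed, a quick check (a sketch; the exact output will vary with the external-snapshotter version) is:
```bash
# Should list the volumesnapshotclasses, volumesnapshotcontents, and volumesnapshots CRDs.
kubectl get crd | grep volumesnapshot
```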
368+
369+ ### Create a snapshot of the MySQL data
370+ Now, add the volume snapshot class by executing:
347371``` bash
348372kubectl create -f manifests/volume-snapshot-class.yaml
349373```
@@ -354,7 +378,7 @@ volumesnapshotclass.snapshot.storage.k8s.io/fsx-snapclass created
354378Note that this volume snapshot class works for both LUNs and NFS volumes, so there aren't different versions
355379of this file based on the storage type you are testing with.
356380
357- The next step is to create a snapshot of the data by executing:
358+ The final step is to create a snapshot of the data by executing:
358382``` bash
359383kubectl create -f manifests/volume-snapshot-san.yaml
360384```
@@ -373,11 +397,11 @@ mysql-volume-san-snap-01 true mysql-volume-san
373397```
374398
375399## Clone the MySQL data to a new persistent storage volume
376- Now that you have a snapshot of the data, you use it to create a read/write version of it. This
377- can be used as a new storage volume for another mysql database. This step creates a new
378- FlexClone volume in FSx for ONTAP. Note that FlexClone volumes take up almost no space ;
379- only a pointer table is created to point to the shared data blocks of the volume it is
380- being cloned from.
400+ Now that you have a snapshot of the data, you can use it to create a read/write version
401+ of it. This can be used as a new storage volume for another mysql database. This operation
402+ creates a new FlexClone volume in FSx for ONTAP. Note that initially a FlexClone volume
403+ takes up almost no additional space; only a pointer table is created to point to the
404+ shared data blocks of the volume it is being cloned from.
381405
382406The first step is to create a PersistentVolume from the snapshot by executing:
383407``` bash