@@ -47,7 +47,7 @@ cd FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS/terraform
 Variables that can be changed include:
 - aws_region - The AWS region where you want to deploy the resources.
 - fsx_name - The name you want applied to the FSx for NetApp ONTAP File System. Must not already exist.
-- fsx_password_secret_name - A basename of the AWS SecretsManager secret that will hold the FSxN password.
+- fsx_password_secret_name - A base name of the AWS SecretsManager secret that will hold the FSxN password.
 A random string will be appended to this name to ensure uniqueness.
 - fsx_throughput_capacity - The throughput capacity of the FSx for NetApp ONTAP File System.
 Read the "description" of the variable to see valid values.
@@ -58,13 +58,13 @@ Read the "description" of the variable to see the valid range.
 - secure_ips - The IP address ranges to allow SSH access to the jump server. The default is wide open.

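The walkthrough assumes you set these values by editing variables.tf. Purely as an alternative sketch (not something the original instructions call for), the same variables could instead be overridden on the command line when you reach the apply step below; the values here are placeholders, not recommendations:
``` bash
# Hypothetical overrides -- the variable names come from the list above, the values are placeholders.
terraform apply --auto-approve \
  -var 'aws_region=us-west-2' \
  -var 'fsx_name=eks-fsxn-demo' \
  -var 'fsx_throughput_capacity=256'
```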
 ### Initialize the Terraform environment
-Run 'terraform init' to initialize the terraform environment.
+Run the following command to initialize the terraform environment.
 ``` bash
 terraform init
 ```

 ### Deploy the resources
-Run 'terraform apply --auto-approve' to deploy the resources:
+Run the following command to deploy all the resources:
 ``` bash
 terraform apply --auto-approve
 ```
@@ -85,32 +85,32 @@ vpc-id = "vpc-043a3d602b64e2f56"
 You will use the values in the commands below, so it's probably a good idea to copy the output somewhere
 so you can easily reference it later.

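If you do lose track of the output, Terraform can re-print just the output values at any time from the saved state, without re-applying anything. A minimal sketch (run from the same terraform directory you deployed from; the file name is only an example):
``` bash
# Re-display the deployment outputs from the saved state and keep a copy for later reference.
terraform output | tee ~/fsxn-eks-outputs.txt
```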
-Note that a FSxN File System was created, with a vserver (a.k.a. SVM). The default username
-for the FSxN File System is 'fsxadmin'. And, the default username for the vserver is 'vsadmin'. The
+Note that an FSxN File System was created, with a vserver (a.k.a. SVM). The default username
+for the FSxN File System is 'fsxadmin'. And the default username for the vserver is 'vsadmin'. The
 password for both of these users is the same and is what is stored in the AWS SecretsManager secret
 shown above. Note that since Terraform was used to create the secret, the password is stored in
 plain text, therefore it is **HIGHLY** recommended that you change the password to something else
 by first changing the passwords via the AWS Management Console and then updating the password in
 the AWS SecretsManager secret. You can update the 'username' key in the secret if you want, but
 it must be a vserver admin user, not a system level user. This secret is used by Astra
-Trident and it will always login via the vserver managmenet LIF and therefore it must be a
+Trident and it will always login via the vserver management LIF and therefore it must be a
 vserver admin user. If you want to create a separate secret for the 'fsxadmin' user, feel free
 to do so.

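As a rough sketch of that last step: once the passwords have been changed through the AWS Management Console, the secret can be updated with the AWS CLI. The secret name placeholder and the JSON key names below are assumptions, so check the actual secret that Terraform created before running anything like this:
``` bash
# The secret name and key names are assumptions -- verify them against the secret Terraform created.
aws secretsmanager put-secret-value \
  --secret-id <fsx_password_secret_name_with_random_suffix> \
  --secret-string '{"username":"vsadmin","password":"<new-password>"}'
```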
 ### SSH to the jump server to complete the setup
-Use the following command to 'ssh' to the jump start server:
+Use the following command to 'ssh' to the jump server:
 ``` bash
 ssh -i <path_to_key_pair> ubuntu@<jump_server_public_ip>
 ```
 Where:
 - <path_to_key_pair> is the file path to where you have stored the key_pair that you
 referenced in the variables.tf file.
-- <jump_server_public_ip> is the IP address of the jump start server that was displayed
+- <jump_server_public_ip> is the IP address of the jump server that was displayed
 in the output from the `terraform apply` command.

 ### Configure the 'aws' CLI
 There are various ways to configure the AWS CLI. If you are unsure how to do it, please
-refer to this URL for instructions:
+refer to the AWS documentation for instructions:
 [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html)

 **NOTE:** When asked for a default region, use the region you specified in the variables.tf file.
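For instance (a sketch only, and any of the methods described in the linked page are fine), the default region can be set non-interactively:
``` bash
# Replace the placeholder with the aws_region value from your variables.tf file.
aws configure set region <aws_region_from_variables_tf>
```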
@@ -124,7 +124,7 @@ You can do that via this command:
 user_ARN=$(aws sts get-caller-identity --query Arn --output text)
 echo $user_ARN
 ```
-Note that if you are using an SSO to authenicate with AWS, then the actual username
+Note that if you are using an SSO to authenticate with AWS, then the actual username
 you need to add is slightly different than what is output from the above command.
 The following command will take the output from the above command and format it correctly:
 ``` bash
@@ -156,7 +156,7 @@ aws eks update-kubeconfig --name $cluster_name --region $aws_region
 Of course the above assumes the cluster_name and aws_region variables are still set from
 running the commands above.

-To confirm you are able to communciate with the EKS cluster run the following command:
+To confirm you can communicate with the EKS cluster, run the following command:
 ``` bash
 kubectl get nodes
 ```
@@ -184,21 +184,18 @@ trident-operator-67d6fd899b-jrnt2 1/1 Running 0 20h

 ### Configure the Trident CSI backend to use FSx for NetApp ONTAP
 For the example below we are going to set up an iSCSI LUN for a MySQL
-database. Because of that, we are going to setup Astra Trident as a backend provider
-and configure it to use its `ontap-san` driver. You can read more about
-the different type of drivers it supports in the
-[Astra Trident documentation](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details)
-documentation.
+database. To help facilitate that, we are going to set up Astra Trident as a backend provider.
+Since we are going to be creating an iSCSI LUN, we are going to use its `ontap-san` driver.
+Astra Trident has various drivers that you might be interested in. You can read
+more about the drivers it supports in the
+[Astra Trident documentation](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details).

 As you go through the steps below, you will notice that most of the files have "-san" in their
 name. If you want to see an example of using NFS instead of iSCSI, then there are equivalent
 files that have "-nas" in their name. You can even create two mysql databases, one using an iSCSI LUN
 and another using NFS.

-The first step is to define a backend provider and, in the process, give it the information
-it needs to make changes (e.g. create volumes, and LUNs) to the FSxN file system.
-
-In the command below you're going to need the FSxN ID, the FSX SVM name, and the
+In the commands below you're going to need the FSxN ID, the FSX SVM name, and the
 secret ARN. All of that information can be obtained from the output
 from the `terraform apply` command. If you have lost that output, you can always log back
 into the server where you ran `terraform apply` and simply run it again. It should
@@ -210,11 +207,13 @@ used to create the environment with earlier. This copy will not have the terrafo
 state information, nor your changes to the variables.tf file, but it does have
 other files you'll need to complete the setup.
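If the terraform output isn't handy while you are on the jump server, the same identifiers can also be looked up with the AWS CLI. This is only a sketch and assumes the CLI is configured for the region you deployed into; the secret name placeholder is whatever fsx_password_secret_name expanded to:
``` bash
# List FSx file system IDs and their SVM names (the FSxN ID and SVM name mentioned above).
aws fsx describe-file-systems --query 'FileSystems[].FileSystemId' --output text
aws fsx describe-storage-virtual-machines --query 'StorageVirtualMachines[].[FileSystemId,Name]' --output table
# Look up the ARN of the secret that holds the FSxN password.
aws secretsmanager describe-secret --secret-id <fsx_password_secret_name_with_suffix> --query ARN --output text
```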

-After making the following substitutions:
+After making the following substitutions in the commands below:
 - \<fsx-id> with the FSxN ID.
 - \<fsx-svm-name> with the name of the SVM that was created.
 - \<secret-arn> with the ARN of the AWS SecretsManager secret that holds the FSxN password.
-Execute the following commands to configure Trident to use the `ontap-san` driver.
+
+Execute the following commands to configure Trident to use the FSxN file system that was
+created earlier using the `terraform apply` command:
 ``` bash
 cd ~/FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS
 mkdir temp
@@ -247,7 +246,7 @@ backend-fsx-ontap-san backend-fsx-ontap-san 7a551921-997c-4c37-a1d1-f2f4c87f
 ```

 ### Create a Kubernetes storage class
-The next step is to create a Kubernetes store class by executing:
+The next step is to create a Kubernetes storage class by executing:
 ``` bash
 kubectl create -f manifests/storageclass-fsxn-san.yaml
 ```
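This isn't part of the original steps, but as a quick sanity check you can list the storage classes the cluster knows about; the name of the new class comes from storageclass-fsxn-san.yaml, so don't assume a particular value:
``` bash
# Lists all storage classes in the cluster, including the one just created from the manifest.
kubectl get storageclass
```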
@@ -306,7 +305,7 @@ To see how the MySQL was configured, check out the `manifests/mysql-san.yaml` fi

 ### Populate the MySQL database with data

-Now to confirm that the database is able to read and write to the persistent storage you need
+Now to confirm that the database can read and write to the persistent storage, you need
 to put some data in the database. Do that by first logging into the MySQL instance using the
 command below. It will prompt for a password. In the yaml file used to create the database,
 you'll see that we set that to `Netapp1!`
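The exact login command is given in the next step of the README. Purely as an illustration of the idea (the label selector below is an assumption, not something taken from the repo's manifests), it is along these lines:
``` bash
# Illustrative only -- adjust the label selector (or use the pod name directly) to match the MySQL pod.
MYSQL_POD=$(kubectl get pods -l app=mysql-san -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$MYSQL_POD" -- mysql -u root -p
```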
@@ -378,7 +377,7 @@ volumesnapshotclass.snapshot.storage.k8s.io/fsx-snapclass created
 Note that this snapshot class works for both LUNs and NFS volumes, so there aren't different versions
 of this file based on the storage type you are testing with.

-The findal step is to create a snapshot of the data by executing:
+The final step is to create a snapshot of the data by executing:
 ``` bash
 kubectl create -f manifests/volume-snapshot-san.yaml
 ```
@@ -396,18 +395,18 @@ NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT
 mysql-volume-san-snap-01 true mysql-volume-san 50Gi fsx-snapclass snapcontent-b3491f26-47e3-484c-aae0-69d45087d6c7 4s 5s
 ```

-## Clone the MySQL data to a new storage persisent volume
+## Clone the MySQL data to a new storage persistent volume
 Now that you have a snapshot of the data, you can use it to create a read/write version
 of it. This can be used as a new storage volume for another mysql database. This operation
-creates a new FlexClone volume in FSx for ONTAP. Note that initially a FlexClone volumes
+creates a new FlexClone volume in FSx for ONTAP. Note that initially a FlexClone volume
 takes up almost no additional space; only a pointer table is created to point to the
 shared data blocks of the volume it is being cloned from.

-The first step is to create a PersistentVolume from the snapshot by executing:
+The first step is to create a Persistent Volume Claim from the snapshot by executing:
 ``` bash
 kubectl create -f manifests/pvc-from-san-snapshot.yaml
 ```
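As an optional check (again, not one of the original steps), the new claim should report a Bound status once Trident has created the FlexClone volume behind it; the claim's name comes from pvc-from-san-snapshot.yaml, so don't assume a specific one:
``` bash
# Shows the persistent volume claims in the current namespace and whether they are bound.
kubectl get pvc
```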
-## Create a new MySQL database using the cloned storage
+## Create a new MySQL database using the cloned volume
 Now that you have a new storage volume, you can create a new MySQL database that uses it by executing:
 ``` bash
 kubectl create -f manifests/mysql-san-clone.yaml