  * [Install the Kubernetes Snapshot CRDs and Snapshot Controller](#Install-the-Kubernetes-Snapshot-CRDs-and-Snapshot-Controller)
  * [Create a snapshot class based on the CRD installed](#Create-a-snapshot-class-based-on-the-CRD-installed)
  * [Create a snapshot of the MySQL data](#Create-a-snapshot-of-the-MySQL-data)
  * [Clone the MySQL data to a new persistent volume](#Clone-the-MySQL-data-to-a-new-persistent-volume)
  * [Create a new MySQL database using the cloned volume](#Create-a-new-MySQL-database-using-the-cloned-volume)
  * [Confirm that the new database is up and running](#Confirm-that-the-new-database-is-up-and-running)
* [Final steps](#Final-steps)
The overall process is as follows:
- Ensure the prerequisites have been installed and configured.
- Clone this repo from GitHub.
- Make changes to the variables.tf file. Only one change, key_pair_name, is strictly required.
- Run `terraform init` to initialize the terraform environment.
- Run `terraform apply -auto-approve` to:
    - Create a new VPC with public and private subnets.
    - Deploy an FSx for NetApp ONTAP File System.
    - Create a secret in AWS SecretsManager to hold the FSxN password.
    - Deploy an EKS cluster.
    - Deploy an EC2 Linux-based instance, used as a jump server to complete the setup.
    - Create policies, roles, and security groups to protect the new environment.
- SSH to the Linux-based instance to complete the setup:
    - Install the FSx for NetApp ONTAP Trident CSI driver.
### Make any desired changes to the variables.tf file.
Variables that can be changed include:
- aws_region - The AWS region where you want to deploy the resources.
- aws_secrets_region - The region where the FSx password secret will be created.
- fsx_name - The name you want applied to the FSx for NetApp ONTAP File System. Must not already exist.
- fsx_password_secret_name - The base name of the AWS SecretsManager secret that will hold the FSxN password.
A random string will be appended to this name to ensure uniqueness.
- fsx_storage_capacity - The storage capacity of the FSx for NetApp ONTAP File System.
Read the "description" of the variable to see the valid range.
- fsx_throughput_capacity - The throughput capacity of the FSx for NetApp ONTAP File System.
Read the "description" of the variable to see the valid values.
- key_pair_name - The name of the EC2 key pair to use to access the jump server.
- secure_ips - The IP address ranges to allow SSH access to the jump server. The default is wide open.

:warning: You must change the key_pair_name variable, otherwise the deployment will fail.

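The "random string appended" behavior for fsx_password_secret_name can be pictured with a few lines of plain shell. This is an illustrative sketch only, not the repo's actual Terraform logic, and the base name below is made up:

```shell
# Sketch: build a unique secret name by appending a random suffix to a
# base name, which is why fsx_password_secret_name only needs a base name.
base_name="fsxn-secret"                               # hypothetical base name
suffix=$(od -An -N4 -tx1 /dev/urandom | tr -d ' \n')  # 8 random hex characters
secret_name="${base_name}-${suffix}"
echo "$secret_name"
```

Terraform does the equivalent internally, so a deployment never fails because a secret with the same name already exists.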
### Initialize the Terraform environment
Run the following command to initialize the terraform environment.
```
fsx-svm-name = "ekssvm"
region = "us-west-2"
vpc-id = "vpc-03ed6b1867d76e1a9"
```
:bulb: **Tip:** You will use the values in the commands below, so it is a good idea to copy the output somewhere
so you can easily reference it later.

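One simple way to keep those values handy is to stash the output in a file and pull values back out as needed. The file name and sample contents below are illustrative; use the real output from your own run:

```shell
# Save the terraform output block to a file (sample values are made up):
cat > /tmp/fsxn-eks-outputs.txt <<'EOF'
fsx-svm-name = "ekssvm"
region = "us-west-2"
vpc-id = "vpc-03ed6b1867d76e1a9"
EOF
# Later, recover a single value, stripping the surrounding quotes:
region=$(sed -n 's/^region = "\(.*\)"$/\1/p' /tmp/fsxn-eks-outputs.txt)
echo "$region"
```

The same `sed` pattern works for any of the other output keys by swapping the key name.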
> [!NOTE]
> An FSxN File System was created, along with a vserver (a.k.a. SVM). The default username
> for the FSxN File System is 'fsxadmin', and the default username for the vserver is 'vsadmin'. The
> password for both of these users is the same, and is what is stored in the AWS SecretsManager secret
> shown above. Since Terraform was used to create the secret, the password is stored in
> plain text in Terraform's "state" database, so it is **HIGHLY** recommended that you change
> the passwords via the AWS Management Console and then update the password in
> the AWS SecretsManager secret. You can update the 'username' key in the secret if you
> want, but it must be a vserver admin user, not a system level user. This secret is used
> by Astra Trident, which always logs in via the vserver management LIF and therefore
> must use a vserver admin user. If you want to create a separate secret for the
> 'fsxadmin' user, feel free to do so.

### SSH to the jump server to complete the setup
Use the following command to 'ssh' to the jump server:
```bash
cluster_name=<EKS_CLUSTER_NAME>
```
Of course, replace <AWS_REGION> with the region where the resources were deployed, and replace
<EKS_CLUSTER_NAME> with the name of your EKS cluster. Both of these values can be found
in the output of the `terraform apply` command.

Once you have your variables set, add the EKS access-entry by running these commands:
For the example below we are going to set up an NFS file system for a MySQL
database. To help facilitate that, we are going to set up Astra Trident as a backend provider.
Since we are going to be creating an NFS file system, we are going to use its `ontap-nas` driver.
Astra Trident has several different drivers to choose from. You can read more about the
different drivers it supports in the
[Astra Trident documentation](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details).

:memo: **Tip:** If you want to use an iSCSI LUN instead of an NFS file system, please refer to [these instructions](README-san.md).

In the commands below you're going to need the FSxN ID, the FSx SVM name, and the
secret ARN. All of that information can be obtained from the output
Note that a copy of this repo has been put into the ubuntu user's home directory on the
jump server for you. Don't confuse this copy of the repo with the one you
used to create the environment earlier. This copy will not have the Terraform
state database, nor your changes to the variables.tf file, but it does have
other files you'll need to complete the setup.

After making the following substitutions in the commands below:

```bash
export SECRET_ARN=<secret-arn>
envsubst < manifests/backend-tbc-ontap-nas.tmpl > temp/backend-tbc-ontap-nas.yaml
kubectl create -n trident -f temp/backend-tbc-ontap-nas.yaml
```
:memo: **Tip:** Put the above commands in your favorite text editor and make the substitutions there. Then copy and paste the commands into the terminal.

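If you're unfamiliar with `envsubst`, this toy version shows what that step does: every `${VAR}` reference in the template is replaced with the exported value. The variable name and template line here are made up, and `sed` stands in for `envsubst` for just the one variable so the sketch runs anywhere:

```shell
export SVM_NAME=ekssvm                 # hypothetical variable and value
template='svmName: ${SVM_NAME}'        # one made-up template line
# envsubst performs this substitution for every exported variable;
# sed is used here as a dependency-free stand-in for the single variable.
rendered=$(printf '%s\n' "$template" | sed "s/\${SVM_NAME}/${SVM_NAME}/")
echo "$rendered"
```

The real template substitutes the FSxN ID, SVM name, and secret ARN the same way, which is why those variables must be exported before running `envsubst`.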
To get more information regarding how the backend was configured, look at the
`temp/backend-tbc-ontap-nas.yaml` file.

The output should look similar to this:
```
NAME                    BACKEND NAME            BACKEND UUID                           PHASE   STATUS
backend-fsx-ontap-nas   backend-fsx-ontap-nas   7a551921-997c-4c37-a1d1-f2f4c87fa629   Bound   Success
```
If the status is `Failed`, then you can add the `--output=json` option to the `kubectl get tridentbackendconfig`
command to get more information as to why it failed. Specifically, look at the "message" field in the output.
Once you have resolved any issues, you can remove the failed backend by running:
```bash
kubectl delete -n trident -f temp/backend-tbc-ontap-nas.yaml
```
Now you can re-run the `kubectl create -n trident -f temp/backend-tbc-ontap-nas.yaml` command.
If the issue was with one of the variables that was substituted in, then you will need to
rerun the `envsubst` command to create a new `temp/backend-tbc-ontap-nas.yaml` file
before running the `kubectl create -n trident -f temp/backend-tbc-ontap-nas.yaml` command.

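As a rough sketch of what inspecting the "message" field involves, here is the idea on a made-up JSON document. Against real `kubectl ... --output=json` output you would normally use a JSON-aware tool such as `jq` rather than this crude text match:

```shell
# Made-up sample resembling the status portion of the JSON output:
json='{"status":{"message":"could not log in to the SVM","phase":"Failed"}}'
# Crude extraction of the "message" value; fine for a quick look,
# but jq would be the robust choice for real output.
message=$(printf '%s' "$json" | sed -n 's/.*"message":"\([^"]*\)".*/\1/p')
echo "$message"
```

A message like the sample above usually points at the credentials in the secret, which is why the "message" field is the first thing to check.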
To see how MySQL was configured, check out the `manifests/mysql-nas.yaml` file.

### Populate the MySQL database with data

To confirm that the database can read and write to the persistent storage you need
to put some data in the database. Do that by first logging into the MySQL instance using the
command below. It will prompt for a password. In the yaml file used to create the database,
you'll see that we set that to `Netapp1!`
```
ekssvm    trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
          snapshot-bdce9310-9698-4b37-9f9b-d1d802e44f17
                                                      140KB     0%    0%
```
## Clone the MySQL data to a new persistent volume
Now that you have a snapshot of the data, you can use it to create a read/write version
of it. This can be used as a new storage volume for another MySQL database. This operation
creates a new FlexClone volume in FSx for ONTAP. Note that initially a FlexClone volume