Commit 9d2090a

Fixed some typos.
1 parent b47ff72 commit 9d2090a

File tree: 1 file changed (+30 −27 lines changed)


Solutions/FSxN-as-PVC-for-EKS/README.md

````diff
@@ -15,11 +15,11 @@ A Unix based system with the following installed:
 - Deploy an FSx for Netapp ONTAP File System
 - Create security groups
 - Create policies and roles
-- Create secrtets in AWS SecretsManager
-- Create a VPC and subnets
-- Create a NAT Gateway
+- Create secrets in AWS SecretsManager
+- Create a VPC and subnets
+- Create a NAT Gateway
 - Create a Internet Gateway
-- Create an EC2 instance
+- Create an EC2 instance
 
 ## Installation Overview
 The overall process is as follows:
````
````diff
@@ -33,7 +33,7 @@ The overall process is as follows:
 - Create a secret in AWS SecretsManager to hold the FSxN password.
 - Deploy an EKS cluster.
 - Deploy an EC2 Linux based instance.
-- Create policies, roles and security groups to protect the new environement.
+- Create policies, roles and security groups to protect the new environment.
 - SSH to the Linux based instance to complete the setup:
 - Install the FSx for NetApp ONTAP Trident CSI driver.
 - Configure the Trident CSI driver.
````
````diff
@@ -42,7 +42,7 @@ The overall process is as follows:
 
 ## Detailed Instructions
 ### Clone the "NetApp/FSx-ONTAP-samples-scripts" repo from GitHub
-Execute the following commands to clone the repo and change into the directory where the
+Run the following commands to clone the repo and change into the directory where the
 terraform files are located:
 ```bash
 git clone https://github.com/NetApp/FSx-ONTAP-samples-scripts.git
````
````diff
@@ -70,7 +70,7 @@ terraform init
 ```
 
 ### Deploy the resources
-Run the following command to deploy the all resources:
+Run the following command to deploy all the resources:
 ```bash
 terraform apply --auto-approve
 ```
````
````diff
@@ -132,13 +132,15 @@ echo $user_ARN
 Note that if you are using an SSO to authenticate with AWS, then the actual username
 you need to add is slightly different than what is output from the above command.
 The following command will take the output from the above command and format it correctly:
+**ONLY RUN THIS COMMAND IF YOU ARE USING AN SSO TO AUTHENTICATE WITH AWS**
 ```bash
 user_ARN=$(aws sts get-caller-identity | jq -r '.Arn' | awk -F: '{split($6, parts, "/"); printf "arn:aws:iam::%s:role/aws-reserved/sso.amazonaws.com/%s\n", $5, parts[2]}')
 echo $user_ARN
 ```
 The above command will leverage a standard AWS role that is created when configuring AWS to use an SSO.
 
-To make the next few commands easy, create variables that hold the AWS region and EKS cluster name.
+As you can see above, a variable named "user_ARN" was create to hold the your user's ARN. To make
+the next few commands easy, also create variables that hold the AWS region and EKS cluster name.
 ```bash
 aws_region=<AWS_REGION>
 cluster_name=<EKS_CLUSTER_NAME>
````
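The `jq`/`awk` pipeline in the hunk above can be sanity-checked without an AWS session by feeding it a canned ARN. This is a minimal sketch, not part of the README; the account ID, SSO role name, and session name below are invented:

```shell
# Hypothetical ARN as returned by `aws sts get-caller-identity | jq -r '.Arn'`
# for an SSO-federated session (account ID and names are made up).
sample_arn='arn:aws:sts::123456789012:assumed-role/AWSReservedSSO_AdminAccess_abc123def456/user@example.com'

# Same awk program as in the diff: colon-separated field 5 is the account ID,
# field 6 is "assumed-role/<sso-role-name>/<session-name>".
user_ARN=$(echo "$sample_arn" | awk -F: '{split($6, parts, "/"); printf "arn:aws:iam::%s:role/aws-reserved/sso.amazonaws.com/%s\n", $5, parts[2]}')
echo "$user_ARN"
# → arn:aws:iam::123456789012:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdminAccess_abc123def456
```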
````diff
@@ -189,9 +191,9 @@ trident-operator-67d6fd899b-jrnt2 1/1 Running 0 20h
 
 ### Configure the Trident CSI backend to use FSx for NetApp ONTAP
 For the example below we are going to set up an iSCSI LUN for a MySQL
-database. To help facilicate that, we are going to setup Astra Trident as a backend provider.
+database. To help facilitate that, we are going to set up Astra Trident as a backend provider.
 Since we are going to be creating an iSCSI LUN, we are going to use its `ontap-san` driver.
-Astra Trident has serveral different drivers to choose from. You can read more about the
+Astra Trident has several different drivers to choose from. You can read more about the
 drivers it supports in the
 [Astra Trident documentation.](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details)
 
````
````diff
@@ -217,7 +219,7 @@ After making the following substitutions in the commands below:
 - \<fsx-svm-name> with the name of the SVM that was created.
 - \<secret-arn> with the ARN of the AWS SecretsManager secret that holds the FSxN password.
 
-Execute the following commands to configure Trident to use the FSxN file system that was
+Run them to configure Trident to use the FSxN file system that was
 created earlier using the `terraform --apply` command:
 ```
 cd ~/FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS
````
````diff
@@ -249,13 +251,14 @@ The output should look similar to this:
 NAME                    BACKEND NAME            BACKEND UUID                           PHASE  STATUS
 backend-fsx-ontap-san   backend-fsx-ontap-san   7a551921-997c-4c37-a1d1-f2f4c87fa629   Bound  Success
 ```
-If the status is Failed, then you can add the "--output=json" flag to the `kubectl get tridentbackendconfig`
+If the status is `Failed`, then you can add the "--output=json" flag to the `kubectl get tridentbackendconfig`
 command to get more information as to why it failed. Specifically, look at the "message" field in the output.
 The following command will get just the status messages:
 ```bash
 kubectl get tridentbackendconfig -n trident --output=json | jq '.items[] | .status.message'
 ```
 Once you have resolved any issues, you can remove the failed backend by running:
+**ONLY RUN THIS COMMAND IF THE STATUS IS FAILED**
 ```bash
 kubectl delete -n trident -f temp/backend-tbc-ontap-san.yaml
 ```
````
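The status-message filter in the hunk above can be tried against a canned response rather than a live cluster. A small sketch; the JSON below is a hypothetical, heavily trimmed `tridentbackendconfig` list, not real Trident output:

```shell
# Hypothetical, trimmed shape of `kubectl get tridentbackendconfig -n trident --output=json`.
cat > /tmp/tbc-sample.json <<'EOF'
{"items": [{"status": {"message": "Backend 'backend-fsx-ontap-san' created", "phase": "Bound"}}]}
EOF

# Same jq filter as in the diff; -r strips the surrounding JSON quotes.
message=$(jq -r '.items[] | .status.message' /tmp/tbc-sample.json)
echo "$message"
```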
````diff
@@ -269,7 +272,7 @@ The next step is to create a Kubernetes storage class by executing:
 ```bash
 kubectl create -f manifests/storageclass-fsxn-san.yaml
 ```
-To confirm it worked execute this command:
+To confirm it worked run this command:
 ```bash
 kubectl get storageclass
 ```
````
````diff
@@ -285,17 +288,17 @@ file.
 ## Create a stateful application
 Now that you have set up Kubernetes to use Trident to interface with FSxN for persistent
 storage, you are ready to create an application that will use it. In the example below,
-we are setting up a MySQL database that will use an iSCSI LUN configured on the FSxN file system.
+we are setting up a MySQL database that will use an iSCSI LUN provisioned on the FSxN file system.
 As mentioned before, if you want to use NFS instead of iSCSI, use the files that have
 "-nas" in their names instead of the "-san".
 
 ### Create a Persistent Volume Claim
-The first step is to create an iSCSI LUN for the database by executing:
+The first step is to create an iSCSI LUN for the database by running:
 
 ```bash
 kubectl create -f manifests/pvc-fsxn-san.yaml
 ```
-To check that it worked, execute:
+To check that it worked, run:
 ```bash
 kubectl get pvc
 ```
````
````diff
@@ -307,8 +310,8 @@ mysql-volume-san Bound pvc-1aae479e-4b27-4310-8bb2-71255134edf0 50Gi
 
 If you want to see what was created on the FSxN file system, you can log into it and take a look.
 You will want to login as the 'fsxadmin' user, using the password stored in the AWS SecretsManager secret.
-You can find the IP address of the FSxN file system in the output from the `terraform apply` command.
-Here is an example of logging in and listing all the LUNs on the system:
+You can find the IP address of the FSxN file system in the output from the `terraform apply` command, or
+from the AWS console. Here is an example of logging in and listing all the LUNs and volumes on the system:
 ```bash
 ubuntu@ip-10-0-4-125:~/FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS$ ssh -l fsxadmin 198.19.255.174
 (fsxadmin@198.19.255.174) Password:
````
````diff
@@ -326,11 +329,11 @@ Vserver Volume Aggregate State Type Size Available Used%
 ekssvm  ekssvm_root  aggr1  online  RW  1GB  972.4MB  0%
 ekssvm  trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
                      aggr1  online  RW  55GB  54.90GB  0%
-3 entries were displayed.
+2 entries were displayed.
 ```
 
 ### Deploy a MySQL database using the storage created above
-Now you can deploy a MySQL database by executing:
+Now you can deploy a MySQL database by running:
 ```bash
 kubectl create -f manifests/mysql-san.yaml
 ```
````
````diff
@@ -378,7 +381,7 @@ Records: 6 Duplicates: 0 Warnings: 0
 ```
 
 And, to confirm everything is there, here is an SQL statement to retrieve the data:
-```sql
+```
 mysql> select * from fsx;
 +------------+----------+-----------+
 | filesystem | capacity | region    |
````
````diff
@@ -420,19 +423,19 @@ The output should look like:
 ```bash
 volumesnapshotclass.snapshot.storage.k8s.io/fsx-snapclass created
 ```
-Note, that this storage class works for both LUNs and NFS volumes, so there aren't different versions
+Note that this storage class works for both LUNs and NFS volumes, so there aren't different versions
 of this file based on the storage type you are testing with.
 
 ### Create a snapshot of the MySQL data
-Now you can create a snapshot by executing:
+Now you can create a snapshot by running:
 ```bash
 kubectl create -f manifests/volume-snapshot-san.yaml
 ```
 The output should look like:
 ```bash
 volumesnapshot.snapshot.storage.k8s.io/mysql-volume-san-snap-01 created
 ```
-To confirm that the snapshot was created, execute:
+To confirm that the snapshot was created, run:
 ```bash
 kubectl get volumesnapshot
 ```
````
````diff
@@ -451,7 +454,7 @@ Vserver Volume Snapshot Size Total% Used%
 ekssvm  trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
         snapshot-bdce9310-9698-4b37-9f9b-d1d802e44f17
                  140KB  0%  0%
-
+```
 ## Clone the MySQL data to a new storage persistent volume
 Now that you have a snapshot of the data, you can use it to create a read/write version
 of it. This can be used as a new storage volume for another mysql database. This operation
````
````diff
@@ -463,7 +466,7 @@ The first step is to create a Persistent Volume Claim from the snapshot by execu
 ```bash
 kubectl create -f manifests/pvc-from-san-snapshot.yaml
 ```
-To check that it worked, execute:
+To check that it worked, run:
 ```bash
 kubectl get pvc
 ```
````
```
@@ -507,7 +510,7 @@ mysql-fsx-san-clone-d66d9d4bf-2r9fw 1/1 Running 0 14s
507510
kubectl exec -it $(kubectl get pod -l "app=mysql-fsx-san-clone" --namespace=default -o jsonpath='{.items[0].metadata.name}') -- mysql -u root -p
508511
```
509512
After you have logged in, check that the same data is in the new database:
510-
```bash
513+
```
511514
mysql> use fsxdatabase;
512515
mysql> select * from fsx;
513516
+------------+----------+-----------+
