
Commit 67dfa2f

Made more formatting changes.
1 parent 6990f21 commit 67dfa2f


Solutions/FSxN-as-PVC-for-EKS/README.md

Lines changed: 17 additions & 10 deletions
@@ -91,8 +91,8 @@ Read the "description" of the variable to see the valid range.
 Read the "description" of the variable to see valid values.
 - key_pair_name - The name of the EC2 key pair to use to access the jump server.
 - secure_ips - The IP address ranges to allow SSH access to the jump server. The default is wide open.
-:warning: You must change the key_pair_name variable, otherwise the deployment will fail.

+:warning: You must change the key_pair_name variable, otherwise the deployment will not complete succesfully.
 ### Initialize the Terraform environment
 Run the following command to initialize the terraform environment.
 ```bash
@@ -126,7 +126,7 @@ so you can easily reference it later.
 > Note that an FSxN File System was created, with a vserver (a.k.a. SVM). The default username
 > for the FSxN File System is 'fsxadmin'. And the default username for the vserver is 'vsadmin'. The
 > password for both of these users is the same and is what is stored in the AWS SecretsManager secret
-> shown above. Note that since Terraform was used to create the secret, the password is stored in
+> shown above. Since Terraform was used to create the secret, the password is stored in
 > plain text in its "state" database and therefore it is **HIGHLY** recommended that you change
 > the password to something else by first changing the passwords via the AWS Management Console and
 > then updating the password in the AWS SecretsManager secret. You can update the 'username' key in
@@ -164,7 +164,7 @@ Note that if you are using an SSO to authenticate with AWS, then the actual user
 you need to add is slightly different than what is output from the above command.
 The following command will take the output from the above command and format it correctly:

-:warning: Only run this command if you are using an sso to authenticate with aws
+:warning: Only run this command if you are using an SSO to authenticate with aws.
 ```bash
 user_ARN=$(aws sts get-caller-identity | jq -r '.Arn' | awk -F: '{split($6, parts, "/"); printf "arn:aws:iam::%s:role/aws-reserved/sso.amazonaws.com/%s\n", $5, parts[2]}')
 echo $user_ARN
@@ -229,7 +229,7 @@ Astra Trident has several different drivers to choose from. You can read more ab
 different drivers it supports in the
 [Astra Trident documentation.](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details)

-:memo: **Tip:** If you want to use an iSCSI LUN instead of an NFS file system, please refer to [these instructions](README-san.md).
+:memo: **Note:** If you want to use an iSCSI LUN instead of an NFS file system, please refer to [these instructions](README-san.md).

 In the commands below you're going to need the FSxN ID, the FSX SVM name, and the
 secret ARN. All of that information can be obtained from the output
@@ -259,7 +259,7 @@ export SECRET_ARN=<secret-arn>
 envsubst < manifests/backend-tbc-ontap-nas.tmpl > temp/backend-tbc-ontap-nas.yaml
 kubectl create -n trident -f temp/backend-tbc-ontap-nas.yaml
 ```
-:memo: **Tip:** Put the above commands in your favorite text editor and make the substitutions there. Then copy and paste the commands into the terminal.
+:bulb: **Tip:** Put the above commands in your favorite text editor and make the substitutions there. Then copy and paste the commands into the terminal.

 To get more information regarding how the backed was configured, look at the
 `temp/backend-tbc-ontap-nas.yaml` file.
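The backend created in this hunk comes from the `backend-tbc-ontap-nas.tmpl` template, which is not shown in the diff. As a rough, hypothetical sketch only (the actual template in the repository may differ), a TridentBackendConfig for the `ontap-nas` driver generally looks something like the following, with placeholders standing in for the FSxN management address, SVM name, and secret ARN exported in the commands above:

```bash
# Hypothetical sketch only -- see manifests/backend-tbc-ontap-nas.tmpl for the real template.
# The <...> placeholders stand in for values taken from the Terraform output.
cat <<'EOF' > temp/backend-tbc-ontap-nas-sketch.yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-fsx-ontap-nas
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-nas        # NFS driver for FSx for ONTAP
  managementLIF: <fsxn-management-ip> # management endpoint of the FSxN file system
  svm: <fsx-svm-name>                 # the SVM (vserver) created by the Terraform deployment
  credentials:
    name: <secret-arn>                # AWS SecretsManager secret holding the SVM credentials
    type: awsarn                      # tells Trident the credentials live in AWS Secrets Manager
EOF
```

Creating an object like this in the `trident` namespace (as the `kubectl create -n trident` command above does with the real file) is what registers the FSxN SVM as a storage backend for Trident.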
@@ -330,6 +330,8 @@ NAME STATUS VOLUME CAPACITY
 mysql-volume-nas Bound pvc-1aae479e-4b27-4310-8bb2-71255134edf0 50Gi RWO fsx-basic-nas <unset> 114m
 ```

+To see more details on how the PVC was defined, look at the `manifests/pvc-fsxn-nas.yaml` file.
+
 If you want to see what was created on the FSxN file system, you can log into it and take a look.
 You will want to login as the 'fsxadmin' user, using the password stored in the AWS SecretsManager secret.
 You can find the IP address of the FSxN file system in the output from the `terraform apply` command, or
@@ -346,6 +348,9 @@ ekssvm ekssvm_root aggr1 online RW 1GB 972.4MB 0%
 ekssvm trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
 aggr1 online RW 50GB 50GB 0%
 2 entries were displayed.
+
+FsxId0887a493c777c5122::> quit
+Goodbye
 ```

 ### Deploy a MySQL database using the storage created above
@@ -428,7 +433,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
 kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
 cd ..
 ```
-### Create a snapshot class based on the CRD instsalled
+### Create a snapshot class based on the CRD installed
 Create a snapshot class by executing:
 ```bash
 kubectl create -f manifests/volume-snapshot-class.yaml
@@ -437,9 +442,9 @@ The output should look like:
 ```bash
 volumesnapshotclass.snapshot.storage.k8s.io/fsx-snapclass created
 ```
-
+To see how the snapshot class was defined, look at the `manifests/volume-snapshot-class.yaml` file.
 ### Create a snapshot of the MySQL data
-Now you can create a snapshot by running:
+Now that you have defined the snapshot class you can create a snapshot by running:
 ```bash
 kubectl create -f manifests/volume-snapshot-nas.yaml
 ```
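Neither of the two manifests referenced in this hunk appears in the diff. As a hypothetical sketch only (the files in the repository may be defined differently), a snapshot class backed by Trident's CSI driver and a snapshot of the MySQL PVC could look roughly like this, using the object names that appear in the command output:

```bash
# Hypothetical sketch only -- see manifests/volume-snapshot-class.yaml and
# manifests/volume-snapshot-nas.yaml for the actual definitions used in this solution.
cat <<'EOF' > temp/snapshot-sketch.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: fsx-snapclass                 # matches the name shown in the output above
driver: csi.trident.netapp.io         # Trident's CSI driver takes the ONTAP snapshot
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-volume-nas-snap-01      # snapshot name shown later in this diff
spec:
  volumeSnapshotClassName: fsx-snapclass
  source:
    persistentVolumeClaimName: mysql-volume-nas   # the PVC backing the MySQL database
EOF
```

Once the snapshot class exists, creating the VolumeSnapshot object triggers an ONTAP snapshot of the volume backing the PVC, which is what the `snapshot show` output later in this diff confirms.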
@@ -456,9 +461,10 @@ The output should look like:
 NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
 mysql-volume-nas-snap-01 true mysql-volume-nas 50Gi fsx-snapclass snapcontent-bdce9310-9698-4b37-9f9b-d1d802e44f17 2m18s 2m18s
 ```
+To see more details on how the snapshot was defined, look at the `manifests/volume-snapshot-nas.yaml` file.

 You can log onto the FSxN file system to see that the snapshot was created there:
-```bash
+```
 FsxId0887a493c777c5122::> snapshot show -volume trident_pvc_*
 ---Blocks---
 Vserver Volume Snapshot Size Total% Used%
@@ -489,6 +495,7 @@ NAME STATUS VOLUME CAP
 mysql-volume-nas Bound pvc-1aae479e-4b27-4310-8bb2-71255134edf0 50Gi RWO fsx-basic-nas <unset> 125m
 mysql-volume-nas-clone Bound pvc-ceb1b2c2-de35-4011-8d6e-682b6844bf02 50Gi RWO fsx-basic-nas <unset> 2m22s
 ```
+To see more details on how the PVC was defined, look at the `manifests/pvc-from-nas-snapshot.yaml` file.

 To check it on the FSxN side, you can run:
 ```bash
@@ -541,7 +548,7 @@ mysql> select * from fsx;

 ## Final steps

-At this point you don't need the jump server created to configure the EKS environment for
+At this point you don't need the jump server used to configure the EKS environment for
 the FSxN File System, so feel free to `terminate` it (i.e. destroy it).

 Other than that, you are welcome to deploy other applications that need persistent storage.
