Commit b7ccdc3

Made additional formatting changes.

1 parent 67dfa2f commit b7ccdc3

2 files changed: +23 -20 lines

Solutions/FSxN-as-PVC-for-EKS/README-san.md

Lines changed: 19 additions & 15 deletions
@@ -3,7 +3,7 @@ For the example below we are going to set up an iSCSI LUN for a MySQL
 database. To help facilitate that, we are going to set up Astra Trident as a backend provider.
 Since we are going to be creating an iSCSI LUN, we are going to use its `ontap-san` driver.
 Astra Trident has several different drivers to choose from. You can read more about the
-drivers it supports in the
+different drivers it supports in the
 [Astra Trident documentation.](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details)

 In the commands below you're going to need the FSxN ID, the FSX SVM name, and the
@@ -15,7 +15,7 @@ state that there aren't any changes to be made and simply show the output again.
 Note that a copy of this repo has been put into ubuntu's home directory on the
 jump server for you. Don't be confused with this copy of the repo and the one you
 used to create the environment with earlier. This copy will not have the terraform
-state information, nor your changes to the variables.tf file, but it does have
+state database, nor your changes to the variables.tf file, but it does have
 other files you'll need to complete the setup.

 After making the following substitutions in the commands below:
@@ -34,6 +34,7 @@ export SECRET_ARN=<secret-arn>
 envsubst < manifests/backend-tbc-ontap-san.tmpl > temp/backend-tbc-ontap-san.yaml
 kubectl create -n trident -f temp/backend-tbc-ontap-san.yaml
 ```
+:bulb: **Tip:** Put the above commands in your favorite text editor and make the substitutions there. Then copy and paste the commands into the terminal.

 To get more information regarding how the backend was configured, look at the
 `temp/backend-tbc-ontap-san.yaml` file.
@@ -47,15 +48,15 @@ The output should look similar to this:
 NAME                    BACKEND NAME            BACKEND UUID                           PHASE   STATUS
 backend-fsx-ontap-san   backend-fsx-ontap-san   7a551921-997c-4c37-a1d1-f2f4c87fa629   Bound   Success
 ```
-If the status is `Failed`, then you can add the "--output=json" flag to the `kubectl get tridentbackendconfig`
+If the status is `Failed`, then you can add the "--output=json" option to the `kubectl get tridentbackendconfig`
 command to get more information as to why it failed. Specifically, look at the "message" field in the output.
 The following command will get just the status messages:
 ```bash
 kubectl get tridentbackendconfig -n trident --output=json | jq '.items[] | .status.message'
 ```
 Once you have resolved any issues, you can remove the failed backend by running:

-**ONLY RUN THIS COMMAND IF THE STATUS IS FAILED**
+:warning: **Warning:** Only run this command if the backend is in a failed state and you are ready to get rid of it.
 ```bash
 kubectl delete -n trident -f temp/backend-tbc-ontap-san.yaml
 ```
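If you want to see exactly what that jq filter selects, you can run it against canned output. The JSON below is a hand-made sample shaped like `kubectl get tridentbackendconfig --output=json`; the failure message is invented for illustration.

```bash
# Hand-made sample shaped like the kubectl JSON output; the message text
# is fabricated for this example.
cat > /tmp/tbc-sample.json <<'EOF'
{
  "items": [
    {
      "metadata": { "name": "backend-tbc-ontap-san" },
      "status": {
        "phase": "Failed",
        "message": "error initializing backend: unable to log in to SVM"
      }
    }
  ]
}
EOF
# Same filter as in the README, reading from the sample instead of kubectl:
jq '.items[] | .status.message' /tmp/tbc-sample.json
# -> "error initializing backend: unable to log in to SVM"
```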
@@ -102,6 +103,7 @@ The output should look similar to this:
 NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
 mysql-volume-san   Bound    pvc-1aae479e-4b27-4310-8bb2-71255134edf0   50Gi       RWO            fsx-basic-san   <unset>                 114m
 ```
+To see more details on how the PVC was defined, look at the `manifests/pvc-fsxn-san.yaml` file.

 If you want to see what was created on the FSxN file system, you can log into it and take a look.
 You will want to login as the 'fsxadmin' user, using the password stored in the AWS SecretsManager secret.
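That fsxadmin password can be retrieved with the AWS CLI (`aws secretsmanager get-secret-value --secret-id <secret-arn>`). The sketch below parses a fabricated copy of that command's JSON response; the layout of the secret string (a JSON object with username and password keys) is an assumption, not something this diff confirms.

```bash
# Fabricated response shaped like `aws secretsmanager get-secret-value`;
# the ARN, secret layout, and values are all made up.
cat > /tmp/secret-sample.json <<'EOF'
{
  "ARN": "arn:aws:secretsmanager:us-west-2:123456789012:secret:fsxn-secret-AbCdEf",
  "Name": "fsxn-secret",
  "SecretString": "{\"username\": \"fsxadmin\", \"password\": \"example-password\"}"
}
EOF
# SecretString is itself a JSON document, so parse it twice with fromjson:
jq -r '.SecretString | fromjson | .password' /tmp/secret-sample.json
# -> example-password
```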
@@ -125,6 +127,9 @@ ekssvm ekssvm_root aggr1 online RW 1GB 972.4MB 0%
 ekssvm    trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
                     aggr1     online     RW     55GB     54.90GB     0%
 2 entries were displayed.
+
+FsxId0887a493c777c5122::> quit
+Goodbye
 ```

 ### Deploy a MySQL database using the storage created above
@@ -147,14 +152,13 @@ To see how the MySQL was configured, check out the `manifests/mysql-san.yaml` fi

 ### Populate the MySQL database with data

-Now to confirm that the database can read and write to the persistent storage you need
+To confirm that the database can read and write to the persistent storage you need
 to put some data in the database. Do that by first logging into the MySQL instance using the
 command below. It will prompt for a password. In the yaml file used to create the database,
 you'll see that we set that to `Netapp1!`
 ```bash
 kubectl exec -it $(kubectl get pod -l "app=mysql-fsx-san" --namespace=default -o jsonpath='{.items[0].metadata.name}') -- mysql -u root -p
 ```
-
 After you have logged in, here is a session showing an example of creating a database, then creating a table, then inserting
 some values into the table:
 ```
@@ -208,7 +212,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
 kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
 cd ..
 ```
-### Create a snapshot class based on the CRD instsalled
+### Create a snapshot class based on the CRD installed
 Create a snapshot class by executing:
 ```bash
 kubectl create -f manifests/volume-snapshot-class.yaml
@@ -217,11 +221,9 @@ The output should look like:
 ```bash
 volumesnapshotclass.snapshot.storage.k8s.io/fsx-snapclass created
 ```
-Note that this storage class works for both LUNs and NFS volumes, so there aren't different versions
-of this file based on the storage type you are testing with.
-
+To see how the snapshot class was defined, look at the `manifests/volume-snapshot-class.yaml` file.
 ### Create a snapshot of the MySQL data
-Now you can create a snapshot by running:
+Now that you have defined the snapshot class you can create a snapshot by running:
 ```bash
 kubectl create -f manifests/volume-snapshot-san.yaml
 ```
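The `manifests/volume-snapshot-san.yaml` file itself isn't reproduced in this diff. As a sketch, a minimal VolumeSnapshot that would match the snapshot name, class, and source PVC quoted in the surrounding output could look like this; the real manifest may differ.

```bash
# Sketch of a minimal VolumeSnapshot manifest. The names are taken from
# command output quoted elsewhere in the README; the actual
# manifests/volume-snapshot-san.yaml may be laid out differently.
cat > /tmp/volume-snapshot-san.yaml <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-volume-san-snap-01
spec:
  volumeSnapshotClassName: fsx-snapclass
  source:
    persistentVolumeClaimName: mysql-volume-san
EOF
# Quick sanity check that the snapshot points at the intended PVC:
grep 'persistentVolumeClaimName' /tmp/volume-snapshot-san.yaml
```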
@@ -240,7 +242,7 @@ mysql-volume-san-snap-01 true mysql-volume-san
 ```

 You can log onto the FSxN file system to see that the snapshot was created there:
-```bash
+```
 FsxId0887a493c777c5122::> snapshot show -volume trident_pvc_*
                                                         ---Blocks---
 Vserver   Volume     Snapshot                           Size   Total%   Used%
@@ -249,7 +251,7 @@ ekssvm trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
           snapshot-bdce9310-9698-4b37-9f9b-d1d802e44f17
                                                  140KB     0%     0%
 ```
-## Clone the MySQL data to a new storage persistent volume
+## Clone the MySQL data to a new persistent volume
 Now that you have a snapshot of the data, you can use it to create a read/write version
 of it. This can be used as a new storage volume for another mysql database. This operation
 creates a new FlexClone volume in FSx for ONTAP. Note that initially a FlexClone volume
@@ -271,6 +273,7 @@ NAME                     STATUS   VOLUME                                     CAP
 mysql-volume-san         Bound    pvc-1aae479e-4b27-4310-8bb2-71255134edf0   50Gi   RWO   fsx-basic-san   <unset>   125m
 mysql-volume-san-clone   Bound    pvc-ceb1b2c2-de35-4011-8d6e-682b6844bf02   50Gi   RWO   fsx-basic-san   <unset>   2m22s
 ```
+To see more details on how the PVC was defined, look at the `manifests/pvc-from-san-snapshot.yaml` file.

 To check it on the FSxN side, you can run:
 ```bash
@@ -299,7 +302,8 @@ csi-snapshotter-0 3/3 Running 0 22h
 mysql-fsx-san-695b497757-8n6bb        1/1   Running   0   21h
 mysql-fsx-san-clone-d66d9d4bf-2r9fw   1/1   Running   0   14s
 ```
-### To confirm that the new database is up and running, log into it and check the data
+### Confirm that the new database is up and running
+To confirm that the new database is up and running, log into it by running this command:
 ```bash
 kubectl exec -it $(kubectl get pod -l "app=mysql-fsx-san-clone" --namespace=default -o jsonpath='{.items[0].metadata.name}') -- mysql -u root -p
 ```
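The `-o jsonpath='{.items[0].metadata.name}'` lookup embedded in that command grabs the first matching pod's name. The same selection can be reproduced with jq on canned output, which is a handy way to test the expression; the sample JSON below is hand-made.

```bash
# Hand-made sample shaped like `kubectl get pod -o json`; the pod name is
# copied from the listing quoted above.
cat > /tmp/pods-sample.json <<'EOF'
{"items": [{"metadata": {"name": "mysql-fsx-san-clone-d66d9d4bf-2r9fw"}}]}
EOF
# jq equivalent of the jsonpath expression {.items[0].metadata.name}:
jq -r '.items[0].metadata.name' /tmp/pods-sample.json
# -> mysql-fsx-san-clone-d66d9d4bf-2r9fw
```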
@@ -322,7 +326,7 @@ mysql> select * from fsx;

 ## Final steps

-At this point you don't need the jump server created to configure the EKS environment for
+At this point you don't need the jump server used to configure the EKS environment for
 the FSxN File System, so feel free to `terminate` it (i.e. destroy it).

 Other than that, you are welcome to deploy other applications that need persistent storage.

Solutions/FSxN-as-PVC-for-EKS/README.md

Lines changed: 4 additions & 5 deletions
@@ -92,7 +92,7 @@ Read the "description" of the variable to see valid values.
 - key_pair_name - The name of the EC2 key pair to use to access the jump server.
 - secure_ips - The IP address ranges to allow SSH access to the jump server. The default is wide open.

-:warning: You must change the key_pair_name variable, otherwise the deployment will not complete succesfully.
+:warning: **NOTE:** You must change the key_pair_name variable, otherwise the deployment will not complete successfully.
 ### Initialize the Terraform environment
 Run the following command to initialize the terraform environment.
 ```bash
@@ -164,7 +164,7 @@ Note that if you are using an SSO to authenticate with AWS, then the actual user
 you need to add is slightly different than what is output from the above command.
 The following command will take the output from the above command and format it correctly:

-:warning: Only run this command if you are using an SSO to authenticate with aws.
+:warning: **Warning:** Only run this command if you are using an SSO to authenticate with AWS.
 ```bash
 user_ARN=$(aws sts get-caller-identity | jq -r '.Arn' | awk -F: '{split($6, parts, "/"); printf "arn:aws:iam::%s:role/aws-reserved/sso.amazonaws.com/%s\n", $5, parts[2]}')
 echo $user_ARN
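The awk stage of that pipeline can be checked in isolation by feeding it a fabricated SSO caller ARN instead of live `aws sts get-caller-identity` output; the account ID and role name below are made up.

```bash
# Same awk transform as above, fed a fabricated assumed-role ARN of the
# shape AWS STS returns for SSO sessions.
echo 'arn:aws:sts::123456789012:assumed-role/AWSReservedSSO_AdminAccess_abc123/user@example.com' |
awk -F: '{split($6, parts, "/"); printf "arn:aws:iam::%s:role/aws-reserved/sso.amazonaws.com/%s\n", $5, parts[2]}'
# -> arn:aws:iam::123456789012:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdminAccess_abc123
```

Field 5 of the colon-separated ARN is the account ID, and the second slash-separated part of field 6 is the SSO role name, which is why the transform works.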
@@ -281,7 +281,7 @@ kubectl get tridentbackendconfig -n trident --output=json | jq '.items[] | .stat
 ```
 Once you have resolved any issues, you can remove the failed backend by running:

-:warning: Only run this command if the backend is in a failed state.
+:warning: **Warning:** Only run this command if the backend is in a failed state and you are ready to get rid of it.
 ```bash
 kubectl delete -n trident -f temp/backend-tbc-ontap-nas.yaml
 ```
@@ -329,7 +329,6 @@ The output should look similar to this:
 NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
 mysql-volume-nas   Bound    pvc-1aae479e-4b27-4310-8bb2-71255134edf0   50Gi       RWO            fsx-basic-nas   <unset>                 114m
 ```
-
 To see more details on how the PVC was defined, look at the `manifests/pvc-fsxn-nas.yaml` file.

 If you want to see what was created on the FSxN file system, you can log into it and take a look.
@@ -525,7 +524,7 @@ mysql-fsx-nas-695b497757-8n6bb 1/1 Running 0 21h
 mysql-fsx-nas-clone-d66d9d4bf-2r9fw   1/1   Running   0   14s
 ```
 ### Confirm that the new database is up and running
-To confirm that hte new database is up and running log into it and check the data by running this command:
+To confirm that the new database is up and running, log into it by running this command:
 ```bash
 kubectl exec -it $(kubectl get pod -l "app=mysql-fsx-nas-clone" --namespace=default -o jsonpath='{.items[0].metadata.name}') -- mysql -u root -p
 ```
