### Configure the Trident CSI backend to use FSx for NetApp ONTAP
In the example below we are going to set up an iSCSI LUN for a MySQL
database. To facilitate that, we are going to configure Astra Trident as a backend provider.
Since we are creating an iSCSI LUN, we are going to use its `ontap-san` driver.
Astra Trident has several different drivers to choose from. You can read more about the
different drivers it supports in the
[Astra Trident documentation](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details).

In the commands below you're going to need the FSxN ID, the FSxN SVM name, and the
secret ARN. All of that information can be obtained from the output
of the `terraform apply` command. If you have lost that output, you can always log back
into the server where you ran `terraform apply` and simply run it again. It should
state that there aren't any changes to be made and simply show the output again.

Note that a copy of this repo has been put into the `ubuntu` user's home directory on the
jump server for you. Don't confuse this copy of the repo with the one you
used to create the environment earlier. This copy will not have the Terraform
state database, nor your changes to the `variables.tf` file, but it does have
other files you'll need to complete the setup.

Make the following substitutions in the commands below:
- \<fsx-id> with the FSxN ID.
- \<fsx-svm-name> with the name of the SVM that was created.
- \<secret-arn> with the ARN of the AWS SecretsManager secret that holds the FSxN password.

Then run them to configure Trident to use the FSxN file system that was
created earlier with the `terraform apply` command:
```bash
cd ~/FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS
mkdir temp
export FSX_ID=<fsx-id>
export FSX_SVM_NAME=<fsx-svm-name>
export SECRET_ARN=<secret-arn>
envsubst < manifests/backend-tbc-ontap-san.tmpl > temp/backend-tbc-ontap-san.yaml
kubectl create -n trident -f temp/backend-tbc-ontap-san.yaml
```
:bulb: **Tip:** Put the above commands in your favorite text editor and make the substitutions there. Then copy and paste the commands into the terminal.

To get more information regarding how the backend was configured, look at the
`temp/backend-tbc-ontap-san.yaml` file.
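For reference, a `TridentBackendConfig` for the `ontap-san` driver against FSx for ONTAP generally has the shape sketched below. This is an illustration based on typical Trident usage, not the repo's actual template; the placeholder values are the ones substituted in above, and the exact fields may differ:

```yaml
# Sketch of a TridentBackendConfig for FSx for ONTAP (illustrative only)
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-fsx-ontap-san
spec:
  version: 1
  storageDriverName: ontap-san      # the driver chosen above
  svm: <fsx-svm-name>
  aws:
    fsxFilesystemID: <fsx-id>
  credentials:
    name: <secret-arn>              # Trident fetches the password from SecretsManager
    type: awsarn
```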

To confirm that the backend has been appropriately configured, run this command:
```bash
kubectl get tridentbackendconfig -n trident
```
The output should look similar to this:
```bash
NAME                    BACKEND NAME            BACKEND UUID                           PHASE   STATUS
backend-fsx-ontap-san   backend-fsx-ontap-san   7a551921-997c-4c37-a1d1-f2f4c87fa629   Bound   Success
```
If the status is `Failed`, then you can add the `--output=json` option to the `kubectl get tridentbackendconfig`
command to get more information as to why it failed. Specifically, look at the "message" field in the output.
The following command will get just the status messages:
```bash
kubectl get tridentbackendconfig -n trident --output=json | jq '.items[] | .status.message'
```
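If you want to see what that `jq` filter does without a cluster handy, here is a self-contained run against a mocked-up payload. The JSON is an assumption about the shape of the real output, trimmed to just the fields the filter touches:

```shell
# Mock of the (abbreviated) JSON kubectl would emit; only the fields
# the jq filter reads are included.
echo '{"items":[{"status":{"message":"pre-flight check passed","phase":"Bound"}}]}' \
  | jq '.items[] | .status.message'
# "pre-flight check passed"
```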
Once you have resolved any issues, you can remove the failed backend by running:

:warning: **Warning:** Only run this command if the backend is in a failed state and you are ready to get rid of it.
```bash
kubectl delete -n trident -f temp/backend-tbc-ontap-san.yaml
```
Then, you can re-run the `kubectl create -n trident -f temp/backend-tbc-ontap-san.yaml` command.
If the issue was with one of the variables that was substituted in, then you will need to
rerun the `envsubst` command to create a new `temp/backend-tbc-ontap-san.yaml` file
before running the `kubectl create` command.

### Create a Kubernetes storage class
The next step is to create a Kubernetes storage class by executing:
```bash
kubectl create -f manifests/storageclass-fsxn-san.yaml
```
To confirm it worked, run this command:
```bash
kubectl get storageclass
```
The output should be similar to this:
```bash
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
fsx-basic-san   csi.trident.netapp.io   Delete          Immediate              true                   20h
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  44h
```
To see more details on how the storage class was defined, look at the `manifests/storageclass-fsxn-san.yaml`
file.
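A Trident-backed storage class like `fsx-basic-san` is typically defined along these lines. This is a sketch, not the repo's actual file; the key point is that the `backendType` parameter matches the driver of the backend created above:

```yaml
# Sketch of a StorageClass that routes claims to the ontap-san backend
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fsx-basic-san
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-san"
allowVolumeExpansion: true
reclaimPolicy: Delete
```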

## Create a stateful application
Now that you have set up Kubernetes to use Trident to interface with FSxN for persistent
storage, you are ready to create an application that will use it. In the example below,
we are setting up a MySQL database that will use an iSCSI LUN provisioned on the FSxN file system.

### Create a Persistent Volume Claim
The first step is to create a Persistent Volume Claim, which will cause Trident to provision
an iSCSI LUN for the database. Do that by running:

```bash
kubectl create -f manifests/pvc-fsxn-san.yaml
```
To check that it worked, run:
```bash
kubectl get pvc
```
The output should look similar to this:
```bash
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
mysql-volume-san   Bound    pvc-1aae479e-4b27-4310-8bb2-71255134edf0   50Gi       RWO            fsx-basic-san   <unset>                 114m
```
To see more details on how the PVC was defined, look at the `manifests/pvc-fsxn-san.yaml` file.
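A PVC that produces the `Bound` claim shown above would look roughly like this. The field values match the output above, but this is a sketch rather than the repo's actual manifest:

```yaml
# Sketch of a PVC that requests a 50Gi iSCSI-backed volume from Trident
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-volume-san
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: fsx-basic-san
```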

If you want to see what was created on the FSxN file system, you can log into it and take a look.
You will want to log in as the 'fsxadmin' user, using the password stored in the AWS SecretsManager secret.
You can find the IP address of the FSxN file system in the output from the `terraform apply` command, or
from the AWS console. Here is an example of logging in and listing all the LUNs and volumes on the system:
```bash
ubuntu@ip-10-0-4-125:~/FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS$ ssh -l fsxadmin 198.19.255.174


Last login time: 6/21/2024 15:30:27
FsxId0887a493c777c5122::> lun show
Vserver   Path                            State   Mapped   Type     Size
--------- ------------------------------- ------- -------- -------- --------
ekssvm    /vol/trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0/lun0
                                          online  mapped   linux    50GB

FsxId0887a493c777c5122::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
ekssvm    ekssvm_root  aggr1        online     RW          1GB    972.4MB    0%
ekssvm    trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
                       aggr1        online     RW         55GB    54.90GB    0%
2 entries were displayed.

FsxId0887a493c777c5122::> quit
Goodbye
```

### Deploy a MySQL database using the storage created above
Now you can deploy a MySQL database by running:
```bash
kubectl create -f manifests/mysql-san.yaml
```
To check that it is up, run:
```bash
kubectl get pods
```
The output should look similar to this:
```bash
NAME                             READY   STATUS    RESTARTS   AGE
mysql-fsx-san-79cdb57b58-m2lgr   1/1     Running   0          31s
```
Note that it might take a minute or two for the pod to get to the Running status.

To see how MySQL was configured, check out the `manifests/mysql-san.yaml` file.
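The part of that manifest that ties the database to the persistent storage is the volume wiring in the pod template. The hypothetical excerpt below illustrates the idea; the container name, image tag, and volume name are assumptions, not the repo's actual values:

```yaml
# Hypothetical excerpt of the Deployment's pod template; the claimName
# must match the PVC created earlier for the pod to use the iSCSI LUN.
spec:
  containers:
    - name: mysql
      image: mysql:8.0           # assumed image tag
      volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-data
      persistentVolumeClaim:
        claimName: mysql-volume-san
```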

### Populate the MySQL database with data

To confirm that the database can read and write to the persistent storage, you need
to put some data in the database. Do that by first logging into the MySQL instance using the
command below. It will prompt for a password. In the YAML file used to create the database,
you'll see that we set it to `Netapp1!`.
```bash
kubectl exec -it $(kubectl get pod -l "app=mysql-fsx-san" --namespace=default -o jsonpath='{.items[0].metadata.name}') -- mysql -u root -p
```
After you have logged in, here is a session showing an example of creating a database, creating a table, and inserting
some values into the table:
```
mysql> create database fsxdatabase;
Query OK, 1 row affected (0.01 sec)

mysql> use fsxdatabase;
Database changed

mysql> create table fsx (filesystem varchar(20), capacity varchar(20), region varchar(20));
Query OK, 0 rows affected (0.04 sec)

mysql> insert into fsx (`filesystem`, `capacity`, `region`) values ('netapp01','1024GB', 'us-east-1'),
('netapp02', '10240GB', 'us-east-2'),('eks001', '2048GB', 'us-west-1'),('eks002', '1024GB', 'us-west-2'),
('netapp03', '1024GB', 'us-east-1'),('netapp04', '1024GB', 'us-west-1');
Query OK, 6 rows affected (0.03 sec)
Records: 6  Duplicates: 0  Warnings: 0
```

And, to confirm everything is there, here is an SQL statement to retrieve the data:
```
mysql> select * from fsx;
+------------+----------+-----------+
| filesystem | capacity | region    |
+------------+----------+-----------+
| netapp01   | 1024GB   | us-east-1 |
| netapp02   | 10240GB  | us-east-2 |
| eks001     | 2048GB   | us-west-1 |
| eks002     | 1024GB   | us-west-2 |
| netapp03   | 1024GB   | us-east-1 |
| netapp04   | 1024GB   | us-west-1 |
+------------+----------+-----------+
6 rows in set (0.00 sec)

mysql> quit
Bye
```

## Create a snapshot of the MySQL data
Of course, one of the benefits of FSxN is the ability to take space-efficient snapshots of the volumes.
These snapshots take almost no additional space on the backend storage and have no performance impact.

### Install the Kubernetes Snapshot CRDs and Snapshot Controller
The first step is to install the Snapshot CRDs and the Snapshot Controller.
Do that by running these commands:
```bash
git clone https://github.com/kubernetes-csi/external-snapshotter
cd external-snapshotter/
kubectl kustomize client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
cd ..
```
### Create a snapshot class based on the CRD installed
Create a snapshot class by executing:
```bash
kubectl create -f manifests/volume-snapshot-class.yaml
```
The output should look like:
```bash
volumesnapshotclass.snapshot.storage.k8s.io/fsx-snapclass created
```
To see how the snapshot class was defined, look at the `manifests/volume-snapshot-class.yaml` file.
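A `VolumeSnapshotClass` that points snapshot requests at the Trident CSI driver is typically only a few lines. This is a sketch, not the repo's actual file; the `deletionPolicy` value in particular is an assumption:

```yaml
# Sketch of a VolumeSnapshotClass backed by the Trident CSI driver
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: fsx-snapclass
driver: csi.trident.netapp.io
deletionPolicy: Delete    # assumed; could also be Retain
```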
### Create a snapshot of the MySQL data
Now that you have defined the snapshot class, you can create a snapshot by running:
```bash
kubectl create -f manifests/volume-snapshot-san.yaml
```
The output should look like:
```bash
volumesnapshot.snapshot.storage.k8s.io/mysql-volume-san-snap-01 created
```
To confirm that the snapshot was created, run:
```bash
kubectl get volumesnapshot
```
The output should look like:
```bash
NAME                       READYTOUSE   SOURCEPVC          SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
mysql-volume-san-snap-01   true         mysql-volume-san                           50Gi          fsx-snapclass   snapcontent-bdce9310-9698-4b37-9f9b-d1d802e44f17   2m18s          2m18s
```
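A `VolumeSnapshot` that produces the object shown above would look roughly like this sketch (not the repo's actual `manifests/volume-snapshot-san.yaml`, though the names match the output above):

```yaml
# Sketch of a VolumeSnapshot taken from the MySQL PVC
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-volume-san-snap-01
spec:
  volumeSnapshotClassName: fsx-snapclass
  source:
    persistentVolumeClaimName: mysql-volume-san
```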

You can log onto the FSxN file system to see that the snapshot was created there:
```
FsxId0887a493c777c5122::> snapshot show -volume trident_pvc_*
                                                          ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
ekssvm   trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
                  snapshot-bdce9310-9698-4b37-9f9b-d1d802e44f17
                                                           140KB     0%    0%
```
## Clone the MySQL data to a new persistent volume
Now that you have a snapshot of the data, you can use it to create a read/write copy
of it. This can be used as a new storage volume for another MySQL database. This operation
creates a new FlexClone volume in FSx for ONTAP. Note that initially a FlexClone volume
takes up almost no additional space; only a pointer table is created to point to the
shared data blocks of the volume it is being cloned from.

The first step is to create a Persistent Volume Claim from the snapshot by executing:
```bash
kubectl create -f manifests/pvc-from-san-snapshot.yaml
```
To check that it worked, run:
```bash
kubectl get pvc
```
The output should look similar to this:
```bash
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
mysql-volume-san         Bound    pvc-1aae479e-4b27-4310-8bb2-71255134edf0   50Gi       RWO            fsx-basic-san   <unset>                 125m
mysql-volume-san-clone   Bound    pvc-ceb1b2c2-de35-4011-8d6e-682b6844bf02   50Gi       RWO            fsx-basic-san   <unset>                 2m22s
```
To see more details on how the PVC was defined, look at the `manifests/pvc-from-san-snapshot.yaml` file.
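The piece that turns a plain PVC into a clone is the `dataSource` stanza, which points the claim at the snapshot instead of provisioning empty storage. A sketch of such a PVC (illustrative, not the repo's actual manifest):

```yaml
# Sketch of a PVC restored from a VolumeSnapshot; Trident satisfies
# the dataSource by creating a FlexClone of the snapshot's volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-volume-san-clone
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fsx-basic-san
  resources:
    requests:
      storage: 50Gi
  dataSource:
    name: mysql-volume-san-snap-01
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```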

To check it on the FSxN side, you can run:
```bash
FsxId0887a493c777c5122::> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
ekssvm  trident_pvc_ceb1b2c2_de35_4011_8d6e_682b6844bf02
                      ekssvm  trident_pvc_1aae479e_4b27_4310_8bb2_71255134edf0
                              snapshot-bdce9310-9698-4b37-9f9b-d1d802e44f17
                                                                 online    RW
```
### Create a new MySQL database using the cloned volume
Now that you have a new storage volume, you can create a new MySQL database that uses it by executing:
```bash
kubectl create -f manifests/mysql-san-clone.yaml
```
To check that it is up, run:
```bash
kubectl get pods
```
The output should look similar to this:
```bash
NAME                                  READY   STATUS    RESTARTS   AGE
csi-snapshotter-0                     3/3     Running   0          22h
mysql-fsx-san-695b497757-8n6bb        1/1     Running   0          21h
mysql-fsx-san-clone-d66d9d4bf-2r9fw   1/1     Running   0          14s
```
### Confirm that the new database is up and running
To confirm that the new database is up and running, log into it by running this command:
```bash
kubectl exec -it $(kubectl get pod -l "app=mysql-fsx-san-clone" --namespace=default -o jsonpath='{.items[0].metadata.name}') -- mysql -u root -p
```
After you have logged in, check that the same data is in the new database:
```
mysql> use fsxdatabase;
mysql> select * from fsx;
+------------+----------+-----------+
| filesystem | capacity | region    |
+------------+----------+-----------+
| netapp01   | 1024GB   | us-east-1 |
| netapp02   | 10240GB  | us-east-2 |
| eks001     | 2048GB   | us-west-1 |
| eks002     | 1024GB   | us-west-2 |
| netapp03   | 1024GB   | us-east-1 |
| netapp04   | 1024GB   | us-west-1 |
+------------+----------+-----------+
6 rows in set (0.00 sec)
```

## Final steps

At this point you don't need the jump server used to configure the EKS environment for
the FSxN file system, so feel free to terminate (i.e. destroy) it.

Other than that, you are welcome to deploy other applications that need persistent storage.