Commit 3e2a5fb

Made changes to support users who use an SSO to authenticate with AWS.
1 parent 88250a3 commit 3e2a5fb

File tree

5 files changed: 132 additions & 106 deletions

Solutions/FSxN-as-PVC-for-EKS/README.md

Lines changed: 80 additions & 56 deletions
@@ -8,7 +8,7 @@ for it. It will leverage NetApp's Astra Trident to provide the interface between
 
 ## Prerequisites
 
-A Linux based EC2 instance with the following installed:
+A Unix based system with the following installed:
 - HashiCorp's Terraform
 - AWS CLI, authenticated with an account that has privileges necessary to:
   - Deploy an EKS cluster
@@ -77,15 +77,26 @@ eks-cluster-name = "fsx-eks-mWFem72Z"
 eks-jump-server = "Instance ID: i-0bcf0ed9adeb55814, Public IP: 35.92.238.240"
 fsx-id = "fs-04794c394fa5a85de"
 fsx-password-secret-arn = "arn:aws:secretsmanager:us-west-2:759995470648:secret:fsx-eks-secret20240618170506480900000001-u8IQEp"
-fsx-password-secret-name = "fsx-eks-secret20240618170506480900000001"
+fsx-password-secret-name = "fsx-eks-secrete-3f55084"
 fsx-svm-name = "ekssvm"
 region = "us-west-2"
 vpc-id = "vpc-043a3d602b64e2f56"
-zz_update_kubeconfig_command = "aws eks update-kubeconfig --name fsx-eks-mWFem72Z --region us-west-2"
 ```
 You will use the values in the commands below, so probably a good idea to copy the output somewhere
 so you can easily reference it later.
 
+Note that an FSxN File System was created, along with a vserver (a.k.a. SVM). The default username
+for the FSxN File System is 'fsxadmin', and the default username for the vserver is 'vsadmin'. The
+password for both of these users is the same and is what is stored in the AWS SecretsManager secret
+shown above. Note that since Terraform was used to create the secret, the password is stored in
+plain text, therefore it is **HIGHLY** recommended that you change the password to something else
+by first changing the passwords via the AWS Management Console and then updating the password in
+the AWS SecretsManager secret. You can update the 'username' key in the secret if you want, but
+it must be a vserver admin user, not a system level user. This secret is used by Astra
+Trident, which will always log in via the vserver management LIF, and therefore it must be a
+vserver admin user. If you want to create a separate secret for the 'fsxadmin' user, feel free
+to do so.
+
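One way to push a new password into the existing secret from the command line is sketched below. This is not part of the repository's scripts: the JSON key names ('username' and 'password') are assumed from the description above, and <secret-arn> and <new-password> are placeholders.

```bash
# Hypothetical sketch: store the new vserver admin password in the existing secret.
# <secret-arn> and <new-password> are placeholders; the key names are assumptions.
aws secretsmanager put-secret-value \
    --secret-id <secret-arn> \
    --secret-string '{"username":"vsadmin","password":"<new-password>"}'
```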
 ### SSH to the jump server to complete the setup
 Use the following command to 'ssh' to the jump start server:
 ```bash
@@ -98,28 +109,33 @@ referenced in the variables.tf file.
 in the output from the `terraform apply` command.
 
 ### Configure the 'aws' CLI
-Run the following command to configure the 'aws' command:
-```bash
-aws configure
-```
-It will prompt you for an access key and secret. See above for the required permissions.
-It will also prompt you for a default region and output format. I would recommend setting
-the region to the same region you set in the variables.tf file. It doesn't matter what
-you set the default output format to.
+There are various ways to configure the AWS CLI. If you are unsure how to do it, please
+refer to this URL for instructions:
+[Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html)
+
+**NOTE:** When asked for a default region, use the region you specified in the variables.tf file.
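For instance, if your organization uses AWS IAM Identity Center (SSO), a minimal sketch of setting up and logging in with a named profile might look like the following (the profile name 'fsx-eks' is just an example):

```bash
# Hypothetical sketch: create an SSO-backed profile, then log in with it.
aws configure sso --profile fsx-eks   # prompts for the SSO start URL, region, account, and role
aws sso login --profile fsx-eks       # opens a browser window to complete the SSO login
export AWS_PROFILE=fsx-eks            # use this profile for the rest of the commands in this guide
```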
 
 ### Allow access to the EKS cluster for your user id
 AWS's EKS clusters have a secondary form of permissions. As such, you have to add an "access-entry"
 to your EKS configuration and associate it with Cluster Admin policy to be able to view and
 configure the EKS cluster. The first step to do this is to find out your IAM ARN.
 You can do that via this command:
 ```bash
-aws iam get-user --output=text --query User.Arn
+user_ARN=$(aws sts get-caller-identity --query Arn --output text)
+echo $user_ARN
 ```
-To make the next few commands easy, create variables that hold the AWS region, EKS cluster name,
-and the user ARN:
+Note that if you are using an SSO to authenticate with AWS, then the actual ARN
+you need to add is slightly different than what is output from the above command.
+The following command will take the output from the above command and format it correctly:
+```bash
+user_ARN=$(aws sts get-caller-identity | jq -r '.Arn' | awk -F: '{split($6, parts, "/"); printf "arn:aws:iam::%s:role/aws-reserved/sso.amazonaws.com/%s\n", $5, parts[2]}')
+echo $user_ARN
+```
+The above command leverages a standard AWS role that is created when configuring AWS to use an SSO.
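To make the transformation concrete, here is a hypothetical example of the rewrite the command performs (the account ID, role name, and user name are made up):

```bash
# Hypothetical example: if `aws sts get-caller-identity` returns an Arn such as
#   arn:aws:sts::123456789012:assumed-role/AWSReservedSSO_AdministratorAccess_abcdef0123456789/jdoe
# then the command above sets user_ARN to
#   arn:aws:iam::123456789012:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdministratorAccess_abcdef0123456789
```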
+
+To make the next few commands easy, create variables that hold the AWS region and EKS cluster name:
 ```bash
 aws_region=<AWS_REGION>
-user_ARN=$(aws iam get-user --output=text --query User.Arn)
 cluster_name=<EKS_CLUSTER_NAME>
 ```
 Of course, replace <AWS_REGION> with the region where the resources were deployed. And replace
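For reference, the access-entry setup this section leads into (the `aws eks associate-access-policy` call is visible in the next hunk header) boils down to two commands. This is a sketch using the variables set above; the policy ARN shown is the standard AWS-managed Cluster Admin access policy:

```bash
# Hypothetical sketch: register your principal with the cluster, then grant it cluster admin.
aws eks create-access-entry --cluster-name $cluster_name --principal-arn $user_ARN --region $aws_region
aws eks associate-access-policy --cluster-name $cluster_name --principal-arn $user_ARN \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
    --access-scope type=cluster --region $aws_region
```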
@@ -133,15 +149,14 @@ aws eks associate-access-policy --cluster-name $cluster_name --principal-arn $us
 ```
 
 ### Configure kubectl to use the EKS cluster
-You'll notice at the bottom of the output from the `terraform apply` command a
-"zz_update_kubeconfig_command" variable. The output of that variable shows the
-command to run to configure kubectl to use the AWS EKS cluster.
-
-Here's an example based on the "terraform apply" output shown above:
+AWS makes it easy to configure 'kubectl' to use the EKS cluster. You can do that by running this command:
 ```bash
-aws eks update-kubeconfig --name fsx-eks-mWFem72Z --region us-west-2
+aws eks update-kubeconfig --name $cluster_name --region $aws_region
 ```
-Run the following command to confirm you can communicate with the EKS cluster:
+Of course, the above assumes the cluster_name and aws_region variables are still set from
+running the commands above.
+
+To confirm you are able to communicate with the EKS cluster, run the following command:
 ```bash
 kubectl get nodes
 ```
@@ -152,17 +167,6 @@ ip-10-0-1-84.us-west-2.compute.internal Ready <none> 76m v1.29.3-eks-a
 ip-10-0-2-117.us-west-2.compute.internal   Ready    <none>   76m   v1.29.3-eks-ae9a62a
 ```
 
-### Install the Kubernetes Snapshot CRDs and Snapshot Controller:
-Run these commands to install the CRDs:
-```bash
-git clone https://github.com/kubernetes-csi/external-snapshotter
-cd external-snapshotter/
-kubectl kustomize client/config/crd | kubectl create -f -
-kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
-kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
-cd ..
-```
-
 ### Confirm Astra Trident is up and running
 Astra Trident should have been added to your EKS Cluster as part of the terraform deployment.
 Confirm that it is up and running by running this command:
@@ -180,19 +184,21 @@ trident-operator-67d6fd899b-jrnt2 1/1 Running 0 20h
 
 ### Configure the Trident CSI backend to use FSx for NetApp ONTAP
 For the example below we are going to set up an iSCSI LUN for a MySQL
-database. Because of that, we are going to setup a Trident backend
-to use the `ontap-san` driver. You can read more about the different driver types in the
-[Astra Trident documentation](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details) documentation.
+database. Because of that, we are going to set up Astra Trident as a backend provider
+and configure it to use its `ontap-san` driver. You can read more about
+the different types of drivers it supports in the
+[Astra Trident documentation](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html#fsx-for-ontap-driver-details).
 
 As you go through the steps below, you will notice that most of the files have "-san" in their
 name. If you want to see an example of using NFS instead of iSCSI, then there are equivalent
-files that have "-nas" in the name. You can even create two mysql databases, one using iSCSI LUN
+files that have "-nas" in their name. You can even create two mysql databases, one using an iSCSI LUN
 and another using NFS.
 
 The first step is to define a backend provider and, in the process, give it the information
 it needs to make changes (e.g. create volumes and LUNs) to the FSxN file system.
 
-In the command below you're going to need the FSxN ID, the FSX SVM Name, and the
+In the command below you're going to need the FSxN ID, the FSX SVM name, and the
 secret ARN. All of that information can be obtained from the output
 from the `terraform apply` command. If you have lost that output, you can always log back
 into the server where you ran `terraform apply` and simply run it again. It should
@@ -204,6 +210,10 @@ used to create the environment with earlier. This copy will not have the terrafo
 state information, nor your changes to the variables.tf file, but it does have
 other files you'll need to complete the setup.
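If you still have access to the directory where you ran `terraform apply`, you can also read the individual values back out of the state without re-running the apply. A sketch (the output names match the `terraform apply` output shown earlier):

```bash
# Hypothetical sketch: print the individual values from the Terraform state.
terraform output -raw fsx-id
terraform output -raw fsx-svm-name
terraform output -raw fsx-password-secret-arn
```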
 
+After making the following substitutions:
+- \<fsx-id> with the FSxN ID.
+- \<fsx-svm-name> with the name of the SVM that was created.
+- \<secret-arn> with the ARN of the AWS SecretsManager secret that holds the FSxN password.
 Execute the following commands to configure Trident to use the `ontap-san` driver.
 ```bash
 cd ~/FSx-ONTAP-samples-scripts/Solutions/FSxN-as-PVC-for-EKS
@@ -214,17 +224,17 @@ export SECRET_ARN=<secret-arn>
 envsubst < manifests/backend-tbc-ontap-san.tmpl > temp/backend-tbc-ontap-san.yaml
 kubectl create -n trident -f temp/backend-tbc-ontap-san.yaml
 ```
-Of course replace:
-- \<fsx-id> with the FSxN ID.
-- \<fsx-svm-name> with the name of the SVM that was created.
-- \<secret-arn> with the ARN of the AWS SecretsManager secret that holds the FSxN password.
 
 To get more information regarding how the backend was configured, look at the
 `temp/backend-tbc-ontap-san.yaml` file.
 
 As mentioned above, if you want to use NFS storage instead of iSCSI, you can use the
 `manifests/backend-tbc-ontap-nas.tmpl` file instead of the `manifests/backend-tbc-ontap-san.tmpl`
-file.
+file. The last two commands should look like:
+```bash
+envsubst < manifests/backend-tbc-ontap-nas.tmpl > temp/backend-tbc-ontap-nas.yaml
+kubectl create -n trident -f temp/backend-tbc-ontap-nas.yaml
+```
 
 To confirm that the backend has been appropriately configured, run this command:
 ```bash
@@ -297,22 +307,23 @@ To see how the MySQL was configured, check out the `manifests/mysql-san.yaml` fi
 ### Populate the MySQL database with data
 
 Now to confirm that the database is able to read and write to the persistent storage you need
-to put some data in the database. Do that by first logging into the MySQL instance.
-It will prompt for a password. In the yaml file used to create the database, you'll see
-that we set that to `Netapp1!`
+to put some data in the database. Do that by first logging into the MySQL instance using the
+command below. It will prompt for a password. In the yaml file used to create the database,
+you'll see that we set that to `Netapp1!`
 ```bash
 kubectl exec -it $(kubectl get pod -l "app=mysql-fsx-san" --namespace=default -o jsonpath='{.items[0].metadata.name}') -- mysql -u root -p
 ```
 **NOTE:** Replace "mysql-fsx-san" with "mysql-fsx-nas" if you are creating a NFS based MySQL server.
 
 After you have logged in, here is a session showing an example of creating a database, then creating a table, then inserting
 some values into the table:
-```bash
+```sql
 mysql> create database fsxdatabase;
 Query OK, 1 row affected (0.01 sec)
 
 mysql> use fsxdatabase;
 Database changed
+
 mysql> create table fsx (filesystem varchar(20), capacity varchar(20), region varchar(20));
 Query OK, 0 rows affected (0.04 sec)
 
@@ -324,7 +335,7 @@ Records: 6 Duplicates: 0 Warnings: 0
 ```
 
 And, to confirm everything is there, here is an SQL statement to retrieve the data:
-```bash
+```sql
 mysql> select * from fsx;
 +------------+----------+-----------+
 | filesystem | capacity | region    |
@@ -342,8 +353,21 @@ mysql> select * from fsx;
 ## Create a snapshot of the MySQL data
 Of course, one of the benefits of FSxN is the ability to take space efficient snapshots of the volumes.
 These snapshots take almost no additional space on the backend storage and pose no performance impact.
-So, let's create one for the SQL volume. The first step is to add the volume snapshot store class
-by executing:
+
+### Install the Kubernetes Snapshot CRDs and Snapshot Controller:
+The first step is to install the Snapshot CRDs and the Snapshot Controller.
+To do that run the following commands:
+```bash
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter/
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
+kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
+cd ..
+```
+
+### Create a snapshot of the MySQL data
+Now, add the volume snapshot class by executing:
 ```bash
 kubectl create -f manifests/volume-snapshot-class.yaml
 ```
Note, that this storage class works for both LUNs and NFS volumes, so there aren't different versions
355379
of this file based on the storage type you are testing with.
356380

357-
The next step is to create a snapshot of the data by executing:
381+
The findal step is to create a snapshot of the data by executing:
358382
```bash
359383
kubectl create -f manifests/volume-snapshot-san.yaml
360384
```
@@ -373,11 +397,11 @@ mysql-volume-san-snap-01 true mysql-volume-san
373397
```
374398

375399
## Clone the MySQL data to a new storage persisent volume
376-
Now that you have a snapshot of the data, you use it to create a read/write version of it. This
377-
can be used as a new storage volume for another mysql database. This step creates a new
378-
FlexClone volume in FSx for ONTAP. Note that FlexClone volumes take up almost no space;
379-
only a pointer table is created to point to the shared data blocks of the volume it is
380-
being cloned from.
400+
Now that you have a snapshot of the data, you can use it to create a read/write version
401+
of it. This can be used as a new storage volume for another mysql database. This operation
402+
creates a new FlexClone volume in FSx for ONTAP. Note that initially a FlexClone volumes
403+
take up almost no additional space; only a pointer table is created to point to the
404+
shared data blocks of the volume it is being cloned from.
381405

382406
The first step is to create a PersistentVolume from the snapshot by executing:
383407
```bash

Solutions/FSxN-as-PVC-for-EKS/manifests/pvc-from-san-snapshot.yaml

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ metadata:
 spec:
   accessModes:
     - ReadWriteOnce
-  storageClassName: fsx-basic-block
+  storageClassName: fsx-basic-san
   resources:
     requests:
       storage: 50Gi

Solutions/FSxN-as-PVC-for-EKS/terraform/fsx.tf

Lines changed: 0 additions & 43 deletions
@@ -45,46 +45,3 @@ resource "aws_fsx_ontap_storage_virtual_machine" "ekssvm" {
   name = "ekssvm"
   svm_admin_password = random_string.fsx_password.result
 }
-#
-# Create a security group.
-resource "aws_security_group" "fsx_sg" {
-  name_prefix = "security group for fsx access"
-  vpc_id      = module.vpc.vpc_id
-  tags = {
-    Name = "fsx_sg"
-  }
-}
-#
-# This rule allows traffic over port 22 when the source has
-# the jump start SG assigned.
-resource "aws_security_group_rule" "fsx_sg_ssh_from_jump_server" {
-  description              = "allow ssh from jump_server to fsx"
-  from_port                = 0
-  protocol                 = "tcp"
-  to_port                  = 22
-  security_group_id        = aws_security_group.fsx_sg.id
-  type                     = "ingress"
-  source_security_group_id = aws_security_group.eks_jump_server.id
-}
-#
-# This rule allow all traffic from the provide subnets.
-resource "aws_security_group_rule" "fsx_sg_inbound" {
-  description       = "allow inbound traffic to eks"
-  from_port         = 0
-  protocol          = "-1"
-  to_port           = 0
-  security_group_id = aws_security_group.fsx_sg.id
-  type              = "ingress"
-  cidr_blocks       = module.vpc.private_subnets_cidr_blocks
-}
-#
-# This rule allows all outbound traffic.
-resource "aws_security_group_rule" "fsx_sg_outbound" {
-  description       = "allow outbound traffic to anywhere"
-  from_port         = 0
-  protocol          = "-1"
-  security_group_id = aws_security_group.fsx_sg.id
-  to_port           = 0
-  type              = "egress"
-  cidr_blocks       = ["0.0.0.0/0"]
-}

Solutions/FSxN-as-PVC-for-EKS/terraform/outputs.tf

Lines changed: 4 additions & 4 deletions
@@ -19,6 +19,10 @@ output "fsx-id" {
   value = aws_fsx_ontap_file_system.eksfs.id
 }
 
+output "fsx-management-ip" {
+  value = format(join("", aws_fsx_ontap_file_system.eksfs.endpoints[0].management[0].ip_addresses))
+}
+
 output "eks-cluster-name" {
   value = data.aws_eks_cluster.eks.id
 }
@@ -30,7 +34,3 @@ output "vpc-id" {
 output "eks-jump-server" {
   value = format("Instance ID: %s, Public IP: %s", aws_instance.eks_jump_server.id, aws_instance.eks_jump_server.public_ip)
 }
-
-output "zz_update_kubeconfig_command" {
-  value = format("%s %s %s %s", "aws eks update-kubeconfig --name", module.eks.cluster_name, "--region", var.aws_region)
-}
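The new `fsx-management-ip` output can be read back the same way as the other outputs; for example, a sketch of capturing it for later use with the ONTAP CLI or REST API:

```bash
# Hypothetical sketch: capture the FSxN management IP from the Terraform state.
fsx_mgmt_ip=$(terraform output -raw fsx-management-ip)
echo $fsx_mgmt_ip
```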
