aws/README.md: 22 additions & 15 deletions
@@ -19,11 +19,11 @@ Have the following tools installed:
Make sure you have an active account at AWS for which you have configured the credentials on the system where you will execute the steps below. In this example we stored the credentials under an aws profile as `awsuser`.
-### Multi-user setup: shared state
+## Installation

-If you want to host a multi-user setup, you will probably want to share the state file so that everyone can try related challenges. We have provided a starter to easily do so using a Terraform S3 backend.
+First, we want to create a shared state. We've provided the terraform code for this in the `shared-state` subfolder.
-First, create an s3 bucket (optionally add `-var="region=YOUR_DESIRED_REGION"` to the apply to use a region other than the default eu-west-1):
+To create an s3 bucket (optionally add `-var="region=YOUR_DESIRED_REGION"` to the apply to use a region other than the default eu-west-1):
```bash
cd shared-state
terraform apply
```
The bucket name should be in the output. Please use that to configure the Terraform backend in `main.tf`.
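Configuring the backend means pointing the `backend "s3"` block in `main.tf` at the newly created bucket. A minimal sketch of that stanza, written to a scratch file for inspection (the bucket name, key, and region are illustrative examples, not the values your apply produced):

```shell
# Illustrative only: the backend stanza main.tf needs. The bucket name,
# key, and region below are example values, not output from your apply.
cat > /tmp/backend-example.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "terraform-20230102231352749300000001"
    key    = "wrongsecrets/terraform.tfstate"
    region = "eu-west-1"
  }
}
EOF
grep 'backend "s3"' /tmp/backend-example.tf
```

After editing the real `main.tf`, re-run `terraform init` so Terraform picks up the backend change.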
-The bucket ARN will be printed, make a note of this as it will be used in the next steps.
-
-## Installation
+The bucket ARN will be printed; make a note of it, as it will be used in the next steps. It should look something like `arn:aws:s3:::terraform-20230102231352749300000001`.
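If you only noted the bucket name, the ARN is recoverable: S3 bucket ARNs contain no region or account id, so they are just the `arn:aws:s3:::` prefix plus the bucket name. A quick sketch using the example above:

```shell
# S3 bucket ARNs have the fixed form arn:aws:s3:::<bucket-name>
bucket_name="terraform-20230102231352749300000001"
bucket_arn="arn:aws:s3:::${bucket_name}"
echo "$bucket_arn"
```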
The terraform code is loosely based on [this EKS managed Node Group TF example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group).
@@ -43,18 +41,18 @@ The terraform code is loosely based on [this EKS managed Node Group TF example](
**Note-II**: The cluster you create has its access bound to the public IP of the creator. In other words: the cluster you create with this code has its access bound to your public IP-address if you apply it locally.
1. export your AWS credentials (`export AWS_PROFILE=awsuser`)
-2. check whether you have the right profile by doing `aws sts get-caller-identity` and make sure you have enough rights with the caller its identity and that the actual accountnumber displayed is the account designated for you to apply this TF to.
-3. Do `terraform init` (if required, use tfenv to select TF 0.13.1 or higher)
-4. The bucket ARN will be asked for in the next 2 steps. Take the one provided to you and add `arn:aws:s3:::` to the start, e.g. `arn:aws:s3:::terraform-20230102231352749300000001`
+2. check whether you have the right profile by doing `aws sts get-caller-identity`. Make sure you have the right account and have the rights to do this.
+3. Do `terraform init` (if required, use tfenv to select TF 0.14.0 or higher)
+4. The bucket ARN will be asked for in the next 2 steps. Take the one provided to you in the output earlier (e.g., `arn:aws:s3:::terraform-20230102231352749300000001`).
5. Do `terraform plan`
6. Do `terraform apply`. Note: the apply will take 10 to 20 minutes depending on the speed of the AWS backplane.
7. When creation is done, do `aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets`
8. Do `export KUBECONFIG=~/.kube/wrongsecrets`
9. Run `./build-an-deploy-aws.sh` to install all the required materials (helm for calico, secrets management, autoscaling, etc.)
-Your EKS cluster should be visible in [EU-West-1](https://eu-west-1.console.aws.amazon.com/eks/home?region=eu-west-1#/clusters) by default. Want a different region? You can modify `terraform.tfvars` or input it directly using the `region` variable in plan/apply.
+Your EKS cluster should be visible in [eu-west-1](https://eu-west-1.console.aws.amazon.com/eks/home?region=eu-west-1#/clusters) by default. Want a different region? You can modify `terraform.tfvars` or input it directly using the `region` variable in plan/apply.
-Are you done playing? Please run `terraform destroy` twice to clean up.
+Are you done playing? Please run `terraform destroy` twice to clean up (first in the main `aws` folder, then in the `shared-state` subfolder).
### Test it
@@ -137,15 +135,18 @@ The documentation below is auto-generated to give insight on what's created via
@@ -199,7 +200,13 @@ The documentation below is auto-generated to give insight on what's created via
| Name | Description |
|------|-------------|
| <a name="output_cluster_endpoint"></a> [cluster\_endpoint](#output\_cluster\_endpoint) | Endpoint for EKS control plane. |
+| <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | The id of the cluster |
+| <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | The EKS cluster name |
| <a name="output_cluster_security_group_id"></a> [cluster\_security\_group\_id](#output\_cluster\_security\_group\_id) | Security group ids attached to the cluster control plane. |
-| <a name="output_irsa_role"></a> [irsa\_role](#output\_irsa\_role) | The role ARN used in the IRSA setup |
+| <a name="output_ebs_role"></a> [ebs\_role](#output\_ebs\_role) | EBS CSI driver role |
+| <a name="output_ebs_role_arn"></a> [ebs\_role\_arn](#output\_ebs\_role\_arn) | EBS CSI driver role ARN |
+| <a name="output_irsa_role"></a> [irsa\_role](#output\_irsa\_role) | The role name used in the IRSA setup |
+| <a name="output_irsa_role_arn"></a> [irsa\_role\_arn](#output\_irsa\_role\_arn) | The role ARN used in the IRSA setup |
| <a name="output_secrets_manager_secret_name"></a> [secrets\_manager\_secret\_name](#output\_secrets\_manager\_secret\_name) | The name of the secrets manager secret |
+| <a name="output_state_bucket_name"></a> [state\_bucket\_name](#output\_state\_bucket\_name) | Terraform s3 state bucket name |
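Scripts typically consume these outputs via `terraform output -json`, which returns a map of `{ "<name>": { "value": ..., "type": ... } }` objects. A sketch parsing a captured sample document (the sample values are illustrative, and `python3` stands in for `jq` here):

```shell
# Illustrative sample of the `terraform output -json` shape (fake values)
cat > /tmp/outputs.json <<'EOF'
{
  "cluster_name": { "value": "wrongsecrets-exercise-cluster", "type": "string" },
  "state_bucket_name": { "value": "terraform-20230102231352749300000001", "type": "string" }
}
EOF
# Pull a single value; with jq this would be: jq -r '.cluster_name.value'
python3 -c 'import json; print(json.load(open("/tmp/outputs.json"))["cluster_name"]["value"])'
```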
aws/build-an-deploy-aws.sh: 76 additions & 57 deletions
@@ -5,9 +5,10 @@ echo "Make sure you have updated your AWS credentials and your kubeconfig prior
echo "For this to work the AWS kubernetes cluster must have access to the same local registry / image cache which 'docker build ...' writes its image to"
echo "For example docker-desktop with its included k8s cluster"

-echo "NOTE: WE ARE WORKING HERE WITH A 5 LEGGED BALANCER on aWS which costs money by themselves!"
+echo "NOTE: WE ARE WORKING HERE WITH A 5 LEGGED LOAD BALANCER on AWS which costs money by itself!"
-echo "NOTE2: please replace balancer.cookie.cookieParserSecret witha value you fanchy and ensure you have TLS on (see outdated guides)."
+echo "NOTE 2: You can replace balancer.cookie.cookieParserSecret with a value you fancy."
+echo "Note 3: Ensure you turn TLS on :)."

echo "Usage: ./build-an-deploy-aws.sh"
@@ -17,17 +18,10 @@ checkCommandsAvailable helm aws kubectl eksctl sed
if test -n "${AWS_REGION-}"; then
  echo "AWS_REGION is set to <$AWS_REGION>"
else
-  AWS_REGION=eu-west-1
+  export AWS_REGION=eu-west-1
  echo "AWS_REGION is not set or empty, defaulting to ${AWS_REGION}"
fi
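The `"${AWS_REGION-}"` expansion above is deliberate: the trailing `-` makes an unset variable expand to an empty string instead of raising an error when the script runs under `set -u`. A small sketch of the same defaulting pattern (the variable name is illustrative):

```shell
set -u                      # treat references to unset variables as errors
unset DEMO_REGION || true   # ensure the variable starts out unset
# "${DEMO_REGION-}" safely yields "" here; a bare $DEMO_REGION would abort
if test -n "${DEMO_REGION-}"; then
  echo "DEMO_REGION is set to <$DEMO_REGION>"
else
  DEMO_REGION=eu-west-1
  echo "DEMO_REGION defaulted to ${DEMO_REGION}"
fi
```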
-if test -n "${CLUSTERNAME-}"; then
-  secho "CLUSTERNAME is set to <$CLUSTERNAME> which is different than the default. Please update the cluster-autoscaler-policy.json."
-else
-  CLUSTERNAME=wrongsecrets-exercise-cluster
-  echo "CLUSTERNAME is not set or empty, defaulting to ${CLUSTERNAME}"
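The script opens by calling `checkCommandsAvailable helm aws kubectl eksctl sed` to fail fast when a required CLI is missing. The real helper is defined elsewhere in the repository; a minimal sketch of what such a guard typically looks like:

```shell
# Sketch of a fail-fast dependency check; the repo's actual helper may differ.
checkCommandsAvailable() {
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "Error: '$cmd' is required but was not found on PATH." >&2
      exit 1
    fi
  done
}

checkCommandsAvailable sed grep   # succeeds silently when both are installed
```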