
Commit 1d3cab4

Merge pull request #154 from OWASP/feat/tf-refactor
feat: refactor part of bash script to be included in TF
2 parents f46af25 + 19e449e commit 1d3cab4

12 files changed: 258 additions, 154 deletions

aws/README.md (22 additions, 15 deletions)
````diff
@@ -19,11 +19,11 @@ Have the following tools installed:
 
 Make sure you have an active account at AWS for which you have configured the credentials on the system where you will execute the steps below. In this example we stored the credentials under an aws profile as `awsuser`.
 
-### Multi-user setup: shared state
+## Installation
 
-If you want to host a multi-user setup, you will probably want to share the state file so that everyone can try related challenges. We have provided a starter to easily do so using a Terraform S3 backend.
+First, we want to create a shared state. We've provided the terraform code for this in the `shared-state` subfolder.
 
-First, create an s3 bucket (optionally add `-var="region=YOUR_DESIRED_REGION"` to the apply to use a region other than the default eu-west-1):
+To create an s3 bucket (optionally add `-var="region=YOUR_DESIRED_REGION"` to the apply to use a region other than the default eu-west-1):
 
 ```bash
 cd shared-state
@@ -32,9 +32,7 @@ terraform apply
 ```
 
 The bucket name should be in the output. Please use that to configure the Terraform backend in `main.tf`.
-The bucket ARN will be printed, make a note of this as it will be used in the next steps.
-
-## Installation
+The bucket ARN will be printed, make a note of this as it will be used in the next steps. It should look something like `arn:aws:s3:::terraform-20230102231352749300000001`.
 
 The terraform code is loosely based on [this EKS managed Node Group TF example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group).
 
````
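The new README text has the reader wire the freshly created bucket into the Terraform backend in `main.tf` by hand. As a minimal sketch of that step, using partial backend configuration at init time instead of editing the file (the `shared-state` output name, state key, and region below are assumptions, not values from the commit):

```bash
# Grab the bucket name from the shared-state outputs (output name assumed).
cd shared-state
BUCKET_NAME="$(terraform output -raw bucket_name)"
cd ..

# Point the s3 backend at it during init; key and region are illustrative.
terraform init \
  -backend-config="bucket=${BUCKET_NAME}" \
  -backend-config="key=wrongsecrets/terraform.tfstate" \
  -backend-config="region=eu-west-1"
```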
```diff
@@ -43,18 +41,18 @@ The terraform code is loosely based on [this EKS managed Node Group TF example](
 **Note-II**: The cluster you create has its access bound to the public IP of the creator. In other words: the cluster you create with this code has its access bound to your public IP-address if you apply it locally.
 
 1. export your AWS credentials (`export AWS_PROFILE=awsuser`)
-2. check whether you have the right profile by doing `aws sts get-caller-identity` and make sure you have enough rights with the caller its identity and that the actual accountnumber displayed is the account designated for you to apply this TF to.
-3. Do `terraform init` (if required, use tfenv to select TF 0.13.1 or higher )
-4. The bucket ARN will be asked for in the next 2 steps. Take the one provided to you and add `arn:aws:s3:::` to the start. e.g. ``arn:aws:s3:::terraform-20230102231352749300000001`
+2. check whether you have the right profile by doing `aws sts get-caller-identity`. Make sure you have the right account and have the rights to do this.
+3. Do `terraform init` (if required, use tfenv to select TF 0.14.0 or higher )
+4. The bucket ARN will be asked in the next 2 steps. Take the one provided to you in the output earlier (e.g., `arn:aws:s3:::terraform-20230102231352749300000001`).
 5. Do `terraform plan`
 6. Do `terraform apply`. Note: the apply will take 10 to 20 minutes depending on the speed of the AWS backplane.
 7. When creation is done, do `aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets`
 8. Do `export KUBECONFIG=~/.kube/wrongsecrets`
 9. Run `./build-an-deploy-aws.sh` to install all the required materials (helm for calico, secrets management, autoscaling, etc.)
 
-Your EKS cluster should be visible in [EU-West-1](https://eu-west-1.console.aws.amazon.com/eks/home?region=eu-west-1#/clusters) by default. Want a different region? You can modify `terraform.tfvars` or input it directly using the `region` variable in plan/apply.
+Your EKS cluster should be visible in [eu-west-1](https://eu-west-1.console.aws.amazon.com/eks/home?region=eu-west-1#/clusters) by default. Want a different region? You can modify `terraform.tfvars` or input it directly using the `region` variable in plan/apply.
 
-Are you done playing? Please run `terraform destroy` twice to clean up.
+Are you done playing? Please run `terraform destroy` twice to clean up (first in the main `aws` folder, then the `shared-state` subfolder).
 
 ### Test it
 
```
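Condensed into one shell session, the numbered steps read as follows (a sketch under the README's own assumptions: profile `awsuser`, default region `eu-west-1`, default cluster name):

```bash
export AWS_PROFILE=awsuser
aws sts get-caller-identity      # confirm account and rights first

terraform init                   # TF 0.14.0 or higher
terraform plan                   # paste the bucket ARN when prompted
terraform apply                  # takes 10 to 20 minutes

aws eks update-kubeconfig --region eu-west-1 \
  --name wrongsecrets-exercise-cluster \
  --kubeconfig ~/.kube/wrongsecrets
export KUBECONFIG=~/.kube/wrongsecrets

./build-an-deploy-aws.sh         # calico, secrets management, autoscaling, etc.
```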
```diff
@@ -137,15 +135,18 @@ The documentation below is auto-generated to give insight on what's created via
 
 | Name | Version |
 |------|---------|
-| <a name="provider_aws"></a> [aws](#provider\_aws) | 4.31.0 |
-| <a name="provider_http"></a> [http](#provider\_http) | 3.1.0 |
+| <a name="provider_aws"></a> [aws](#provider\_aws) | 4.48.0 |
+| <a name="provider_http"></a> [http](#provider\_http) | 3.2.1 |
 | <a name="provider_random"></a> [random](#provider\_random) | 3.4.3 |
 
 ## Modules
 
 | Name | Source | Version |
 |------|--------|---------|
-| <a name="module_eks"></a> [eks](#module\_eks) | terraform-aws-modules/eks/aws | 18.30.2 |
+| <a name="module_cluster_autoscaler_irsa_role"></a> [cluster\_autoscaler\_irsa\_role](#module\_cluster\_autoscaler\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.9.0 |
+| <a name="module_ebs_csi_irsa_role"></a> [ebs\_csi\_irsa\_role](#module\_ebs\_csi\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.9.0 |
+| <a name="module_eks"></a> [eks](#module\_eks) | terraform-aws-modules/eks/aws | 19.4.2 |
+| <a name="module_load_balancer_controller_irsa_role"></a> [load\_balancer\_controller\_irsa\_role](#module\_load\_balancer\_controller\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.9.0 |
 | <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 3.18.1 |
 
 ## Resources
```
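The module list shows the heart of the refactor: three `iam-role-for-service-accounts-eks` instances now create the IRSA roles that the shell script used to create through `eksctl` and `aws iam create-policy`. Since the modules generate the actual role names, a rough post-apply check could grep for them (the name filter below is a guess, not a guarantee):

```bash
# Look for IRSA roles created by the new Terraform modules; exact names
# are generated, so adjust the filter to what 'terraform output' reports.
aws iam list-roles \
  --query "Roles[?contains(RoleName, 'cluster-autoscaler')].RoleName"
```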
```diff
@@ -199,7 +200,13 @@ The documentation below is auto-generated to give insight on what's created via
 | Name | Description |
 |------|-------------|
 | <a name="output_cluster_endpoint"></a> [cluster\_endpoint](#output\_cluster\_endpoint) | Endpoint for EKS control plane. |
+| <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | The id of the cluster |
+| <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | The EKS cluster name |
 | <a name="output_cluster_security_group_id"></a> [cluster\_security\_group\_id](#output\_cluster\_security\_group\_id) | Security group ids attached to the cluster control plane. |
-| <a name="output_irsa_role"></a> [irsa\_role](#output\_irsa\_role) | The role ARN used in the IRSA setup |
+| <a name="output_ebs_role"></a> [ebs\_role](#output\_ebs\_role) | EBS CSI driver role |
+| <a name="output_ebs_role_arn"></a> [ebs\_role\_arn](#output\_ebs\_role\_arn) | EBS CSI driver role |
+| <a name="output_irsa_role"></a> [irsa\_role](#output\_irsa\_role) | The role name used in the IRSA setup |
+| <a name="output_irsa_role_arn"></a> [irsa\_role\_arn](#output\_irsa\_role\_arn) | The role ARN used in the IRSA setup |
 | <a name="output_secrets_manager_secret_name"></a> [secrets\_manager\_secret\_name](#output\_secrets\_manager\_secret\_name) | The name of the secrets manager secret |
+| <a name="output_state_bucket_name"></a> [state\_bucket\_name](#output\_state\_bucket\_name) | Terraform s3 state bucket name |
 <!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
```
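These added outputs are the contract the refactored `build-an-deploy-aws.sh` consumes via `terraform output -raw`. A small pre-flight check along these lines (not part of the commit) confirms the contract holds before the script runs:

```bash
# Verify every output the deploy script depends on is present.
for out in cluster_name state_bucket_name irsa_role_arn ebs_role_arn; do
  terraform output -raw "$out" >/dev/null \
    || { echo "missing terraform output: ${out}" >&2; exit 1; }
done
```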

aws/build-an-deploy-aws.sh (76 additions, 57 deletions)
```diff
@@ -5,9 +5,10 @@ echo "Make sure you have updated your AWS credentials and your kubeconfig prior
 echo "For this to work the AWS kubernetes cluster must have access to the same local registry / image cache which 'docker build ...' writes its image to"
 echo "For example docker-desktop with its included k8s cluster"
 
-echo "NOTE: WE ARE WORKING HERE WITH A 5 LEGGED BALANCER on aWS which costs money by themselves!"
+echo "NOTE: WE ARE WORKING HERE WITH A 5 LEGGED LOAD BALANCER on AWS which costs money by themselves!"
 
-echo "NOTE2: please replace balancer.cookie.cookieParserSecret witha value you fanchy and ensure you have TLS on (see outdated guides)."
+echo "NOTE 2: You can replace balancer.cookie.cookieParserSecret with a value you fancy."
+echo "Note 3: Ensure you turn TLS on :)."
 
 echo "Usage: ./build-an-deploy-aws.sh "
 
@@ -17,17 +18,10 @@ checkCommandsAvailable helm aws kubectl eksctl sed
 if test -n "${AWS_REGION-}"; then
   echo "AWS_REGION is set to <$AWS_REGION>"
 else
-  AWS_REGION=eu-west-1
+  export AWS_REGION=eu-west-1
   echo "AWS_REGION is not set or empty, defaulting to ${AWS_REGION}"
 fi
 
-if test -n "${CLUSTERNAME-}"; then
-  secho "CLUSTERNAME is set to <$CLUSTERNAME> which is different than the default. Please update the cluster-autoscaler-policy.json."
-else
-  CLUSTERNAME=wrongsecrets-exercise-cluster
-  echo "CLUSTERNAME is not set or empty, defaulting to ${CLUSTERNAME}"
-fi
-
 echo "Checking for compatible shell"
 case "$SHELL" in
 *bash*)
```
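The `CLUSTERNAME` prompt disappears here because the name is read from Terraform state in the next hunk. The other change, `export AWS_REGION`, matters because only exported variables reach child processes such as the aws CLI; a small illustration of the difference:

```bash
# Without 'export', the assignment stays local to the shell and child
# processes such as the aws CLI never see it:
AWS_REGION=eu-west-1
aws eks list-clusters            # falls back to the profile's default region

# With 'export', every subsequent command inherits the value, which is
# what the refactored script now relies on:
export AWS_REGION=eu-west-1
aws eks list-clusters            # queries eu-west-1
```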
```diff
@@ -45,12 +39,18 @@ esac
 ACCOUNT_ID=$(aws sts get-caller-identity | jq '.Account' -r)
 echo "ACCOUNT_ID=${ACCOUNT_ID}"
 
+CLUSTERNAME="$(terraform output -raw cluster_name)"
+STATE_BUCKET="$(terraform output -raw state_bucket_name)"
+IRSA_ROLE_ARN="$(terraform output -raw irsa_role_arn)"
+EBS_ROLE_ARN="$(terraform output -raw ebs_role_arn)"
 
-version="$(uuidgen)"
+echo "CLUSTERNAME=${CLUSTERNAME}"
+echo "STATE_BUCKET=${STATE_BUCKET}"
+echo "IRSA_ROLE_ARN=${IRSA_ROLE_ARN}"
+echo "EBS_ROLE_ARN=${EBS_ROLE_ARN}"
 
-AWS_REGION="eu-west-1"
+version="$(uuidgen)"
 
-echo "Install autoscaler first!"
 echo "If the below output is different than expected: please hard stop this script (running aws sts get-caller-identity first)"
 
 aws sts get-caller-identity
```
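Because the script now pulls its configuration from Terraform outputs, it has to run from a directory that can see that state (the `aws` folder with the configured S3 backend). A hedged sketch of a guard you could put before these reads (not part of the commit):

```bash
# Abort early when the Terraform outputs are not reachable, e.g. when the
# script is started from the wrong directory or before 'terraform apply'.
if ! CLUSTERNAME="$(terraform output -raw cluster_name 2>/dev/null)"; then
  echo "Cannot read terraform outputs - run from the aws/ folder after 'terraform apply'" >&2
  exit 1
fi
```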
```diff
@@ -59,23 +59,23 @@ echo "Giving you 4 seconds before we add autoscaling"
 
 sleep 4
 
-echo "Installing policies and service accounts"
+# echo "Installing policies and service accounts"
 
-aws iam create-policy \
-  --policy-name AmazonEKSClusterAutoscalerPolicy \
-  --policy-document file://cluster-autoscaler-policy.json
+# aws iam create-policy \
+# --policy-name AmazonEKSClusterAutoscalerPolicy \
+# --policy-document file://cluster-autoscaler-policy.json
 
-echo "Installing iamserviceaccount"
+# echo "Installing iamserviceaccount"
 
-eksctl create iamserviceaccount \
-  --cluster=$CLUSTERNAME \
-  --region=$AWS_REGION \
-  --namespace=kube-system \
-  --name=cluster-autoscaler \
-  --role-name=AmazonEKSClusterAutoscalerRole \
-  --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy \
-  --override-existing-serviceaccounts \
-  --approve
+# eksctl create iamserviceaccount \
+# --cluster=$CLUSTERNAME \
+# --region=$AWS_REGION \
+# --namespace=kube-system \
+# --name=cluster-autoscaler \
+# --role-name=AmazonEKSClusterAutoscalerRole \
+# --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy \
+# --override-existing-serviceaccounts \
+# --approve
 
 echo "Deploying the k8s autoscaler for eks through kubectl"
 
@@ -87,7 +87,7 @@ kubectl apply -f cluster-autoscaler-autodiscover.yaml
 echo "annotating service account for cluster-autoscaler"
 kubectl annotate serviceaccount cluster-autoscaler \
   -n kube-system \
-  eks.amazonaws.com/role-arn=arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKSClusterAutoscalerRole
+  eks.amazonaws.com/role-arn=${CLUSTER_AUTOSCALER}
 
 kubectl patch deployment cluster-autoscaler \
   -n kube-system \
```
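After the annotation step you can check that the service account actually carries the IRSA role; a quick verification using standard kubectl (not part of the script):

```bash
# Print the role ARN annotation on the autoscaler's service account;
# dots in the annotation key must be escaped in jsonpath.
kubectl get serviceaccount cluster-autoscaler -n kube-system \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```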
```diff
@@ -123,43 +123,62 @@ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/late
 
 wait
 
-DEFAULT_PASSWORD=thankyou
-#TODO: REWRITE ABOVE, REWRITE THE HARDCODED DEPLOYMENT VALS INTO VALUES AND OVERRIDE THEM HERE!
-echo "default password is ${DEFAULT_PASSWORD}"
+# if passed as arguments, use those
+# otherwise, create new default values
+
+if [[ -z $APP_PASSWORD ]]; then
+  echo "No app password passed, creating a new one"
+  APP_PASSWORD="$(uuidgen)"
+else
+  echo "App password already set"
+fi
+
+if [[ -z $CREATE_TEAM_HMAC ]]; then
+  CREATE_TEAM_HMAC="$(openssl rand -base64 24)"
+else
+  echo "Create team HMAC already set"
+fi
+
+if [[ -z $COOKIE_PARSER_SECRET ]]; then
+  COOKIE_PARSER_SECRET="$(openssl rand -base64 24)"
+else
+  echo "Cookie parser secret already set"
+fi
+
+echo "App password is ${APP_PASSWORD}"
 helm upgrade --install mj ../helm/wrongsecrets-ctf-party \
-  --set="imagePullPolicy=Always" \
   --set="balancer.env.K8S_ENV=aws" \
-  --set="balancer.env.IRSA_ROLE=arn:aws:iam::${ACCOUNT_ID}:role/wrongsecrets-secret-manager" \
-  --set="balancer.env.REACT_APP_ACCESS_PASSWORD=${DEFAULT_PASSWORD}" \
-  --set="balancer.cookie.cookieParserSecret=thisisanewrandomvaluesowecanworkatit" \
-  --set="balancer.repository=jeroenwillemsen/wrongsecrets-balancer" \
-  --set="balancer.replicas=4" \
-  --set="wrongsecretsCleanup.repository=jeroenwillemsen/wrongsecrets-ctf-cleaner" \
-  --set="wrongsecrets.ctfKey=test" # this key isn't actually necessary in a setup with CTFd
+  --set="balancer.env.IRSA_ROLE=${IRSA_ROLE_ARN}" \
+  --set="balancer.env.REACT_APP_ACCESS_PASSWORD=${APP_PASSWORD}" \
+  --set="balancer.env.REACT_APP_S3_BUCKET_URL=s3://${STATE_BUCKET}" \
+  --set="balancer.env.REACT_APP_CREATE_TEAM_HMAC_KEY=${CREATE_TEAM_HMAC}" \
+  --set="balancer.cookie.cookieParserSecret=${COOKIE_PARSER_SECRET}"
+
+# echo "Installing EBS CSI driver"
+# eksctl create iamserviceaccount \
+# --name ebs-csi-controller-sa \
+# --namespace kube-system \
+# --cluster $CLUSTERNAME \
+# --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
+# --approve \
+# --role-only \
+# --role-name AmazonEKS_EBS_CSI_DriverRole
+# --region $AWS_REGION
+
+# echo "managing EBS CSI Driver as a separate eks addon"
+# eksctl create addon --name aws-ebs-csi-driver \
+# --cluster $CLUSTERNAME \
+# --service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole \
+# --force \
+# --region $AWS_REGION
 
 # Install CTFd
 
-echo "Installing EBS CSI driver"
-eksctl create iamserviceaccount \
-  --name ebs-csi-controller-sa \
-  --namespace kube-system \
-  --cluster $CLUSTERNAME \
-  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
-  --approve \
-  --role-only \
-  --role-name AmazonEKS_EBS_CSI_DriverRole
-  --region $AWS_REGION
-
-echo "managing EBS CSI Driver as a separate eks addon"
-eksctl create addon --name aws-ebs-csi-driver \
-  --cluster $CLUSTERNAME \
-  --service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole \
-  --force \
-  --region $AWS_REGION
+echo "Installing CTFd"
 
 export HELM_EXPERIMENTAL_OCI=1
 kubectl create namespace ctfd
-helm -n ctfd install ctfd oci://ghcr.io/bman46/ctfd/ctfd \
+helm upgrade --install ctfd -n ctfd oci://ghcr.io/bman46/ctfd/ctfd \
   --set="redis.auth.password=$(openssl rand -base64 24)" \
   --set="mariadb.auth.rootPassword=$(openssl rand -base64 24)" \
   --set="mariadb.auth.password=$(openssl rand -base64 24)" \
```
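The three `if [[ -z ... ]]` blocks make the secrets overridable from the environment instead of hardcoded, so a deployment can pin its own values. Usage would look like this (the values are illustrative):

```bash
# Override the generated defaults by setting the variables up front;
# anything left unset is generated fresh by the script.
APP_PASSWORD="my-ctf-password" \
CREATE_TEAM_HMAC="$(openssl rand -base64 24)" \
COOKIE_PARSER_SECRET="$(openssl rand -base64 24)" \
./build-an-deploy-aws.sh
```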

aws/cluster-autoscaler-policy.json (33 additions, 33 deletions)
Every changed line in this file carries identical content on both sides of the diff, so the change is a whitespace-only re-indentation; the policy itself is unchanged:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeImages",
        "ec2:GetInstanceTypesFromInstanceRequirements",
        "eks:DescribeNodegroup"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/k8s.io/cluster-autoscaler/wrongsecrets-exercise-cluster": "owned"
        }
      }
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeScalingActivities",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*"
    }
  ]
}
```
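The `Condition` block scopes the write actions to auto-scaling groups tagged as owned by `wrongsecrets-exercise-cluster`. To see which groups would match, newer AWS CLI versions can filter on that tag key (treat this as a sketch; older CLI releases lack `--filters` on this subcommand):

```bash
# List ASGs carrying the tag key the policy conditions on.
aws autoscaling describe-auto-scaling-groups \
  --filters "Name=tag-key,Values=k8s.io/cluster-autoscaler/wrongsecrets-exercise-cluster" \
  --query "AutoScalingGroups[].AutoScalingGroupName"
```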

aws/k8s/ctfd_resources/index_fragment.html (1 addition, 1 deletion)
```diff
@@ -11,4 +11,4 @@ <h4 class="text-center">
 <a href="challenges">Click here</a> to start hacking!
 </h4>
 </div>
-</div>
+</div>
```

The removed and added lines read identically here, so the change differs only in whitespace that this capture does not preserve (most likely indentation or a trailing newline at end of file).
