
Commit 1cafc6d

added 2021-11-29-kubernetes-bootstrap-a-cluster-with-kubeadm-on-aws-ec2.md
1 parent e0e6184 commit 1cafc6d

File tree

2 files changed: +260 −1 lines


_posts/aws/2021-11-14-aws-ec2-launch-instances-the-hard-way-with-cli.md

Lines changed: 1 addition & 1 deletion
@@ -188,7 +188,7 @@ We have also enabled public IP address as we need to SSH in to the instances fro
 
 Great, so our instances are created finally.
 ```
-$ aws ec2 describe-instances | jq -r '.Reservations[0] | .Instances[] | select(.SubnetId==env.KUBEADM_SUBNET_ID) | .InstanceId'
+$ aws ec2 describe-instances | jq -r '.Reservations[] | .Instances[] | select(.SubnetId==env.KUBEADM_SUBNET_ID) | .InstanceId'
 <i-id1>
 <i-id2>
 <i-id3>

2021-11-29-kubernetes-bootstrap-a-cluster-with-kubeadm-on-aws-ec2.md

Lines changed: 259 additions & 0 deletions
@@ -0,0 +1,259 @@
---
categories: kubernetes
title: kubernetes set prerequisites for kubeadm based cluster in aws with cli
---

[kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) is one of the popular tools used for bootstrapping kubernetes. Here we would be setting up the [prerequisites](https://theithollow.com/2020/01/13/deploy-kubernetes-on-aws/) on AWS that are essential before launching the cluster.

This is a continuation of this [blog](https://networkandcode.github.io/aws/ec2/2021/11/14/aws-ec2-launch-instances-the-hard-way-with-cli.html) where we launched the instances via the CLI. If you followed that, you should have a file k8s-node-ips.txt with the list of instance IPs.

```
$ cat k8s-node-ips.txt
<K8S_MASTER_IP>
<K8S_NODE1_IP>
<K8S_NODE2_IP>

$ ips=$(<k8s-node-ips.txt)
```

Let's proceed with setting up the prerequisites...

## Hostname
Ensure the hostname matches the private DNS name of the instance. Let's first check the present hostname.
```
$ for ip in $ips; do ssh -i ~/.ssh/kubeadmKeyPair.pem ubuntu@$ip hostname; done
ip-10-0-0-9
ip-10-0-0-6
ip-10-0-0-4
```

And then check the private DNS name.
```
$ for ip in $ips; do ssh -i ~/.ssh/kubeadmKeyPair.pem ubuntu@$ip "curl http://169.254.169.254/latest/meta-data/local-hostname --silent; echo"; done
ip-10-0-0-9.us-east-2.compute.internal
ip-10-0-0-6.us-east-2.compute.internal
ip-10-0-0-4.us-east-2.compute.internal
```
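
If your instances enforce IMDSv2, the plain curl above would be rejected with a 401 and you would first need a session token. Here is a sketch of the same check using the token-based metadata endpoints, assuming IMDSv2 is enforced:
```
# request a short-lived IMDSv2 token on each node, then query the metadata with it
$ for ip in $ips; do ssh -i ~/.ssh/kubeadmKeyPair.pem ubuntu@$ip 'TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token -H "X-aws-ec2-metadata-token-ttl-seconds: 300"); curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/local-hostname; echo'; done
```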

Note that the AWS region in this blog is different from the one in the instance-launch [blog](https://networkandcode.github.io/aws/ec2/2021/11/14/aws-ec2-launch-instances-the-hard-way-with-cli.html); however, most of the concepts are still the same.

Ok, so we need to set the hostname to match the private DNS name, so that the region and the compute.internal domain get appended to the hostname.
```
$ for ip in $ips; do ssh -i ~/.ssh/kubeadmKeyPair.pem ubuntu@$ip 'sudo hostnamectl set-hostname $(curl http://169.254.169.254/latest/meta-data/local-hostname --silent)'; done
```
Note the single quotes around the remote command; with double quotes the $(curl ...) would expand on the local machine instead of on each instance.

Let's verify.
```
$ for ip in $ips; do ssh -i ~/.ssh/kubeadmKeyPair.pem ubuntu@$ip hostname; done
ip-10-0-0-9.us-east-2.compute.internal
ip-10-0-0-6.us-east-2.compute.internal
ip-10-0-0-4.us-east-2.compute.internal
```

So the hostname is now as expected. Note that you could also enable DNS hostnames at the VPC level.
```
$ aws ec2 modify-vpc-attribute --vpc-id $KUBEADM_VPC_ID --enable-dns-hostnames
```
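
If you go the VPC route, you can read the attribute back to confirm it took effect; a quick check, assuming the same $KUBEADM_VPC_ID variable:
```
# EnableDnsHostnames should report "Value": true once enabled
$ aws ec2 describe-vpc-attribute --vpc-id $KUBEADM_VPC_ID --attribute enableDnsHostnames
```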

## IAM policies
We have to set up different [policies](https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2) for the control plane and the worker nodes. Let's begin with the control plane.

Define the control plane policy.
```
$ cat > k8s-control-plane-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF
```
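
A heredoc typo can easily produce invalid JSON, which IAM would reject with a MalformedPolicyDocument error; since jq is already on the box, a quick sanity check before creating the policy:
```
# jq exits non-zero if the file is not valid JSON
$ jq empty k8s-control-plane-policy.json && echo valid
valid
```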

Create the control plane policy.
```
$ aws iam create-policy --policy-name k8s-control-plane-policy --policy-document file://k8s-control-plane-policy.json
```

Likewise, repeat the steps for the worker nodes.

Define the worker node policy.
```
$ cat > k8s-worker-nodes-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
EOF
```

Create the worker nodes policy.
```
$ aws iam create-policy --policy-name k8s-worker-nodes-policy --policy-document file://k8s-worker-nodes-policy.json
```
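
To confirm both customer managed policies exist, we can filter list-policies down to the local scope; the two names we created should come back:
```
# --scope Local restricts the output to customer managed policies
$ aws iam list-policies --scope Local | jq -r '.Policies[] | select(.PolicyName | startswith("k8s")) | .PolicyName'
k8s-control-plane-policy
k8s-worker-nodes-policy
```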

## Trust policy
We shall define a trust policy with EC2 as the trusted entity, so that we can attach it to the roles we are about to create.
```
$ cat > ec2-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
```

## Roles
It's time to create the roles, one for the control plane and another for the worker nodes.
```
$ aws iam create-role --role-name k8s-control-plane-role --assume-role-policy-document file://ec2-trust-policy.json

$ aws iam create-role --role-name k8s-worker-nodes-role --assume-role-policy-document file://ec2-trust-policy.json
```
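
You can read a role back to confirm the trust relationship; for example:
```
# the AssumeRolePolicyDocument field should echo the EC2 trust policy
$ aws iam get-role --role-name k8s-control-plane-role | jq '.Role.AssumeRolePolicyDocument'
```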

Find the policy ARNs.
```
$ export K8S_CONTROL_PLANE_POLICY_ARN=$(aws iam list-policies | jq -r '.Policies[] | select(.PolicyName=="k8s-control-plane-policy") | .Arn')

$ echo $K8S_CONTROL_PLANE_POLICY_ARN
arn:aws:iam::<account-id>:policy/k8s-control-plane-policy

$ export K8S_WORKER_NODES_POLICY_ARN=$(aws iam list-policies | jq -r '.Policies[] | select(.PolicyName=="k8s-worker-nodes-policy") | .Arn')

$ echo $K8S_WORKER_NODES_POLICY_ARN
arn:aws:iam::<account-id>:policy/k8s-worker-nodes-policy
```

Attach the policies to the roles.
```
$ aws iam attach-role-policy --role-name k8s-control-plane-role --policy-arn $K8S_CONTROL_PLANE_POLICY_ARN

$ aws iam attach-role-policy --role-name k8s-worker-nodes-role --policy-arn $K8S_WORKER_NODES_POLICY_ARN
```
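
And verify the attachments took effect:
```
# each role should list the one customer managed policy we attached
$ aws iam list-attached-role-policies --role-name k8s-control-plane-role | jq -r '.AttachedPolicies[].PolicyName'
k8s-control-plane-policy

$ aws iam list-attached-role-policies --role-name k8s-worker-nodes-role | jq -r '.AttachedPolicies[].PolicyName'
k8s-worker-nodes-policy
```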

## Instance Profiles
Create [instance profiles](https://aws.amazon.com/blogs/security/new-attach-an-aws-iam-role-to-an-existing-amazon-ec2-instance-by-using-the-aws-cli/) for the EC2 instances.
```
$ aws iam create-instance-profile --instance-profile-name k8s-control-plane-instance-profile

$ aws iam create-instance-profile --instance-profile-name k8s-worker-nodes-instance-profile
```

And add roles to these instance profiles.
```
$ aws iam add-role-to-instance-profile --role-name k8s-control-plane-role --instance-profile-name k8s-control-plane-instance-profile

$ aws iam add-role-to-instance-profile --role-name k8s-worker-nodes-role --instance-profile-name k8s-worker-nodes-instance-profile
```
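
The profiles still have to be associated with the running instances for the roles to reach the nodes. A sketch of that step, assuming (hypothetically) that the first ID in instance-ids.txt, which we build in the Tags section below, belongs to the control plane node and the rest are workers:
```
# hypothetical split: first instance ID is the control plane, the rest are workers
$ master_id=$(head -1 instance-ids.txt)
$ worker_ids=$(tail -n +2 instance-ids.txt)

$ aws ec2 associate-iam-instance-profile --instance-id $master_id --iam-instance-profile Name=k8s-control-plane-instance-profile

$ for id in $worker_ids; do aws ec2 associate-iam-instance-profile --instance-id $id --iam-instance-profile Name=k8s-worker-nodes-instance-profile; done
```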

## Tags
We have to add [tags](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/create-tags.html) to the AWS resources in the format kubernetes.io/cluster/<cluster-name>: owned; if we keep kubernetes as the cluster name, it would be kubernetes.io/cluster/kubernetes: owned.

Add tags to the VPC, subnet, internet gateway, and route table.
```
$ aws ec2 create-tags --tags "Key=kubernetes.io/cluster/kubernetes,Value=owned" --resources $KUBEADM_VPC_ID

$ aws ec2 create-tags --tags "Key=kubernetes.io/cluster/kubernetes,Value=owned" --resources $KUBEADM_SUBNET_ID

$ aws ec2 create-tags --tags "Key=kubernetes.io/cluster/kubernetes,Value=owned" --resources $KUBEADM_IGW_ID

$ aws ec2 create-tags --tags "Key=kubernetes.io/cluster/kubernetes,Value=owned" --resources $KUBEADM_RTB_ID
```
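
You can spot-check any of them with describe-tags; for example, on the VPC:
```
# lists all tags on the VPC, including the cluster ownership tag we just added
$ aws ec2 describe-tags --filters "Name=resource-id,Values=$KUBEADM_VPC_ID"
```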

Add tags to the EC2 instances. We pipe the instance IDs through tee to save them to a file, so that we can loop over them next.
```
$ aws ec2 describe-instances | jq -r '.Reservations[] | .Instances[] | select(.SubnetId==env.KUBEADM_SUBNET_ID) | .InstanceId' | tee instance-ids.txt
<i-id1>
<i-id2>
<i-id3>

$ ids=$(<instance-ids.txt)

$ for id in $ids; do aws ec2 create-tags --tags "Key=kubernetes.io/cluster/kubernetes,Value=owned" --resources $id; done
```
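
As a side note, create-tags accepts multiple resource IDs in a single call, so the loop above can be collapsed into one command with the same $ids variable:
```
# word splitting expands $ids into one --resources argument per instance ID
$ aws ec2 create-tags --tags "Key=kubernetes.io/cluster/kubernetes,Value=owned" --resources $ids
```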

Alright, so I think we are done with the prerequisites; we may have to revisit them, though, if we face an issue while launching the cluster. Thank you for reading!!!

--end-of-post--
