
Commit 59f0587

Merge pull request #3401 from LucaLanziani/documentation/multitenant-eks-setup
first multitenancy example
2 parents 5ca13bb + 96adc2c

2 files changed, +260 -0 lines

docs/book/src/SUMMARY_PREFIX.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -11,6 +11,7 @@
   - [Spot instances](./topics/spot-instances.md)
   - [Machine Pools](./topics/machinepools.md)
   - [Multi-tenancy](./topics/multitenancy.md)
+  - [Multi-tenancy in EKS-managed clusters](./topics/full-multitenancy-implementation.md)
   - [EKS Support](./topics/eks/index.md)
   - [Prerequisites](./topics/eks/prerequisites.md)
   - [Enabling EKS Support](./topics/eks/enabling.md)
```

docs/book/src/topics/full-multitenancy-implementation.md

Lines changed: 259 additions & 0 deletions

@@ -0,0 +1,259 @@

# Multitenancy setup with EKS and service accounts

See [multitenancy](./multitenancy.md) for more details on enabling the functionality and the various options you can use.

In this example, we are going to see how to create the following architecture with Cluster API:

```
                          AWS Account 1
                          +--------------------+
                          |                    |
        +-----------------+-> EKS - (Managed)  |
        |                 |                    |
        |                 +--------------------+
AWS Account 0             AWS Account 2
+-------+------------+    +--------------------+
|       |            |    |                    |
|  EKS - (Manager)---+----+-> EKS - (Managed)  |
|       |            |    |                    |
+-------+------------+    +--------------------+
        |                 AWS Account 3
        |                 +--------------------+
        |                 |                    |
        +-----------------+-> EKS - (Managed)  |
                          |                    |
                          +--------------------+
```

Specifically, we will only cover:

- AWS Account 0 (aka the Manager account, which hosts the management cluster where the Cluster API controllers reside)
- AWS Account 1 (aka a Managed account, used for an EKS-managed workload cluster)

## Prerequisites

- A bootstrap cluster ([kind](https://kind.sigs.k8s.io/))
- The AWS CLI installed
- Two (or more) AWS accounts
- [clusterawsadm](https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases)
- [clusterctl](https://github.com/kubernetes-sigs/cluster-api/releases)

## Set variables

**Note:** the credentials below are those of the Manager account.

Export the following environment variables (a sample export block follows the list):

- `AWS_REGION`
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_SESSION_TOKEN` (if you are using multi-factor authentication)
- `AWS_MANAGER_ACCOUNT_ID`
- `AWS_MANAGED_ACCOUNT_ID`
- `OIDC_PROVIDER_ID="WeWillReplaceThisLater"`
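
For example (a sketch only; the region, keys, and account IDs below are placeholders to replace with your own values):

```bash
# Placeholder values - replace with your own region, credentials, and account IDs.
export AWS_REGION="eu-west-1"
export AWS_ACCESS_KEY_ID="<manager-account-access-key>"
export AWS_SECRET_ACCESS_KEY="<manager-account-secret-key>"
# export AWS_SESSION_TOKEN="<session-token>"  # only when using multi-factor authentication
export AWS_MANAGER_ACCOUNT_ID="111111111111"
export AWS_MANAGED_ACCOUNT_ID="222222222222"
export OIDC_PROVIDER_ID="WeWillReplaceThisLater" # replaced once the manager cluster exists
```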

## Prepare the manager account

As explained in the [EKS prerequisites page](./eks/prerequisites.md), we need a couple of IAM roles in the account to build the cluster; the `clusterawsadm` CLI can take care of creating them.

We know that the CAPA provider in the Manager account should be able to assume roles in the Managed account (AWS Account 1).

We can create a `clusterawsadm` configuration that adds an inline policy to the `controllers.cluster-api-provider-aws.sigs.k8s.io` role.

```bash
envsubst > bootstrap-manager-account.yaml << EOL
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
  eks: # This section should be changed according to your requirements
    iamRoleCreation: false
    managedMachinePool:
      disable: true
    fargate:
      disable: false
  clusterAPIControllers: # This is the section that really matters
    disabled: false
    extraStatements:
    - Action:
      - "sts:AssumeRole"
      Effect: "Allow"
      Resource: ["arn:aws:iam::${AWS_MANAGED_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io"]
    trustStatements:
    - Action:
      - "sts:AssumeRoleWithWebIdentity"
      Effect: "Allow"
      Principal:
        Federated:
        - "arn:aws:iam::${AWS_MANAGER_ACCOUNT_ID}:oidc-provider/oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_PROVIDER_ID}"
      Condition:
        "ForAnyValue:StringEquals":
          "oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_PROVIDER_ID}:sub":
          - system:serviceaccount:capa-system:capa-controller-manager
          - system:serviceaccount:capa-eks-control-plane-system:capa-eks-control-plane-controller-manager # Include if also using EKS
EOL
```

Let's provision the Manager role with:

```bash
clusterawsadm bootstrap iam create-cloudformation-stack --config bootstrap-manager-account.yaml
```
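
If you want to confirm that the roles were created, you can inspect the CloudFormation stack (a sketch; it assumes the default stack name used by `clusterawsadm`):

```bash
# Print the status of the bootstrap stack; expect CREATE_COMPLETE (or UPDATE_COMPLETE).
aws cloudformation describe-stacks \
  --stack-name cluster-api-provider-aws-sigs-k8s-io \
  --query "Stacks[0].StackStatus"
```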

## Manager cluster

The following commands assume you have the AWS credentials for the Manager account exported, and that your kube context is pointing at the bootstrap cluster.

### Install the Cluster API provider in the bootstrap cluster

```bash
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
export EKS=true
export EXP_MACHINE_POOL=true
clusterctl init --infrastructure aws --target-namespace capi-providers
```
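
To check that the providers came up, look at the pods in the `--target-namespace` we used above:

```bash
# All provider controller pods should reach the Running state.
kubectl get pods --namespace capi-providers
```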

### Generate the cluster configuration

**NOTE:** You might want to update the Kubernetes version and the VPC CNI add-on version to currently available versions when running this command:

- [Kubernetes versions](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html)
- [VPC CNI add-on versions](https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html) (don't forget to add the `v` prefix)

```bash
export AWS_SSH_KEY_NAME=default
export VPC_ADDON_VERSION="v1.10.2-eksbuild.1"
clusterctl generate cluster manager --flavor eks-managedmachinepool-vpccni --kubernetes-version v1.20.2 --worker-machine-count=3 > manager-cluster.yaml
```

### Apply the cluster configuration

```bash
kubectl apply -f manager-cluster.yaml
```

**WAIT:** time for a drink; the cluster is being created, and we have to wait for it to be ready before continuing.
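
Rather than polling by hand, something like the following should work (a sketch; it assumes the Cluster API `Cluster` object exposes the standard `Ready` condition):

```bash
# Block until the manager cluster reports Ready (up to 30 minutes).
kubectl wait --for=condition=Ready cluster/manager --timeout=30m
```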

### IAM OIDC identity provider

Follow the AWS documentation to create an IAM OIDC identity provider for the manager cluster: https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
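
As a sketch of how to look up the value needed in the next step with the AWS CLI (the cluster name is a placeholder; find yours with `aws eks list-clusters`):

```bash
# Read the cluster's OIDC issuer URL and keep only the trailing provider ID.
ISSUER=$(aws eks describe-cluster --name "<manager-eks-cluster-name>" \
  --query "cluster.identity.oidc.issuer" --output text)
echo "${ISSUER##*/}" # this value becomes OIDC_PROVIDER_ID in the next step
```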

### Update the trustStatements above

```bash
export OIDC_PROVIDER_ID=<OIDC_ID_OF_THE_CLUSTER>
```

Now run the [Prepare the manager account](./full-multitenancy-implementation.md#prepare-the-manager-account) step again.

### Get manager cluster credentials

```bash
kubectl --namespace=default get secret manager-user-kubeconfig \
  -o jsonpath={.data.value} | base64 --decode \
  > manager.kubeconfig
```

### Install the CAPA provider in the manager cluster

Here we install the Cluster API providers into the manager cluster and create a service account that uses the `controllers.cluster-api-provider-aws.sigs.k8s.io` role for the management components.

```bash
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
export EKS=true
export EXP_MACHINE_POOL=true
export AWS_CONTROLLER_IAM_ROLE=arn:aws:iam::${AWS_MANAGER_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io
clusterctl init --kubeconfig manager.kubeconfig --infrastructure aws --target-namespace capi-providers
```
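
To double-check that the controller's service account picked up the role annotation (a sketch; it assumes the service account is named `capa-controller-manager` and lives in the `capi-providers` target namespace used above):

```bash
# Print the IRSA role annotation; it should match AWS_CONTROLLER_IAM_ROLE.
kubectl --kubeconfig manager.kubeconfig --namespace capi-providers \
  get serviceaccount capa-controller-manager \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```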

## Managed cluster

Time to build the managed cluster, this time from the manager cluster rather than the bootstrap cluster.

### Generate the cluster configuration

**NOTE:** As with the manager cluster, you might want to update the Kubernetes and VPC CNI add-on versions.

```bash
export AWS_SSH_KEY_NAME=default
export VPC_ADDON_VERSION="v1.10.2-eksbuild.1"
clusterctl generate cluster managed --flavor eks-managedmachinepool-vpccni --kubernetes-version v1.20.2 --worker-machine-count=3 > managed-cluster.yaml
```

Edit the file and add the following to the `AWSManagedControlPlane` resource spec so that the controller assumes the role in the managed account when creating the cluster:

```yaml
  identityRef:
    kind: AWSClusterRoleIdentity
    name: managed-account
```

### Create the identities

```bash
envsubst > cluster-role-identity.yaml << EOL
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterRoleIdentity
metadata:
  name: managed-account
spec:
  allowedNamespaces: {} # This is unsafe, since every namespace is then allowed to use the role identity
  roleARN: arn:aws:iam::${AWS_MANAGED_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io
  sourceIdentityRef:
    kind: AWSClusterControllerIdentity
    name: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSClusterControllerIdentity
metadata:
  name: default
spec:
  allowedNamespaces: {}
EOL
```

### Prepare the managed account

**NOTE:** Export the **managed** account credentials before running the following commands.

This configuration adds a trust statement to the Cluster API controllers role so that the `controllers.cluster-api-provider-aws.sigs.k8s.io` role in the Manager account is allowed to assume it.

```bash
envsubst > bootstrap-managed-account.yaml << EOL
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
  eks:
    iamRoleCreation: false # Set to true if you plan to use the EKSEnableIAM feature flag to enable automatic creation of IAM roles
    managedMachinePool:
      disable: true # Set to false to enable creation of the default node role for managed machine pools
    fargate:
      disable: false # Set to false to enable creation of the default role for the fargate profiles
  clusterAPIControllers:
    disabled: false
    trustStatements:
    - Action:
      - "sts:AssumeRole"
      Effect: "Allow"
      Principal:
        AWS:
        - "arn:aws:iam::${AWS_MANAGER_ACCOUNT_ID}:role/controllers.cluster-api-provider-aws.sigs.k8s.io"
EOL
```

Let's provision the Managed account with:

```bash
clusterawsadm bootstrap iam create-cloudformation-stack --config bootstrap-managed-account.yaml
```

### Apply the cluster configuration

**Note:** Switch back to the **manager** account credentials.

```bash
kubectl --kubeconfig manager.kubeconfig apply -f cluster-role-identity.yaml
kubectl --kubeconfig manager.kubeconfig apply -f managed-cluster.yaml
```
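
As a final check (a sketch; it assumes the managed cluster is named `managed`, so its kubeconfig secret follows the same `<name>-user-kubeconfig` pattern we used for the manager cluster):

```bash
# Watch the managed cluster come up from the manager cluster...
kubectl --kubeconfig manager.kubeconfig get clusters
# ...then pull its kubeconfig and inspect the new cluster.
kubectl --kubeconfig manager.kubeconfig --namespace=default get secret managed-user-kubeconfig \
  -o jsonpath={.data.value} | base64 --decode > managed.kubeconfig
kubectl --kubeconfig managed.kubeconfig get nodes
```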

Time for another drink; enjoy your multi-tenancy setup.
