# Multi-Cluster Tutorial

This document will walk through how to create three managed Kubernetes
clusters on separate providers (Google, Amazon and DigitalOcean), deploying:

- [Dex](https://github.com/dexidp/dex) as the OIDC issuer for all clusters,
  running only in the master cluster.

- [Gangway](https://github.com/heptiolabs/gangway) web server to authenticate
  users to Dex and help generate Kubeconfig files.

- [kube-oidc-proxy](https://github.com/jetstack/kube-oidc-proxy) to expose all
  clusters to OIDC authentication.

- [Contour](https://github.com/heptio/contour) as the ingress controller with
  TLS SNI passthrough enabled.

- [Cert-Manager](https://github.com/jetstack/cert-manager) to issue and manage
  certificates.

The tutorial will use two of the authentication methods that Dex supports,
namely, username and password, and GitHub, however [more are
available.](https://github.com/dexidp/dex#connectors)
## Prerequisites

The tutorial will be using Cert-Manager to generate certificates signed by
[Let's Encrypt](https://letsencrypt.org/) for components in all clouds using a
DNS challenge. Although not the only way to generate certificates, the
tutorial assumes that a domain belonging to your Google Cloud project will be
used, and that records for sub-domains of this domain will be created to
assign DNS to the components. A Google Cloud Service Account will be created
to manage these DNS challenges and its secrets passed to Cert-Manager.

A Service Account has been created for Terraform with its secrets stored at
`~/.config/gcloud/terraform-admin.json`. The Service Account needs at least
these IAM Roles attached:

```
Compute Admin
Kubernetes Engine Admin
...
```
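
Such a Service Account can be created ahead of time with `gcloud`. This is a
minimal sketch, assuming a hypothetical project ID `my-project` and that the
roles above correspond to `roles/compute.admin` and `roles/container.admin`:

```shell
# Sketch: create the Terraform Service Account, grant it the roles, and
# download its key. PROJECT_ID is a hypothetical placeholder.
PROJECT_ID="my-project"
SA="terraform-admin@${PROJECT_ID}.iam.gserviceaccount.com"

gcloud iam service-accounts create terraform-admin --project "$PROJECT_ID"
for role in roles/compute.admin roles/container.admin; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA}" --role "$role"
done
gcloud iam service-accounts keys create ~/.config/gcloud/terraform-admin.json \
  --iam-account "$SA"
```

Any additional roles from the list above would be granted the same way.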

The AWS credentials used by Terraform must have the relevant permissions to
create a fully fledged cluster, including creating load balancers, instance
pools etc. Typically, these environment variables must be set when running
`terraform` and deploying the manifests before OIDC authentication has been
set up:

```
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
AWS_ACCESS_KEY_ID
```

For DigitalOcean you need to get a write token from the console and export it
using this environment variable:

```
DIGITALOCEAN_TOKEN
```

## Infrastructure

First the clusters will be created, along with secrets to be used for OIDC
authentication for each cluster. The Amazon and DigitalOcean Terraform modules
have dependent resources on the Google module, so the Google module must be
created first.

```
CLOUD=google make terraform_apply
CLOUD=amazon make terraform_apply
CLOUD=digitalocean make terraform_apply
```

This will create each cluster and a Service Account to manage Google Cloud DNS
records for DNS challenges and OIDC secrets for all clusters. It should
generate a JSON configuration file for each cluster in
`./manifests/[google|amazon|digitalocean]-config.json` respectively.
70
- Copy ` config.dist.jsonnet ` to both ` gke-config.jsonnet ` and ` eks-config.jsonnet ` .
71
- These two files will hold configuration for setting up the OIDC authentication
72
- in both clusters as well as assigning DNS. Firstly, determine what sub-domain
73
- will be used for either cluster, using a domain you own in your Google Cloud
74
- Project, e.g.
87
+
88
+ Copy ` config.dist.jsonnet ` to ` config.jsonnet ` . This file will hold
89
+ configuration for setting up the OIDC authentication in all clusters as well as
90
+ assigning DNS. Firstly, determine what ` base_domain ` will be used for this
91
+ demo. Ensure the ` base_domain ` starts with a ` . ` .
92
+
93
+ The domain, which needs to be managed in Google Cloud DNS will have records
94
+ like this:
95
+
75
96
```
76
- gke.mydomain.company.net
77
- eks.mydomain.company.net
97
+ dex.mydomain.company.net
98
+ gangway-gke.mydomain.company.net
99
+ gangway-eks.mydomain.company.net
100
+ gangway-dok.mydomain.company.net
78
101
```
79
102
80
- Populate each configuration file with its corresponding domain and Let's
81
- Encrypt contract email.
103
+ Populate the configuration file with its corresponding domain
104
+ ( ` .mydomain.company.net ` in our example) and Let's Encrypt contact email.
82
105
83
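
To illustrate how these names are derived, here is a short sketch using the
running example value for `base_domain`:

```shell
# The base_domain must start with a '.'; each component hostname is
# simply prefixed onto it.
BASE_DOMAIN=".mydomain.company.net"

echo "dex${BASE_DOMAIN}"
for cluster in gke eks dok; do
  echo "gangway-${cluster}${BASE_DOMAIN}"
done
# → dex.mydomain.company.net, gangway-{gke,eks,dok}.mydomain.company.net
```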

### Dex customisations

Since the GKE cluster will be hosting Dex, the OIDC issuer, its configuration
file must specify how users will authenticate. Here we will show two methods:
username and password, and GitHub.

Usernames and passwords can be populated with the following block within the
`dex` block.

```
dex+: if $.master then {
  users: [
    $.dex.Password('[email protected]', '$2y$10$i2.tSLkchjnpvnI73iSW/OPAVriV9BWbdfM6qemBM1buNRu81.ZG.'), // plaintext: secure
  ],
} else {
},
```

The bcrypt hash of the password can be generated with `htpasswd`:

```
htpasswd -bnBC 10 "" MyVerySecurePassword | tr -d ':'
```

Dex also supports multiple 'connectors' that enable third party applications
to provide OAuth to its system. For GitHub, this involves creating an 'OAuth
App'. The `Authorization callback URL` should be populated with the Dex
callback URL, i.e. `https://dex.mydomain.company.net/callback`.

The resulting `Client ID` and `Client Secret` can then be used to populate the
configuration file:

```
dex+: if $.master then {
  connectors: [
    $.dex.Connector('github', 'GitHub', 'github', {
      clientID: 'myGithubAppClientID',
      clientSecret: 'myGithubAppClientSecret',
      orgs: [{
        name: 'company',
      }],
    }),
  ],
} else {
},
```

You can find more information on GitHub OAuth apps
[here.](https://developer.github.com/v3/oauth/)

- Finally, Dex needs to be configured to also accept the Gangway client in the EKS
132
- cluster. To do this, we add a ` dex.Client ` block in the configuration. We need to
133
- populate its redirect URL as well as the client ID and client secret using
134
- values that were created in the ` ./manifests/amazon-config.json ` by Terraform.
135
- The resulting block should would look like:
136
-
137
- ```
138
- eksClient: $.dex.Client('my_client_id_in_./manifests/amazon-config.json') + $.dex.metadata {
139
- secret: 'my_client_secret_in_./manifests/amazon-config.json',
140
- redirectURIs: [
141
- 'https://gangway.eks.mydomain.company.net/callback',
142
- ],
143
- },
144
- ```
145
-
146
- The resulting ` gke-config.jsonnet ` file should look similar to
147
-
148
- ```
149
- (import './manifests/main.jsonnet') {
150
- base_domain: 'gke.mydomain.company.net',
151
-
152
- cert_manager+: {
153
- letsencrypt_contact_email:: '[email protected] ',
154
- },
155
-
156
- dex+: {
157
- users: [
158
- $.dex.Password('[email protected] ', '$2y$10$i2.tSLkchjnpvnI73iSW/OPAVriV9BWbdfM6qemBM1buNRu81.ZG.'), // plaintext: secure
159
- ],
160
-
161
- connectors: [
162
- $.dex.Connector('github', 'GitHub', 'github', {
163
- clientID: 'myGithubAppClientID',
164
- clientSecret: 'myGithubAppClientSecret',
165
- orgs: [{
166
- name: 'company',
167
- }],
168
- }),
169
- ],
170
- },
171
-
172
- eksClient: $.dex.Client('my_client_id_in_./manifests/amazon-config.json') + $.dex.metadata {
173
- secret: 'my_client_secret_in_./manifests/amazon-config.json',
174
- redirectURIs: [
175
- 'https://gangway.eks.mydomain.company.net/callback',
176
- ],
177
- },
178
- }
179
- ```
180
-
181
- ### EKS
182
-
183
- The EKS cluster will not be hosting the dex server so only needs to be
184
- configured with its domain, Dex's domain and the Let's Encrypt contact email.
185
- The resuting ` eks-config.jsonnet ` file should look similar to:
186
-
187
- ```
188
- (import './manifests/main.jsonnet') {
189
- base_domain: 'eks.mydomain.company.net',
190
- dex_domain: 'dex.gke.mydomain.company.net',
191
- cert_manager+: {
192
- letsencrypt_contact_email:: '[email protected] ',
193
- },
194
- }
195
- ```
196
-
197
157
## Deployment

Once the configuration file has been created the manifests can be deployed:

```
$ CLOUD=google make manifests_apply
```
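
Once applied, you can optionally wait for the components to roll out. A sketch
follows; the deployment names used here are assumptions, so check the actual
names with `kubectl get deploy -n auth`:

```shell
# Wait for each (assumed) deployment in the auth namespace to be ready.
for d in dex gangway kube-oidc-proxy contour; do
  kubectl rollout status deployment "$d" -n auth --timeout=180s
done
```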

You should then see the components deployed to the cluster in the `auth`
namespace:

```
$ kubectl get pods -n auth
...
gangway-77dfdb68d-x84hj          0/1     ContainerCreating   0          11s
```

Verify that the ingress has been configured to what you were expecting.

```
$ kubectl get ingressroutes -n auth
```

Cert-Manager will then attempt to issue certificates for the components,
storing them as secrets in the `auth` namespace:

```
$ kubectl get -n auth secret
```

You can save these certificates locally, and restore them at any time using:

```
$ make manifests_backup_certificates
$ make manifests_restore_certificates
```

You can check that the DNS record has been propagated by trying to resolve it
using:

```
$ host gangway-gke.mydomain.company.net
```

Once propagated, you can then visit the Gangway URL, follow the instructions
and download a Kubeconfig configured to connect through kube-oidc-proxy.
Trying the Kubeconfig, you should be greeted with an error message that your
OIDC username does not have enough RBAC permissions to access that resource.

The EKS and DigitalOcean cluster manifests can now be deployed:

```
$ CLOUD=amazon make manifests_apply
$ CLOUD=digitalocean make manifests_apply
```

Get the AWS DNS URL for the Contour Load Balancer:

```
$ export KUBECONFIG=.kubeconfig-amazon
$ kubectl get svc -n auth
```

When components have their TLS secrets, you will then be able to login to the
Gangway portal on Amazon/DigitalOcean and download your Kubeconfig. Again,
when trying this Kubeconfig, you will initially be greeted with an
"unauthorized" error message until RBAC permissions have been granted to this
user.
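
To clear those "unauthorized" errors, RBAC permissions can be granted to the
OIDC identity. A minimal sketch, assuming a hypothetical user email and the
built-in read-only `view` ClusterRole, which writes a manifest you can then
apply with `kubectl apply -f rbac.yaml`:

```shell
# Bind the built-in 'view' ClusterRole to a hypothetical OIDC user;
# replace the user name with the identity reported by kube-oidc-proxy.
cat > rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-user-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user@example.net
EOF
```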