@@ -12,11 +12,11 @@ separate providers (GKE and EKS), deploying:
- [Cert-Manager](https://github.com/jetstack/cert-manager) to issue and manage
certificates.

-It will also demonstrate how to enable different authentication methods that dex
-supports, namely, username and password, and Github, however [more are
+It will also demonstrate how to enable different authentication methods that Dex
+supports, namely, username and password, and GitHub; however, [more are
available.](https://github.com/dexidp/dex#connectors)

-## Perquisites
+## Prerequisites

The tutorial will be using Cert-Manager to generate certificates signed by
[Let's Encrypt](https://letsencrypt.org/) for components in both GKE and EKS
using a DNS challenge. Although not the only way to generate certificates, the
@@ -25,7 +25,7 @@ project, and records of sub-domains of this domain will be created to assign DNS
to the components. A Google Cloud Service Account will be created to manage
these DNS challenges and its secrets passed to Cert-Manager.

-A Service Account has been created for terraform with it's secrets stored at
+A Service Account has been created for Terraform with its secrets stored at
`~/.config/gcloud/terraform-admin.json`. The Service Account needs at least
these IAM Roles attached:
```
@@ -41,7 +41,7 @@ Project IAM Admin
You have an AWS account with permissions to create an EKS cluster and other
relevant permissions to create a fully fledged cluster, including creating
load balancers, instance pools etc. Typically, these environment variables must
-be set when running terraform and deploying the manifests before OIDC
+be set when running `terraform` and deploying the manifests before OIDC
authentication has been set up:
```
AWS_SECRET_ACCESS_KEY
@@ -51,8 +51,8 @@ AWS_ACCESS_KEY_ID
```

## Infrastructure
First the GKE and EKS clusters will be created, along with secrets to be used for
-OIDC authentication for each cluster. The amazon terraform module has dependant
-resources on the google module, so the google module must be created first.
+OIDC authentication for each cluster. The Amazon Terraform module has resources
+that depend on the Google module, so the Google module must be created first.

```
CLOUD=google make terraform_apply
@@ -77,14 +77,14 @@ gke.mydomain.company.net
eks.mydomain.company.net
```

-Populate each configuration file with it's corresponding domain and Let's
+Populate each configuration file with its corresponding domain and Let's
Encrypt contact email.

### GKE

-Since the GKE cluster will be hosting Dex, the OIDC issuer, it's
+Since the GKE cluster will be hosting Dex, the OIDC issuer, its
configuration file must specify how users will authenticate. Here
-we will show two methods, username and password, and Github.
+we will show two methods, username and password, and GitHub.

Usernames and passwords can be populated with the following block within the
`dex` block.
@@ -106,7 +106,7 @@ htpasswd -bnBC 10 "" MyVerySecurePassword | tr -d ':'
```

Dex also supports multiple 'connectors' that enable third party applications to
-provide OAuth to it's system. For Github, this involves creating an 'OAuth App'.
+provide OAuth to its system. For GitHub, this involves creating an 'OAuth App'.
The `Authorization callback URL` should be populated with the Dex callback URL, i.e.
`https://dex.gke.mydomain.company.net/callback`.
The resulting `Client ID` and `Client Secret` can then be used to populate the
@@ -125,13 +125,13 @@ configuration file:
},
```

-You can find more information on github OAuth apps
+You can find more information on GitHub OAuth apps
[here.](https://developer.github.com/v3/oauth/)

-Finally, Dex needs to be configured to also accept the gangway client in the EKS
-cluster. To do this, we add a Dex Client block in the configuration. We need to
-populate it's redirect URL as well as the client ID and client secret using
-values that were created in the `./manifests/amazon-config.json` by terraform.
+Finally, Dex needs to be configured to also accept the Gangway client in the EKS
+cluster. To do this, we add a `dex.Client` block in the configuration. We need to
+populate its redirect URL as well as the client ID and client secret using
+values that were created in the `./manifests/amazon-config.json` by Terraform.
The resulting block should look like:

```
@@ -181,7 +181,7 @@ The resulting `gke-config.jsonnet` file should look similar to
### EKS

The EKS cluster will not be hosting the Dex server so only needs to be
-configured with it's domain, Dex's domain and the Let's Encrypt contact email.
+configured with its domain, Dex's domain and the Let's Encrypt contact email.
The resulting `eks-config.jsonnet` file should look similar to:

```
@@ -223,7 +223,7 @@ Verify that the ingress has been configured to what you were expecting.
```
$ kubectl get ingressroutes -n auth
```

-You should now see the DNS challenge attempting to be furfilled by Cert-Manager
+You should now see Cert-Manager attempting to fulfill the DNS challenge
in your DNS Zone details in the Google Cloud console.

Once complete, three TLS secrets will be generated, `gangway-tls`, `dex-tls`,
@@ -233,20 +233,20 @@ and `kube-oidc-proxy-tls`.
```
$ kubectl get -n auth secret
```

-You can save these certifcates locally, and resotre them any time using:
+You can save these certificates locally, and restore them at any time using:
```
$ make manifests_backup_certificates
$ make manifests_restore_certificates
```

-An A record can now be created so the DNS can be resolved to the Contour Load
-Balancer public IP Adress. Take a note of the external-IP address exposed:
+An `A` record can now be created so that DNS resolves to the Contour Load
+Balancer public IP address. Take note of the external IP address exposed:

```
$ kubectl get svc contour -n auth
```
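
To grab just the IP, a `jsonpath` one-liner can be used (this assumes the
`contour` service is of type `LoadBalancer` and reports an IP address, as it
does on GKE):
```
$ kubectl get svc contour -n auth -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```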

-Create an A record set with a wild card sub-domain to your domain, with some
+Create a wildcard `A` record (matching all sub-domains), with some
reasonable TTL pointing to the exposed IP address of the Contour Load Balancer.

```
@@ -275,14 +275,14 @@ $ export CLOUD=amazon
$ make manifests_apply
```

-Get the AWS DNS URL for the contour Load Balancer.
+Get the AWS DNS URL for the Contour Load Balancer.
```
$ export KUBECONFIG=.kubeconfig-amazon
$ kubectl get svc -n auth
```
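
On AWS the load balancer is exposed as a DNS hostname rather than an IP, so a
convenience one-liner to grab it might be (assuming the service is also named
`contour` in the `auth` namespace):
```
$ kubectl get svc contour -n auth -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```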

-Once the the contour LoadBalancer has an external URL, we need to create a CNAME
-record set to resolve the DNS.
+Once the Contour Load Balancer has an external URL, we need to create a `CNAME`
+record:
```
DNS name: *.eks.mydomain.company.net
Record resource type: CNAME
@@ -291,5 +291,5 @@ Canonical name: $CONTOUR_AWS_URL
```

When components have their TLS secrets, you will then be able to log in to the
Gangway portal on EKS and download your Kubeconfig. Again, when trying this
-Kubeconfig, you should initially be greeted with unauthorized to that resource
+Kubeconfig, you will initially be greeted with an "unauthorized" error message
until RBAC permissions have been granted to this user.
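
As a sketch, granting such permissions could look like the following
`ClusterRoleBinding`. The subject name must match the user's OIDC identity
claim (`admin@example.com` below is a placeholder), and the built-in
`cluster-admin` role is used purely for illustration:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin@example.com
```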