This repository was archived by the owner on May 17, 2024. It is now read-only.

Commit a11c3ed

Merge pull request #52 from jetstack/long-tutorial

Adds tutorial

2 parents eb27ef4 + e7728f6

File tree

2 files changed: +298 -0 lines changed


README.md

Lines changed: 3 additions & 0 deletions

@@ -24,6 +24,9 @@ The following is a diagram of the request flow for a user request.

## Tutorial

Directions on how to deploy OIDC authentication with multi-cluster can be found
[here.](./demo/README.md)

### Quickstart

Deployment yamls can be found in `./demo/yaml` and will require configuration to

demo/README.md

Lines changed: 295 additions & 0 deletions

@@ -0,0 +1,295 @@
# Multi-Cluster Tutorial

This document walks through creating two managed Kubernetes clusters on
separate providers (GKE and EKS), deploying:
- [Dex](https://github.com/dexidp/dex) as the OIDC issuer for both clusters.
- [Gangway](https://github.com/heptiolabs/gangway) as a web server to
  authenticate users to Dex and help generate Kubeconfig files.
- [kube-oidc-proxy](https://github.com/jetstack/kube-oidc-proxy) to expose both
  clusters to OIDC authentication.
- [Contour](https://github.com/heptio/contour) as the ingress controller with
  TLS SNI passthrough enabled.
- [Cert-Manager](https://github.com/jetstack/cert-manager) to issue and manage
  certificates.

It also demonstrates how to enable different authentication methods that Dex
supports, namely username and password, and GitHub, though [more are
available.](https://github.com/dexidp/dex#connectors)
## Prerequisites

The tutorial uses Cert-Manager to generate certificates signed by
[Let's Encrypt](https://letsencrypt.org/) for components in both GKE and EKS
using a DNS challenge. Although this is not the only way to generate
certificates, the tutorial assumes that you will use a domain belonging to your
Google Cloud project, and that records for sub-domains of this domain will be
created to assign DNS to the components. A Google Cloud Service Account will be
created to manage these DNS challenges, and its secrets passed to Cert-Manager.

A Service Account has been created for Terraform with its secrets stored at
`~/.config/gcloud/terraform-admin.json`. The Service Account needs at least
these IAM Roles attached:

```
Compute Admin
Kubernetes Engine Admin
DNS Administrator
Security Reviewer
Service Account Admin
Service Account Key Admin
Project IAM Admin
```

You also need an AWS account with permissions to create an EKS cluster, plus
the other permissions needed to create a fully fledged cluster, including
creating load balancers, instance pools, etc. Typically, these environment
variables must be set when running Terraform and deploying the manifests before
OIDC authentication has been set up:

```
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
AWS_ACCESS_KEY_ID
```
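Before running Terraform against AWS, it is worth confirming that these variables resolve to the account you expect; a quick check, assuming the AWS CLI is installed:

```shell
# Print the account, user ID and ARN the exported credentials map to.
# A failure here means the credentials are missing, expired or malformed.
aws sts get-caller-identity
```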
## Infrastructure

First, the GKE and EKS clusters will be created, along with secrets to be used
for OIDC authentication in each cluster. The Amazon Terraform module has
resources that depend on the Google module, so the Google module must be
applied first.

```
CLOUD=google make terraform_apply
CLOUD=amazon make terraform_apply
```

This will create a standard Kubernetes cluster in both EKS and GKE, a Service
Account to manage Google Cloud DNS records for DNS challenges, and OIDC secrets
for both clusters. It should generate a JSON configuration file for each
cluster, in `./manifests/google-config.json` and `./manifests/amazon-config.json`
respectively.
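It can be worth inspecting the two generated files before moving on; for example, assuming `jq` is installed:

```shell
# Pretty-print each generated per-cluster configuration file; a parse
# error or missing file indicates the terraform apply did not complete.
jq . ./manifests/google-config.json
jq . ./manifests/amazon-config.json
```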
## Configuration

Copy `config.dist.jsonnet` to both `gke-config.jsonnet` and `eks-config.jsonnet`.
These two files will hold the configuration for setting up OIDC authentication
in both clusters, as well as for assigning DNS. Firstly, determine which
sub-domain will be used for each cluster, using a domain you own in your Google
Cloud Project, e.g.

```
gke.mydomain.company.net
eks.mydomain.company.net
```

Populate each configuration file with its corresponding domain and Let's
Encrypt contact email.
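The copy step above is plain file duplication, e.g.:

```shell
# Create one editable configuration file per cluster from the template.
cp config.dist.jsonnet gke-config.jsonnet
cp config.dist.jsonnet eks-config.jsonnet
```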
### GKE

Since the GKE cluster will be hosting Dex, the OIDC issuer, its configuration
file must define how users will authenticate. Here we will show two methods:
username and password, and GitHub.

Usernames and passwords can be populated with the following block within the
`dex` block.

```
dex+: {
  users: [
    $.dex.Password('[email protected]', '$2y$10$i2.tSLkchjnpvnI73iSW/OPAVriV9BWbdfM6qemBM1buNRu81.ZG.'), // plaintext: secure
  ],
},
```

The username is the name the user authenticates with and the user identity used
for RBAC within Kubernetes. The password is a bcrypt hash of the plain text
password, which can be generated with:

```
htpasswd -bnBC 10 "" MyVerySecurePassword | tr -d ':'
```
Dex also supports multiple 'connectors' that enable third party applications to
provide OAuth to its system. For GitHub, this involves creating an 'OAuth App'.
The `Authorization callback URL` should be populated with the Dex callback URL, i.e.
`https://dex.gke.mydomain.company.net/callback`.
The resulting `Client ID` and `Client Secret` can then be used to populate the
configuration file:

```
dex+: {
  connectors: [
    $.dex.Connector('github', 'GitHub', 'github', {
      clientID: 'myGithubAppClientID',
      clientSecret: 'myGithubAppClientSecret',
      orgs: [{
        name: 'company',
      }],
    }),
  ],
},
```

You can find more information on GitHub OAuth apps
[here.](https://developer.github.com/v3/oauth/)

Finally, Dex needs to be configured to also accept the Gangway client in the EKS
cluster. To do this, we add a Dex Client block to the configuration. We need to
populate its redirect URL, as well as the client ID and client secret, using
values that were created in `./manifests/amazon-config.json` by Terraform.
The resulting block should look like:

```
eksClient: $.dex.Client('my_client_id_in_./manifests/amazon-config.json') + $.dex.metadata {
  secret: 'my_client_secret_in_./manifests/amazon-config.json',
  redirectURIs: [
    'https://gangway.eks.mydomain.company.net/callback',
  ],
},
```
The resulting `gke-config.jsonnet` file should look similar to:

```
(import './manifests/main.jsonnet') {
  base_domain: 'gke.mydomain.company.net',

  cert_manager+: {
    letsencrypt_contact_email:: '[email protected]',
  },

  dex+: {
    users: [
      $.dex.Password('[email protected]', '$2y$10$i2.tSLkchjnpvnI73iSW/OPAVriV9BWbdfM6qemBM1buNRu81.ZG.'), // plaintext: secure
    ],

    connectors: [
      $.dex.Connector('github', 'GitHub', 'github', {
        clientID: 'myGithubAppClientID',
        clientSecret: 'myGithubAppClientSecret',
        orgs: [{
          name: 'company',
        }],
      }),
    ],
  },

  eksClient: $.dex.Client('my_client_id_in_./manifests/amazon-config.json') + $.dex.metadata {
    secret: 'my_client_secret_in_./manifests/amazon-config.json',
    redirectURIs: [
      'https://gangway.eks.mydomain.company.net/callback',
    ],
  },
}
```
### EKS

The EKS cluster will not be hosting the Dex server, so it only needs to be
configured with its own domain, Dex's domain, and the Let's Encrypt contact
email. The resulting `eks-config.jsonnet` file should look similar to:

```
(import './manifests/main.jsonnet') {
  base_domain: 'eks.mydomain.company.net',
  dex_domain: 'dex.gke.mydomain.company.net',
  cert_manager+: {
    letsencrypt_contact_email:: '[email protected]',
  },
}
```
## Deployment

Once the configuration files have been created, the manifests can be deployed.
Copy or create a symbolic link from the `gke-config.jsonnet` file to
`config.jsonnet` and apply it.

```
$ ln -s gke-config.jsonnet config.jsonnet
$ export CLOUD=google
$ make manifests_apply
```
You should then see the components deployed to the cluster in the `auth`
namespace.

```
$ export KUBECONFIG=.kubeconfig-google
$ kubectl get po -n auth
NAME                       READY   STATUS              RESTARTS   AGE
contour-55c46d7969-f9gfl   2/2     Running             0          46s
dex-7455744797-p8pql       0/1     ContainerCreating   0          12s
gangway-77dfdb68d-x84hj    0/1     ContainerCreating   0          11s
```
Verify that the ingress has been configured as you expect.

```
$ kubectl get ingressroutes -n auth
```
You should now see the DNS challenge being fulfilled by Cert-Manager
in your DNS Zone details in the Google Cloud console.

Once complete, three TLS secrets will be generated: `gangway-tls`, `dex-tls`,
and `kube-oidc-proxy-tls`.

```
$ kubectl get -n auth secret
```
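To watch for those three secrets specifically, rather than scanning the full list, the names from the paragraph above can be passed directly to `kubectl`:

```shell
# Exits non-zero until all three TLS secrets exist in the auth namespace.
kubectl get secret -n auth gangway-tls dex-tls kube-oidc-proxy-tls
```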
You can save these certificates locally, and restore them at any time, using:

```
$ make manifests_backup_certificates
$ make manifests_restore_certificates
```
An A record can now be created so that the DNS name resolves to the Contour
load balancer's public IP address. Take a note of the external IP address
exposed:

```
$ kubectl get svc contour -n auth
```
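If you prefer to capture the IP in a variable for the record-creation step, a JSONPath query against the service status works:

```shell
# Extract the external IP of the Contour load balancer service.
CONTOUR_IP=$(kubectl get svc contour -n auth \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$CONTOUR_IP"
```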
Create an A record set with a wildcard sub-domain on your domain, with a
reasonable TTL, pointing at the exposed IP address of the Contour load
balancer.

```
DNS name: *.gke.mydomain.company.net
Record resource type: A
IPv4 address: $CONTOUR_IP
```
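As an alternative to the Cloud console, the same record can be created with the `gcloud` CLI; `my-zone` below is a placeholder for your Cloud DNS managed zone name:

```shell
# Create the wildcard A record through a Cloud DNS transaction.
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add "$CONTOUR_IP" \
  --name='*.gke.mydomain.company.net.' --ttl=300 --type=A --zone=my-zone
gcloud dns record-sets transaction execute --zone=my-zone
```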
You can check that the DNS record has been propagated by trying to resolve it
(note that `host` takes a bare hostname, not a URL):

```
$ host gangway.gke.mydomain.company.net
```
Once propagated, you can visit the Gangway URL, follow the instructions, and
download your Kubeconfig with OIDC authentication, pointing at the
kube-oidc-proxy. When trying the Kubeconfig, you should be greeted with an
error message saying that your OIDC username does not have enough RBAC
permissions to access that resource.
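Granting access is then a matter of binding RBAC rules to the username Dex asserts. A deliberately broad sketch, with `<oidc-username>` standing in for the identity from your Kubeconfig:

```shell
# Bind the OIDC identity to cluster-admin; scope this down in practice.
kubectl create clusterrolebinding oidc-user-admin \
  --clusterrole=cluster-admin \
  --user='<oidc-username>'
```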
The EKS cluster manifests can now be deployed using `eks-config.jsonnet`.

```
$ rm config.jsonnet && ln -s eks-config.jsonnet config.jsonnet
$ export CLOUD=amazon
$ make manifests_apply
```
Get the AWS DNS name of the Contour load balancer.

```
$ export KUBECONFIG=.kubeconfig-amazon
$ kubectl get svc -n auth
```
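On AWS the load balancer is exposed as a hostname rather than an IP address, so the JSONPath field differs from the GKE case:

```shell
# Extract the ELB hostname of the Contour load balancer service.
CONTOUR_AWS_URL=$(kubectl get svc contour -n auth \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "$CONTOUR_AWS_URL"
```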
Once the Contour LoadBalancer has an external URL, we need to create a CNAME
record set to resolve the DNS.

```
DNS name: *.eks.mydomain.company.net
Record resource type: CNAME
Canonical name: $CONTOUR_AWS_URL
```
Once the components have their TLS secrets, you will be able to log in to the
Gangway portal on EKS and download your Kubeconfig. Again, when trying this
Kubeconfig, you should initially be denied access to the resource until RBAC
permissions have been granted to this user.
