Terraform is used to _provision_ the cloud infrastructure and any required local resources, including:

- Virtual Cloud Network (VCN) with dedicated subnets for etcd, masters, and workers in each availability domain
- Dedicated compute instances for etcd, Kubernetes master and worker nodes in each availability domain
- Public or Private TCP/SSL OCI Load Balancer to distribute traffic to the Kubernetes Master(s) (see `k8s_master_lb_access`)
- Private OCI Load Balancer to distribute traffic to the node(s) in the etcd cluster
- _Optional_ NAT instance for Internet-bound traffic when the input variable `control_plane_subnet_access` is set to `private`
- 2048-bit SSH RSA Key-Pair for compute instances when not overridden by the `ssh_private_key` and `ssh_public_key_openssh` input variables
- Self-signed CA and TLS cluster certificates when not overridden by the input variables `ca_cert`, `ca_key`, etc.
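
For example, a sketch of supplying the two access-related inputs when planning (both variables are described in the configuration sections below):

```bash
# Sketch: a strictly private control plane. Setting control_plane_subnet_access
# to private also provisions the optional NAT instance(s) listed above.
$ terraform plan \
    -var control_plane_subnet_access=private \
    -var k8s_master_lb_access=private
```
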

The Kubernetes cluster will be running after the configuration is applied successfully.

### Access the Kubernetes API server

##### Access the cluster using kubectl, continuous build pipelines, or other clients

If you've chosen to configure a _public_ Load Balancer for your Kubernetes Master(s) (i.e. `control_plane_subnet_access=public`, or `control_plane_subnet_access=private` _and_ `k8s_master_lb_access=public`), you can interact with your cluster using kubectl, continuous build pipelines, or any other client over the Internet. A working kubeconfig can be found in the ./generated folder or generated on the fly using the `kubeconfig` Terraform output variable.

```bash
# warning: 0.0.0.0/0 is wide open. Consider limiting HTTPS ingress to a smaller set of IPs.
$ kubectl cluster-info
$ kubectl get nodes
```
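
For example, a minimal sketch of pointing `kubectl` at the generated credentials (the target path below is arbitrary):

```bash
# Write the kubeconfig produced by Terraform to a file and use it with kubectl.
$ terraform output kubeconfig > /tmp/kubeconfig
$ export KUBECONFIG=/tmp/kubeconfig
$ kubectl get nodes
```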

If you've chosen to configure a strictly _private_ cluster (i.e. `control_plane_subnet_access=private` _and_ `k8s_master_lb_access=private`), access to the cluster will be limited to the NAT instance(s), similar to how you would use a bastion host, e.g.:

```bash
$ terraform plan -var public_subnet_ssh_ingress=0.0.0.0/0
```

Note: for easier access, consider setting up an SSH tunnel between your local host and a NAT instance.
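
A minimal sketch of such a tunnel, using placeholder values (the key path, user, and IP addresses below are not produced by this project and must be replaced with your own):

```bash
# Sketch: forward a local port to the private Kubernetes Master Load Balancer
# through a NAT instance. <user>, <nat_public_ip>, <master_lb_private_ip>, and
# the key path are placeholders.
$ ssh -i /path/to/ssh_key -N \
    -L 6443:<master_lb_private_ip>:443 \
    <user>@<nat_public_ip>
```
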
##### Access the cluster using Kubernetes Dashboard

Assuming `kubectl` has access to the Kubernetes Master Load Balancer, you can use `kubectl proxy` to access the Dashboard:

```
kubectl proxy &
```

In this example deployment, kubernetes-dashboard is running at https://129.146.22.175:443/ui.
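
As a quick check that the proxy is working (a sketch; `kubectl proxy` listens on 127.0.0.1:8001 by default):

```bash
# Query the API server through the local proxy.
$ curl -s http://127.0.0.1:8001/version
# The Dashboard UI is then served through the API server proxy; the exact path
# depends on the Dashboard version deployed by the installer.
```
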
##### SSH into OCI Instances

If you've chosen to launch your control plane instances in _public_ subnets (i.e. `control_plane_subnet_access=public`), you can open SSH access to your master nodes by adding the following to your `terraform.tfvars` file:

```bash
173
174
# warning: 0.0.0.0/0 is wide open. remember to undo this.
```

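A sketch of the pattern; `master_ssh_ingress` below is a hypothetical stand-in for whichever SSH-ingress input(s) the configuration tables define for your master subnets:

```bash
# Sketch only: master_ssh_ingress is a hypothetical variable name used for
# illustration; check the configuration tables for the actual input name.
# warning: 0.0.0.0/0 is wide open. remember to undo this.
$ echo 'master_ssh_ingress = "0.0.0.0/0"' >> terraform.tfvars
$ terraform plan
$ terraform apply
```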

If you've chosen to launch your control plane instances in _private_ subnets (i.e. `control_plane_subnet_access=private`), you'll need to first SSH into a NAT instance, then to a worker, master, or etcd node:

```bash
197
199
$ terraform plan -var public_subnet_ssh_ingress=0.0.0.0/0
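# Sketch (placeholders): after allowing SSH to the public subnet, hop through a
# NAT instance to reach a private node. <user>, <nat_public_ip>, and
# <node_private_ip> must be replaced with values from your own deployment.
$ ssh -A <user>@<nat_public_ip>
$ ssh <user>@<node_private_ip>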
```

region | us-phoenix-1 | String value of the OCI region
network_access | public | Cluster access can be `public` or `private`

##### _Public_ Network Access (default)

When `control_plane_subnet_access=public` and `k8s_master_lb_access=public`, control plane instances and the Kubernetes Master Load Balancer are provisioned in _public_ subnets and automatically get both a public and a private IP address. If the inbound security rules allow, you can communicate with them directly via their public IPs.

The following input variables are used to configure the inbound security rules on the public etcd, master, and worker subnets:

##### _Private_ Network Access

When `control_plane_subnet_access=private` and `k8s_master_lb_access=private`, control plane instances and the Kubernetes Master Load Balancer are provisioned in _private_ subnets. In this scenario, we will also set up an instance in a public subnet to perform Network Address Translation (NAT) for instances in the private subnets so they can send outbound traffic. If your worker nodes need to accept incoming traffic from the Internet, an additional front-end Load Balancer will need to be provisioned in the public subnet to route traffic to workers in the private subnets.

The following input variables are used to configure the inbound security rules for the NAT instance(s) and any other instance or front-end Load Balancer in the public subnet:

public_subnet_ssh_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed to SSH to instances in the public subnet (including NAT instances)
public_subnet_http_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed access to port 80 on instances in the public subnet
public_subnet_https_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed access to port 443 on instances in the public subnet
natInstanceShape | VM.Standard1.1 | OCI shape for the optional NAT instance. Size according to the amount of expected _outbound_ traffic from nodes and pods
nat_instance_ad1_enabled | true | whether to provision a NAT instance in AD 1 (only used when control_plane_subnet_access=private)
nat_instance_ad2_enabled | false | whether to provision a NAT instance in AD 2 (only used when control_plane_subnet_access=private)
nat_instance_ad3_enabled | false | whether to provision a NAT instance in AD 3 (only used when control_plane_subnet_access=private)

*Note*

When `control_plane_subnet_access=private`, you do not need to set the etcd, master, and worker security rules since they already allow all inbound traffic between instances in the VCN.

##### _Private_ and _Public_ Network Access

It is also valid to set `control_plane_subnet_access=private` while keeping `k8s_master_lb_access=public`. In this scenario, instances in the cluster's control plane will still be provisioned in _private_ subnets and require NAT instance(s). However, the Load Balancer for your back-end Kubernetes Master(s) will be launched in a public subnet and will therefore be accessible over the Internet if the inbound security rules allow.

*Note*

When `control_plane_subnet_access=private`, you still cannot SSH directly into your instances without going through a NAT instance.

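For example, a sketch of requesting this mixed mode on the command line:

```bash
# Sketch: private control plane subnets combined with a public Master Load Balancer.
$ terraform plan \
    -var control_plane_subnet_access=private \
    -var k8s_master_lb_access=public
```
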
#### Instance Shape and Placement Configuration
name | default | description
------------ | ------- | -----------
etcdAd1Count | 1 | number of etcd nodes to create in Availability Domain 1
etcdAd2Count | 0 | number of etcd nodes to create in Availability Domain 2
etcdAd3Count | 0 | number of etcd nodes to create in Availability Domain 3
etcd_lb_enabled | "true" | enable/disable the etcd load balancer. "true" uses the etcd load balancer IP; "false" uses a list of etcd instance IPs
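
For example, a sketch of disabling the etcd load balancer so that the etcd instance IPs are used directly:

```bash
# Sketch: skip the etcd load balancer and point clients at the etcd instance IPs.
$ terraform plan  -var etcd_lb_enabled=false
$ terraform apply -var etcd_lb_enabled=false
```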