
Commit a7469de (parent: 6347db8)

Add support for keeping instances private while still giving outside clients access through a public LB.

13 files changed: +123 −97

README.md (49 additions & 30 deletions)
````diff
@@ -20,9 +20,9 @@ Terraform is used to _provision_ the cloud infrastructure and any required local
 - Virtual Cloud Network (VCN) with dedicated subnets for etcd, masters, and workers in each availability domain
 - Dedicated compute instances for etcd, Kubernetes master and worker nodes in each availability domain
-- TCP/SSL OCI Load Balancer to to distribute traffic to the Kubernetes masters
-- Private OCI Load Balancer to distribute traffic to the etcd cluster
-- _Optional_ NAT instance for Internet-bound traffic when the input variable `network_access` is set to `private`
+- Public or private TCP/SSL OCI Load Balancer to distribute traffic to the Kubernetes Master(s) (see `k8s_master_lb_access`)
+- Private OCI Load Balancer to distribute traffic to the node(s) in the etcd cluster
+- _Optional_ NAT instance for Internet-bound traffic when the input variable `control_plane_subnet_access` is set to `private`
 - 2048-bit SSH RSA Key-Pair for compute instances when not overridden by the `ssh_private_key` and `ssh_public_key_openssh` input variables
 - Self-signed CA and TLS cluster certificates when not overridden by the input variables `ca_cert`, `ca_key`, etc.
````

````diff
@@ -84,10 +84,11 @@ The Kubernetes cluster will be running after the configuration is applied succes
 ### Access the Kubernetes API server
 
-##### Access the cluster using kubectl
+##### Access the cluster using kubectl, continuous build pipelines, or other clients
 
-If you've chosen to configure a _public_ networks (i.e. `network_access=public`), you can use `kubectl` to
-interact with your cluster from your local machine using the kubeconfig found in the ./generated folder or using the `kubeconfig` Terraform output variable.
+If you've chosen to configure a _public_ Load Balancer for your Kubernetes Master(s) (i.e. `control_plane_subnet_access=public` or
+`control_plane_subnet_access=private` _and_ `k8s_master_lb_access=public`), you can interact with your cluster using kubectl, continuous build
+pipelines, or any other client over the Internet. A working kubeconfig can be found in the ./generated folder or generated on the fly using the `kubeconfig` Terraform output variable.
 
 ```bash
 # warning: 0.0.0.0/0 is wide open. Consider limiting HTTPs ingress to smaller set of IPs.
@@ -102,7 +103,8 @@ $ kubectl cluster-info
 $ kubectl get nodes
 ```
 
-If you've chosen to configure a _private_ networks (i.e. `network_access=private`), you'll need to first SSH into the NAT instance, then to one of the private nodes in the cluster (similar to how you would use a bastion host):
+If you've chosen to configure a strictly _private_ cluster (i.e. `control_plane_subnet_access=private` _and_ `k8s_master_lb_access=private`),
+access to the cluster will be limited to the NAT instance(s), similar to how you would use a bastion host, e.g.
 
 ```bash
 $ terraform plan -var public_subnet_ssh_ingress=0.0.0.0/0
````
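Since the new README text says the kubeconfig can be "generated on the fly", here is a minimal sketch of doing that with the `kubeconfig` Terraform output variable mentioned above (assuming `terraform apply` has already completed):

```bash
# Write the generated kubeconfig to a file and point kubectl at it
terraform output kubeconfig > generated/kubeconfig
export KUBECONFIG=$(pwd)/generated/kubeconfig
kubectl cluster-info
kubectl get nodes
```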
````diff
@@ -111,19 +113,17 @@ $ terraform output ssh_private_key > generated/instances_id_rsa
 $ chmod 600 generated/instances_id_rsa
 $ scp -i generated/instances_id_rsa generated/instances_id_rsa opc@NAT_INSTANCE_PUBLIC_IP:/home/opc/
 $ ssh -i generated/instances_id_rsa opc@NAT_INSTANCE_PUBLIC_IP
-```
-
-```bash
 nat$ ssh -i /home/opc/instances_id_rsa opc@K8SMASTER_INSTANCE_PRIVATE_IP
 master$ kubectl cluster-info
 master$ kubectl get nodes
 ```
 
-Note, for easier access, consider setting up an SSH tunnel between your local host and the NAT instance.
+Note, for easier access, consider setting up an SSH tunnel between your local host and a NAT instance.
 
 ##### Access the cluster using Kubernetes Dashboard
 
-To access the Kubernetes Dashboard, use `kubectl proxy`:
+Assuming `kubectl` has access to the Kubernetes Master Load Balancer, you can use `kubectl proxy` to access the
+Dashboard:
 
 ```
 kubectl proxy &
````
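A minimal sketch of the SSH tunnel suggested in the note above, reusing the placeholders from the earlier examples. The forwarded port is an assumption; adjust it to wherever your API server listens (the dashboard URL in the next hunk suggests 443):

```bash
# Forward a local port to a master through the NAT instance
# (assumption: the API serves on 443; adjust for your deployment)
ssh -i generated/instances_id_rsa -N \
  -L 8443:K8SMASTER_INSTANCE_PRIVATE_IP:443 \
  opc@NAT_INSTANCE_PUBLIC_IP &

# kubectl can then reach the cluster via the tunnel endpoint
# (TLS verification against the cluster CA may require extra flags)
kubectl --server=https://localhost:8443 get nodes
```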
````diff
@@ -167,7 +167,8 @@ kubernetes-dashboard is running at https://129.146.22.175:443/ui
 
 ##### SSH into OCI Instances
 
-If you've chosen to configure a public cluster, you can open access SSH access to your master nodes by adding the following to your `terraform.tfvars` file:
+If you've chosen to launch your control plane instances in _public_ subnets (i.e. `control_plane_subnet_access=public`), you can open
+SSH access to your master nodes by adding the following to your `terraform.tfvars` file:
 
 ```bash
 # warning: 0.0.0.0/0 is wide open. remember to undo this.
````
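The hunk truncates the actual `terraform.tfvars` example. Purely as a hedged illustration (the real README lines may differ), such an entry would use the SSH ingress variables that appear elsewhere in this diff:

```bash
# terraform.tfvars -- warning: 0.0.0.0/0 is wide open; remember to undo this
master_ssh_ingress = "0.0.0.0/0"
etcd_ssh_ingress   = "0.0.0.0/0"
```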
````diff
@@ -191,7 +192,8 @@ $ terraform output worker_public_ips
 $ ssh -i `pwd`/generated/instances_id_rsa opc@K8SWORKER_INSTANCE_PUBLIC_IP
 ```
 
-If you've chosen to configure a private cluster (i.e. `network_access=private`), you'll need to first SSH into the NAT instance, then to a worker, master, or etcd node:
+If you've chosen to launch your control plane instances in _private_ subnets (i.e. `control_plane_subnet_access=private`), you'll
+need to first SSH into a NAT instance, then to a worker, master, or etcd node:
 
 ```bash
 $ terraform plan -var public_subnet_ssh_ingress=0.0.0.0/0
````
````diff
@@ -221,15 +223,17 @@ region | us-phoenix-1 | String value of
 ### Optional Input Variables:
 
-#### Cluster Access Configuration
+#### Network Access Configuration
+
+name | default | description
+------------------------------------|-------------|------------
+control_plane_subnet_access | public | Whether instances in the control plane are launched in public or private subnets
+k8s_master_lb_access | public | Whether the Kubernetes Master Load Balancer is launched in a public or private subnet
 
-name | default | description
-------------------------------------|-------------------------|------------
-network_access | public | Clusters access can be `public` or `private`
 
 ##### _Public_ Network Access (default)
 
-If `network_access=public`, instances in the cluster's control plane will be provisioned in _public_ subnets and automatically get both a public and private IP address. If the inbound security rules allow, you can communicate with them directly via their public IPs.
+When `control_plane_subnet_access=public` and `k8s_master_lb_access=public`, control plane instances and the Kubernetes Master Load Balancer are provisioned in _public_ subnets and automatically get both a public and private IP address. If the inbound security rules allow, you can communicate with them directly via their public IPs.
 
 The following input variables are used to configure the inbound security rules on the public etcd, master, and worker subnets:
````
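To make the two new variables concrete, a strictly private cluster can be selected in the same `-var` style this README uses elsewhere — a minimal sketch, assuming defaults for everything else:

```bash
# Strictly private: control plane subnets and the master LB are both private
terraform plan \
  -var control_plane_subnet_access=private \
  -var k8s_master_lb_access=private
```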
````diff
@@ -244,24 +248,39 @@ worker_nodeport_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation
 
 ##### _Private_ Network Access
 
-If `network_access=private`, instances in the cluster's control plane and their Load Balancers will be provisioned in _private_ subnets. In this scenario, we will also set up an instance in a public subnet to perform Network Address Translation (NAT) for instances in the private subnets so they can send outbound traffic. If your worker nodes need to accept incoming traffic from the Internet, an additional Load Balancer will need to be provisioned in the public subnet to route traffic to workers in the private subnets.
+When `control_plane_subnet_access=private` and `k8s_master_lb_access=private`, control plane instances and the Kubernetes Master Load Balancer
+are provisioned in _private_ subnets. In this scenario, we will also set up an instance in a public subnet to
+perform Network Address Translation (NAT) for instances in the private subnets so they can send outbound traffic.
+If your worker nodes need to accept incoming traffic from the Internet, an additional front-end Load Balancer will
+need to be provisioned in the public subnet to route traffic to workers in the private subnets.
 
-The following input variables are used to configure the inbound security rules for the NAT instance and any other
-instance or front-end Load Balancer in the public subnet:
+The following input variables are used to configure the inbound security rules for the NAT instance(s) and any other instance or front-end Load Balancer in the public subnet:
 
 name | default | description
 ------------------------------------|-------------------------|------------
-public_subnet_ssh_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed to SSH to instances in the public subnet (including the NAT instance)
+public_subnet_ssh_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed to SSH to instances in the public subnet (including NAT instances)
 public_subnet_http_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed access to port 80 on instances in the public subnet
 public_subnet_https_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed access to port 443 on instances in the public subnet
 natInstanceShape | VM.Standard1.1 | OCI shape for the optional NAT instance. Size according to the amount of expected _outbound_ traffic from nodes and pods
-nat_instance_ad1_enabled | true | whether to provision a NAT instance in AD 1 (only used when network_access=private)
-nat_instance_ad2_enabled | false | whether to provision a NAT instance in AD 2 (only used when network_access=private)
-nat_instance_ad3_enabled | false | whether to provision a NAT instance in AD 3 (only used when network_access=private)
+nat_instance_ad1_enabled | true | whether to provision a NAT instance in AD 1 (only used when control_plane_subnet_access=private)
+nat_instance_ad2_enabled | false | whether to provision a NAT instance in AD 2 (only used when control_plane_subnet_access=private)
+nat_instance_ad3_enabled | false | whether to provision a NAT instance in AD 3 (only used when control_plane_subnet_access=private)
+
+*Note*
+
+When `control_plane_subnet_access=private`, you do not need to set the etcd, master, and worker security rules since they already
+allow all inbound traffic between instances in the VCN.
+
+##### _Private_ and _Public_ Network Access
+
+It is also valid to set `control_plane_subnet_access=private` while keeping `k8s_master_lb_access=public`. In this scenario, instances in the
+cluster's control plane will still be provisioned in _private_ subnets and require NAT instance(s). However, the Load
+Balancer for your back-end Kubernetes Master(s) will be launched in a public subnet and will therefore be accessible
+over the Internet if the inbound security rules allow.
````
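The mixed scenario just described can be selected the same way — a sketch, where `nat_instance_ad2_enabled` is optional and included only to show a second NAT instance:

```bash
# Private control plane with a publicly reachable master LB
terraform plan \
  -var control_plane_subnet_access=private \
  -var k8s_master_lb_access=public \
  -var nat_instance_ad2_enabled=true
```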
````diff
 
 *Note*
 
-If `network_access=private`, you do not need to set the etcd, master, and worker security rules since they already allow all inbound traffic between instances in the VCN.
+When `control_plane_subnet_access=private`, you still cannot SSH directly into your instances without going through a NAT instance.
 
 #### Instance Shape and Placement Configuration
 name | default | description
````
````diff
@@ -279,8 +298,8 @@ etcdAd1Count | 1 | number of etcd n
 etcdAd2Count | 0 | number of etcd nodes to create in Availability Domain 2
 etcdAd3Count | 0 | number of etcd nodes to create in Availability Domain 3
 etcd_lb_enabled | "true" | enable/disable the etcd load balancer. "true" use the etcd load balancer ip, "false" use a list of etcd instance ips
-etcdLBShape | 100Mbps | etcd OCI Load Balancer shape / bandwidth
-k8sMasterLBShape | 100Mbps | master OCI Load Balancer shape / bandwidth
+etcdLBShape | 100Mbps | etcd cluster OCI Load Balancer shape / bandwidth
+k8sMasterLBShape | 100Mbps | Kubernetes Master OCI Load Balancer shape / bandwidth
 
 #### TLS Certificates & SSH key pair
 name | default | description
````
````diff
@@ -443,7 +462,7 @@ master_docker_max_log_size = "100m"
 ## Known issues and limitations
 * Scaling or replacing etcd members in or out after the initial deployment is currently unsupported
 * Creating a service with `--type=LoadBalancer` is currently unsupported
-* Failover or HA configuration for the NAT instance is currently unsupported
+* Failover or HA configuration for NAT instance(s) is currently unsupported
 
 ## Contributing
````
instances/k8sworker/main.tf (1 addition & 1 deletion)

````diff
@@ -21,7 +21,7 @@ resource "oci_core_instance" "TFInstanceK8sWorker" {
     tags = "group:k8s-worker"
   }
 
-  # TODO handle scenario when network_access = "private"
+  # TODO handle scenario when control_plane_subnet_access = "private"
   provisioner "remote-exec" {
     when = "destroy"
````

k8s-oci.tf (8 additions & 6 deletions)

````diff
@@ -24,7 +24,7 @@ module "vcn" {
   additional_k8smaster_security_lists_ids = "${var.additional_k8s_master_security_lists_ids}"
   additional_k8sworker_security_lists_ids = "${var.additional_k8s_worker_security_lists_ids}"
   additional_public_security_lists_ids    = "${var.additional_public_security_lists_ids}"
-  network_access                          = "${var.network_access}"
+  control_plane_subnet_access             = "${var.control_plane_subnet_access}"
   etcd_ssh_ingress                        = "${var.etcd_ssh_ingress}"
   etcd_cluster_ingress                    = "${var.etcd_cluster_ingress}"
   master_ssh_ingress                      = "${var.master_ssh_ingress}"
@@ -353,11 +353,13 @@ module "etcd-private-lb" {
 }
 
 module "k8smaster-public-lb" {
-  source                = "network/loadbalancers/k8smaster"
-  compartment_ocid      = "${var.compartment_ocid}"
-  is_private            = "${var.network_access == "private" ? "true": "false"}"
-  k8smaster_subnet_0_id = "${module.vcn.k8smaster_subnet_ad1_id}"
-  k8smaster_subnet_1_id = "${var.network_access == "private" ? "": module.vcn.k8smaster_subnet_ad2_id}"
+  source           = "network/loadbalancers/k8smaster"
+  compartment_ocid = "${var.compartment_ocid}"
+  is_private       = "${var.k8s_master_lb_access == "private" ? "true": "false"}"
+
+  # Handle case where var.k8s_master_lb_access=public, but var.control_plane_subnet_access=private
+  k8smaster_subnet_0_id = "${var.k8s_master_lb_access == "private" ? module.vcn.k8smaster_subnet_ad1_id: coalesce(join(" ", list(module.vcn.public_subnet_ad1_id)), join(" ", list(module.vcn.k8smaster_subnet_ad1_id)))}"
+  k8smaster_subnet_1_id = "${var.k8s_master_lb_access == "private" ? "": coalesce(join(" ", list(module.vcn.public_subnet_ad2_id)), join(" ", list(module.vcn.k8smaster_subnet_ad2_id)))}"
   k8smaster_ad1_private_ips = "${module.instances-k8smaster-ad1.private_ips}"
   k8smaster_ad2_private_ips = "${module.instances-k8smaster-ad2.private_ips}"
   k8smaster_ad3_private_ips = "${module.instances-k8smaster-ad3.private_ips}"
````
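One hedged way to sanity-check which subnets the master Load Balancer actually received after an apply (standard Terraform tooling; the exact resource addresses inside the module are not shown in this diff):

```bash
# List the resources created by the master LB module,
# then `terraform state show` any address from the output
terraform state list | grep k8smaster-public-lb
```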

network/vcn/datasources.tf (9 additions & 9 deletions)

````diff
@@ -12,60 +12,60 @@ data "oci_core_images" "ImageOCID" {
 
 # Gets a list of VNIC attachments on the NAT instance in AD 1
 data "oci_core_vnic_attachments" "NATInstanceAD1Vnics" {
-  count               = "${(var.network_access == "private") && (var.nat_instance_ad1_enabled == "true") ? "1" : "0"}"
+  count               = "${(var.control_plane_subnet_access == "private") && (var.nat_instance_ad1_enabled == "true") ? "1" : "0"}"
   compartment_id      = "${var.compartment_ocid}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0],"name")}"
   instance_id         = "${oci_core_instance.NATInstanceAD1.id}"
 }
 
 # Gets the OCID of the first (default) VNIC on the NAT instance in AD 1
 data "oci_core_vnic" "NATInstanceAD1Vnic" {
-  count   = "${(var.network_access == "private") && (var.nat_instance_ad1_enabled == "true") ? "1" : "0"}"
+  count   = "${(var.control_plane_subnet_access == "private") && (var.nat_instance_ad1_enabled == "true") ? "1" : "0"}"
   vnic_id = "${lookup(data.oci_core_vnic_attachments.NATInstanceAD1Vnics.vnic_attachments[0],"vnic_id")}"
 }
 
 # List Private IPs on the NAT instance in AD 1
 data "oci_core_private_ips" "NATInstanceAD1PrivateIPDatasource" {
-  count   = "${(var.network_access == "private") && (var.nat_instance_ad1_enabled == "true") ? "1" : "0"}"
+  count   = "${(var.control_plane_subnet_access == "private") && (var.nat_instance_ad1_enabled == "true") ? "1" : "0"}"
   vnic_id = "${data.oci_core_vnic.NATInstanceAD1Vnic.id}"
 }
 
 # Gets a list of VNIC attachments on the NAT instance in AD 2
 data "oci_core_vnic_attachments" "NATInstanceAD2Vnics" {
-  count               = "${(var.network_access == "private") && (var.nat_instance_ad2_enabled == "true") ? "1" : "0"}"
+  count               = "${(var.control_plane_subnet_access == "private") && (var.nat_instance_ad2_enabled == "true") ? "1" : "0"}"
   compartment_id      = "${var.compartment_ocid}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[1],"name")}"
   instance_id         = "${oci_core_instance.NATInstanceAD2.id}"
 }
 
 # Gets the OCID of the first (default) VNIC on the NAT instance in AD 2
 data "oci_core_vnic" "NATInstanceAD2Vnic" {
-  count   = "${(var.network_access == "private") && (var.nat_instance_ad2_enabled == "true") ? "1" : "0"}"
+  count   = "${(var.control_plane_subnet_access == "private") && (var.nat_instance_ad2_enabled == "true") ? "1" : "0"}"
   vnic_id = "${lookup(data.oci_core_vnic_attachments.NATInstanceAD2Vnics.vnic_attachments[0],"vnic_id")}"
 }
 
 # List Private IPs on the NAT instance in AD 2
 data "oci_core_private_ips" "NATInstanceAD2PrivateIPDatasource" {
-  count   = "${(var.network_access == "private") && (var.nat_instance_ad2_enabled == "true") ? "1" : "0"}"
+  count   = "${(var.control_plane_subnet_access == "private") && (var.nat_instance_ad2_enabled == "true") ? "1" : "0"}"
   vnic_id = "${data.oci_core_vnic.NATInstanceAD2Vnic.id}"
 }
 
 # Gets a list of VNIC attachments on the NAT instance in AD 3
 data "oci_core_vnic_attachments" "NATInstanceAD3Vnics" {
-  count               = "${(var.network_access == "private") && (var.nat_instance_ad3_enabled == "true") ? "1" : "0"}"
+  count               = "${(var.control_plane_subnet_access == "private") && (var.nat_instance_ad3_enabled == "true") ? "1" : "0"}"
   compartment_id      = "${var.compartment_ocid}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[2],"name")}"
   instance_id         = "${oci_core_instance.NATInstanceAD3.id}"
 }
 
 # Gets the OCID of the first (default) VNIC on the NAT instance in AD 3
 data "oci_core_vnic" "NATInstanceAD3Vnic" {
-  count   = "${(var.network_access == "private") && (var.nat_instance_ad3_enabled == "true") ? "1" : "0"}"
+  count   = "${(var.control_plane_subnet_access == "private") && (var.nat_instance_ad3_enabled == "true") ? "1" : "0"}"
   vnic_id = "${lookup(data.oci_core_vnic_attachments.NATInstanceAD3Vnics.vnic_attachments[0],"vnic_id")}"
 }
 
 # List Private IPs on the NAT instance in AD 3
 data "oci_core_private_ips" "NATInstanceAD3PrivateIPDatasource" {
-  count   = "${(var.network_access == "private") && (var.nat_instance_ad3_enabled == "true") ? "1" : "0"}"
+  count   = "${(var.control_plane_subnet_access == "private") && (var.nat_instance_ad3_enabled == "true") ? "1" : "0"}"
   vnic_id = "${data.oci_core_vnic.NATInstanceAD3Vnic.id}"
 }
````
