This repository was archived by the owner on Oct 31, 2019. It is now read-only.

Commit ab6ac47

Author: srirg

Commit message: adding support for multiple compartments.

1 parent 0a206d7 commit ab6ac47

File tree

9 files changed: +180 −53 lines


docs/input-variables.md

Lines changed: 102 additions & 1 deletion
@@ -13,7 +13,90 @@ fingerprint | None (required) | Fingerprint of t
private_key_path | None (required) | Private key file path of the OCI user's private key
region | us-phoenix-1 | String value of region to create resources

If you want Separation of Duties across multiple compartments, specify all five compartment OCIDs (nat_compartment_ocid, bastion_compartment_ocid, coreservice_compartment_ocid, network_compartment_ocid, lb_compartment_ocid).

## Optional Input Variables:
name | default | description
------------------------------------|-------------------------|-----------------
nat_compartment_ocid | None (Optional) | OCID of the NAT compartment
bastion_compartment_ocid | None (Optional) | OCID of the Bastion compartment
coreservice_compartment_ocid | None (Optional) | OCID of the Core Service compartment
network_compartment_ocid | None (Optional) | OCID of the Network compartment
lb_compartment_ocid | None (Optional) | OCID of the Load Balancer compartment

The following table shows how resources are organized across the compartments:

name | OCI Resources | Subnets
------------------------------------|----------------------------------------|-----------------
network_compartment_ocid | VCN, Internet Gateway, route tables |
nat_compartment_ocid | All NAT VMs in NATSubnetAD | publicNATSubnetAD1/2/3
bastion_compartment_ocid | All Bastion VMs in BastionSubnetAD | publicBastionSubnetAD1/2/3
lb_compartment_ocid | LB instances in LBSubnetAD | publicSubnetAD1/2/3
coreservice_compartment_ocid | All Master, Worker, and etcd VMs in MasterSubnetAD, WorkerSubnetAD, and EtcdSubnetAD; block volumes associated with Worker and etcd instances | privateETCDSubnetAD1/2/3, privateK8SMasterSubnetAD1/2/3, privateK8SWorkerSubnetAD1/2/3

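Assuming a standard `terraform.tfvars` workflow, the five compartments might be wired up as below. This is a sketch only: the OCID values are placeholders, not real identifiers, and (per the Terraform change in this commit) any compartment variable left empty falls back to `compartment_ocid`.

```hcl
# terraform.tfvars -- sketch; replace the placeholder OCIDs with real ones
# from your tenancy. Unset variables fall back to compartment_ocid.
network_compartment_ocid     = "ocid1.compartment.oc1..aaaa...network"
nat_compartment_ocid         = "ocid1.compartment.oc1..aaaa...nat"
bastion_compartment_ocid     = "ocid1.compartment.oc1..aaaa...bastion"
lb_compartment_ocid          = "ocid1.compartment.oc1..aaaa...lb"
coreservice_compartment_ocid = "ocid1.compartment.oc1..aaaa...coresvc"
```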
### Network Access Configuration

name | default | description
------------------------------------|-------------|------------
control_plane_subnet_access | public | Whether instances in the control plane are launched in a public or private subnet
k8s_master_lb_access | public | Whether the Kubernetes Master Load Balancer is launched in a public or private subnet
etcd_lb_access | private | Whether the etcd Load Balancer is launched in a public or private subnet

#### _Public_ Network Access (default)

![](./images/public_cp_subnet_access.jpg)

When `control_plane_subnet_access=public` and `k8s_master_lb_access=public`, control plane instances and the Kubernetes Master Load Balancer are provisioned in _public_ subnets and automatically get both a public and a private IP address. If the inbound security rules allow, you can communicate with them directly via their public IPs.

The following input variables are used to configure the inbound security rules on the public etcd, master, and worker subnets:

name | default | description
------------------------------------|-------------------------|------------
etcd_cluster_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to access the etcd cluster
etcd_ssh_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to SSH to etcd nodes
master_ssh_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to SSH to the master(s)
master_https_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to access the HTTPS port on the master(s)
worker_ssh_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to SSH to the worker(s)
worker_nodeport_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to access NodePorts (30000-32767) on the worker(s)
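For example, a sketch that keeps SSH restricted to the VCN while opening the Kubernetes API and NodePorts to an external range (203.0.113.0/24 is a documentation placeholder, not a real network):

```hcl
# terraform.tfvars -- illustrative values only
master_https_ingress    = "203.0.113.0/24"  # workstations that need API access
master_ssh_ingress      = "10.0.0.0/16"     # SSH only from within the VCN
worker_ssh_ingress      = "10.0.0.0/16"
worker_nodeport_ingress = "203.0.113.0/24"  # clients of NodePort services
```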
#### _Private_ Network Access

![](./images/private_cp_subnet_private_lb_access.jpg)

When `control_plane_subnet_access=private`, `etcd_lb_access=private`, and `k8s_master_lb_access=private`, control plane instances, the etcd Load Balancer, and the Kubernetes Master Load Balancer are provisioned in _private_ subnets. In this scenario, an instance is also set up in a public subnet to perform Network Address Translation (NAT) for instances in the private subnets so they can send outbound traffic. If your worker nodes need to accept incoming traffic from the Internet, an additional front-end Load Balancer must be provisioned in the public subnet to route traffic to workers in the private subnets.

The following input variables are used to configure the inbound security rules for the NAT instance(s) and any other instance or front-end Load Balancer in the public subnet:

name | default | description
------------------------------------|-------------------------|------------
public_subnet_ssh_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed to SSH to instances in the public subnet (including NAT instances)
public_subnet_http_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed to access port 80 on instances in the public subnet
public_subnet_https_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed to access port 443 on instances in the public subnet
natInstanceShape | VM.Standard1.1 | OCI shape for the optional NAT instance. Size according to the amount of expected _outbound_ traffic from nodes and pods
nat_instance_ad1_enabled | true | whether to provision a NAT instance in AD 1 (only used when control_plane_subnet_access=private)
nat_instance_ad2_enabled | false | whether to provision a NAT instance in AD 2 (only used when control_plane_subnet_access=private)
nat_instance_ad3_enabled | false | whether to provision a NAT instance in AD 3 (only used when control_plane_subnet_access=private)

*Note*

Even though a NAT instance can be configured per AD, this [diagram](./images/private_cp_subnet_public_lb_failure.jpg) illustrates that each NAT instance still represents a single point of failure for the private subnet that routes outbound traffic to it.
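A minimal fully-private sketch using the variables above; values are illustrative, and enabling NAT in a second AD adds capacity but, as noted above, each NAT instance remains a single point of failure for the subnet routing through it:

```hcl
# terraform.tfvars -- fully private control plane with NAT egress (sketch)
control_plane_subnet_access = "private"
k8s_master_lb_access        = "private"
etcd_lb_access              = "private"
nat_instance_ad1_enabled    = "true"
nat_instance_ad2_enabled    = "true"            # optional second NAT instance
natInstanceShape            = "VM.Standard1.2"  # size for expected outbound traffic
```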
#### _Private_ and _Public_ Network Access

![](./images/private_cp_subnet_public_lb_access.jpg)

It is also valid to set `control_plane_subnet_access=private` while keeping `etcd_lb_access=public` and `k8s_master_lb_access=public`. In this scenario, instances in the cluster's control plane are still provisioned in _private_ subnets and require NAT instance(s). However, the Load Balancers for your etcd cluster and back-end Kubernetes Master(s) are launched in a public subnet and are therefore accessible over the Internet if the inbound security rules allow.

*Note*

When `control_plane_subnet_access=private`, you still cannot SSH directly into your instances without going through a NAT instance.
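The mixed scenario above might be expressed as the following sketch (values illustrative):

```hcl
# terraform.tfvars -- private control plane, public load balancers (sketch)
control_plane_subnet_access = "private"
etcd_lb_access              = "public"
k8s_master_lb_access        = "public"
nat_instance_ad1_enabled    = "true"  # NAT still required for private subnets
```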
### Compute Instance Configuration
name | default | description

@@ -39,6 +122,8 @@ worker_iscsi_volume_size | unset | optional size of
worker_iscsi_volume_mount | /var/lib/docker | optional mount path of iSCSI volume when worker_iscsi_volume_size is set
etcd_iscsi_volume_create | false | boolean flag indicating whether or not to attach an iSCSI volume to each etcd node
etcd_iscsi_volume_size | 50 | size in GBs of the volume when etcd_iscsi_volume_create is set
etcd_maintain_private_ip | false | Assign each etcd instance a private IP based on the CIDR for that AD
master_maintain_private_ip | false | Assign each master instance a private IP based on the CIDR for that AD
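If you want etcd and master instances to keep deterministic private IPs derived from each AD's CIDR, the two flags above can be set together (sketch; string booleans follow the conventions of this module's other flags):

```hcl
# terraform.tfvars -- deterministic private IPs for etcd and masters (sketch)
etcd_maintain_private_ip   = "true"
master_maintain_private_ip = "true"
```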

### TLS Certificates & SSH key pair
name | default | description

@@ -77,6 +162,11 @@ name | default | description
cloud_controller_user_ocid | user_ocid | OCID of the user calling the OCI API to create Load Balancers
cloud_controller_user_fingerprint | fingerprint | Fingerprint of the OCI user calling the OCI API to create Load Balancers
cloud_controller_user_private_key_path | private_key_path | Private key file path of the OCI user calling the OCI API to create Load Balancers
master_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for masters
worker_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for workers
etcd_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for etcd nodes
nat_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for NAT instances (if applicable)
bastion_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for Bastion instances (if applicable)
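If you need to pin all node roles to one Oracle Linux build, the image-name variables are typically overridden together. A sketch, reusing the default image name from the table above (any substitute must be an image name that exists in your region):

```hcl
# terraform.tfvars -- pin every node role to the same base image (sketch)
master_ol_image_name  = "Oracle-Linux-7.4-2018.01.10-0"
worker_ol_image_name  = "Oracle-Linux-7.4-2018.01.10-0"
etcd_ol_image_name    = "Oracle-Linux-7.4-2018.01.10-0"
nat_ol_image_name     = "Oracle-Linux-7.4-2018.01.10-0"
bastion_ol_image_name = "Oracle-Linux-7.4-2018.01.10-0"
```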

#### Docker logging configuration
name | default | description

@@ -99,7 +189,7 @@ name | default | description
------------------------------------|-------------|------------
control_plane_subnet_access | public | Whether instances in the control plane are launched in a public or private subnet
k8s_master_lb_access | public | Whether the Kubernetes Master Load Balancer is launched in a public or private subnet
etcd_lb_access | private | Whether the etcd Load Balancer is launched in a public or private subnet

#### _Public_ Network Access (default)

@@ -149,6 +239,17 @@ nat_instance_ad3_enabled | "false" | whether to provi

Even though a NAT instance can be configured per AD, this [diagram](./images/private_cp_subnet_public_lb_failure.jpg) illustrates that each NAT instance still represents a single point of failure for the private subnet that routes outbound traffic to it.

The following input variables are used to configure the Bastion instance(s). A global security list is configured and attached to all the subnets:

name | default | description
------------------------------------|-------------------------|------------
dedicated_bastion_subnets | "true" | whether to provision dedicated subnets in each AD that are only used by Bastion instance(s) (separate subnets = separate control)
bastionInstanceShape | VM.Standard1.1 | OCI shape for the optional Bastion instance. Size according to the amount of expected _outbound_ traffic from nodes and pods
bastion_instance_ad1_enabled | "true" | whether to provision a Bastion instance in AD 1 (only used when control_plane_subnet_access=private)
bastion_instance_ad2_enabled | "false" | whether to provision a Bastion instance in AD 2 (only used when control_plane_subnet_access=private)
bastion_instance_ad3_enabled | "false" | whether to provision a Bastion instance in AD 3 (only used when control_plane_subnet_access=private)

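A sketch enabling a single Bastion in AD 1 on dedicated subnets, following the defaults in the table above (values illustrative):

```hcl
# terraform.tfvars -- bastion access into a private control plane (sketch)
control_plane_subnet_access  = "private"
dedicated_bastion_subnets    = "true"
bastion_instance_ad1_enabled = "true"
bastionInstanceShape         = "VM.Standard1.1"
```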
#### _Private_ and _Public_ Network Access
![](./images/private_cp_subnet_public_lb_access.jpg)

k8s-oci.tf

Lines changed: 16 additions & 11 deletions
@@ -33,6 +33,11 @@ module "k8s-tls" {
 module "vcn" {
   source = "./network/vcn"
   compartment_ocid = "${var.compartment_ocid}"
+  network_compartment_ocid = "${(var.network_compartment_ocid != "") ? var.network_compartment_ocid : var.compartment_ocid}"
+  lb_compartment_ocid = "${(var.lb_compartment_ocid != "") ? var.lb_compartment_ocid : var.compartment_ocid}"
+  nat_compartment_ocid = "${(var.nat_compartment_ocid != "") ? var.nat_compartment_ocid : var.compartment_ocid}"
+  bastion_compartment_ocid = "${(var.bastion_compartment_ocid != "") ? var.bastion_compartment_ocid : var.compartment_ocid}"
+  coreservice_compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   label_prefix = "${var.label_prefix}"
   tenancy_ocid = "${var.tenancy_ocid}"
   vcn_dns_name = "${var.vcn_dns_name}"
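The `(var.x != "") ? var.x : var.compartment_ocid` pattern above gives every per-role compartment a fallback to the main compartment. Under the HCL 0.11-style syntax used in this repository, the supporting variable declarations would plausibly look like the following sketch (the empty-string default is what makes the fallback fire; exact declarations may differ from the commit):

```hcl
# variables.tf -- sketch of the per-role compartment inputs
variable "network_compartment_ocid"     { default = "" }
variable "lb_compartment_ocid"          { default = "" }
variable "nat_compartment_ocid"         { default = "" }
variable "bastion_compartment_ocid"     { default = "" }
variable "coreservice_compartment_ocid" { default = "" }

# A module then resolves its effective compartment, e.g.:
# compartment_ocid = "${(var.nat_compartment_ocid != "") ? var.nat_compartment_ocid : var.compartment_ocid}"
```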
@@ -121,7 +126,7 @@ module "instances-etcd-ad1" {
   source = "./instances/etcd"
   count = "${var.etcdAd1Count}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0],"name")}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   control_plane_subnet_access = "${var.control_plane_subnet_access}"
   display_name_prefix = "etcd-ad1"
   domain_name = "${var.domain_name}"

@@ -149,7 +154,7 @@ module "instances-etcd-ad2" {
   source = "./instances/etcd"
   count = "${var.etcdAd2Count}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[1],"name")}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   control_plane_subnet_access = "${var.control_plane_subnet_access}"
   display_name_prefix = "etcd-ad2"
   domain_name = "${var.domain_name}"

@@ -177,7 +182,7 @@ module "instances-etcd-ad3" {
   source = "./instances/etcd"
   count = "${var.etcdAd3Count}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[2],"name")}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   control_plane_subnet_access = "${var.control_plane_subnet_access}"
   display_name_prefix = "etcd-ad3"
   docker_ver = "${var.docker_ver}"

@@ -211,7 +216,7 @@ module "instances-k8smaster-ad1" {
   api_server_private_key_pem = "${module.k8s-tls.api_server_private_key_pem}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0],"name")}"
   k8s_apiserver_token_admin = "${module.k8s-tls.api_server_admin_token}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   control_plane_subnet_access = "${var.control_plane_subnet_access}"
   display_name_prefix = "k8s-master-ad1"
   docker_ver = "${var.docker_ver}"

@@ -253,7 +258,7 @@ module "instances-k8smaster-ad2" {
   api_server_private_key_pem = "${module.k8s-tls.api_server_private_key_pem}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[1],"name")}"
   k8s_apiserver_token_admin = "${module.k8s-tls.api_server_admin_token}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   control_plane_subnet_access = "${var.control_plane_subnet_access}"
   display_name_prefix = "k8s-master-ad2"
   docker_ver = "${var.docker_ver}"

@@ -295,7 +300,7 @@ module "instances-k8smaster-ad3" {
   api_server_private_key_pem = "${module.k8s-tls.api_server_private_key_pem}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[2],"name")}"
   k8s_apiserver_token_admin = "${module.k8s-tls.api_server_admin_token}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   control_plane_subnet_access = "${var.control_plane_subnet_access}"
   display_name_prefix = "k8s-master-ad3"
   docker_ver = "${var.docker_ver}"

@@ -335,7 +340,7 @@ module "instances-k8sworker-ad1" {
   api_server_cert_pem = "${module.k8s-tls.api_server_cert_pem}"
   api_server_private_key_pem = "${module.k8s-tls.api_server_private_key_pem}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0],"name")}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   display_name_prefix = "k8s-worker-ad1"
   docker_ver = "${var.docker_ver}"
   worker_docker_max_log_size = "${var.worker_docker_max_log_size}"

@@ -372,7 +377,7 @@ module "instances-k8sworker-ad2" {
   api_server_cert_pem = "${module.k8s-tls.api_server_cert_pem}"
   api_server_private_key_pem = "${module.k8s-tls.api_server_private_key_pem}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[1],"name")}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   display_name_prefix = "k8s-worker-ad2"
   docker_ver = "${var.docker_ver}"
   worker_docker_max_log_size = "${var.worker_docker_max_log_size}"

@@ -409,7 +414,7 @@ module "instances-k8sworker-ad3" {
   api_server_cert_pem = "${module.k8s-tls.api_server_cert_pem}"
   api_server_private_key_pem = "${module.k8s-tls.api_server_private_key_pem}"
   availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[2],"name")}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.coreservice_compartment_ocid != "") ? var.coreservice_compartment_ocid : var.compartment_ocid}"
   display_name_prefix = "k8s-worker-ad3"
   docker_ver = "${var.docker_ver}"
   worker_docker_max_log_size = "${var.worker_docker_max_log_size}"
@@ -445,7 +450,7 @@ module "instances-k8sworker-ad3" {
 module "etcd-lb" {
   source = "./network/loadbalancers/etcd"
   etcd_lb_enabled = "${var.etcd_lb_enabled}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.lb_compartment_ocid != "") ? var.lb_compartment_ocid : var.compartment_ocid}"
   is_private = "${var.etcd_lb_access == "private" ? "true": "false"}"

   # Handle case where var.etcd_lb_access=public, but var.control_plane_subnet_access=private
@@ -464,7 +469,7 @@ module "etcd-lb" {
 module "k8smaster-public-lb" {
   source = "./network/loadbalancers/k8smaster"
   master_oci_lb_enabled = "${var.master_oci_lb_enabled}"
-  compartment_ocid = "${var.compartment_ocid}"
+  compartment_ocid = "${(var.lb_compartment_ocid != "") ? var.lb_compartment_ocid : var.compartment_ocid}"
   is_private = "${var.k8s_master_lb_access == "private" ? "true": "false"}"

   # Handle case where var.k8s_master_lb_access=public, but var.control_plane_subnet_access=private
