_This repository was archived by the owner on Oct 31, 2019. It is now read-only._
# Input Variables (docs/input-variables.md)

## Required Input Variables:

name | default | description
------------------------------------|-------------------------|-----------------
fingerprint | None (required) | Fingerprint of the OCI user's public key
private_key_path | None (required) | Private key file path of the OCI user's private key
region | us-phoenix-1 | String value of the region in which to create resources

For Separation of Duties, multiple compartments are needed; specify all five compartment OCIDs (nat_compartment_ocid, bastion_compartment_ocid, coreservice_compartment_ocid, network_compartment_ocid, lb_compartment_ocid).

## Optional Input Variables:
name | default | description
------------------------------------|-------------------------|-----------------
nat_compartment_ocid | None (Optional) | OCID of the NAT compartment
bastion_compartment_ocid | None (Optional) | OCID of the Bastion compartment
coreservice_compartment_ocid | None (Optional) | OCID of the Core Service compartment
network_compartment_ocid | None (Optional) | OCID of the Network compartment
lb_compartment_ocid | None (Optional) | OCID of the LB compartment
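As an illustration, a `terraform.tfvars` fragment enabling Separation of Duties might look like the following sketch (the OCID values are placeholders, not real identifiers):

```hcl
# Hypothetical terraform.tfvars fragment -- replace the placeholder
# OCIDs with the compartments created by your tenancy administrator.
network_compartment_ocid     = "ocid1.compartment.oc1..<network-compartment-id>"
nat_compartment_ocid         = "ocid1.compartment.oc1..<nat-compartment-id>"
bastion_compartment_ocid     = "ocid1.compartment.oc1..<bastion-compartment-id>"
coreservice_compartment_ocid = "ocid1.compartment.oc1..<coreservice-compartment-id>"
lb_compartment_ocid          = "ocid1.compartment.oc1..<lb-compartment-id>"
```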

The table below shows how the multiple compartments are organized:

name | OCI Resources | Subnets
------------------------------------|----------------------------------------|-----------------
network_compartment_ocid | VCN, Internet Gateway, Route tables |
nat_compartment_ocid | All NAT VMs in NATSubnetAD | publicNATSubnetAD1/2/3
bastion_compartment_ocid | All Bastion VMs in BastionSubnetAD | publicBastionSubnetAD1/2/3
lb_compartment_ocid | LB instances in LBSubnetAD | publicSubnetAD1/2/3
coreservice_compartment_ocid | All Master, Worker, Etcd VMs in MasterSubnetAD, WorkerSubnetAD and EtcdSubnetAD; BVs (Block Volumes) associated with Worker and Etcd instances | privateETCDSubnetAD1/2/3, privateK8SMasterSubnetAD1/2/3, privateK8SWorkerSubnetAD1/2/3


### Network Access Configuration

name | default | description
------------------------------------|-------------|------------
control_plane_subnet_access | public | Whether instances in the control plane are launched in public or private subnets
k8s_master_lb_access | public | Whether the Kubernetes Master Load Balancer is launched in a public or private subnet
etcd_lb_access | private | Whether the etcd Load Balancer is launched in a public or private subnet
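For example, a fully private deployment can be sketched with the following `terraform.tfvars` fragment (using the three variables above with their non-default values):

```hcl
# Hypothetical terraform.tfvars fragment: launch the control plane and
# both Load Balancers in private subnets.
control_plane_subnet_access = "private"
k8s_master_lb_access        = "private"
etcd_lb_access              = "private"
```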


#### _Public_ Network Access (default)

![](./images/public_cp_subnet_access.jpg)

When `control_plane_subnet_access=public` and `k8s_master_lb_access=public`, control plane instances and the Kubernetes Master Load Balancer are provisioned in _public_ subnets and automatically get both a public and private IP address. If the inbound security rules allow, you can communicate with them directly via their public IPs.

The following input variables are used to configure the inbound security rules on the public etcd, master, and worker subnets:

name | default | description
------------------------------------|-------------------------|------------
etcd_cluster_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to access the etcd cluster
etcd_ssh_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to SSH to etcd nodes
master_ssh_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to access the master(s)
master_https_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to access the HTTPS port on the master(s)
worker_ssh_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to SSH to worker(s)
worker_nodeport_ingress | 10.0.0.0/16 (VCN only) | A CIDR notation IP range that is allowed to access NodePorts (30000-32767) on the worker(s)
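A sketch of how these ingress rules might be tuned, assuming an example office network (203.0.113.0/24 is a documentation range, not a real network):

```hcl
# Hypothetical terraform.tfvars fragment: allow SSH and HTTPS to the
# master(s) from an office CIDR while keeping etcd VCN-only.
master_ssh_ingress      = "203.0.113.0/24"
master_https_ingress    = "203.0.113.0/24"
worker_nodeport_ingress = "0.0.0.0/0"   # expose NodePorts publicly (use with care)
etcd_cluster_ingress    = "10.0.0.0/16" # default: VCN only
```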

#### _Private_ Network Access

![](./images/private_cp_subnet_private_lb_access.jpg)

When `control_plane_subnet_access=private`, `etcd_lb_access=private` and `k8s_master_lb_access=private`, control plane instances, the etcd Load Balancer, and the Kubernetes Master Load Balancer
are provisioned in _private_ subnets. In this scenario, we will also set up an instance in a public subnet to
perform Network Address Translation (NAT) for instances in the private subnets so they can send outbound traffic.
If your worker nodes need to accept incoming traffic from the Internet, an additional front-end Load Balancer will
need to be provisioned in the public subnet to route traffic to workers in the private subnets.


The following input variables are used to configure the inbound security rules for the NAT instance(s) and any other instance or front-end Load Balancer in the public subnet:

name | default | description
------------------------------------|-------------------------|------------
public_subnet_ssh_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed to SSH to instances in the public subnet (including NAT instances)
public_subnet_http_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed access to port 80 on instances in the public subnet
public_subnet_https_ingress | 0.0.0.0/0 | A CIDR notation IP range that is allowed access to port 443 on instances in the public subnet
natInstanceShape | VM.Standard1.1 | OCI shape for the optional NAT instance. Size according to the amount of expected _outbound_ traffic from nodes and pods
nat_instance_ad1_enabled | true | whether to provision a NAT instance in AD 1 (only used when control_plane_subnet_access=private)
nat_instance_ad2_enabled | false | whether to provision a NAT instance in AD 2 (only used when control_plane_subnet_access=private)
nat_instance_ad3_enabled | false | whether to provision a NAT instance in AD 3 (only used when control_plane_subnet_access=private)
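Provisioning one NAT instance per availability domain might be sketched as follows (the shape value is an assumption; size it for your expected outbound traffic):

```hcl
# Hypothetical terraform.tfvars fragment: one NAT instance per AD so
# a single NAT is not the only outbound path for the whole cluster.
control_plane_subnet_access = "private"
nat_instance_ad1_enabled    = "true"
nat_instance_ad2_enabled    = "true"
nat_instance_ad3_enabled    = "true"
natInstanceShape            = "VM.Standard1.2" # assumed shape; size for outbound traffic
```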

*Note*

Even though a NAT instance can be configured per AD, this [diagram](./images/private_cp_subnet_public_lb_failure.jpg) illustrates that each NAT instance still represents a single point of failure for the private subnet that routes outbound traffic through it.

#### _Private_ and _Public_ Network Access

![](./images/private_cp_subnet_public_lb_access.jpg)

It is also valid to set `control_plane_subnet_access=private` while keeping `etcd_lb_access=public` and `k8s_master_lb_access=public`. In this scenario, instances in the cluster's control plane will still be provisioned in _private_ subnets and require NAT instance(s). However, the Load Balancers for your etcd cluster and back-end Kubernetes Master(s) will be launched in a public subnet and will therefore be accessible over the Internet if the inbound security rules allow.

*Note*

When `control_plane_subnet_access=private`, you still cannot SSH directly into your instances without going through a NAT instance.

### Compute Instance Configuration
name | default | description
------------------------------------|-------------------------|-----------------
worker_iscsi_volume_size | unset | optional size in GBs of an iSCSI volume to attach to each worker node
worker_iscsi_volume_mount | /var/lib/docker | optional mount path of iSCSI volume when worker_iscsi_volume_size is set
etcd_iscsi_volume_create | false | boolean flag indicating whether to attach an iSCSI volume to each etcd node
etcd_iscsi_volume_size | 50 | size in GBs of volume when etcd_iscsi_volume_create is set
etcd_maintain_private_ip | false | Assign an etcd instance a private IP based on the CIDR for that AD
master_maintain_private_ip | false | Assign a master instance a private IP based on the CIDR for that AD
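A sketch combining the storage and private-IP options above (the worker volume size is an assumed example value):

```hcl
# Hypothetical terraform.tfvars fragment: dedicated iSCSI volume per
# etcd node, Docker storage on an iSCSI volume for workers, and
# CIDR-derived private IPs for etcd and master instances.
etcd_iscsi_volume_create   = "true"
etcd_iscsi_volume_size     = 50                # GBs
worker_iscsi_volume_size   = 100               # GBs; assumed value
worker_iscsi_volume_mount  = "/var/lib/docker" # default mount path
etcd_maintain_private_ip   = "true"
master_maintain_private_ip = "true"
```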

### TLS Certificates & SSH key pair
name | default | description
name | default | description
------------------------------------|-------------------------|------------
cloud_controller_user_ocid | user_ocid | OCID of the user calling the OCI API to create Load Balancers
cloud_controller_user_fingerprint | fingerprint | Fingerprint of the OCI user calling the OCI API to create Load Balancers
cloud_controller_user_private_key_path | private_key_path | Private key file path of the OCI user calling the OCI API to create Load Balancers
master_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for masters
worker_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for workers
etcd_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for etcd nodes
nat_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for NAT instances (if applicable)
bastion_ol_image_name | Oracle-Linux-7.4-2018.01.10-0 | Image name of an Oracle-Linux-7.X image to use for Bastion instances (if applicable)

#### Docker logging configuration
name | default | description

The following input variables are used to configure the Bastion instance(s). A global security list is configured and attached to all the subnets:

name | default | description
------------------------------------|-------------------------|------------
dedicated_bastion_subnets | "true" | whether to provision dedicated subnets in each AD that are only used by Bastion instance(s) (separate subnets = separate control)
bastionInstanceShape | VM.Standard1.1 | OCI shape for the optional Bastion instance. Size according to the amount of expected SSH traffic through the Bastion
bastion_instance_ad1_enabled | "true" | whether to provision a Bastion instance in AD 1 (only used when control_plane_subnet_access=private)
bastion_instance_ad2_enabled | "false" | whether to provision a Bastion instance in AD 2 (only used when control_plane_subnet_access=private)
bastion_instance_ad3_enabled | "false" | whether to provision a Bastion instance in AD 3 (only used when control_plane_subnet_access=private)
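Enabling a Bastion in every AD, on dedicated subnets, might be sketched as:

```hcl
# Hypothetical terraform.tfvars fragment: a Bastion instance in each AD
# on dedicated subnets, paired with a private control plane.
control_plane_subnet_access  = "private"
dedicated_bastion_subnets    = "true"
bastionInstanceShape         = "VM.Standard1.1"
bastion_instance_ad1_enabled = "true"
bastion_instance_ad2_enabled = "true"
bastion_instance_ad3_enabled = "true"
```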
