# Example Operations

## Deploying a new cluster using terraform apply

Override any of the above input variables in your `terraform.vars` and run the plan and apply commands:

```bash
# verify what will change
$ terraform plan

# deploy the cluster
$ terraform apply
```
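
If you prefer not to edit `terraform.vars`, the same overrides can be passed on the command line with `-var`. A minimal sketch, assuming the variable names used elsewhere in this document (the values are illustrative):

```bash
# equivalent command-line overrides
$ terraform plan -var k8sMasterAd1Count=1 -var k8sWorkerAd1Count=2
$ terraform apply -var k8sMasterAd1Count=1 -var k8sWorkerAd1Count=2
```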

## Scaling k8s workers (in or out) using terraform apply

To scale workers in or out, adjust the `k8sWorkerAd1Count`, `k8sWorkerAd2Count`, or `k8sWorkerAd3Count` input variables in `terraform.vars` and run the plan and apply commands:

```bash
# verify changes
$ terraform plan

# scale workers (use -target=module.instances-k8sworker-adX to only target workers in a particular AD)
$ terraform apply
```

When scaling worker nodes _up_, you will need to wait for node initialization to finish asynchronously before
the new nodes appear in `kubectl get nodes`.
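
One way to watch for the new workers to register (interrupt with Ctrl-C once they all report Ready):

```bash
# watch the node list until the new workers show up
$ kubectl get nodes --watch
```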

When scaling worker nodes _down_, the instances/k8sworker module's user_data code will take care of running `kubectl drain` and `kubectl delete node` on the nodes being terminated.
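
For reference, what the module runs on a terminating worker is roughly equivalent to the following; the node name and drain flags here are illustrative, not the module's exact invocation:

```bash
# evict pods from the node, then remove it from the cluster
$ kubectl drain k8s-worker-ad1-0 --ignore-daemonsets
$ kubectl delete node k8s-worker-ad1-0
```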

## Scaling k8s masters (in or out) using terraform apply

To scale the masters in or out, adjust the `k8sMasterAd1Count`, `k8sMasterAd2Count`, or `k8sMasterAd3Count` input variables in `terraform.vars` and run the plan and apply commands:

```bash
# verify changes
$ terraform plan

# scale master nodes
$ terraform apply
```

Similar to the initial deployment, you will need to wait for the node initialization to finish asynchronously.
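
Once initialization completes, you can spot-check that the new masters have joined and that the control plane is healthy:

```bash
# the new master nodes should appear in the node list
$ kubectl get nodes

# control plane components should report as Healthy
$ kubectl get componentstatuses
```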

## Scaling etcd nodes (in or out) using terraform apply

Scaling the etcd nodes in or out after the initial deployment is not currently supported. Terminating all the nodes in the etcd cluster will result in data loss.

## Replacing worker nodes using terraform taint

We can use `terraform taint` to mark worker instances in a particular AD as "tainted", which will cause
them to be destroyed and recreated on the next apply. This can be a useful strategy for reverting local changes or
regenerating a misbehaving worker.

```bash
# taint all workers in a particular AD
$ terraform taint -module=instances-k8sworker-ad1 oci_core_instance.TFInstanceK8sWorker
# optionally taint workers in AD2 and AD3 or do so in a subsequent apply
# terraform taint -module=instances-k8sworker-ad2 oci_core_instance.TFInstanceK8sWorker
# terraform taint -module=instances-k8sworker-ad3 oci_core_instance.TFInstanceK8sWorker

# preview changes
$ terraform plan

# replace workers
$ terraform apply
```
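
If you taint an instance and then change your mind before applying, the taint can be reverted with `terraform untaint`. A minimal sketch, assuming the same module and resource names as above:

```bash
# remove the taint so the instance is not replaced on the next apply
$ terraform untaint -module=instances-k8sworker-ad1 oci_core_instance.TFInstanceK8sWorker
```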

## Replacing masters using terraform taint

We can also use `terraform taint` to mark master instances in a particular AD as "tainted", which will cause
them to be destroyed and recreated on the next apply. This can be a useful strategy for reverting local
changes or regenerating a misbehaving master.

```bash
# taint all masters in a particular AD
$ terraform taint -module=instances-k8smaster-ad1 oci_core_instance.TFInstanceK8sMaster
# optionally taint masters in AD2 and AD3 or do so in a subsequent apply
# terraform taint -module=instances-k8smaster-ad2 oci_core_instance.TFInstanceK8sMaster
# terraform taint -module=instances-k8smaster-ad3 oci_core_instance.TFInstanceK8sMaster

# preview changes
$ terraform plan

# replace masters
$ terraform apply
```
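
After the tainted masters have been re-created and their initialization has finished, it is worth spot-checking the control plane:

```bash
# confirm the API server is reachable and the masters are back in the node list
$ kubectl cluster-info
$ kubectl get nodes
```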

## Upgrading Kubernetes Version

There are a few ways of moving to a new version of Kubernetes in your cluster.

The easiest way to upgrade to a new Kubernetes version is to use the scripts to do a fresh cluster install using an updated `k8s_ver` input variable. The downside with this option is that the new cluster will not have your existing cluster state and deployments.

The other options involve using the `k8s_ver` input variable to _replace_ master and worker instances in your _existing_ cluster. We can replace master and worker instances in the cluster since Kubernetes masters and workers are stateless. This option can either be done all at once or incrementally.

#### Option 1: Do a clean install (easiest overall approach)

Set the `k8s_ver` and follow the original instructions in the [README](../README.md) to install a new cluster. The `label_prefix` variable is useful for installing multiple clusters in a compartment.

#### Option 2: Upgrade cluster all at once (easiest upgrade)

The example `terraform apply` command below will destroy then re-create all master and worker instances using as much parallelism as possible. It's the easiest and quickest upgrade scenario, but will result in some downtime for the workers and masters while they are being re-created. The single example `terraform apply` below will:

1. destroy all worker nodes
2. destroy all master nodes
3. destroy all master load-balancer backends that point to old master instances
4. re-create master instances using Kubernetes 1.7.5
5. re-create worker nodes using Kubernetes 1.7.5
6. re-create master load-balancer backends to point to new master node instances

```bash
# preview upgrade/replace
$ terraform plan -var k8s_ver=1.7.5

# perform upgrade/replace
$ terraform apply -var k8s_ver=1.7.5
```
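
Once the apply completes and node initialization finishes, every node should report the new kubelet version in the `VERSION` column:

```bash
# all nodes should now report v1.7.5
$ kubectl get nodes
```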

#### Option 3: Upgrade cluster instances incrementally (most complicated, most control over roll-out)

##### First, upgrade master nodes by AD

If you would rather update the cluster incrementally, start by upgrading the master nodes in each AD. In this scenario, each `terraform apply` will:

1. destroy all master instances in a particular AD
2. destroy all master load-balancer backends that point to deleted master instances
3. re-create master instances in the AD using Kubernetes 1.7.5
4. re-create master load-balancer backends to point to new master node instances

For example, here is the command to upgrade all the master instances in AD1:

```bash
# preview upgrade of all masters and their LB backends in AD1
$ terraform plan -var k8s_ver=1.7.5 -target=module.instances-k8smaster-ad1 -target=module.k8smaster-public-lb

# perform upgrade/replace masters
$ terraform apply -var k8s_ver=1.7.5 -target=module.instances-k8smaster-ad1 -target=module.k8smaster-public-lb
```

Be sure to repeat this command for each AD in which you have masters.
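
If you have masters in all three ADs, the repetition can be scripted. A rough sketch, using the module names from the commands above:

```bash
# upgrade the masters one AD at a time
for ad in 1 2 3; do
  terraform apply -var k8s_ver=1.7.5 \
    -target=module.instances-k8smaster-ad${ad} \
    -target=module.k8smaster-public-lb
done
```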

##### Next, upgrade worker nodes by AD

After upgrading all the master nodes, we upgrade the worker nodes in each AD. Each `terraform apply` will:

1. drain all worker nodes in a particular AD so their workloads move to the workers in the remaining ADs
2. destroy all worker nodes in a particular AD
3. re-create worker nodes in a particular AD using Kubernetes 1.7.5

For example, here is the command to upgrade the worker instances in AD1:

```bash
# preview upgrade of all workers in AD1 to Kubernetes 1.7.5
$ terraform plan -var k8s_ver=1.7.5 -target=module.instances-k8sworker-ad1

# perform upgrade/replace workers
$ terraform apply -var k8s_ver=1.7.5 -target=module.instances-k8sworker-ad1
```

Like before, repeat `terraform apply` for each AD in which you have workers. Note that if you have more than one worker in an AD, you can upgrade worker nodes individually using the subscript operator, e.g.

```bash
# preview upgrade of a single worker in a particular AD to K8s 1.7.5
$ terraform plan -var k8s_ver=1.7.5 -target=module.instances-k8sworker-ad1.oci_core_instance.TFInstanceK8sWorker[1]

# perform upgrade/replace of that worker
$ terraform apply -var k8s_ver=1.7.5 -target=module.instances-k8sworker-ad1.oci_core_instance.TFInstanceK8sWorker[1]
```

## Replacing etcd cluster members using terraform taint

Replacing etcd cluster members after the initial deployment is not currently supported.

## Deleting a cluster using terraform destroy

```bash
$ terraform destroy
```
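
To preview what will be removed before actually destroying anything, you can run a destroy plan first:

```bash
# preview the resources that terraform destroy would remove
$ terraform plan -destroy
```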