```javascript
{{#include ../../../examples/workers/vars-workers-drain.auto.tfvars:4:}}
```
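
When rendered, the include above shows the relevant `tfvars` for draining. For orientation, a minimal `worker_pools` entry enabling drain might look like the following sketch (the pool name is taken from the example output below; `mode` and `size` are illustrative assumptions, not contents of the included file):
```javascript
# Sketch only: `drain` is the significant attribute here
worker_pools = {
  oke-vm-draining = {
    mode  = "instance-pool" # assumed worker pool mode
    size  = 1               # assumed pool size
    drain = true            # cordon and drain this pool's nodes on apply
  }
}
```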

## Example

```
Terraform will perform the following actions:

  # module.workers_only.module.utilities[0].null_resource.drain_workers[0] will be created
  + resource "null_resource" "drain_workers" {
      + id       = (known after apply)
      + triggers = {
          + "drain_commands" = jsonencode(
                [
                  + "kubectl drain --timeout=900s --ignore-daemonsets=true --delete-emptydir-data=true -l oke.oraclecloud.com/pool.name=oke-vm-draining",
                ]
            )
          + "drain_pools"    = jsonencode(
                [
                  + "oke-vm-draining",
                ]
            )
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```

```
module.workers_only.module.utilities[0].null_resource.drain_workers[0] (remote-exec): node/10.200.220.157 cordoned
module.workers_only.module.utilities[0].null_resource.drain_workers[0] (remote-exec): WARNING: ignoring DaemonSet-managed Pods: kube-system/csi-oci-node-99x74, kube-system/kube-flannel-ds-spvsp, kube-system/kube-proxy-6m2kk, ...
module.workers_only.module.utilities[0].null_resource.drain_workers[0] (remote-exec): node/10.200.220.157 drained
module.workers_only.module.utilities[0].null_resource.drain_workers[0]: Creation complete after 18s [id=7686343707387113624]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```

Observe that the node(s) are now disabled for scheduling, and free of workloads other than DaemonSet-managed Pods when `worker_drain_ignore_daemonsets = true` (default):
```shell
kubectl get nodes -l oke.oraclecloud.com/pool.name=oke-vm-draining
NAME             STATUS                     ROLES   AGE   VERSION
10.200.220.157   Ready,SchedulingDisabled   node    24m   v1.26.2

kubectl get pods --all-namespaces --field-selector spec.nodeName=10.200.220.157
NAMESPACE     NAME                    READY   STATUS    RESTARTS   AGE
kube-system   csi-oci-node-99x74      1/1     Running   0          50m
kube-system   kube-flannel-ds-spvsp   1/1     Running   0          50m
kube-system   kube-proxy-6m2kk        1/1     Running   0          50m
kube-system   proxymux-client-2r6lk   1/1     Running   0          50m
```
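
The plan output above shows this variable mapped onto the `--ignore-daemonsets` flag of `kubectl drain`. A sketch of setting it explicitly (note that `kubectl drain` fails on nodes running DaemonSet-managed Pods when they are not ignored):
```javascript
# Passed through to `kubectl drain --ignore-daemonsets=...`
worker_drain_ignore_daemonsets = true # default; false fails on DaemonSet-managed Pods
```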

Run the following command to uncordon a previously drained worker pool. The `drain = true` setting should be removed from the `worker_pools` entry to avoid re-draining the pool when running Terraform in the future.
```shell
kubectl uncordon -l oke.oraclecloud.com/pool.name=oke-vm-draining
node/10.200.220.157 uncordoned
```
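
Building on the hypothetical entry sketched earlier, the updated configuration might look like this after uncordoning, so the next apply leaves the pool schedulable:
```javascript
# Sketch: the `drain = true` attribute has been removed from the entry
worker_pools = {
  oke-vm-draining = {
    mode = "instance-pool" # assumed, as above
    size = 1               # assumed, as above
  }
}
```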

## References
* [Safely Drain a Node](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/)
* [`kubectl drain`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain)
* [Deleting a Worker Node](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengdeletingworkernodes.htm)