// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-managing.adoc

:_content-type: PROCEDURE
[id="scale-down-data-plane_{context}"]
= Scaling down the data plane to zero

If you are not using the hosted control plane, you can scale down the data plane to zero to save resources and cost.

[NOTE]
====
Ensure that you are prepared to scale down the data plane to zero, because the workloads running on the worker nodes disappear after you scale down.
====

.Procedure

. Set the `kubeconfig` file to access the hosted cluster by running the following command:
+
[source,terminal]
----
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
----
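+
To confirm that the `kubeconfig` file points to the hosted cluster, you can optionally list its worker nodes by running the following command:
+
[source,terminal]
----
$ oc get nodes
----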

. Get the name of the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc get nodepool --namespace <HOSTED_CLUSTER_NAMESPACE>
----
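+
The output resembles the following example, where the node pool name, cluster name, node counts, and version are illustrative values:
+
.Example output
[source,terminal]
----
NAME         CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION
nodepool-1   clustername   2               2               False         False       4.14.0
----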

. Optional: To prevent the pods from draining, add the `nodeDrainTimeout` field to the `NodePool` resource by running the following command:
+
[source,terminal]
----
$ oc edit NodePool <NODEPOOL_NAME> -o yaml --namespace <HOSTED_CLUSTER_NAMESPACE>
----
+
.Example output
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  arch: amd64
  clusterName: clustername <1>
  management:
    autoRepair: false
    replace:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      strategy: RollingUpdate
    upgradeType: Replace
  nodeDrainTimeout: 0s <2>
# ...
----
<1> Defines the name of your hosted cluster.
<2> Specifies the total amount of time that the controller spends draining a node. By default, the `nodeDrainTimeout: 0s` setting blocks the node draining process.
+
[NOTE]
====
To allow the node draining process to continue for a certain period of time, you can set the value of the `nodeDrainTimeout` field accordingly, for example, `nodeDrainTimeout: 1m`.
====
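+
Alternatively, you can set the field without opening an editor. The following command is a minimal sketch that assumes the node pool name `nodepool-1`, the `clusters` namespace, and the timeout value `2m` from the preceding examples:
+
[source,terminal]
----
$ oc patch nodepool nodepool-1 --namespace clusters --type=merge --patch '{"spec":{"nodeDrainTimeout":"2m"}}'
----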

. Scale down the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=0
----
+
[NOTE]
====
After you scale down the data plane to zero, some pods in the control plane stay in the `Pending` status, and the hosted control plane stays up and running. If necessary, you can scale up the `NodePool` resource.
====
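+
To verify the result, you can optionally check that the node pool reports zero nodes by running the following command; the expectation of `0` in the node count columns is illustrative:
+
[source,terminal]
----
$ oc get nodepool <NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE>
----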

. Optional: Scale up the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=1
----
+
After you rescale the `NodePool` resource, wait a couple of minutes for the `NodePool` resource to become available in a `Ready` state.
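+
To watch the node pool return to a `Ready` state, you can optionally run the following command. The `--watch` flag streams updates until you interrupt the command:
+
[source,terminal]
----
$ oc get nodepool <NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --watch
----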