
Commit 5cb7b2f

Merge pull request #63784 from xenolinux/scale-down-hcp

2 parents: cda9024 + d3f4cd8

2 files changed: +91 -1 lines changed

hosted_control_planes/hcp-managing.adoc

Lines changed: 1 addition & 1 deletion
@@ -18,5 +18,5 @@ include::modules/hosted-control-planes-pause-reconciliation.adoc[leveloffset=+1]
 //using service-level DNS for control plane services
 include::modules/hosted-control-planes-metrics-sets.adoc[leveloffset=+1]
 //automated machine management
+include::modules/scale-down-data-plane.adoc[leveloffset=+1]
 include::modules/delete-hosted-cluster.adoc[leveloffset=+1]
-

modules/scale-down-data-plane.adoc

Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-managing.adoc

:_content-type: PROCEDURE
[id="scale-down-data-plane_{context}"]
= Scaling down the data plane to zero

If you are not using the hosted control plane, you can scale down the data plane to zero to save resources and cost.

[NOTE]
====
Ensure that you are prepared to scale down the data plane to zero, because the workloads from the worker nodes disappear when you scale down.
====

.Procedure
. Set the `kubeconfig` file to access the hosted cluster by running the following command:
+
[source,terminal]
----
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
----
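+
Optionally, to verify that the exported `kubeconfig` file points to the hosted cluster API server, you can print the current server URL, for example:
+
[source,terminal]
----
$ oc whoami --show-server
----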

. Get the name of the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc get nodepool --namespace <HOSTED_CLUSTER_NAMESPACE>
----
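+
If the namespace contains only one node pool, you can also capture its name in an environment variable and substitute it for `<NODEPOOL_NAME>` in the later commands. The following `jsonpath` query is only an example:
+
[source,terminal]
----
$ NODEPOOL_NAME=$(oc get nodepool --namespace <HOSTED_CLUSTER_NAMESPACE> -o jsonpath='{.items[0].metadata.name}')
----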

. Optional: To prevent the pods from draining, add the `nodeDrainTimeout` field in the `NodePool` resource by running the following command:
+
[source,terminal]
----
$ oc edit NodePool <nodepool> -o yaml --namespace <HOSTED_CLUSTER_NAMESPACE>
----
+
.Example output
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  arch: amd64
  clusterName: clustername <1>
  management:
    autoRepair: false
    replace:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      strategy: RollingUpdate
    upgradeType: Replace
  nodeDrainTimeout: 0s <2>
# ...
----
<1> Defines the name of your hosted cluster.
<2> Specifies the total amount of time that the controller spends draining a node. By default, the `nodeDrainTimeout: 0s` setting blocks the node draining process.
+
[NOTE]
====
To allow the node draining process to continue for a certain period of time, set the value of the `nodeDrainTimeout` field accordingly, for example, `nodeDrainTimeout: 1m`.
====
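+
Alternatively, if you prefer to set the timeout without opening an editor, you can patch the field directly. The following command is one possible approach, and the `1m` value is only an example:
+
[source,terminal]
----
$ oc patch nodepool <nodepool> --namespace <HOSTED_CLUSTER_NAMESPACE> --type=merge -p '{"spec":{"nodeDrainTimeout":"1m"}}'
----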

. Scale down the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=0
----
+
[NOTE]
====
After you scale down the data plane to zero, some pods in the control plane stay in the `Pending` status and the hosted control plane stays up and running. If necessary, you can scale up the `NodePool` resource.
====
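+
To confirm the result, one option is to list the nodes of the hosted cluster, which should show that the worker nodes are removed, for example:
+
[source,terminal]
----
$ oc get nodes
----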

. Optional: Scale up the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=1
----
+
After rescaling the `NodePool` resource, wait a couple of minutes for the `NodePool` resource to become available in a `Ready` state.
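+
To watch the node pool status while the nodes come back, one option is to add the `--watch` flag to the `oc get` command, for example:
+
[source,terminal]
----
$ oc get nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --watch
----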
