The cluster autoscaler adjusts the size of an {product-title} cluster to meet its current deployment needs.

The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.
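The scale-up decision described above can be sketched as follows. This is illustrative Python only, not the autoscaler's actual code; the function and parameter names are hypothetical:

```python
# Illustrative sketch of the cluster autoscaler's scale-up trigger.
# Not the real implementation; names are hypothetical.

def needs_scale_up(unschedulable_pods, current_nodes, max_nodes):
    """Scale up only if pods cannot schedule AND the node limit allows it."""
    if not unschedulable_pods:
        return False                   # every pod fits on existing workers
    return current_nodes < max_nodes   # never exceed the configured limit

print(needs_scale_up(["pod-a"], current_nodes=4, max_nodes=6))  # True
print(needs_scale_up(["pod-a"], current_nodes=6, max_nodes=6))  # False: at the limit
```

Even with pending pods, the sketch refuses to grow past `max_nodes`, mirroring how the autoscaler never exceeds the limits you specify.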
The cluster autoscaler computes the total
ifndef::openshift-dedicated,openshift-rosa[]
memory, CPU, and GPU
endif::[]
ifdef::openshift-dedicated,openshift-rosa[]
memory and CPU
endif::[]
on all nodes in the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources.
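This cluster-wide aggregation can be sketched as follows. This is an illustrative Python model, not the autoscaler's actual implementation; the node values and names are hypothetical:

```python
# Illustrative sketch of cluster-wide resource aggregation.
# Not the real cluster autoscaler code; node values are hypothetical.

def can_scale_up(nodes, new_node_memory_gib, max_memory_gib):
    """Return True if adding a node stays within the cluster-wide memory limit.

    The limit is checked against the aggregate memory of ALL nodes,
    including control plane nodes that the autoscaler does not manage.
    """
    current_total = sum(n["memory_gib"] for n in nodes)
    return current_total + new_node_memory_gib <= max_memory_gib

# Hypothetical cluster: 3 control plane nodes + 2 workers, 16 GiB each (80 GiB total).
nodes = [{"memory_gib": 16} for _ in range(5)]

print(can_scale_up(nodes, new_node_memory_gib=16, max_memory_gib=128))  # True: 80 + 16 <= 128
print(can_scale_up(nodes, new_node_memory_gib=16, max_memory_gib=90))   # False: 80 + 16 > 90
```

Note that the second call fails even though only two of the five nodes are workers: the limit applies to the aggregate of the entire cluster, not to worker nodes alone.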

// modules/rosa-scaling-worker-nodes.adoc

Worker nodes can be scaled manually if you do not want to configure node autoscaling.

.Procedure

ifdef::openshift-rosa[]
. To get a list of the machine pools in a cluster, enter the following command. Each cluster has a default machine pool that is created when you create a cluster.
+
[source,terminal]
----
$ rosa list machinepools --cluster=<cluster_name>
----

The response output shows the number of worker nodes, or replicas, as `Compute`.
. Optional: To view this change in {cluster-manager-url}:
.. Select the cluster.
.. From the *Overview* tab, in the `Details` pane, review the `Compute` node number.
endif::[]

ifdef::openshift-dedicated[]
. From the {cluster-manager-url}, navigate to the *Clusters* page and select the cluster whose worker nodes you want to scale manually.
. On the selected cluster, select the *Machine pools* tab.
. Click the Options menu {kebab} at the end of the machine pool that you want to manually scale, and select *Scale*.
. In the *Edit node count* dialog, edit the node count.
+
[NOTE]
====
Your subscription determines the number of nodes that you can select.
====
endif::[]
Applying autoscaling to an {product-title} cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster.

[IMPORTANT]
====
You can configure the cluster autoscaler only in clusters where the Machine API is operational.
====
This section describes how to manage worker nodes with
ifndef::openshift-rosa[]
{product-title}.
endif::[]
ifdef::openshift-rosa[]
{product-title} (ROSA).
endif::[]

The majority of changes for worker nodes are configured on machine pools. A _machine pool_ is a group of worker nodes in a cluster that have the same configuration, providing ease of management. You can edit the configuration of worker nodes for options such as scaling, instance type, labels, and taints.
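The machine pool concept above can be sketched as a small model. This is hypothetical illustrative Python, not a ROSA API; the field names are assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical model of a machine pool: every worker node in the pool
# shares the same instance type, labels, and taints. Not a ROSA API.
@dataclass
class MachinePool:
    name: str
    instance_type: str
    replicas: int
    labels: dict = field(default_factory=dict)
    taints: list = field(default_factory=list)

    def scale(self, replicas: int) -> None:
        """Scaling changes only the node count; the shared config is untouched."""
        self.replicas = replicas

pool = MachinePool(name="default", instance_type="m5.xlarge", replicas=2,
                   labels={"workload": "general"})
pool.scale(4)
print(pool.replicas)  # 4
```

The point of the grouping is that a single edit (scaling here, or changing labels or taints) applies uniformly to every worker node in the pool.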