modules/nodes-pods-autoscaling-custom-about.adoc (1 addition, 1 deletion)

@@ -12,7 +12,7 @@ To use the custom metrics autoscaler, you create a `ScaledObject` or `ScaledJob` ...
 
 [NOTE]
 ====
-You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload.
+You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload.
 ====
 
 The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the `minReplicaCount` value in the custom metrics autoscaler CR to `0`, the custom metrics autoscaler scales the workload down from 1 to 0 replicas or up from 0 to 1 replica. This is known as the _activation phase_. After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the _scaling phase_.
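The activation-phase behavior described above can be sketched as a minimal `ScaledObject`. This is an illustrative sketch, not an excerpt from the PR: the resource names, query, and threshold are hypothetical, and the trigger follows the general shape of the KEDA Prometheus scaler.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: scale-to-zero-example       # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment             # hypothetical workload to scale
  minReplicaCount: 0                # enables the activation phase (0 <-> 1 replica)
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      query: sum(rate(http_requests_total{job="my-app"}[1m]))   # hypothetical query
      threshold: '5'                # quoted string value
```

With `minReplicaCount: 0`, KEDA itself handles the 0-to-1 transition; beyond 1 replica, scaling is delegated to the HPA as described above.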
modules/nodes-pods-autoscaling-custom-adding.adoc

@@ -8,7 +8,7 @@
 
 To add a custom metrics autoscaler, create a `ScaledObject` custom resource for a deployment, stateful set, or custom resource. Create a `ScaledJob` custom resource for a job.
 
-You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload.
+You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload.
 
 // If you want to scale based on a custom trigger and CPU/Memory, you can create multiple triggers in the scaled object or scaled job.
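The `ScaledObject`/`ScaledJob` split described above can be sketched as follows. All resource names, images, and queries here are hypothetical; the shapes follow the KEDA API (a `ScaledObject` references an existing workload, while a `ScaledJob` embeds a Job template).

```yaml
# ScaledObject: scales an existing deployment, stateful set, or custom resource
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject             # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment             # the workload being scaled
  triggers:
  - type: cpu
    metricType: Utilization
    metadata:
      value: '60'
---
# ScaledJob: embeds a Job template rather than referencing a workload
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: my-scaledjob                # hypothetical name
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: worker
          image: registry.example.com/worker:latest   # hypothetical image
        restartPolicy: Never
  triggers:
  - type: prometheus
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      query: sum(my_queue_depth)    # hypothetical query
      threshold: '5'
```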
 <1> Specifies Prometheus as the scaler/trigger type.
 <2> Specifies the address of the Prometheus server. This example uses {product-title} monitoring.
 <3> Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if using {product-title} monitoring as a source for the metrics.
 <4> Specifies the name to identify the metric in the `external.metrics.k8s.io` API. If you are using more than one trigger, all metric names must be unique.
-<5> Specifies the value to start scaling for.
+<5> Specifies the value to start scaling for. Must be specified as a quoted string value.
 <6> Specifies the Prometheus query to use.
 <7> Specifies the authentication method to use. Prometheus scalers support bearer authentication (`bearer`), basic authentication (`basic`), or TLS authentication (`tls`). You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret.
 <8> Optional: Passes the `X-Scope-OrgID` header to multi-tenant link:https://cortexmetrics.io/[Cortex] or link:https://grafana.com/oss/mimir/[Mimir] storage for Prometheus. This parameter is required only with multi-tenant Prometheus storage, to indicate which data Prometheus should return.
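Taken together, callouts <1> through <7> describe a Prometheus trigger of roughly this shape. The namespace, metric name, query, and authentication reference are illustrative assumptions, not values from the PR; the multi-tenant header parameter (<8>) is omitted here because it applies only to Cortex/Mimir storage.

```yaml
triggers:
- type: prometheus                  # <1> scaler/trigger type
  metadata:
    serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092   # <2>
    namespace: my-namespace         # <3> hypothetical namespace
    metricName: http_requests_total # <4> unique across triggers
    threshold: '5'                  # <5> quoted string value
    query: sum(rate(http_requests_total{job="my-app"}[1m]))   # <6> hypothetical query
    authModes: basic                # <7> or bearer / tls
  authenticationRef:
    name: my-trigger-auth           # hypothetical TriggerAuthentication
```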
@@ -94,13 +94,13 @@ spec:
 - type: cpu <1>
   metricType: Utilization <2>
   metadata:
-    value: "60" <3>
-    containerName: "api" <4>
+    value: '60' <3>
+    containerName: api <4>
 
 ----
 <1> Specifies CPU as the scaler/trigger type.
 <2> Specifies the type of metric to use, either `Utilization` or `AverageValue`.
-<3> Specifies the value to trigger scaling actions upon:
+<3> Specifies the value to trigger scaling actions. Must be specified as a quoted string value.
 * When using `Utilization`, the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
 * When using `AverageValue`, the target value is the average of the metrics across all relevant pods.
 <4> Optional. Specifies an individual container to scale, based on the CPU utilization of only that container, rather than the entire pod. Here, only the container named `api` is to be scaled.
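One practical note implied by `Utilization` above: as with the HPA, utilization is computed relative to the container's declared CPU request, so the target pods must set resource requests. A minimal deployment fragment as a sketch (container name and image are hypothetical):

```yaml
# Deployment pod-spec fragment: the cpu trigger's Utilization math needs this request
containers:
- name: api
  image: registry.example.com/api:latest   # hypothetical image
  resources:
    requests:
      cpu: 500m        # value: '60' targets an average of 60% of this request
```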
@@ -134,12 +134,12 @@ spec:
 - type: memory <1>
   metricType: Utilization <2>
   metadata:
-    value: "60" <3>
-    containerName: "api" <4>
+    value: '60' <3>
+    containerName: api <4>
 ----
 <1> Specifies memory as the scaler/trigger type.
 <2> Specifies the type of metric to use, either `Utilization` or `AverageValue`.
-<3> Specifies the value to trigger scaling actions for:
+<3> Specifies the value to trigger scaling actions. Must be specified as a quoted string value.
 * When using `Utilization`, the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
 * When using `AverageValue`, the target value is the average of the metrics across all relevant pods.
 <4> Optional. Specifies an individual container to scale, based on the memory utilization of only that container, rather than the entire pod. Here, only the container named `api` is to be scaled.
 <2> Specifies the name of the Kafka topic on which Kafka is processing the offset lag.
 <3> Specifies a comma-separated list of Kafka brokers to connect to.
 <4> Specifies the name of the Kafka consumer group used for checking the offset on the topic and processing the related lag.
-<5> Optional: Specifies the average target value to trigger scaling actions. The default is `5`.
-<6> Optional: Specifies the target value for the activation phase.
+<5> Optional: Specifies the average target value to trigger scaling actions. Must be specified as a quoted string value. The default is `5`.
+<6> Optional: Specifies the target value for the activation phase. Must be specified as a quoted string value. The default is `0`.
 <7> Optional: Specifies the Kafka offset reset policy for the Kafka consumer. The available values are `latest` and `earliest`. The default is `latest`.
 <8> Optional: Specifies whether the number of Kafka replicas can exceed the number of partitions on a topic.
 * If `true`, the number of Kafka replicas can exceed the number of partitions on a topic. This allows for idle Kafka consumers.

@@ -203,6 +203,6 @@ spec:
 <10> Optional: Specifies whether the trigger includes or excludes partition lag for partitions whose current offset is the same as the current offset of the previous polling cycle.
 * If `true`, the scaler excludes partition lag in these partitions.
 * If `false`, the trigger includes all consumer lag in all partitions. This is the default.
-<11> Optional: Specifies the version of your Kafka brokers. The default is `1.0.0`.
-<12> Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. The default is to consider all partitions.
+<11> Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is `1.0.0`.
+<12> Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions.
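The Kafka callouts above map onto a trigger of roughly this shape. This is an illustrative sketch using KEDA Kafka scaler parameter names; the topic, broker address, consumer group, and partition list are hypothetical.

```yaml
triggers:
- type: kafka
  metadata:
    topic: my-topic                   # <2> hypothetical topic
    bootstrapServers: my-cluster-kafka-bootstrap.svc:9092   # <3> comma-separated list
    consumerGroup: my-group           # <4> hypothetical consumer group
    lagThreshold: '10'                # <5> quoted string; default '5'
    activationLagThreshold: '5'       # <6> quoted string; default '0'
    offsetResetPolicy: latest         # <7> latest or earliest
    allowIdleConsumers: false         # <8> replicas may not exceed partitions
    excludePersistentLag: false       # <10> include all consumer lag (default)
    version: '1.0.0'                  # <11> quoted string; default '1.0.0'
    partitionLimitation: '1,2'        # <12> quoted string of partition IDs
```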
nodes/pods/nodes-pods-autoscaling-custom.adoc (3 additions, 3 deletions)

@@ -8,6 +8,9 @@ toc::[]
 
 As a developer, you can use the custom metrics autoscaler to specify how {product-title} should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory.
 
+:FeatureName: Scaling by using a scaled job
+include::snippets/technology-preview.adoc[]
+
 The Custom Metrics Autoscaler Operator for Red Hat OpenShift is an optional operator, based on the Kubernetes Event Driven Autoscaler (KEDA), that allows workloads to be scaled using additional metrics sources other than pod metrics.
 
 [NOTE]

@@ -17,9 +20,6 @@ The custom metrics autoscaler currently supports only the Prometheus, CPU, memor...
 
 // For example, you can scale a database application based on the number of tables in the database, scale another application based on the number of messages in a Kafka topic, or scale based on incoming HTTP requests collected by {product-title} monitoring.