Commit de0c4bf

Author: Michael Burke (committed)
CMA edits per Zbynek Roubalik
1 parent ff96ebc commit de0c4bf

7 files changed (+27 / -24 lines changed)

modules/nodes-pods-autoscaling-custom-about.adoc

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ To use the custom metrics autoscaler, you create a `ScaledObject` or `ScaledJob`
 
 [NOTE]
 ====
-You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload.
+You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload.
 ====
 
 The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the `minReplicaCount` value in the custom metrics autoscaler CR to `0`, the custom metrics autoscaler scales the workload down from 1 to 0 replicas or up from 0 replicas to 1. This is known as the _activation phase_. After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the _scaling phase_.

modules/nodes-pods-autoscaling-custom-adding.adoc

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@
 
 To add a custom metrics autoscaler, create a `ScaledObject` custom resource for a deployment, stateful set, or custom resource. Create a `ScaledJob` custom resource for a job.
 
-You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload.
+You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload.
 
 // If you want to scale based on a custom trigger and CPU/Memory, you can create multiple triggers in the scaled object or scaled job.

modules/nodes-pods-autoscaling-custom-creating-job.adoc

Lines changed: 3 additions & 0 deletions
@@ -8,6 +8,9 @@
 
 You can create a custom metrics autoscaler for any `Job` object.
 
+:FeatureName: Scaling by using a scaled job
+include::snippets/technology-preview.adoc[]
+
 .Prerequisites
 
 * The Custom Metrics Autoscaler Operator must be installed.
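For context, a minimal `ScaledJob` sketch (not part of this commit; the container image, metric name, and query are illustrative placeholders), assuming the standard KEDA `ScaledJob` API:

[source,yaml]
----
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: scaledjob
  namespace: my-project
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: worker
          image: registry.example.com/worker:latest   # placeholder image
        restartPolicy: Never
    backoffLimit: 4
  maxReplicaCount: 100
  pollingInterval: 30
  triggers:
  - type: prometheus
    metadata:
      serverAddress: https://<prometheus-host>:9090   # placeholder address
      metricName: queue_depth                          # illustrative metric
      threshold: '5'
      query: sum(queue_depth{job="test-app"})
----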

modules/nodes-pods-autoscaling-custom-creating-workload.adoc

Lines changed: 1 addition & 1 deletion
@@ -123,7 +123,7 @@ spec:
       metricName: http_requests_total
       threshold: '5'
       query: sum(rate(http_requests_total{job="test-app"}[1m]))
-      authModes: "basic"
+      authModes: basic
   - authenticationRef: <17>
       name: prom-triggerauthentication
     metadata:

modules/nodes-pods-autoscaling-custom-pausing.adoc

Lines changed: 2 additions & 2 deletions
@@ -44,8 +44,8 @@ metadata:
   generation: 1
   name: scaledobject
   namespace: my-project
-  resourceVersion: "65729"
-  uid: f5aec682-acdf-4232-a783-58b5b82f5dd0
+  resourceVersion: '65729'
+  uid: 'f5aec682-acdf-4232-a783-58b5b82f5dd0'
 ----
 <1> Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling.
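The paused state shown above is reached by annotating the `ScaledObject`; a sketch (not part of this commit), assuming the KEDA `autoscaling.keda.sh/paused-replicas` annotation covered in this module:

[source,yaml]
----
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  annotations:
    autoscaling.keda.sh/paused-replicas: "4"   # scale to 4 replicas and pause autoscaling
  name: scaledobject
  namespace: my-project
----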

modules/nodes-pods-autoscaling-custom-trigger.adoc

Lines changed: 16 additions & 16 deletions
@@ -45,16 +45,16 @@ spec:
       metricName: http_requests_total <4>
       threshold: '5' <5>
       query: sum(rate(http_requests_total{job="test-app"}[1m])) <6>
-      authModes: "basic" <7>
+      authModes: basic <7>
       cortexOrgID: my-org <8>
       ignoreNullValues: false <9>
-      unsafeSsl: "false" <10>
+      unsafeSsl: false <10>
 ----
 <1> Specifies Prometheus as the scaler/trigger type.
 <2> Specifies the address of the Prometheus server. This example uses {product-title} monitoring.
 <3> Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if you are using {product-title} monitoring as a source for the metrics.
 <4> Specifies the name to identify the metric in the `external.metrics.k8s.io` API. If you are using more than one trigger, all metric names must be unique.
-<5> Specifies the value to start scaling for.
+<5> Specifies the value to start scaling for. Must be specified as a quoted string value.
 <6> Specifies the Prometheus query to use.
 <7> Specifies the authentication method to use. Prometheus scalers support bearer authentication (`bearer`), basic authentication (`basic`), or TLS authentication (`tls`). You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret.
 <8> Optional: Passes the `X-Scope-OrgID` header to multi-tenant link:https://cortexmetrics.io/[Cortex] or link:https://grafana.com/oss/mimir/[Mimir] storage for Prometheus. This parameter is required only with multi-tenant Prometheus storage, to indicate which data Prometheus should return.
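For the `basic` authentication mode referenced in callout <7>, a `TriggerAuthentication` sketch (not part of this commit; the secret name and keys are illustrative placeholders):

[source,yaml]
----
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: prom-triggerauthentication
  namespace: my-project
spec:
  secretTargetRef:
  - parameter: username
    name: basic-auth-secret   # placeholder secret name
    key: username
  - parameter: password
    name: basic-auth-secret
    key: password
----
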
@@ -94,13 +94,13 @@ spec:
   - type: cpu <1>
     metricType: Utilization <2>
     metadata:
-      value: "60" <3>
-      containerName: "api" <4>
+      value: '60' <3>
+      containerName: api <4>
 
 ----
 <1> Specifies CPU as the scaler/trigger type.
 <2> Specifies the type of metric to use, either `Utilization` or `AverageValue`.
-<3> Specifies the value to trigger scaling actions upon:
+<3> Specifies the value to trigger scaling actions. Must be specified as a quoted string value.
 * When using `Utilization`, the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
 * When using `AverageValue`, the target value is the average of the metrics across all relevant pods.
 <4> Optional. Specifies an individual container to scale, based on the CPU utilization of only that container, rather than the entire pod. Here, only the container named `api` is to be scaled.
@@ -134,12 +134,12 @@ spec:
   - type: memory <1>
     metricType: Utilization <2>
     metadata:
-      value: "60" <3>
-      containerName: "api" <4>
+      value: '60' <3>
+      containerName: api <4>
 ----
 <1> Specifies memory as the scaler/trigger type.
 <2> Specifies the type of metric to use, either `Utilization` or `AverageValue`.
-<3> Specifies the value to trigger scaling actions for:
+<3> Specifies the value to trigger scaling actions. Must be specified as a quoted string value.
 * When using `Utilization`, the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
 * When using `AverageValue`, the target value is the average of the metrics across all relevant pods.
 <4> Optional. Specifies an individual container to scale, based on the memory utilization of only that container, rather than the entire pod. Here, only the container named `api` is to be scaled.
@@ -179,20 +179,20 @@ spec:
       bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 <3>
       consumerGroup: my-group <4>
       lagThreshold: '10' <5>
-      activationLagThreshold <6>
-      offsetResetPolicy: 'latest' <7>
+      activationLagThreshold: '5' <6>
+      offsetResetPolicy: latest <7>
       allowIdleConsumers: true <8>
       scaleToZeroOnInvalidOffset: false <9>
       excludePersistentLag: false <10>
-      version: 1.0.0 <11>
+      version: '1.0.0' <11>
       partitionLimitation: '1,2,10-20,31' <12>
 ----
 <1> Specifies Kafka as the scaler/trigger type.
 <2> Specifies the name of the Kafka topic on which Kafka is processing the offset lag.
 <3> Specifies a comma-separated list of Kafka brokers to connect to.
 <4> Specifies the name of the Kafka consumer group used for checking the offset on the topic and processing the related lag.
-<5> Optional: Specifies the average target value to trigger scaling actions. The default is `5`.
-<6> Optional: Specifies the target value for the activation phase.
+<5> Optional: Specifies the average target value to trigger scaling actions. Must be specified as a quoted string value. The default is `5`.
+<6> Optional: Specifies the target value for the activation phase. Must be specified as a quoted string value. The default is `0`.
 <7> Optional: Specifies the Kafka offset reset policy for the Kafka consumer. The available values are: `latest` and `earliest`. The default is `latest`.
 <8> Optional: Specifies whether the number of Kafka replicas can exceed the number of partitions on a topic.
 * If `true`, the number of Kafka replicas can exceed the number of partitions on a topic. This allows for idle Kafka consumers.
@@ -203,6 +203,6 @@ spec:
 <10> Optional: Specifies whether the trigger includes or excludes partition lag for partitions whose current offset is the same as the current offset of the previous polling cycle.
 * If `true`, the scaler excludes partition lag in these partitions.
 * If `false`, the trigger includes all consumer lag in all partitions. This is the default.
-<11> Optional: Specifies the version of your Kafka brokers. The default is `1.0.0`.
-<12> Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. The default is to consider all partitions.
+<11> Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is `1.0.0`.
+<12> Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions.

nodes/pods/nodes-pods-autoscaling-custom.adoc

Lines changed: 3 additions & 3 deletions
@@ -8,6 +8,9 @@ toc::[]
 
 As a developer, you can use the custom metrics autoscaler to specify how {product-title} should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory.
 
+:FeatureName: Scaling by using a scaled job
+include::snippets/technology-preview.adoc[]
+
 The Custom Metrics Autoscaler Operator for Red Hat OpenShift is an optional operator, based on the Kubernetes Event Driven Autoscaler (KEDA), that allows workloads to be scaled using additional metrics sources other than pod metrics.
 
 [NOTE]
@@ -17,9 +20,6 @@ The custom metrics autoscaler currently supports only the Prometheus, CPU, memor
 
 // For example, you can scale a database application based on the number of tables in the database, scale another application based on the number of messages in a Kafka topic, or scale based on incoming HTTP requests collected by {product-title} monitoring.
 
-:FeatureName: The custom metrics autoscaler
-include::snippets/technology-preview.adoc[leveloffset=+0]
-
 // The following include statements pull in the module files that comprise
 // the assembly. Include any combination of concept, procedure, or reference
 // modules required to cover the user story. You can also include other