Commit a1c1801

Update following PRR review.
1 parent b7f9d25 commit a1c1801

File tree (1 file changed: +117 additions, -20 deletions)
  • keps/sig-autoscaling/4951-configurable-hpa-tolerance


keps/sig-autoscaling/4951-configurable-hpa-tolerance/README.md

Lines changed: 117 additions & 20 deletions
@@ -353,13 +353,15 @@ https://storage.googleapis.com/k8s-triage/index.html
 We expect no non-infra related flakes in the last month as a GA graduation criteria.
 -->

-We will add the follow [e2e autoscaling tests]:
+Existing e2e tests ensure the autoscaling behavior uses the default tolerance when no
+configurable tolerance is specified.

-- For both scale up and scale down:
-  - Workload does not scale because the metric ratio is in tolerance
-    ([PR](https://github.com/kubernetes/kubernetes/pull/130797/commits/4db8e8cc1dc2e5683c878b3ef29cb2e0fbe70f80#diff-832ab9989fa2683f7848ae1607c9a9aaa2bd245e5374efa0c5a87ba8edab464a)).
-  - Workload scales successfully because the metric ratio is out of tolerance.
-  - Autoscaling uses the default when no tolerances are set.
+The new [e2e autoscaling tests] covering this feature are:
+
+- [Test with large configurable tolerance](https://github.com/kubernetes/kubernetes/blob/07142400ecd02126602ffaa6f91712cd3f1e170c/test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go#L509): [SIG autoscaling](https://testgrid.k8s.io/sig-autoscaling-hpa#gci-gce-autoscaling-hpa-cpu-alpha-beta-pull&include-filter-by-regex=HPAConfigurableTolerance.*large%20configurable%20tolerance), [triage search](https://storage.googleapis.com/k8s-triage/index.html?test=HPAConfigurableTolerance.*large%20configurable%20tolerance)
+
+Before the graduation to beta, we will add an integration test verifying the autoscaling
+behavior when smaller and larger than default tolerances are set on an HPA.

 [e2e autoscaling tests]: https://github.com/kubernetes/kubernetes/tree/master/test/e2e/autoscaling


@@ -436,7 +438,7 @@ in back-to-back releases.

 - All tests described in the [`e2e tests` section](#e2e-tests) are implemented
   and linked in this KEP.
-- We have monitored negative user feedback and addressed relevant concerns.
+- We have monitored for negative user feedback and addressed relevant concerns.

 ### Upgrade / Downgrade Strategy

@@ -551,7 +553,7 @@ You can take a look at one potential example of such test in:
 https://github.com/kubernetes/kubernetes/pull/97058/files#diff-7826f7adbc1996a05ab52e3f5f02429e94b68ce6bce0dc534d1be636154fded3R246-R282
 -->

-[Unit tests have been added](https://github.com/kubernetes/kubernetes/pull/130797/commits/a41284d9fa3a3d5a5e8760db6e9fd4f7e5e6fca6#diff-98f8520444a477d01c5cc2e56f92939d5fb07893a234b8fee5b67c7c147a20e0) to verify that HPAs with and without the new fields are
+[Unit tests have been added](https://github.com/kubernetes/kubernetes/blob/07142400ecd02126602ffaa6f91712cd3f1e170c/pkg/apis/autoscaling/validation/validation_test.go#L1648) to verify that HPAs with and without the new fields are
 properly validated, both when the feature gate is enabled or not.

 ### Rollout, Upgrade and Rollback Planning
@@ -594,9 +596,96 @@ Longer term, we may want to require automated upgrade/rollback tests, but we
 are missing a bunch of machinery and tooling and can't do that now.
 -->

-I have manually tested a cluster upgrade, and this feature is in alpha without
-(to the best of our knowledge) any user reporting an issue. GKE has automated
-upgrade/downgrade tests that did not report any issue.
+The upgrade→downgrade→upgrade testing was done manually on a 1.33 cluster with the following steps:
+
+1. Start the cluster with the `HPAConfigurableTolerance` feature gate enabled:
+
+   ```sh
+   kind create cluster --name configurable-tolerance --image kindest/node:v1.33.0 --config config.yaml
+   ```
+
+   with the following `config.yaml` file content:
+
+   ```yaml
+   kind: Cluster
+   apiVersion: kind.x-k8s.io/v1alpha4
+   featureGates:
+     "HPAConfigurableTolerance": true
+   nodes:
+   - role: control-plane
+   - role: worker
+   ```
+
+   Install metrics-server:
+
+   ```sh
+   kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.2/components.yaml
+   kubectl patch -n kube-system deployment metrics-server --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
+   ```
+
+   Create a Deployment whose Pods consume 50% of their CPU request, and an associated HPA with a very large tolerance:
+
+   ```sh
+   kubectl apply -f configurable-tolerance-test.yaml
+   ```
+
+   with the following `configurable-tolerance-test.yaml` file content:
+
+   ```yaml
+   apiVersion: apps/v1
+   kind: Deployment
+   metadata:
+     name: cpu-stress-deployment
+     labels:
+       app: cpu-stressor
+   spec:
+     replicas: 1
+     selector:
+       matchLabels:
+         app: cpu-stressor
+     template:
+       metadata:
+         labels:
+           app: cpu-stressor
+       spec:
+         containers:
+         - name: cpu-stressor
+           image: alpine:latest
+           command: ["/bin/sh"]
+           args: # Load: 1% of one CPU (10 milliCPU)
+           - "-c"
+           - "apk add --no-cache stress-ng && stress-ng --cpu 1 --cpu-load 1 --cpu-method=crc16 --timeout 3600s"
+           resources:
+             requests:
+               cpu: "20m"
+   ---
+   apiVersion: autoscaling/v2
+   kind: HorizontalPodAutoscaler
+   metadata:
+     name: cpu-stress-hpa
+   spec:
+     scaleTargetRef:
+       apiVersion: apps/v1
+       kind: Deployment
+       name: cpu-stress-deployment
+     minReplicas: 1
+     maxReplicas: 5
+     metrics:
+     - type: Resource
+       resource:
+         name: cpu
+         target:
+           type: Utilization
+           averageUtilization: 10
+     behavior:
+       scaleUp:
+         tolerance: 20. # 2000%
+   ```
+
+   Check that, after 5 minutes, `kubectl describe hpa cpu-stress-hpa` displays `ScalingLimited: False`
+   (i.e. the HPA doesn't recommend scaling up because of the large tolerance).
+
+2. Simulate a downgrade by disabling the feature gate for the API server and controller manager (update
+   the `config.yaml` file to set it to false). Follow the procedure described in step 1, and observe that
+   this time `kubectl describe hpa cpu-stress-hpa` displays `ScalingLimited: True`.
+
+3. Simulate an upgrade by re-enabling the feature gate for the API server and controller manager. Follow
+   the procedure described in step 1, and observe that the HPA description mentions `ScalingLimited: False`,
+   demonstrating that the feature is working again.

 ###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?

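The expectation in step 1 above can be sanity-checked by hand: the HPA computes the usage ratio (current metric / target metric) and, per this KEP, skips scaling whenever |ratio - 1| is at most the tolerance. A minimal shell/awk sketch of that arithmetic (illustrative only, not the controller's actual code):

```shell
# Values from the manual test above: 1 replica at ~50% CPU utilization,
# a 10% target, and a scale-up tolerance of 20 (i.e. 2000%).
current_replicas=1
current_util=50
target_util=10
tolerance=20

awk -v r="$current_replicas" -v c="$current_util" -v d="$target_util" -v t="$tolerance" 'BEGIN {
  ratio = c / d                 # 5.0: nominally a 5x scale-up
  dev = ratio - 1
  if (dev < 0) dev = -dev       # |ratio - 1|
  if (dev <= t) {
    # Deviation within the tolerance band: the recommendation is dropped.
    print "no scaling (ratio " ratio ", tolerance " t ")"
  } else {
    desired = r * ratio
    up = int(desired); if (up < desired) up++   # ceil
    print "scale to " up " replicas"
  }
}'
# → no scaling (ratio 5, tolerance 20)
```

With the feature gate disabled (step 2), the default 0.1 tolerance applies instead, the ratio of 5 falls far outside the band, and the workload scales up, consistent with the `ScalingLimited: True` observation.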
@@ -646,7 +735,7 @@ values. Users can get both values using
 and use them to verify that scaling events are triggered when their ratio is out
 of tolerance.

-The [controller-manager logs have been updated](https://github.com/kubernetes/kubernetes/pull/130797/commits/2dd9eda47ffd5556ff90446e91d22ddbecc05d2c#diff-f1c5a31aa8fb8e3fd64b6aa13d3358b504e6e25030f249f1652e244c105eafc7R846)
+The [controller-manager logs have been updated](https://github.com/kubernetes/kubernetes/blob/07142400ecd02126602ffaa6f91712cd3f1e170c/pkg/controller/podautoscaler/horizontal.go#L846)
 to help users understand the behavior of the autoscaler. The data added to the
 logs includes the tolerance used for each scaling decision.

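A quick way to confirm which tolerance a given decision used is to extract it from such a log line. The sample line below is hypothetical (the real controller's key/value layout may differ); the point is only the `tolerance=` extraction:

```shell
# Hypothetical sample log line; not verbatim kube-controller-manager output.
line='horizontal.go] "Proposing desired replicas" desiredReplicas=1 tolerance=20 hpa="default/cpu-stress-hpa"'

# Pull out the tolerance used for this scaling decision.
echo "$line" | grep -o 'tolerance=[0-9.]*'
# → tolerance=20
```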
@@ -667,7 +756,9 @@ These goals will help you determine what you need to measure (SLIs) in the next
 question.
 -->

-N/A.
+Although the absolute value of the `horizontal_pod_autoscaler_controller_metric_computation_duration_seconds`
+metric depends on the HPA configuration, it should be unaffected by this feature: it should not vary
+by more than 5%.

 ###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
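The 5% budget above can be checked mechanically from two measurements of the metric. A small sketch (the sample values are hypothetical, not real measurements):

```shell
# Hypothetical p99 values (seconds) of
# horizontal_pod_autoscaler_controller_metric_computation_duration_seconds,
# sampled before and after enabling HPAConfigurableTolerance.
before=0.020
after=0.0205

awk -v b="$before" -v a="$after" 'BEGIN {
  change = (a - b) / b * 100        # percent change
  if (change < 0) change = -change  # absolute value
  if (change <= 5) print "within the 5% budget (" change "%)"
  else             print "exceeds the 5% budget (" change "%)"
}'
# → within the 5% budget (2.5%)
```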

@@ -679,8 +770,7 @@ This KEP is not expected to have any impact on SLIs/SLOs as it doesn't introduce
 a new HPA behavior, but merely allows users to easily change the value of a
 parameter that's otherwise difficult to update.

-Standard HPA metrics (e.g.
-`horizontal_pod_autoscaler_controller_metric_computation_duration_seconds`) can
+The standard HPA metric `horizontal_pod_autoscaler_controller_metric_computation_duration_seconds` can
 be used to verify the HPA controller health.

 ###### Are there any missing metrics that would be useful to have to improve observability of this feature?
@@ -857,13 +947,19 @@ For each of them, fill in the following information by copying the below templat
 - Testing: Are there any tests for failure mode? If not, describe why.
 -->

-We do not expect any new failure mode. (While setting inappropriate `tolerance`
-values may cause HPAs to react too slowly or too fast, the feature is working as
-intended.)
+We do not expect any new failure mode. (While setting `tolerance` below 10% can cause HPAs
+to scale up and down as frequently as every 30s, and higher values might stop scaling altogether
+if the metric remains within the tolerance band, the feature is still working as intended.
+To make HPAs respond faster, decrease the tolerance value; conversely, to make them respond
+more slowly, increase it.)

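As an illustration of that advice, a `behavior` stanza can tune the two directions independently. The layout follows the `scaleUp.tolerance` example earlier in this KEP, with the `scaleDown` counterpart assumed symmetric; the values are illustrative:

```yaml
behavior:
  scaleUp:
    tolerance: 0.05 # 5%: react to smaller deviations than the 10% default
  scaleDown:
    tolerance: 0.2  # 20%: require a larger deviation before removing Pods
```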
 ###### What steps should be taken if SLOs are not being met to determine the problem?

-N/A.
+If possible, increase the log level for kube-controller-manager and check the controller logs:
+1. Search for "Proposing desired replicas", verify that the tolerance is set as expected,
+   and check (using `kubectl describe hpa`) whether the ratio between the _current_ and _desired_
+   metric values is within tolerance.
+2. Look for warnings and errors that might point to where the problem lies.

 ## Implementation History

@@ -881,6 +977,7 @@ Major milestones might include:
 2025-01-21: KEP PR merged.
 2025-03-24: [Implementation PR](https://github.com/kubernetes/kubernetes/pull/130797) merged.
 2025-05-15: Kubernetes v1.33 released (includes this feature).
+2025-05-16: This KEP updated for beta graduation.

 ## Drawbacks

@@ -909,4 +1006,4 @@ new subproject, repos requested, or GitHub details. Listing these here allows a
 SIG to get the process for these resources started right away.
 -->

-N/A.
+N/A.
