Commit 49ecd13

Merge pull request #23987 from chaitanyaenr/cluster_scale_down
Add info on throttling during cluster scale down
2 parents: 6feedf0 + d5cd310

File tree

1 file changed (+12, -1)


modules/recommended-scale-practices.adoc

Lines changed: 12 additions & 1 deletion
@@ -18,4 +18,15 @@ Cloud providers might implement a quota for API services. Therefore, gradually s
 
 The controller might not be able to create the machines if the replicas in the machine sets are set to higher numbers all at one time. The number of requests the cloud platform, which {product-title} is deployed on top of, is able to handle impacts the process. The controller will start to query more while trying to create, check, and update the machines with the status. The cloud platform on which {product-title} is deployed has API request limits and excessive queries might lead to machine creation failures due to cloud platform limitations.
 
-Enable machine health checks when scaling to large node counts. In case of failures, the health checks monitor the condition and automatically repair unhealthy machines.
+Enable machine health checks when scaling to large node counts. In case of failures,
+the health checks monitor the condition and automatically repair unhealthy machines.
+
+[NOTE]
+====
+When scaling large and dense clusters to lower node counts, it might take large
+amounts of time as the process involves draining or evicting the objects running on
+the nodes being terminated in parallel. Also, the client might start to throttle the
+requests if there are too many objects to evict. The default client QPS and burst
+rates are currently set to `5` and `10` respectively and they cannot be modified
+in {product-title}.
+====
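The client-side throttling called out in the added note comes from a token-bucket rate limiter in the Kubernetes API client. The following Go program is a minimal, illustrative sketch (assuming client-go's `k8s.io/client-go/util/flowcontrol` package and an arbitrary request count) of how a QPS of `5` with a burst of `10` paces outgoing requests: the first ten calls go out immediately, and every later call waits for the bucket to refill.

[source,go]
----
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token-bucket limiter with the defaults quoted in the note:
	// 5 requests per second, with a burst capacity of 10 tokens.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	start := time.Now()
	for i := 1; i <= 25; i++ { // 25 requests is an arbitrary, illustrative count
		limiter.Accept() // returns immediately while burst tokens last, then blocks
		fmt.Printf("request %2d sent after %v\n", i, time.Since(start).Round(10*time.Millisecond))
	}
	// Roughly: the first 10 requests are sent at once and the remaining 15 are
	// paced at about 5 per second, adding ~3 seconds of pure client-side wait.
}
----

At that rate, an eviction queue of a few hundred pods adds roughly a minute of client-side waiting alone, on top of per-node drain and termination times, which is the slowdown the note warns about during scale-down.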

Comments (0)