Commit 6d17dda

Merge pull request #64061 from jab-rh/fixup-max-pods
Simplify table note for max pods on a cluster
2 parents ee2ac46 + 3b12e9b commit 6d17dda

1 file changed: 1 addition, 1 deletion

modules/openshift-cluster-maximums-major-releases.adoc

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ Red Hat does not provide direct guidance on sizing your {product-title} cluster.
 --
 1. Pause pods were deployed to stress the control plane components of {product-title} at 2000 node scale. The ability to scale to similar numbers will vary depending upon specific deployment and workload parameters.
 2. The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements.
-3. This was tested on a cluster with 31 servers: 3 control planes, 2 infrastructure nodes, and 26 worker nodes. The default `maxPods` is still 250. To get to 2,500 pods per node, the cluster must be created with `maxPods` set to `2500` using a custom kubelet config. If you need 2,500 user pods, you need a `hostPrefix` of `20` because there are 10-15 system pods already running on the node. The maximum number of pods with attached persistent volume claims (PVC) depends on the storage backend from where PVC are allocated. In our tests, only {rh-storage} v4 (OCS v4) was able to satisfy 2,500 of pods per node.
+3. This was tested on a cluster with 31 servers: 3 control planes, 2 infrastructure nodes, and 26 worker nodes. If you need 2,500 user pods, you need both a `hostPrefix` of `20`, which allocates a network large enough for each node to contain more than 2000 pods, and a custom kubelet config with `maxPods` set to `2500`. For more information, see link:https://cloud.redhat.com/blog/running-2500-pods-per-node-on-ocp-4.13[Running 2500 pods per node on OCP 4.13].
 4. The maximum tested pods per node is 2,500 for clusters using the `OVNKubernetes` network plugin. The maximum tested pods per node for the `OpenShiftSDN` network plugin is 500 pods.
 5. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.
 6. There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
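
Context for the updated note 3: in {product-title}, a custom kubelet configuration like the one the note describes is typically expressed as a `KubeletConfig` custom resource, while `hostPrefix` is set under `networking.clusterNetwork` in `install-config.yaml` at installation time. The sketch below illustrates both settings; the resource name, the `custom-kubelet: large-pods` pool label, and the `10.128.0.0/14` CIDR are illustrative assumptions, not values taken from this commit.

# Minimal sketch: raise the per-node pod limit with a KubeletConfig CR.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods                 # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: large-pods     # assumes the target MachineConfigPool carries this label
  kubeletConfig:
    maxPods: 2500                    # default is 250
---
# Minimal sketch: install-config.yaml fragment giving each node a /20 pod subnet
# (about 4,000 addresses, enough headroom for 2,500 user pods plus system pods).
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14              # illustrative; shown here as the default cluster network CIDR
    hostPrefix: 20

Once applied, the Machine Config Operator rolls the new kubelet setting out to nodes in the matching pool; `hostPrefix`, by contrast, is an install-time choice, since the cluster network cannot be resized after installation.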

0 commit comments
