Commit c86f6ce

Merge pull request #62307 from jab-rh/update-pods-per-node
OSDOCS-6864: Adjust pods per node to reflect OVN-Kubernetes limits
2 parents: 0bf4d51 + 69d993e

1 file changed (+12 -11 lines)

modules/openshift-cluster-maximums-major-releases.adoc

Lines changed: 12 additions & 11 deletions
@@ -23,18 +23,18 @@ Red Hat does not provide direct guidance on sizing your {product-title} cluster.
 | 150,000

 | Number of pods per node
-| 500 ^[3]^
+| 2,500 ^[3][4]^

 | Number of pods per core
 | There is no default value.

-| Number of namespaces ^[4]^
+| Number of namespaces ^[5]^
 | 10,000

 | Number of builds
 | 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy

-| Number of pods per namespace ^[5]^
+| Number of pods per namespace ^[6]^
 | 25,000

 | Number of routes and back ends per Ingress Controller
@@ -46,7 +46,7 @@ Red Hat does not provide direct guidance on sizing your {product-title} cluster.
 | Number of config maps
 | 90,000

-| Number of services ^[6]^
+| Number of services ^[7]^
 | 10,000

 | Number of services per namespace
@@ -55,25 +55,26 @@ Red Hat does not provide direct guidance on sizing your {product-title} cluster.
 | Number of back-ends per service
 | 5,000

-| Number of deployments per namespace ^[5]^
+| Number of deployments per namespace ^[6]^
 | 2,000

 | Number of build configs
 | 12,000

 | Number of custom resource definitions (CRD)
-| 512 ^[7]^
+| 512 ^[8]^

 |===
 [.small]
 --
 1. Pause pods were deployed to stress the control plane components of {product-title} at 2000 node scale. The ability to scale to similar numbers will vary depending upon specific deployment and workload parameters.
 2. The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements.
-3. This was tested on a cluster with 100 worker nodes with 500 pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with a `maxPods` set to `500` using a custom kubelet config. If you need 500 user pods, you need a `hostPrefix` of `22` because there are 10-15 system pods already running on the node. The maximum number of pods with attached persistent volume claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only {rh-storage} v4 (OCS v4) was able to satisfy the number of pods per node discussed in this document.
-4. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.
-5. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
-6. Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.
-7. {product-title} has a limit of 512 total custom resource definitions (CRD), including those installed by {product-title}, products integrating with {product-title} and user created CRDs. If there are more than 512 CRDs created, then there is a possibility that `oc` commands requests may be throttled.
+3. This was tested on a cluster with 31 servers: 3 control planes, 2 infrastructure nodes, and 26 worker nodes. The default `maxPods` is still 250. To get to 2,500 pods per node, the cluster must be created with `maxPods` set to `2500` using a custom kubelet config. If you need 2,500 user pods, you need a `hostPrefix` of `20` because there are 10-15 system pods already running on the node. The maximum number of pods with attached persistent volume claims (PVC) depends on the storage backend from which the PVCs are allocated. In our tests, only {rh-storage} v4 (OCS v4) was able to satisfy 2,500 pods per node.
+4. The maximum tested pods per node is 2,500 for clusters using the `OVNKubernetes` network plugin. The maximum tested pods per node for the `OpenShiftSDN` network plugin is 500 pods.
+5. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.
+6. There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing of state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
+7. Each service port and each service back-end has a corresponding entry in `iptables`. The number of back-ends of a given service impacts the size of the `Endpoints` objects, which impacts the size of data that is being sent all over the system.
+8. {product-title} has a limit of 512 total custom resource definitions (CRD), including those installed by {product-title}, products integrating with {product-title}, and user-created CRDs. If there are more than 512 CRDs created, then there is a possibility that `oc` command requests might be throttled.
 --

 [id="cluster-maximums-major-releases-example-scenario_{context}"]
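For reference, a minimal sketch of the custom kubelet config that footnote 3 describes might look like the following. The resource name and the `custom-kubelet: set-max-pods` label are illustrative placeholders; the same label would need to be added to the target pool (for example, `oc label machineconfigpool worker custom-kubelet=set-max-pods`) before applying the resource with `oc create -f`.

# Hypothetical KubeletConfig sketch that raises the per-node pod limit to 2,500.
# The metadata name and the machineConfigPoolSelector label are placeholders;
# the matching label must exist on the worker MachineConfigPool.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
  kubeletConfig:
    maxPods: 2500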
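The `hostPrefix` of `20` from footnote 3 and the `OVNKubernetes` plugin from footnote 4 are both chosen at installation time in `install-config.yaml`. A sketch of the relevant `networking` stanza, assuming the common default CIDR values, might be:

# Hypothetical install-config.yaml networking stanza (CIDRs shown are common defaults).
networking:
  networkType: OVNKubernetes   # OpenShiftSDN was tested only up to 500 pods per node
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 20             # a /20 per node is about 4,000 pod IPs, enough for 2,500 user pods plus system pods
  serviceNetwork:
  - 172.30.0.0/16

Note that a /14 cluster network split into /20 node subnets yields at most 64 node subnets, so a larger cluster would need a wider `cidr`.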
