modules/openshift-cluster-maximums-major-releases.adoc
8 additions, 3 deletions
@@ -64,15 +64,20 @@ Tested Cloud Platforms for {product-title} 4.x: Amazon Web Services, Microsoft A
 | 40,000
 | 40,000

+| Number of custom resource definitions (CRD)
+| There is no default value.
+| 512 ^7^
+
 |===
 [.small]
 --
-1. Pause pods were deployed to stress the control plane components of OpenShift at 2000 node scale.
+1. Pause pods were deployed to stress the control plane components of {product-title} at 2000 node scale.
 2. The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements.
-3. This was tested on a cluster with 100 worker nodes with 500 pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with a `maxPods` set to `500` using a custom kubelet config. If you need 500 user pods, you need a `hostPrefix` of `22` because there are 10-15 system pods already running on the node. The maximum number of pods with attached persistent volume claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of pods per node discussed in this document.
-4. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage.
+3. This was tested on a cluster with 100 worker nodes with 500 pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with `maxPods` set to `500` using a custom kubelet config. If you need 500 user pods, you need a `hostPrefix` of `22` because there are 10-15 system pods already running on the node. The maximum number of pods with attached persistent volume claims (PVC) depends on the storage backend from which the PVCs are allocated. In our tests, only {rh-storage} v4 (OCS v4) was able to satisfy the number of pods per node discussed in this document.
+4. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.
 5. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down the processing of state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
 6. Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impacts the size of the endpoints objects, which impacts the size of data that is being sent all over the system.
+7. {product-title} has a limit of 512 total custom resource definitions (CRD), including those installed by {product-title}, those installed by products that integrate with {product-title}, and user-created CRDs. If more than 512 CRDs are created, `oc` command requests might be throttled.
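
Footnote 3 refers to a custom kubelet configuration that raises `maxPods` to `500`. The following is a minimal sketch of how such a configuration is commonly expressed as a `KubeletConfig` custom resource; it is not part of this change, the object name and the `custom-kubelet: set-max-pods` label are illustrative assumptions, and the selector must match a label on the target `MachineConfigPool`.

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods                 # illustrative name, not from this change
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # assumed label on the worker MachineConfigPool
  kubeletConfig:
    maxPods: 500                     # raises the per-node pod limit from the default of 250
----

The `hostPrefix` of `22` mentioned in the same footnote is set at install time under `networking.clusterNetwork` in `install-config.yaml`, which gives each node a /22 block of pod IP addresses.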
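For the 512 CRD limit described in footnote 7, one rough way to see how many CRDs a cluster currently has is to count them with a command such as `oc get crds --no-headers | wc -l`; this is an illustrative check, not part of this change.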