Approximately 1 vCPU core and 1 GiB of memory are reserved on each worker node and removed from allocatable resources. This is necessary to run link:https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved[processes required by the underlying platform]. This includes system daemons such as udev, kubelet, container runtime, and so on, and also accounts for kernel reservations. {OCP} core systems such as audit log aggregation, metrics collection, DNS, image registry, SDN, and so on might consume additional allocatable resources to maintain the stability and maintainability of the cluster. The additional resources consumed might vary based on usage.
====
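The reservation described above means a node's allocatable resources are roughly its instance capacity minus about 1 vCPU and 1 GiB. As a rough sketch (the helper function, default values, and the exact arithmetic are illustrative assumptions, not the platform's actual reservation formula, which varies with node size and usage):

```python
def approx_allocatable(vcpu: int, mem_gib: int,
                       reserved_vcpu: float = 1.0,
                       reserved_mem_gib: float = 1.0) -> tuple[float, float]:
    """Return an approximate (vCPU, GiB) left for workloads after the
    roughly 1 vCPU / 1 GiB system reservation described above.
    Illustrative only: real reservations vary by node size and usage."""
    return (vcpu - reserved_vcpu, mem_gib - reserved_mem_gib)

# An m5.2xlarge worker (8 vCPU, 32 GiB) would leave approximately:
print(approx_allocatable(8, 32))  # → (7.0, 31.0)
```

The exact values reported by the cluster appear in each node's `status.allocatable` field.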

[id="aws-compute-types-ccs_{context}"]
== AWS compute types for Customer Cloud Subscription clusters

{product-title} offers the following worker node types and sizes on AWS:

.General purpose
[%collapsible]
====
- m5.xlarge (4 vCPU, 16 GiB)
- m5.2xlarge (8 vCPU, 32 GiB)
- m5.4xlarge (16 vCPU, 64 GiB)
- m5.8xlarge (32 vCPU, 128 GiB)
- m5.12xlarge (48 vCPU, 192 GiB)
- m5.16xlarge (64 vCPU, 256 GiB)
- m5.24xlarge (96 vCPU, 384 GiB)
====

.Memory-optimized
[%collapsible]
====
- r5.xlarge (4 vCPU, 32 GiB)
- r5.2xlarge (8 vCPU, 64 GiB)
- r5.4xlarge (16 vCPU, 128 GiB)
- r5.8xlarge (32 vCPU, 256 GiB)
- r5.12xlarge (48 vCPU, 384 GiB)
- r5.16xlarge (64 vCPU, 512 GiB)
- r5.24xlarge (96 vCPU, 768 GiB)
- r6i.xlarge (4 vCPU, 32 GiB)
- r6i.2xlarge (8 vCPU, 64 GiB)
- r6i.4xlarge (16 vCPU, 128 GiB)
- r6i.8xlarge (32 vCPU, 256 GiB)
- r6i.12xlarge (48 vCPU, 384 GiB)
- r6i.16xlarge (64 vCPU, 512 GiB)
- r6i.24xlarge (96 vCPU, 768 GiB)
- r6i.32xlarge (128 vCPU, 1,024 GiB)
- z1d.xlarge (4 vCPU, 32 GiB)
- z1d.2xlarge (8 vCPU, 64 GiB)
- z1d.3xlarge (12 vCPU, 96 GiB)
- z1d.6xlarge (24 vCPU, 192 GiB)
- z1d.12xlarge (48 vCPU, 384 GiB)
====

.Compute-optimized
[%collapsible]
====
- c5.xlarge (4 vCPU, 8 GiB)
- c5.2xlarge (8 vCPU, 16 GiB)
- c5.4xlarge (16 vCPU, 32 GiB)
- c5.9xlarge (36 vCPU, 72 GiB)
- c5.12xlarge (48 vCPU, 96 GiB)
- c5.18xlarge (72 vCPU, 144 GiB)
- c5.24xlarge (96 vCPU, 192 GiB)
====

.Storage-optimized compute types
[%collapsible]
====
- i3.xlarge (4 vCPU, 30.5 GiB)
- i3.2xlarge (8 vCPU, 61 GiB)
- i3.4xlarge (16 vCPU, 122 GiB)
- i3.8xlarge (32 vCPU, 244 GiB)
- i3.16xlarge (64 vCPU, 488 GiB)
- i3en.xlarge (4 vCPU, 32 GiB)
- i3en.2xlarge (8 vCPU, 64 GiB)
- i3en.3xlarge (12 vCPU, 96 GiB)
- i3en.6xlarge (24 vCPU, 192 GiB)
- i3en.12xlarge (48 vCPU, 384 GiB)
- i3en.24xlarge (96 vCPU, 768 GiB)
====

[id="aws-compute-types-non-ccs_{context}"]
== AWS compute types for standard clusters

{product-title} offers the following worker node types and sizes on AWS:

.General purpose
[%collapsible]
====
- m5.xlarge (4 vCPU, 16 GiB)
- m5.2xlarge (8 vCPU, 32 GiB)
- m5.4xlarge (16 vCPU, 64 GiB)
- m5.8xlarge (32 vCPU, 128 GiB)
- m5.12xlarge (48 vCPU, 192 GiB)
- m5.16xlarge (64 vCPU, 256 GiB)
- m5.24xlarge (96 vCPU, 384 GiB)
====

.Memory-optimized
[%collapsible]
====
- r5.xlarge (4 vCPU, 32 GiB)
- r5.2xlarge (8 vCPU, 64 GiB)
- r5.4xlarge (16 vCPU, 128 GiB)
- r5.8xlarge (32 vCPU, 256 GiB)
- r5.12xlarge (48 vCPU, 384 GiB)
- r5.16xlarge (64 vCPU, 512 GiB)
- r5.24xlarge (96 vCPU, 768 GiB)
====

.Compute-optimized
[%collapsible]
====
- c5.2xlarge (8 vCPU, 16 GiB)
- c5.4xlarge (16 vCPU, 32 GiB)
- c5.9xlarge (36 vCPU, 72 GiB)
- c5.12xlarge (48 vCPU, 96 GiB)
- c5.18xlarge (72 vCPU, 144 GiB)
- c5.24xlarge (96 vCPU, 192 GiB)
====

[id="gcp-compute-types_{context}"]
== Google Cloud compute types

{product-title} offers the following worker node types and sizes on Google Cloud that are chosen to have a common CPU and memory capacity that are the same as other cloud instance types: