docs/instructions.adoc (+9 -8)
@@ -87,12 +87,12 @@ If you want to use bastion host, set the parameter *create_bastion* to *true* in
There are 2 additional parameters for the bastion:
-* package_update
-* package_upgrade
+* bastion_package_update
+* bastion_package_upgrade
-_package_update_ will update the apt database *if* you choose Ubuntu as the Linux distribution for the bastion host.
+_bastion_package_update_ will update the apt database *if* you choose Ubuntu as the Linux distribution for the bastion host.
-_package_upgrade_ will upgrade the bastion compute instance on first boot. If you choose Ubuntu for bastion host and you set _package_upgrade_ to *true*, you should also set the _package_update_ to *true*.
+_bastion_package_upgrade_ will upgrade the bastion compute instance on first boot. If you choose Ubuntu for the bastion host and set _bastion_package_upgrade_ to *true*, you should also set _bastion_package_update_ to *true*.
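For illustration only, a minimal terraform.tfvars sketch combining the bastion parameters mentioned above might look as follows (the *true* values are assumptions for the example, not documented defaults):

[source]
----
# Hypothetical terraform.tfvars excerpt: enable the bastion host and its
# package handling on first boot (e.g. for an Ubuntu bastion).
create_bastion          = true
bastion_package_update  = true
bastion_package_upgrade = true
----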
****
N.B. It is good and recommended practice to upgrade your bastion host to the latest packages to minimize the possibility of vulnerabilities. However, it will also take slightly longer before the bastion host is available.
@@ -146,13 +146,14 @@ Calico enables network policy in Kubernetes clusters across the cloud. To instal
{uri-metricserver}[Kubernetes Metrics Server] can be installed by setting the parameter *install_metricserver = true* in terraform.tfvars. By default, the latest version is installed in kube-system namespace. This is required if you need to use Horizontal Pod Autoscaling.
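As a quick reference, a minimal terraform.tfvars sketch for the add-on described above (the flag is the only value shown; it is illustrative and relies on the defaults described in this paragraph):

[source]
----
# Install the Kubernetes Metrics Server (latest version, kube-system namespace)
install_metricserver = true
----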
-=== Scaling the number of worker nodes
+=== Scaling the node pools
-Set the parameter *node_pool_quantity_per_subnet* to the desired quantity.For single AD region a minimum quantity of 2 will get created. This is helpful to utilize the OCI Fault Domains in single AD regions. Refer to {uri-topology}#fault-domains[Fault Domain].
+There are 2 ways you can scale the node pools:
-=== Scaling the number of node pools
+* add more node pools
+* increase the number of worker nodes per subnet for each node pool.
-Set the parameter *node_pools* to the desired quantity. Refer to {uri-topology}#node-pools[Nodepool].
+Set the parameter *node_pools* to the desired quantities to scale the node pools accordingly. Refer to {uri-topology}#node-pools[Nodepool].
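For example, a node_pools map along the following lines (shapes and counts are purely illustrative) would exercise both scaling approaches at once, adding a second pool and raising the per-subnet worker count of the first:

[source]
----
# Hypothetical terraform.tfvars excerpt
node_pools = {
  "np1" = ["VM.Standard2.1", 3]   # more worker nodes per subnet in np1
  "np2" = ["VM.Standard2.2", 1]   # an additional node pool
}
----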
docs/terraformoptions.adoc (+14 -14)
@@ -236,7 +236,7 @@ subnets = {
|bastion_shape
|The shape of bastion instance.
|
-|VM.Standard2.1
+|VM.Standard.E2.1
|bastion_access
|CIDR block in the form of a string to which ssh access to the bastion must be restricted to. *_ANYWHERE_* is equivalent to 0.0.0.0/0 and allows ssh access from anywhere.
@@ -326,9 +326,19 @@ availability_domains = {
|LATEST
|node_pools
-|The number of node pools to create. Refer to {uri-topology}[topology] for more thorough examples.
-|
-|1
+|The number, shape and quantity per subnet of the node pools to create. Refer to {uri-topology}[topology] for more thorough examples.
+|e.g.
+[source]
+----
+node_pools = {
+"np1" = ["VM.Standard2.1", 1]
+}
+----
+|----
+node_pools = {
+"np1" = ["VM.Standard2.1", 1]
+}
+----
|node_pool_name_prefix
|A string prefixed to the node pool name.
@@ -350,16 +360,6 @@ availability_domains = {
|
|7.6
-|node_pool_node_shape
-|The shape of worker nodes to provision.
-|
-|VM.Standard2.1
-
-|node_pool_quantity_per_subnet
-|Number of worker nodes by worker subnets in a node pool. Refer to {uri-topology}[topology] for more thorough examples.
-|
-|1
-
|nodepool_topology
a|The number of Availability Domains the node pools should span. Use 1 for single-AD regions and 3 for multiple-AD regions. _Topology 2 is experimental and is only used in multiple-AD regions_.
docs/topology.adoc (+81 -12)
@@ -193,26 +193,58 @@ A node pool is a set of hosts within a cluster that all have the same configurat
Node pools enable you to create pools of machines within a cluster that have different configurations. For example, you might create one pool of nodes in a cluster as virtual machines, and another pool of nodes as bare metal machines. A cluster must have a minimum of one node pool, but a node pool need not contain any worker nodes.
-****
-*N.B. As of this version, all node pools have the same configuration. You can manually add node pools of different configuration (e.g. shapes) after the cluster is created.*
-****
+When using this project to create the node pools, the following is done:
+* a number of node pools are created. The number of node pools created is equal to the number of elements in the node_pools parameter e.g.
-When using this project to create the node pools, the following is done:
+----
+node_pools = {
+"np1" = ["VM.Standard2.1", 1]
+"np2" = ["VM.Standard2.2", 1]
+}
+----
+
+will create 2 node pools (np1 and np2) whereas
+
+----
+node_pools = {
+"np1" = ["VM.Standard2.1", 1]
+"np2" = ["VM.Standard2.2", 1]
+"np3" = ["VM.Standard2.4", 1]
+}
+----
-* a number of node pools are created. This is controlled by the node_pools parameter. By default, this value is 1.
+will create 3 node pools (np1, np2 and np3).
* the node pool names are generated by combining a prefix and the node pool number. The prefix is set by the node_pool_name_prefix parameter and has a default value of "np". The node pool names will therefore have names like np-1, np-2 and so on.
* the Kubernetes version is set automatically to the same version as the cluster.
* the image used is an Oracle Linux image with the version specified. You can also specify your own image OCID. However, note that these 2 are mutually exclusive i.e. either use Operating System and version *_or_* specify the OCID of your custom image.
-* the {uri-oci-shape}[shape] of the worker node determines the compute capacity of the worker node. By default, this is VM.Standard2.1, giving you 1 OCPU, 15GB Memory, 1 Gbps in network bandwidth and 2 VNICs.
+* the {uri-oci-shape}[shape] of the worker node determines the compute capacity of the worker node. This is controlled by the first element in the tuple for the node pool. By default, this is VM.Standard2.1, giving you 1 OCPU, 15GB Memory, 1 Gbps in network bandwidth and 2 VNICs e.g.
+
+----
+node_pools = {
+"np1" = ["VM.Standard2.1", 1]
+"np2" = ["BM.Standard2.52", 1]
+}
+----
+
+In the above example, workers in node pool np1 will all have a shape of VM.Standard2.1 whereas workers in node pool np2 will all have a shape of BM.Standard2.52.
* the subnets the node pool will span i.e. the subnets where the worker nodes will be created. See below for more explanation.
-* the number of worker nodes per subnet that will be created for this node pool. This is controlled by the node_pool_quantity_per_subnet parameter.
+* the number of worker nodes per subnet that will be created for this node pool. This is controlled by the 2nd element in the tuple for each node pool e.g.
+
+----
+node_pools = {
+"np1" = ["VM.Standard2.1", 1]
+"np2" = ["VM.Standard2.2", 3]
+}
+----
+
+will create a node pool (np1) with 1 worker node per subnet and a 2nd node pool (np2) with 3 worker nodes per subnet.
* the public ssh key used is the same as that used for the bastion host.
@@ -226,31 +258,68 @@ When using Topology 3, this ensures that the node pool spans all 3 worker subnet
==== Number of Node Pools
-The number of node pools created is controlled by the node_pools parameter. The diagram below shows a cluster with 1 node pool and 1 worker node per subnet using topology 3 i.e. node_pools=1, node_pool_quantity_per_subnet=1 and nodepool_topology=3.
+The number, shape and size of the node pools created are controlled by the entries in the node_pools parameter. The diagram below shows a cluster with 1 node pool and 1 worker node per subnet using topology 3 i.e.
+
+----
+node_pools = {
+"np1" = ["VM.Standard2.1", 1]
+}
+nodepool_topology=3
+----
+
.1 Node Pool with 1 worker node per subnet (other details removed for convenience)
image::images/np311.png[align="center"]
{bl}
-You can increase the number of node pools by setting node_pools=5, node_pool_quantity_per_subnet=1 and nodepool_topology=3.
+You can increase the number of node pools by adding more entries to the node_pools parameter e.g.
+
+----
+node_pools = {
+"np1" = ["VM.Standard2.1", 1]
+"np2" = ["VM.Standard2.1", 1]
+"np3" = ["VM.Standard2.1", 1]
+"np4" = ["VM.Standard2.1", 1]
+"np5" = ["VM.Standard2.1", 1]
+}
+nodepool_topology=3
+----
.5 Node Pools with 1 worker node per subnet
image::images/np351.png[align="center"]
==== Worker Nodes per subnet
-You can also change the number of worker nodes per subnet. For example, setting the node_pools=1 and node_pool_quantity_per_subnet=2 and nodepool_topology=3 will result in the following cluster:
+You can also change the number of worker nodes per subnet e.g.
+
+----
+node_pools = {
+"np1" = ["VM.Standard2.1", 2]
+}
+nodepool_topology=3
+----
+
+will result in the following cluster:
.1 Node Pool with 2 worker nodes per subnet
image::images/np312.png[align="center"]
{bl}
-Similarly, you can change both node pools and number of worker nodes per subnet:
+Similarly, you can support mixed workloads by adding node pools with different shapes and numbers of worker nodes per subnet:
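A plausible sketch of such a mixed-workload configuration (shapes and counts are illustrative only, not taken from the source) would be:

----
node_pools = {
  "np1" = ["VM.Standard2.1", 1]    # VM pool, 1 worker node per subnet
  "np2" = ["BM.Standard2.52", 2]   # bare metal pool, 2 worker nodes per subnet
}
nodepool_topology=3
----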