
Commit 1207679

feat: Introduce support for managing the default worker pool (#499)
Co-authored-by: Vincent Burckhardt <[email protected]>
1 parent 0b9ab5e · commit 1207679

19 files changed: 145 additions, 68 deletions

README.md

Lines changed: 36 additions & 1 deletion
````diff
@@ -105,9 +105,42 @@ module "ocp_base" {
       }
     ]
   }
+  worker_pools = [
+    {
+      subnet_prefix    = "default"
+      pool_name        = "default"
+      machine_type     = "bx2.4x16"
+      workers_per_zone = 2
+    }
+  ]
 }
 ```

+### Default Worker Pool
+
+You can manage the default worker pool with Terraform and make changes to it through this module. This option is enabled by default. Under the hood, the default worker pool is imported as an `ibm_container_vpc_worker_pool` resource. Advanced users can opt out by setting the `import_default_worker_pool_on_create` parameter to `false`. For most use cases it is recommended to keep this variable set to `true`.
+
+- **Important**: If the default worker pool is handled as a stand-alone `ibm_container_vpc_worker_pool` (which is the default behavior), you must manually remove all worker pools from the Terraform state before running a `terraform destroy` command on the module, to avoid an error caused by this [limitation](https://cloud.ibm.com/docs/containers?topic=containers-faqs#smallest_cluster).
+- Terraform CLI. Example for a cluster with 2 worker pools: one named 'default' and the other named 'secondarypool'.
+```sh
+$ terraform state list | grep ibm_container_vpc_worker_pool
+> module.ocp_base.data.ibm_container_vpc_worker_pool.all_pools["default"]
+> module.ocp_base.data.ibm_container_vpc_worker_pool.all_pools["secondarypool"]
+> ...
+
+$ terraform state rm "module.ocp_base.ibm_container_vpc_worker_pool.all_pools[\"default\"]"
+$ terraform state rm "module.ocp_base.ibm_container_vpc_worker_pool.all_pools[\"secondarypool\"]"
+$ ...
+```
+- Schematics. Example for a cluster with 2 worker pools: one named 'default' and the other named 'secondarypool'.
+```sh
+$ ibmcloud schematics workspace state rm --id <workspace_id> --address "module.ocp_base.ibm_container_vpc_worker_pool.all_pools[\"default\"]"
+$ ibmcloud schematics workspace state rm --id <workspace_id> --address "module.ocp_base.ibm_container_vpc_worker_pool.all_pools[\"secondarypool\"]"
+$ ...
+```
+
+- **Important**: If the default worker pool is handled as a stand-alone `ibm_container_vpc_worker_pool` (which is the default behavior) and you want to make a change to it that requires re-creating the default pool (for example, a change to the worker node `operating_system`), set the variable `allow_default_worker_pool_replacement` to `true`, run the apply, and then set it back to `false`. This is ONLY needed for changes that require re-creating the existing worker nodes in the default pool; it is NOT needed for scenarios such as changing the number of workers in the default worker pool. This approach is required because of a limitation in the Terraform provider that may be lifted in the future.
+
 ### Advanced security group options

 The Terraform module provides options to attach additional security groups to the worker nodes, VPE, and load balancer associated with the cluster.
@@ -195,7 +228,7 @@ Optionally, you need the following permissions to attach Access Management tags
 | Name | Version |
 |------|---------|
 | <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3.0 |
-| <a name="requirement_ibm"></a> [ibm](#requirement\_ibm) | >= 1.67.0, < 2.0.0 |
+| <a name="requirement_ibm"></a> [ibm](#requirement\_ibm) | >= 1.68.0, < 2.0.0 |
 | <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.16.1, < 3.0.0 |
 | <a name="requirement_null"></a> [null](#requirement\_null) | >= 3.2.1, < 4.0.0 |

@@ -245,6 +278,7 @@ Optionally, you need the following permissions to attach Access Management tags
 | <a name="input_additional_lb_security_group_ids"></a> [additional\_lb\_security\_group\_ids](#input\_additional\_lb\_security\_group\_ids) | Additional security groups to add to the load balancers associated with the cluster. Ensure that the number\_of\_lbs is set to the number of LBs associated with the cluster. This comes in addition to the IBM maintained security group. | `list(string)` | `[]` | no |
 | <a name="input_additional_vpe_security_group_ids"></a> [additional\_vpe\_security\_group\_ids](#input\_additional\_vpe\_security\_group\_ids) | Additional security groups to add to all existing load balancers. This comes in addition to the IBM maintained security group. | <pre>object({<br> master = optional(list(string), [])<br> registry = optional(list(string), [])<br> api = optional(list(string), [])<br> })</pre> | `{}` | no |
 | <a name="input_addons"></a> [addons](#input\_addons) | Map of OCP cluster add-on versions to install (NOTE: The 'vpc-block-csi-driver' add-on is installed by default for VPC clusters and 'ibm-storage-operator' is installed by default in OCP 4.15 and later, however you can explicitly specify it here if you wish to choose a later version than the default one). For full list of all supported add-ons and versions, see https://cloud.ibm.com/docs/containers?topic=containers-supported-cluster-addon-versions | <pre>object({<br> debug-tool = optional(string)<br> image-key-synchronizer = optional(string)<br> openshift-data-foundation = optional(string)<br> vpc-file-csi-driver = optional(string)<br> static-route = optional(string)<br> cluster-autoscaler = optional(string)<br> vpc-block-csi-driver = optional(string)<br> ibm-storage-operator = optional(string)<br> })</pre> | `{}` | no |
+| <a name="input_allow_default_worker_pool_replacement"></a> [allow\_default\_worker\_pool\_replacement](#input\_allow\_default\_worker\_pool\_replacement) | (Advanced users) Set to true to allow the module to recreate a default worker pool. Only use in the case where you are getting an error indicating that the default worker pool cannot be replaced on apply. Once the default worker pool is handled as a stand-alone ibm\_container\_vpc\_worker\_pool, if you wish to make any change to the default worker pool which requires the re-creation of the default pool, set this variable to true. | `bool` | `false` | no |
 | <a name="input_attach_ibm_managed_security_group"></a> [attach\_ibm\_managed\_security\_group](#input\_attach\_ibm\_managed\_security\_group) | Specify whether to attach the IBM-defined default security group (whose name is kube-<clusterid>) to all worker nodes. Only applicable if custom\_security\_group\_ids is set. | `bool` | `true` | no |
 | <a name="input_cluster_config_endpoint_type"></a> [cluster\_config\_endpoint\_type](#input\_cluster\_config\_endpoint\_type) | Specify which type of endpoint to use for cluster config access: 'default', 'private', 'vpe', 'link'. 'default' value will use the default endpoint of the cluster. | `string` | `"default"` | no |
 | <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | The name that will be assigned to the provisioned cluster | `string` | n/a | yes |
@@ -257,6 +291,7 @@ Optionally, you need the following permissions to attach Access Management tags
 | <a name="input_existing_cos_id"></a> [existing\_cos\_id](#input\_existing\_cos\_id) | The COS id of an already existing COS instance to use for OpenShift internal registry storage. Only required if 'enable\_registry\_storage' and 'use\_existing\_cos' are true | `string` | `null` | no |
 | <a name="input_force_delete_storage"></a> [force\_delete\_storage](#input\_force\_delete\_storage) | Flag indicating whether or not to delete attached storage when destroying the cluster - Default: false | `bool` | `false` | no |
 | <a name="input_ignore_worker_pool_size_changes"></a> [ignore\_worker\_pool\_size\_changes](#input\_ignore\_worker\_pool\_size\_changes) | Enable if using worker autoscaling. Stops Terraform managing worker count | `bool` | `false` | no |
+| <a name="input_import_default_worker_pool_on_create"></a> [import\_default\_worker\_pool\_on\_create](#input\_import\_default\_worker\_pool\_on\_create) | (Advanced users) Whether to handle the default worker pool as a stand-alone ibm\_container\_vpc\_worker\_pool resource on cluster creation. Only set to false if you understand the implications of managing the default worker pool as part of the cluster resource. Set to true to import the default worker pool as a separate resource. Set to false to manage the default worker pool as part of the cluster resource. | `bool` | `true` | no |
 | <a name="input_kms_config"></a> [kms\_config](#input\_kms\_config) | Use to attach a KMS instance to the cluster. If account\_id is not provided, defaults to the account in use. | <pre>object({<br> crk_id = string<br> instance_id = string<br> private_endpoint = optional(bool, true) # defaults to true<br> account_id = optional(string) # To attach KMS instance from another account<br> wait_for_apply = optional(bool, true) # defaults to true so terraform will wait until the KMS is applied to the master, ready and deployed<br> })</pre> | `null` | no |
 | <a name="input_manage_all_addons"></a> [manage\_all\_addons](#input\_manage\_all\_addons) | Instructs Terraform to manage all cluster addons, even if addons were installed outside of the module. If set to 'true' this module will destroy any addons that were installed by other sources. | `bool` | `false` | no |
 | <a name="input_number_of_lbs"></a> [number\_of\_lbs](#input\_number\_of\_lbs) | The number of LBs to associate the additional\_lb\_security\_group\_names security group with. | `number` | `1` | no |
````

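For illustration, here is a minimal sketch of how a consumer might combine the two inputs introduced by this commit with the README usage example above. The module source, version, IDs, and the `local.cluster_vpc_subnets` reference are placeholders following the README pattern, not part of this commit.

```hcl
module "ocp_base" {
  # Placeholder source/version and IDs, following the README usage example.
  source            = "terraform-ibm-modules/base-ocp-vpc/ibm"
  version           = "X.X.X"
  cluster_name      = "example-cluster-name"
  resource_group_id = "xxXXxxxxxXxXXXXxxXXXxxxxxxXXXXXXX"
  region            = "us-south"
  vpc_id            = "79cxxxx-xxxx-xxxx-xxxx-xxxxxXX8667"
  vpc_subnets       = local.cluster_vpc_subnets # assumed to be defined elsewhere

  # Default behavior: the default worker pool is imported and managed as a
  # stand-alone ibm_container_vpc_worker_pool resource.
  import_default_worker_pool_on_create = true

  # Flip to true only for an apply that must re-create the default pool
  # (for example, an operating_system change), then set it back to false.
  allow_default_worker_pool_replacement = false

  worker_pools = [
    {
      subnet_prefix    = "default"
      pool_name        = "default"
      machine_type     = "bx2.4x16"
      workers_per_zone = 2
    }
  ]
}
```

The toggle-then-revert flow for `allow_default_worker_pool_replacement` mirrors the README guidance in the diff above: it is only needed when an apply would otherwise fail because the default pool cannot be replaced in place.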
common-dev-assets

examples/add_rules_to_sg/version.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ terraform {
   required_providers {
     ibm = {
       source  = "IBM-Cloud/ibm"
-      version = "1.67.0"
+      version = "1.68.0"
     }
   }
 }
```

examples/advanced/main.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -109,7 +109,7 @@ locals {
     {
       subnet_prefix     = "zone-1"
       pool_name         = "default" # ibm_container_vpc_cluster automatically names default pool "default" (See https://github.com/IBM-Cloud/terraform-provider-ibm/issues/2849)
-      machine_type      = "bx2.4x16"
+      machine_type      = "mx2.4x32"
       workers_per_zone  = 1
       enableAutoscaling = true
       minSize           = 1
```

examples/advanced/version.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ terraform {
   required_providers {
     ibm = {
       source  = "IBM-Cloud/ibm"
-      version = ">= 1.67.0"
+      version = ">= 1.68.0"
     }
     kubernetes = {
       source  = "hashicorp/kubernetes"
```

examples/basic/version.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ terraform {
   required_providers {
     ibm = {
       source  = "IBM-Cloud/ibm"
-      version = "1.67.0"
+      version = "1.68.0"
     }
   }
 }
```

examples/cross_kms_support/version.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ terraform {
   required_providers {
     ibm = {
       source  = "IBM-Cloud/ibm"
-      version = ">= 1.67.0"
+      version = ">= 1.68.0"
     }
   }
 }
```

examples/custom_sg/version.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ terraform {
   required_providers {
     ibm = {
       source  = "IBM-Cloud/ibm"
-      version = ">= 1.67.0"
+      version = ">= 1.68.0"
     }
   }
 }
```

examples/fscloud/main.tf

Lines changed: 16 additions & 15 deletions
```diff
@@ -251,21 +251,22 @@ module "custom_sg" {
 }

 module "ocp_fscloud" {
-  source                           = "../../modules/fscloud"
-  cluster_name                     = var.prefix
-  resource_group_id                = module.resource_group.resource_group_id
-  region                           = var.region
-  force_delete_storage             = true
-  vpc_id                           = module.vpc.vpc_id
-  vpc_subnets                      = local.cluster_vpc_subnets
-  existing_cos_id                  = module.cos_fscloud.cos_instance_id
-  worker_pools                     = local.worker_pools
-  tags                             = var.resource_tags
-  access_tags                      = var.access_tags
-  ocp_version                      = var.ocp_version
-  additional_lb_security_group_ids = [module.custom_sg["custom-lb-sg"].security_group_id]
-  use_private_endpoint             = true
-  ocp_entitlement                  = var.ocp_entitlement
+  source                                = "../../modules/fscloud"
+  cluster_name                          = var.prefix
+  resource_group_id                     = module.resource_group.resource_group_id
+  region                                = var.region
+  force_delete_storage                  = true
+  vpc_id                                = module.vpc.vpc_id
+  vpc_subnets                           = local.cluster_vpc_subnets
+  existing_cos_id                       = module.cos_fscloud.cos_instance_id
+  worker_pools                          = local.worker_pools
+  tags                                  = var.resource_tags
+  access_tags                           = var.access_tags
+  ocp_version                           = var.ocp_version
+  import_default_worker_pool_on_create  = false
+  additional_lb_security_group_ids      = [module.custom_sg["custom-lb-sg"].security_group_id]
+  use_private_endpoint                  = true
+  ocp_entitlement                       = var.ocp_entitlement
   kms_config = {
     instance_id = var.hpcs_instance_guid
     crk_id      = local.cluster_hpcs_cluster_key_id
```
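The fscloud example above opts out of the new behavior by setting `import_default_worker_pool_on_create = false`. Based purely on the README input table earlier in this diff, the two inputs added by this commit would be declared roughly as follows; this is a sketch with abbreviated descriptions, and the commit's actual variables.tf changes are not shown in this excerpt.

```hcl
# Sketch reconstructed from the README input table (names, types, and defaults only).
variable "import_default_worker_pool_on_create" {
  type        = bool
  description = "(Advanced users) Whether to handle the default worker pool as a stand-alone ibm_container_vpc_worker_pool resource on cluster creation."
  default     = true
}

variable "allow_default_worker_pool_replacement" {
  type        = bool
  description = "(Advanced users) Set to true to allow the module to recreate the default worker pool when an apply would otherwise fail because it cannot be replaced."
  default     = false
}
```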

examples/fscloud/version.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ terraform {
   required_providers {
     ibm = {
       source  = "ibm-cloud/ibm"
-      version = ">= 1.67.0"
+      version = ">= 1.68.0"
     }
     logdna = {
       source  = "logdna/logdna"
```
