Commit 17585fc

feat: Added new submodule for managing workers only. See [worker_pool](https://github.com/terraform-ibm-modules/terraform-ibm-base-ocp-vpc/tree/main/modules/worker_pool) (#819)
1 parent: 33e757a

12 files changed: +384 −125 lines
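
The commit adds a worker_pool submodule that manages only worker pools for an existing cluster. As a minimal sketch of consuming it, mirroring the wiring this commit adds in examples/advanced/main.tf (the source path and the var.* references are placeholders; the pool object shape is copied from that example):

module "worker_pool" {
  source = "./modules/worker-pool" # placeholder: local path as used inside this repo; adjust for your own source

  resource_group_id = var.resource_group_id # placeholder: resource group owning the cluster
  vpc_id            = var.vpc_id            # placeholder: VPC hosting the cluster
  cluster_id        = var.cluster_id        # placeholder: ID of an existing OCP VPC cluster
  vpc_subnets       = var.vpc_subnets       # map of subnet objects (id/zone) keyed by subnet prefix, e.g. "zone-1"

  worker_pools = [
    {
      subnet_prefix    = "zone-1"
      pool_name        = "workerpool"
      machine_type     = "bx2.4x16"
      operating_system = "REDHAT_8_64"
      workers_per_zone = 2
    }
  ]
}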

README.md

Lines changed: 2 additions & 3 deletions
@@ -28,6 +28,7 @@ Optionally, the module supports advanced security group management for the worke
 * [Submodules](./modules)
     * [fscloud](./modules/fscloud)
     * [kube-audit](./modules/kube-audit)
+    * [worker-pool](./modules/worker-pool)
 * [Examples](./examples)
     * [2 MZR clusters in same VPC example](./examples/multiple_mzr_clusters)
     * [Advanced example (mzr, auto-scale, kms, taints)](./examples/advanced)
@@ -296,6 +297,7 @@ Optionally, you need the following permissions to attach Access Management tags
 | <a name="module_cbr_rule"></a> [cbr\_rule](#module\_cbr\_rule) | terraform-ibm-modules/cbr/ibm//modules/cbr-rule-module | 1.33.7 |
 | <a name="module_cos_instance"></a> [cos\_instance](#module\_cos\_instance) | terraform-ibm-modules/cos/ibm | 10.5.1 |
 | <a name="module_existing_secrets_manager_instance_parser"></a> [existing\_secrets\_manager\_instance\_parser](#module\_existing\_secrets\_manager\_instance\_parser) | terraform-ibm-modules/common-utilities/ibm//modules/crn-parser | 1.2.0 |
+| <a name="module_worker_pools"></a> [worker\_pools](#module\_worker\_pools) | ./modules/worker-pool | n/a |

 ### Resources

@@ -308,8 +310,6 @@ Optionally, you need the following permissions to attach Access Management tags
 | [ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_cluster) | resource |
 | [ibm_container_vpc_cluster.cluster](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_cluster) | resource |
 | [ibm_container_vpc_cluster.cluster_with_upgrade](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_cluster) | resource |
-| [ibm_container_vpc_worker_pool.autoscaling_pool](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_worker_pool) | resource |
-| [ibm_container_vpc_worker_pool.pool](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_worker_pool) | resource |
 | [ibm_iam_authorization_policy.ocp_secrets_manager_iam_auth_policy](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/iam_authorization_policy) | resource |
 | [ibm_resource_tag.cluster_access_tag](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/resource_tag) | resource |
 | [ibm_resource_tag.cos_access_tag](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/resource_tag) | resource |
@@ -322,7 +322,6 @@ Optionally, you need the following permissions to attach Access Management tags
 | [ibm_container_addons.existing_addons](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_addons) | data source |
 | [ibm_container_cluster_config.cluster_config](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_cluster_config) | data source |
 | [ibm_container_cluster_versions.cluster_versions](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_cluster_versions) | data source |
-| [ibm_container_vpc_worker_pool.all_pools](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_vpc_worker_pool) | data source |
 | [ibm_is_lbs.all_lbs](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/is_lbs) | data source |
 | [ibm_is_virtual_endpoint_gateway.api_vpe](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/is_virtual_endpoint_gateway) | data source |
 | [ibm_is_virtual_endpoint_gateway.master_vpe](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/is_virtual_endpoint_gateway) | data source |

examples/advanced/README.md

Lines changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@ The following resources are provisioned by this example:
 - A VPC with subnets across 3 zones.
 - A public gateway for all the three zones
 - A multi-zone (3 zone) KMS encrypted OCP VPC cluster, with worker pools in each zone.
+- An additional worker pool named `workerpool` is created and attached to the cluster using the `worker-pool` submodule.
 - Auto scaling enabled for the default worker pool.
 - Taints against the workers in zone-2 and zone-3.
 - Enable Kubernetes API server audit logs.

examples/advanced/main.tf

Lines changed: 22 additions & 0 deletions
@@ -152,6 +152,15 @@ locals {
       effect = "NoExecute"
     }]
   }
+  worker_pool = [
+    {
+      subnet_prefix    = "zone-1"
+      pool_name        = "workerpool"
+      machine_type     = "bx2.4x16"
+      operating_system = "REDHAT_8_64"
+      workers_per_zone = 2
+    }
+  ]
 }

 module "ocp_base" {
@@ -186,6 +195,19 @@ data "ibm_container_cluster_config" "cluster_config" {
   config_dir = "${path.module}/../../kubeconfig"
 }

+########################################################################################################################
+# Worker Pool
+########################################################################################################################
+
+module "worker_pool" {
+  source            = "../../modules/worker-pool"
+  resource_group_id = module.resource_group.resource_group_id
+  vpc_id            = ibm_is_vpc.vpc.id
+  cluster_id        = module.ocp_base.cluster_id
+  vpc_subnets       = local.cluster_vpc_subnets
+  worker_pools      = local.worker_pool
+}
+
 ########################################################################################################################
 # Kube Audit
 ########################################################################################################################
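
The example above passes only a minimal pool object. The removed worker pool code in the main.tf diff below reads several more optional attributes from each pool entry (secondary storage, labels, boot-volume KMS settings, extra security groups, or per-pool subnets instead of a subnet prefix), so a fuller entry might look roughly like the following sketch. The exact accepted fields and their defaults live in the submodule's variables.tf, which is not shown on this page, and every value here is a placeholder:

locals {
  worker_pool_full = [
    {
      pool_name         = "workerpool"
      subnet_prefix     = "zone-1" # or set vpc_subnets on the pool itself instead of a prefix
      machine_type      = "bx2.4x16"
      operating_system  = "REDHAT_8_64"
      workers_per_zone  = 2
      secondary_storage = null # a secondary storage flavor; null means no secondary storage
      labels            = { "dedicated" = "workload" }
      additional_security_group_ids = [] # extra security group IDs to attach to the pool's workers

      # null skips worker boot volume KMS encryption; kms_account_id is typically
      # only needed when the KMS instance lives in another account
      boot_volume_encryption_kms_config = {
        crk             = "<root-key-id>"
        kms_instance_id = "<kms-instance-guid>"
        kms_account_id  = null
      }
    }
  ]
}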

main.tf

Lines changed: 16 additions & 118 deletions
@@ -7,9 +7,6 @@
 locals {
   # ibm_container_vpc_cluster automatically names default pool "default" (See https://github.com/IBM-Cloud/terraform-provider-ibm/issues/2849)
   default_pool = element([for pool in var.worker_pools : pool if pool.pool_name == "default"], 0)
-  # all_standalone_pools are the pools managed by a 'standalone' ibm_container_vpc_worker_pool resource
-  all_standalone_pools             = [for pool in var.worker_pools : pool if !var.ignore_worker_pool_size_changes]
-  all_standalone_autoscaling_pools = [for pool in var.worker_pools : pool if var.ignore_worker_pool_size_changes]

   default_ocp_version = "${data.ibm_container_cluster_versions.cluster_versions.default_openshift_version}_openshift"
   ocp_version         = var.ocp_version == null || var.ocp_version == "default" ? local.default_ocp_version : "${var.ocp_version}_openshift"
@@ -466,114 +463,15 @@ data "ibm_container_cluster_config" "cluster_config" {
   endpoint_type     = var.cluster_config_endpoint_type != "default" ? var.cluster_config_endpoint_type : null # null value represents default
 }

-##############################################################################
-# Worker Pools
-##############################################################################
-
-locals {
-  additional_pool_names = var.ignore_worker_pool_size_changes ? [for pool in local.all_standalone_autoscaling_pools : pool.pool_name] : [for pool in local.all_standalone_pools : pool.pool_name]
-  pool_names            = toset(flatten([["default"], local.additional_pool_names]))
-}
-
-data "ibm_container_vpc_worker_pool" "all_pools" {
-  depends_on       = [ibm_container_vpc_worker_pool.autoscaling_pool, ibm_container_vpc_worker_pool.pool]
-  for_each         = local.pool_names
-  cluster          = local.cluster_id
-  worker_pool_name = each.value
-}
-
-resource "ibm_container_vpc_worker_pool" "pool" {
-  for_each          = { for pool in local.all_standalone_pools : pool.pool_name => pool }
-  vpc_id            = var.vpc_id
-  resource_group_id = var.resource_group_id
-  cluster           = local.cluster_id
-  worker_pool_name  = each.value.pool_name
-  flavor            = each.value.machine_type
-  operating_system  = each.value.operating_system
-  worker_count      = each.value.workers_per_zone
-  secondary_storage = each.value.secondary_storage
-  entitlement       = var.ocp_entitlement
-  labels            = each.value.labels
-  crk               = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.crk
-  kms_instance_id   = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.kms_instance_id
-  kms_account_id    = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.kms_account_id

-  security_groups = each.value.additional_security_group_ids

-  dynamic "zones" {
-    for_each = each.value.subnet_prefix != null ? var.vpc_subnets[each.value.subnet_prefix] : each.value.vpc_subnets
-    content {
-      subnet_id = zones.value.id
-      name      = zones.value.zone
-    }
-  }

-  # Apply taints to worker pools i.e. all_standalone_pools
-  dynamic "taints" {
-    for_each = var.worker_pools_taints == null ? [] : concat(var.worker_pools_taints["all"], lookup(var.worker_pools_taints, each.value["pool_name"], []))
-    content {
-      effect = taints.value.effect
-      key    = taints.value.key
-      value  = taints.value.value
-    }
-  }

-  timeouts {
-    # Extend create and delete timeout to 2h
-    delete = "2h"
-    create = "2h"
-  }

-  # The default workerpool has to be imported as it will already exist on cluster create
-  import_on_create = each.value.pool_name == "default" ? var.allow_default_worker_pool_replacement ? null : true : null
-  orphan_on_delete = each.value.pool_name == "default" ? var.allow_default_worker_pool_replacement ? null : true : null
-}

-# copy of the pool resource above which ignores changes to the worker pool for use in autoscaling scenarios
-resource "ibm_container_vpc_worker_pool" "autoscaling_pool" {
-  for_each          = { for pool in local.all_standalone_autoscaling_pools : pool.pool_name => pool }
-  vpc_id            = var.vpc_id
-  resource_group_id = var.resource_group_id
-  cluster           = local.cluster_id
-  worker_pool_name  = each.value.pool_name
-  flavor            = each.value.machine_type
-  operating_system  = each.value.operating_system
-  worker_count      = each.value.workers_per_zone
-  secondary_storage = each.value.secondary_storage
-  entitlement       = var.ocp_entitlement
-  labels            = each.value.labels
-  crk               = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.crk
-  kms_instance_id   = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.kms_instance_id
-  kms_account_id    = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.kms_account_id

-  security_groups = each.value.additional_security_group_ids

-  lifecycle {
-    ignore_changes = [worker_count]
-  }

-  dynamic "zones" {
-    for_each = each.value.subnet_prefix != null ? var.vpc_subnets[each.value.subnet_prefix] : each.value.vpc_subnets
-    content {
-      subnet_id = zones.value.id
-      name      = zones.value.zone
-    }
-  }

-  # Apply taints to worker pools i.e. all_standalone_pools

-  dynamic "taints" {
-    for_each = var.worker_pools_taints == null ? [] : concat(var.worker_pools_taints["all"], lookup(var.worker_pools_taints, each.value["pool_name"], []))
-    content {
-      effect = taints.value.effect
-      key    = taints.value.key
-      value  = taints.value.value
-    }
-  }

-  # The default workerpool has to be imported as it will already exist on cluster create
-  import_on_create = each.value.pool_name == "default" ? var.allow_default_worker_pool_replacement ? null : true : null
-  orphan_on_delete = each.value.pool_name == "default" ? var.allow_default_worker_pool_replacement ? null : true : null
+module "worker_pools" {
+  source                                = "./modules/worker-pool"
+  vpc_id                                = var.vpc_id
+  resource_group_id                     = var.resource_group_id
+  cluster_id                            = local.cluster_id
+  vpc_subnets                           = var.vpc_subnets
+  worker_pools                          = var.worker_pools
+  ignore_worker_pool_size_changes       = var.ignore_worker_pool_size_changes
+  allow_default_worker_pool_replacement = var.allow_default_worker_pool_replacement
 }

 ##############################################################################
@@ -605,7 +503,7 @@ resource "null_resource" "confirm_network_healthy" {
   # Worker pool creation can start before the 'ibm_container_vpc_cluster' completes since there is no explicit
   # depends_on in 'ibm_container_vpc_worker_pool', just an implicit depends_on on the cluster ID. Cluster ID can exist before
   # 'ibm_container_vpc_cluster' completes, so hence need to add explicit depends on against 'ibm_container_vpc_cluster' here.
-  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool]
+  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]

   provisioner "local-exec" {
     command = "${path.module}/scripts/confirm_network_healthy.sh"
@@ -659,7 +557,7 @@ resource "ibm_container_addons" "addons" {
   # Worker pool creation can start before the 'ibm_container_vpc_cluster' completes since there is no explicit
   # depends_on in 'ibm_container_vpc_worker_pool', just an implicit depends_on on the cluster ID. Cluster ID can exist before
   # 'ibm_container_vpc_cluster' completes, so hence need to add explicit depends on against 'ibm_container_vpc_cluster' here.
-  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
+  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
   cluster           = local.cluster_id
   resource_group_id = var.resource_group_id

@@ -732,7 +630,7 @@ resource "kubernetes_config_map_v1_data" "set_autoscaling" {
 ##############################################################################

 data "ibm_is_lbs" "all_lbs" {
-  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
+  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
   count      = length(var.additional_lb_security_group_ids) > 0 ? 1 : 0
 }

@@ -768,19 +666,19 @@ locals {

 data "ibm_is_virtual_endpoint_gateway" "master_vpe" {
   count      = length(var.additional_vpe_security_group_ids["master"])
-  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
+  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
   name       = local.vpes_to_attach_to_sg["master"]
 }

 data "ibm_is_virtual_endpoint_gateway" "api_vpe" {
   count      = length(var.additional_vpe_security_group_ids["api"])
-  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
+  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
   name       = local.vpes_to_attach_to_sg["api"]
 }

 data "ibm_is_virtual_endpoint_gateway" "registry_vpe" {
   count      = length(var.additional_vpe_security_group_ids["registry"])
-  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
+  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
   name       = local.vpes_to_attach_to_sg["registry"]
 }

@@ -872,7 +770,7 @@ module "existing_secrets_manager_instance_parser" {

 resource "ibm_iam_authorization_policy" "ocp_secrets_manager_iam_auth_policy" {
   count                       = var.enable_secrets_manager_integration && !var.skip_ocp_secrets_manager_iam_auth_policy ? 1 : 0
-  depends_on                  = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool]
+  depends_on                  = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]
   source_service_name         = "containers-kubernetes"
   source_resource_instance_id = local.cluster_id
   target_service_name         = "secrets-manager"
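
One upgrade consideration that follows from this refactor: the standalone ibm_container_vpc_worker_pool resources now live inside module.worker_pools, so existing state entries must move with them to avoid a destroy-and-recreate plan. Whether this commit ships such migration logic is not visible on this page; if it does not, the move could be expressed with Terraform 1.1+ moved blocks along these lines (a sketch that assumes the submodule keeps the original resource names pool and autoscaling_pool):

# Hypothetical moved.tf at the root of this module: redirects the old resource
# addresses to their new location inside the worker-pool submodule. A whole-resource
# move covers every for_each instance, so no pool keys are needed.
moved {
  from = ibm_container_vpc_worker_pool.pool
  to   = module.worker_pools.ibm_container_vpc_worker_pool.pool
}

moved {
  from = ibm_container_vpc_worker_pool.autoscaling_pool
  to   = module.worker_pools.ibm_container_vpc_worker_pool.autoscaling_pool
}

The same effect can be achieved manually with terraform state mv on each pool instance.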
