Merged
Changes from 33 commits
Commits
36 commits
e881082
feat: add support for install deps
Aashiq-J Nov 14, 2025
9e40a6f
test
Aashiq-J Nov 14, 2025
733a0a7
add variables
Aashiq-J Nov 17, 2025
647d147
Merge branch 'main' of https://github.com/terraform-ibm-modules/terra…
Aashiq-J Nov 17, 2025
ce8cf9e
Merge branch 'main' into install-deps
Aashiq-J Nov 18, 2025
c94e876
update test
Aashiq-J Nov 18, 2025
b0c4803
Merge branch 'main' into install-deps
Aashiq-J Nov 24, 2025
c8f9b34
fix test
Aashiq-J Nov 24, 2025
6d4f942
udpate submodule
Aashiq-J Nov 24, 2025
ba81ae2
update readme
Aashiq-J Nov 24, 2025
612138b
update test
Aashiq-J Nov 24, 2025
9b40676
update script
Aashiq-J Nov 25, 2025
bb310b0
update test
Aashiq-J Nov 25, 2025
f577343
review changes
Aashiq-J Nov 26, 2025
febc4c3
fix
Aashiq-J Nov 26, 2025
05221ac
Merge branch 'main' of https://github.com/terraform-ibm-modules/terra…
Aashiq-J Nov 26, 2025
54d5a4a
update path
Aashiq-J Nov 26, 2025
072276b
test
Aashiq-J Nov 28, 2025
94438e6
test
Aashiq-J Nov 28, 2025
6970512
test
Aashiq-J Nov 28, 2025
f7a4f4c
test
Aashiq-J Nov 28, 2025
68d9001
update
Aashiq-J Dec 1, 2025
f859afd
Merge branch 'main' into install-deps
Aashiq-J Dec 1, 2025
0ca3be5
formatting
Aashiq-J Dec 1, 2025
a5bffb0
update readme
Aashiq-J Dec 1, 2025
9b8e3b6
add triggers
Aashiq-J Dec 1, 2025
b2700c1
review changes
Aashiq-J Dec 1, 2025
9617423
SKIP UPGRADE TEST
Aashiq-J Dec 1, 2025
16cd543
review change
Aashiq-J Dec 1, 2025
55efa37
update path
Aashiq-J Dec 1, 2025
d8ad15a
test kube-audit
Aashiq-J Dec 1, 2025
5cf6740
test path kube-audit
Aashiq-J Dec 1, 2025
c04417a
move script
Aashiq-J Dec 2, 2025
5cca92d
revert kube-audit changes
Aashiq-J Dec 2, 2025
02912da
remove data block
Aashiq-J Dec 2, 2025
546c18a
Merge branch 'main' into install-deps
Aashiq-J Dec 3, 2025
13 changes: 8 additions & 5 deletions README.md
@@ -15,11 +15,12 @@ Optionally, the module supports advanced security group management for the worke

### Before you begin

- Ensure that you have an up-to-date version of the [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started).
- Ensure that you have an up-to-date version of the [IBM Cloud Kubernetes service CLI](https://cloud.ibm.com/docs/containers?topic=containers-kubernetes-service-cli).
- Ensure that you have an up-to-date version of the [IBM Cloud VPC Infrastructure service CLI](https://cloud.ibm.com/docs/vpc?topic=vpc-vpc-reference). Only required if providing additional security groups with the `var.additional_lb_security_group_ids`.
- Ensure that you have an up-to-date version of the [jq](https://jqlang.github.io/jq).
- Ensure that you have an up-to-date version of the [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).
- Ensure that you have an up-to-date version of [curl](https://curl.se/docs/manpage.html).
- Ensure that you have an up-to-date version of [tar](https://www.gnu.org/software/tar/).
- [OPTIONAL] Ensure that you have an up-to-date version of [jq](https://jqlang.github.io/jq).
- [OPTIONAL] Ensure that you have an up-to-date version of [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).

By default, the module automatically downloads the required dependencies (`kubectl` and `jq`) if they are not already installed. You can disable this behavior by setting `install_required_binaries` to `false`. When enabled, the module fetches the binaries from their official online sources, which requires public internet access.
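
As a minimal sketch of how a consumer might opt out of the automatic installation (the module source and version shown here are placeholders rather than values taken from this README), assuming `kubectl` and `jq` are already available on the runtime:

```hcl
module "ocp_base" {
  # Placeholder source and version: substitute the registry source and version pin your project uses.
  source  = "terraform-ibm-modules/base-ocp-vpc/ibm"
  version = "X.X.X"

  # ...other required inputs (cluster name, resource group, VPC and subnet IDs, and so on)...

  # Skip the dependency-install script, for example on restricted runtimes
  # where kubectl and jq are already installed and on the PATH.
  install_required_binaries = false
}
```

Leaving the variable at its default of `true` keeps the behavior described above, installing any missing binaries to /tmp.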

<!-- Below content is automatically populated via pre-commit hook -->
<!-- BEGIN OVERVIEW HOOK -->
@@ -323,6 +324,7 @@ Optionally, you need the following permissions to attach Access Management tags
| [kubernetes_config_map_v1_data.set_autoscaling](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map_v1_data) | resource |
| [null_resource.config_map_status](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.confirm_network_healthy](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.install_required_binaries](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.ocp_console_management](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [time_sleep.wait_for_auth_policy](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep) | resource |
| [ibm_container_addons.existing_addons](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_addons) | data source |
@@ -359,6 +361,7 @@ Optionally, you need the following permissions to attach Access Management tags
| <a name="input_existing_secrets_manager_instance_crn"></a> [existing\_secrets\_manager\_instance\_crn](#input\_existing\_secrets\_manager\_instance\_crn) | CRN of the Secrets Manager instance where Ingress certificate secrets are stored. If 'enable\_secrets\_manager\_integration' is set to true then this value is required. | `string` | `null` | no |
| <a name="input_force_delete_storage"></a> [force\_delete\_storage](#input\_force\_delete\_storage) | Flag indicating whether or not to delete attached storage when destroying the cluster - Default: false | `bool` | `false` | no |
| <a name="input_ignore_worker_pool_size_changes"></a> [ignore\_worker\_pool\_size\_changes](#input\_ignore\_worker\_pool\_size\_changes) | Enable if using worker autoscaling. Stops Terraform managing worker count | `bool` | `false` | no |
| <a name="input_install_required_binaries"></a> [install\_required\_binaries](#input\_install\_required\_binaries) | When set to true, a script will run to check if `kubectl` and `jq` exist on the runtime and if not attempt to download them from the public internet and install them to /tmp. Set to false to skip running this script. | `bool` | `true` | no |
| <a name="input_kms_config"></a> [kms\_config](#input\_kms\_config) | Use to attach a KMS instance to the cluster. If account\_id is not provided, defaults to the account in use. | <pre>object({<br/> crk_id = string<br/> instance_id = string<br/> private_endpoint = optional(bool, true) # defaults to true<br/> account_id = optional(string) # To attach KMS instance from another account<br/> wait_for_apply = optional(bool, true) # defaults to true so terraform will wait until the KMS is applied to the master, ready and deployed<br/> })</pre> | `null` | no |
| <a name="input_manage_all_addons"></a> [manage\_all\_addons](#input\_manage\_all\_addons) | Instructs Terraform to manage all cluster addons, even if addons were installed outside of the module. If set to 'true' this module destroys any addons that were installed by other sources. | `bool` | `false` | no |
| <a name="input_number_of_lbs"></a> [number\_of\_lbs](#input\_number\_of\_lbs) | The number of LBs to associated the `additional_lb_security_group_names` security group with. | `number` | `1` | no |
1 change: 0 additions & 1 deletion examples/advanced/main.tf
@@ -219,7 +219,6 @@ module "kube_audit" {
cluster_resource_group_id = module.resource_group.resource_group_id
audit_log_policy = "WriteRequestBodies"
region = var.region
ibmcloud_api_key = var.ibmcloud_api_key
}


39 changes: 32 additions & 7 deletions main.tf
@@ -49,6 +49,8 @@ locals {

# for versions older than 4.15, this value must be null, or provider gives error
disable_outbound_traffic_protection = startswith(local.ocp_version, "4.14") ? null : var.disable_outbound_traffic_protection

binaries_path = "/tmp"
}

# Local block to verify validations for OCP AI Addon.
@@ -101,6 +103,20 @@ locals {
default_wp_validation = local.rhcos_check ? true : tobool("If RHCOS is used with this cluster, the default worker pool should be created with RHCOS.")
}

resource "null_resource" "install_required_binaries" {
count = var.install_required_binaries && (var.verify_worker_network_readiness || var.enable_ocp_console != null || lookup(var.addons, "cluster-autoscaler", null) != null) ? 1 : 0
triggers = {
verify_worker_network_readiness = var.verify_worker_network_readiness
cluster_autoscaler = lookup(var.addons, "cluster-autoscaler", null) != null
enable_ocp_console = var.enable_ocp_console
}
provisioner "local-exec" {
# Using the script from the kube-audit module to avoid code duplication.
command = "${path.module}/modules/kube-audit/scripts/install-binaries.sh ${local.binaries_path}"
interpreter = ["/bin/bash", "-c"]
}
}

# Lookup the current default kube version
data "ibm_container_cluster_versions" "cluster_versions" {
resource_group_id = var.resource_group_id
@@ -478,10 +494,14 @@ resource "null_resource" "confirm_network_healthy" {
# Worker pool creation can start before the 'ibm_container_vpc_cluster' completes since there is no explicit
# depends_on in 'ibm_container_vpc_worker_pool', just an implicit depends_on on the cluster ID. Cluster ID can exist before
# 'ibm_container_vpc_cluster' completes, so hence need to add explicit depends on against 'ibm_container_vpc_cluster' here.
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]
depends_on = [null_resource.install_required_binaries, ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]

triggers = {
verify_worker_network_readiness = var.verify_worker_network_readiness
}

provisioner "local-exec" {
command = "${path.module}/scripts/confirm_network_healthy.sh"
command = "${path.module}/scripts/confirm_network_healthy.sh ${local.binaries_path}"
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = data.ibm_container_cluster_config.cluster_config[0].config_file_path
@@ -494,9 +514,12 @@ resource "null_resource" "confirm_network_healthy" {
##############################################################################
resource "null_resource" "ocp_console_management" {
count = var.enable_ocp_console != null ? 1 : 0
depends_on = [null_resource.confirm_network_healthy]
depends_on = [null_resource.install_required_binaries, null_resource.confirm_network_healthy]
triggers = {
enable_ocp_console = var.enable_ocp_console
}
provisioner "local-exec" {
command = "${path.module}/scripts/enable_disable_ocp_console.sh"
command = "${path.module}/scripts/enable_disable_ocp_console.sh ${local.binaries_path}"
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = data.ibm_container_cluster_config.cluster_config[0].config_file_path
@@ -568,10 +591,13 @@ locals {

resource "null_resource" "config_map_status" {
count = lookup(var.addons, "cluster-autoscaler", null) != null ? 1 : 0
depends_on = [ibm_container_addons.addons]
depends_on = [null_resource.install_required_binaries, ibm_container_addons.addons]

triggers = {
cluster_autoscaler = lookup(var.addons, "cluster-autoscaler", null) != null
}
provisioner "local-exec" {
command = "${path.module}/scripts/get_config_map_status.sh"
command = "${path.module}/scripts/get_config_map_status.sh ${local.binaries_path}"
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = data.ibm_container_cluster_config.cluster_config[0].config_file_path
@@ -759,7 +785,6 @@ resource "time_sleep" "wait_for_auth_policy" {
create_duration = "30s"
}


resource "ibm_container_ingress_instance" "instance" {
count = var.enable_secrets_manager_integration ? 1 : 0
depends_on = [time_sleep.wait_for_auth_policy]
4 changes: 3 additions & 1 deletion modules/kube-audit/README.md
@@ -70,11 +70,13 @@ No modules.
| Name | Type |
|------|------|
| [helm_release.kube_audit](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [null_resource.install_required_binaries](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.set_audit_log_policy](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.set_audit_webhook](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [time_sleep.wait_for_kube_audit](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep) | resource |
| [ibm_container_cluster_config.cluster_config](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_cluster_config) | data source |
| [ibm_container_vpc_cluster.cluster](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_vpc_cluster) | data source |
| [ibm_iam_auth_token.webhook_api_key_tokendata](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/iam_auth_token) | data source |

### Inputs

@@ -88,7 +90,7 @@ No modules.
| <a name="input_cluster_config_endpoint_type"></a> [cluster\_config\_endpoint\_type](#input\_cluster\_config\_endpoint\_type) | Specify which type of endpoint to use for for cluster config access: 'default', 'private', 'vpe', 'link'. 'default' value will use the default endpoint of the cluster. | `string` | `"default"` | no |
| <a name="input_cluster_id"></a> [cluster\_id](#input\_cluster\_id) | The ID of the cluster to deploy the log collection service in. | `string` | n/a | yes |
| <a name="input_cluster_resource_group_id"></a> [cluster\_resource\_group\_id](#input\_cluster\_resource\_group\_id) | The resource group ID of the cluster. | `string` | n/a | yes |
| <a name="input_ibmcloud_api_key"></a> [ibmcloud\_api\_key](#input\_ibmcloud\_api\_key) | The IBM Cloud api key to generate an IAM token. | `string` | n/a | yes |
| <a name="input_install_required_binaries"></a> [install\_required\_binaries](#input\_install\_required\_binaries) | When set to true, a script will run to check if `kubectl` and `jq` exist on the runtime and if not attempt to download them from the public internet and install them to /tmp. Set to false to skip running this script. | `bool` | `true` | no |
| <a name="input_region"></a> [region](#input\_region) | The IBM Cloud region where the cluster is provisioned. | `string` | n/a | yes |
| <a name="input_use_private_endpoint"></a> [use\_private\_endpoint](#input\_use\_private\_endpoint) | Set this to true to force all api calls to use the IBM Cloud private endpoints. | `bool` | `false` | no |
| <a name="input_wait_till"></a> [wait\_till](#input\_wait\_till) | To avoid long wait times when you run your Terraform code, you can specify the stage when you want Terraform to mark the cluster resource creation as completed. Depending on what stage you choose, the cluster creation might not be fully completed and continues to run in the background. However, your Terraform code can continue to run without waiting for the cluster to be fully created. Supported args are `MasterNodeReady`, `OneWorkerNodeReady`, `IngressReady` and `Normal` | `string` | `"IngressReady"` | no |
39 changes: 29 additions & 10 deletions modules/kube-audit/main.tf
@@ -1,3 +1,22 @@
locals {
binaries_path = "/tmp"
}

resource "null_resource" "install_required_binaries" {
count = var.install_required_binaries ? 1 : 0
triggers = {
audit_log_policy = var.audit_log_policy
audit_deployment_name = var.audit_deployment_name
audit_namespace = var.audit_namespace
audit_webhook_listener_image = var.audit_webhook_listener_image
audit_webhook_listener_image_tag_digest = var.audit_webhook_listener_image_tag_digest
}
provisioner "local-exec" {
command = "${path.module}/scripts/install-binaries.sh ${local.binaries_path}"
interpreter = ["/bin/bash", "-c"]
}
}

data "ibm_container_cluster_config" "cluster_config" {
cluster_name_id = var.cluster_id
config_dir = "${path.module}/kubeconfig"
@@ -19,11 +38,12 @@ locals {
}

resource "null_resource" "set_audit_log_policy" {
depends_on = [null_resource.install_required_binaries]
triggers = {
audit_log_policy = var.audit_log_policy
}
provisioner "local-exec" {
command = "${path.module}/scripts/set_audit_log_policy.sh ${var.audit_log_policy}"
command = "${path.module}/scripts/set_audit_log_policy.sh ${var.audit_log_policy} ${local.binaries_path}"
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = data.ibm_container_cluster_config.cluster_config.config_file_path
@@ -40,7 +60,7 @@ locals {
}

resource "helm_release" "kube_audit" {
depends_on = [null_resource.set_audit_log_policy, data.ibm_container_vpc_cluster.cluster]
depends_on = [null_resource.install_required_binaries, null_resource.set_audit_log_policy, data.ibm_container_vpc_cluster.cluster]
name = var.audit_deployment_name
chart = local.kube_audit_chart_location
timeout = 1200
@@ -72,7 +92,7 @@ resource "helm_release" "kube_audit" {
]

provisioner "local-exec" {
command = "${path.module}/scripts/confirm-rollout-status.sh ${var.audit_deployment_name} ${var.audit_namespace}"
command = "${path.module}/scripts/confirm-rollout-status.sh ${var.audit_deployment_name} ${var.audit_namespace} ${local.binaries_path}"
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = data.ibm_container_cluster_config.cluster_config.config_file_path
@@ -90,21 +110,20 @@ locals {
audit_server = "https://127.0.0.1:2040/api/v1/namespaces/${var.audit_namespace}/services/${var.audit_deployment_name}-service/proxy/post"
}

# see [issue](https://github.com/IBM-Cloud/terraform-provider-ibm/issues/6107)
# data "ibm_iam_auth_token" "webhook_api_key_tokendata" {
# depends_on = [data.ibm_container_cluster_config.cluster_config]
# }
data "ibm_iam_auth_token" "webhook_api_key_tokendata" {
depends_on = [time_sleep.wait_for_kube_audit]
}

resource "null_resource" "set_audit_webhook" {
depends_on = [time_sleep.wait_for_kube_audit]
depends_on = [null_resource.install_required_binaries]
triggers = {
audit_log_policy = var.audit_log_policy
}
provisioner "local-exec" {
command = "${path.module}/scripts/set_webhook.sh ${var.region} ${var.use_private_endpoint} ${var.cluster_config_endpoint_type} ${var.cluster_id} ${var.cluster_resource_group_id} ${var.audit_log_policy != "default" ? "verbose" : "default"}"
command = "${path.module}/scripts/set_webhook.sh ${var.region} ${var.use_private_endpoint} ${var.cluster_config_endpoint_type} ${var.cluster_id} ${var.cluster_resource_group_id} ${var.audit_log_policy != "default" ? "verbose" : "default"} ${local.binaries_path}"
interpreter = ["/bin/bash", "-c"]
environment = {
IAM_API_KEY = var.ibmcloud_api_key
IAM_TOKEN = sensitive(data.ibm_iam_auth_token.webhook_api_key_tokendata.iam_access_token)
AUDIT_SERVER = local.audit_server
CLIENT_CERT = data.ibm_container_cluster_config.cluster_config.admin_certificate
CLIENT_KEY = data.ibm_container_cluster_config.cluster_config.admin_key
3 changes: 3 additions & 0 deletions modules/kube-audit/scripts/confirm-rollout-status.sh
@@ -5,4 +5,7 @@ set -e
deployment=$1
namespace=$2

# The binaries downloaded by the install-binaries script are located in the /tmp directory.
export PATH=$PATH:${3:-"/tmp"}

kubectl rollout status deploy "${deployment}" -n "${namespace}" --timeout 30m
44 changes: 44 additions & 0 deletions modules/kube-audit/scripts/install-binaries.sh
@@ -0,0 +1,44 @@
#!/bin/bash

# This script is stored in the kube-audit module because modules cannot access
# scripts placed in the root module when they are invoked individually.
# Placing it here also avoids duplicating the install-binaries script across modules.

set -o errexit
set -o pipefail

DIRECTORY=${1:-"/tmp"}
# renovate: datasource=github-tags depName=terraform-ibm-modules/common-bash-library
TAG=v0.2.0

echo "Downloading common-bash-library version ${TAG}."

# download common-bash-library
curl --silent \
--connect-timeout 5 \
--max-time 10 \
--retry 3 \
--retry-delay 2 \
--retry-connrefused \
--fail \
--show-error \
--location \
--output "${DIRECTORY}/common-bash.tar.gz" \
"https://github.com/terraform-ibm-modules/common-bash-library/archive/refs/tags/$TAG.tar.gz"

mkdir -p "${DIRECTORY}/common-bash-library"
tar -xzf "${DIRECTORY}/common-bash.tar.gz" --strip-components=1 -C "${DIRECTORY}/common-bash-library"
rm -f "${DIRECTORY}/common-bash.tar.gz"

# The file doesn't exist at the time shellcheck runs, so this check is skipped.
# shellcheck disable=SC1091
source "${DIRECTORY}/common-bash-library/common/common.sh"

echo "Installing jq."
install_jq "latest" "${DIRECTORY}" "true"
echo "Installing kubectl."
install_kubectl "latest" "${DIRECTORY}" "true"

rm -rf "${DIRECTORY}/common-bash-library"

echo "Installation complete successfully"