Description
When running terragrunt destroy -var-file=./vars/dev.tfvars using the EKS module version 21.8.0, Terraform fails with an Invalid count argument error originating from eks-managed-node-group/main.tf.
The destroy operation cannot proceed due to dependencies within the module that rely on computed resource attributes, which are not yet known during plan/destroy.
This appears to be a regression or missing conditional handling in how count is computed for the following data sources:
- data.aws_partition.current
- data.aws_caller_identity.current
The issue occurs even though the environment is fully provisioned and operational.
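For context, this is Terraform's general restriction on count rather than anything EKS-specific. A minimal standalone sketch (not code from this module, using placeholder resources) triggers the same error on a fresh plan:

resource "random_pet" "name" {}

# random_pet.name.id is only known after apply, so Terraform cannot evaluate
# this count expression at plan time and reports "Invalid count argument".
resource "null_resource" "dependent" {
  count = random_pet.name.id == "" ? 0 : 1
}

The module appears to hit the same mechanism because, as noted above, the values feeding those count expressions are derived from computed attributes.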
⚠️ Note
I have already performed the following checks:
- Removed the local .terraform directory: rm -rf .terraform/
- Re-initialized the project root: terraform init
- Re-ran the destroy command; the issue persists.
Versions
- Module version: 21.8.0
- Terraform version: 1.13.5
- Terragrunt version: 0.93.3
- Provider version(s):
  + provider registry.terraform.io/hashicorp/aws 6.20.0
Reproduction Code [Required]
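For completeness, the Terragrunt side is a plain wrapper around the root module below; a hypothetical minimal terragrunt.hcl of that shape (paths and input values are placeholders, not the actual configuration in use) would look roughly like:

terraform {
  # Placeholder path to the root module that contains the "eks" module block below.
  source = "../.."
}

inputs = {
  # Placeholder values for the variables the root module expects.
  eks_configuration = {
    enable_endpoint_public_access = false
    endpoint_public_access_cidrs  = []
  }
}

The commands are then run from that Terragrunt working directory: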
terraform apply -var-file=vars.tfvars
terragrunt destroy -var-file=vars.tfvars

EKS module reference:
# ref:
# - https://docs.aws.amazon.com/eks/latest/userguide/network-reqs.html
locals {
  eks_cluster_name     = "${local.prefix}-eks"
  eks_iam_provider_aud = "${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}:aud"
  eks_iam_provider_sub = "${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}:sub"
  eks_iam_provider_arn = module.eks.oidc_provider_arn
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.8.0"

  depends_on = [module.vpc]

  name               = "test-eks"
  kubernetes_version = "1.34"

  # Kubernetes Upgrade Policy - https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-version-support-policy
  upgrade_policy = {
    support_type = "STANDARD"
  }

  # Network configuration
  vpc_id                   = module.vpc.vpc_id
  control_plane_subnet_ids = module.vpc.private_subnet_ids

  # Security configuration
  create_iam_role              = true
  attach_encryption_policy     = false
  endpoint_private_access      = true
  endpoint_public_access       = var.eks_configuration.enable_endpoint_public_access
  endpoint_public_access_cidrs = var.eks_configuration.endpoint_public_access_cidrs
  create_security_group        = true
  security_group_description   = "EKS cluster security group"
  # authentication_mode = "API"
  enable_cluster_creator_admin_permissions = true

  # Cluster Addons configuration
  addons = {
    vpc-cni = {
      most_recent                 = true
      resolve_conflicts_on_create = "OVERWRITE"
      resolve_conflicts_on_update = "OVERWRITE"
      before_compute              = true
    }
    metrics-server = {}
    kube-proxy = {
      most_recent = true
    }
    coredns = {
      most_recent = true
    }
  }

  # Some defaults
  dataplane_wait_duration = "60s"

  # Control Plane Logging - Enable all log types
  enabled_log_types = [
    "api",
    "audit",
    "authenticator",
    # "controllerManager",
    # "scheduler"
  ]

  # Override defaults
  create_cloudwatch_log_group   = false
  create_kms_key                = false
  enable_kms_key_rotation       = false
  kms_key_enable_default_policy = false
  enable_irsa                   = true
  encryption_config             = null # issues/3469
  enable_auto_mode_custom_tags  = false

  # EKS Managed Node Group(s)
  create_node_security_group                   = true
  node_security_group_enable_recommended_rules = true
  node_security_group_description              = "EKS node group security group - used by nodes to communicate with the cluster API Server"
  node_security_group_use_name_prefix          = true

  subnet_ids = module.vpc.private_subnet_ids

  eks_managed_node_groups = {
    group1 = {
      name           = "test-eks-node-group"
      ami_type       = "BOTTLEROCKET_x86_64"
      instance_types = ["t3a.xlarge"]
      # ec2_ssh_key = "ss-rishang"
      capacity_type  = "ON_DEMAND"

      min_size     = 1
      max_size     = 2
      desired_size = 1

      block_device_mappings = {
        # OS partition - Bottlerocket system files
        xvda = {
          device_name = "/dev/xvda"
          ebs = {
            volume_size           = 20
            volume_type           = "gp3"
            iops                  = 3000
            throughput            = 150
            encrypted             = true
            delete_on_termination = true
          }
        }
        # Data partition - Container images, volumes, and workload data
        xvdb = {
          device_name = "/dev/xvdb"
          ebs = {
            volume_size           = 60
            volume_type           = "gp3"
            iops                  = 3000
            throughput            = 150
            encrypted             = true
            delete_on_termination = true
          }
        }
      }
    }
  }
}

Steps to reproduce:
- Deploy the stack successfully using terragrunt apply -var-file=./vars/dev.tfvars.
- Run terragrunt destroy -var-file=./vars/dev.tfvars.
- Observe that Terraform fails during state refresh with:
  Error: Invalid count argument
  The "count" value depends on resource attributes that cannot be determined until apply.
Expected behavior
Terraform should successfully destroy the EKS cluster and associated resources without requiring -target workarounds.
Actual behavior
Destroy fails immediately during state refresh with:
Error: Invalid count argument
on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 2, in data "aws_partition" "current":
2: count = var.create && var.partition == "" ? 1 : 0
This prevents cleanup and leaves resources orphaned.
Terminal Output Screenshot(s)
22:59:51.771 STDERR terraform: │ Error: Invalid count argument
22:59:51.771 STDERR terraform: │ on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 2, in data "aws_partition" "current":
22:59:51.771 STDERR terraform: │ 2: count = var.create && var.partition == "" ? 1 : 0
22:59:51.804 STDERR terraform: │ on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 5, in data "aws_caller_identity" "current":
22:59:51.804 STDERR terraform: │ 5: count = var.create && var.account_id == "" ? 1 : 0
Additional context
- The error occurs during destroy, plan, and apply.
- Custom patch: I manually made the following change in the eks-managed-node-group module as a workaround, and it fixed the issue:
data "aws_partition" "current" {
# count = var.create && var.partition == "" ? 1 : 0
}
data "aws_caller_identity" "current" {
# count = var.create && var.account_id == "" ? 1 : 0
}
locals {
partition = data.aws_partition.current.partition
account_id = data.aws_caller_identity.current.account_id
}