Hey all,
I'm refactoring our GKE Terraform module from individual resources to a for_each pattern, but I'm running into connection issues with the kubectl provider. Because Terraform wants to recreate the cluster, the provider apparently can't resolve the cluster's connection details during planning. Has anyone encountered this before?
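For context, the refactor keys the cluster resource by a map, roughly like this (names simplified; `var.clusters` is a stand-in for our actual input variable):

```hcl
# Simplified sketch of the refactor -- "var.clusters" is a stand-in
# for our real input map, not the actual variable name.
resource "google_container_cluster" "k8s" {
  for_each = var.clusters

  name     = each.key
  location = var.region
  # ... rest of the cluster config
}
```

Since the resource address changes from `google_container_cluster.k8s` to `google_container_cluster.k8s["..."]`, Terraform plans a destroy/recreate, which is the replacement I describe below.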
My Current Setup
```hcl
# ...
kubectl = {
  source  = "gavinbunney/kubectl"
  version = "1.19.0"
}
# ...

provider "kubectl" {
  alias                  = "md2-integration-tests"
  load_config_file       = false
  host                   = "https://${google_container_cluster.k8s.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.k8s.master_auth[0].cluster_ca_certificate)
}
```

The Problem
When I run terraform plan after refactoring to for_each, the cluster needs to be replaced (expected), but the kubectl provider throws these errors:
```
Error: Get "http://localhost/api/v1/namespaces/md-integration-tests": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/es-tests": dial tcp [::1]:80: connect: connection refused
```
The provider seems to be trying to connect to localhost instead of the actual GKE cluster endpoint.
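For reference, the namespaces in those errors come from manifests shaped roughly like this (simplified; the real module builds the set of namespaces from its inputs):

```hcl
resource "kubectl_manifest" "namespace" {
  provider = kubectl.md2-integration-tests

  # Simplified: the real module derives this set from module inputs.
  for_each = toset(["md-integration-tests", "es-tests"])

  yaml_body = <<-YAML
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ${each.key}
  YAML
}
```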
What I've Tried So Far
Attempt 1: Using load_config_file = true
```hcl
provider "kubectl" {
  alias            = "md2-integration-tests"
  load_config_file = true
  config_context   = "gke_${var.project_id}_${var.region}_${var.cluster_name}"
}
```

Result: still fails with the same error.
Attempt 2: Adding explicit dependencies
```hcl
resource "kubectl_manifest" "namespace" {
  depends_on = [google_container_cluster.k8s]
  # ... rest of config
}
```

Any guidance would be greatly appreciated! Happy to provide more details if needed.
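One thing I haven't tried yet is a `moved` block, so Terraform treats the keyed cluster as the same object instead of replacing it (the key here is a placeholder for whatever map key we end up using):

```hcl
# Placeholder key -- "primary" stands in for our actual map key.
moved {
  from = google_container_cluster.k8s
  to   = google_container_cluster.k8s["primary"]
}
```

Would that avoid the replacement, and therefore the provider's localhost fallback?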
question, help-wanted, gke, kubectl-provider, terraform, for-each, refactoring