
Commit 03c2231

elatt authored and max-rocket-internet committed
Fix issue where ConfigMap isn't applied to new cluster (#235)
If you are trying to recover a cluster that was deleted, the current code will not re-apply the ConfigMap: the template is already rendered, so the kubectl command never gets triggered. This change adds the cluster endpoint (which should be different when spinning up a new cluster, even one with the same name) as a trigger, so we force a re-render and cause the kubectl command to run.
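For background, Terraform re-runs a `null_resource`'s provisioners only when a value in its `triggers` map changes. Below is a minimal sketch of the mechanism, assuming a `local-exec` provisioner and a hypothetical manifest file name; the module's actual provisioner wraps `kubectl` in a retry loop (see the CHANGELOG entries below):

```hcl
resource "null_resource" "update_config_map_aws_auth" {
  # Changing any value in this map taints the resource, so Terraform
  # recreates it and re-runs its provisioners on the next apply.
  triggers {
    config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"

    # New trigger: a recreated cluster gets a fresh endpoint even when it
    # reuses the old name, so the ConfigMap is re-applied to the new cluster.
    endpoint = "${aws_eks_cluster.this.endpoint}"
  }

  # Illustrative provisioner -- the file name is hypothetical; per the
  # CHANGELOG, the real command retries kubectl apply up to 10 times.
  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/config-map-aws-auth.yaml"
  }
}
```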
1 parent 91eb56f commit 03c2231

File tree

2 files changed: +2 -1 lines changed


CHANGELOG.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -12,7 +12,7 @@ project adheres to [Semantic Versioning](http://semver.org/).
 - Write your awesome addition here (by @you)
 
 ### Changed
-
+- Updated the `update_config_map_aws_auth` resource to trigger when the EKS cluster endpoint changes. This likely means that a new cluster was spun up so our ConfigMap won't exist (fixes #234) (by @elatt)
 - Removed invalid action from worker_autoscaling iam policy (by @marcelloromani)
 - Fixed zsh-specific syntax in retry loop for aws auth config map (by @marcelloromani)
 - Fix: fail deployment if applying the aws auth config map still fails after 10 attempts (by @marcelloromani)
```

aws_auth.tf

Lines changed: 1 addition & 0 deletions
```diff
@@ -14,6 +14,7 @@ resource "null_resource" "update_config_map_aws_auth" {
 
   triggers {
     config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
+    endpoint            = "${aws_eks_cluster.this.endpoint}"
   }
 
   count = "${var.manage_aws_auth ? 1 : 0}"
```
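Since a changed trigger value taints the `null_resource`, recovering a deleted cluster (same name, new endpoint) now forces Terraform to recreate this resource on the next apply and re-run the kubectl provisioner against the fresh cluster.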
