Fix issue where ConfigMap isn't applied to new cluster (#235)
If you are trying to recover a cluster that was deleted, the current
code will not re-apply the ConfigMap: it has already been rendered, so
the kubectl command won't get triggered.
This change adds the cluster endpoint (which should be different when
spinning up a new cluster, even one with the same name) to the triggers,
forcing a re-render and causing the kubectl command to run.
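
For context, a minimal sketch of what such a change can look like in Terraform, assuming the module applies the aws-auth ConfigMap through a `null_resource` with a `local-exec` provisioner; the data source, resource, and attribute names below are illustrative assumptions, not the module's exact code:

```hcl
# Illustrative sketch only: names are assumed, not copied from the module.
resource "null_resource" "update_config_map_aws_auth" {
  provisioner "local-exec" {
    # Apply the rendered aws-auth ConfigMap against the cluster.
    command = "kubectl apply -f ${path.module}/config-map-aws-auth.yaml --kubeconfig ${path.module}/kubeconfig"
  }

  triggers = {
    # Re-run whenever the rendered ConfigMap changes (previous behaviour).
    config_map_rendered = data.template_file.config_map_aws_auth.rendered

    # Also re-run when the cluster endpoint changes: a recreated cluster gets
    # a new endpoint even if it keeps the same name, so the ConfigMap is
    # re-applied instead of being skipped.
    endpoint = aws_eks_cluster.this.endpoint
  }
}
```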
CHANGELOG.md: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ project adheres to [Semantic Versioning](http://semver.org/).
 - Write your awesome addition here (by @you)
 
 ### Changed
-
+- Updated the `update_config_map_aws_auth` resource to trigger when the EKS cluster endpoint changes. This likely means that a new cluster was spun up so our ConfigMap won't exist (fixes #234) (by @elatt)
 - Removed invalid action from worker_autoscaling iam policy (by @marcelloromani)
 - Fixed zsh-specific syntax in retry loop for aws auth config map (by @marcelloromani)
 - Fix: fail deployment if applying the aws auth config map still fails after 10 attempts (by @marcelloromani)