Dev 5769/custom cluster admin arn #3201
Conversation
```diff
  bootstrap_cluster_creator_admin_permissions = {
    cluster_creator = {
-     principal_arn = data.aws_iam_session_context.current.issuer_arn
+     principal_arn = var.custom_cluster_creator_admin_arn != "" ? var.custom_cluster_creator_admin_arn : data.aws_iam_session_context.current.issuer_arn
```
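For context, the diff references a variable that would have to be declared elsewhere; a minimal sketch of such a declaration is below. The name comes from the diff, but the description and empty-string default are assumptions, not code from this PR:

```hcl
# Hypothetical declaration of the variable referenced in the diff above
# (an assumption for illustration, not part of the PR shown here).
# An empty string falls back to the caller's issuer ARN.
variable "custom_cluster_creator_admin_arn" {
  description = "Optional principal ARN to grant cluster creator admin permissions in place of the caller's issuer ARN"
  type        = string
  default     = ""
}
```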
you shouldn't do this - you should just pass in the intended role via a cluster access entry. this was just a helper utility to make it easier for users to carry over behavior from before cluster access entries
just FYI
I totally didn't mean to create this PR, but since I did, @bryantbiggs are you saying to disable `enable_cluster_creator_admin_permissions` and create an `aws_eks_access_entry` resource independently of the module, or do you mean something else?
you can do that, or you can do it through the module, like:
terraform-aws-eks/tests/eks-managed-node-group/main.tf
Lines 366 to 381 in d2e6262
```hcl
access_entries = {
  # One access entry with a policy associated
  ex-single = {
    kubernetes_groups = []
    principal_arn     = aws_iam_role.this["single"].arn

    policy_associations = {
      single = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
        access_scope = {
          namespaces = ["default"]
          type       = "namespace"
        }
      }
    }
  }
}
```
`enable_cluster_creator_admin_permissions` is just mimicking the behavior of the past - for folks who rely on Terraform going into the cluster to provision additional resources, this makes that easier. but if you aren't using the same Terraform IAM entity to go into the cluster, don't use `enable_cluster_creator_admin_permissions` and instead just define those entities and what permissions they should have via `access_entries`
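For reference, the standalone-resource route mentioned above might look roughly like the sketch below; the role reference and `example` resource names are hypothetical placeholders, not part of this PR:

```hcl
# Rough sketch of an access entry defined outside the module.
# aws_iam_role.example and the "example" resource names are hypothetical.
resource "aws_eks_access_entry" "example" {
  cluster_name  = module.eks.cluster_name
  principal_arn = aws_iam_role.example.arn
  type          = "STANDARD"
}

# Associate an EKS access policy with the entry, scoped to one namespace.
resource "aws_eks_access_policy_association" "example" {
  cluster_name  = module.eks.cluster_name
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
  principal_arn = aws_eks_access_entry.example.principal_arn

  access_scope {
    type       = "namespace"
    namespaces = ["default"]
  }
}
```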
Oh, I see... That's very helpful, thanks for the help!
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Motivation and Context
Breaking Changes
How Has This Been Tested?
- I have updated at least one of the `examples/*` to demonstrate and validate my change(s)
- I have tested and validated these changes using one or more of the provided `examples/*` projects
- I have executed `pre-commit run -a` on my pull request