
Commit 2de3846

fix: Updates from plan validation

1 parent 940a0e8 commit 2de3846

File tree

15 files changed: +271 −195 lines changed

README.md

Lines changed: 88 additions & 88 deletions
Large diffs are not rendered by default.

docs/compute_resources.md

Lines changed: 27 additions & 14 deletions
@@ -56,22 +56,36 @@ Refer to the [EKS Managed Node Group documentation](https://docs.aws.amazon.com/
 ```hcl
 eks_managed_node_groups = {
   custom_ami = {
-    ami_id = "ami-0caf35bc73450c396"
+    ami_id   = "ami-0caf35bc73450c396"
+    ami_type = "AL2023_x86_64_STANDARD"

     # By default, EKS managed node groups will not append bootstrap script;
     # this adds it back in using the default template provided by the module
     # Note: this assumes the AMI provided is an EKS optimized AMI derivative
     enable_bootstrap_user_data = true

-    pre_bootstrap_user_data = <<-EOT
-      export FOO=bar
-    EOT
-
-    # Because we have full control over the user data supplied, we can also run additional
-    # scripts/configuration changes after the bootstrap script has been run
-    post_bootstrap_user_data = <<-EOT
-      echo "you are free little kubelet!"
-    EOT
+    cloudinit_pre_nodeadm = [{
+      content = <<-EOT
+        ---
+        apiVersion: node.eks.aws/v1alpha1
+        kind: NodeConfig
+        spec:
+          kubelet:
+            config:
+              shutdownGracePeriod: 30s
+              featureGates:
+                DisableKubeletCloudCredentialProviders: true
+      EOT
+      content_type = "application/node.eks.aws"
+    }]
+
+    # This is only possible when `ami_id` is specified, indicating a custom AMI
+    cloudinit_post_nodeadm = [{
+      content = <<-EOT
+        echo "All done"
+      EOT
+      content_type = "text/x-shellscript; charset=\"us-ascii\""
+    }]
   }
 }
 ```

@@ -115,7 +129,7 @@ Refer to the [Self Managed Node Group documentation](https://docs.aws.amazon.com/
 ```hcl
 kubernetes_version = "1.33"

-# This self managed node group will use the latest AWS EKS Optimized AMI for Kubernetes 1.27
+# This self managed node group will use the latest AWS EKS Optimized AMI for Kubernetes 1.33
 self_managed_node_groups = {
   default = {}
 }

@@ -152,7 +166,7 @@ For example, the following creates 4 AWS EKS Managed Node Groups:

 ```hcl
 eks_managed_node_group_defaults = {
-  ami_type       = "AL2_x86_64"
+  ami_type       = "AL2023_x86_64_STANDARD"
   disk_size      = 50
   instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
 }

@@ -166,9 +180,8 @@ For example, the following creates 4 AWS EKS Managed Node Groups:
     instance_types = ["c5.large", "c6i.large", "c6d.large"]
   }

-  # This further overrides the instance types and disk size used
+  # This further overrides the instance types
   persistent = {
-    disk_size = 1024
     instance_types = ["r5.xlarge", "r6i.xlarge", "r5b.xlarge"]
   }

docs/faq.md

Lines changed: 25 additions & 10 deletions
@@ -12,23 +12,36 @@
 `disk_size`, and `remote_access` can only be set when using the EKS managed node group default launch template. This module defaults to providing a custom launch template to allow for custom security groups, tag propagation, etc. If you wish to forgo the custom launch template route, you can set `use_custom_launch_template = false` and then you can set `disk_size` and `remote_access`.

-### I received an error: `expect exactly one securityGroup tagged with kubernetes.io/cluster/<NAME> ...`
+### I received an error: `expect exactly one securityGroup tagged with kubernetes.io/cluster/<CLUSTER_NAME> ...`

-By default, EKS creates a cluster primary security group that is created outside of the module and the EKS service adds the tag `{ "kubernetes.io/cluster/<CLUSTER_NAME>" = "owned" }`. This on its own does not cause any conflicts for addons such as the AWS Load Balancer Controller until users decide to attach both the cluster primary security group and the shared node security group created by the module (by setting `attach_cluster_primary_security_group = true`). The issue is not with having multiple security groups in your account with this tag key:value combination, but having multiple security groups with this tag key:value combination attached to nodes in the same cluster. There are a few ways to resolve this depending on your use case/intentions:
+⚠️ `<CLUSTER_NAME>` would be the name of your cluster

-⚠️ `<CLUSTER_NAME>` below needs to be replaced with the name of your cluster
+By default, EKS creates a cluster primary security group that is created outside of the module and the EKS service adds the tag `{ "kubernetes.io/cluster/<CLUSTER_NAME>" = "owned" }`. This on its own does not cause any conflicts for addons such as the AWS Load Balancer Controller until users decide to attach both the cluster primary security group and the shared node security group created by the module (by setting `attach_cluster_primary_security_group = true`). The issue is not with having multiple security groups in your account with this tag key:value combination, but having multiple security groups with this tag key:value combination attached to nodes in the same cluster. There are a few ways to resolve this depending on your use case/intentions:

 1. If you want to use the cluster primary security group, you can disable the creation of the shared node security group with:

 ```hcl
-   create_node_security_group = false # default is true
-   attach_cluster_primary_security_group = true # default is false
+   create_node_security_group = false # default is true
+
+   eks_managed_node_group_defaults = {
+     attach_cluster_primary_security_group = true # default is false
+   }
+   # Or for self-managed
+   self_managed_node_group_defaults = {
+     attach_cluster_primary_security_group = true # default is false
+   }
 ```

 2. By not attaching the cluster primary security group. The cluster primary security group has quite broad access and the module has instead provided a security group with the minimum amount of access to launch an empty EKS cluster successfully and users are encouraged to open up access when necessary to support their workload.

 ```hcl
-   attach_cluster_primary_security_group = false # this is the default for the module
+   eks_managed_node_group_defaults = {
+     attach_cluster_primary_security_group = true # default is false
+   }
+   # Or for self-managed
+   self_managed_node_group_defaults = {
+     attach_cluster_primary_security_group = true # default is false
+   }
 ```

 In theory, if you are attaching the cluster primary security group, you shouldn't need to use the shared node security group created by the module. However, this is left up to users to decide for their requirements and use case.
@@ -58,6 +71,8 @@ If you require a public endpoint, setting up both (public and private) and restr
 The module is configured to ignore this value. Unfortunately, Terraform does not support variables within the `lifecycle` block. The setting is ignored to allow autoscaling via controllers such as cluster autoscaler or Karpenter to work properly and without interference by Terraform. Changing the desired count must be handled outside of Terraform once the node group is created.

+:info: See [this](https://github.com/bryantbiggs/eks-desired-size-hack) for a workaround to this limitation.
+
 ### How do I access compute resource attributes?

 Examples of accessing the attributes of the compute resource(s) created by the root module are shown below. Note - the assumption is that your cluster module definition is named `eks` as in `module "eks" { ... }`:
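The `desired_size` behavior described in the hunk above relies on Terraform's `lifecycle` meta-argument. A simplified sketch of the pattern (illustrative only, not the module's literal code):

```hcl
resource "aws_eks_node_group" "example" {
  # ... other required arguments elided

  scaling_config {
    min_size     = 1
    max_size     = 10
    desired_size = 2
  }

  lifecycle {
    # Autoscalers (cluster autoscaler, Karpenter) change desired_size at runtime;
    # ignoring it here prevents Terraform from reverting those changes on apply
    ignore_changes = [scaling_config[0].desired_size]
  }
}
```

Because `ignore_changes` cannot reference variables, this cannot be toggled per user, which is why the module hard-codes it.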
@@ -90,6 +105,10 @@ aws eks describe-addon-versions --query 'addons[*].addonName'
 ### What configuration values are available for an add-on?

+> [!NOTE]
+> The available configuration values will vary between add-on versions,
+> typically more configuration values will be added in later versions as functionality is enabled by EKS.
+
 You can retrieve the configuration value schema for a given addon using the following command:

 ```sh
@@ -286,7 +305,3 @@ Returns (at the time of writing):
   }
 }
 ```
-
-> [!NOTE]
-> The available configuration values will vary between add-on versions,
-> typically more configuration values will be added in later versions as functionality is enabled by EKS.

docs/network_connectivity.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ See the example snippet below which adds additional security group rules to the
 ```hcl
 ...
 # Extend cluster security group rules
-cluster_security_group_additional_rules = {
+security_group_additional_rules = {
   egress_nodes_ephemeral_ports_tcp = {
     description = "To node 1025-65535"
     protocol    = "tcp"

docs/user_data.md

Lines changed: 20 additions & 4 deletions
@@ -10,7 +10,8 @@ Users can see the various methods of using and providing user data through the [
 - AMI types of `BOTTLEROCKET_*`, user data must be in TOML format
 - AMI types of `WINDOWS_*`, user data must be in powershell/PS1 script format
 - Self Managed Node Groups
-  - `AL2_x86_64` AMI type (default) -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template
+  - `AL2_*` AMI types -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template
+  - `AL2023_*` AMI types -> the user data template (MIME multipart format) provided by the module is used as the default; users are able to provide their own user data template
   - `BOTTLEROCKET_*` AMI types -> the user data template (TOML file) provided by the module is used as the default; users are able to provide their own user data template
   - `WINDOWS_*` AMI types -> the user data template (powershell/PS1 script) provided by the module is used as the default; users are able to provide their own user data template

@@ -24,9 +25,24 @@ When using an EKS managed node group, users have 2 primary routes for interactin
 - Users can use the following variables to facilitate this process:

-  ```hcl
-  pre_bootstrap_user_data = "..."
-  ```
+  For `AL2_*`, `BOTTLEROCKET_*`, and `WINDOWS_*`:
+  ```hcl
+  pre_bootstrap_user_data = "..."
+  ```
+
+  For `AL2023_*`
+  ```hcl
+  cloudinit_pre_nodeadm = [{
+    content = <<-EOT
+      ---
+      apiVersion: node.eks.aws/v1alpha1
+      kind: NodeConfig
+      spec:
+        ...
+    EOT
+    content_type = "application/node.eks.aws"
+  }]
+  ```

 2. If a custom AMI is used, then per the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-custom-ami), users will need to supply the necessary user data to bootstrap and register nodes with the cluster when launched. There are two routes to facilitate this bootstrapping process:
   - If the AMI used is a derivative of the [AWS EKS Optimized AMI](https://github.com/awslabs/amazon-eks-ami), users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched:
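A minimal sketch of that opt-in route for a custom AMI derived from the EKS optimized AMI, pulled together from the hunks in this commit (the AMI ID is a placeholder; values are illustrative only):

```hcl
eks_managed_node_groups = {
  custom_ami = {
    # Placeholder - replace with your own EKS-optimized-derivative AMI
    ami_id   = "ami-xxxxxxxxxxxxxxxxx"
    ami_type = "AL2023_x86_64_STANDARD"

    # Opt in to the bootstrap user data template provided by the module;
    # without this, EKS managed node groups will not append bootstrap user data
    enable_bootstrap_user_data = true
  }
}
```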

main.tf

Lines changed: 14 additions & 14 deletions
@@ -60,7 +60,7 @@ resource "aws_eks_cluster" "this" {
     content {
       enabled    = compute_config.value.enabled
       node_pools = compute_config.value.node_pools
-      node_role_arn = try(compute_config.value.node_pools, []) > 0 ? try(compute_config.value.node_role_arn, aws_iam_role.eks_auto[0].arn, null) : null
+      node_role_arn = compute_config.value.node_pools != null ? try(compute_config.value.node_role_arn, aws_iam_role.eks_auto[0].arn, null) : null
     }
   }

@@ -283,14 +283,14 @@ resource "aws_eks_access_entry" "this" {
   for_each = { for k, v in local.merged_access_entries : k => v if local.create }

   cluster_name      = aws_eks_cluster.this[0].id
-  kubernetes_groups = each.value.kubernetes_groups
+  kubernetes_groups = try(each.value.kubernetes_groups, null)
   principal_arn     = each.value.principal_arn
-  type              = each.value.type
-  user_name         = each.value.user_name
+  type              = try(each.value.type, null)
+  user_name         = try(each.value.user_name, null)

   tags = merge(
     var.tags,
-    each.value.tags,
+    try(each.value.tags, {}),
   )
 }

@@ -403,12 +403,12 @@ resource "aws_security_group_rule" "cluster" {
   from_port = each.value.from_port
   to_port   = each.value.to_port
   type      = each.value.type
-  description              = each.value.description
-  cidr_blocks              = each.value.cidr_blocks
-  ipv6_cidr_blocks         = each.value.ipv6_cidr_blocks
-  prefix_list_ids          = each.value.prefix_list_ids
-  self                     = each.value.self
-  source_security_group_id = each.value.source_node_security_group ? local.node_security_group_id : each.value.source_security_group_id
+  description              = try(each.value.description, null)
+  cidr_blocks              = try(each.value.cidr_blocks, null)
+  ipv6_cidr_blocks         = try(each.value.ipv6_cidr_blocks, null)
+  prefix_list_ids          = try(each.value.prefix_list_ids, null)
+  self                     = try(each.value.self, null)
+  source_security_group_id = try(each.value.source_node_security_group, false) ? local.node_security_group_id : try(each.value.source_security_group_id, null)
 }

 ################################################################################

@@ -733,7 +733,7 @@ resource "aws_iam_role_policy_attachment" "custom" {
 data "aws_eks_addon_version" "this" {
   for_each = var.addons != null && local.create && !local.create_outposts_local_cluster ? var.addons : {}

-  addon_name         = try(each.value.name, each.key)
+  addon_name         = coalesce(each.value.name, each.key)
   kubernetes_version = coalesce(var.kubernetes_version, aws_eks_cluster.this[0].version)
   most_recent        = each.value.most_recent
 }

@@ -743,7 +743,7 @@ resource "aws_eks_addon" "this" {
   for_each = var.addons != null && local.create && !local.create_outposts_local_cluster ? { for k, v in var.addons : k => v if !v.before_compute } : {}

   cluster_name = aws_eks_cluster.this[0].id
-  addon_name   = try(each.value.name, each.key)
+  addon_name   = coalesce(each.value.name, each.key)

   addon_version        = try(each.value.addon_version, data.aws_eks_addon_version.this[each.key].version)
   configuration_values = each.value.configuration_values

@@ -786,7 +786,7 @@ resource "aws_eks_addon" "before_compute" {
   for_each = var.addons != null && local.create && !local.create_outposts_local_cluster ? { for k, v in var.addons : k => v if v.before_compute } : {}

   cluster_name = aws_eks_cluster.this[0].id
-  addon_name   = try(each.value.name, each.key)
+  addon_name   = coalesce(each.value.name, each.key)

   addon_version        = try(each.value.addon_version, data.aws_eks_addon_version.this[each.key].version)
   configuration_values = each.value.configuration_values
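Context for the `try` → `coalesce` swaps above: with `optional()` object attributes, the attribute always exists (possibly as `null`), so the two functions behave differently. A standalone illustration (not module code):

```hcl
locals {
  # Mimics an addon entry whose optional `name` attribute was not set
  addon = { name = null }

  # try() only falls back when evaluation *errors*; a null attribute
  # evaluates cleanly, so this yields null rather than "coredns"
  with_try = try(local.addon.name, "coredns")

  # coalesce() returns the first non-null, non-empty argument: "coredns"
  with_coalesce = coalesce(local.addon.name, "coredns")
}
```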

modules/eks-managed-node-group/README.md

Lines changed: 2 additions & 1 deletion
@@ -94,6 +94,7 @@ module "eks_managed_node_group" {
 | [aws_vpc_security_group_ingress_rule.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_security_group_ingress_rule) | resource |
 | [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
 | [aws_ec2_instance_type.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ec2_instance_type) | data source |
+| [aws_eks_cluster_versions.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_versions) | data source |
 | [aws_iam_policy_document.assume_role_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
 | [aws_iam_policy_document.role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
 | [aws_partition.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) | data source |

@@ -117,7 +118,7 @@
 | <a name="input_cluster_auth_base64"></a> [cluster\_auth\_base64](#input\_cluster\_auth\_base64) | Base64 encoded CA of associated EKS cluster | `string` | `""` | no |
 | <a name="input_cluster_endpoint"></a> [cluster\_endpoint](#input\_cluster\_endpoint) | Endpoint of associated EKS cluster | `string` | `""` | no |
 | <a name="input_cluster_ip_family"></a> [cluster\_ip\_family](#input\_cluster\_ip\_family) | The IP family used to assign Kubernetes pod and service addresses. Valid values are `ipv4` (default) and `ipv6` | `string` | `"ipv4"` | no |
-| <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | Name of associated EKS cluster | `string` | `null` | no |
+| <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | Name of associated EKS cluster | `string` | `""` | no |
 | <a name="input_cluster_primary_security_group_id"></a> [cluster\_primary\_security\_group\_id](#input\_cluster\_primary\_security\_group\_id) | The ID of the EKS cluster primary security group to associate with the instance(s). This is the security group that is automatically created by the EKS service | `string` | `null` | no |
 | <a name="input_cluster_service_cidr"></a> [cluster\_service\_cidr](#input\_cluster\_service\_cidr) | The CIDR block (IPv4 or IPv6) used by the cluster to assign Kubernetes service IP addresses. This is derived from the cluster itself | `string` | `""` | no |
 | <a name="input_cpu_options"></a> [cpu\_options](#input\_cpu\_options) | The CPU options for the instance | <pre>object({<br/> amd_sev_snp = optional(string)<br/> core_count = optional(number)<br/> threads_per_core = optional(number)<br/> })</pre> | `null` | no |

modules/eks-managed-node-group/main.tf

Lines changed: 9 additions & 2 deletions
@@ -361,9 +361,16 @@ resource "aws_launch_template" "this" {
 # AMI SSM Parameter
 ################################################################################

+data "aws_eks_cluster_versions" "this" {
+  count = var.create && var.kubernetes_version == null ? 1 : 0
+
+  cluster_type   = "eks"
+  version_status = "STANDARD_SUPPORT"
+}
+
 locals {
   # Just to ensure templating doesn't fail when values are not provided
-  ssm_kubernetes_version = var.kubernetes_version != null ? var.kubernetes_version : ""
+  ssm_kubernetes_version = var.kubernetes_version != null ? var.kubernetes_version : try(data.aws_eks_cluster_versions.this[0].cluster_versions[0].cluster_version, "UNSPECIFIED")
   ssm_ami_type           = var.ami_type != null ? var.ami_type : ""

   # Map the AMI type to the respective SSM param path

@@ -737,7 +744,7 @@ resource "aws_vpc_security_group_ingress_rule" "this" {
 }

 resource "aws_vpc_security_group_egress_rule" "this" {
-  for_each = { for k, v in local.security_group_egress_rules : k => v if length(local.security_group_egress_rules) && local.create_security_group }
+  for_each = { for k, v in local.security_group_egress_rules : k => v if length(local.security_group_egress_rules) > 0 && local.create_security_group }

   cidr_ipv4 = each.value.cidr_ipv4
   cidr_ipv6 = each.value.cidr_ipv6
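The `for_each` fix in the hunk above is needed because Terraform has no implicit truthiness: a `for` expression's `if` clause requires a bool, while `length()` returns a number. A standalone illustration (not module code):

```hcl
locals {
  rules = {
    https = { port = 443 }
  }

  # Invalid - `length(local.rules)` is a number, not a bool, so this errors:
  #   filtered = { for k, v in local.rules : k => v if length(local.rules) }

  # Valid - the explicit comparison yields a bool
  filtered = { for k, v in local.rules : k => v if length(local.rules) > 0 }
}
```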

0 commit comments