docs/faq.md (25 additions & 10 deletions)
@@ -12,23 +12,36 @@
`disk_size`, and `remote_access` can only be set when using the EKS managed node group default launch template. This module defaults to providing a custom launch template to allow for custom security groups, tag propagation, etc. If you wish to forgo the custom launch template route, you can set `use_custom_launch_template = false` and then you can set `disk_size` and `remote_access`.
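
For example, a minimal sketch set inside the `module "eks"` definition (the node group name and the specific values shown are illustrative only):

```hcl
eks_managed_node_groups = {
  example = {
    # Required to allow `disk_size` and `remote_access` to be set
    use_custom_launch_template = false

    disk_size = 50

    remote_access = {
      ec2_ssh_key               = "my-key-pair"             # assumed existing EC2 key pair
      source_security_group_ids = ["sg-0123456789abcdef0"]  # assumed existing security group
    }
  }
}
```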
### I received an error: `expect exactly one securityGroup tagged with kubernetes.io/cluster/<CLUSTER_NAME> ...`
⚠️ `<CLUSTER_NAME>` refers to the name of your cluster

By default, EKS creates a cluster primary security group outside of the module, and the EKS service adds the tag `{ "kubernetes.io/cluster/<CLUSTER_NAME>" = "owned" }` to it. This on its own does not cause any conflicts for addons such as the AWS Load Balancer Controller until users decide to attach both the cluster primary security group and the shared node security group created by the module (by setting `attach_cluster_primary_security_group = true`). The issue is not with having multiple security groups in your account with this tag key:value combination, but with having multiple security groups with this tag key:value combination attached to nodes in the same cluster. There are a few ways to resolve this depending on your use case/intentions:
1. If you want to use the cluster primary security group, you can disable the creation of the shared node security group with:

   ```hcl
   create_node_security_group = false # default is true

   eks_managed_node_group_defaults = {
     attach_cluster_primary_security_group = true # default is false
   }

   # Or for self-managed
   self_managed_node_group_defaults = {
     attach_cluster_primary_security_group = true # default is false
   }
   ```

2. By not attaching the cluster primary security group. The cluster primary security group grants quite broad access; the module instead provides a security group with the minimum access required to launch an empty EKS cluster successfully, and users are encouraged to open up access as necessary to support their workloads.

   ```hcl
   eks_managed_node_group_defaults = {
     attach_cluster_primary_security_group = false # this is the default for the module
   }

   # Or for self-managed
   self_managed_node_group_defaults = {
     attach_cluster_primary_security_group = false # this is the default for the module
   }
   ```
In theory, if you are attaching the cluster primary security group, you shouldn't need to use the shared node security group created by the module. However, this is left up to users to decide for their requirements and use case.
@@ -58,6 +71,8 @@ If you require a public endpoint, setting up both (public and private) and restr
The module is configured to ignore this value. Unfortunately, Terraform does not support variables within the `lifecycle` block. The setting is ignored so that autoscaling via controllers such as Cluster Autoscaler or Karpenter can work properly without interference from Terraform. Changing the desired count must be handled outside of Terraform once the node group is created.

:info: See [this](https://github.com/bryantbiggs/eks-desired-size-hack) for a workaround to this limitation.
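
For reference, a simplified sketch of the ignore pattern (an approximation of what the module applies internally, not its exact source; all names and values below are placeholders):

```hcl
resource "aws_eks_node_group" "example" {
  cluster_name  = "my-cluster"                  # placeholder cluster name
  node_role_arn = aws_iam_role.node.arn         # assumes an existing node IAM role
  subnet_ids    = ["subnet-0123456789abcdef0"]  # placeholder subnet

  scaling_config {
    min_size     = 1
    max_size     = 3
    desired_size = 1
  }

  lifecycle {
    # Ignore post-creation changes to desired_size so autoscalers
    # (e.g. Cluster Autoscaler, Karpenter) can manage it without Terraform reverting them
    ignore_changes = [scaling_config[0].desired_size]
  }
}
```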
### How do I access compute resource attributes?
Examples of accessing the attributes of the compute resource(s) created by the root module are shown below. Note - the assumption is that your cluster module definition is named `eks` as in `module "eks" { ... }`:
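
As a hedged sketch of the pattern (assuming the node group sub-module exposes `node_group_arn` and `iam_role_arn` outputs; adjust to the attributes you actually need):

```hcl
output "eks_managed_node_group_attributes" {
  description = "Selected attributes of the EKS managed node group(s) created by the module"
  value = {
    for name, group in module.eks.eks_managed_node_groups : name => {
      node_group_arn = group.node_group_arn
      iam_role_arn   = group.iam_role_arn
    }
  }
}
```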
docs/user_data.md (20 additions & 4 deletions)
@@ -10,7 +10,8 @@ Users can see the various methods of using and providing user data through the [
- AMI types of `BOTTLEROCKET_*`, user data must be in TOML format
- AMI types of `WINDOWS_*`, user data must be in powershell/PS1 script format
- Self Managed Node Groups
  - `AL2_*` AMI types -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template
  - `AL2023_*` AMI types -> the user data template (MIME multipart format) provided by the module is used as the default; users are able to provide their own user data template
  - `BOTTLEROCKET_*` AMI types -> the user data template (TOML file) provided by the module is used as the default; users are able to provide their own user data template (see the sketch following this list)
  - `WINDOWS_*` AMI types -> the user data template (powershell/PS1 script) provided by the module is used as the default; users are able to provide their own user data template
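
For instance, a minimal sketch of the Bottlerocket case (the node group name is arbitrary, the TOML settings are only an example, and `bootstrap_extra_args` is assumed here as the place to append additional TOML to the module-rendered user data):

```hcl
self_managed_node_groups = {
  bottlerocket = {
    ami_type = "BOTTLEROCKET_x86_64"

    # Additional TOML appended to the user data rendered by the module
    bootstrap_extra_args = <<-EOT
      [settings.kernel]
      lockdown = "integrity"
    EOT
  }
}
```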
@@ -24,9 +25,24 @@ When using an EKS managed node group, users have 2 primary routes for interactin
- Users can use the following variables to facilitate this process:

  For `AL2_*`, `BOTTLEROCKET_*`, and `WINDOWS_*`:

  ```hcl
  pre_bootstrap_user_data = "..."
  ```

  For `AL2023_*`:

  ```hcl
  cloudinit_pre_nodeadm = [{
    content      = <<-EOT
      ---
      apiVersion: node.eks.aws/v1alpha1
      kind: NodeConfig
      spec:
        ...
    EOT
    content_type = "application/node.eks.aws"
  }]
  ```
2. If a custom AMI is used, then per the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-custom-ami), users will need to supply the necessary user data to bootstrap and register nodes with the cluster when launched. There are two routes to facilitate this bootstrapping process:
- If the AMI used is a derivative of the [AWS EKS Optimized AMI](https://github.com/awslabs/amazon-eks-ami), users can opt in to using a template provided by the module that supplies the minimum configuration necessary to bootstrap the node when launched:
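
  A minimal sketch of that opt-in (the node group name and AMI ID are placeholders):

  ```hcl
  eks_managed_node_groups = {
    custom_ami = {
      ami_id = "ami-0123456789abcdef0" # placeholder - a custom AMI derived from the EKS optimized AMI

      # Opt in to the module-supplied bootstrap user data template
      enable_bootstrap_user_data = true
    }
  }
  ```
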
| Name | Type |
|------|------|
| [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
| [aws_ec2_instance_type.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ec2_instance_type) | data source |
| [aws_eks_cluster_versions.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_versions) | data source |
| [aws_iam_policy_document.assume_role_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_partition.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) | data source |

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_cluster_auth_base64"></a> [cluster\_auth\_base64](#input\_cluster\_auth\_base64) | Base64 encoded CA of associated EKS cluster | `string` | `""` | no |
| <a name="input_cluster_endpoint"></a> [cluster\_endpoint](#input\_cluster\_endpoint) | Endpoint of associated EKS cluster | `string` | `""` | no |
| <a name="input_cluster_ip_family"></a> [cluster\_ip\_family](#input\_cluster\_ip\_family) | The IP family used to assign Kubernetes pod and service addresses. Valid values are `ipv4` (default) and `ipv6` | `string` | `"ipv4"` | no |
| <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | Name of associated EKS cluster | `string` | `""` | no |
| <a name="input_cluster_primary_security_group_id"></a> [cluster\_primary\_security\_group\_id](#input\_cluster\_primary\_security\_group\_id) | The ID of the EKS cluster primary security group to associate with the instance(s). This is the security group that is automatically created by the EKS service | `string` | `null` | no |
| <a name="input_cluster_service_cidr"></a> [cluster\_service\_cidr](#input\_cluster\_service\_cidr) | The CIDR block (IPv4 or IPv6) used by the cluster to assign Kubernetes service IP addresses. This is derived from the cluster itself | `string` | `""` | no |
| <a name="input_cpu_options"></a> [cpu\_options](#input\_cpu\_options) | The CPU options for the instance | <pre>object({<br/>    amd_sev_snp      = optional(string)<br/>    core_count       = optional(number)<br/>    threads_per_core = optional(number)<br/>  })</pre> | `null` | no |