Commit 323fb75

docs: Move examples that are more like test cases to the new tests/ directory; add better example configurations (#3069)
* chore: Move examples that are more like test cases to the new `tests/` directory
* chore: Stash
* feat: Add better examples for EKS managed node groups
* chore: Add better examples for self-managed node groups
* chore: Update docs and correct `nodegroup` to `node group`
1 parent 73b752a

File tree

85 files changed (+509, -109 lines)


README.md

Lines changed: 24 additions & 37 deletions
```diff
@@ -1,6 +1,6 @@
 # AWS EKS Terraform module
 
-Terraform module which creates AWS EKS (Kubernetes) resources
+Terraform module which creates Amazon EKS (Kubernetes) resources
 
 [![SWUbanner](https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/banner2-direct.svg)](https://github.com/vshymanskyy/StandWithUkraine/blob/main/docs/README.md)
 
```
````diff
@@ -23,13 +23,6 @@ Please note that we strive to provide a comprehensive suite of documentation for
 - [AWS EKS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
 - [Kubernetes Documentation](https://kubernetes.io/docs/home/)
 
-#### Reference Architecture
-
-The examples provided under `examples/` provide a comprehensive suite of configurations that demonstrate nearly all of the possible different configurations and settings that can be used with this module. However, these examples are not representative of clusters that you would normally find in use for production workloads. For reference architectures that utilize this module, please see the following:
-
-- [EKS Reference Architecture](https://github.com/clowdhaus/eks-reference-architecture)
-- [EKS Blueprints](https://github.com/aws-ia/terraform-aws-eks-blueprints)
-
 ## Usage
 
 ```hcl
````
```diff
@@ -38,20 +31,15 @@ module "eks" {
   version = "~> 20.0"
 
   cluster_name                   = "my-cluster"
-  cluster_version                = "1.29"
+  cluster_version                = "1.30"
 
   cluster_endpoint_public_access = true
 
   cluster_addons = {
-    coredns = {
-      most_recent = true
-    }
-    kube-proxy = {
-      most_recent = true
-    }
-    vpc-cni = {
-      most_recent = true
-    }
+    coredns                = {}
+    eks-pod-identity-agent = {}
+    kube-proxy             = {}
+    vpc-cni                = {}
   }
 
   vpc_id = "vpc-1234556abcdef"
```
```diff
@@ -65,12 +53,13 @@ module "eks" {
 
   eks_managed_node_groups = {
     example = {
-      min_size     = 1
-      max_size     = 10
-      desired_size = 1
+      # Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
+      ami_type       = "AL2023_x86_64_STANDARD"
+      instance_types = ["m5.xlarge"]
 
-      instance_types = ["t3.large"]
-      capacity_type  = "SPOT"
+      min_size     = 2
+      max_size     = 10
+      desired_size = 2
     }
   }
 
```
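For readability, here is the full `module "eks"` usage block as it reads after the two hunks above. Only the lines shown in the diff are reproduced; the `source` argument is assumed from the module's registry address, and arguments elided between hunks are marked rather than guessed.

```hcl
module "eks" {
  # The registry source line does not appear in these hunks; it is assumed here
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name                   = "my-cluster"
  cluster_version                = "1.30"

  cluster_endpoint_public_access = true

  cluster_addons = {
    coredns                = {}
    eks-pod-identity-agent = {}
    kube-proxy             = {}
    vpc-cni                = {}
  }

  vpc_id = "vpc-1234556abcdef"
  # ... arguments elided from the diff ...

  eks_managed_node_groups = {
    example = {
      # Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
      ami_type       = "AL2023_x86_64_STANDARD"
      instance_types = ["m5.xlarge"]

      min_size     = 2
      max_size     = 10
      desired_size = 2
    }
  }
}
```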
```diff
@@ -105,7 +94,7 @@
 
 ### Cluster Access Entry
 
-When enabling `authentication_mode = "API_AND_CONFIG_MAP"`, EKS will automatically create an access entry for the IAM role(s) used by managed nodegroup(s) and Fargate profile(s). There are no additional actions required by users. For self-managed nodegroups and the Karpenter sub-module, this project automatically adds the access entry on behalf of users so there are no additional actions required by users.
+When enabling `authentication_mode = "API_AND_CONFIG_MAP"`, EKS will automatically create an access entry for the IAM role(s) used by managed node group(s) and Fargate profile(s). There are no additional actions required by users. For self-managed node groups and the Karpenter sub-module, this project automatically adds the access entry on behalf of users so there are no additional actions required by users.
 
 On clusters that were created prior to CAM support, there will be an existing access entry for the cluster creator. This was previously not visible when using `aws-auth` ConfigMap, but will become visible when access entry is enabled.
```
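To make the access-entry behaviour above concrete, here is a minimal sketch. `authentication_mode` and `enable_cluster_creator_admin_permissions` both appear elsewhere in this README; the `access_entries` map, its ARNs, and the policy association shown are illustrative assumptions to verify against the module's inputs.

```hcl
module "eks" {
  # Truncated for brevity ...

  # Run API- and ConfigMap-based authentication side by side
  authentication_mode = "API_AND_CONFIG_MAP"

  # Add the identity running Terraform as a cluster administrator via an access entry
  enable_cluster_creator_admin_permissions = true

  # Hypothetical additional entry; managed node groups and Fargate profiles
  # receive their access entries automatically, as described above
  access_entries = {
    operators = {
      principal_arn = "arn:aws:iam::111122223333:role/operators" # placeholder ARN

      policy_associations = {
        admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }
}
```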

````diff
@@ -115,22 +104,22 @@ Setting the `bootstrap_cluster_creator_admin_permissions` is a one time operatio
 
 ### Enabling EFA Support
 
-When enabling EFA support via `enable_efa_support = true`, there are two locations this can be specified - one at the cluster level, and one at the nodegroup level. Enabling at the cluster level will add the EFA required ingress/egress rules to the shared security group created for the nodegroup(s). Enabling at the nodegroup level will do the following (per nodegroup where enabled):
+When enabling EFA support via `enable_efa_support = true`, there are two locations this can be specified - one at the cluster level, and one at the node group level. Enabling at the cluster level will add the EFA required ingress/egress rules to the shared security group created for the node group(s). Enabling at the node group level will do the following (per node group where enabled):
 
-1. All EFA interfaces supported by the instance will be exposed on the launch template used by the nodegroup
-2. A placement group with `strategy = "clustered"` per EFA requirements is created and passed to the launch template used by the nodegroup
+1. All EFA interfaces supported by the instance will be exposed on the launch template used by the node group
+2. A placement group with `strategy = "clustered"` per EFA requirements is created and passed to the launch template used by the node group
 3. Data sources will reverse lookup the availability zones that support the instance type selected based on the subnets provided, ensuring that only the associated subnets are passed to the launch template and therefore used by the placement group. This avoids the placement group being created in an availability zone that does not support the instance type selected.
 
 > [!TIP]
 > Use the [aws-efa-k8s-device-plugin](https://github.com/aws/eks-charts/tree/master/stable/aws-efa-k8s-device-plugin) Helm chart to expose the EFA interfaces on the nodes as an extended resource, and allow pods to request the interfaces be mounted to their containers.
 >
 > The EKS AL2 GPU AMI comes with the necessary EFA components pre-installed - you just need to expose the EFA devices on the nodes via their launch templates, ensure the required EFA security group rules are in place, and deploy the `aws-efa-k8s-device-plugin` in order to start utilizing EFA within your cluster. Your application container will need to have the necessary libraries and runtime in order to utilize communication over the EFA interfaces (NCCL, aws-ofi-nccl, hwloc, libfabric, aws-neuronx-collectives, CUDA, etc.).
 
-If you disable the creation and use of the managed nodegroup custom launch template (`create_launch_template = false` and/or `use_custom_launch_template = false`), this will interfere with the EFA functionality provided. In addition, if you do not supply an `instance_type` for self-managed nodegroup(s), or `instance_types` for the managed nodegroup(s), this will also interfere with the functionality. In order to support the EFA functionality provided by `enable_efa_support = true`, you must utilize the custom launch template created/provided by this module, and supply an `instance_type`/`instance_types` for the respective nodegroup.
+If you disable the creation and use of the managed node group custom launch template (`create_launch_template = false` and/or `use_custom_launch_template = false`), this will interfere with the EFA functionality provided. In addition, if you do not supply an `instance_type` for self-managed node group(s), or `instance_types` for the managed node group(s), this will also interfere with the functionality. In order to support the EFA functionality provided by `enable_efa_support = true`, you must utilize the custom launch template created/provided by this module, and supply an `instance_type`/`instance_types` for the respective node group.
 
-The logic behind supporting EFA uses a data source to lookup the instance type to retrieve the number of interfaces that the instance supports in order to enumerate and expose those interfaces on the launch template created. For managed nodegroups where a list of instance types are supported, the first instance type in the list is used to calculate the number of EFA interfaces supported. Mixing instance types with varying number of interfaces is not recommended for EFA (or in some cases, mixing instance types is not supported - i.e. - p5.48xlarge and p4d.24xlarge). In addition to exposing the EFA interfaces and updating the security group rules, a placement group is created per the EFA requirements and only the availability zones that support the instance type selected are used in the subnets provided to the nodegroup.
+The logic behind supporting EFA uses a data source to lookup the instance type to retrieve the number of interfaces that the instance supports in order to enumerate and expose those interfaces on the launch template created. For managed node groups where a list of instance types are supported, the first instance type in the list is used to calculate the number of EFA interfaces supported. Mixing instance types with varying number of interfaces is not recommended for EFA (or in some cases, mixing instance types is not supported - i.e. - p5.48xlarge and p4d.24xlarge). In addition to exposing the EFA interfaces and updating the security group rules, a placement group is created per the EFA requirements and only the availability zones that support the instance type selected are used in the subnets provided to the node group.
 
-In order to enable EFA support, you will have to specify `enable_efa_support = true` on both the cluster and each nodegroup that you wish to enable EFA support for:
+In order to enable EFA support, you will have to specify `enable_efa_support = true` on both the cluster and each node group that you wish to enable EFA support for:
 
 ```hcl
 module "eks" {
````
```diff
@@ -140,14 +129,14 @@ module "eks" {
   # Truncated for brevity ...
 
   # Adds the EFA required security group rules to the shared
-  # security group created for the nodegroup(s)
+  # security group created for the node group(s)
   enable_efa_support = true
 
   eks_managed_node_groups = {
     example = {
       instance_types = ["p5.48xlarge"]
 
-      # Exposes all EFA interfaces on the launch template created by the nodegroup(s)
+      # Exposes all EFA interfaces on the launch template created by the node group(s)
       # This would expose all 32 EFA interfaces for the p5.48xlarge instance type
       enable_efa_support = true
 
```
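Following the TIP in the hunk above, the device plugin itself can be installed alongside this module. A minimal sketch using the Terraform `helm_release` resource; the `eks-charts` repository URL is an assumption to verify, and the helm provider's cluster connection configuration is omitted.

```hcl
resource "helm_release" "aws_efa_k8s_device_plugin" {
  name      = "aws-efa-k8s-device-plugin"
  namespace = "kube-system"

  # Assumed location of the eks-charts Helm repository
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-efa-k8s-device-plugin"
}
```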
```diff
@@ -169,12 +158,10 @@
 
 ## Examples
 
-- [EKS Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group): EKS Cluster using EKS managed node groups
-- [Fargate Profile](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/fargate_profile): EKS cluster using [Fargate Profiles](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html)
+- [EKS Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks-managed-node-group): EKS Cluster using EKS managed node groups
 - [Karpenter](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/karpenter): EKS Cluster with [Karpenter](https://karpenter.sh/) provisioned for intelligent data plane management
 - [Outposts](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/outposts): EKS local cluster provisioned on [AWS Outposts](https://docs.aws.amazon.com/eks/latest/userguide/eks-outposts.html)
-- [Self Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/self_managed_node_group): EKS Cluster using self-managed node groups
-- [User Data](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data): Various supported methods of providing necessary bootstrap scripts and configuration settings via user data
+- [Self Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/self-managed-node-group): EKS Cluster using self-managed node groups
 
 ## Contributing
 
```

```diff
@@ -290,7 +277,7 @@ We are grateful to the community for contributing bugfixes and improvements! Ple
 | <a name="input_create_kms_key"></a> [create\_kms\_key](#input\_create\_kms\_key) | Controls if a KMS key for cluster encryption should be created | `bool` | `true` | no |
 | <a name="input_create_node_security_group"></a> [create\_node\_security\_group](#input\_create\_node\_security\_group) | Determines whether to create a security group for the node groups or use the existing `node_security_group_id` | `bool` | `true` | no |
 | <a name="input_custom_oidc_thumbprints"></a> [custom\_oidc\_thumbprints](#input\_custom\_oidc\_thumbprints) | Additional list of server certificate thumbprints for the OpenID Connect (OIDC) identity provider's server certificate(s) | `list(string)` | `[]` | no |
-| <a name="input_dataplane_wait_duration"></a> [dataplane\_wait\_duration](#input\_dataplane\_wait\_duration) | Duration to wait after the EKS cluster has become active before creating the dataplane components (EKS managed nodegroup(s), self-managed nodegroup(s), Fargate profile(s)) | `string` | `"30s"` | no |
+| <a name="input_dataplane_wait_duration"></a> [dataplane\_wait\_duration](#input\_dataplane\_wait\_duration) | Duration to wait after the EKS cluster has become active before creating the dataplane components (EKS managed node group(s), self-managed node group(s), Fargate profile(s)) | `string` | `"30s"` | no |
 | <a name="input_eks_managed_node_group_defaults"></a> [eks\_managed\_node\_group\_defaults](#input\_eks\_managed\_node\_group\_defaults) | Map of EKS managed node group default configurations | `any` | `{}` | no |
 | <a name="input_eks_managed_node_groups"></a> [eks\_managed\_node\_groups](#input\_eks\_managed\_node\_groups) | Map of EKS managed node group definitions to create | `any` | `{}` | no |
 | <a name="input_enable_cluster_creator_admin_permissions"></a> [enable\_cluster\_creator\_admin\_permissions](#input\_enable\_cluster\_creator\_admin\_permissions) | Indicates whether or not to add the cluster creator (the identity used by Terraform) as an administrator via access entry | `bool` | `false` | no |
```
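As a usage note for the reworded `dataplane_wait_duration` description above, the input is set on the module itself; the `90s` value below is an arbitrary illustration, not a recommendation from this diff.

```hcl
module "eks" {
  # Truncated for brevity ...

  # Wait longer than the "30s" default after the cluster becomes active
  # before creating node groups and Fargate profiles
  dataplane_wait_duration = "90s"
}
```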
