
Commit bc80724

Merge pull request #13 from terraform-aws-modules/NextDeveloperTeam-add-worker-groups

Module consumers can now create a flexible number of worker autoscaling groups.

2 parents: 23f1c37 + c8997a5; commit bc80724

File tree

18 files changed: +341 additions, -349 deletions

.gitignore

Lines changed: 2 additions & 0 deletions

@@ -9,3 +9,5 @@ Gemfile.lock
 terraform.tfstate.d/
 kubeconfig
 config-map-aws-auth.yaml
+eks-admin-cluster-role-binding.yaml
+eks-admin-service-account.yaml

.travis.yml

Lines changed: 8 additions & 1 deletion

@@ -1,16 +1,21 @@
 language: ruby
 sudo: required
 dist: trusty
+
 services:
 - docker
+
 rvm:
 - 2.4.2
+
 before_install:
 - echo "before_install"
+
 install:
 - echo "install"
 - gem install bundler --no-rdoc --no-ri
 - bundle install
+
 before_script:
 - echo 'before_script'
 - export AWS_REGION='us-east-1'
@@ -22,12 +27,13 @@ before_script:
 - unzip terraform.zip ; rm -f terraform.zip; chmod +x terraform
 - mkdir -p ${HOME}/bin ; export PATH=${PATH}:${HOME}/bin; mv terraform ${HOME}/bin/
 - terraform -v
+
 script:
 - echo 'script'
 - terraform init
 - terraform fmt -check=true
 - terraform validate -var "region=${AWS_REGION}" -var "vpc_id=vpc-123456" -var "subnets=[\"subnet-12345a\"]" -var "workers_ami_id=ami-123456" -var "cluster_ingress_cidrs=[]" -var "cluster_name=test_cluster"
-- docker run --rm -v $(pwd):/app/ --workdir=/app/ -t wata727/tflint --error-with-issues
+# - docker run --rm -v $(pwd):/app/ --workdir=/app/ -t wata727/tflint --error-with-issues
 - cd examples/eks_test_fixture
 - terraform init
 - terraform fmt -check=true
@@ -40,6 +46,7 @@ script:
 # script: ci/deploy.sh
 # on:
 #   branch: master
+
 notifications:
   email:
     recipients:

CHANGELOG.md

Lines changed: 19 additions & 5 deletions

@@ -5,22 +5,36 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](http://keepachangelog.com/) and this
 project adheres to [Semantic Versioning](http://semver.org/).
 
+## [[v1.0.0](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v0.2.0...v1.0.0)] - 2018-06-11]
+
+### Added
+
+- security group id can be provided for either/both of the cluster and the workers. If not provided, security groups will be created with sufficient rules to allow cluster-worker communication. - kudos to @tanmng on the idea ⭐
+- outputs of security group ids and worker ASG arns added for working with these resources outside the module.
+
+### Changed
+
+- Worker build out refactored to allow multiple autoscaling groups each having differing specs. If none are given, a single ASG is created with a set of sane defaults - big thanks to @kppullin 🥨
+
 ## [[v0.2.0](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v0.1.1...v0.2.0)] - 2018-06-08]
 
+### Added
+
+- ability to specify extra userdata code to execute following kubelet services start.
+- EBS optimization used whenever possible for the given instance type.
+- When `configure_kubectl_session` is set to true the current shell will be configured to talk to the kubernetes cluster using config files output from the module.
+
 ### Changed
 
 - files rendered from dedicated templates to separate out raw code and config from `hcl`
 - `workers_ami_id` is now made optional. If not specified, the module will source the latest AWS supported EKS AMI instead.
-- added ability to specify extra userdata code to execute after the second to configure and start kube services.
-- When `configure_kubectl_session` is set to true the current shell will be configured to talk to the kubernetes cluster using config files output from the module.
-- EBS optimization used whenever possible for the given instance type.
 
 ## [[v0.1.1](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v0.1.0...v0.1.1)] - 2018-06-07]
 
 ### Changed
 
-- pre-commit hooks fixed and working.
-- made progress on CI, advancing the build to the final `kitchen test` stage before failing.
+- Pre-commit hooks fixed and working.
+- Made progress on CI, advancing the build to the final `kitchen test` stage before failing.
 
 ## [v0.1.0] - 2018-06-07
 
README.md

Lines changed: 26 additions & 27 deletions

@@ -4,17 +4,17 @@ A terraform module to create a managed Kubernetes cluster on AWS EKS. Available
 through the [Terraform registry](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws).
 Inspired by and adapted from [this doc](https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html)
 and its [source code](https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started).
-Instructions on [this post](https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/)
-can help guide you through connecting to the cluster via `kubectl`.
+Read the [AWS docs on EKS to get connected to the k8s dashboard](https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html).
 
 | Branch | Build status |
 | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | master | [![build Status](https://travis-ci.org/terraform-aws-modules/terraform-aws-eks.svg?branch=master)](https://travis-ci.org/terraform-aws-modules/terraform-aws-eks) |
 
 ## Assumptions
 
-* You want to create a set of resources around an EKS cluster: namely an autoscaling group of workers and a security group for them.
-* You've created a Virtual Private Cloud (VPC) and subnets where you intend to put this EKS.
+* You want to create an EKS cluster and an autoscaling group of workers for the cluster.
+* You want these resources to exist within security groups that allow communication and coordination. These can be user provided or created within the module.
+* You've created a Virtual Private Cloud (VPC) and subnets where you intend to put the EKS resources.
 
 ## Usage example
 
@@ -28,7 +28,6 @@ module "eks" {
   subnets = ["subnet-abcde012", "subnet-bcde012a"]
   tags    = "${map("Environment", "test")}"
   vpc_id  = "vpc-abcde012"
-  cluster_ingress_cidrs = ["24.18.23.91/32"]
 }
 ```
 
@@ -52,8 +51,10 @@ This module has been packaged with [awspec](https://github.com/k1LoW/awspec) tests
 3. Ensure your AWS environment is configured (i.e. credentials and region) for test.
 4. Test using `bundle exec kitchen test` from the root of the repo.
 
-For now, connectivity to the kubernetes cluster is not tested but will be in the future.
-To test your kubectl connection manually, see the [eks_test_fixture README](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_test_fixture/README.md).
+For now, connectivity to the kubernetes cluster is not tested but will be in the
+future. If `configure_kubectl_session` is set `true`, once the test fixture has
+converged, you can query the test cluster from that terminal session with
+`kubectl get nodes --watch --kubeconfig kubeconfig`.
 
 ## Doc generation
 
@@ -93,30 +94,28 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 
 | Name | Description | Type | Default | Required |
 |------|-------------|:----:|:-----:|:-----:|
-| additional_userdata | Extra lines of userdata (bash) which are appended to the default userdata code. | string | `` | no |
-| cluster_ingress_cidrs | The CIDRs from which we can execute kubectl commands. | list | - | yes |
-| cluster_name | Name of the EKS cluster which is also used as a prefix in names of related resources. | string | - | yes |
-| cluster_version | Kubernetes version to use for the cluster. | string | `1.10` | no |
+| cluster_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | string | - | yes |
+| cluster_security_group_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers and provide API access to your current IP/32. | string | `` | no |
+| cluster_version | Kubernetes version to use for the EKS cluster. | string | `1.10` | no |
 | config_output_path | Determines where config files are placed if using configure_kubectl_session and you want config files to land outside the current working directory. | string | `./` | no |
-| configure_kubectl_session | Configure the current session's kubectl to use the instantiated cluster. | string | `false` | no |
-| ebs_optimized_workers | If left at default of true, will use ebs optimization if available on the given instance type. | string | `true` | no |
-| subnets | A list of subnets to associate with the cluster's underlying instances. | list | - | yes |
+| configure_kubectl_session | Configure the current session's kubectl to use the instantiated EKS cluster. | string | `true` | no |
+| subnets | A list of subnets to place the EKS cluster and workers within. | list | - | yes |
 | tags | A map of tags to add to all resources. | string | `<map>` | no |
-| vpc_id | VPC id where the cluster and other resources will be deployed. | string | - | yes |
-| workers_ami_id | AMI ID for the eks workers. If none is provided, Terraform will search for the latest version of their EKS optimized worker AMI. | string | `` | no |
-| workers_asg_desired_capacity | Desired worker capacity in the autoscaling group. | string | `1` | no |
-| workers_asg_max_size | Maximum worker capacity in the autoscaling group. | string | `3` | no |
-| workers_asg_min_size | Minimum worker capacity in the autoscaling group. | string | `1` | no |
-| workers_instance_type | Size of the workers instances. | string | `m4.large` | no |
+| vpc_id | VPC where the cluster and workers will be deployed. | string | - | yes |
+| worker_groups | A list of maps defining worker group configurations. See workers_group_defaults for valid keys. | list | `<list>` | no |
+| worker_security_group_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | string | `` | no |
+| workers_group_defaults | Default values for target groups as defined by the list of maps. | map | `<map>` | no |
 
 ## Outputs
 
 | Name | Description |
 |------|-------------|
-| cluster_certificate_authority_data | Nested attribute containing certificate-authority-data for your cluster. Tis is the base64 encoded certificate data required to communicate with your cluster. |
-| cluster_endpoint | The endpoint for your Kubernetes API server. |
-| cluster_id | The name/id of the cluster. |
-| cluster_security_group_ids | description |
-| cluster_version | The Kubernetes server version for the cluster. |
-| config_map_aws_auth | A kubernetes configuration to authenticate to this cluster. |
-| kubeconfig | kubectl config file contents for this cluster. |
+| cluster_certificate_authority_data | Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster. |
+| cluster_endpoint | The endpoint for your EKS Kubernetes API. |
+| cluster_id | The name/id of the EKS cluster. |
+| cluster_security_group_id | Security group ID attached to the EKS cluster. |
+| cluster_version | The Kubernetes server version for the EKS cluster. |
+| config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster. |
+| kubeconfig | kubectl config file contents for this EKS cluster. |
+| worker_security_group_id | Security group ID attached to the EKS workers. |
+| workers_asg_arns | ARNs of the autoscaling groups containing workers. |
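
The new `worker_groups` input replaces the old flat `workers_*` variables. A minimal consumer sketch, using only the keys that appear in this commit (`instance_type` and `additional_userdata`); the module source address is from the README and the IDs are placeholders:

```hcl
# Hypothetical module call using the new worker_groups input (Terraform 0.11
# interpolation syntax, matching the module's own style).
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "example-cluster"
  subnets      = ["subnet-abcde012", "subnet-bcde012a"]
  vpc_id       = "vpc-abcde012"

  # One map per autoscaling group; keys absent from a map fall back to
  # the module's workers_group_defaults.
  worker_groups = "${list(
    map("instance_type", "t2.small"),
    map("instance_type", "m4.large", "additional_userdata", "echo extra setup"),
  )}"
}
```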

cluster.tf

Lines changed: 11 additions & 7 deletions

@@ -4,7 +4,7 @@ resource "aws_eks_cluster" "this" {
   version = "${var.cluster_version}"
 
   vpc_config {
-    security_group_ids = ["${aws_security_group.cluster.id}"]
+    security_group_ids = ["${local.cluster_security_group_id}"]
     subnet_ids         = ["${var.subnets}"]
   }
 
@@ -16,39 +16,43 @@ resource "aws_eks_cluster" "this" {
 
 resource "aws_security_group" "cluster" {
   name_prefix = "${var.cluster_name}"
-  description = "Cluster communication with workers nodes"
+  description = "EKS cluster security group."
   vpc_id      = "${var.vpc_id}"
   tags        = "${merge(var.tags, map("Name", "${var.cluster_name}-eks_cluster_sg"))}"
+  count       = "${var.cluster_security_group_id == "" ? 1 : 0}"
 }
 
 resource "aws_security_group_rule" "cluster_egress_internet" {
-  description       = "Allow cluster egress to the Internet."
+  description       = "Allow cluster egress access to the Internet."
   protocol          = "-1"
   security_group_id = "${aws_security_group.cluster.id}"
   cidr_blocks       = ["0.0.0.0/0"]
   from_port         = 0
   to_port           = 0
   type              = "egress"
+  count             = "${var.cluster_security_group_id == "" ? 1 : 0}"
 }
 
 resource "aws_security_group_rule" "cluster_https_worker_ingress" {
-  description              = "Allow pods to communicate with the cluster API Server."
+  description              = "Allow pods to communicate with the EKS cluster API."
   protocol                 = "tcp"
   security_group_id        = "${aws_security_group.cluster.id}"
-  source_security_group_id = "${aws_security_group.workers.id}"
+  source_security_group_id = "${local.worker_security_group_id}"
   from_port                = 443
   to_port                  = 443
   type                     = "ingress"
+  count                    = "${var.cluster_security_group_id == "" ? 1 : 0}"
 }
 
 resource "aws_security_group_rule" "cluster_https_cidr_ingress" {
-  cidr_blocks       = ["${var.cluster_ingress_cidrs}"]
-  description       = "Allow communication with the cluster API Server."
+  cidr_blocks       = ["${local.workstation_external_cidr}"]
+  description       = "Allow kubectl communication with the EKS cluster API."
   protocol          = "tcp"
   security_group_id = "${aws_security_group.cluster.id}"
   from_port         = 443
   to_port           = 443
   type              = "ingress"
+  count             = "${var.cluster_security_group_id == "" ? 1 : 0}"
 }
 
 resource "aws_iam_role" "cluster" {
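
The `count = "${var.cluster_security_group_id == "" ? 1 : 0}"` lines use Terraform 0.11's conditional-creation idiom: the security group and its rules are only created when the consumer does not supply one. The `local.cluster_security_group_id` referenced above is defined outside this hunk; a plausible sketch of it, assuming the usual join-over-splat pattern for zero-or-one resources:

```hcl
# Assumed definition (not shown in this diff): use the caller's security group
# when given, otherwise the module-created one. join() over the splat list
# collapses cleanly to "" when the resource has count = 0.
locals {
  cluster_security_group_id = "${var.cluster_security_group_id == "" ? join("", aws_security_group.cluster.*.id) : var.cluster_security_group_id}"
}
```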

data.tf

Lines changed: 24 additions & 24 deletions

@@ -1,13 +1,7 @@
 data "aws_region" "current" {}
 
-data "aws_ami" "eks_worker" {
-  filter {
-    name   = "name"
-    values = ["eks-worker-*"]
-  }
-
-  most_recent = true
-  owners      = ["602401143452"] # Amazon
+data "http" "workstation_external_ip" {
+  url = "http://icanhazip.com"
 }
 
 data "aws_iam_policy_document" "workers_assume_role_policy" {
@@ -25,6 +19,16 @@ data "aws_iam_policy_document" "workers_assume_role_policy" {
   }
 }
 
+data "aws_ami" "eks_worker" {
+  filter {
+    name   = "name"
+    values = ["eks-worker-*"]
+  }
+
+  most_recent = true
+  owners      = ["602401143452"] # Amazon
+}
+
 data "aws_iam_policy_document" "cluster_assume_role_policy" {
   statement {
     sid = "EKSClusterAssumeRole"
@@ -40,19 +44,6 @@ data "aws_iam_policy_document" "cluster_assume_role_policy" {
   }
 }
 
-data template_file userdata {
-  template = "${file("${path.module}/templates/userdata.sh.tpl")}"
-
-  vars {
-    region              = "${data.aws_region.current.name}"
-    max_pod_count       = "${lookup(local.max_pod_per_node, var.workers_instance_type)}"
-    cluster_name        = "${var.cluster_name}"
-    endpoint            = "${aws_eks_cluster.this.endpoint}"
-    cluster_auth_base64 = "${aws_eks_cluster.this.certificate_authority.0.data}"
-    additional_userdata = "${var.additional_userdata}"
-  }
-}
-
 data template_file kubeconfig {
   template = "${file("${path.module}/templates/kubeconfig.tpl")}"
 
@@ -72,7 +63,16 @@ data template_file config_map_aws_auth {
   }
 }
 
-module "ebs_optimized" {
-  source        = "./modules/tf_util_ebs_optimized"
-  instance_type = "${var.workers_instance_type}"
+data template_file userdata {
+  template = "${file("${path.module}/templates/userdata.sh.tpl")}"
+  count    = "${length(var.worker_groups)}"
+
+  vars {
+    region              = "${data.aws_region.current.name}"
+    cluster_name        = "${var.cluster_name}"
+    endpoint            = "${aws_eks_cluster.this.endpoint}"
+    cluster_auth_base64 = "${aws_eks_cluster.this.certificate_authority.0.data}"
+    max_pod_count       = "${lookup(local.max_pod_per_node, lookup(var.worker_groups[count.index], "instance_type", lookup(var.workers_group_defaults, "instance_type")))}"
+    additional_userdata = "${lookup(var.worker_groups[count.index], "additional_userdata", lookup(var.workers_group_defaults, "additional_userdata"))}"
+  }
 }
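
The nested `lookup()` calls implement per-group overrides with defaults: the outer lookup reads a key from the group's map, and its third argument (a lookup into `workers_group_defaults`) supplies the value when that key is absent. In isolation, with made-up example values:

```hcl
# Illustrative only: how the override-with-default lookup resolves for one group.
locals {
  example_groups   = "${list(map("instance_type", "t2.small"))}"
  example_defaults = "${map("instance_type", "m4.large", "additional_userdata", "")}"

  # Resolves to "t2.small"; would resolve to "m4.large" if the group map
  # omitted the instance_type key.
  resolved_type = "${lookup(local.example_groups[0], "instance_type", lookup(local.example_defaults, "instance_type"))}"
}
```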

examples/eks_test_fixture/main.tf

Lines changed: 13 additions & 18 deletions

@@ -11,18 +11,16 @@ provider "random" {
   version = "= 1.3.1"
 }
 
-provider "http" {}
-provider "local" {}
-
 data "aws_availability_zones" "available" {}
 
-data "http" "workstation_external_ip" {
-  url = "http://icanhazip.com"
-}
-
 locals {
-  workstation_external_cidr = "${chomp(data.http.workstation_external_ip.body)}/32"
-  cluster_name              = "test-eks-${random_string.suffix.result}"
+  cluster_name = "test-eks-${random_string.suffix.result}"
+
+  worker_groups = "${list(
+    map("instance_type", "t2.small",
+        "additional_userdata", "echo foo bar"
+    ),
+  )}"
 
   tags = "${map("Environment", "test",
                 "GithubRepo", "terraform-aws-eks",
@@ -50,13 +48,10 @@ module "vpc" {
 }
 
 module "eks" {
-  source                    = "../.."
-  cluster_name              = "${local.cluster_name}"
-  subnets                   = "${module.vpc.public_subnets}"
-  tags                      = "${local.tags}"
-  vpc_id                    = "${module.vpc.vpc_id}"
-  cluster_ingress_cidrs     = ["${local.workstation_external_cidr}"]
-  workers_instance_type     = "t2.small"
-  additional_userdata       = "echo hello world"
-  configure_kubectl_session = true
+  source        = "../.."
+  cluster_name  = "${local.cluster_name}"
+  subnets       = "${module.vpc.public_subnets}"
+  tags          = "${local.tags}"
+  vpc_id        = "${module.vpc.vpc_id}"
+  worker_groups = "${local.worker_groups}"
 }

examples/eks_test_fixture/outputs.tf

Lines changed: 2 additions & 2 deletions

@@ -3,9 +3,9 @@ output "cluster_endpoint" {
   value = "${module.eks.cluster_endpoint}"
 }
 
-output "cluster_security_group_ids" {
+output "cluster_security_group_id" {
   description = "Security group ids attached to the cluster control plane."
-  value       = "${module.eks.cluster_security_group_ids}"
+  value       = "${module.eks.cluster_security_group_id}"
 }
 
 output "kubectl_config" {
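
Consumers referencing the old plural output must switch to the singular name. For example, a hypothetical consumer-side rule that grants extra API access to the module-managed cluster security group (the CIDR is a placeholder):

```hcl
# Hypothetical rule using the renamed singular output from the module.
resource "aws_security_group_rule" "extra_cluster_ingress" {
  description       = "Example: allow an office CIDR to reach the cluster API."
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 443
  to_port           = 443
  cidr_blocks       = ["203.0.113.0/24"]
  security_group_id = "${module.eks.cluster_security_group_id}"
}
```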

kubectl.tf

Lines changed: 24 additions & 0 deletions

@@ -0,0 +1,24 @@
+resource "local_file" "kubeconfig" {
+  content  = "${data.template_file.kubeconfig.rendered}"
+  filename = "${var.config_output_path}/kubeconfig"
+  count    = "${var.configure_kubectl_session ? 1 : 0}"
+}
+
+resource "local_file" "config_map_aws_auth" {
+  content  = "${data.template_file.config_map_aws_auth.rendered}"
+  filename = "${var.config_output_path}/config-map-aws-auth.yaml"
+  count    = "${var.configure_kubectl_session ? 1 : 0}"
+}
+
+resource "null_resource" "configure_kubectl" {
+  provisioner "local-exec" {
+    command = "kubectl apply -f ${var.config_output_path}/config-map-aws-auth.yaml --kubeconfig ${var.config_output_path}/kubeconfig"
+  }
+
+  triggers {
+    config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
+    kubeconfig_rendered = "${data.template_file.kubeconfig.rendered}"
+  }
+
+  count = "${var.configure_kubectl_session ? 1 : 0}"
+}
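
All three resources are gated on `configure_kubectl_session` (which this PR defaults to `true`), so setting it `false` skips both the file generation and the `kubectl apply`. A sketch of steering the generated files somewhere other than the working directory via `config_output_path`; the path and module address are placeholders:

```hcl
# Hypothetical consumer call: write kubeconfig and the aws-auth config map
# into ./generated instead of the current working directory.
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # required inputs (cluster_name, subnets, vpc_id) elided for brevity

  configure_kubectl_session = true
  config_output_path        = "./generated"
}
```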
