
Commit 80ab44c

Update experimental CI for EKS (#1261)
In order to avoid provider configuration issues associated with a single-apply create, use two applies, and use the AWS provider rather than the EKS module.
1 parent 0dfb1f0 commit 80ab44c
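The core of the change (visible in the kubernetes-config diff below) is to read cluster connection details through AWS data sources in a second, separate apply, rather than from resources created in the same plan. A minimal sketch of that wiring, condensed from the diff:

```
# Second apply: the cluster already exists, so these data sources
# resolve during plan and the Kubernetes provider is configured cleanly.
data "aws_eks_cluster" "default" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "default" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}
```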

File tree: 15 files changed (+235 −210)
Lines changed: 34 additions & 24 deletions

@@ -1,41 +1,35 @@
-# EKS test infrastructure
+# EKS (Amazon Elastic Kubernetes Service)
 
-This directory contains files used for testing the Kubernetes provider in our internal CI system. See the [examples](https://github.com/hashicorp/terraform-provider-kubernetes/tree/master/_examples/eks) directory instead, if you're looking for example code.
+This example demonstrates the most reliable way to use the Kubernetes provider together with the AWS provider to create an EKS cluster. By keeping the two providers' resources in separate Terraform states (or separate workspaces using [Terraform Cloud](https://app.terraform.io/)), we can limit the scope of impact and apply the right changes to the right place. (For example, updating the underlying EKS infrastructure without having to navigate the Kubernetes provider configuration challenges caused by modifying EKS cluster attributes in a single apply.)
 
-To run this test infrastructure, you will need the following environment variables to be set:
+You will need the following environment variables to be set:
 
 - `AWS_ACCESS_KEY_ID`
 - `AWS_SECRET_ACCESS_KEY`
 
 See [AWS Provider docs](https://www.terraform.io/docs/providers/aws/index.html#configuration-reference) for more details about these variables and alternatives, like `AWS_PROFILE`.
 
-Ensure that `KUBE_CONFIG_PATH` and `KUBE_CONFIG_PATHS` environment variables are NOT set, as they will interfere with the cluster build.
-
-```
-unset KUBE_CONFIG_PATH
-unset KUBE_CONFIG_PATHS
-```
+## Create EKS cluster
 
-To install the EKS cluster using default values, run terraform init and apply from the directory containing this README.
+Choose a name for the cluster, or use the Terraform config in the current directory to create a random name.
 
 ```
 terraform init
-terraform apply
+terraform apply --auto-approve
+export CLUSTERNAME=$(terraform output -raw cluster_name)
 ```
 
-## Kubeconfig for manual CLI access
-
-The token contained in the kubeconfig expires in 15 minutes. The token can be refreshed by running `terraform apply` again. Export the KUBECONFIG to manually access the cluster:
+Change into the eks-cluster directory and create the EKS cluster infrastructure.
 
 ```
-terraform apply
-export KUBECONFIG=$(terraform output -raw kubeconfig_path)
-kubectl get pods -n test
+cd eks-cluster
+terraform init
+terraform apply -var=cluster_name=$CLUSTERNAME
+cd -
 ```
 
-## Optional variables
-
-The Kubernetes version can be specified at apply time:
+Optionally, the Kubernetes version can be specified at apply time:
 
 ```
 terraform apply -var=kubernetes_version=1.18
@@ -44,14 +38,30 @@ terraform apply -var=kubernetes_version=1.18
 See https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html for currently available versions.
 
-### Worker node count and instance type
+## Create Kubernetes resources
 
-The number of worker nodes, and the instance type, can be specified at apply time:
+Change into the kubernetes-config directory to apply Kubernetes resources to the new cluster.
 
 ```
-terraform apply -var=workers_count=4 -var=workers_type=m4.xlarge
+cd kubernetes-config
+terraform init
+terraform apply -var=cluster_name=$CLUSTERNAME
 ```
 
-## Additional configuration of EKS
+## Deleting the cluster
+
+First, delete the Kubernetes resources as shown below. This gives Ingress and Service related Load Balancers a chance to delete before the other AWS resources are removed.
 
-To view all available configuration options for the EKS module used in this example, see [terraform-aws-modules/eks docs](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest).
+```
+cd kubernetes-config
+terraform destroy -var=cluster_name=$CLUSTERNAME
+cd -
+```
+
+Then delete the EKS related resources:
+
+```
+cd eks-cluster
+terraform destroy -var=cluster_name=$CLUSTERNAME
+cd -
+```
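The top-level naming configuration the README refers to is not shown in this diff. A minimal sketch of what it could look like, reusing the `random_id` settings removed from vpc.tf further below (an assumption, not the committed file):

```
# Hypothetical top-level config: generates a random name such as "k8s-acc-ab12".
resource "random_id" "cluster_name" {
  byte_length = 2
  prefix      = "k8s-acc-"
}

# Consumed by the README's `terraform output -raw cluster_name` step;
# random_id's hex output includes the prefix.
output "cluster_name" {
  value = random_id.cluster_name.hex
}
```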
Lines changed: 36 additions & 0 deletions

@@ -0,0 +1,36 @@
+resource "aws_eks_cluster" "k8s-acc" {
+  name     = var.cluster_name
+  role_arn = aws_iam_role.k8s-acc-cluster.arn
+
+  vpc_config {
+    subnet_ids = aws_subnet.k8s-acc.*.id
+  }
+
+  # Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
+  # Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
+  depends_on = [
+    aws_iam_role_policy_attachment.k8s-acc-AmazonEKSClusterPolicy,
+    aws_iam_role_policy_attachment.k8s-acc-AmazonEKSVPCResourceController,
+  ]
+}
+
+resource "aws_eks_node_group" "k8s-acc" {
+  cluster_name    = aws_eks_cluster.k8s-acc.name
+  node_group_name = var.cluster_name
+  node_role_arn   = aws_iam_role.k8s-acc-node.arn
+  subnet_ids      = aws_subnet.k8s-acc.*.id
+
+  scaling_config {
+    desired_size = 1
+    max_size     = 1
+    min_size     = 1
+  }
+
+  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
+  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
+  depends_on = [
+    aws_iam_role_policy_attachment.k8s-acc-AmazonEKSWorkerNodePolicy,
+    aws_iam_role_policy_attachment.k8s-acc-AmazonEKS_CNI_Policy,
+    aws_iam_role_policy_attachment.k8s-acc-AmazonEC2ContainerRegistryReadOnly,
+  ]
+}
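The README's `kubernetes_version` variable does not appear in the file above as shown. A hedged sketch of how it would typically attach, assuming a `kubernetes_version` variable is declared elsewhere in this directory:

```
# Assumed variable; the README passes -var=kubernetes_version=1.18.
variable "kubernetes_version" {
  type    = string
  default = "1.18"
}

resource "aws_eks_cluster" "k8s-acc" {
  name     = var.cluster_name
  role_arn = aws_iam_role.k8s-acc-cluster.arn

  # aws_eks_cluster exposes a `version` argument for the Kubernetes minor version.
  version = var.kubernetes_version

  vpc_config {
    subnet_ids = aws_subnet.k8s-acc.*.id
  }
}
```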
Lines changed: 60 additions & 0 deletions

@@ -0,0 +1,60 @@
+resource "aws_iam_role" "k8s-acc-cluster" {
+  name = var.cluster_name
+
+  assume_role_policy = <<POLICY
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "Service": "eks.amazonaws.com"
+      },
+      "Action": "sts:AssumeRole"
+    }
+  ]
+}
+POLICY
+}
+
+resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKSClusterPolicy" {
+  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
+  role       = aws_iam_role.k8s-acc-cluster.name
+}
+
+# Optionally, enable Security Groups for Pods
+# Reference: https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html
+resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKSVPCResourceController" {
+  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
+  role       = aws_iam_role.k8s-acc-cluster.name
+}
+
+resource "aws_iam_role" "k8s-acc-node" {
+  name = "${var.cluster_name}-node"
+
+  assume_role_policy = jsonencode({
+    Statement = [{
+      Action = "sts:AssumeRole"
+      Effect = "Allow"
+      Principal = {
+        Service = "ec2.amazonaws.com"
+      }
+    }]
+    Version = "2012-10-17"
+  })
+}
+
+resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKSWorkerNodePolicy" {
+  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
+  role       = aws_iam_role.k8s-acc-node.name
+}
+
+resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKS_CNI_Policy" {
+  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
+  role       = aws_iam_role.k8s-acc-node.name
+}
+
+resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEC2ContainerRegistryReadOnly" {
+  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
+  role       = aws_iam_role.k8s-acc-node.name
+}
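The two roles above declare their trust policies in different styles: a heredoc JSON document for the cluster role, `jsonencode` for the node role. The two forms produce equivalent JSON; for consistency the cluster role could also be written as, for example:

```
resource "aws_iam_role" "k8s-acc-cluster" {
  name = var.cluster_name

  # Same trust policy as the heredoc version, expressed in HCL.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "eks.amazonaws.com"
      }
    }]
  })
}
```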
Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+variable "cluster_name" {
+  type = string
+}
Lines changed: 10 additions & 0 deletions

@@ -0,0 +1,10 @@
+terraform {
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = "3.38.0"
+    }
+  }
+}
+
+
@@ -1,38 +1,16 @@
-terraform {
-  required_providers {
-    aws = {
-      source  = "hashicorp/aws"
-      version = "3.22.0"
-    }
-  }
-}
-
-# VPC Resources
-#  * VPC
-#  * Subnets
-#  * Internet Gateway
-#  * Route Table
-#
-# Using these data sources allows the configuration to be
-# generic for any region.
 data "aws_region" "current" {
 }
 
 data "aws_availability_zones" "available" {
 }
 
-resource "random_id" "cluster_name" {
-  byte_length = 2
-  prefix      = "k8s-acc-"
-}
-
 resource "aws_vpc" "k8s-acc" {
   cidr_block           = "10.0.0.0/16"
   enable_dns_support   = true
   enable_dns_hostnames = true
   tags = {
-    "Name"                                                = "terraform-eks-k8s-acc-node"
-    "kubernetes.io/cluster/${random_id.cluster_name.hex}" = "shared"
+    "Name"                                      = "terraform-eks-k8s-acc-node"
+    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
   }
 }
 
@@ -45,9 +23,9 @@ resource "aws_subnet" "k8s-acc" {
   map_public_ip_on_launch = true
 
   tags = {
-    "Name"                                                = "terraform-eks-k8s-acc-node"
-    "kubernetes.io/cluster/${random_id.cluster_name.hex}" = "shared"
-    "kubernetes.io/role/elb"                              = 1
+    "Name"                                      = "terraform-eks-k8s-acc-node"
+    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
+    "kubernetes.io/role/elb"                    = 1
   }
 }
 
@@ -74,3 +52,5 @@ resource "aws_route_table_association" "k8s-acc" {
   subnet_id      = aws_subnet.k8s-acc[count.index].id
   route_table_id = aws_route_table.k8s-acc.id
 }
+
+
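The subnet definition itself falls outside the hunks shown above; only its tags and `map_public_ip_on_launch` lines appear in the diff. A hypothetical reconstruction of its typical shape, assuming the usual one-subnet-per-AZ `count` pattern implied by `count.index` and the availability zones data source:

```
# Hypothetical reconstruction; not part of the committed diff shown here.
resource "aws_subnet" "k8s-acc" {
  count = 2

  availability_zone       = data.aws_availability_zones.available.names[count.index]
  cidr_block              = cidrsubnet(aws_vpc.k8s-acc.cidr_block, 8, count.index)
  vpc_id                  = aws_vpc.k8s-acc.id
  map_public_ip_on_launch = true

  tags = {
    "Name"                                      = "terraform-eks-k8s-acc-node"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                    = 1
  }
}
```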
Lines changed: 28 additions & 0 deletions

@@ -0,0 +1,28 @@
+apiVersion: v1
+preferences: {}
+kind: Config
+
+clusters:
+- cluster:
+    server: ${endpoint}
+    certificate-authority-data: ${clusterca}
+  name: ${cluster_name}
+
+contexts:
+- context:
+    cluster: ${cluster_name}
+    user: ${cluster_name}
+  name: ${cluster_name}
+
+current-context: ${cluster_name}
+
+users:
+- name: ${cluster_name}
+  user:
+    exec:
+      apiVersion: client.authentication.k8s.io/v1alpha1
+      command: aws-iam-authenticator
+      args:
+        - token
+        - --cluster-id
+        - ${cluster_name}
Lines changed: 31 additions & 29 deletions

@@ -1,46 +1,48 @@
-terraform {
-  required_providers {
-    kubernetes-local = {
-      source  = "localhost/test/kubernetes"
-      version = "9.9.9"
-    }
-    helm = {
-      source  = "localhost/test/helm"
-      version = "9.9.9"
-    }
+data "aws_eks_cluster" "default" {
+  name = var.cluster_name
+}
+
+data "aws_eks_cluster_auth" "default" {
+  name = var.cluster_name
+}
+
+provider "kubernetes" {
+  host                   = data.aws_eks_cluster.default.endpoint
+  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
+  token                  = data.aws_eks_cluster_auth.default.token
+}
+
+provider "helm" {
+  kubernetes {
+    host                   = data.aws_eks_cluster.default.endpoint
+    cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
+    token                  = data.aws_eks_cluster_auth.default.token
   }
 }
 
-# For this resource, we need to explicitly establish the dependency on the cluster API, because the dependency is not yet present in this file.
-# https://github.com/terraform-aws-modules/terraform-aws-eks/blob/31ad394dbc61390dc46643b571249a2b670e9caa/kubectl.tf
+resource "local_file" "kubeconfig" {
+  sensitive_content = templatefile("${path.module}/kubeconfig.tpl", {
+    cluster_name = var.cluster_name,
+    clusterca    = data.aws_eks_cluster.default.certificate_authority[0].data,
+    endpoint     = data.aws_eks_cluster.default.endpoint,
+  })
+  filename = "./kubeconfig-${var.cluster_name}"
+}
+
 resource "kubernetes_namespace" "test" {
-  depends_on = [var.cluster_name]
-  provider   = kubernetes-local
   metadata {
     name = "test"
   }
 }
 
-resource helm_release nginx_ingress {
+resource "helm_release" "nginx_ingress" {
+  namespace  = kubernetes_namespace.test.metadata.0.name
   wait       = true
   timeout    = 600
 
   name       = "ingress-nginx"
 
   repository = "https://kubernetes.github.io/ingress-nginx"
   chart      = "ingress-nginx"
-  version    = "v3.24.0"
-
-  set {
-    name  = "controller.updateStrategy.rollingUpdate.maxUnavailable"
-    value = "1"
-  }
-  set {
-    name  = "controller.replicaCount"
-    value = "2"
-  }
-  set_sensitive {
-    name  = "controller.maxmindLicenseKey"
-    value = "testSensitiveValue"
-  }
+  version    = "v3.30.0"
 }
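The `aws_eks_cluster_auth` token used above is short-lived (the old README noted it expires in 15 minutes), so long sessions refresh it by re-running `terraform apply`. An alternative not used in this commit is exec-based credentials; a sketch, assuming the `aws` CLI is on the PATH:

```
provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)

  # Fetch a fresh token on every provider invocation instead of
  # embedding a short-lived one in the configuration.
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```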
Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+output "kubeconfig" {
+  value = abspath("${path.root}/${local_file.kubeconfig.filename}")
+}
Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+variable "cluster_name" {
+  type = string
+}
