
Commit 68903da

maxgio and aknysh authored

Feature/cluster autoscaler (#13)

* Use splat syntax to return NG underlying resources
* Update EKS cluster module's version in examples: update from 0.13.0 to 0.16.0
* Enable the NG to scale the ASG
* Update the cluster-autoscaler-enabling tags
* Update Node Group dependencies: add the cluster autoscaling policy attachment to the list of the Node Group's dependencies
* Leverage the Labels module to name the policy
* Update documentation
* Add GitHub workflow files

Co-authored-by: Andriy Knysh <[email protected]>

1 parent 740c50f · commit 68903da

File tree

11 files changed: +172 −1 lines changed


.github/CODEOWNERS

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
+# Use this file to define individuals or teams that are responsible for code in a repository.
+# Read more: <https://help.github.com/articles/about-codeowners/>
+
+* @cloudposse/engineering
Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+---
+name: Bug report
+about: Create a report to help us improve
+title: ''
+labels: 'bug'
+assignees: ''
+
+---
+
+Found a bug? Maybe our [Slack Community](https://slack.cloudposse.com) can help.
+
+[![Slack Community](https://slack.cloudposse.com/badge.svg)](https://slack.cloudposse.com)
+
+## Describe the Bug
+A clear and concise description of what the bug is.
+
+## Expected Behavior
+A clear and concise description of what you expected to happen.
+
+## Steps to Reproduce
+Steps to reproduce the behavior:
+1. Go to '...'
+2. Run '....'
+3. Enter '....'
+4. See error
+
+## Screenshots
+If applicable, add screenshots or logs to help explain your problem.
+
+## Environment (please complete the following information):
+
+Anything that will help us triage the bug will help. Here are some ideas:
+- OS: [e.g. Linux, OSX, WSL, etc]
+- Version [e.g. 10.15]
+
+## Additional Context
+Add any other context about the problem here.

.github/ISSUE_TEMPLATE/config.yml

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+blank_issues_enabled: false
+
+contact_links:
+
+- name: Community Slack Team
+  url: https://cloudposse.com/slack/
+  about: |-
+    Please ask and answer questions here.
+
+- name: Office Hours
+  url: https://cloudposse.com/office-hours/
+  about: |-
+    Join us every Wednesday for FREE Office Hours (lunch & learn).
+
+- name: DevOps Accelerator Program
+  url: https://cloudposse.com/accelerate/
+  about: |-
+    Own your infrastructure in record time. We build it. You drive it.
Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
+---
+name: Feature Request
+about: Suggest an idea for this project
+title: ''
+labels: 'feature request'
+assignees: ''
+
+---
+
+Have a question? Please checkout our [Slack Community](https://slack.cloudposse.com) in the `#geodesic` channel or visit our [Slack Archive](https://archive.sweetops.com/geodesic/).
+
+[![Slack Community](https://slack.cloudposse.com/badge.svg)](https://slack.cloudposse.com)
+
+## Describe the Feature
+
+A clear and concise description of what the feature is.
+
+## Expected Behavior
+
+A clear and concise description of what you expected to happen.
+
+## Use Case
+
+Is your feature request related to a problem/challenge you are trying to solve? Please provide some additional context of why this feature or capability will be valuable.
+
+## Describe Ideal Solution
+
+A clear and concise description of what you want to happen. If you don't know, that's okay.
+
+## Alternatives Considered
+
+Explain what alternative solutions or features you've considered.
+
+## Additional Context
+
+Add any other context or screenshots about the feature request here.

.github/ISSUE_TEMPLATE/question.md

Whitespace-only changes.

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+## what
+* Describe high-level what changed as a result of these commits (i.e. in plain-english, what do these changes mean?)
+* Use bullet points to be concise and to the point.
+
+## why
+* Provide the justifications for the changes (e.g. business case).
+* Describe why these changes were made (e.g. why do these commits fix the problem?)
+* Use bullet points to be concise and to the point.
+
+## references
+* Link to any supporting github issues or helpful documentation to add some context (e.g. stackoverflow).
+* Use `closes #123`, if this PR closes a GitHub issue `#123`
+
Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
+name: Slash Command Dispatch
+on:
+  issue_comment:
+    types: [created]
+
+jobs:
+  slashCommandDispatch:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+
+      - name: Slash Command Dispatch
+        uses: cloudposse/actions/github/[email protected]
+        with:
+          token: ${{ secrets.GITHUB_BOT_TOKEN }}
+          reaction-token: ${{ secrets.GITHUB_TOKEN }}
+          repository: cloudposse/actions
+          commands: rebuild-readme, terraform-fmt
+          permission: none
+          issue-type: pull-request

README.md

Lines changed: 1 addition & 0 deletions
@@ -203,6 +203,7 @@ Available targets:
 | desired_size | Desired number of worker nodes | number | - | yes |
 | disk_size | Disk size in GiB for worker nodes. Defaults to 20. Terraform will only perform drift detection if a configuration value is provided | number | `20` | no |
 | ec2_ssh_key | SSH key name that should be used to access the worker nodes | string | `null` | no |
+| enable_cluster_autoscaler | Whether to enable node group to scale the Auto Scaling Group | bool | `false` | no |
 | enabled | Whether to create the resources. Set to `false` to prevent the module from creating any resources | bool | `true` | no |
 | existing_workers_role_policy_arns | List of existing policy ARNs that will be attached to the workers default role on creation | list(string) | `<list>` | no |
 | existing_workers_role_policy_arns_count | Count of existing policy ARNs that will be attached to the workers default role on creation. Needed to prevent Terraform error `count can't be computed` | number | `0` | no |

docs/terraform.md

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@
 | desired_size | Desired number of worker nodes | number | - | yes |
 | disk_size | Disk size in GiB for worker nodes. Defaults to 20. Terraform will only perform drift detection if a configuration value is provided | number | `20` | no |
 | ec2_ssh_key | SSH key name that should be used to access the worker nodes | string | `null` | no |
+| enable_cluster_autoscaler | Whether to enable node group to scale the Auto Scaling Group | bool | `false` | no |
 | enabled | Whether to create the resources. Set to `false` to prevent the module from creating any resources | bool | `true` | no |
 | existing_workers_role_policy_arns | List of existing policy ARNs that will be attached to the workers default role on creation | list(string) | `<list>` | no |
 | existing_workers_role_policy_arns_count | Count of existing policy ARNs that will be attached to the workers default role on creation. Needed to prevent Terraform error `count can't be computed` | number | `0` | no |

main.tf

Lines changed: 36 additions & 1 deletion
@@ -8,7 +8,7 @@ locals {
       "k8s.io/cluster-autoscaler/${var.cluster_name}" = "owned"
     },
     {
-      "k8s.io/cluster-autoscaler/enabled" = "true"
+      "k8s.io/cluster-autoscaler/enabled" = "${var.enable_cluster_autoscaler}"
     }
   )
 }
@@ -38,6 +38,34 @@ data "aws_iam_policy_document" "assume_role" {
   }
 }

+data "aws_iam_policy_document" "amazon_eks_worker_node_autoscaler_policy" {
+  count = (var.enabled && var.enable_cluster_autoscaler) ? 1 : 0
+  statement {
+    sid = "AllowToScaleEKSNodeGroupAutoScalingGroup"
+
+    actions = [
+      "autoscaling:DescribeAutoScalingGroups",
+      "autoscaling:DescribeAutoScalingInstances",
+      "autoscaling:DescribeLaunchConfigurations",
+      "autoscaling:DescribeTags",
+      "autoscaling:SetDesiredCapacity",
+      "autoscaling:TerminateInstanceInAutoScalingGroup",
+      "ec2:DescribeLaunchTemplateVersions"
+    ]
+
+    resources = [
+      "*"
+    ]
+  }
+}
+
+resource "aws_iam_policy" "amazon_eks_worker_node_autoscaler_policy" {
+  count  = (var.enabled && var.enable_cluster_autoscaler) ? 1 : 0
+  name   = "${module.label.id}-autoscaler"
+  path   = "/"
+  policy = join("", data.aws_iam_policy_document.amazon_eks_worker_node_autoscaler_policy.*.json)
+}
+
 resource "aws_iam_role" "default" {
   count = var.enabled ? 1 : 0
   name  = module.label.id
@@ -51,6 +79,12 @@ resource "aws_iam_role_policy_attachment" "amazon_eks_worker_node_policy" {
   role = join("", aws_iam_role.default.*.name)
 }

+resource "aws_iam_role_policy_attachment" "amazon_eks_worker_node_autoscaler_policy" {
+  count      = (var.enabled && var.enable_cluster_autoscaler) ? 1 : 0
+  policy_arn = join("", aws_iam_policy.amazon_eks_worker_node_autoscaler_policy.*.arn)
+  role       = join("", aws_iam_role.default.*.name)
+}
+
 resource "aws_iam_role_policy_attachment" "amazon_eks_cni_policy" {
   count      = var.enabled ? 1 : 0
   policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
@@ -102,6 +136,7 @@ resource "aws_eks_node_group" "default" {
   # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
   depends_on = [
     aws_iam_role_policy_attachment.amazon_eks_worker_node_policy,
+    aws_iam_role_policy_attachment.amazon_eks_worker_node_autoscaler_policy,
     aws_iam_role_policy_attachment.amazon_eks_cni_policy,
     aws_iam_role_policy_attachment.amazon_ec2_container_registry_read_only
   ]
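The conditional IAM resources in this diff rely on a common Terraform 0.12 idiom: `count` makes the resource a list (empty when the feature is disabled), the splat expression `.*.arn` collects the attribute across that list, and `join("", ...)` flattens it to a plain string that is empty when the resource was not created. A self-contained sketch of the same pattern, using a hypothetical policy name not taken from this module:

```hcl
variable "enabled" {
  type    = bool
  default = true
}

resource "aws_iam_policy" "example" {
  # A list of zero or one policies, depending on the flag.
  count = var.enabled ? 1 : 0
  name  = "example-autoscaler-policy" # hypothetical name
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["autoscaling:DescribeAutoScalingGroups"]
      Resource = "*"
    }]
  })
}

output "policy_arn" {
  # Splat + join: the ARN when enabled, "" when count is 0.
  # Avoids an error from indexing into an empty list.
  value = join("", aws_iam_policy.example.*.arn)
}
```

This keeps downstream references valid in both the enabled and disabled cases, which is why the commit uses `join("", ...)` in the policy attachment rather than `aws_iam_policy.amazon_eks_worker_node_autoscaler_policy[0].arn`.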
