Commit 9fa75c0

chore: Remove kubectl provider from Karpenter example (#3251)
* Change kubectl provider
* chore: Remove `kubectl` provider

Co-authored-by: Bryant Biggs <[email protected]>
1 parent 791b905 commit 9fa75c0

File tree: 5 files changed, +111 additions, -151 deletions


examples/karpenter/README.md

Lines changed: 30 additions & 30 deletions
@@ -18,8 +18,11 @@ Once the cluster is up and running, you can check that Karpenter is functioning
 # First, make sure you have updated your local kubeconfig
 aws eks --region eu-west-1 update-kubeconfig --name ex-karpenter
 
-# Second, scale the example deployment
-kubectl scale deployment inflate --replicas 5
+# Second, deploy the Karpenter NodeClass/NodePool
+kubectl apply -f karpenter.yaml
+
+# Second, deploy the example deployment
+kubectl apply -f inflate.yaml
 
 # You can watch Karpenter's controller logs with
 kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter -c controller
@@ -32,10 +35,10 @@ kubectl get nodes -L karpenter.sh/registered
 ```
 
 ```text
-NAME                                        STATUS   ROLES    AGE    VERSION               REGISTERED
-ip-10-0-16-155.eu-west-1.compute.internal   Ready    <none>   100s   v1.29.3-eks-ae9a62a   true
-ip-10-0-3-23.eu-west-1.compute.internal     Ready    <none>   6m1s   v1.29.3-eks-ae9a62a
-ip-10-0-41-2.eu-west-1.compute.internal     Ready    <none>   6m3s   v1.29.3-eks-ae9a62a
+NAME                                        STATUS   ROLES    AGE   VERSION               REGISTERED
+ip-10-0-13-51.eu-west-1.compute.internal    Ready    <none>   29s   v1.31.1-eks-1b3e656   true
+ip-10-0-41-242.eu-west-1.compute.internal   Ready    <none>   35m   v1.31.1-eks-1b3e656
+ip-10-0-8-151.eu-west-1.compute.internal    Ready    <none>   35m   v1.31.1-eks-1b3e656
 ```
 
 ```sh
@@ -44,24 +47,27 @@ kubectl get pods -A -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
 
 ```text
 NAME                           NODE
-inflate-75d744d4c6-nqwz8       ip-10-0-16-155.eu-west-1.compute.internal
-inflate-75d744d4c6-nrqnn       ip-10-0-16-155.eu-west-1.compute.internal
-inflate-75d744d4c6-sp4dx       ip-10-0-16-155.eu-west-1.compute.internal
-inflate-75d744d4c6-xqzd9       ip-10-0-16-155.eu-west-1.compute.internal
-inflate-75d744d4c6-xr6p5       ip-10-0-16-155.eu-west-1.compute.internal
-aws-node-mnn7r                 ip-10-0-3-23.eu-west-1.compute.internal
-aws-node-rkmvm                 ip-10-0-16-155.eu-west-1.compute.internal
-aws-node-s4slh                 ip-10-0-41-2.eu-west-1.compute.internal
-coredns-68bd859788-7rcfq       ip-10-0-3-23.eu-west-1.compute.internal
-coredns-68bd859788-l78hw       ip-10-0-41-2.eu-west-1.compute.internal
-eks-pod-identity-agent-gbx8l   ip-10-0-41-2.eu-west-1.compute.internal
-eks-pod-identity-agent-s7vt7   ip-10-0-16-155.eu-west-1.compute.internal
-eks-pod-identity-agent-xwgqw   ip-10-0-3-23.eu-west-1.compute.internal
-karpenter-79f59bdfdc-9q5ff     ip-10-0-41-2.eu-west-1.compute.internal
-karpenter-79f59bdfdc-cxvhr     ip-10-0-3-23.eu-west-1.compute.internal
-kube-proxy-7crbl               ip-10-0-41-2.eu-west-1.compute.internal
-kube-proxy-jtzds               ip-10-0-16-155.eu-west-1.compute.internal
-kube-proxy-sm42c               ip-10-0-3-23.eu-west-1.compute.internal
+inflate-67cd5bb766-hvqfn       ip-10-0-13-51.eu-west-1.compute.internal
+inflate-67cd5bb766-jnsdp       ip-10-0-13-51.eu-west-1.compute.internal
+inflate-67cd5bb766-k4gwf       ip-10-0-41-242.eu-west-1.compute.internal
+inflate-67cd5bb766-m49f6       ip-10-0-13-51.eu-west-1.compute.internal
+inflate-67cd5bb766-pgzx9       ip-10-0-8-151.eu-west-1.compute.internal
+aws-node-58m4v                 ip-10-0-3-57.eu-west-1.compute.internal
+aws-node-pj2gc                 ip-10-0-8-151.eu-west-1.compute.internal
+aws-node-thffj                 ip-10-0-41-242.eu-west-1.compute.internal
+aws-node-vh66d                 ip-10-0-13-51.eu-west-1.compute.internal
+coredns-844dbb9f6f-9g9lg       ip-10-0-41-242.eu-west-1.compute.internal
+coredns-844dbb9f6f-fmzfq       ip-10-0-41-242.eu-west-1.compute.internal
+eks-pod-identity-agent-jr2ns   ip-10-0-8-151.eu-west-1.compute.internal
+eks-pod-identity-agent-mpjkq   ip-10-0-13-51.eu-west-1.compute.internal
+eks-pod-identity-agent-q4tjc   ip-10-0-3-57.eu-west-1.compute.internal
+eks-pod-identity-agent-zzfdj   ip-10-0-41-242.eu-west-1.compute.internal
+karpenter-5b8965dc9b-rx9bx     ip-10-0-8-151.eu-west-1.compute.internal
+karpenter-5b8965dc9b-xrfnx     ip-10-0-41-242.eu-west-1.compute.internal
+kube-proxy-2xf42               ip-10-0-41-242.eu-west-1.compute.internal
+kube-proxy-kbfc8               ip-10-0-8-151.eu-west-1.compute.internal
+kube-proxy-kt8zn               ip-10-0-13-51.eu-west-1.compute.internal
+kube-proxy-sl6bz               ip-10-0-3-57.eu-west-1.compute.internal
 ```
 
 ### Tear Down & Clean-Up
@@ -72,7 +78,6 @@ Because Karpenter manages the state of node resources outside of Terraform, Karp
 
 ```bash
 kubectl delete deployment inflate
-kubectl delete node -l karpenter.sh/provisioner-name=default
 ```
 
 2. Remove the resources created by Terraform
@@ -91,7 +96,6 @@ Note that this example may create resources which cost money. Run `terraform des
 | <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3.2 |
 | <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 5.81 |
 | <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.7 |
-| <a name="requirement_kubectl"></a> [kubectl](#requirement\_kubectl) | >= 2.0 |
 
 ## Providers
 
@@ -100,7 +104,6 @@ Note that this example may create resources which cost money. Run `terraform des
 | <a name="provider_aws"></a> [aws](#provider\_aws) | >= 5.81 |
 | <a name="provider_aws.virginia"></a> [aws.virginia](#provider\_aws.virginia) | >= 5.81 |
 | <a name="provider_helm"></a> [helm](#provider\_helm) | >= 2.7 |
-| <a name="provider_kubectl"></a> [kubectl](#provider\_kubectl) | >= 2.0 |
 
 ## Modules
 
@@ -116,9 +119,6 @@ Note that this example may create resources which cost money. Run `terraform des
 | Name | Type |
 |------|------|
 | [helm_release.karpenter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
-| [kubectl_manifest.karpenter_example_deployment](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
-| [kubectl_manifest.karpenter_node_class](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
-| [kubectl_manifest.karpenter_node_pool](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
 | [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) | data source |
 | [aws_ecrpublic_authorization_token.token](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecrpublic_authorization_token) | data source |
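Taken together, the updated README steps above can be run end-to-end roughly as follows (a sketch assembled from the commands in this diff, assuming the example's cluster name `ex-karpenter` in `eu-west-1`):

```shell
# Update the local kubeconfig for the example cluster
aws eks --region eu-west-1 update-kubeconfig --name ex-karpenter

# Apply the Karpenter EC2NodeClass/NodePool, then the example workload
kubectl apply -f karpenter.yaml
kubectl apply -f inflate.yaml

# Watch Karpenter react to the pending pods
kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter -c controller
```

All commands require the AWS CLI and `kubectl` configured against the example cluster.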

examples/karpenter/inflate.yaml

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: inflate
+spec:
+  replicas: 5
+  selector:
+    matchLabels:
+      app: inflate
+  template:
+    metadata:
+      labels:
+        app: inflate
+    spec:
+      terminationGracePeriodSeconds: 0
+      containers:
+        - name: inflate
+          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
+          resources:
+            requests:
+              cpu: 1
```
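Unlike the removed Terraform-managed Deployment (which started at zero replicas), this manifest ships with `replicas: 5`, so applying it immediately creates pending pods that should trigger Karpenter. A quick way to observe that (a sketch, assuming a working kubeconfig):

```shell
kubectl apply -f inflate.yaml
# Five pods requesting 1 vCPU each stay Pending until Karpenter provisions capacity
kubectl get pods -l app=inflate -o wide -w
```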

examples/karpenter/karpenter.yaml

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
+---
+apiVersion: karpenter.k8s.aws/v1
+kind: EC2NodeClass
+metadata:
+  name: default
+spec:
+  amiSelectorTerms:
+    - alias: bottlerocket@latest
+  role: ex-karpenter
+  subnetSelectorTerms:
+    - tags:
+        karpenter.sh/discovery: ex-karpenter
+  securityGroupSelectorTerms:
+    - tags:
+        karpenter.sh/discovery: ex-karpenter
+  tags:
+    karpenter.sh/discovery: ex-karpenter
+---
+apiVersion: karpenter.sh/v1
+kind: NodePool
+metadata:
+  name: default
+spec:
+  template:
+    spec:
+      nodeClassRef:
+        group: karpenter.k8s.aws
+        kind: EC2NodeClass
+        name: default
+      requirements:
+        - key: "karpenter.k8s.aws/instance-category"
+          operator: In
+          values: ["c", "m", "r"]
+        - key: "karpenter.k8s.aws/instance-cpu"
+          operator: In
+          values: ["4", "8", "16", "32"]
+        - key: "karpenter.k8s.aws/instance-hypervisor"
+          operator: In
+          values: ["nitro"]
+        - key: "karpenter.k8s.aws/instance-generation"
+          operator: Gt
+          values: ["2"]
+  limits:
+    cpu: 1000
+  disruption:
+    consolidationPolicy: WhenEmpty
+    consolidateAfter: 30s
```
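After applying this file, the new v1 objects can be inspected directly (a sketch, assuming the Karpenter Helm chart has installed the `karpenter.sh`/`karpenter.k8s.aws` CRDs):

```shell
kubectl apply -f karpenter.yaml
# Both objects should appear and become Ready once Karpenter resolves
# the AMI alias, discovery-tagged subnets, and security groups
kubectl get ec2nodeclasses,nodepools
```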

examples/karpenter/main.tf

Lines changed: 13 additions & 117 deletions
@@ -21,20 +21,6 @@ provider "helm" {
   }
 }
 
-provider "kubectl" {
-  apply_retry_count      = 5
-  host                   = module.eks.cluster_endpoint
-  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
-  load_config_file       = false
-
-  exec {
-    api_version = "client.authentication.k8s.io/v1beta1"
-    command     = "aws"
-    # This requires the awscli to be installed locally where Terraform is executed
-    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
-  }
-}
-
 data "aws_availability_zones" "available" {
   # Exclude local zones
   filter {
@@ -89,21 +75,20 @@ module "eks" {
 
   eks_managed_node_groups = {
     karpenter = {
-      ami_type       = "AL2023_x86_64_STANDARD"
+      ami_type       = "BOTTLEROCKET_x86_64"
       instance_types = ["m5.large"]
 
      min_size     = 2
       max_size     = 3
       desired_size = 2
+
+      labels = {
+        # Used to ensure Karpenter runs on nodes that it does not manage
+        "karpenter.sh/controller" = "true"
+      }
     }
   }
 
-  # cluster_tags = merge(local.tags, {
-  #   NOTE - only use this option if you are using "attach_cluster_primary_security_group"
-  #   and you know what you're doing. In this case, you can remove the "node_security_group_tags" below.
-  #   "karpenter.sh/discovery" = local.name
-  # })
-
   node_security_group_tags = merge(local.tags, {
     # NOTE - if creating multiple security groups with this module, only tag the
     # security group that Karpenter should utilize with the following tag
@@ -121,11 +106,12 @@ module "eks" {
 module "karpenter" {
   source = "../../modules/karpenter"
 
-  cluster_name = module.eks.cluster_name
-
+  cluster_name          = module.eks.cluster_name
   enable_v1_permissions = true
 
-  enable_pod_identity             = true
+  # Name needs to match role name passed to the EC2NodeClass
+  node_iam_role_use_name_prefix   = false
+  node_iam_role_name              = local.name
   create_pod_identity_association = true
 
   # Used to attach additional IAM policies to the Karpenter node IAM role
@@ -154,11 +140,13 @@ resource "helm_release" "karpenter" {
   repository_username = data.aws_ecrpublic_authorization_token.token.user_name
   repository_password = data.aws_ecrpublic_authorization_token.token.password
   chart               = "karpenter"
-  version             = "1.1.0"
+  version             = "1.1.1"
   wait                = false
 
   values = [
     <<-EOT
+    nodeSelector:
+      karpenter.sh/controller: 'true'
     dnsPolicy: Default
     settings:
      clusterName: ${module.eks.cluster_name}
@@ -170,98 +158,6 @@ resource "helm_release" "karpenter" {
   ]
 }
 
-resource "kubectl_manifest" "karpenter_node_class" {
-  yaml_body = <<-YAML
-    apiVersion: karpenter.k8s.aws/v1beta1
-    kind: EC2NodeClass
-    metadata:
-      name: default
-    spec:
-      amiFamily: AL2023
-      role: ${module.karpenter.node_iam_role_name}
-      subnetSelectorTerms:
-        - tags:
-            karpenter.sh/discovery: ${module.eks.cluster_name}
-      securityGroupSelectorTerms:
-        - tags:
-            karpenter.sh/discovery: ${module.eks.cluster_name}
-      tags:
-        karpenter.sh/discovery: ${module.eks.cluster_name}
-  YAML
-
-  depends_on = [
-    helm_release.karpenter
-  ]
-}
-
-resource "kubectl_manifest" "karpenter_node_pool" {
-  yaml_body = <<-YAML
-    apiVersion: karpenter.sh/v1beta1
-    kind: NodePool
-    metadata:
-      name: default
-    spec:
-      template:
-        spec:
-          nodeClassRef:
-            name: default
-          requirements:
-            - key: "karpenter.k8s.aws/instance-category"
-              operator: In
-              values: ["c", "m", "r"]
-            - key: "karpenter.k8s.aws/instance-cpu"
-              operator: In
-              values: ["4", "8", "16", "32"]
-            - key: "karpenter.k8s.aws/instance-hypervisor"
-              operator: In
-              values: ["nitro"]
-            - key: "karpenter.k8s.aws/instance-generation"
-              operator: Gt
-              values: ["5"]
-      limits:
-        cpu: 1000
-      disruption:
-        consolidationPolicy: WhenEmpty
-        consolidateAfter: 30s
-  YAML
-
-  depends_on = [
-    kubectl_manifest.karpenter_node_class
-  ]
-}
-
-# Example deployment using the [pause image](https://www.ianlewis.org/en/almighty-pause-container)
-# and starts with zero replicas
-resource "kubectl_manifest" "karpenter_example_deployment" {
-  yaml_body = <<-YAML
-    apiVersion: apps/v1
-    kind: Deployment
-    metadata:
-      name: inflate
-    spec:
-      replicas: 0
-      selector:
-        matchLabels:
-          app: inflate
-      template:
-        metadata:
-          labels:
-            app: inflate
-        spec:
-          terminationGracePeriodSeconds: 0
-          containers:
-            - name: inflate
-              image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
-              resources:
-                requests:
-                  cpu: 1
-  YAML
-
-  depends_on = [
-    helm_release.karpenter
-  ]
-}
-
 ################################################################################
 # Supporting Resources
 ################################################################################
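One practical consequence of deleting the `kubectl_manifest` resources from configuration: on the next `terraform apply`, Terraform will plan to destroy them, which would also delete the in-cluster objects. A hedged migration sketch for anyone upgrading an existing deployment (this step is an assumption, not part of the commit) is to drop them from state first so the running objects are left alone:

```shell
# Hypothetical migration for existing deployments: forget the old
# kubectl_manifest resources without deleting the in-cluster objects,
# then manage the equivalents via karpenter.yaml / inflate.yaml
terraform state rm kubectl_manifest.karpenter_node_class
terraform state rm kubectl_manifest.karpenter_node_pool
terraform state rm kubectl_manifest.karpenter_example_deployment
```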

examples/karpenter/versions.tf

Lines changed: 0 additions & 4 deletions
@@ -10,9 +10,5 @@ terraform {
       source  = "hashicorp/helm"
       version = ">= 2.7"
     }
-    kubectl = {
-      source  = "alekc/kubectl"
-      version = ">= 2.0"
-    }
   }
 }
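With the `alekc/kubectl` requirement gone, re-initializing refreshes the dependency lock file so the removed provider is no longer pinned (a sketch; run from the example directory):

```shell
terraform init -upgrade
# The lock file should no longer reference the removed provider
! grep -q 'alekc/kubectl' .terraform.lock.hcl && echo 'kubectl provider removed'
```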

0 commit comments