@@ -18,8 +18,11 @@ Once the cluster is up and running, you can check that Karpenter is functioning
# First, make sure you have updated your local kubeconfig
aws eks --region eu-west-1 update-kubeconfig --name ex-karpenter

- # Second, scale the example deployment
- kubectl scale deployment inflate --replicas 5
+ # Second, deploy the Karpenter NodeClass/NodePool
+ kubectl apply -f karpenter.yaml
+
+ # Third, deploy the example deployment
+ kubectl apply -f inflate.yaml

# You can watch Karpenter's controller logs with
kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter -c controller
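# Optionally, scale the example deployment to force Karpenter to provision
# additional capacity, then watch the NodeClaims it creates. (A sketch, not part
# of the original steps; assumes the inflate deployment from inflate.yaml and
# the NodeClaim API from Karpenter v1.)
kubectl scale deployment inflate --replicas 10
kubectl get nodeclaims -w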
@@ -32,10 +35,10 @@ kubectl get nodes -L karpenter.sh/registered
```

```text
- NAME                                          STATUS   ROLES    AGE    VERSION               REGISTERED
- ip-10-0-16-155.eu-west-1.compute.internal     Ready    <none>   100s   v1.29.3-eks-ae9a62a   true
- ip-10-0-3-23.eu-west-1.compute.internal       Ready    <none>   6m1s   v1.29.3-eks-ae9a62a
- ip-10-0-41-2.eu-west-1.compute.internal       Ready    <none>   6m3s   v1.29.3-eks-ae9a62a
+ NAME                                          STATUS   ROLES    AGE   VERSION               REGISTERED
+ ip-10-0-13-51.eu-west-1.compute.internal      Ready    <none>   29s   v1.31.1-eks-1b3e656   true
+ ip-10-0-41-242.eu-west-1.compute.internal     Ready    <none>   35m   v1.31.1-eks-1b3e656
+ ip-10-0-8-151.eu-west-1.compute.internal      Ready    <none>   35m   v1.31.1-eks-1b3e656
```
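To list only the nodes Karpenter launched, you can filter on the same label that the `-L` column above surfaces. A minimal sketch, assuming Karpenter sets `karpenter.sh/registered=true` on the nodes it manages, as the REGISTERED column suggests:

```sh
# Show only Karpenter-registered nodes
kubectl get nodes -l karpenter.sh/registered=true
```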
```sh
@@ -44,24 +47,27 @@ kubectl get pods -A -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
```text
NAME                             NODE
- inflate-75d744d4c6-nqwz8       ip-10-0-16-155.eu-west-1.compute.internal
- inflate-75d744d4c6-nrqnn       ip-10-0-16-155.eu-west-1.compute.internal
- inflate-75d744d4c6-sp4dx       ip-10-0-16-155.eu-west-1.compute.internal
- inflate-75d744d4c6-xqzd9       ip-10-0-16-155.eu-west-1.compute.internal
- inflate-75d744d4c6-xr6p5       ip-10-0-16-155.eu-west-1.compute.internal
- aws-node-mnn7r                 ip-10-0-3-23.eu-west-1.compute.internal
- aws-node-rkmvm                 ip-10-0-16-155.eu-west-1.compute.internal
- aws-node-s4slh                 ip-10-0-41-2.eu-west-1.compute.internal
- coredns-68bd859788-7rcfq       ip-10-0-3-23.eu-west-1.compute.internal
- coredns-68bd859788-l78hw       ip-10-0-41-2.eu-west-1.compute.internal
- eks-pod-identity-agent-gbx8l   ip-10-0-41-2.eu-west-1.compute.internal
- eks-pod-identity-agent-s7vt7   ip-10-0-16-155.eu-west-1.compute.internal
- eks-pod-identity-agent-xwgqw   ip-10-0-3-23.eu-west-1.compute.internal
- karpenter-79f59bdfdc-9q5ff     ip-10-0-41-2.eu-west-1.compute.internal
- karpenter-79f59bdfdc-cxvhr     ip-10-0-3-23.eu-west-1.compute.internal
- kube-proxy-7crbl               ip-10-0-41-2.eu-west-1.compute.internal
- kube-proxy-jtzds               ip-10-0-16-155.eu-west-1.compute.internal
- kube-proxy-sm42c               ip-10-0-3-23.eu-west-1.compute.internal
+ inflate-67cd5bb766-hvqfn       ip-10-0-13-51.eu-west-1.compute.internal
+ inflate-67cd5bb766-jnsdp       ip-10-0-13-51.eu-west-1.compute.internal
+ inflate-67cd5bb766-k4gwf       ip-10-0-41-242.eu-west-1.compute.internal
+ inflate-67cd5bb766-m49f6       ip-10-0-13-51.eu-west-1.compute.internal
+ inflate-67cd5bb766-pgzx9       ip-10-0-8-151.eu-west-1.compute.internal
+ aws-node-58m4v                 ip-10-0-3-57.eu-west-1.compute.internal
+ aws-node-pj2gc                 ip-10-0-8-151.eu-west-1.compute.internal
+ aws-node-thffj                 ip-10-0-41-242.eu-west-1.compute.internal
+ aws-node-vh66d                 ip-10-0-13-51.eu-west-1.compute.internal
+ coredns-844dbb9f6f-9g9lg       ip-10-0-41-242.eu-west-1.compute.internal
+ coredns-844dbb9f6f-fmzfq       ip-10-0-41-242.eu-west-1.compute.internal
+ eks-pod-identity-agent-jr2ns   ip-10-0-8-151.eu-west-1.compute.internal
+ eks-pod-identity-agent-mpjkq   ip-10-0-13-51.eu-west-1.compute.internal
+ eks-pod-identity-agent-q4tjc   ip-10-0-3-57.eu-west-1.compute.internal
+ eks-pod-identity-agent-zzfdj   ip-10-0-41-242.eu-west-1.compute.internal
+ karpenter-5b8965dc9b-rx9bx     ip-10-0-8-151.eu-west-1.compute.internal
+ karpenter-5b8965dc9b-xrfnx     ip-10-0-41-242.eu-west-1.compute.internal
+ kube-proxy-2xf42               ip-10-0-41-242.eu-west-1.compute.internal
+ kube-proxy-kbfc8               ip-10-0-8-151.eu-west-1.compute.internal
+ kube-proxy-kt8zn               ip-10-0-13-51.eu-west-1.compute.internal
+ kube-proxy-sl6bz               ip-10-0-3-57.eu-west-1.compute.internal
```
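To narrow the listing above to just the example workload, you can select on the deployment's pod label instead of dumping every namespace. A sketch, assuming `inflate.yaml` labels its pods with `app=inflate` (the manifest itself is not shown here):

```sh
# -o wide adds the NODE column; the label selector is an assumption
kubectl get pods -l app=inflate -o wide
```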
### Tear Down & Clean-Up
@@ -72,7 +78,6 @@ Because Karpenter manages the state of node resources outside of Terraform, Karp
```bash
kubectl delete deployment inflate
- kubectl delete node -l karpenter.sh/provisioner-name=default
```
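If you applied `karpenter.yaml` during verification, deleting those resources as well lets Karpenter drain and remove any nodes it provisioned before the cluster itself is destroyed. A sketch, assuming the same manifest file used above:

```sh
# Removes the NodeClass/NodePool applied earlier; Karpenter should then
# clean up the nodes it owns (assumption based on the apply step above)
kubectl delete -f karpenter.yaml
```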
2. Remove the resources created by Terraform
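This is typically just a destroy run from the example directory (a sketch; `--auto-approve` is optional and only skips the confirmation prompt):

```sh
# Run from the directory containing this example's Terraform configuration
terraform destroy --auto-approve
```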
@@ -91,7 +96,6 @@ Note that this example may create resources which cost money. Run `terraform des
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3.2 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 5.81 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.7 |
- | <a name="requirement_kubectl"></a> [kubectl](#requirement\_kubectl) | >= 2.0 |

## Providers
@@ -100,7 +104,6 @@ Note that this example may create resources which cost money. Run `terraform des
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 5.81 |
| <a name="provider_aws.virginia"></a> [aws.virginia](#provider\_aws.virginia) | >= 5.81 |
| <a name="provider_helm"></a> [helm](#provider\_helm) | >= 2.7 |
- | <a name="provider_kubectl"></a> [kubectl](#provider\_kubectl) | >= 2.0 |

## Modules
@@ -116,9 +119,6 @@ Note that this example may create resources which cost money. Run `terraform des
| Name | Type |
|------|------|
| [helm_release.karpenter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
- | [kubectl_manifest.karpenter_example_deployment](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
- | [kubectl_manifest.karpenter_node_class](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
- | [kubectl_manifest.karpenter_node_pool](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) | data source |
| [aws_ecrpublic_authorization_token.token](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecrpublic_authorization_token) | data source |