@@ -18,8 +18,11 @@ Once the cluster is up and running, you can check that Karpenter is functioning
 # First, make sure you have updated your local kubeconfig
 aws eks --region eu-west-1 update-kubeconfig --name ex-karpenter
 
-# Second, scale the example deployment
-kubectl scale deployment inflate --replicas 5
+# Second, deploy the Karpenter NodeClass/NodePool
+kubectl apply -f karpenter.yaml
+
+# Third, deploy the example deployment
+kubectl apply -f inflate.yaml
 
 # You can watch Karpenter's controller logs with
 kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter -c controller
@@ -32,10 +35,10 @@ kubectl get nodes -L karpenter.sh/registered
 ```
 
 ```text
-NAME                                        STATUS   ROLES    AGE    VERSION               REGISTERED
-ip-10-0-16-155.eu-west-1.compute.internal   Ready    <none>   100s   v1.29.3-eks-ae9a62a   true
-ip-10-0-3-23.eu-west-1.compute.internal     Ready    <none>   6m1s   v1.29.3-eks-ae9a62a
-ip-10-0-41-2.eu-west-1.compute.internal     Ready    <none>   6m3s   v1.29.3-eks-ae9a62a
+NAME                                        STATUS   ROLES    AGE   VERSION               REGISTERED
+ip-10-0-13-51.eu-west-1.compute.internal    Ready    <none>   29s   v1.31.1-eks-1b3e656   true
+ip-10-0-41-242.eu-west-1.compute.internal   Ready    <none>   35m   v1.31.1-eks-1b3e656
+ip-10-0-8-151.eu-west-1.compute.internal    Ready    <none>   35m   v1.31.1-eks-1b3e656
 ```
 
 ```sh
@@ -44,24 +47,27 @@ kubectl get pods -A -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
 
 ```text
 NAME                           NODE
-inflate-75d744d4c6-nqwz8       ip-10-0-16-155.eu-west-1.compute.internal
-inflate-75d744d4c6-nrqnn       ip-10-0-16-155.eu-west-1.compute.internal
-inflate-75d744d4c6-sp4dx       ip-10-0-16-155.eu-west-1.compute.internal
-inflate-75d744d4c6-xqzd9       ip-10-0-16-155.eu-west-1.compute.internal
-inflate-75d744d4c6-xr6p5       ip-10-0-16-155.eu-west-1.compute.internal
-aws-node-mnn7r                 ip-10-0-3-23.eu-west-1.compute.internal
-aws-node-rkmvm                 ip-10-0-16-155.eu-west-1.compute.internal
-aws-node-s4slh                 ip-10-0-41-2.eu-west-1.compute.internal
-coredns-68bd859788-7rcfq       ip-10-0-3-23.eu-west-1.compute.internal
-coredns-68bd859788-l78hw       ip-10-0-41-2.eu-west-1.compute.internal
-eks-pod-identity-agent-gbx8l   ip-10-0-41-2.eu-west-1.compute.internal
-eks-pod-identity-agent-s7vt7   ip-10-0-16-155.eu-west-1.compute.internal
-eks-pod-identity-agent-xwgqw   ip-10-0-3-23.eu-west-1.compute.internal
-karpenter-79f59bdfdc-9q5ff     ip-10-0-41-2.eu-west-1.compute.internal
-karpenter-79f59bdfdc-cxvhr     ip-10-0-3-23.eu-west-1.compute.internal
-kube-proxy-7crbl               ip-10-0-41-2.eu-west-1.compute.internal
-kube-proxy-jtzds               ip-10-0-16-155.eu-west-1.compute.internal
-kube-proxy-sm42c               ip-10-0-3-23.eu-west-1.compute.internal
+inflate-67cd5bb766-hvqfn       ip-10-0-13-51.eu-west-1.compute.internal
+inflate-67cd5bb766-jnsdp       ip-10-0-13-51.eu-west-1.compute.internal
+inflate-67cd5bb766-k4gwf       ip-10-0-41-242.eu-west-1.compute.internal
+inflate-67cd5bb766-m49f6       ip-10-0-13-51.eu-west-1.compute.internal
+inflate-67cd5bb766-pgzx9       ip-10-0-8-151.eu-west-1.compute.internal
+aws-node-58m4v                 ip-10-0-3-57.eu-west-1.compute.internal
+aws-node-pj2gc                 ip-10-0-8-151.eu-west-1.compute.internal
+aws-node-thffj                 ip-10-0-41-242.eu-west-1.compute.internal
+aws-node-vh66d                 ip-10-0-13-51.eu-west-1.compute.internal
+coredns-844dbb9f6f-9g9lg       ip-10-0-41-242.eu-west-1.compute.internal
+coredns-844dbb9f6f-fmzfq       ip-10-0-41-242.eu-west-1.compute.internal
+eks-pod-identity-agent-jr2ns   ip-10-0-8-151.eu-west-1.compute.internal
+eks-pod-identity-agent-mpjkq   ip-10-0-13-51.eu-west-1.compute.internal
+eks-pod-identity-agent-q4tjc   ip-10-0-3-57.eu-west-1.compute.internal
+eks-pod-identity-agent-zzfdj   ip-10-0-41-242.eu-west-1.compute.internal
+karpenter-5b8965dc9b-rx9bx     ip-10-0-8-151.eu-west-1.compute.internal
+karpenter-5b8965dc9b-xrfnx     ip-10-0-41-242.eu-west-1.compute.internal
+kube-proxy-2xf42               ip-10-0-41-242.eu-west-1.compute.internal
+kube-proxy-kbfc8               ip-10-0-8-151.eu-west-1.compute.internal
+kube-proxy-kt8zn               ip-10-0-13-51.eu-west-1.compute.internal
+kube-proxy-sl6bz               ip-10-0-3-57.eu-west-1.compute.internal
 ```
 
 ### Tear Down & Clean-Up
@@ -72,7 +78,6 @@ Because Karpenter manages the state of node resources outside of Terraform, Karp
 
 ```bash
 kubectl delete deployment inflate
-kubectl delete node -l karpenter.sh/provisioner-name=default
 ```
 
 2. Remove the resources created by Terraform
@@ -91,7 +96,6 @@ Note that this example may create resources which cost money. Run `terraform des
 | <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3.2 |
 | <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 5.81 |
 | <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.7 |
-| <a name="requirement_kubectl"></a> [kubectl](#requirement\_kubectl) | >= 2.0 |
 
 ## Providers
 
@@ -100,7 +104,6 @@ Note that this example may create resources which cost money. Run `terraform des
 | <a name="provider_aws"></a> [aws](#provider\_aws) | >= 5.81 |
 | <a name="provider_aws.virginia"></a> [aws.virginia](#provider\_aws.virginia) | >= 5.81 |
 | <a name="provider_helm"></a> [helm](#provider\_helm) | >= 2.7 |
-| <a name="provider_kubectl"></a> [kubectl](#provider\_kubectl) | >= 2.0 |
 
 ## Modules
 
@@ -116,9 +119,6 @@ Note that this example may create resources which cost money. Run `terraform des
 | Name | Type |
 |------|------|
 | [helm_release.karpenter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
-| [kubectl_manifest.karpenter_example_deployment](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
-| [kubectl_manifest.karpenter_node_class](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
-| [kubectl_manifest.karpenter_node_pool](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
 | [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) | data source |
 | [aws_ecrpublic_authorization_token.token](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecrpublic_authorization_token) | data source |
 