{eam} includes the Amazon EKS node monitoring agent. You can use this agent to view troubleshooting and debugging information about nodes. The node monitoring agent publishes Kubernetes `events` and node `conditions`. For more information, see <<node-health>>.
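For example, you can view a node's conditions and recent node-level events with standard kubectl commands. Replace `<node-name>` with the name of one of your nodes:

[source,cli]
----
kubectl describe node <node-name>
kubectl get events --field-selector involvedObject.kind=Node
----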
[[auto-node-console,auto-node-console.title]]
== Get console output from an {emi} by using the {aws} EC2 CLI
This procedure helps with troubleshooting boot-time or kernel-level issues.
[source,cli]
----
aws ec2 get-console-output --instance-id <instance id> --latest --output text
----
== Get node logs by using __debug containers__ and the kubectl CLI
The recommended way to retrieve logs from an EKS Auto Mode node is to use the `NodeDiagnostic` resource. For these steps, see <<auto-get-logs>>.
However, you can stream logs live from an instance by using the `kubectl debug node` command. This command launches a new Pod on the node that you want to debug, which you can then use interactively.
. Launch a debug container. The following command uses `i-01234567890123456` for the instance ID of the node. The `-it` flags allocate a tty and attach stdin for interactive usage.
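+
The following is a sketch of the command. It assumes the Amazon Linux 2023 container image, which matches the shell and packages shown in the output below, and the `sysadmin` profile so that the debug Pod has the privileges that `nsenter` needs in the next step:
+
[source,cli]
----
kubectl debug node/i-01234567890123456 -it --profile=sysadmin --image=public.ecr.aws/amazonlinux/amazonlinux:2023
----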
+
An example output is as follows.
+
[source,none]
----
Creating debugging pod node-debugger-i-01234567890123456-nxb9c with container debugger on node i-01234567890123456.
If you don't see a command prompt, try pressing enter.
bash-5.2#
----
. From the shell, you can now install `util-linux-core`, which provides the `nsenter` command. Here, `nsenter` is used to enter the mount namespace of PID 1 (init) on the host and run the `journalctl` command to stream logs from the kubelet:
+
[source,none]
----
yum install -y util-linux-core
nsenter -t 1 -m journalctl -f -u kubelet
----
For security, the Amazon Linux container image doesn't install many binaries by default. You can use the `yum whatprovides` command to identify the package that must be installed to provide a given binary.
[source,cli]
----
yum whatprovides ps
----
An example output is as follows.
[source,none]
----
Last metadata expiration check: 0:03:36 ago on Thu Jan 16 14:49:17 2025.
procps-ng-3.3.17-1.amzn2023.0.2.x86_64 : System and process monitoring utilities
Repo         : @System
Matched from:
Filename     : /usr/bin/ps
Provide      : /bin/ps

procps-ng-3.3.17-1.amzn2023.0.2.x86_64 : System and process monitoring utilities
Repo         : amazonlinux
Matched from:
Filename     : /usr/bin/ps
Provide      : /bin/ps
----
== View resources associated with {eam} in the {aws} Console
//Ensure you are running the latest version of the {aws} CLI, eksctl, etc.
== Troubleshoot Pod failing to schedule onto Auto Mode node
If Pods are not being scheduled onto an Auto Mode node, verify whether your Pod or Deployment manifest has a `nodeSelector`. If a `nodeSelector` is present, ensure that it uses `eks.amazonaws.com/compute-type: auto` so that the workload can be scheduled, as in the example below. See <<associate-workload>>.
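The following minimal Deployment manifest shows where the `nodeSelector` goes. The name and image are placeholders:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      nodeSelector:
        eks.amazonaws.com/compute-type: auto # schedule onto EKS Auto Mode nodes
      containers:
      - name: app
        image: public.ecr.aws/nginx/nginx:latest # placeholder image
----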
[[auto-node-join,auto-node-join.title]]
== Troubleshoot node not joining the cluster
EKS Auto Mode automatically configures new EC2 instances with the correct information to join the cluster, including the cluster endpoint and cluster certificate authority (CA). However, these instances can still fail to join the EKS cluster as a node. Run the following commands to identify instances that didn't join the cluster:
. Run `kubectl get nodeclaim` to check for `NodeClaims` that are `Ready = False`.
+
[source,cli]
----
kubectl get nodeclaim
----
. Run `kubectl describe nodeclaim <node_claim>` and look under *Status* to find any issues preventing the node from joining the cluster.
+
[source,cli]
----
kubectl describe nodeclaim <node_claim>
----
*Common error messages:*
* "Error getting launch template configs"
** You may receive this error if you set custom tags in the NodeClass while using the default cluster IAM role permissions. See <<auto-learn-iam>>.
* "Error creating fleet"
** There may be an authorization issue with calling the `RunInstances` API. Check CloudTrail for errors (see the example query following this list), and see <<auto-cluster-iam-role>> for the required IAM permissions.
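To check CloudTrail from the command line, you can look up recent `RunInstances` events. This is a minimal sketch; failed calls include `errorCode` and `errorMessage` fields in the returned event records:

[source,cli]
----
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
  --max-results 5
----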
=== Detect node connectivity issues with VPC Reachability Analyzer
One reason that an instance didn't join the cluster is a network connectivity issue that prevents it from reaching the API server. To diagnose this issue, you can use the link:vpc/latest/reachability/what-is-reachability-analyzer.html[VPC Reachability Analyzer,type="documentation"] to analyze the connectivity between a node that is failing to join the cluster and the API server. You will need two pieces of information:
* *instance ID* of a node that can't join the cluster
176
+
* IP address of the *Kubernetes API server endpoint*
177
+
178
+
To get the *instance ID*, you will need to create a workload on the cluster to cause EKS Auto Mode to launch an EC2 instance. This also creates a `NodeClaim` object in your cluster that contains the instance ID. Run `kubectl get nodeclaim -o yaml` to print all of the `NodeClaims` in your cluster. Each `NodeClaim` contains the instance ID as a field, and again in the `providerID`:
179
+
180
+
[source,cli]
181
+
----
182
+
kubectl get nodeclaim -o yaml
183
+
----
An example output is as follows.
[source,bash,subs="verbatim,attributes"]
----
nodeName: i-01234567890123456
providerID: aws:///us-west-2a/i-01234567890123456
----
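If you have many `NodeClaims`, you can print only the provider IDs with a `jsonpath` query. This sketch assumes the `status.providerID` field shown above:

[source,cli]
----
kubectl get nodeclaim -o jsonpath='{range .items[*]}{.status.providerID}{"\n"}{end}'
----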
You can determine your *Kubernetes API server endpoint* by running `kubectl get endpoints kubernetes -o yaml`. The addresses are in the `addresses` field:
[source,cli]
----
kubectl get endpoints kubernetes -o yaml
----
An example output is as follows.
[source,bash,subs="verbatim,attributes"]
----
apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 10.0.143.233
  - ip: 10.0.152.17
  ports:
  - name: https
    port: 443
    protocol: TCP
----
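To print only the IP addresses, you can use a `jsonpath` query such as the following:

[source,cli]
----
kubectl get endpoints kubernetes -o jsonpath='{.subsets[*].addresses[*].ip}'
----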
With these two pieces of information, you can perform the analysis. First, navigate to the VPC Reachability Analyzer in the {aws-management-console}.
. Click “Create and Analyze Path”
222
+
. Provide a name for the analysis (e.g. “Node Join Failure”)
223
+
. For the “Source Type” select “Instances”
224
+
. Enter the instance ID of the failing Node as the “Source”
225
+
. For the “Path Destination” select “IP Address”
226
+
. Enter one of the IP addresses for the API server as the “Destination Address”
227
+
. Expand the “Additional Packet Header Configuration” section
228
+
. Enter a “Destination Port” of 443
229
+
. Select “Protocol” as TCP if it is not already selected
230
+
. Click “Create and Analyze Path”
231
+
. The analysis might take a few minutes to complete. If the results indicate failed reachability, they will show where the failure occurred in the network path so that you can resolve the issue.
[[auto-troubleshoot-controllers]]
== Troubleshoot included controllers in Auto Mode
If you have a problem with a controller, investigate the following:
* Whether the resources associated with that controller are properly formatted and valid.
* Whether the {aws} IAM and Kubernetes RBAC resources are properly configured for your cluster. For more information, see <<auto-learn-iam>>.