The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command:
```shell
kubectl describe pods ${POD_NAME}
```
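If the `describe` output is long, you can also list just the events recorded for the Pod. This is a minimal sketch and assumes the Pod is in your current namespace:

```shell
# Show only the events whose involved object is this Pod (scheduling
# failures, image pull errors, and so on appear here).
kubectl get events --field-selector involvedObject.name=${POD_NAME}
```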
Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?
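For a quick summary of container readiness and restart counts, the default `kubectl get` columns are often enough (a minimal sketch):

```shell
# READY shows running vs. total containers; RESTARTS shows how often
# containers in the Pod have been restarted.
kubectl get pod ${POD_NAME}
```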
Continue debugging depending on the state of the pods.
#### My pod stays pending
If a Pod is stuck in `Pending` it means that it cannot be scheduled onto a node. Generally this is because there are insufficient resources of one type or another that prevent scheduling. Look at the output of the `kubectl describe ...` command above. There should be messages from the scheduler about why it cannot schedule your pod. Reasons include:
* **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster. In this case you need to delete Pods, adjust resource requests, or add new nodes to your cluster; the example after this list shows how to check remaining node capacity. See the [Compute Resources document](/docs/concepts/configuration/manage-resources-containers/) for more information.
* **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a limited number of places that pod can be scheduled. In most cases, `hostPort` is unnecessary; try using a Service object to expose your Pod. If you do require `hostPort` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
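To check how much schedulable capacity is left on your nodes (the first reason above), you can inspect the nodes directly. This is a minimal sketch; `kubectl top` assumes the metrics-server add-on is installed:

```shell
# "Allocated resources" in the output compares resource requests against
# each node's allocatable CPU and memory.
kubectl describe nodes

# Current actual usage per node (requires metrics-server).
kubectl top nodes
```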
#### My pod stays waiting
If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node, but it can't run on that machine. Again, the information from `kubectl describe ...` should be informative. The most common cause of `Waiting` pods is a failure to pull the image. There are three things to check:
* Make sure that you have the name of the image correct.
* Have you pushed the image to the registry?
* Run a manual `docker pull <image>` on your machine to see if the image can be pulled.
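One way to run these checks from your workstation is sketched below; the image name `registry.example.com/myapp:v1` is only a stand-in for whatever the first command prints:

```shell
# Print the exact image reference(s) the Pod is configured to pull.
kubectl get pod ${POD_NAME} -o jsonpath='{.spec.containers[*].image}'

# Try pulling that image manually to rule out name, tag, or registry
# authentication problems (substitute the image printed above).
docker pull registry.example.com/myapp:v1
```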
#### My pod is crashing or otherwise unhealthy
Once your pod has been scheduled, the methods described in [Debug Running Pods](/docs/tasks/debug/debug-application/debug-running-pod/) are available for debugging.
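Two of the techniques that page covers are inspecting container logs and logs from a previously crashed container instance; a minimal sketch:

```shell
# Logs from the current container instance.
kubectl logs ${POD_NAME}

# Logs from the previous instance, useful after a crash and restart.
kubectl logs ${POD_NAME} --previous
```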
content/en/docs/tasks/debug/debug-cluster/local-debugging.md
{{% thirdparty-content %}}
Kubernetes applications usually consist of multiple, separate services, each running in its own container. Developing and debugging these services on a remote Kubernetes cluster can be cumbersome, requiring you to [get a shell on a running container](/docs/tasks/debug/debug-application/get-shell-running-container/) in order to run debugging tools.
`telepresence` is a tool to ease the process of developing and debugging services locally while proxying the service to a remote Kubernetes cluster. Using `telepresence` allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.
This document describes using `telepresence` to develop and debug services running on a remote cluster locally.
## {{% heading "prerequisites" %}}
## Connecting your local machine to a remote Kubernetes cluster
After installing `telepresence`, run `telepresence connect` to launch its Daemon and connect your local workstation to the cluster.
```
$ telepresence connect
```

You can curl services using the Kubernetes syntax, e.g. `curl -ik https://kubernetes.default`.
## Developing or debugging an existing service
When developing an application on Kubernetes, you typically program or debug a single service. The service might require access to other services for testing and debugging. One option is to use the continuous deployment pipeline, but even the fastest deployment pipeline introduces a delay in the program or debug cycle.
Use the `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT` command to create an "intercept" for rerouting remote service traffic.
Where:
- `$SERVICE_NAME` is the name of your local service
- `$LOCAL_PORT` is the port that your service is running on your local workstation
- `$REMOTE_PORT` is the port your service listens on in the cluster
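For example, to intercept a hypothetical service named `dataprocessingservice` that listens on port 8080 in the cluster and run it locally on port 3000, the command would look something like this (the service name and ports are assumptions, not values from this page):

```shell
telepresence intercept dataprocessingservice --port 3000:8080
```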
Running this command tells Telepresence to send remote traffic to your local service instead of the service in the remote Kubernetes cluster. Make edits to your service source code locally, save, and see the corresponding changes when accessing your remote application take effect immediately. You can also run your local service using a debugger or any other local development tool.
## How does Telepresence work?
Telepresence installs a traffic-agent sidecar next to your existing application's container running in the remote cluster. It then captures all traffic requests going into the Pod, and instead of forwarding this to the application in the remote cluster, it routes all traffic (when you create a [global intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#global-intercept)) or a subset of the traffic (when you create a [personal intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#personal-intercept)) to your local development environment.
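One way to confirm that the sidecar has been injected is to list the containers of the intercepted Pod. This is a sketch, assuming the sidecar container is named `traffic-agent` and `<pod-name>` is the Pod you intercepted:

```shell
# After an intercept, a traffic-agent container should appear alongside
# your application container.
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'
```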
If you're interested in a hands-on tutorial, check out [this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s) that walks through locally developing the Guestbook application on Google Kubernetes Engine.