Commit 4ba09eb

Tweak line wrappings in local-debugging.md debug-pods.md
Signed-off-by: xin.li <[email protected]>
1 parent 3933bbf commit 4ba09eb

2 files changed: +80 -43 lines changed


content/en/docs/tasks/debug/debug-application/debug-pods.md

Lines changed: 43 additions & 34 deletions
@@ -9,53 +9,60 @@ weight: 10

<!-- overview -->

-This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
-This is *not* a guide for people who want to debug their cluster. For that you should check out
-[this guide](/docs/tasks/debug/debug-cluster).
+This guide is to help users debug applications that are deployed into Kubernetes
+and not behaving correctly. This is *not* a guide for people who want to debug their cluster.
+For that you should check out [this guide](/docs/tasks/debug/debug-cluster).

<!-- body -->

## Diagnosing the problem

-The first step in troubleshooting is triage. What is the problem? Is it your Pods, your Replication Controller or
-your Service?
+The first step in troubleshooting is triage. What is the problem?
+Is it your Pods, your Replication Controller or your Service?

* [Debugging Pods](#debugging-pods)
* [Debugging Replication Controllers](#debugging-replication-controllers)
* [Debugging Services](#debugging-services)

### Debugging Pods

-The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command:
+The first step in debugging a Pod is taking a look at it. Check the current
+state of the Pod and recent events with the following command:

```shell
kubectl describe pods ${POD_NAME}
```

-Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?
+Look at the state of the containers in the pod. Are they all `Running`?
+Have there been recent restarts?

Continue debugging depending on the state of the pods.
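
A minimal companion sketch, assuming `${POD_NAME}` is set and the current kubeconfig context points at the cluster: restart counts and pod-scoped events can also be pulled directly.

```shell
# Restart counts and current phase at a glance
kubectl get pod ${POD_NAME}

# Only the events that reference this pod
kubectl get events --field-selector involvedObject.name=${POD_NAME}
```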

#### My pod stays pending

-If a Pod is stuck in `Pending` it means that it can not be scheduled onto a node. Generally this is because
-there are insufficient resources of one type or another that prevent scheduling. Look at the output of the
-`kubectl describe ...` command above. There should be messages from the scheduler about why it can not schedule
-your pod. Reasons include:
+If a Pod is stuck in `Pending` it means that it can not be scheduled onto a node.
+Generally this is because there are insufficient resources of one type or another
+that prevent scheduling. Look at the output of the `kubectl describe ...` command above.
+There should be messages from the scheduler about why it can not schedule your pod.
+Reasons include:

-* **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster, in this case
-  you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See
-  [Compute Resources document](/docs/concepts/configuration/manage-resources-containers/) for more information.
+* **You don't have enough resources**: You may have exhausted the supply of CPU
+  or Memory in your cluster, in this case you need to delete Pods, adjust resource
+  requests, or add new nodes to your cluster. See [Compute Resources document](/docs/concepts/configuration/manage-resources-containers/)
+  for more information.

-* **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a limited number of places that pod can be
-  scheduled. In most cases, `hostPort` is unnecessary, try using a Service object to expose your Pod. If you do require
-  `hostPort` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
+* **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a
+  limited number of places that pod can be scheduled. In most cases, `hostPort`
+  is unnecessary, try using a Service object to expose your Pod. If you do require
+  `hostPort` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
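
For the resource bullet above, a minimal sketch of checking what the scheduler can actually allocate; `kubectl top nodes` assumes a metrics server is installed in the cluster:

```shell
# Allocatable capacity and the resource requests already placed on each node
kubectl describe nodes

# Current usage per node (requires metrics-server)
kubectl top nodes
```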


#### My pod stays waiting

-If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node, but it can't run on that machine.
-Again, the information from `kubectl describe ...` should be informative. The most common cause of `Waiting` pods is a failure to pull the image. There are three things to check:
+If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node,
+but it can't run on that machine. Again, the information from `kubectl describe ...`
+should be informative. The most common cause of `Waiting` pods is a failure to pull the image.
+There are three things to check:

* Make sure that you have the name of the image correct.
* Have you pushed the image to the registry?
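
The checklist is cut off here by the hunk boundary; as a hedged sketch of verifying the image itself, with a hypothetical image name, try pulling it by hand from a machine that can reach the registry:

```shell
# Hypothetical image reference - substitute the exact name and tag from the Pod spec
docker pull registry.example.com/myapp:v1.2.3
```
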
@@ -64,8 +71,9 @@ Again, the information from `kubectl describe ...` should be informative. The m

#### My pod is crashing or otherwise unhealthy

-Once your pod has been scheduled, the methods described in [Debug Running Pods](
-/docs/tasks/debug/debug-application/debug-running-pod/) are available for debugging.
+Once your pod has been scheduled, the methods described in
+[Debug Running Pods](/docs/tasks/debug/debug-application/debug-running-pod/)
+are available for debugging.
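
A minimal sketch of a first look at a crashing container, assuming you know the container name; `--previous` only returns output once the container has restarted at least once:

```shell
# Logs from the current container instance
kubectl logs ${POD_NAME} -c ${CONTAINER_NAME}

# Logs from the previous instance, useful right after a crash-restart
kubectl logs --previous ${POD_NAME} -c ${CONTAINER_NAME}
```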

#### My pod is running but not doing what I told it to do

@@ -92,25 +100,27 @@ The next thing to check is whether the pod on the apiserver
matches the pod you meant to create (e.g. in a yaml file on your local machine).
For example, run `kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml` and then
manually compare the original pod description, `mypod.yaml` with the one you got
-back from apiserver, `mypod-on-apiserver.yaml`. There will typically be some
-lines on the "apiserver" version that are not on the original version. This is
-expected. However, if there are lines on the original that are not on the apiserver
+back from apiserver, `mypod-on-apiserver.yaml`. There will typically be some
+lines on the "apiserver" version that are not on the original version. This is
+expected. However, if there are lines on the original that are not on the apiserver
version, then this may indicate a problem with your pod spec.
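
A minimal sketch of the comparison described above, assuming the local spec is `mypod.yaml`:

```shell
kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml

# Lines only in mypod.yaml (marked "<") deserve attention;
# extra lines in the apiserver copy (marked ">") are expected defaults.
diff mypod.yaml mypod-on-apiserver.yaml
```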

### Debugging Replication Controllers

-Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't
-create pods, then please refer to the [instructions above](#debugging-pods) to debug your pods.
+Replication controllers are fairly straightforward. They can either create Pods or they can't.
+If they can't create pods, then please refer to the
+[instructions above](#debugging-pods) to debug your pods.

-You can also use `kubectl describe rc ${CONTROLLER_NAME}` to introspect events related to the replication
-controller.
+You can also use `kubectl describe rc ${CONTROLLER_NAME}` to introspect events
+related to the replication controller.
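
Spelled out as commands, a minimal sketch assuming `${CONTROLLER_NAME}` holds the ReplicationController's name:

```shell
# Desired vs. current vs. ready replica counts
kubectl get rc ${CONTROLLER_NAME}

# Events related to the replication controller, such as failed Pod creation
kubectl describe rc ${CONTROLLER_NAME}
```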

### Debugging Services

-Services provide load balancing across a set of pods. There are several common problems that can make Services
+Services provide load balancing across a set of pods. There are several common problems that can make Services
not work properly. The following instructions should help debug Service problems.

-First, verify that there are endpoints for the service. For every Service object, the apiserver makes an `endpoints` resource available.
+First, verify that there are endpoints for the service. For every Service object,
+the apiserver makes an `endpoints` resource available.

You can view this resource with:

@@ -124,8 +134,8 @@ IP addresses in the Service's endpoints.

#### My service is missing endpoints

-If you are missing endpoints, try listing pods using the labels that Service uses. Imagine that you have
-a Service where the labels are:
+If you are missing endpoints, try listing pods using the labels that Service uses.
+Imagine that you have a Service where the labels are:

```yaml
...
@@ -141,7 +151,7 @@ You can use:
kubectl get pods --selector=name=nginx,type=frontend
```

-to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.
+to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.
Verify that the pod's `containerPort` matches up with the Service's `targetPort`
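
A minimal sketch of the `containerPort`/`targetPort` check, assuming `${SERVICE_NAME}` and `${POD_NAME}` name the Service and one of its backing Pods:

```shell
# Port(s) the Service forwards traffic to
kubectl get service ${SERVICE_NAME} -o jsonpath='{.spec.ports[*].targetPort}'

# Port(s) the Pod's containers actually declare
kubectl get pod ${POD_NAME} -o jsonpath='{.spec.containers[*].ports[*].containerPort}'
```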

#### Network traffic is not forwarded
@@ -157,4 +167,3 @@ actually serving; you have DNS working, iptables rules installed, and kube-proxy
does not seem to be misbehaving.

You may also visit [troubleshooting document](/docs/tasks/debug/) for more information.
-

content/en/docs/tasks/debug/debug-cluster/local-debugging.md

Lines changed: 37 additions & 9 deletions
@@ -7,11 +7,20 @@ content_type: task

{{% thirdparty-content %}}

-Kubernetes applications usually consist of multiple, separate services, each running in its own container. Developing and debugging these services on a remote Kubernetes cluster can be cumbersome, requiring you to [get a shell on a running container](/docs/tasks/debug/debug-application/get-shell-running-container/) in order to run debugging tools.
+Kubernetes applications usually consist of multiple, separate services,
+each running in its own container. Developing and debugging these services
+on a remote Kubernetes cluster can be cumbersome, requiring you to
+[get a shell on a running container](/docs/tasks/debug/debug-application/get-shell-running-container/)
+in order to run debugging tools.
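
The linked task boils down to `kubectl exec`; a minimal sketch, assuming you know the target Pod's name and its image ships a shell:

```shell
# Open an interactive shell inside the pod to run debugging tools by hand
kubectl exec -it ${POD_NAME} -- /bin/sh
```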

-`telepresence` is a tool to ease the process of developing and debugging services locally while proxying the service to a remote Kubernetes cluster. Using `telepresence` allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.
+`telepresence` is a tool to ease the process of developing and debugging
+services locally while proxying the service to a remote Kubernetes cluster.
+Using `telepresence` allows you to use custom tools, such as a debugger and
+IDE, for a local service and provides the service full access to ConfigMap,
+secrets, and the services running on the remote cluster.

-This document describes using `telepresence` to develop and debug services running on a remote cluster locally.
+This document describes using `telepresence` to develop and debug services
+running on a remote cluster locally.

## {{% heading "prerequisites" %}}

@@ -24,7 +33,8 @@ This document describes using `telepresence` to develop and debug services runni

## Connecting your local machine to a remote Kubernetes cluster

-After installing `telepresence`, run `telepresence connect` to launch its Daemon and connect your local workstation to the cluster.
+After installing `telepresence`, run `telepresence connect` to launch
+its Daemon and connect your local workstation to the cluster.

```
$ telepresence connect
@@ -38,24 +48,42 @@ You can curl services using the Kubernetes syntax e.g. `curl -ik https://kuberne

## Developing or debugging an existing service

-When developing an application on Kubernetes, you typically program or debug a single service. The service might require access to other services for testing and debugging. One option is to use the continuous deployment pipeline, but even the fastest deployment pipeline introduces a delay in the program or debug cycle.
+When developing an application on Kubernetes, you typically program
+or debug a single service. The service might require access to other
+services for testing and debugging. One option is to use the continuous
+deployment pipeline, but even the fastest deployment pipeline introduces
+a delay in the program or debug cycle.

-Use the `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT` command to create an "intercept" for rerouting remote service traffic.
+Use the `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT`
+command to create an "intercept" for rerouting remote service traffic.

Where:

- `$SERVICE_NAME` is the name of your local service
- `$LOCAL_PORT` is the port that your service is running on your local workstation
- And `$REMOTE_PORT` is the port your service listens to in the cluster

-Running this command tells Telepresence to send remote traffic to your local service instead of the service in the remote Kubernetes cluster. Make edits to your service source code locally, save, and see the corresponding changes when accessing your remote application take effect immediately. You can also run your local service using a debugger or any other local development tool.
+Running this command tells Telepresence to send remote traffic to your
+local service instead of the service in the remote Kubernetes cluster.
+Make edits to your service source code locally, save, and see the corresponding
+changes when accessing your remote application take effect immediately.
+You can also run your local service using a debugger or any other local development tool.
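
Put together, a minimal sketch with hypothetical values (a service named `dataprocessingservice`, a local process on port 3000, and cluster port 8080); `telepresence list` is assumed here as a way to check active intercepts:

```shell
# Reroute the remote service's traffic to a process listening locally on port 3000
telepresence intercept dataprocessingservice --port 3000:8080

# Check which intercepts are currently active (assumed helper command)
telepresence list
```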

## How does Telepresence work?

-Telepresence installs a traffic-agent sidecar next to your existing application's container running in the remote cluster. It then captures all traffic requests going into the Pod, and instead of forwarding this to the application in the remote cluster, it routes all traffic (when you create a [global intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#global-intercept)) or a subset of the traffic (when you create a [personal intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#personal-intercept)) to your local development environment.
+Telepresence installs a traffic-agent sidecar next to your existing
+application's container running in the remote cluster. It then captures
+all traffic requests going into the Pod, and instead of forwarding this
+to the application in the remote cluster, it routes all traffic (when you
+create a [global intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#global-intercept))
+or a subset of the traffic (when you create a
+[personal intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#personal-intercept))
+to your local development environment.

## {{% heading "whatsnext" %}}

-If you're interested in a hands-on tutorial, check out [this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s) that walks through locally developing the Guestbook application on Google Kubernetes Engine.
+If you're interested in a hands-on tutorial, check out
+[this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s)
+that walks through locally developing the Guestbook application on Google Kubernetes Engine.

For further reading, visit the [Telepresence website](https://www.telepresence.io).
