# Preface
In a perfect world, every service you write would run smoothly, your test coverage would be at its best, and there would be no bugs in your API. Sadly, we all know that we can't achieve this world. It's not unusual for an API to contain a bug that you then have to debug in a production environment. We have faced this problem with our Go services in our Kubernetes cluster, and we want to show you how to remotely debug a Go service in a Kubernetes cluster.
## Software Prerequisites
For this scenario we need some software:

* [Visual Studio Code](https://code.visualstudio.com/download) (used version: 1.32.3)
We decided to use `kind` instead of `minikube`, since it is a very good tool for testing Kubernetes locally and it lets us use our Docker images without a Docker registry.
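For example, once the cluster exists, you can build the image locally and copy it straight into the kind nodes; the image and cluster names below are the ones used later in this guide, and the build context is assumed to be the repository root:

```sh
# build the service image locally
docker build -t setlog/debug-k8s:latest .

# copy it into the nodes of the kind cluster, no registry needed
kind load docker-image setlog/debug-k8s:latest --name local-debug-k8s
```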
## Big Picture
First we will briefly explain how it works. We start by creating a new Kubernetes cluster `local-debug-k8s` on our local system.
* You need a Docker container with [delve](https://github.com/go-delve/delve) (the Go debugger) as the main process (a sketch of such a container follows this list).
* The debugger delve needs access to the path with the project data. This is done by mounting `$GOPATH/src` into the pod running in the Kubernetes cluster.
* We start the delve container on port 30123 and bind this port to localhost, so that only our local debugger can communicate with delve.
* To debug an API with delve, it's necessary to set up an ingress network. For this we use port 8090.
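To make the first point concrete, here is a minimal sketch of a delve-based debug image. It is not the Dockerfile shipped with this repository; the Go version and the project path under `$GOPATH/src` are assumptions you would adapt to your service:

```dockerfile
FROM golang:1.12

# install the delve debugger
RUN go get -u github.com/go-delve/delve/cmd/dlv

# assumed project path; the sources arrive at runtime via the $GOPATH/src mount
WORKDIR /go/src/github.com/setlog/debug-k8s

# run delve headless as the main process, listening on the debug port used throughout this guide
CMD ["dlv", "debug", "--headless", "--listen=:30123", "--api-version=2", "--accept-multiclient"]
```

With the project path mounted in from your machine, `dlv debug` compiles the service inside the pod (which is where the `__debug_bin` file mentioned later comes from) and waits for a debugger to attach on port 30123.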
The following picture illustrates the communication:
```yaml
containerPath: /go/src # path to the project folder inside the worker node
```
Expected result:
```sh
Creating cluster "local-debug-k8s" ...
```

Activate the kube-context for `kubectl` to communicate with the new cluster.
#### Install nginx-ingress
For both ports (8090 and 30123) to work, it is necessary to deploy an nginx controller:
```sh
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s
```
#### Labelling the worker node
We suggest labelling a worker node where the pod is going to be deployed. By default, a pod is deployed on one of the several worker nodes you might have in the kind cluster. For that to work, the Docker image must be present on all worker nodes in the cluster (which takes time to set up). Otherwise, you can end up in a situation in which the pod is scheduled on a node where the Docker image is missing. Let's work with a dedicated node and save that time.
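For example, assuming the worker node is called `local-debug-k8s-worker` (the name that also shows up in the message below) and using the `debug` label that the deployment's _nodeSelector_ refers to later in this guide:

```sh
# mark one dedicated worker node as the debug target
kubectl label node local-debug-k8s-worker debug=true
```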
This message will be shown, and it is just saying that the image was not there:
Image: "setlog/debug-k8s:latest" with ID "sha256:944baa03d49698b9ca1f22e1ce87b801a20ce5aa52ccfc648a6c82cf8708a783" not present on node "local-debug-k8s-worker"
The interesting part here is:

```yaml
path: /go/src
```
Let's take a look at the full chain of mounting the local project path into the pod, since you will probably want to adjust it to your environment:

Check if your persistent volume claim has been successfully created (STATUS must be _Bound_):

```sh
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
go-pvc   Bound    go-pv    256Mi      RWO            hostpath       51s
```
You are ready to start the service in debug mode:
`kubectl create -f cluster/deploy-service.yaml`
Let's go through the deployment.
* The image name is what we loaded into the kind cluster with the command `kind load image...`. _imagePullPolicy_ must be set to _IfNotPresent_, because the image is already loaded and we don't want Kubernetes to try pulling it again.
```yaml
image: setlog/debug-k8s:latest
imagePullPolicy: IfNotPresent
```
```yaml
nodeSelector:
  debug: "true"
```
* Service _service-debug_ has the type _NodePort_ and exposes port 30123 on the worker node. This port matches the parameter _--listen=:30123_ in the Dockerfile, which makes it possible to send debug commands to the delve server (a sketch of such a service follows this list).
* Service _debug-k8s_ will be connected to the ingress server in the final step. It exposes the API endpoints we are going to debug.
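To illustrate the _service-debug_ part, a NodePort service exposing the delve port could look roughly like this; the selector label is an assumption, and the real definition is in `cluster/deploy-service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-debug
spec:
  type: NodePort
  selector:
    app: debug-k8s        # assumed pod label; check the deployment for the real one
  ports:
    - port: 30123
      targetPort: 30123   # the port delve listens on (--listen=:30123)
      nodePort: 30123     # exposed on the worker node and mapped to localhost by kind
```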
_Hint: create a new variable to store the pod name using `PODNAME=$(kubectl get pod -o jsonpath='{.items[0].metadata.name}')`. It can be helpful if you repeatedly debug the pod._
Usually it takes a couple of seconds to start the debugging process with delve. If your paths are mounted in the proper way, you will find the file `__debug_bin` in the project path on your computer. That is an executable which has been created by delve.
You can also check the pod's logs with `kubectl logs $PODNAME` to make sure the delve API server is listening on port 30123.
Output:
```sh
API server listening at: [::]:30123
```
_Hint: always wait until this log message is shown for this pod before you start the debugging process. Otherwise, the delve server is not up yet and cannot answer the debugger._
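For example, you can stream the log and wait for that line to appear:

```sh
kubectl logs -f $PODNAME
```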
### Starting the debug process via launch.json
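The repository contains the actual `launch.json`; as a sketch of what a remote-delve configuration typically looks like with the VS Code Go extension (the `remotePath` below is an assumed project path and has to match the location inside the container):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Remote debug in Kubernetes",
            "type": "go",
            "request": "launch",
            "mode": "remote",
            "program": "${workspaceFolder}",
            "remotePath": "/go/src/github.com/setlog/debug-k8s",
            "host": "127.0.0.1",
            "port": 30123
        }
    ]
}
```

Starting this configuration attaches the editor to the delve server behind port 30123, so breakpoints set locally are hit as soon as the API is called.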
We are ready to debug, but we have to trigger the API functions through the ingress:
`kubectl create -f cluster/ingress.yaml`
And try accessing it now:
`curl http://localhost:8090/hello`
This should trigger the debugger:
