`content/en/other/3-auto-instrumentation/3-java-microservices-k8s/60-log-observer-connect.md`
This change will configure the Spring PetClinic application to use an OTel-based logging format.

The Splunk Log Observer component is used to view the logs, and with this information it can automatically relate log information to APM services and traces. This feature, called **Related Content**, also works with Infrastructure.
Let's grab the actual code for the application now.

## 2. Downloading the Spring Microservices PetClinic Application

For this exercise, we will use the Spring microservices PetClinic application. This is a very popular sample Java application built with the Spring framework (Spring Boot), and we are using a version with actual microservices.

First, clone the PetClinic GitHub repository, as we will need it later in the workshop to compile, build, package and containerize the application:
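The clone command itself was not captured in this diff; a minimal sketch, assuming the upstream `spring-petclinic-microservices` repository (your workshop materials may point at a fork with these changes pre-staged):

```bash
# Assumption: the upstream Spring PetClinic microservices repo;
# substitute the URL from your workshop notes if it differs.
git clone https://github.com/spring-petclinic/spring-petclinic-microservices.git
cd spring-petclinic-microservices
```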
## 3. Update Logback config for the services

The Spring PetClinic application can be configured to use several different Java logging libraries. In this scenario, the application is using `logback`. To make sure we get the OTel information in the logs, we need to update a file named `logback.xml` with the log structure, and add an OTel dependency to the `pom.xml` of each of the services in the PetClinic microservices folders.

Note the following entries that will be added:

- trace_flags
- service.name
- deployment.environment
These fields allow the **Splunk Observability Cloud Suite** to display **Related Content** when used in the pattern shown below:
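The pattern itself was elided from this diff; a hypothetical sketch of what such a `logback.xml` pattern typically looks like, assuming the OpenTelemetry Logback MDC instrumentation populates the MDC, and that `service.name` and `deployment.environment` are supplied as logback properties (the workshop script may differ):

```xml
<!-- Hypothetical logback.xml excerpt: trace_id, span_id and trace_flags are
     read from the MDC via %X{...}; the resource attributes are assumed to be
     passed in as logback properties by the build/deploy scripts. -->
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>severity=%level trace_id=%X{trace_id} span_id=%X{span_id} trace_flags=%X{trace_flags} service.name=${service.name} deployment.environment=${deployment.environment} %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
```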
Before we can build the new services with the updated log format, we need to add the OpenTelemetry dependency that handles field injection to the `pom.xml` of our services:
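The dependency itself is not shown in this diff; a sketch assuming the OpenTelemetry Logback MDC instrumentation artifact is the one in play (the version below is illustrative):

```xml
<!-- Assumption: the OpenTelemetry Logback MDC instrumentation, which injects
     trace_id/span_id/trace_flags into the logging MDC. -->
<dependency>
  <groupId>io.opentelemetry.instrumentation</groupId>
  <artifactId>opentelemetry-logback-mdc-1.0</artifactId>
  <version>1.28.0-alpha</version>
</dependency>
```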
Given that Kubernetes needs to pull these freshly built images from somewhere, we are going to store them in the repository we tested earlier. To do this, run the script that will push the newly built containers into our local repository:
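The push script is not reproduced in this diff; as an illustration of what it does, assuming the local registry from the earlier step listens on `localhost:9999`:

```bash
# Illustration only: tag each freshly built image for the local registry and
# push it. The real script name, image list and tags come from the workshop.
for svc in api-gateway customers-service vets-service visits-service; do
  docker tag "spring-petclinic-${svc}:latest" "localhost:9999/spring-petclinic-${svc}:local"
  docker push "localhost:9999/spring-petclinic-${svc}:local"
done
```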
The containers should now be stored in the local repository; let's confirm by checking the catalog:
```bash
curl -X GET http://localhost:9999/v2/_catalog
```

The result should be:
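The expected output was elided here; a Docker Registry v2 catalog response has the following shape (the repository names below are assumptions based on the services in this workshop):

```text
{"repositories":["spring-petclinic-admin-server","spring-petclinic-api-gateway","spring-petclinic-customers-service","spring-petclinic-vets-service","spring-petclinic-visits-service"]}
```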
## 5. Deploy new services to Kubernetes

To see the changes in effect, we need to redeploy the services. First, let's change the location of the images from the external repo to the local one by running the following script:
```bash
. ~/workshop/petclinic/scripts/set_local.sh
```

The result is a new file on disk called **petclinic-local.yaml**.
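As an optional sanity check (the file path is an assumption), you can confirm that the new file points every service at the local registry:

```bash
# Every image: line should now reference the localhost:9999 registry.
grep 'image:' ~/workshop/petclinic/petclinic-local.yaml
```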
Let's switch to the local versions by using the new version of the deployment YAML. First, delete the old containers from the original deployment with:
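The exact commands were elided from this diff; a sketch assuming the YAML file names from the previous step:

```bash
# Assumption: the original deployment file sits next to the generated one.
kubectl delete -f ~/workshop/petclinic/petclinic.yaml
kubectl apply -f ~/workshop/petclinic/petclinic-local.yaml
```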
However, as we only patched the deployment before, the new deployment does not have the right annotations for zero-config auto-instrumentation, so let's fix that now by running the patch command again:
Note: there will be no change for the *config-server* and *discovery-server*, as they already have the annotation included in the deployment.
{{< tabs >}}
{{% tab title="Patch all PetClinic services" %}}

```bash
kubectl get deployments -l app.kubernetes.io/part-of=spring-petclinic -o name | xargs -I % kubectl patch % -p "{\"spec\": {\"template\":{\"metadata\":{\"annotations\":{\"instrumentation.opentelemetry.io/inject-java\":\"default/splunk-otel-collector\"}}}}}"
```

{{% /tab %}}
{{% tab title="kubectl patch Output" %}}

```text
deployment.apps/config-server patched (no change)
deployment.apps/admin-server patched
deployment.apps/customers-service patched
deployment.apps/visits-service patched
deployment.apps/discovery-server patched (no change)
deployment.apps/vets-service patched
deployment.apps/api-gateway patched
```

{{% /tab %}}
{{< /tabs >}}
Let's check the `api-gateway` container again:
```bash
kubectl describe pods api-gateway | grep Image:
```

The resulting output should say (again, if you see double, it's the old container being terminated; give it a few seconds):
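The expected output was elided here; with the operator's injection in place you should see two `Image:` lines, roughly like the sketch below (image names and tags are assumptions):

```text
Image: ghcr.io/signalfx/splunk-otel-java/splunk-otel-java:v1.30.0
Image: localhost:9999/spring-petclinic-api-gateway:local
```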
Once the containers are patched, they will be restarted. Let's go back to **Splunk Observability Cloud** with the URL provided by the instructor to check our cluster in the Kubernetes Navigator.

After a couple of minutes or so, you should see that the pods are being restarted by the operator and the zero-config auto-instrumentation container will be added.

This will look similar to the screenshot below:
Wait for the pods to turn green again (you may want to refresh the screen), then from the left-hand menu click on **Log Observer** and ensure **Index** is set to **splunk4rookies-workshop**.

Next, click **Add Filter**, search for the field `deployment.environment`, select the value for your workshop (remember the INSTANCE value?) and click `=` (include). You should now see only the log messages from your PetClinic application.

Next, search for the field `service_name`, select the value `customers-service` and click `=` (include). Now the log lines should be reduced to just the ones from your `customers-service`.

Wait for log lines to show up with an injected trace_id, like `trace_id=08b5ce63e46ddd0ce07cf8cfd2d6161a`, as shown below **(1)**:
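For reference, a log line with the injected fields looks roughly like the sketch below (the span_id value and exact field layout are illustrative):

```text
{"severity":"info","service.name":"customers-service","trace_id":"08b5ce63e46ddd0ce07cf8cfd2d6161a","span_id":"f23dddd9499dc2a5","trace_flags":"01","msg":"@owner details page"}
```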
Click on a line with an injected trace_id; these should be all the log lines created by your services that are part of a trace **(1)**.
A side pane opens where you can see the related information about your logs, including the relevant trace and span IDs **(2)**.

Also, at the bottom next to APM, there should be a number; this is the number of related APM content items for this log line. Click on the APM pane **(1)** as shown below:


- The *Map for customers-service* **(2)** brings us to the APM dependency map with the workflow focused on the customers-service, allowing you to quickly understand how this log line is related to the overall flow of service interactions.
- The *Trace for 34c98cbf7b300ef3dedab49da71a6ce3* **(3)** will bring us to the waterfall in APM for the specific trace that this log line was generated in.

As a last exercise, click on the *Trace for* link; this will bring you to the waterfall for this specific trace.

Note that a **Logs** Related Content pane **(1)** now appears; clicking on this will bring you back to Log Observer with all the log lines that are part of this trace.
This will help you to quickly find relevant log lines for an interaction or a problem.
## 7. Summary

This is the end of the workshop, and we have certainly covered a lot of ground. At this point, you should have metrics, traces, logs, database query performance and code profiling being reported into Splunk Observability Cloud.