Commit bba4a1b (parent 100a3fa): Release 1.0
File tree: 8 files changed, +97/-29 lines

content/en/other/3-auto-instrumentation/3-java-microservices-k8s/60-log-observer-connect.md

Lines changed: 97 additions & 29 deletions
This change will configure the Spring PetClinic application to use an Otel-based …

The Splunk Log Observer component is used to view the logs, and with this information it can automatically relate log entries to APM services and traces. This feature, called **Related Content**, also works with Infrastructure.

Let's grab the actual code for the application now.

## 2. Downloading the Spring Microservices PetClinic Application

For this exercise, we will use the Spring Microservices PetClinic application. This is a very popular sample Java application built with the Spring framework (Spring Boot), and we are using a version with actual microservices.

First, clone the PetClinic GitHub repository, as we will need it later in the workshop to compile, build, package and containerize the application:

```bash
cd ~; git clone https://github.com/hagen-p/spring-petclinic-microservices.git
```

Then change into the `spring-petclinic-microservices` directory:

```bash
cd ~/spring-petclinic-microservices
```
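
If you want a quick look around first, listing the checkout should show one Maven module per microservice (a sketch; the module names are assumed from the services used later in this workshop):

```bash
# List the checked-out repository; expect one spring-petclinic-* folder per service
ls ~/spring-petclinic-microservices
```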
## 3. Update Logback config for the services

The Spring PetClinic application can be configured to use several different Java logging libraries. In this scenario, the application uses `logback`. To make sure we get the OpenTelemetry information in the logs, we need to update a file named `logback.xml` with the log structure, and add an OTel dependency to the `pom.xml` of each of the services in the PetClinic microservices folders.

Note the following entries that will be added:

- trace_flags
- service.name
- deployment.environment

These fields allow **Splunk Observability Cloud** to display **Related Content** when used in a pattern like the one shown below:

```xml
<pattern>
logback: %d{HH:mm:ss.SSS} [%thread] severity=%-5level %logger{36} - trace_id=%X{trace_id} span_id=%X{span_id} service.name=%property{otel.resource.service.name} trace_flags=%X{trace_flags} - %msg %kvp{DOUBLE}%n
</pattern>
```
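
With this pattern in place, a rendered log line looks roughly like the following (all values here are illustrative; the real trace and span IDs are injected at runtime):

```text
logback: 10:15:42.037 [http-nio-8081-exec-1] severity=INFO  o.s.s.p.c.web.OwnerResource - trace_id=08b5ce63e46ddd0ce07cf8cfd2d6161a span_id=9f3c55a6a9c2bf11 service.name=customers-service trace_flags=01 - Retrieving owner 12
```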

So let's run the script that will update the files with the log structure in the format above:

{{< tabs >}}
{{% tab title="Update Logback files" %}}
{{% /tab %}}
{{< /tabs >}}

We can verify that the replacement has been successful by examining the `logback-spring.xml` file of the customers service:

```bash
cat /home/splunk/spring-petclinic-microservices/spring-petclinic-customers-service/src/main/resources/logback-spring.xml
```
## 4. Reconfigure and build the services locally

Before we can build the new services with the updated log format, we need to add the OpenTelemetry dependency that handles field injection to the `pom.xml` of each of our services.
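
For reference, the kind of `pom.xml` entry involved looks like the sketch below, assuming the upstream `opentelemetry-logback-mdc-1.0` artifact (which copies trace context into logback's MDC); the exact coordinates and version added by the script may differ:

```xml
<!-- Illustrative only: the workshop's add_otel.sh script performs the real edit -->
<dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-logback-mdc-1.0</artifactId>
    <!-- version illustrative; check the script for the one actually used -->
    <version>1.32.0-alpha</version>
</dependency>
```

Run the script that adds the dependency to each service:
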
```bash
. ~/workshop/petclinic/scripts/add_otel.sh
```

{{< tabs >}}
{{% tab title="Build output" %}}

```text
Successfully tagged quay.io/phagen/spring-petclinic-api-gateway:latest
```

{{% /tab %}}
{{< /tabs >}}

Given that Kubernetes needs to pull these freshly built images from somewhere, we are going to store them in the repository we tested earlier. To do this, run the script that will push the newly built containers into our local repository:

{{< tabs >}}
{{% tab title="pushing Containers" %}}

```text
local: digest: sha256:3601c6e7f58224001946058fb0400483fbb8f1b0ea8a6dbaf403c62b4c
```

{{% /tab %}}
{{< /tabs >}}

The containers should now be stored in the local repository. Let's confirm this by checking the catalog:

```bash
curl -X GET http://localhost:9999/v2/_catalog
```
The result should look similar to this:
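
The registry's `_catalog` endpoint returns a small JSON document listing the stored repositories; the repository names below are assumed from the services built above:

```text
{"repositories":["spring-petclinic-admin-server","spring-petclinic-api-gateway","spring-petclinic-config-server","spring-petclinic-customers-service","spring-petclinic-discovery-server","spring-petclinic-vets-service","spring-petclinic-visits-service"]}
```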
## 5. Deploy new services to Kubernetes

To see the changes take effect, we need to redeploy the services. First, let's change the location of the images from the external repo to the local one by running the following script:

```bash
. ~/workshop/petclinic/scripts/set_local.sh
```

The result is a new file on disk called **petclinic-local.yaml**.
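
You can spot-check the rewritten image locations in the generated file, for example (illustrative):

```bash
# Show the unique image references in the generated manifest;
# they should now point at the local registry on localhost:9999
grep 'image:' ~/workshop/petclinic/petclinic-local.yaml | sort -u
```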
Let's switch to the local versions by using the new version of the deployment YAML. First, delete the old containers from the original deployment with:

```bash
kubectl delete -f ~/workshop/petclinic/petclinic-local.yaml
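```

Then switch over by applying the local version of the manifest; a sketch, assuming the generated file is applied directly rather than via a wrapper script:

```bash
kubectl apply -f ~/workshop/petclinic/petclinic-local.yaml
```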

This will cause the containers to be replaced with the local versions. You can verify this with:

```bash
kubectl describe pods api-gateway |grep Image:
```

The resulting output should now say `localhost:9999`:

```text
Image: localhost:9999/spring-petclinic-api-gateway:local
```

However, as we only patched the deployment before, the new deployment does not have the right annotations for zero-config auto-instrumentation, so let's fix that now by running the patch command again:

Note: there will be no change for the *config-server* and *discovery-server*, as they already have the annotation included in their deployments.

{{< tabs >}}
{{% tab title="Patch all Petclinic services" %}}
```bash
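# For every deployment that is part of the PetClinic app (selected by label),
# patch the pod template with the operator annotation that triggers
# Java zero-config auto-instrumentation injection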
kubectl get deployments -l app.kubernetes.io/part-of=spring-petclinic -o name | xargs -I % kubectl patch % -p "{\"spec\": {\"template\":{\"metadata\":{\"annotations\":{\"instrumentation.opentelemetry.io/inject-java\":\"default/splunk-otel-collector\"}}}}}"
```
{{% /tab %}}
{{% tab title="kubectl patch Output" %}}
```text
deployment.apps/config-server patched (no change)
deployment.apps/admin-server patched
deployment.apps/customers-service patched
deployment.apps/visits-service patched
deployment.apps/discovery-server patched (no change)
deployment.apps/vets-service patched
deployment.apps/api-gateway patched
```
{{% /tab %}}
{{< /tabs >}}
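
After patching, you can also confirm the annotation landed on a deployment from the command line; a quick check (illustrative) is:

```bash
# Print the pod-template annotations of the api-gateway deployment;
# instrumentation.opentelemetry.io/inject-java should be listed
kubectl get deployment api-gateway -o jsonpath='{.spec.template.metadata.annotations}'
```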

Let's check the `api-gateway` container again:

```bash
kubectl describe pods api-gateway |grep Image:
```

The resulting output should say (again, if you see double, it's the old container being terminated; give it a few seconds):

```text
Image: ghcr.io/signalfx/splunk-otel-java/splunk-otel-java:v1.30.0
Image: localhost:9999/spring-petclinic-api-gateway:local
```
## 6. View Logs

Once the containers are patched, they will be restarted. Let's go back to **Splunk Observability Cloud**, using the URL provided by the instructor, to check our cluster in the Kubernetes Navigator.

After a couple of minutes or so, you should see that the Pods are being restarted by the operator and the zero-config container will be added. This will look similar to the screenshot below:

![restart](../images/k8s-navigator-restarted-pods.png)
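
If you prefer the terminal over the navigator, the same restart can be watched with the label selector used for the patch above (a sketch):

```bash
# Watch the PetClinic pods roll; -w streams updates until you Ctrl-C
kubectl get pods -l app.kubernetes.io/part-of=spring-petclinic -w
```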

Wait for the pods to turn green again (you may want to refresh the screen), then from the left-hand menu click on **Log Observer** ![Logo](../images/logo-icon.png?classes=inline&height=25px) and ensure **Index** is set to **splunk4rookies-workshop**.

Next, click **Add Filter**, search for the field `deployment.environment`, select the value for your workshop (remember the INSTANCE value?) and click `=` (include). You should now see only the log messages from your PetClinic application.

Next, search for the field `service_name`, select the value `customers-service` and click `=` (include). Now the log lines should be reduced to just the ones from your `customers-service`.

Wait for log lines to show up with an injected trace_id, like `trace_id=08b5ce63e46ddd0ce07cf8cfd2d6161a`, as shown below **(1)**:

![Log Observer](../images/log-observer-trace-info.png)

Click on a line with an injected trace_id; these should be all the log lines created by your services that are part of a trace **(1)**.
A side pane opens where you can see the related information about your logs, including the relevant trace and span IDs **(2)**.

Also, at the bottom next to APM, there should be a number; this is the number of related APM content items for this log line. Click on the APM pane **(1)** as shown below:

![RC](../images/log-apm-rc.png)

- The *Map for customers-service* **(2)** brings us to the APM dependency map with the workflow focused on the customers-service, allowing you to quickly understand how this log line is related to the overall flow of service interactions.
- The *Trace for 34c98cbf7b300ef3dedab49da71a6ce3* **(3)** will bring us to the waterfall in APM for the specific trace in which this log line was generated.

As a last exercise, click on the *Trace for* link; this will bring you to the waterfall for this specific trace:

![waterfall logs](../images/waterfall-with-logs.png)

Note that a **Logs** Related Content pane **(1)** now appears; clicking on it will bring you back to Log Observer with all the log lines that are part of this trace.
This will help you to quickly find the relevant log lines for an interaction or a problem.
## 7. Summary
This is the end of the workshop and we have certainly covered a lot of ground. At this point, you should have metrics, traces, logs, database query performance and code profiling being reported into Splunk Observability Cloud.
**Congratulations!**
<!--
docker system prune -a --volumes