
Commit 3992ed6

Merge pull request #298 from splunk/conf24
Conf24 merge
2 parents d52fce7 + 1429ef3

14 files changed: +297 −25 lines


content/en/conf24/1-zero-config-k8s/2-preparation/1-otel.md

Lines changed: 2 additions & 1 deletion
@@ -1,5 +1,6 @@
 ---
-title: Deploy Splunk OpenTelemetry Collector
+
+title: Deploy the Splunk OpenTelemetry Collector
 linkTitle: 1. Deploy OpenTelemetry Collector
 weight: 2
 ---

content/en/conf24/1-zero-config-k8s/3-verify-setup/_index.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ time: 10 minutes
 
 Once the installation has been completed, you can log in to **Splunk Observability Cloud** and verify that the metrics are flowing in from your Kubernetes cluster.
 
-From the left-hand menu click on **Infrastructure** ![infra](../images/infra-icon.png?classes=inline&height=25px) and select **Kubernetes**, then select the **K8s nodes** pane. Once you are in the **K8s nodes** view, change the **Time** filter from **-4h** to the last 15 minutes **(-15m)** to focus on the latest data.
+From the left-hand menu click on **Infrastructure** ![infra](../images/infra-icon.png?classes=inline&height=25px) and select **Kubernetes**, then select the **Kubernetes nodes** pane. Once you are in the **Kubernetes nodes** view, change the **Time** filter from **-4h** to the last 15 minutes **(-15m)** to focus on the latest data.
 
 Next, click **Add filters** (next to the **Time** filter) and add the filter `k8s.cluster.name` **(1)**. Type or select the cluster name of your workshop instance (you can get the unique part of your cluster name from the `INSTANCE` value in the output of the shell script you ran earlier). You can also select your cluster by clicking on its image in the cluster pane. You will now have only your cluster visible **(2)**.
 
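A hedged aside (not part of this commit): you can sanity-check the same things from the shell before opening the UI. The label selector below is an assumption about how the workshop's Helm chart labels the collector pods, and `INSTANCE` is assumed to have been exported by the earlier shell script.

```bash
# Sketch: confirm the collector pods are running before looking for metrics in the UI.
# The label selector is an assumption about the workshop's Helm chart.
kubectl get pods -l app=splunk-otel-collector

# Echo the value used in the k8s.cluster.name filter (assumed exported earlier).
echo "$INSTANCE"
```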

content/en/conf24/1-zero-config-k8s/4-apm/1-patching-deployment.md

Lines changed: 3 additions & 1 deletion
@@ -76,4 +76,6 @@ Navigate back to the Kubernetes Navigator in **Splunk Observability Cloud**. Aft
 
 ![restart](../../images/k8s-navigator-restarted-pods.png)
 
-Wait for the Pods to turn green in the Kubernetes Navigator, then go to **APM** ![APM](../../images/apm-icon.png?classes=inline&height=25px) to see the data generated by the traces from the newly instrumented services.
+Wait for the Pods to turn green in the Kubernetes Navigator, then go to the next section.
+
+
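If you prefer the command line to watching the Navigator, here is a minimal sketch (assuming `kubectl` access to the workshop cluster and that the services run in the current namespace):

```bash
# Sketch: block until all deployments in the current namespace report available,
# as an alternative to waiting for green pods in the Kubernetes Navigator.
kubectl wait --for=condition=available deployment --all --timeout=300s
```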

content/en/conf24/1-zero-config-k8s/4-apm/2-apm-data.md

Lines changed: 2 additions & 2 deletions
@@ -4,10 +4,10 @@ linkTitle: 2. Viewing APM Data
 weight: 2
 ---
 
-Change the **Environment** filter **(1)** to the name of your workshop instance in the dropdown box (this will be **`<INSTANCE>-workshop`**, where **`INSTANCE`** is the value from the shell script you ran earlier) and make sure it is the only one selected.
+Log in to Splunk Observability Cloud and, from the left-hand menu, click on **APM** ![APM](../../images/apm-icon.png?classes=inline&height=25px) to see the data generated by the traces from the newly instrumented services. Change the **Environment** filter **(1)** to the name of your workshop instance in the dropdown box (this will be **`<INSTANCE>-workshop`**, where **`INSTANCE`** is the value from the shell script you ran earlier) and make sure it is the only one selected.
 
 ![apm](../../images/zero-config-first-services-overview.png)
 
 You will see the name **(2)** of the **api-gateway** service and metrics in the Latency and Request & Errors charts (you can ignore the Critical Alert, as it is caused by the sudden request increase generated by the load generator). You will also see the rest of the services appear.
 
-We will visit the **Service Map** **(3)** in the next section.
+Once you see the Customers, Vets, and Visits services as shown in the screenshot above, click on the **Service Map** **(3)** pane to get ready for the next section.

content/en/conf24/1-zero-config-k8s/5-traces/1-service-map.md

Lines changed: 2 additions & 2 deletions
@@ -6,13 +6,13 @@ weight: 1
 
 ![apm map](../../images/zero-config-first-services-map.png)
 
-The above shows all the interactions between all of the services. The map may still be in an interim state, as it will take the Petclinic microservices application a few minutes to start up and fully synchronize. Reducing the time filter to a custom time of **2 minutes** will help. The initial startup-related errors (red dots) will eventually disappear.
+The above map shows all the interactions between all of the services. The map may still be in an interim state, as it will take the Petclinic microservices application a few minutes to start up and fully synchronize. Reducing the time filter to a custom time of **2 minutes** will help. The initial startup-related errors (red dots) will eventually disappear.
 
 Next, let's examine the metrics that are available for each instrumented service and visit the request, error, and duration (RED) metrics dashboard.
 
 For this exercise we are going to use a common scenario you would follow if a service operation was showing high latency or errors, for example.
 
-Select the **Customer Service** in the Dependency map **(1)**, then make sure the `customers-service` is selected in the **Services** dropdown box **(2)**. Next, select `GET /Owners` from the **Operations** dropdown **(3)**.
+Select (click on) the **Customer Service** in the Dependency map **(1)**, then make sure the `customers-service` is selected in the **Services** dropdown box **(2)**. Next, select `GET /Owners` from the **Operations** dropdown **(3)**.
 
 This should give you the workflow with a filter on `GET /owners` **(1)** as shown below.
 

content/en/conf24/1-zero-config-k8s/5-traces/4-red-metrics.md

Lines changed: 2 additions & 2 deletions
@@ -8,6 +8,6 @@ Splunk APM provide **Service Centric Views** that provide engineers a deep under
 
 To see this dashboard for the `api-gateway`, make sure you have the `api-gateway` service selected in the Service Map, then click on the **View Service** button at the top of the right-hand pane. This will bring you to the Service Centric View dashboard:
 
-This view, which is available for each of your instrumented services, offers an overview of **Service metrics**, **Runtime metrics** and **Infrastruture metrics**.
+This view, which is available for each of your instrumented services, offers an overview of **Service metrics**, **Runtime metrics** and **Infrastructure metrics**.
 
-![metrics dashboard](../../images/service-centric-view.png)
+You can use the **Back** function of your browser to go back to the previous view.

content/en/conf24/1-zero-config-k8s/6-profiling-db-query/2-waterfall.md

Lines changed: 3 additions & 2 deletions
@@ -22,8 +22,9 @@ This will bring you to the Always-on Profiling main screen, with the Memory view
 * Java Function calls identified **(3)**, allowing you to drill down into the Methods called from that function.
 * The Flame Graph **(4)**, with the visualization of hierarchy based on the stack traces of the profiled service.
 
-Once you have identified the relevant Function or Method you are interested in, `com.mysql.cj.protocol.a.NativePacketPayload.readBytes` in our example (but yours may differ), pick the top one **(1)** and find it at the bottom of the Flame Graph **(2)**. Click on it in the Flame Graph; it will show a pane as in the image below, where you can see the Thread information **(3)** by clicking on the blue *Show Thread Info* link. If you click on the *Copy Stack Trace* **(4)** button, you grab the actual stack trace that you can use in your coding platform to go to the actual lines of code used at this point (depending, of course, on your preferred coding platform).
+For further investigation, the UI lets you grab the actual stack trace, so you can use it in your coding platform to go to the actual lines of code used at this point (depending, of course, on your preferred coding platform).
+<!-- Once you have identified the relevant Function or Method you are interested in, `com.mysql.cj.protocol.a.NativePacketPayload.readBytes` in our example (but yours may differ), pick the top one **(1)** and find it at the bottom of the Flame Graph **(2)**. Click on it in the Flame Graph; it will show a pane as in the image below, where you can see the Thread information **(3)** by clicking on the blue *Show Thread Info* link. If you click on the *Copy Stack Trace* **(4)** button, you grab the actual stack trace that you can use in your coding platform to go to the actual lines of code used at this point (depending, of course, on your preferred coding platform).
 
 ![stack trace](../../images/grab-stack-trace.png)
 
-For more details on Profiling, check the **Debug Problems workshop**, or check the documentation [here](https://docs.splunk.com/observability/en/apm/profiling/intro-profiling.html#introduction-to-alwayson-profiling-for-splunk-apm)
+For more details on Profiling, check the **Debug Problems workshop**, or check the documentation [here](https://docs.splunk.com/observability/en/apm/profiling/intro-profiling.html#introduction-to-alwayson-profiling-for-splunk-apm) -->
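For illustration only (a hypothetical frame, not taken from this commit): each line of a copied stack trace names the class, method, source file, and line number that your IDE, or even a plain shell search, can jump to.

```bash
# Hypothetical example of one frame from a copied stack trace (yours will differ):
#   at com.mysql.cj.protocol.a.NativePacketPayload.readBytes(NativePacketPayload.java:340)
# The FileName.java:NNN suffix is what an IDE uses to open the exact line;
# from a shell, you could locate the method in a checked-out source tree with:
grep -rn "readBytes" --include="*.java" .
```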

content/en/conf24/1-zero-config-k8s/7-log-observer-connect/1-configure-logback.md

Lines changed: 32 additions & 1 deletion
@@ -67,6 +67,37 @@ Script execution completed.
 
 We can verify if the replacement has been successful by examining the `logback-spring.xml` file from one of the services:
 
-```bash
+{{< tabs >}}
+{{% tab title="cat logback-spring.xml" %}}
+
+```bash
 cat /home/splunk/spring-petclinic-microservices/spring-petclinic-customers-service/src/main/resources/logback-spring.xml
 ```
+
+{{% /tab %}}
+{{% tab title="Output" %}}
+
+```text
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
+    <encoder>
+      <pattern>
+        logback: %d{HH:mm:ss.SSS} [%thread] severity=%-5level %logger{36} - trace_id=%X{trace_id} span_id=%X{span_id} service.name=%property{otel.resource.service.name} trace_flags=%X{trace_flags} - %msg %kvp{DOUBLE}%n
+      </pattern>
+    </encoder>
+  </appender>
+  <appender name="OpenTelemetry"
+    class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender">
+    <captureExperimentalAttributes>true</captureExperimentalAttributes>
+    <captureKeyValuePairAttributes>true</captureKeyValuePairAttributes>
+  </appender>
+  <root level="INFO">
+    <appender-ref ref="console"/>
+    <appender-ref ref="OpenTelemetry"/>
+  </root>
+</configuration>
+```
+
+{{% /tab %}}
+{{< /tabs >}}
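A hedged extension (not part of this commit): the same check can cover every service in one command, assuming the repository layout used by the `cat` command above.

```bash
# Sketch: list every logback-spring.xml that now references the OTel appender.
# Assumes the repository layout shown in the cat command above.
grep -l "OpenTelemetryAppender" \
  /home/splunk/spring-petclinic-microservices/*/src/main/resources/logback-spring.xml
```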

content/en/conf24/1-zero-config-k8s/7-log-observer-connect/2-rebuild-services.md

Lines changed: 20 additions & 1 deletion
@@ -4,12 +4,31 @@ linkTitle: 2. Rebuild PetClinic
 weight: 2
 ---
 
-Before we can build the new services with the updated log format we need to add the Opentelemetry dependency that handles field injection to the `pom.xml` of our services:
+Before we can build the new services with the updated log format, we need to add the OpenTelemetry dependency that handles field injection to the `pom.xml` of our services:
+{{< tabs >}}
+{{% tab title="Adding OTel dependencies" %}}
 
 ```bash
 . ~/workshop/petclinic/scripts/add_otel.sh
 ```
 
+{{% /tab %}}
+{{% tab title="Output" %}}
+
+```text
+Dependencies added successfully in spring-petclinic-admin-server
+Dependencies added successfully in spring-petclinic-api-gateway
+Dependencies added successfully in spring-petclinic-config-server
+Dependencies added successfully in spring-petclinic-discovery-server
+Dependencies added successfully in spring-petclinic-customers-service
+Dependencies added successfully in spring-petclinic-vets-service
+Dependencies added successfully in spring-petclinic-visits-service
+Dependency addition complete!
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
 The Services are now ready to be built, so run the script that will use the `maven` command to compile/build/package the PetClinic microservices:
 
 {{% notice note %}}
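Two hedged checks you could run at this point (neither is part of the commit; the grep pattern and Maven flags are assumptions): confirm the dependency landed in a service's `pom.xml`, and build a single service by hand instead of via the script.

```bash
# Sketch: look for the injected OpenTelemetry dependency in one service's pom.xml
# (the "opentelemetry" pattern is an assumption about the artifact's name).
grep -n "opentelemetry" \
  /home/splunk/spring-petclinic-microservices/spring-petclinic-customers-service/pom.xml

# Sketch: compile/package one service manually rather than running the build script.
mvn -q clean package -DskipTests \
  -f /home/splunk/spring-petclinic-microservices/spring-petclinic-customers-service/pom.xml
```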

content/en/conf24/1-zero-config-k8s/7-log-observer-connect/3-deploy.md

Lines changed: 107 additions & 8 deletions
@@ -6,22 +6,99 @@
 
 To see the changes in effect, we need to redeploy the services. First, let's change the location of the images from the external repo to the local one by running the following script:
 
+{{< tabs >}}
+{{% tab title="Change deployment to local containers" %}}
+
 ```bash
 . ~/workshop/petclinic/scripts/set_local.sh
 ```
 
+{{% /tab %}}
+{{% tab title="Output" %}}
+
+```text
+Script execution completed. Modified content saved to /home/splunk/workshop/petclinic/petclinic-local.yaml
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
 The result is a new file on disk called `petclinic-local.yaml`. Switch to the local versions by using the new version of the deployment YAML. First, delete the old containers from the original deployment with:
 
+{{< tabs >}}
+{{% tab title="Deleting remote Petclinic services" %}}
+
 ```bash
 kubectl delete -f ~/workshop/petclinic/petclinic-deploy.yaml
 ```
 
+{{% /tab %}}
+{{% tab title="Output" %}}
+
+```text
+deployment.apps "config-server" deleted
+service "config-server" deleted
+deployment.apps "discovery-server" deleted
+service "discovery-server" deleted
+deployment.apps "api-gateway" deleted
+service "api-gateway" deleted
+service "api-gateway-external" deleted
+deployment.apps "customers-service" deleted
+service "customers-service" deleted
+deployment.apps "vets-service" deleted
+service "vets-service" deleted
+deployment.apps "visits-service" deleted
+service "visits-service" deleted
+deployment.apps "admin-server" deleted
+service "admin-server" deleted
+service "petclinic-db" deleted
+deployment.apps "petclinic-db" deleted
+configmap "petclinic-db-initdb-config" deleted
+deployment.apps "petclinic-loadgen-deployment" deleted
+configmap "scriptfile" deleted
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
 followed by:
 
+{{< tabs >}}
+{{% tab title="Starting local Petclinic services" %}}
+
 ```bash
 kubectl apply -f ~/workshop/petclinic/petclinic-local.yaml
 ```
 
+{{% /tab %}}
+{{% tab title="Output" %}}
+
+```text
+deployment.apps/config-server created
+service/config-server created
+deployment.apps/discovery-server created
+service/discovery-server created
+deployment.apps/api-gateway created
+service/api-gateway created
+service/api-gateway-external created
+deployment.apps/customers-service created
+service/customers-service created
+deployment.apps/vets-service created
+service/vets-service created
+deployment.apps/visits-service created
+service/visits-service created
+deployment.apps/admin-server created
+service/admin-server created
+service/petclinic-db created
+deployment.apps/petclinic-db created
+configmap/petclinic-db-initdb-config created
+deployment.apps/petclinic-loadgen-deployment created
+configmap/scriptfile created
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
 This will cause the containers to be replaced with the local version; you can verify this by checking the containers:
 
 ```bash
@@ -67,25 +144,47 @@ deployment.apps/api-gateway patched
 
 Check the `api-gateway` container (again, if you see two `api-gateway` containers, it's the old container being terminated, so give it a few seconds):
 
-{{< tabs >}}
-{{% tab title="Check Container" %}}
-
 ```bash
 kubectl describe pods api-gateway | grep Image:
 ```
 
-{{% /tab %}}
-{{% tab title="Output" %}}
+The resulting output will show the local api-gateway version `localhost:9999` and the auto-instrumentation container:
 
 ```text
+Image: ghcr.io/signalfx/splunk-otel-java/splunk-otel-java:v1.32.1
 Image: localhost:9999/spring-petclinic-api-gateway:local
 ```
 
-{{% /tab %}}
-{{< /tabs >}}
-
 Now that the Pods have been patched, validate they are all running by executing the following command:
 
+{{< tabs >}}
+{{% tab title="Checking if all Pods are running" %}}
+
 ```bash
 kubectl get pods
 ```
+
+{{% /tab %}}
+{{% tab title="Output" %}}
+
+```text
+NAME                                                           READY   STATUS    RESTARTS   AGE
+splunk-otel-collector-certmanager-cainjector-cd8459647-d42ls   1/1     Running   0          22h
+splunk-otel-collector-certmanager-85cbb786b6-xgjgb             1/1     Running   0          22h
+splunk-otel-collector-certmanager-webhook-75d888f9f7-477x4     1/1     Running   0          22h
+splunk-otel-collector-agent-nmmkm                              1/1     Running   0          22h
+splunk-otel-collector-k8s-cluster-receiver-7f96c94fd9-fv4p8    1/1     Running   0          22h
+splunk-otel-collector-operator-6b56bc9d79-r8p7w                2/2     Running   0          22h
+petclinic-loadgen-deployment-765b96d4b9-gm8fp                  1/1     Running   0          21h
+petclinic-db-774dbbf969-2q6md                                  1/1     Running   0          21h
+config-server-5784c9fbb4-9pdc8                                 1/1     Running   0          21h
+admin-server-849d877b6-pncr2                                   1/1     Running   0          21h
+discovery-server-6d856d978b-7x69f                              1/1     Running   0          21h
+visits-service-c7cd56876-grfn7                                 1/1     Running   0          21h
+customers-service-6c57cb68fd-hx68n                             1/1     Running   0          21h
+vets-service-688fd4cb47-z42t5                                  1/1     Running   0          21h
+api-gateway-59f4c7fbd6-prx5f                                   1/1     Running   0          20h
+```
+
+{{% /tab %}}
+{{< /tabs >}}
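As a final hedged check (not part of the commit): you can confirm in one pass that every PetClinic pod now runs a locally built image; the jsonpath expression below assumes the pod and image naming shown in the outputs above.

```bash
# Sketch: print each pod with its container images, keeping only locally built ones.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' \
  | grep 'localhost:9999'
```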
