Commit 8952cda

fixes for flow and scripts
1 parent ed65594 commit 8952cda

5 files changed: +96 -67 lines changed
content/en/other/3-auto-instrumentation/3-java-microservices-pet-clinic/10-preparation.md

Lines changed: 28 additions & 9 deletions
@@ -4,9 +4,32 @@ linkTitle: 10. Preparation
weight: 10
---

-## 1. Deploying the prebuilt containers into Kubernetes
+## 1. Validate the settings for your workshop

-The first thing we need to set up is ... well, an application. The first deployment of our application will be using prebuilt containers to give us the base scenario: a Java microservices-based application running in Kubernetes.
+To ensure your instance is configured correctly, we need to confirm that the required environment variables for this workshop are set correctly. In your terminal, run the following command:
+
+``` bash
+. ~/workshop/petclinic/scripts/check_env.sh
+```
+
+In the output, check that the following environment variables are present and have values set:
+
+```text
+ACCESS_TOKEN
+REALM
+RUM_TOKEN
+HEC_TOKEN
+HEC_URL
+INSTANCE
+```
+
+Please make a note of the `INSTANCE` environment variable value, as this is the reference to your workshop instance and we will need it to filter the data in the **Splunk Observability Suite** UI.
+
+For this workshop, **all** of the above are required. If any are missing, please contact your instructor.
+
+## 2. Deploying the prebuilt containers into Kubernetes
+
+The second thing we need to set up is our application. The first deployment of our application will use prebuilt containers to give us the base scenario: a regular Java microservices-based application running in Kubernetes.

So let's deploy our application:
{{< tabs >}}
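A quick aside on the `check_env.sh` validation step above: this commit does not show the script's contents, so the following is only a minimal sketch of an equivalent check. The variable list mirrors the one in the section above; the loop itself is an illustrative guess, not the workshop script.

```bash
#!/bin/bash
# Illustrative sketch only -- not the workshop's check_env.sh.
# Report whether each required workshop variable is set and non-empty.
for var in ACCESS_TOKEN REALM RUM_TOKEN HEC_TOKEN HEC_URL INSTANCE; do
  if [ -z "${!var}" ]; then
    echo "MISSING: $var"
  else
    echo "OK: $var is set"
  fi
done
```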
@@ -46,11 +69,7 @@ configmap/scriptfile created
{{< /tabs >}}

<!-- {{% notice title="In case of error Unable to read /etc/rancher/k3s/k3s.yaml" style="warning" %}}
-In rare occasions, you may encounter the above error at this point, this is due to incorrect file permission on the Kubernetes config file. This can easily be resolved by running the following command:
-
-``` bash
-sudo chmod 777 /etc/rancher/k3s/k3s.yaml
-```
+On rare occasions, you may encounter the above error at this point. Please log out and back in, and verify that the above environment variables are all set correctly. If they are not, please contact your instructor.

{{% /notice %}} -->
At this point we can verify the deployment by checking if the Pods are running:
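The verification command itself is elided from this hunk; it is normally just a pod listing along these lines (default namespace assumed, shown for orientation only):

```bash
# Confirm the PetClinic pods have reached the Running state
kubectl get pods
```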
@@ -101,7 +120,7 @@ Change into the `spring-petclinic` directory:
cd ~/spring-petclinic-microservices
```

-Next, run the script that will use the `maven` command to compile/build the PetClinic microservices:
+Next, let's test the download and run the script that will use the `maven` command to compile/build the PetClinic microservices:
{{< tabs >}}
{{% tab title="Running maven" %}}
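The build script referenced above is not part of this diff; for orientation, a Maven build of the `spring-petclinic-microservices` repository usually boils down to something like the following (the exact flags and profile the workshop script uses are an assumption):

```bash
# Illustrative only -- build all PetClinic microservice modules with the Maven wrapper
cd ~/spring-petclinic-microservices
./mvnw clean install -DskipTests
```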

@@ -141,7 +160,7 @@ This will take a few minutes the first time you run, `maven` will download a lot

## 3. Set up a local Docker Repository

-Once we have our Auto instrumentation up and running, we are going to use show some of the additional instrumentation features of Opentelemetry Java. This will be the first time we will touch the source code and add some annotations to it to get even more valuable data from our Java application. Kubernetes will need to pull these new images from somewhere, so let's setup a local repository, so Kubernetes can pull these local images.
+Once we have our auto instrumentation up and running with the existing containers, we are going to show some of the additional instrumentation features of OpenTelemetry Java. That will be the first time we touch the source code and add some annotations to it to get even more valuable data from our Java application. Kubernetes will need to pull these new images from somewhere, so let's set up a local repository from which Kubernetes can pull those local images.

{{< tabs >}}
{{% tab title="Install Docker Repository" %}}

content/en/other/3-auto-instrumentation/3-java-microservices-pet-clinic/20-otel-collector.md

Lines changed: 7 additions & 28 deletions
@@ -13,41 +13,20 @@ The Splunk OpenTelemetry Collector is the core component of instrumenting infras
* Profiling data
* Host and Application logs

+To get Observability signals into the **Splunk Observability Cloud** we need to add an OpenTelemetry Collector to our Kubernetes cluster.
+
{{% notice title="Delete any existing OpenTelemetry Collectors" style="warning" %}}
-If you have completed a Splunk Observability workshop using this EC2 instance, please ensure you have deleted the collector running in Kubernetes before continuing. This can be done by running the following command:
+If you have completed a Splunk Observability workshop using this EC2 instance, please ensure you have deleted the collector running in Kubernetes before continuing with this workshop. This can be done by running the following command:

``` bash
helm delete splunk-otel-collector
```

{{% /notice %}}

-## 2. Confirm environment variables
-
-To ensure your instance is configured correctly, we need to confirm that the required environment variables for this workshop are set correctly. In your terminal run the following command:
-
-``` bash
-. ~/workshop/petclinic/scripts/check_env.sh
-```
-
-In the output check the following environment variables are present and have values set:
-
-```text
-ACCESS_TOKEN
-REALM
-RUM_TOKEN
-HEC_TOKEN
-HEC_URL
-INSTANCE
-```
-
-Please make a note of the `INSTANCE` environment variable value as this is the reference to you workshop instance and we will need it to filter the data in the **Splunk Observability Suite** UI.
-
-For this workshop, **all** of the above are required. If any are missing, please contact your instructor.
-
-## 3. Install the OpenTelemetry Collector using Helm
+## 2. Install the OpenTelemetry Collector using Helm

-We are going to install the OpenTelemetry Collector in Operator mode using the Splunk Kubernetes Helm Chart for the Opentelemetry collector. First, we need to add the Splunk Helm chart repository to Helm and update so it knows where to find it:
+We are going to install the Splunk distribution of the OpenTelemetry Collector in Operator mode using the Splunk Kubernetes Helm Chart for the OpenTelemetry Collector. First, we need to add the Splunk Helm chart repository to Helm and update it so it knows where to find it:

{{< tabs >}}
{{% tab title="Helm Repo Add" %}}
@@ -73,7 +52,7 @@ Update Complete. ⎈Happy Helming!⎈
{{% /tab %}}
{{< /tabs >}}

-Splunk Observability Cloud offers wizards in the **Splunk Observability Suite** UI to walk you through the setup of the Collector on both your infrastructure including Kubernetes, but in interest of time, we will use a setup created earlier and are going to install the OpenTelemetry Collector with the OpenTelemetry Collector Helm chart with some additional options:
+Splunk Observability Cloud offers wizards in the **Splunk Observability Suite** UI to walk you through the setup of the Collector on your infrastructure, including Kubernetes, but in the interest of time, we will use a setup created earlier. As we want the auto instrumentation to be available, we will install the OpenTelemetry Collector with the OpenTelemetry Collector Helm chart with some additional options:

* --set="operator.enabled=true" - this will install the Opentelemetry operator, that will be used to handle auto instrumentation
* --set="certmanager.enabled=true" - This will install the required certificate manager for the operator.
@@ -192,7 +171,7 @@ helm delete splunk-otel-collector

{{% /notice %}}

-## 4. Verify the installation by checking Metrics and Logs
+## 3. Verify the installation by checking Metrics and Logs

Once the installation is completed, you can login into the **Splunk Observability Cloud** with the URL provided by the Instructor.
First, Navigate to **Kubernetes Navigator** in the **Infrastructure**![infra](../images/infra-icon.png?classes=inline&height=25px) section to see the metrics from your cluster in the **K8s nodes** pane. Change the *Time* filter to the last 15 Minutes (-15m) to focus on the lates data.

content/en/other/3-auto-instrumentation/3-java-microservices-pet-clinic/30-auto-instrumentation.md

Lines changed: 29 additions & 3 deletions
@@ -31,9 +31,9 @@ The resulting output should say:
Image: quay.io/phagen/spring-petclinic-api-gateway:0.0.2
```

-This container is pulled from a remote repository `quay.io` and as you will see
+This container is pulled from a remote repository `quay.io` and was not built to send traces to the **Splunk Observability Cloud**.

-Lets add the Java auto instrumentation TAG to the api-gateway service first with the `kubectl patch deployment` command.
+Let's enable the Java auto instrumentation on the api-gateway service first by adding the `inject-java` tag to Kubernetes with the `kubectl patch deployment` command.
{{< tabs >}}
{{% tab title="Patch api-gateway service" %}}
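The patch command itself sits in the tab above and is elided from this hunk. With the OpenTelemetry Operator, the `inject-java` switch is an annotation on the pod template; a sketch of what such a patch can look like (the annotation value naming the Instrumentation resource is a guess, not taken from the workshop):

```bash
# Illustrative only -- annotate the api-gateway pod template so the operator injects the Java agent
kubectl patch deployment api-gateway -p \
  '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"default/splunk-otel-collector"}}}}}'
```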

@@ -108,7 +108,33 @@ deployment.apps/api-gateway patched (no change)
{{% /tab %}}
{{< /tabs >}}

-It will take the Petclinic Microservice application a few minutes to start up and fully synchronise, but after its fully initialized, you now should see all the different services in Splunk APM:
+It will take the Petclinic Microservice application a few minutes to start up and fully synchronise.
+Let's monitor the load generator container until it is able to generate load, as shown in the output tab.
+
+{{< tabs >}}
+{{% tab title="Tail Log" %}}
+
+``` bash
+. ~/workshop/petclinic/scripts/tail_logs.sh
+```
+
+{{% /tab %}}
+{{% tab title="Tail Log Output" %}}
+
+```text
+{"severity":"info","msg":"Welcome Text = "Welcome to Petclinic"}
+{"severity":"info","msg":"@ALL"
+{"severity":"info","msg":"@owner details page"}
+{"severity":"info","msg":"@pet details page"}
+{"severity":"info","msg":"@add pet page"}
+{"severity":"info","msg":"@veterinarians page"}
+{"severity":"info","msg":"cookies was"}
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
+Once the services are fully initialized, you should now see all the different services appear in Splunk APM:
![all services](../images/apm-full-service.png)
Of course, we want to check the Dependency map by clicking Explore:
![full map](../images/apm-map-full.png)
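As an aside on the log-tailing step used above: `tail_logs.sh` is not included in this commit, but a script of that kind typically just follows the load generator's pod logs; a hypothetical equivalent (the label selector is a guess):

```bash
# Illustrative only -- follow the load generator's output until test traffic appears
kubectl logs -f -l app=petclinic-loadgen --tail=20
```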

content/en/other/3-auto-instrumentation/3-java-microservices-pet-clinic/60-log-observer-connect.md

Lines changed: 30 additions & 1 deletion
@@ -203,7 +203,12 @@ To see the changes in effect, we need to redeploy the services, First let chang

The result is a new file on disk called **petclinic-local.yaml**
Let switch to the local version by applying the local version of the deployment yaml.
+First, delete the old deployment with:

+```bash
+kubectl delete -f ~/workshop/petclinic/petclinic-local.yaml
+```
+followed by:
```bash
kubectl apply -f ~/workshop/petclinic/petclinic-local.yaml
```
@@ -214,7 +219,7 @@ This will cause the containers to be replaced with the local version, you can ve
kubectl describe pods api-gateway |grep Image:
```

-The resulting output should say:
+The resulting output should say (again, if you see two image entries, it is the old container being terminated; give it a few seconds):

```text
Image: ghcr.io/signalfx/splunk-otel-java/splunk-otel-java:v1.30.0
@@ -223,6 +228,30 @@ The resulting output should say:

## 6. View Logs

+First, give the service time to get back into sync, and let's tail the load generator log again:
+{{< tabs >}}
+{{% tab title="Tail Log" %}}
+
+``` bash
+. ~/workshop/petclinic/scripts/tail_logs.sh
+```
+
+{{% /tab %}}
+{{% tab title="Tail Log Output" %}}
+
+```text
+{"severity":"info","msg":"Welcome Text = "Welcome to Petclinic"}
+{"severity":"info","msg":"@ALL"
+{"severity":"info","msg":"@owner details page"}
+{"severity":"info","msg":"@pet details page"}
+{"severity":"info","msg":"@add pet page"}
+{"severity":"info","msg":"@veterinarians page"}
+{"severity":"info","msg":"cookies was"}
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
From the left-hand menu click on **Log Observer** and ensure **Index** is set to **splunk4rookies-workshop**.

Next, click **Add Filter** search for the field `service_name` select the value `<INSTANCE>-petclinic-service` and click `=` (include). You should now see only the log messages from your PetClinic application.

workshop/petclinic/scripts/update_logback.sh

Lines changed: 2 additions & 26 deletions
@@ -1,36 +1,12 @@
#!/bin/bash

-# '<?xml version="1.0" encoding="UTF-8"?>
-# <configuration>
-# <include resource="org/springframework/boot/logging/logback/base.xml"/>
-# <!-- Required for Loglevel managment into the Spring Petclinic Admin Server-->
-# <jmxConfigurator/>
-# <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
-# <encoder>
-# <pattern>
-# %d{yyyy-MM-dd HH:mm:ss} - %logger{36} - %msg trace_id=%X{trace_id} span_id=%X{span_id} trace_flags=%X{trace_flags} %n service.name=%property{otel.resource.service.name}, deployment.environment=%property{otel.resource.deployment.environment}: %m%n
-# </pattern>
-# </encoder>
-# </appender>
-
-# <!-- Just wrap your logging appender, for example ConsoleAppender, with OpenTelemetryAppender -->
-# <appender name="OTEL" class="io.opentelemetry.instrumentation.logback.mdc.v1_0.OpenTelemetryAppender">
-# <appender-ref ref="CONSOLE"/>
-# </appender>
-
-# <!-- Use the wrapped "OTEL" appender instead of the original "CONSOLE" one -->
-# <root level="INFO">
-# <appender-ref ref="OTEL"/>
-# </root>
-# </configuration>'
-
xml_content='<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>
-logback: %d{HH:mm:ss.SSS} [%thread] %level %logger{36} - trace_id=%X{trace_id} span_id=%X{span_id} service.name=%property{otel.resource.service.name} trace_flags=%X{trace_flags} - %msg %kvp{DOUBLE}%n
+logback: %d{HH:mm:ss.SSS} [%thread] severity=%-5level %logger{36} - trace_id=%X{trace_id} span_id=%X{span_id} service.name=%property{otel.resource.service.name} trace_flags=%X{trace_flags} - %msg %kvp{DOUBLE}%n
+</pattern>
</encoder>
</appender>
<appender name="OpenTelemetry"
