content/en/other/3-auto-instrumentation/3-java-microservices-k8s/10-preparation.md (7 additions, 6 deletions)
@@ -57,6 +57,7 @@ If you have completed a Splunk Observability workshop using this EC2 instance, p
```bash
helm delete splunk-otel-collector
```
{{% /notice %}}
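If you want to double-check that no collector release is left behind before proceeding, a quick query like the sketch below should do it (assuming the release was installed under the default name used above):

```bash
# List Helm releases in all namespaces and look for a leftover collector install
helm list --all-namespaces | grep splunk-otel-collector || echo "no existing splunk-otel-collector release"
```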
## 2. The Splunk OpenTelemetry Collector
@@ -69,7 +70,7 @@ The Splunk OpenTelemetry Collector is the core component of instrumenting infras
* Host and Application logs
To get Observability signals (**Metrics, Traces** and **Logs**) into the **Splunk Observability Cloud**, we need to add an OpenTelemetry Collector to our Kubernetes cluster.
For this workshop, we will be using the Splunk Kubernetes Helm Chart for the OpenTelemetry Collector and installing the collector in `Operator` mode, as this is required for Zero-Config.
## 3. Install the OpenTelemetry Collector using Helm
The Splunk Observability Cloud offers wizards in the **Splunk Observability Suite** UI to walk you through the setup of the Collector on Kubernetes, but in the interest of time, we will use a setup created earlier. As we want auto-instrumentation to be available, we will install the OpenTelemetry Collector using the OpenTelemetry Collector Helm chart with some additional options (see the sketch after this list):
* `--set="operator.enabled=true"` - this will install the OpenTelemetry Operator, which will be used to handle auto-instrumentation.
* `--set="certmanager.enabled=true"` - this will install the required certificate manager for the Operator.
* `--set="splunkObservability.profilingEnabled=true"` - this enables Code Profiling via the Operator.
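For orientation only, these options would typically be combined into a single Helm invocation along the lines of the sketch below. The chart repository URL, release name, realm, access token, and cluster name are assumptions or placeholders; the workshop's exact commands follow.

```bash
# Sketch only: values in angle brackets are placeholders, not the workshop's actual values.
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
helm install splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector \
  --set="splunkObservability.realm=<REALM>" \
  --set="splunkObservability.accessToken=<ACCESS_TOKEN>" \
  --set="clusterName=<CLUSTER_NAME>" \
  --set="environment=<INSTANCE>-workshop" \
  --set="operator.enabled=true" \
  --set="certmanager.enabled=true" \
  --set="splunkObservability.profilingEnabled=true"
```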
To install the collector, run the following commands; do **NOT** edit them:
@@ -262,7 +263,7 @@ configmap/scriptfile created
On rare occasions, you may encounter the above error at this point. Please log out and back in, and verify that the above environment variables are all set correctly. If they are not, please contact your instructor.
{{% /notice %}} -->
At this point, we can verify the deployment by checking if the Pods are running. Note that these containers need to be downloaded and started, so this may take a minute or so.
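If you prefer to block until everything reports ready instead of polling, a check along the lines of the sketch below should work; it assumes all workshop pods run in the current namespace.

```bash
# Wait up to five minutes for every pod in the current namespace to become Ready
kubectl wait --for=condition=Ready pods --all --timeout=300s
```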
{{< tabs >}}
{{% tab title="kubectl get pods" %}}
@@ -301,7 +302,7 @@ Once they are running, the application will take a few minutes to fully start up
## 5. Verify the local Docker Repository
Once we have tested our Zero-Config Auto-Instrumentation in the existing containers, we are going to build our own containers to show some of the additional instrumentation features of OpenTelemetry Java. Only then will we touch the config files or the source code. Once we build these containers, Kubernetes will need to pull the new images from somewhere. To enable this, we have created a local repository to store these new containers, so Kubernetes can pull the images locally.
We can see if the repository is up and running by checking its inventory with the command below:
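For reference, a local Docker registry exposes its image catalog over the Registry HTTP API v2, so the check generally looks like the sketch below. Port 5000 is the registry default and an assumption here; use the registry address given in the workshop.

```bash
# Query the local registry's image catalog (Docker Registry HTTP API v2).
# Port 5000 is the registry default and an assumption; the workshop's registry address may differ.
curl -s http://localhost:5000/v2/_catalog
```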
content/en/other/3-auto-instrumentation/3-java-microservices-k8s/20-verify-setup.md (6 additions, 6 deletions)
@@ -6,22 +6,22 @@ weight: 20
## 1. Verify the installation by checking Metrics and Logs
Once the installation is complete, you can log in to the **Splunk Observability Cloud** with the URL provided by the Instructor.
First, navigate to the **Kubernetes Navigator** view in the **Infrastructure** section to see the metrics from your cluster in the **K8s nodes** pane. Once you are in the Kubernetes Navigator view, change the *Time* filter to the last 15 minutes (-15m) to focus on the latest data.
Select your cluster with the regular filter option at the top of the Navigator, using the filter `k8s.cluster.name` **(1)**. Type or select the cluster name of your workshop instance (you can get the unique part of the cluster name from the `INSTANCE` value in the output of the shell script you ran earlier). You can also select your cluster by clicking on its image in the cluster pane.
You should now only have your cluster visible **(2)**.

You should see metrics **(3)** for your cluster, and the log events **(4)** chart should start to be populated with log events coming from your cluster. Click on one of the bars to peek at the log lines coming in from your cluster.

Also, a `Mysql` pane **(5)** should appear; when you click on that pane, you can see the MySQL-related metrics from your database.

Once you see data flowing in from your host (`metrics and logs`), and MySQL shows `metrics` as well, we can move on to the actual PetClinic application.
content/en/other/3-auto-instrumentation/3-java-microservices-k8s/30-auto-instrumentation.md (10 additions, 12 deletions)
@@ -43,7 +43,7 @@ spec:
## 2. Setting up Java auto instrumentation on the api-gateway pod
Let's look at how Zero-Config works with a single pod, the `api-gateway`. If you enable Zero Configuration for a pod, the OpenTelemetry Operator will attach an init-container to your existing pod and restart the pod to activate it.
To show what happens when you enable auto-instrumentation, let's do a *Before & After* comparison of the content of a pod, the `api-gateway` in this case:
Next to the original pod from before, you should see an initContainer named **opentelemetry-auto-instrumentation**. (If you see two `api-gateway` pods, the original one is still terminating, so give it a few seconds.)
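If you want to confirm the injected init container from the command line as well, a query along these lines should work (a sketch; the label selector `app=api-gateway` is an assumption and may differ in the workshop manifests):

```bash
# Print each api-gateway pod together with the names of its init containers
kubectl get pods -l app=api-gateway \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.initContainers[*].name}{"\n"}{end}'
```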
## 3. Enable Java auto instrumentation on all pods
Now let's patch all the other services, using `app.kubernetes.io/part-of=spring-petclinic` to select them for the inject annotation, so we can see the full interaction between all services.
Remember: **this automatically causes the pods to restart.**
Note that there will be no change for the *config-server, discovery-server, admin-server & api-gateway*, as we patched these earlier.
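The workshop provides the exact patch command; conceptually, it adds the Operator's Java inject annotation to the pod template of every matching deployment, roughly as in the sketch below. The annotation value, the namespace/name of the Instrumentation resource (`default/splunk-otel-collector` here), is an assumption.

```bash
# For every deployment that is part of spring-petclinic, add the Java inject annotation
# to its pod template. The annotation value (namespace/name of the Instrumentation CR) is assumed.
for deploy in $(kubectl get deployments -l app.kubernetes.io/part-of=spring-petclinic -o name); do
  kubectl patch "$deploy" --type merge -p \
    '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"default/splunk-otel-collector"}}}}}'
done
```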
@@ -129,14 +129,13 @@ deployment.apps/api-gateway patched (no change)
## 4. Check the result in Splunk APM
Once the containers are patched, they will be restarted. Let's go back to the **Splunk Observability Cloud** with the URL provided by the Instructor to check our cluster in the Kubernetes Navigator.
After a couple of minutes, you should see that the Pods are being restarted by the Operator and the Zero-Config init container is added. This will look similar to the screenshot below:
Wait for the pods to turn green again (you may want to refresh the screen), then navigate to the **APM** section to look at the information provided by the traces generated by your services in the **Explore** pane. Use the filter option to change the *environment* filter **(1)** and search for the name of your workshop instance in the dropdown box; it should be `[INSTANCE]-workshop` (where `INSTANCE` is the value from the shell script you ran earlier). Make sure it is the only one selected.
The example above shows all the interactions between all our services. Your map may still be in an interim state, as it will take the PetClinic microservices application a few minutes to start up and fully synchronize before your map looks like the one above. Reducing the time window will help: if you pick a custom time of 2 minutes, the initial startup-related errors (red dots) will disappear from the view.
In the meantime, let's examine the metrics that are available for each instrumented service and visit the request, error, and duration (RED) metrics dashboard.
## 5. Examine default R.E.D. Metrics
Splunk APM provides a set of built-in dashboards that present charts and visualized metrics to help you see problems occurring in real time and quickly determine whether the problem is associated with a service, a specific endpoint, or the underlying infrastructure. To look at this dashboard for the selected `api-gateway`, make sure you have the `api-gateway` service selected in the Dependency map as shown above, then click on the **View Dashboard** link **(1)** at the top of the right-hand pane.
This dashboard, which is available for each of your instrumented services, offers an overview of the key `request, error, and duration (RED)` metrics based on Monitoring MetricSets created from endpoint spans for your services, endpoints, and Business Workflows. It also presents related host and Kubernetes metrics to help you determine whether problems are related to the underlying infrastructure, as in the image above.
As the dashboards allow you to go back in time with the *Time picker* window **(1)**, it's the perfect spot to identify the behavior you wish to be alerted on, and with a click on one of the bell icons **(2)** available in each chart, you can set up an alert to do just that.
If you scroll down the page, you get host and Kubernetes metrics related to your service as well.
Let's move on to look at some of the traces generated by the Zero-Config auto-instrumentation.