
Commit a833b5b

added draft version of solving-problems-with-o11y-cloud workshop
1 parent df91365 commit a833b5b

19 files changed: +742 -5 lines changed
Lines changed: 21 additions & 0 deletions

---
title: Connect to EC2 Instance
linkTitle: 1. Connect to EC2 Instance
weight: 1
time: 5 minutes
---

## Connect to your EC2 Instance

We’ve prepared an Ubuntu Linux instance in AWS/EC2 for each attendee.

Using the IP address and password provided by your instructor, connect to your EC2 instance
using one of the methods below:

* Mac OS / Linux
  * `ssh splunk@<IP address>` (see the example after this list)
* Windows 10+
  * Use the OpenSSH client
* Earlier versions of Windows
  * Use PuTTY
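
For example, connecting from a Mac OS or Linux terminal would look something like the following. The IP address shown here is only a placeholder; use the one provided by your instructor:

``` bash
# Placeholder IP address -- replace with the address provided by your instructor
ssh splunk@203.0.113.10
```
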
Lines changed: 307 additions & 0 deletions

---
title: Deploy the OpenTelemetry Collector and Customize Config
linkTitle: 2. Deploy the OpenTelemetry Collector and Customize Config
weight: 2
time: 15 minutes
---

The first step to "getting data in" is to deploy an OpenTelemetry collector,
which receives and processes telemetry data in our environment before exporting it to Splunk
Observability Cloud.

We'll be using Kubernetes for this workshop, and will deploy the collector in our K8s cluster using Helm.

## What is Helm?

Helm is a package manager for Kubernetes which provides the following benefits:

* Manage complexity
  * deal with a single `values.yaml` file rather than dozens of manifest files
* Easy updates
  * in-place upgrades
* Rollback support
  * just use `helm rollback` to roll back to an older version of a release (see the example after this list)
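
To illustrate that last point, rolling back a release generally looks like the following sketch. The release name matches the collector release we'll install below; the revision number is just an example:

``` bash
# List the revisions that exist for the release
helm history splunk-otel-collector

# Roll the release back to revision 1
helm rollback splunk-otel-collector 1
```
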
## Install the Collector using Helm

We'll use a script to install the collector, which you can find in the following location:

``` bash
/home/splunk/workshop/tagging/1-deploy-otel-collector.sh
```

This script first ensures that the environment variables set in the `~/.profile` file are read:

``` bash
source ~/.profile
```
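
We won't open `~/.profile` here, but based on the variables the install command below relies on, it presumably contains exports along these lines (the values shown are placeholders; your instance comes pre-configured):

``` bash
# Hypothetical sketch of ~/.profile -- actual values are provisioned for each attendee
export REALM=us1                      # Splunk Observability Cloud realm
export ACCESS_TOKEN=<your-access-token>
export INSTANCE=<your-instance-name>  # used for the cluster name and environment tag
```
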
It then adds the `splunk-otel-collector-chart` Helm chart repository and ensures it's up to date:

``` bash
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
```

And finally, it uses `helm install` to install the collector:

``` bash
helm install splunk-otel-collector --version 0.111.0 \
--set="splunkObservability.realm=$REALM" \
--set="splunkObservability.accessToken=$ACCESS_TOKEN" \
--set="clusterName=$INSTANCE-k3s-cluster" \
--set="environment=tagging-workshop-$INSTANCE" \
splunk-otel-collector-chart/splunk-otel-collector \
-f otel/values.yaml
```

> Note that the `helm install` command references a `values.yaml` file, which is used
> to customize the collector configuration. We'll explore this in more detail below.

We can run the script with the following commands to install the collector:

{{< tabs >}}
{{% tab title="Script" %}}

``` bash
cd /home/splunk/workshop/tagging
./1-deploy-otel-collector.sh
```

{{% /tab %}}
{{% tab title="Example Output" %}}

``` bash
"splunk-otel-collector-chart" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "splunk-otel-collector-chart" chart repository
Update Complete. ⎈Happy Helming!
NAME: splunk-otel-collector
LAST DEPLOYED: Mon Dec 23 18:47:38 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm us1.
```

{{% /tab %}}
{{< /tabs >}}
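
As an optional sanity check on the release itself (not part of the workshop script), you can also ask Helm to report the release status:

``` bash
# Show the status of the release we just installed
helm status splunk-otel-collector
```
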
## Confirm the Collector is Running

We can confirm whether the collector is running with the following command:

{{< tabs >}}
{{% tab title="Script" %}}

``` bash
kubectl get pods
```

{{% /tab %}}
{{% tab title="Example Output" %}}

``` bash
NAME                                                            READY   STATUS    RESTARTS   AGE
splunk-otel-collector-agent-kfvjb                               1/1     Running   0          2m33s
splunk-otel-collector-certmanager-7d89558bc9-2fqnx              1/1     Running   0          2m33s
splunk-otel-collector-certmanager-cainjector-796cc6bd76-hz4sp   1/1     Running   0          2m33s
splunk-otel-collector-certmanager-webhook-6959cd5f8-qd5b6       1/1     Running   0          2m33s
splunk-otel-collector-k8s-cluster-receiver-57569b58c8-8ghds     1/1     Running   0          2m33s
splunk-otel-collector-operator-6fd9f9d569-wd5mn                 2/2     Running   0          2m33s
```

{{% /tab %}}
{{< /tabs >}}

## Confirm your K8s Cluster is in O11y Cloud

In Splunk Observability Cloud, navigate to **Infrastructure** -> **Kubernetes** -> **Kubernetes Nodes**,
and then filter on your Cluster Name (which is `$INSTANCE-k3s-cluster`):

![Kubernetes node](../images/k8snode.png)

## Get the Collector Configuration

Before we customize the collector config, how do we determine what the current configuration
looks like?

In a Kubernetes environment, the collector configuration is stored using a Config Map.

We can see which config maps exist in our cluster with the following command:

{{< tabs >}}
{{% tab title="Script" %}}

``` bash
kubectl get cm -l app=splunk-otel-collector
```

{{% /tab %}}
{{% tab title="Example Output" %}}

``` bash
NAME                                              DATA   AGE
splunk-otel-collector-otel-k8s-cluster-receiver   1      3h37m
splunk-otel-collector-otel-agent                  1      3h37m
```

{{% /tab %}}
{{< /tabs >}}

We can then view the config map of the collector agent as follows:

{{< tabs >}}
{{% tab title="Script" %}}

``` bash
kubectl describe cm splunk-otel-collector-otel-agent
```

{{% /tab %}}
{{% tab title="Example Output" %}}

``` bash
Name:         splunk-otel-collector-otel-agent
Namespace:    default
Labels:       app=splunk-otel-collector
              app.kubernetes.io/instance=splunk-otel-collector
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=splunk-otel-collector
              app.kubernetes.io/version=0.113.0
              chart=splunk-otel-collector-0.113.0
              helm.sh/chart=splunk-otel-collector-0.113.0
              heritage=Helm
              release=splunk-otel-collector
Annotations:  meta.helm.sh/release-name: splunk-otel-collector
              meta.helm.sh/release-namespace: default

Data
====
relay:
----
exporters:
  otlphttp:
    headers:
      X-SF-Token: ${SPLUNK_OBSERVABILITY_ACCESS_TOKEN}
    metrics_endpoint: https://ingest.us1.signalfx.com/v2/datapoint/otlp
    traces_endpoint: https://ingest.us1.signalfx.com/v2/trace/otlp
(followed by the rest of the collector config in yaml format)
```

{{% /tab %}}
{{< /tabs >}}
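
The `describe` output above includes a fair amount of metadata. If you only want the raw agent configuration, one option is to print just the `relay` key shown under `Data`, for example:

``` bash
# Print only the agent configuration stored under the "relay" key of the config map
kubectl get cm splunk-otel-collector-otel-agent -o jsonpath='{.data.relay}'
```
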
## How to Update the Collector Configuration in K8s

We can customize the collector configuration in K8s using the `values.yaml` file.

> See [this file](https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/values.yaml)
> for a comprehensive list of customization options that are available in the `values.yaml` file.

Let's look at an example.

### Add the Debug Exporter

Suppose we want to see the traces that are sent to the collector. We can use the debug exporter for this purpose, which can be helpful for troubleshooting OpenTelemetry-related issues.

Let's add the debug exporter to the bottom of the `/home/splunk/workshop/tagging/otel/values.yaml` file as follows:

``` yaml
splunkObservability:
  logsEnabled: false
  profilingEnabled: true
  infrastructureMonitoringEventsEnabled: true
certmanager:
  enabled: true
operator:
  enabled: true

agent:
  config:
    receivers:
      kubeletstats:
        insecure_skip_verify: true
        auth_type: serviceAccount
        endpoint: ${K8S_NODE_IP}:10250
        metric_groups:
          - container
          - pod
          - node
          - volume
        k8s_api_config:
          auth_type: serviceAccount
        extra_metadata_labels:
          - container.id
          - k8s.volume.type
    extensions:
      zpages:
        endpoint: 0.0.0.0:55679
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        traces:
          exporters:
            - sapm
            - signalfx
            - debug
```

Once the file is saved, we can apply the changes with:

{{< tabs >}}
{{% tab title="Script" %}}

``` bash
cd /home/splunk/workshop/tagging

helm upgrade splunk-otel-collector --version 0.111.0 \
--set="splunkObservability.realm=$REALM" \
--set="splunkObservability.accessToken=$ACCESS_TOKEN" \
--set="clusterName=$INSTANCE-k3s-cluster" \
--set="environment=tagging-workshop-$INSTANCE" \
splunk-otel-collector-chart/splunk-otel-collector \
-f otel/values.yaml
```

{{% /tab %}}
{{% tab title="Example Output" %}}

``` bash
Release "splunk-otel-collector" has been upgraded. Happy Helming!
NAME: splunk-otel-collector
LAST DEPLOYED: Mon Dec 23 19:08:08 2024
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm us1.
```

{{% /tab %}}
{{< /tabs >}}

Whenever a change to the collector config is made via a `values.yaml` file, it's helpful
to review the actual configuration applied to the collector by looking at the config map:

``` bash
kubectl describe cm splunk-otel-collector-otel-agent
```

We can see that the debug exporter was added to the traces pipeline as desired:

``` yaml
traces:
  exporters:
    - sapm
    - signalfx
    - debug
```

We'll explore the output of the debug exporter once we deploy an application
in our cluster and start capturing traces.
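
When we get to that point, one way to see what the debug exporter emits is to tail the logs of the collector agent. A minimal sketch, assuming the agent DaemonSet keeps the `splunk-otel-collector-agent` name seen in the pod listing above:

``` bash
# Follow the collector agent logs, where the debug exporter prints the spans it receives
kubectl logs daemonset/splunk-otel-collector-agent -f
```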
