@@ -37,12 +37,52 @@ splunk-otel-collector-otel-agent 1 3h37m
 {{% /tab %}}
 {{< /tabs >}}
 
-We can then view the config map of the collector agent as follows:
+> Why are there two config maps?
+
+We can then view the config map of the collector agent as follows:
+
+{{< tabs >}}
+{{% tab title="Script" %}}
+
+``` bash
+kubectl describe cm splunk-otel-collector-otel-agent
+```
+
+{{% /tab %}}
+{{% tab title="Example Output" %}}
 
 ``` bash
-kubectl describe cm my-splunk-otel-collector-otel-agent
+Name:         splunk-otel-collector-otel-agent
+Namespace:    default
+Labels:       app=splunk-otel-collector
+              app.kubernetes.io/instance=splunk-otel-collector
+              app.kubernetes.io/managed-by=Helm
+              app.kubernetes.io/name=splunk-otel-collector
+              app.kubernetes.io/version=0.113.0
+              chart=splunk-otel-collector-0.113.0
+              helm.sh/chart=splunk-otel-collector-0.113.0
+              heritage=Helm
+              release=splunk-otel-collector
+Annotations:  meta.helm.sh/release-name: splunk-otel-collector
+              meta.helm.sh/release-namespace: default
+
+Data
+====
+relay:
+----
+exporters:
+  otlphttp:
+    headers:
+      X-SF-Token: ${SPLUNK_OBSERVABILITY_ACCESS_TOKEN}
+    metrics_endpoint: https://ingest.us1.signalfx.com/v2/datapoint/otlp
+    traces_endpoint: https://ingest.us1.signalfx.com/v2/trace/otlp
+(followed by the rest of the collector config in yaml format)
 ```
 
+{{% /tab %}}
+{{< /tabs >}}
+
+
 ## How to Update the Collector Configuration in K8s
 
 In our earlier example running the collector on a Linux instance,
@@ -76,27 +116,215 @@ environment: otel-$INSTANCE
 
 Once the file is saved, we can apply the changes with:
 
+{{< tabs >}}
+{{% tab title="Script" %}}
+
 ``` bash
 helm upgrade splunk-otel-collector -f values.yaml \
 splunk-otel-collector-chart/splunk-otel-collector
 ```
 
+{{% /tab %}}
+{{% tab title="Example Output" %}}
+
+``` bash
+Release "splunk-otel-collector" has been upgraded. Happy Helming!
+NAME: splunk-otel-collector
+LAST DEPLOYED: Fri Dec 20 01:17:03 2024
+NAMESPACE: default
+STATUS: deployed
+REVISION: 2
+TEST SUITE: None
+NOTES:
+Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm us1.
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
 We can then view the config map and ensure the changes were applied:
 
 {{< tabs >}}
 {{% tab title="Script" %}}
 
 ``` bash
-kubectl describe cm my- splunk-otel-collector-otel-k8s-cluster-receiver
+kubectl describe cm splunk-otel-collector-otel-k8s-cluster-receiver
 ```
 
 {{% /tab %}}
 {{% tab title="Example Output" %}}
 
+Ensure `smartagent/kubernetes-events` is now included in the cluster receiver config:
+
 ``` bash
-TODO: sample output
+smartagent/kubernetes-events:
+  alwaysClusterReporter: true
+  type: kubernetes-events
+  whitelistedEvents:
+  - involvedObjectKind: Pod
+    reason: Created
+  - involvedObjectKind: Pod
+    reason: Unhealthy
+  - involvedObjectKind: Pod
+    reason: Failed
+  - involvedObjectKind: Job
+    reason: FailedCreate
 ```
 
 {{% /tab %}}
 {{< /tabs >}}
 
+> Note that we specified the cluster receiver config map since that's
+> where these particular changes get applied.
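+
+Note that the `smartagent/kubernetes-events` block shown above is generated by the
+Helm chart rather than written by hand. Based on the full values.yaml example later
+in this section, the setting that appears to drive it is a single flag, sketched below:
+
+``` yaml
+splunkObservability:
+  # Presumed to be the flag that makes the chart render the
+  # smartagent/kubernetes-events receiver into the cluster receiver config
+  infrastructureMonitoringEventsEnabled: true
+```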
+
+## Add the Debug Exporter
+
+The debug exporter can be helpful when troubleshooting various OpenTelemetry issues.
+
+Suppose we want to see the traces and logs that are sent to the collector, so we can
+inspect them before they're forwarded to Splunk. We can use the debug exporter for this purpose.
+
+Let's add the debug exporter to the values.yaml file as follows:
+
+``` yaml
+splunkObservability:
+  realm: us1
+  accessToken: ***
+  infrastructureMonitoringEventsEnabled: true
+  clusterName: $INSTANCE-cluster
+  environment: otel-$INSTANCE
+agent:
+  config:
+    exporters:
+      debug:
+        verbosity: detailed
+    service:
+      pipelines:
+        traces:
+          exporters:
+          - debug
+        logs:
+          exporters:
+          - debug
+          processors:
+          - memory_limiter
+          - batch
+          - resourcedetection
+          - resource
+          receivers:
+          - otlp
+```
+
+> Note that our agent configuration already includes a traces pipeline (you can
+> verify this by reviewing the agent config map), so we only needed to add the debug
+> exporter. However, there wasn't a logs pipeline in our config, because we didn't
+> enable logs when we installed the collector initially, so we'll need to add the
+> full pipeline now.
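+
+After the upgrade, the rendered agent config map should contain pipelines roughly
+like the sketch below. This is an assumption about how the chart merges our values
+over its defaults (Helm replaces lists rather than appending to them), so the exact
+contents may differ by chart version:
+
+``` yaml
+service:
+  pipelines:
+    logs:
+      exporters:
+      - debug
+      processors:
+      - memory_limiter
+      - batch
+      - resourcedetection
+      - resource
+      receivers:
+      - otlp
+    traces:
+      # if list replacement applies, debug would now be the only traces exporter
+      exporters:
+      - debug
+```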
+
+Once the file is saved, we can apply the changes with:
+
+{{< tabs >}}
+{{% tab title="Script" %}}
+
+``` bash
+helm upgrade splunk-otel-collector -f values.yaml \
+splunk-otel-collector-chart/splunk-otel-collector
+```
+
+{{% /tab %}}
+{{% tab title="Example Output" %}}
+
+``` bash
+Release "splunk-otel-collector" has been upgraded. Happy Helming!
+NAME: splunk-otel-collector
+LAST DEPLOYED: Fri Dec 20 01:32:03 2024
+NAMESPACE: default
+STATUS: deployed
+REVISION: 3
+TEST SUITE: None
+NOTES:
+Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm us1.
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
+Exercise the application a few times using curl, then tail the agent collector logs with the
+following command:
+
+``` bash
+kubectl logs -l component=otel-collector-agent -f
+```
+
+You should see traces written to the agent collector logs such as the following:
+
+````
+2024-12-20T01:43:52.929Z info Traces {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 2}
+2024-12-20T01:43:52.929Z info ResourceSpans #0
+Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
+Resource attributes:
+     -> splunk.distro.version: Str(1.8.0)
+     -> telemetry.distro.name: Str(splunk-otel-dotnet)
+     -> telemetry.distro.version: Str(1.8.0)
+     -> os.type: Str(linux)
+     -> os.description: Str(Debian GNU/Linux 12 (bookworm))
+     -> os.build_id: Str(6.8.0-1021-aws)
+     -> os.name: Str(Debian GNU/Linux)
+     -> os.version: Str(12)
+     -> host.name: Str(derek-1)
+     -> process.owner: Str(app)
+     -> process.pid: Int(1)
+     -> process.runtime.description: Str(.NET 8.0.11)
+     -> process.runtime.name: Str(.NET)
+     -> process.runtime.version: Str(8.0.11)
+     -> container.id: Str(78b452a43bbaa3354a3cb474010efd6ae2367165a1356f4b4000be031b10c5aa)
+     -> telemetry.sdk.name: Str(opentelemetry)
+     -> telemetry.sdk.language: Str(dotnet)
+     -> telemetry.sdk.version: Str(1.9.0)
+     -> service.name: Str(helloworld)
+     -> deployment.environment: Str(otel-derek-1)
+     -> k8s.pod.ip: Str(10.42.0.15)
+     -> k8s.pod.labels.app: Str(helloworld)
+     -> k8s.pod.name: Str(helloworld-84865965d9-nkqsx)
+     -> k8s.namespace.name: Str(default)
+     -> k8s.pod.uid: Str(38d39bc6-1309-4022-a569-8acceef50942)
+     -> k8s.node.name: Str(derek-1)
+     -> k8s.cluster.name: Str(derek-1-cluster)
+````
+
+And log entries such as:
+
+````
+2024-12-20T01:43:53.215Z info Logs {"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 2}
+2024-12-20T01:43:53.215Z info ResourceLog #0
+Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
+Resource attributes:
+     -> splunk.distro.version: Str(1.8.0)
+     -> telemetry.distro.name: Str(splunk-otel-dotnet)
+     -> telemetry.distro.version: Str(1.8.0)
+     -> os.type: Str(linux)
+     -> os.description: Str(Debian GNU/Linux 12 (bookworm))
+     -> os.build_id: Str(6.8.0-1021-aws)
+     -> os.name: Str(Debian GNU/Linux)
+     -> os.version: Str(12)
+     -> host.name: Str(derek-1)
+     -> process.owner: Str(app)
+     -> process.pid: Int(1)
+     -> process.runtime.description: Str(.NET 8.0.11)
+     -> process.runtime.name: Str(.NET)
+     -> process.runtime.version: Str(8.0.11)
+     -> container.id: Str(78b452a43bbaa3354a3cb474010efd6ae2367165a1356f4b4000be031b10c5aa)
+     -> telemetry.sdk.name: Str(opentelemetry)
+     -> telemetry.sdk.language: Str(dotnet)
+     -> telemetry.sdk.version: Str(1.9.0)
+     -> service.name: Str(helloworld)
+     -> deployment.environment: Str(otel-derek-1)
+     -> k8s.node.name: Str(derek-1)
+     -> k8s.cluster.name: Str(derek-1-cluster)
+````
+
+If you return to Splunk Observability Cloud though, you'll notice that traces are
+no longer being sent there by the application.
+
+Why do you think that is? We'll explore it in the next section.