
Commit 8f29126

Refactored HPA workshop for latest collector version
1 parent 52bb79f commit 8f29126

File tree

11 files changed (+256 −262 lines)


content/en/ninja-workshops/2-hpa/1-deploy-otel.md

Lines changed: 5 additions & 15 deletions
@@ -50,7 +50,6 @@ helm install splunk-otel-collector --version {{< otel-version >}} \
   --set="splunkObservability.realm=$REALM" \
   --set="splunkObservability.accessToken=$ACCESS_TOKEN" \
   --set="clusterName=$INSTANCE-k3s-cluster" \
-  --set="logsEngine=otel" \
   --set="splunkPlatform.endpoint=$HEC_URL" \
   --set="splunkPlatform.token=$HEC_TOKEN" \
   --set="splunkPlatform.index=splunk4rookies-workshop" \
@@ -78,25 +77,16 @@ kubectl get pods
 {{% tab title="kubectl get pods Output" %}}

 ``` text
-NAME                                                          READY   STATUS    RESTARTS   AGE
-splunk-otel-collector-agent-pvstb                             2/2     Running   0          19s
-splunk-otel-collector-k8s-cluster-receiver-6c454894f8-mqs8n   1/1     Running   0          19s
+NAME                                                         READY   STATUS    RESTARTS   AGE
+splunk-otel-collector-agent-ks9jn                            1/1     Running   0          27s
+splunk-otel-collector-agent-lqs4j                            0/1     Running   0          27s
+splunk-otel-collector-agent-zsqbt                            1/1     Running   0          27s
+splunk-otel-collector-k8s-cluster-receiver-76bb6b555-7fhzj   0/1     Running   0          27s
 ```

 {{% /tab %}}
 {{< /tabs >}}

-<!--
-{{% notice title="Note" style="info" %}}
-
-If you are using the Kubernetes Integration setup from the Data Management page in the O11y UI, you will find that the guide uses
-`--generate-name splunk-otel-collector-chart/splunk-otel-collector` instead of just `splunk-otel-collector-chart/splunk-otel-collector`, as we do in the above example.
-
-This generates a unique name/label for the collector install and its Pods by appending a unique number to the object name, allowing you to install multiple collectors with different configurations in your Kubernetes environment.
-
-Just make sure you use the label generated by the Helm chart if you wish to use the `helm` and `kubectl` commands from this workshop on an install done with the `--generate-name` option.
-{{% /notice %}}
--->
 Use the label set by the `helm` install to tail logs (you will need to press `ctrl + c` to exit).

 {{< tabs >}}
Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
---
title: K8s Namespaces and DNS
linkTitle: 2. K8s Namespaces and DNS
weight: 2
---

## 1. Namespaces in Kubernetes

Most of our customers use some kind of private or public cloud service to run Kubernetes, and they often choose to have only a few large Kubernetes clusters, as these are easier to manage centrally.

Namespaces are a way to organize such large Kubernetes clusters into virtual sub-clusters. This can be helpful when different teams or projects share a Kubernetes cluster, as each team can see and work with just its own resources.

Any number of namespaces can be created within a cluster, each logically separated from the others but still able to communicate with each other. Components are only **visible** within a namespace: `kubectl` shows just the components relevant to your project when you select your namespace, or everything in the cluster when you add the `--all-namespaces` flag.

Most customers will want to install their applications into a separate namespace. This workshop follows that best practice.
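As a sketch of how namespace scoping looks in practice, the commands below create a namespace and then list Pods with and without namespace scoping. These require a running cluster, so they are shown for illustration only; the `apache` namespace matches the one used later in this workshop.

```shell
# Create a dedicated namespace for an application
kubectl create namespace apache

# List only the Pods in that namespace ...
kubectl get pods --namespace apache

# ... or everything across all namespaces
kubectl get pods --all-namespaces
```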

## 2. DNS and Services in Kubernetes

The Domain Name System (DNS) is a mechanism for linking various sorts of information, such as IP addresses, to easy-to-remember names. Using DNS to translate names into IP addresses makes it easy for end users to reach their target domain effortlessly.

Most Kubernetes clusters include an internal DNS service, configured by default, that offers a lightweight approach to service discovery. Even as Pods and Services are created, deleted, or shifted between nodes, this built-in service discovery makes it simple for applications to identify and communicate with services in the Kubernetes cluster.

In short, the Kubernetes DNS system creates a DNS entry for each Pod and Service. In general, a Pod has the following DNS resolution:

``` text
pod-name.my-namespace.pod.cluster-domain.example
```

For example, if a Pod in the `default` namespace has the Pod name `my-pod`, and the domain name for your cluster is `cluster.local`, then the Pod has the DNS name:

``` text
my-pod.default.pod.cluster.local
```

Any Pods exposed by a Service have the following DNS resolution available:

``` text
my-pod.service-name.my-namespace.svc.cluster-domain.example
```

More information can be found here: [**DNS for Services and Pods**](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
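The naming patterns above can be assembled mechanically. As a small illustration, the snippet below builds the in-cluster FQDN of a Service from its parts; the service and namespace names match the `php-apache-svc` Service in the `apache` namespace used later in this workshop, and the default `cluster.local` cluster domain is assumed:

```shell
# Build the in-cluster DNS name of a Service: <service>.<namespace>.svc.<cluster-domain>
SERVICE="php-apache-svc"
NAMESPACE="apache"
CLUSTER_DOMAIN="cluster.local"   # default domain; your cluster may differ

FQDN="${SERVICE}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"
echo "${FQDN}"   # php-apache-svc.apache.svc.cluster.local
```

Any Pod in the cluster can reach the Service at this name without knowing its IP address.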
Lines changed: 105 additions & 0 deletions
@@ -0,0 +1,105 @@
---
title: Apache OTel Receiver
linkTitle: 3. Apache OTel Receiver
weight: 3
---

## 1. Review OTel receiver for PHP/Apache

Inspect the YAML file `~/workshop/k3s/otel-apache.yaml` and validate the contents using the following command:

``` bash
cat ~/workshop/k3s/otel-apache.yaml
```

This file contains the configuration for the OpenTelemetry agent to monitor the PHP/Apache deployment.

```yaml
agent:
  config:
    receivers:
      receiver_creator:
        receivers:
          apache:
            rule: type == "port" && pod.name matches "apache" && port == 80
            config:
              endpoint: http://php-apache-svc.apache.svc.cluster.local/server-status?auto
```

## 2. Observation Rules in the OpenTelemetry config

The above file contains an observation rule for Apache using the OTel `receiver_creator`. This receiver can instantiate other receivers at runtime based on whether observed endpoints match a configured rule.

The configured rules are evaluated against each endpoint discovered. If a rule evaluates to true, the receiver for that rule is started against the matched endpoint.

In the file above, we tell the OpenTelemetry agent to look for Pods whose name matches `apache` and that have port `80` open. Once found, the agent configures an Apache receiver to read Apache metrics from the configured URL. Note the K8s DNS-based URL for the service in the above YAML.
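The same mechanism extends to other services. As a purely hypothetical sketch (not part of this workshop), an equivalent rule for an NGINX Pod could look like the following, starting an `nginx` receiver for any Pod whose name matches `nginx` with port `80` open:

```yaml
agent:
  config:
    receivers:
      receiver_creator:
        receivers:
          nginx:
            rule: type == "port" && pod.name matches "nginx" && port == 80
            config:
              # `endpoint` is expanded by the receiver_creator to the matched host:port
              endpoint: "http://`endpoint`/nginx_status"
```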

To use the Apache configuration, you can upgrade the existing Splunk OpenTelemetry Collector Helm chart to use the `otel-apache.yaml` file with the following command:

{{< tabs >}}
{{% tab title="Helm Upgrade" %}}

``` bash
helm upgrade splunk-otel-collector \
  --set="splunkObservability.realm=$REALM" \
  --set="splunkObservability.accessToken=$ACCESS_TOKEN" \
  --set="clusterName=$INSTANCE-k3s-cluster" \
  --set="splunkPlatform.endpoint=$HEC_URL" \
  --set="splunkPlatform.token=$HEC_TOKEN" \
  --set="splunkPlatform.index=splunk4rookies-workshop" \
  splunk-otel-collector-chart/splunk-otel-collector \
  -f ~/workshop/k3s/otel-collector.yaml \
  -f ~/workshop/k3s/otel-apache.yaml
```

{{% /tab %}}
{{< /tabs >}}

{{% notice title="NOTE" style="info" %}}
The **REVISION** number of the deployment has changed, which is a helpful way to keep track of your changes.

``` text
Release "splunk-otel-collector" has been upgraded. Happy Helming!
NAME: splunk-otel-collector
LAST DEPLOYED: Mon Nov 4 14:56:25 2024
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Platform endpoint "https://http-inputs-workshop.splunkcloud.com:443/services/collector/event".

Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm eu0.
```

{{% /notice %}}

## 3. Kubernetes ConfigMaps

A ConfigMap is a Kubernetes object consisting of key-value pairs that can be injected into your application. With a ConfigMap, you can separate configuration from your Pods.

Using ConfigMaps, you can avoid hardcoding configuration data. ConfigMaps are useful for storing and sharing non-sensitive, unencrypted configuration information.
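As a minimal sketch of what a ConfigMap looks like (the name and keys below are illustrative, not part of this workshop):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config     # hypothetical name
data:
  LOG_LEVEL: "debug"       # plain key-value pairs ...
  APP_MODE: "production"   # ... injected into Pods as env vars or mounted files
```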

The OpenTelemetry collector/agent uses ConfigMaps to store the configuration of the agent and the K8s cluster receiver. You can verify the current configuration of an agent after a change by running the following commands:

``` bash
kubectl get cm
```

{{% notice title="Workshop Question" style="tip" icon="question" %}}
How many ConfigMaps are used by the collector?
{{% /notice %}}

When you have the list of ConfigMaps from the namespace, select the one for the `otel-agent` and view it with the following command:

``` bash
kubectl get cm splunk-otel-collector-otel-agent -o yaml
```

{{% notice title="NOTE" style="info" %}}
The option `-o yaml` will output the content of the ConfigMap in a readable YAML format.
{{% /notice %}}

{{% notice title="Workshop Question" style="tip" icon="question" %}}
Is the configuration from `otel-apache.yaml` visible in the ConfigMap for the collector agent?
{{% /notice %}}
