
Commit 2136d91

Fixed HPA workshop collector issue

1 parent: a679b5e

16 files changed: +39 −42 lines

content/en/other/4-hpa/1-deploy-otel.md (8 additions, 20 deletions)

@@ -20,17 +20,7 @@ You will be able to connect to the workshop instance by using SSH from your Mac,
 Your workshop instance has been pre-configured with the correct **Access Token** and **Realm** for this workshop. There is no need for you to configure these.
 {{% /notice %}}
 
-## 3. Namespaces in Kubernetes
-
-Most of our customers will make use of some kind of private or public cloud service to run Kubernetes. They often choose to have only a few large Kubernetes clusters as it is easier to manage centrally.
-
-Namespaces are a way to organize these large Kubernetes clusters into virtual sub-clusters. This can be helpful when different teams or projects share a Kubernetes cluster as this will give them the easy ability to just see and work with their resources.
-
-Any number of namespaces are supported within a cluster, each logically separated from others but with the ability to communicate with each other. Components are only **visible** when selecting a namespace or when adding the `--all-namespaces` flag to `kubectl` instead of allowing you to view just the components relevant to your project by selecting your namespace.
-
-Most customers will want to install the Splunk OpenTelemetry Collector into a separate namespace. This workshop will follow that best practice.
-
-## 4. Install Splunk OTel using Helm
+## 3. Install Splunk OTel using Helm
 
 Install the OpenTelemetry Collector using the Splunk Helm chart. First, add the Splunk Helm chart repository and update.
 
@@ -45,10 +35,10 @@ helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel
 {{% tab title="helm repo add output" %}}
 
 ```text
-Using ACCESS_TOKEN={REDACTED}
+Using ACCESS_TOKEN=<REDACTED>
 Using REALM=eu0
 "splunk-otel-collector-chart" has been added to your repositories
-Using ACCESS_TOKEN={REDACTED}
+Using ACCESS_TOKEN=<REDACTED>
 Using REALM=eu0
 Hang tight while we grab the latest from your chart repositories...
 ...Successfully got an update from the "splunk-otel-collector-chart" chart repository
@@ -58,7 +48,7 @@ Update Complete. ⎈Happy Helming!⎈
 {{% /tab %}}
 {{< /tabs >}}
 
-Install the OpenTelemetry Collector Helm chart into a new **splunk** namespace with the following commands, do **NOT** edit this:
+Install the OpenTelemetry Collector Helm with the following commands, do **NOT** edit this:
 
 {{< tabs >}}
 {{% tab title="helm install" %}}
@@ -75,8 +65,6 @@ helm install splunk-otel-collector \
 --set="splunkPlatform.token=$HEC_TOKEN" \
 --set="splunkPlatform.index=splunk4rookies-workshop" \
 splunk-otel-collector-chart/splunk-otel-collector \
---namespace splunk \
---create-namespace \
 -f ~/workshop/k3s/otel-collector.yaml
 ```
 
@@ -85,15 +73,15 @@ splunk-otel-collector-chart/splunk-otel-collector \
 
 ## 5. Verify Deployment
 
-You can monitor the progress of the deployment by running `kubectl get pods` and adding `-n splunk` to the command to see the pods in the `splunk` namespace which should typically report that the new pods are up and running after about 30 seconds.
+You can monitor the progress of the deployment by running `kubectl get pods` which should typically report that the new pods are up and running after about 30 seconds.
 
 Ensure the status is reported as **Running** before continuing.
 
 {{< tabs >}}
 {{% tab title="kubectl get pods" %}}
 
 ``` bash
-kubectl get pods -n splunk
+kubectl get pods
 ```
 
 {{% /tab %}}
@@ -125,7 +113,7 @@ Use the label set by the `helm` install to tail logs (You will need to press `ct
 {{% tab title="kubectl logs" %}}
 
 ``` bash
-kubectl logs -l app=splunk-otel-collector -f --container otel-collector -n splunk
+kubectl logs -l app=splunk-otel-collector -f --container otel-collector
 ```
 
 {{% /tab %}}
@@ -139,7 +127,7 @@ Or use the installed `k9s` terminal UI.
 If you make an error installing the Splunk OpenTelemetry Collector you can start over by deleting the installation using:
 
 ``` sh
-helm delete splunk-otel-collector -n splunk
+helm delete splunk-otel-collector
 ```
 
 {{% /notice %}}
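With this commit the collector lands in the default namespace, so a readiness check no longer needs `-n splunk`. A minimal sketch of the "Ensure the status is reported as **Running**" step; the `all_running` helper name is hypothetical, and a captured sample stands in for a live `kubectl get pods` call:

```shell
#!/bin/sh
# Hypothetical helper: succeed only if every pod in `kubectl get pods`-style
# output reports STATUS "Running". In a live cluster you would pipe in
# the real `kubectl get pods` output instead of the sample below.
all_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}

sample='NAME                                READY   STATUS    RESTARTS   AGE
splunk-otel-collector-agent-xxxxx   1/1     Running   0          30s'

if printf '%s\n' "$sample" | all_running; then
  echo "all pods Running"
fi
```

In practice you would loop on `kubectl get pods | all_running` until it succeeds, since the chart pods take roughly 30 seconds to start.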

content/en/other/4-hpa/2-check-new-navigator-short.md (6 additions, 6 deletions)

@@ -42,7 +42,7 @@ Now, let's find your cluster by filtering on the field `k8s.cluster.name` in the
 {{% notice title="Note" style="info" %}}
 You can enter a partial name into the search box, such as `emea-ws-7*`, to quickly find your Cluster.
 
-Also, it's a very good idea to switch the default time from the default **-3h** back to the last 15 minutes (**-15m**).
+Also, it's a very good idea to switch the default time from the default **-4h** back to the last 15 minutes (**-15m**).
 {{% /notice %}}
 
 ![Workloads](../images/k8s-workload-filter.png)
@@ -98,19 +98,19 @@ To filter to a specific workload, you can click on three dots **...** next to th
 
 ![workload-add-filter](../images/workload-add-filter.png)
 
-This will add the selected workload to your filters. Try this for the **splunk-otel-collector-k8s-cluster-receiver** workload. It will then list a single workload in the **splunk** namespace.
+This will add the selected workload to your filters. It would then list a single workload in the **default** namespace.
 
-The Heat map above will also filter down to a single-colored square. Click on the square to see more information about the workload.
+![workload-add-filter](../images/heatmap-filter-down.png)
+
+From the Heatmap above find the **splunk-otel-collector-k8s-cluster-receiver** in the **default** namespace and click on the square to see more information about the workload.
 
 ![workload-add-filter](../images/k8s-workload-detail.png)
 
 {{% notice title="Workshop Question" style="tip" icon="question" %}}
 What are the CPU request & CPU limit units for the otel-collector?
 {{% /notice %}}
 
-At this point, you can drill into the information of the pods, but that is outside the scope of this workshop, for now reset your view by removing the filter for the **splunk-otel-collector-k8s-cluster-receiver** workload and setting the **Color by** option to **Pods Running**.
-
-![workload-add-filter](../images/k8s-workload-remove-filter.png)
+At this point, you can drill into the information of the pods, but that is outside the scope of this workshop.
 
 ## 3. Navigator Sidebar
 

content/en/other/4-hpa/3-deploy-apache.md (22 additions, 13 deletions)

@@ -4,7 +4,17 @@ linkTitle: 3. Deploying PHP/Apache
 weight: 3
 ---
 
-## 1. DNS and Services in Kubernetes
+## 1. Namespaces in Kubernetes
+
+Most of our customers will make use of some kind of private or public cloud service to run Kubernetes. They often choose to have only a few large Kubernetes clusters as it is easier to manage centrally.
+
+Namespaces are a way to organize these large Kubernetes clusters into virtual sub-clusters. This can be helpful when different teams or projects share a Kubernetes cluster as this will give them the easy ability to just see and work with their resources.
+
+Any number of namespaces are supported within a cluster, each logically separated from others but with the ability to communicate with each other. Components are only **visible** when selecting a namespace or when adding the `--all-namespaces` flag to `kubectl` instead of allowing you to view just the components relevant to your project by selecting your namespace.
+
+Most customers will want to install the applications into a separate namespace. This workshop will follow that best practice.
+
+## 2. DNS and Services in Kubernetes
 
 The Domain Name System (DNS) is a mechanism for linking various sorts of information with easy-to-remember names, such as IP addresses. Using a DNS system to translate request names into IP addresses makes it easy for end-users to reach their target domain name effortlessly.
 
@@ -30,7 +40,7 @@ my_pod.service-name.my-namespace.svc.cluster-domain.example
 
 More information can be found here: [**DNS for Service and Pods**](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
 
-## 2. Review OTel receiver for PHP/Apache
+## 3. Review OTel receiver for PHP/Apache
 
 Inspect the YAML file `~/workshop/k3s/otel-apache.yaml` and validate the contents using the following command:
 
@@ -55,7 +65,7 @@ agent:
       service.name: php-apache
 ```
 
-## 3. Observation Rules in the OpenTelemetry config
+## 4. Observation Rules in the OpenTelemetry config
 
 The above file contains an observation rule for Apache using the OTel `receiver_creator`. This receiver can instantiate other receivers at runtime based on whether observed endpoints match a configured rule.
 
@@ -80,7 +90,6 @@ helm upgrade splunk-otel-collector \
 --set="splunkPlatform.token=$HEC_TOKEN" \
 --set="splunkPlatform.index=splunk4rookies-workshop" \
 splunk-otel-collector-chart/splunk-otel-collector \
---namespace splunk \
 -f ~/workshop/k3s/otel-collector.yaml \
 -f ~/workshop/k3s/otel-apache.yaml
 ```
@@ -94,16 +103,16 @@ The **REVISION** number of the deployment has changed, which is a helpful way to
 ``` text
 Release "splunk-otel-collector" has been upgraded. Happy Helming!
 NAME: splunk-otel-collector
-LAST DEPLOYED: Tue Jan 31 16:57:22 2023
-NAMESPACE: splunk
+LAST DEPLOYED: Tue Feb 6 11:17:15 2024
+NAMESPACE: default
 STATUS: deployed
 REVISION: 2
 TEST SUITE: None
 ```
 
 {{% /notice %}}
 
-## 4. Kubernetes ConfigMaps
+## 5. Kubernetes ConfigMaps
 
 A ConfigMap is an object in Kubernetes consisting of key-value pairs that can be injected into your application. With a ConfigMap, you can separate configuration from your Pods.
 
@@ -112,7 +121,7 @@ Using ConfigMap, you can prevent hardcoding configuration data. ConfigMaps are u
 The OpenTelemetry collector/agent uses ConfigMaps to store the configuration of the agent and the K8s Cluster receiver. You can/will always verify the current configuration of an agent after a change by running the following commands:
 
 ``` bash
-kubectl get cm -n splunk
+kubectl get cm
 ```
 
 {{% notice title="Workshop Question" style="tip" icon="question" %}}
@@ -122,7 +131,7 @@ How many ConfigMaps are used by the collector?
 When you have a list of ConfigMaps from the namespace, select the one for the `otel-agent` and view it with the following command:
 
 ``` bash
-kubectl get cm splunk-otel-collector-otel-agent -n splunk -o yaml
+kubectl get cm splunk-otel-collector-otel-agent -o yaml
 ```
 
 {{% notice title="NOTE" style="info" %}}
@@ -133,7 +142,7 @@ The option `-o yaml` will output the content of the ConfigMap in a readable YAML
 Is the configuration from `otel-apache.yaml` visible in the ConfigMap for the collector agent?
 {{% /notice %}}
 
-## 5. Review PHP/Apache deployment YAML
+## 6. Review PHP/Apache deployment YAML
 
 Inspect the YAML file `~/workshop/k3s/php-apache.yaml` and validate the contents using the following command:
 
@@ -171,7 +180,7 @@ spec:
       resources:
        limits:
          cpu: "8"
-         memory: "9Mi"
+         memory: "8Mi"
        requests:
          cpu: "6"
          memory: "4Mi"
@@ -190,7 +199,7 @@ spec:
     run: php-apache
 ```
 
-## 6. Deploy PHP/Apache
+## 7. Deploy PHP/Apache
 
 Create an `apache` namespace then deploy the PHP/Apache application to the cluster.
 
@@ -221,5 +230,5 @@ What metrics for your Apache instance are being reported in the Apache Navigator
 {{% notice title="Workshop Question" style="tip" icon="question" %}}
 Using Log Observer what is the issue with the PHP/Apache deployment?
 
-**Tip:** Adjust your **Table settings** by clicking on the cog to use only `object.involvedObject.name`, `object.message` and `k8s.cluster.name`. Make sure you unselect `_raw`!
+**Tip:** Adjust your filters to use: `object = php-apache-svc` and `k8s.cluster.name = <your_cluster>`.
 {{% /notice %}}
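The DNS section that this file now renumbers describes in-cluster names of the form `service-name.my-namespace.svc.cluster-domain.example`. A small sketch of that pattern for the workshop's `php-apache-svc` Service; the `svc_dns` helper name is mine, and `cluster.local` is the usual default cluster domain (yours may differ):

```shell
#!/bin/sh
# Hypothetical helper: build the in-cluster DNS name for a Service,
# following the <service>.<namespace>.svc.<cluster-domain> pattern.
svc_dns() {
  svc="$1"; ns="$2"; domain="${3:-cluster.local}"
  echo "${svc}.${ns}.svc.${domain}"
}

# The php-apache Service deployed into the apache namespace:
svc_dns php-apache-svc apache   # → php-apache-svc.apache.svc.cluster.local
```

Pods in other namespaces can reach the Service by this fully qualified name, while pods in the same namespace can use the short `php-apache-svc` form.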

content/en/other/4-hpa/4-fix-apache.md (2 additions, 2 deletions)

@@ -15,7 +15,7 @@ You do this by placing a Resource section in the deployment of your application/
 resources:
   limits:          # Maximum amount of CPU & memory for peek use
     cpu: "8"       # Maximum of 8 cores of CPU allowed at for peek use
-    memory: "9Mi"  # Maximum allowed 9Mb of memory
+    memory: "8Mi"  # Maximum allowed 8Mb of memory
   requests:        # Request are the expected amount of CPU & memory for normal use
     cpu: "6"       # Requesting 4 cores of a CPU
     memory: "4Mi"  # Requesting 4Mb of memory
@@ -49,7 +49,7 @@ Find the resources section and reduce the CPU limits to **1** and the CPU reques
 resources:
   limits:
     cpu: "1"
-    memory: "9Mi"
+    memory: "8Mi"
   requests:
     cpu: "0.5"
     memory: "4Mi"
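The reduced CPU requests and limits above set up the workshop's Horizontal Pod Autoscaler exercise. The HPA scaling rule from the Kubernetes documentation is desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue); a sketch of that arithmetic (the `desired_replicas` helper name and the sample numbers are illustrative, not from this commit):

```shell
#!/bin/sh
# Sketch of the HPA scaling rule from the Kubernetes docs:
#   desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
desired_replicas() {
  awk -v c="$1" -v cur="$2" -v tgt="$3" 'BEGIN {
    d = c * cur / tgt
    r = (d == int(d)) ? d : int(d) + 1   # ceil()
    print r
  }'
}

# 2 replicas averaging 90% CPU against a 50% target scale to ceil(3.6) = 4:
desired_replicas 2 90 50   # → 4
```

This is why lowering the CPU request matters: utilization is measured against the request, so a smaller request makes the same load push the ratio, and the replica count, up.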
The remaining changed files are binary images (118 KB, 31.3 KB, 2.11 KB, 15.5 KB, 3.66 KB, and one removed file of 85.4 KB); binary diffs are not shown.
