**content/en/other/hpa/1-deploy-otel.md** (11 additions, 9 deletions)
@@ -8,25 +8,23 @@ weight: 1

We will be starting this workshop using the new Kubernetes Navigator, so please check that you are already using the new Navigator.

- When you select **Infrastructure** from the main menu on the left, followed by selecting **Kubernetes**, you should see two services panes (**K8s nodes** and **K8s workloads**) for Kubernetes, similar like the ones below:
+ When you select **Infrastructure** from the main menu on the left, followed by selecting **Kubernetes**, you should see two service panes (**K8s nodes** and **K8s workloads**) for Kubernetes, similar to the ones below:



## 2. Connect to EC2 instance

You will be able to connect to the workshop instance by using SSH from your Mac, Linux or Windows device. Open the link to the sheet provided by your instructor. This sheet contains the IP addresses and the password for the workshop instances.

To use SSH, open a terminal on your system and type `ssh ubuntu@x.x.x.x` (replacing x.x.x.x with the IP address assigned to you). The password for this workshop is also provided in the sheet.

- {{% notice title="Note" style="info" %}}
+ {{% notice style="info" %}}
Your workshop instance has been pre-configured with the correct **Access Token** and **Realm** for this workshop. There is no need for you to configure these.
{{% /notice %}}

## 3. Namespaces in Kubernetes

Most of our customers will make use of some kind of private or public cloud service to run Kubernetes. They often choose to have only a few large Kubernetes clusters, as it is easier to manage them centrally.

- Namespaces are a way to organize these large Kubernetes clusters into virtual sub-clusters. This can be helpful when different teams or projects share a Kubernetes cluster as this will give them the easy ability to just see and work with their own resources.
+ Namespaces are a way to organize these large Kubernetes clusters into virtual sub-clusters. This can be helpful when different teams or projects share a Kubernetes cluster, as it gives them an easy way to see and work with only their own resources.

Any number of namespaces are supported within a cluster, each logically separated from the others but able to communicate with each other. Components are only **visible** when you select a namespace or add the `--all-namespaces` flag to `kubectl`; selecting your own namespace lets you view just the components relevant to your project.
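To make that visibility rule concrete, here is a minimal `kubectl` sketch (the namespace name is illustrative):

``` bash
# Create a namespace and look only at its components ("my-team" is an illustrative name)
kubectl create namespace my-team
kubectl get pods -n my-team

# View components across every namespace in the cluster
kubectl get pods --all-namespaces
```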
If you are using the Kubernetes Integration setup from the Data Management page in the O11y UI, you will find that the guide uses `--generate-name splunk-otel-collector-chart/splunk-otel-collector` instead of just `splunk-otel-collector-chart/splunk-otel-collector` as we do in the above example.

- This will generate an unique name/label for the collector install and Pods by adding a unique number at the end of the object name, allowing you to install multiple collectors in your Kubernetes environment with different configurations.
+ This will generate a unique name/label for the collector install and Pods by adding a unique number at the end of the object name, allowing you to install multiple collectors in your Kubernetes environment with different configurations.

Just make sure you use the correct label that is generated by the Helm chart if you wish to use the `helm` and `kubectl` commands from this workshop on an install done with the `--generate-name` option.
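As a rough sketch, an install with `--generate-name` and the follow-up lookup could look like this (the generated suffix will differ on every install):

``` bash
# Installs the chart under a generated release name, e.g. splunk-otel-collector-1724683921
helm install --generate-name splunk-otel-collector-chart/splunk-otel-collector

# Look up the generated release name and the labels applied to its Pods,
# then substitute that name/label into the workshop's helm and kubectl commands
helm list -A
kubectl get pods --all-namespaces --show-labels | grep splunk-otel-collector
```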
**content/en/other/hpa/2-check-new-navigator-short.md** (24 additions, 26 deletions)
@@ -1,29 +1,27 @@

---
- title: Tour of the Kubernetes Navigator v2
- linkTitle: 2. Kubernetes Navigator v2
+ title: Tour of the Kubernetes Navigator
+ linkTitle: 2. Kubernetes Navigator
weight: 2
---

## 1. Cluster vs Workload View

The Kubernetes Navigator offers you two separate use cases to view your Kubernetes data.

- * The **K8s workloads** is focusing on providing information in regards to workloads a.k.a. *your deployments*.
- * The **K8s nodes** is focusing on providing insight into the performance of clusters, nodes, pods and containers.
+ * The **K8s workloads** view focuses on providing information about your workloads, a.k.a. *your deployments*.
+ * The **K8s nodes** view focuses on providing insight into the performance of clusters, nodes, pods and containers.

You will initially select either view depending on your need (you can switch between the views on the fly if required). The most common one we will use in this workshop is the workload view, and we will focus on that specifically.

- ### 1.1 Finding your K8s Cluster name
+ ### 1.1 Finding your K8s Cluster Name

- Your first task is to identify and find your own cluster. The cluster will be named after your EC2 instance name.
- To confirm your EC2 instance name, look at the prompt of your EC2 instance. For example, if you are assigned the 7th EC2 instance, the prompt will show:
+ Your first task is to identify and find your cluster. The cluster is named as determined by the preconfigured environment variable `INSTANCE`. To confirm the cluster name, enter the following command in your terminal:

``` bash
- ubuntu@emea-ws-7 ~ $
+ echo $INSTANCE-k3s-cluster
```

- This means your cluster will be named: `emea-ws-7-k3s-cluster`. Please make a note of your cluster name as you will need this later in the workshop for filtering.
+ Please make a note of your cluster name, as you will need it later in the workshop for filtering.

## 2. Workloads & Workload Details Pane
@@ -39,12 +37,12 @@ Initially, you will see all the workloads for all clusters that are reported int



- Now, let's find your own cluster by filtering on the field `k8s.cluster.name` in the filter toolbar (as marked with a blue stripe).
+ Now, let's find your cluster by filtering on the field `k8s.cluster.name` in the filter toolbar (as marked with a blue stripe).

{{% notice title="Note" style="info" %}}
- You can enter a partial name into the search box, such as 'emea-ws-7*', to quickly find your Cluster.
+ You can enter a partial name into the search box, such as `emea-ws-7*`, to quickly find your Cluster.

- Also, it's a very good idea to switch the default time from the default **-3h** back to last 15 minutes (**-15m**).
+ Also, it's a very good idea to switch the time range from the default **-3h** back to the last 15 minutes (**-15m**).
{{% /notice %}}


@@ -55,15 +53,15 @@ You should now just see information for your own cluster.

How many workloads are running & how many namespaces are in your Cluster?
{{% /notice %}}

- ### 2.1 Using the Navigator Selection chart
+ ### 2.1 Using the Navigator Selection Chart

- The **K8s workloads** table is a common feature used across most of the Navigator's and will offer you a list view of the data you are viewing. In our case, it shows a list of `Pods Failed` grouped by `k8s.namespace.name`.
+ The **K8s workloads** table is a common feature used across most of the Navigators and will offer you a list view of the data you are viewing. In our case, it shows a list of `Pods Failed` grouped by `k8s.namespace.name`.

- Now let's change the list view to a heat map view by selecting either the Heat map icon or List icon in the upper-right corner of the screen (as marked with the purple line).
+ Now let's change the list view to a heat map view by selecting either the Heat map icon or the List icon in the upper-right corner of the screen (as marked with the purple line).

- Changing this option will result in the following visualisation:
+ Changing this option will result in the following visualization:


@@ -72,17 +70,17 @@ In this view, you will note that each workload is now a colored square. These sq

Another valuable option in this screen is **Find Outliers**, which provides historical analytics of your clusters based on what is selected in the **Color by** dropdown.

- Now, let's select the **File system usage (bytes)** from the **Color by** dropdown box, then click on the **Find outliers** dropdown *as marked by a yellow line* in the above image and make sure you change the **Scope** in the dialog to **Per k8s.namespace.name** and **Deviation from Median** as below:
+ Now, let's select **File system usage (bytes)** from the **Color by** drop-down box, then click on the **Find outliers** drop-down *as marked by a yellow line* in the above image, and make sure you change the **Scope** in the dialog to **Per k8s.namespace.name** and **Deviation from Median** as below:



- The **Find outliers** view is very useful when you need to view a selection of your workloads (or any service depending on the Navigator used) and quickly need to figure out if something has changed.
+ The **Find Outliers** view is very useful when you need to view a selection of your workloads (or any service, depending on the Navigator used) and quickly figure out if something has changed.

It will give you fast insight into items (workloads in our case) that are performing differently (either increased or decreased), which makes it easier to spot problems.

- ### 2.2 The Deployment overview pane
+ ### 2.2 The Deployment Overview pane

- The Deployment overview pane gives you a quick insight of the status of your deployments. You can see at once if the pods of your deployments are Pending, Running, Succeeded, Failed or in an Unknown state.
+ The Deployment Overview pane gives you a quick insight into the status of your deployments. You can see at once if the pods of your deployments are Pending, Running, Succeeded, Failed or in an Unknown state.
@@ -102,15 +100,15 @@ To filter to a specific workload, you can click on three dots **...** next to th

This will add the selected workload to your filters. Try this for the **splunk-otel-collector-k8s-cluster-receiver** workload. It will then list a single workload in the **splunk** namespace.

- The Heat map above will also filter down to a single coloured square. Click on the square to see more information about the workload.
+ The Heat map above will also filter down to a single colored square. Click on the square to see more information about the workload.

What are the CPU request & CPU limit units for the otel-collector?
{{% /notice %}}

- At this point you can drill into the information of the pods, but that is outside the scope of this workshop, for now reset your view by removing the filter for the **splunk-otel-collector-k8s-cluster-receiver** workload and setting the **Color by** option to **Pods Running**.
+ At this point, you can drill into the information of the pods, but that is outside the scope of this workshop. For now, reset your view by removing the filter for the **splunk-otel-collector-k8s-cluster-receiver** workload and setting the **Color by** option to **Pods Running**.

The Navigator Sidebar will expand and a link to the discovered service will be added, as seen in the image below:

- 
+ 

- This will allow for easy switching between Navigators. The same applies for your Apache server instance, it will have a Navigator Sidebar allowing you to quickly jump back to the Kubernetes Navigator.
+ This will allow for easy switching between Navigators. The same applies to your Apache server instance: it will have a Navigator Sidebar allowing you to quickly jump back to the Kubernetes Navigator.
- More information can be found here: [DNS for Service and Pods](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
+ More information can be found here: [**DNS for Service and Pods**](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)

## 2. Review OTel receiver for PHP/Apache

@@ -61,7 +61,7 @@ The above file contains an observation rule for Apache using the OTel `receiver_

The configured rules will be evaluated for each endpoint discovered. If the rule evaluates to true, then the receiver for that rule will be started as configured against the matched endpoint.

- In the file above we tell the OpenTelemetry agent to look for Pods that match the name `apache` and have port `80` open. Once found, the agent will configure an Apache receiver to read Apache metrics from the configured URL. Note, the K8s DNSbased URL in the above YAML for the service.
+ In the file above, we tell the OpenTelemetry agent to look for Pods that match the name `apache` and have port `80` open. Once found, the agent will configure an Apache receiver to read Apache metrics from the configured URL. Note the K8s DNS-based URL for the service in the above YAML.
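The `otel-apache.yaml` file itself is not reproduced in this diff, but a minimal sketch of a `receiver_creator` observation rule of the kind described above could look like the following; the rule expression, service name, namespace, and interval here are assumptions, not the workshop's exact configuration:

```yaml
extensions:
  k8s_observer:
    auth_type: serviceAccount     # discover Pod/port endpoints via the Kubernetes API

receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      apache:
        # Start an Apache receiver for any discovered Pod named "apache" with port 80 open
        rule: type == "port" && pod.name matches "apache" && port == 80
        config:
          # K8s DNS-based service URL (service and namespace names are assumptions)
          endpoint: http://apache-web-svc.apache.svc.cluster.local:80/server-status?auto
          collection_interval: 10s
```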
To use the Apache configuration, you can upgrade the existing Splunk OpenTelemetry Collector Helm chart to use the `otel-apache.yaml` file with the following command:
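The command itself is not shown in this excerpt. A `helm upgrade` of that general shape might look like the sketch below; the release name, values handling, and file location are assumptions, so use the workshop's actual command:

``` bash
# Release name and flags are assumptions; keep your existing chart values and
# layer the Apache discovery configuration on top of them
helm upgrade splunk-otel-collector \
  --reuse-values \
  -f otel-apache.yaml \
  splunk-otel-collector-chart/splunk-otel-collector
```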
- A ConfigMap is an object in Kubernetes consisting of key-value pairs which can be injected into your application. With a ConfigMap, you can separate configuration from your Pods.
+ A ConfigMap is an object in Kubernetes consisting of key-value pairs that can be injected into your application. With a ConfigMap, you can separate configuration from your Pods.

Using ConfigMaps, you can avoid hardcoding configuration data. ConfigMaps are useful for storing and sharing non-sensitive, unencrypted configuration information.
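For illustration, a minimal ConfigMap of the kind described here could look like this (the name, namespace, and keys are invented for the example):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: httpd-config            # illustrative name
  namespace: apache
data:
  HTTPD_LOG_LEVEL: "info"       # a plain key-value pair, usable as an environment variable
  extra.conf: |                 # or an entire config file, mountable as a volume
    ServerTokens Prod
```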
@@ -115,7 +119,7 @@ kubectl get cm -n splunk

How many ConfigMaps are used by the collector?
{{% /notice %}}

- When you have list of ConfigMaps from the namespace, select the one for the `otel-agent` and view it with the following command:
+ When you have a list of ConfigMaps from the namespace, select the one for the `otel-agent` and view it with the following command:

``` bash
kubectl get cm splunk-otel-collector-otel-agent -n splunk -o yaml
```

This file contains the configuration for the PHP/Apache deployment and will create a new StatefulSet with a single replica of the PHP/Apache image.

- A stateless application is one that does not care which network it is using, and it does not need permanent storage. Examples of stateless apps may include web servers such as Apache, Nginx, or Tomcat.
+ A stateless application does not care which network it is using, and it does not need permanent storage. Examples of stateless apps may include web servers such as Apache, Nginx, or Tomcat.

```yaml
apiVersion: apps/v1
```

@@ -188,7 +192,7 @@ spec:

## 6. Deploy PHP/Apache

- Create an apache namespace then deploy the PHP/Apache application to the cluster.
+ Create an `apache` namespace, then deploy the PHP/Apache application to the cluster.
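A sketch of those two steps (the manifest file name is an assumption):

``` bash
# Create the namespace and deploy the PHP/Apache StatefulSet into it
kubectl create namespace apache
kubectl apply -f php-apache.yaml -n apache     # file name is an assumption

# Verify the Pod comes up
kubectl get pods -n apache
```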
**content/en/other/hpa/4-fix-apache.md** (5 additions, 5 deletions)
@@ -7,7 +7,7 @@ weight: 4

Especially in Production Kubernetes Clusters, CPU and Memory are considered precious resources. Cluster Operators will normally require you to specify the amount of CPU and Memory your Pod or Service will require in the deployment, so that they can have the Cluster automatically manage on which Node(s) your solution will be placed.

- You do this by placing a Resource section in the deployment of you application/Pod
+ You do this by placing a Resource section in the deployment of your application/Pod.

**Example:**

@@ -21,13 +21,13 @@ resources:

```yaml
      memory: "4Mi"  # Requesting 4 MiB of memory
```
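Only the memory request line of that example survives in this excerpt; a full requests/limits stanza of the same general shape is sketched below, with illustrative CPU values and limits rather than the workshop's exact numbers:

```yaml
resources:
  requests:            # what the scheduler reserves for the Pod
    cpu: "250m"        # a quarter of a CPU core (illustrative)
    memory: "4Mi"      # 4 MiB of memory, as in the snippet above
  limits:              # hard ceiling enforced on the container
    cpu: "500m"        # illustrative
    memory: "8Mi"      # illustrative
```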
- More information can be found here: [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
+ More information can be found here: [**Resource Management for Pods and Containers**](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)

If your application or Pod goes over the limits set in your deployment, Kubernetes will kill and restart your Pod to protect the other applications on the Cluster.

Another scenario that you will run into is when there is not enough Memory or CPU on a Node. In that case, the Cluster will try to reschedule your Pod(s) on a different Node with more space.

- If that fails, or if there is not enough space when you deploy your application, the Cluster will put your workload/deployment in schedule mode until there is enough room on any of the available Nodes to deploy the Pods according their limits.
+ If that fails, or if there is not enough space when you deploy your application, the Cluster will put your workload/deployment in schedule mode until there is enough room on any of the available Nodes to deploy the Pods according to their limits.

## 2. Fix PHP/Apache Deployment

@@ -55,7 +55,7 @@ resources:

```yaml
      memory: "4Mi"
```

- Save the changes youhave made. (Hint: Use `Esc` followed by `:wq!` to save your changes).
+ Save the changes you have made. (Hint: Use `Esc` followed by `:wq!` to save your changes).

Now we must delete the existing StatefulSet and re-create it: StatefulSets are immutable, so this is the only way to apply the new changes.
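A sketch of that delete-and-recreate step; the StatefulSet and manifest names are assumptions based on the deployment described earlier:

``` bash
# Remove the old StatefulSet, then recreate it from the edited manifest
kubectl delete statefulset php-apache -n apache      # name is an assumption
kubectl apply -f php-apache.yaml -n apache           # file name is an assumption

# Confirm the Pod comes back with the new resource settings
kubectl get pods -n apache
```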
@@ -91,7 +91,7 @@ Monitor the Apache web servers Navigator dashboard for a few minutes.

What is happening with the # Hosts reporting chart?
{{% /notice %}}

- ## 4. Fix memory issue
+ ## 4. Fix the memory issue

If you navigate back to the Apache dashboard, you will notice that metrics are no longer coming in. We have another resource issue, and this time we are out of memory. Let's edit the StatefulSet and increase the memory to what is shown in the image below:
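One way to make that edit is sketched below; the StatefulSet name and namespace are assumptions, and the target memory value comes from the referenced image:

``` bash
# Opens the live StatefulSet definition in your editor; update
# spec.template.spec.containers[].resources.limits.memory to the value shown in the image
kubectl edit statefulset php-apache -n apache
```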