
Commit a22086e

Markdown linting
1 parent 6de348c commit a22086e

File tree

5 files changed: +366, -338 lines


content/en/tko/session-2/docs/code_to_python.md

Lines changed: 5 additions & 5 deletions
@@ -1,7 +1,7 @@
 ---
 title: Code to Kubernetes - Python
 linkTitle: Code to Kubernetes
-weight: 4
+weight: 5
 ---
 
 ## Code to Kubernetes - Python
@@ -81,9 +81,9 @@ python3 review.py
 ```
 
 Verify that the service is working
-- Use curl in your terminal
-- Or hit the URL http://{Your_EC2_IP_address}:5000 and http://{Your_EC2_IP_address}:5000/get_review with a browser
 
+- Use curl in your terminal
+- Or hit the URL `http://{Your_EC2_IP_address}:5000` and `http://{Your_EC2_IP_address}:5000/get_review` with a browser
 
 ``` bash
 curl localhost:5000
@@ -241,10 +241,10 @@ Notes about review.service.yaml:
 - the selector associates this service to pods with the label app with the value being review
 - the review service exposes the review pods as a network service
 - other pods can now ping 'review' and they will hit a review pod.
-- a pod would get a review if it ran 'curl http://review:5000'
+- a pod would get a review if it ran `curl http://review:5000`
 - NodePort service
 - the service is accessible to the K8 host by the nodePort, 30000
-- Another machine that has this can get a review if it ran 'curl http://<k8 host ip>:30000'
+- Another machine that has this can get a review if it ran `curl http://<k8 host ip>:30000`
 
 Apply the review deployment and service

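As an aside to the NodePort notes above: they correspond to a Service manifest roughly like the sketch below (not taken from the commit itself). The selector label, port 5000, and nodePort 30000 come from the notes; the workshop's actual review.service.yaml may differ.

``` yaml
# Sketch only, inferred from the notes above -- not necessarily the workshop's review.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: review            # other pods can reach it at http://review:5000
spec:
  type: NodePort
  selector:
    app: review           # matches pods labelled app: review
  ports:
    - port: 5000          # service port inside the cluster
      targetPort: 5000    # container port of the review pods
      nodePort: 30000     # exposed on the K8s host: curl http://<k8 host ip>:30000
```
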
Lines changed: 169 additions & 151 deletions
@@ -1,7 +1,7 @@
 ---
 title: Deploy complex environments and capture metrics
 linkTitle: Deployment
-weight: 3
+weight: 4
 ---
 
 **Objective:** Learn how to efficiently deploy complex infrastructure components such as Kafka and MongoDB to demonstrate metrics collection with Splunk O11y IM integrations
@@ -12,167 +12,185 @@ weight: 3
 
 A prospect uses Kafka and MongoDB in their environment. Since there are integrations for these services, you’d like to demonstrate this to the prospect. What is a quick and efficient way to set up a live environment with these services and have metrics collected?
 
-1. Where can I find helm charts?
-a. Google “myservice helm chart”
-b. https://artifacthub.io/ (**Note:** Look for charts from trusted organizations, with high star count and frequent updates)
-2. Review Apache Kafka packaged by Bitnami. We will deploy the helm chart with these options enabled:
-a. `replicaCount=3`
-b. `metrics.jmx.enabled=true`
-c. `metrics.kafka.enabled=true`
-d. `deleteTopicEnable=true`
-3. Review MongoDB(R) packaged by Bitnami. We will deploy the helm chart with these options enabled:
-a. `version 12.1.31`
-b. `metrics.enabled=true`
-c. `global.namespaceOverride=default`
-d. `auth.rootUser=root`
-e. `auth.rootPassword=splunk`
-f. `auth.enabled=false`
-4. Install Kafka and MongoDB with helm charts
-
-``` bash
-helm repo add bitnami https://charts.bitnami.com/bitnami
-
-helm install kafka --set replicaCount=3 --set metrics.jmx.enabled=true --set metrics.kafka.enabled=true --set deleteTopicEnable=true bitnami/kafka
-
-helm install mongodb --set metrics.enabled=true bitnami/mongodb --set global.namespaceOverride=default --set auth.rootUser=root --set auth.rootPassword=splunk --set auth.enabled=false --version 12.1.31
-
-
-###verify the helm chart installation
-helm list
-NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
-kafka default 1 2022-11-14 11:21:36.328956822 -0800 PST deployed kafka-19.1.3 3.3.1
-mongodb default 1 2022-11-14 11:19:36.507690487 -0800 PST deployed mongodb-12.1.31 5.0.10
-
-###verify the helm chart installation
-
-kubectl get pods
-NAME READY STATUS RESTARTS AGE
-kafka-exporter-595778d7b4-99ztt 0/1 ContainerCreating 0 17s
-mongodb-b7c968dbd-jxvsj 0/2 Pending 0 6s
-kafka-1 0/2 ContainerCreating 0 16s
-kafka-2 0/2 ContainerCreating 0 16s
-kafka-zookeeper-0 0/1 Pending 0 17s
-kafka-0 0/2 Pending 0 17s
-```
-
-5. Use information for each Helm chart and Splunk O11y Data Setup to generate values.yaml for capturing metrics from Kafka and MongoDB. **Note:** values.yaml for the different services will be passed to the Splunk Helm Chart at installation time. These will configure the OTEL collector to capture metrics from these services.
+### 1. Where can I find helm charts?
+
+- Google “myservice helm chart”
+- `https://artifacthub.io/` (**Note:** Look for charts from trusted organizations, with high star count and frequent updates)
+
+### 2. Review Apache Kafka packaged by Bitnami
+
+We will deploy the helm chart with these options enabled:
+
+- `replicaCount=3`
+- `metrics.jmx.enabled=true`
+- `metrics.kafka.enabled=true`
+- `deleteTopicEnable=true`
+
+### 3. Review MongoDB(R) packaged by Bitnami
+
+We will deploy the helm chart with these options enabled:
+
+- `version 12.1.31`
+- `metrics.enabled=true`
+- `global.namespaceOverride=default`
+- `auth.rootUser=root`
+- `auth.rootPassword=splunk`
+- `auth.enabled=false`
+
+### 4. Install Kafka and MongoDB with helm charts
+
+``` bash
+helm repo add bitnami https://charts.bitnami.com/bitnami
+
+helm install kafka --set replicaCount=3 --set metrics.jmx.enabled=true --set metrics.kafka.enabled=true --set deleteTopicEnable=true bitnami/kafka
+
+helm install mongodb --set metrics.enabled=true bitnami/mongodb --set global.namespaceOverride=default --set auth.rootUser=root --set auth.rootPassword=splunk --set auth.enabled=false --version 12.1.31
+
+
+###verify the helm chart installation
+helm list
+NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+kafka default 1 2022-11-14 11:21:36.328956822 -0800 PST deployed kafka-19.1.3 3.3.1
+mongodb default 1 2022-11-14 11:19:36.507690487 -0800 PST deployed mongodb-12.1.31 5.0.10
+
+###verify the helm chart installation
+
+kubectl get pods
+NAME READY STATUS RESTARTS AGE
+kafka-exporter-595778d7b4-99ztt 0/1 ContainerCreating 0 17s
+mongodb-b7c968dbd-jxvsj 0/2 Pending 0 6s
+kafka-1 0/2 ContainerCreating 0 16s
+kafka-2 0/2 ContainerCreating 0 16s
+kafka-zookeeper-0 0/1 Pending 0 17s
+kafka-0 0/2 Pending 0 17s
+```
+
+Use information for each Helm chart and Splunk O11y Data Setup to generate values.yaml for capturing metrics from Kafka and MongoDB.
+
+{{% alert title="Note" color="info" %}}
+`values.yaml` for the different services will be passed to the Splunk Helm Chart at installation time. These will configure the OTEL collector to capture metrics from these services.
+{{% /alert %}}
 
 - References:
-a. Apache Kafka packaged by Bitnami
-b. Configure application receivers for databases » Apache Kafka
-c. Kafkametricsreceiver
+  - Apache Kafka packaged by Bitnami
+  - Configure application receivers for databases » Apache Kafka
+  - Kafkametricsreceiver
 
-- Example kafka.values.yaml:
+#### 4.1 Example kafka.values.yaml
 
-``` yaml
-otelAgent:
+``` yaml
+otelAgent:
+  config:
+    receivers:
+      receiver_creator:
+        receivers:
+          smartagent/kafka:
+            rule: type == "pod" && name matches "kafka"
+            config:
+              #endpoint: '`endpoint`:5555'
+              port: 5555
+              type: collectd/kafka
+              clusterName: sl-kafka
+otelK8sClusterReceiver:
+  k8sEventsEnabled: true
+  config:
+    receivers:
+      kafkametrics:
+        brokers: kafka:9092
+        protocol_version: 2.0.0
+        scrapers:
+          - brokers
+          - topics
+          - consumers
+    service:
+      pipelines:
+        metrics:
+          receivers:
+            #- prometheus
+            - k8s_cluster
+            - kafkametrics
+```
+
+#### 4.2 Example mongodb.values.yaml
+
+``` yaml
+otelAgent:
   config:
     receivers:
       receiver_creator:
         receivers:
-          smartagent/kafka:
-            rule: type == "pod" && name matches "kafka"
+          smartagent/mongodb:
+            rule: type == "pod" && name matches "mongo"
             config:
-              #endpoint: '`endpoint`:5555'
-              port: 5555
-              type: collectd/kafka
-              clusterName: sl-kafka
-otelK8sClusterReceiver:
-  k8sEventsEnabled: true
+              type: collectd/mongodb
+              host: mongodb.default.svc.cluster.local
+              port: 27017
+              databases: ["admin", "O11y", "local", "config"]
+              sendCollectionMetrics: true
+              sendCollectionTopMetrics: true
+```
+
+#### 4.3 Example zookeeper.values.yaml
+
+``` yaml
+otelAgent:
   config:
     receivers:
-      kafkametrics:
-        brokers: kafka:9092
-        protocol_version: 2.0.0
-        scrapers:
-          - brokers
-          - topics
-          - consumers
-    service:
-      pipelines:
-        metrics:
+      receiver_creator:
           receivers:
-            #- prometheus
-            - k8s_cluster
-            - kafkametrics
-```
+            smartagent/zookeeper:
+              rule: type == "pod" && name matches "kafka-zookeeper"
+              config:
+                type: collectd/zookeeper
+                host: kafka-zookeeper
+                port: 2181
+```
 
-- Example mongodb.values.yaml:
+### 5. Install the Splunk OTEL helm chart
 
-``` yaml
-otelAgent:
-  config:
-    receivers:
-      receiver_creator:
-        receivers:
-          smartagent/mongodb:
-            rule: type == "pod" && name matches "mongo"
-            config:
-              type: collectd/mongodb
-              host: mongodb.default.svc.cluster.local
-              port: 27017
-              databases: ["admin", "O11y", "local", "config"]
-              sendCollectionMetrics: true
-              sendCollectionTopMetrics: true
-```
-
-- Example zookeeper.values.yaml:
-
-``` yaml
-otelAgent:
-  config:
-    receivers:
-      receiver_creator:
-        receivers:
-          smartagent/zookeeper:
-            rule: type == "pod" && name matches "kafka-zookeeper"
-            config:
-              type: collectd/zookeeper
-              host: kafka-zookeeper
-              port: 2181
-```
-
-6. Install the Splunk OTEL helm chart:
-
-``` bash
-export SPLUNK_ACCESS_TOKEN=<your access token>
-export SPLUNK_REALM=<your realm>
-export clusterName=<your cluster name>
-
-cd ../otel_yamls
-helm repo add splunk-otel-collector-chart https://splunk.github.io/splunk-otel-collector-chart
-helm repo update
-
-helm install --set provider=' ' --set distro=' ' --set splunkObservability.accessToken=$SPLUNK_ACCESS_TOKEN --set clusterName=$clusterName --set splunkObservability.realm=$SPLUNK_REALM --set otelCollector.enabled='false' --set splunkObservability.logsEnabled='true' --set gateway.enabled='false' --values kafka.values.yaml --values mongodb.values.yaml --values zookeeper.values.yaml --values alwayson.values.yaml --values k3slogs.yaml --generate-name splunk-otel-collector-chart/splunk-otel-collector
-```
-
-7. Verify that the Kafka, MongoDB and Splunk OTEL Collector helm charts are installed. Note that names may differ.
-
-``` text
-$helm list
-NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
-kafka default 1 2021-12-07 12:48:47.066421971 -0800 PST deployed kafka-14.4.1 2.8.1
-mongodb default 1 2021-12-07 12:49:06.132771625 -0800 PST deployed mongodb-10.29.2 4.4.10
-splunk-otel-collector-1638910184 default 1 2021-12-07 12:49:45.694013749 -0800 PST deployed splunk-otel-collector-0.37.1 0.37.1
-
-$ kubectl get pods
-NAME READY STATUS RESTARTS AGE
-kafka-zookeeper-0 1/1 Running 0 18m
-kafka-2 2/2 Running 1 18m
-mongodb-79cf87987f-gsms8 2/2 Running 0 18m
-kafka-1 2/2 Running 1 18m
-kafka-exporter-7c65fcd646-dvmtv 1/1 Running 3 18m
-kafka-0 2/2 Running 1 18m
-splunk-otel-collector-1638910184-agent-27s5c 2/2 Running 0 17m
-splunk-otel-collector-1638910184-k8s-cluster-receiver-8587qmh9l 1/1 Running 0 17m
-```
-
-8. Verify that out of the box dashboards for Kafka, MongoDB and Zookeeper are populated in the Infrastructure Monitor landing page. Drill down into each component to view granular details for each service.
-
-- Infrastructure Monitoring Landing page:
-
-- K8 Navigator:
-
-- MongoDB Dashboard:
-
-- Kafka Dashboard:
+``` bash
+export SPLUNK_ACCESS_TOKEN=<your access token>
+export SPLUNK_REALM=<your realm>
+export clusterName=<your cluster name>
+
+cd ../otel_yamls
+helm repo add splunk-otel-collector-chart https://splunk.github.io/splunk-otel-collector-chart
+helm repo update
+
+helm install --set provider=' ' --set distro=' ' --set splunkObservability.accessToken=$SPLUNK_ACCESS_TOKEN --set clusterName=$clusterName --set splunkObservability.realm=$SPLUNK_REALM --set otelCollector.enabled='false' --set splunkObservability.logsEnabled='true' --set gateway.enabled='false' --values kafka.values.yaml --values mongodb.values.yaml --values zookeeper.values.yaml --values alwayson.values.yaml --values k3slogs.yaml --generate-name splunk-otel-collector-chart/splunk-otel-collector
+```
+
+### 6. Verify installation
+
+Verify that the Kafka, MongoDB and Splunk OTEL Collector helm charts are installed, note that names may differ.
+
+``` text
+$helm list
+NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+kafka default 1 2021-12-07 12:48:47.066421971 -0800 PST deployed kafka-14.4.1 2.8.1
+mongodb default 1 2021-12-07 12:49:06.132771625 -0800 PST deployed mongodb-10.29.2 4.4.10
+splunk-otel-collector-1638910184 default 1 2021-12-07 12:49:45.694013749 -0800 PST deployed splunk-otel-collector-0.37.1 0.37.1
+
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+kafka-zookeeper-0 1/1 Running 0 18m
+kafka-2 2/2 Running 1 18m
+mongodb-79cf87987f-gsms8 2/2 Running 0 18m
+kafka-1 2/2 Running 1 18m
+kafka-exporter-7c65fcd646-dvmtv 1/1 Running 3 18m
+kafka-0 2/2 Running 1 18m
+splunk-otel-collector-1638910184-agent-27s5c 2/2 Running 0 17m
+splunk-otel-collector-1638910184-k8s-cluster-receiver-8587qmh9l 1/1 Running 0 17m
+```
+
+### 7. Verify dashboards
+
+Verify that out of the box dashboards for Kafka, MongoDB and Zookeeper are populated in the Infrastructure Monitor landing page. Drill down into each component to view granular details for each service.
+
+- Infrastructure Monitoring Landing page:
+
+- K8 Navigator:
+
+- MongoDB Dashboard:
+
+- Kafka Dashboard:
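
As an aside to step 4 above (not from the commit itself): each `--set` flag maps one-to-one onto a chart value, so the same Kafka install can also be driven from a small values file. A sketch under that assumption; the file name is illustrative.

``` yaml
# kafka-install.values.yaml (illustrative name) -- equivalent to the --set flags in step 4
replicaCount: 3
deleteTopicEnable: true
metrics:
  jmx:
    enabled: true
  kafka:
    enabled: true
```

Passing it as `helm install kafka -f kafka-install.values.yaml bitnami/kafka` should behave the same as the one-line command shown in the diff.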

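A further aside on steps 6 and 7: pod status alone does not confirm that metrics are flowing. A minimal log check is sketched below, assuming the pod names from the sample output above; yours will differ.

``` bash
# List the collector pods, then scan the agent's logs for receiver/exporter errors.
# The pod name below is taken from the sample output above; substitute your own.
kubectl get pods | grep splunk-otel
kubectl logs splunk-otel-collector-1638910184-agent-27s5c --all-containers | grep -i error | tail -n 20
```
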
content/en/tko/session-2/docs/getting_started.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 ---
 title: GDI - Real Time Enrichment Workshop
 linkTitle: Getting started
-weight: 2
+weight: 3
 ---
 
 Please note to begin the following lab, you must have completed the prework:
