The list of metrics collected can be found [here](/docs/integrations/containers-orchestration/kafka/#kafka-metrics).
## Collecting logs and metrics for Strimzi Kafka pods
The collection architecture is similar to that of Kafka and is described [here](/docs/integrations/containers-orchestration/strimzi-kafka/#collecting-logs-and-metrics-for-strimzi-kafka-pods).
This section provides instructions for configuring log and metric collection for the Sumo Logic App for Strimzi Kafka.
### Prerequisites for Kafka cluster deployment
Before configuring the collection, you will require the following items:
3. Download the [kafka-metrics-sumologic-telegraf.yaml](https://drive.google.com/file/d/1pvMqYiJu7_nEv2F2RsPKIn_WWs8BKcxQ/view?usp=sharing) file. This file contains the Kafka resource. If you already have an existing yaml file, you will have to merge the contents of both files.
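For orientation, the sketch below shows the general shape of a Strimzi `Kafka` custom resource. It is only a hedged, minimal example: the cluster name, replica counts, listeners, and storage are illustrative placeholders, and the downloaded file layers its metrics/Telegraf configuration on top of a resource like this.

```yaml
# Minimal sketch of a Strimzi Kafka resource (illustrative values only).
# The downloaded kafka-metrics-sumologic-telegraf.yaml adds metrics/Telegraf
# configuration on top of a resource of this shape.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster            # placeholder cluster name
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```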
### Deploying Sumo Logic Kubernetes collection
1. Create a new namespace to deploy resources. The command below creates a **sumologiccollection** namespace.
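   Creating the namespace is a standard `kubectl` operation, for example:

   ```sh
   # Create the namespace used for the Sumo Logic collection resources
   kubectl create namespace sumologiccollection
   ```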
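The Helm-based deployment command itself is not reproduced in this excerpt. As a hedged sketch only, installing the Sumo Logic Kubernetes Collection chart typically looks like the following; the release name and credential placeholders are illustrative, and the chart repository URL and value names follow the public chart documentation, so verify them against the current docs before running anything:

```sh
# Hedged sketch: repo URL and value names per the public
# sumologic-kubernetes-collection chart docs; placeholders are illustrative.
helm repo add sumologic https://sumologic.github.io/sumologic-kubernetes-collection
helm repo update

helm upgrade --install my-release sumologic/sumologic \
  --namespace sumologiccollection \
  --set sumologic.accessId=<SUMO_ACCESS_ID> \
  --set sumologic.accessKey=<SUMO_ACCESS_KEY> \
  --set sumologic.clusterName="<CLUSTER_NAME>"
```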
A collector will be created in your Sumo Logic org with the cluster name provided in the above command. You can verify it by referring to the [collection page](/docs/send-data/collection/).
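You can also confirm locally that the collection pods started in the namespace created earlier:

```sh
# List the Sumo Logic collection pods to check that they are running
kubectl get pods -n sumologiccollection
```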
### Configure metrics collection
Follow these steps to collect metrics from a Kubernetes environment:
For more information on configuring the Jolokia input plugin for Telegraf, see [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2).
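As a rough, hedged illustration of what a `jolokia2_agent` input looks like, the snippet below polls a Jolokia agent on the broker pod and maps one MBean pattern to metrics. The agent port, MBean pattern, and metric name are placeholders and may differ from the configuration shipped in kafka-metrics-sumologic-telegraf.yaml.

```toml
# Illustrative jolokia2_agent input (assumes a Jolokia agent on port 8778 of the pod).
[[inputs.jolokia2_agent]]
  urls = ["http://localhost:8778/jolokia"]

  # Example: collect controller metrics such as ActiveControllerCount.
  [[inputs.jolokia2_agent.metric]]
    name         = "controller"
    mbean        = "kafka.controller:name=*,type=*"
    field_prefix = "$1."
```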
### Configure logs collection
If your Kafka Helm chart/pod writes logs to standard output, the [Sumo Logic Kubernetes Collection](/docs/integrations/containers-orchestration/kubernetes/#collecting-metrics-and-logs-for-the-kubernetes-app) will automatically capture them from stdout and send them to Sumo Logic. If not, you will have to use the [tailing-sidecar](https://github.com/SumoLogic/tailing-sidecar/blob/main/README.md) approach.
1. **Add labels on your Kafka pods**
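   For Strimzi-managed pods, labels are typically set through the `template` section of the `Kafka` resource rather than directly on the pods. The snippet below is a hedged sketch only; the label keys and values are assumptions based on the conventions used in Sumo Logic's Kafka documentation and may differ from the ones the full guide specifies.

   ```yaml
   # Illustrative only: adding labels to Strimzi-managed Kafka broker pods
   # via the pod template. Label keys/values are assumptions based on
   # Sumo Logic's Kafka labeling conventions.
   spec:
     kafka:
       template:
         pod:
           metadata:
             labels:
               environment: "prod"                 # e.g., dev, qa, prod
               component: "messaging"
               messaging_system: "kafka"
               messaging_cluster: "kafka_prod_cluster"
   ```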
### Strimzi Kafka - Zookeeper

The **Strimzi Kafka - Zookeeper** dashboard provides an at-a-glance view of the state of your partitions, active controllers, leaders, throughput, and network across Kafka brokers and clusters.
### Strimzi Kafka - Failures and Delayed Operations
The **Strimzi Kafka - Failures and Delayed Operations** dashboard gives you insight into all failures and delayed operations associated with your Kafka clusters.
### Strimzi Kafka - Topic Overview

The **Strimzi Kafka - Topic Overview** dashboard helps you quickly identify under-replicated partitions and incoming bytes by Kafka topic, server, and cluster.
### Strimzi Kafka - Topic Details

The **Strimzi Kafka - Topic Details** dashboard gives you insight into throughput, partition sizes, and offsets across Kafka brokers, topics, and clusters.