Expose Zookeeper JMX port for Datadog agent. #4287
-
Hi, so far I have been successful in enabling the JMX port for the Kafka brokers and correctly annotating the pods so that Datadog can identify them and pull metrics. ✅

I want to do the same for Zookeeper metrics and have added similar marker annotations to the Zookeeper pods. However, the Datadog check reports an error, possibly because the port is not open/reachable:

Traceback (most recent call last):
  File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/zk/zk.py", line 248, in _send_command
    with closing(socket.create_connection((self.host, self.port))) as sock:
  File "/opt/datadog-agent/embedded/lib/python3.8/socket.py", line 808, in create_connection
    raise err
  File "/opt/datadog-agent/embedded/lib/python3.8/socket.py", line 796, in create_connection
    sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out

Unlike KafkaClusterSpec, there doesn't seem to be a JmxOptions for Zookeeper. Please advise.
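For context, a minimal sketch of the broker-side setup the question refers to. Strimzi opens an unauthenticated JMX port (9999) on the broker container (named kafka) when jmxOptions is set; the Datadog autodiscovery annotation values below are illustrative assumptions, not the exact ones used on this cluster:

spec:
  kafka:
    # Opens the JMX port (9999) on each broker pod
    jmxOptions: {}
    template:
      pod:
        metadata:
          annotations:
            # Datadog autodiscovery for the JMX-based "kafka" check (values are illustrative)
            ad.datadoghq.com/kafka.check_names: '["kafka"]'
            ad.datadoghq.com/kafka.init_configs: '[{"is_jmx": true, "collect_default_metrics": true}]'
            ad.datadoghq.com/kafka.instances: '[{"host": "%%host%%", "port": "9999"}]'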
-
JMX access is currently supported only for Kafka brokers (and in 0.22.0 it will be added to Kafka Connect and Mirror Maker 2 as well). Historically, this has always been contributed by users - as maintainers we prefer to use the Prometheus endpoint for monitoring. So I think JMX access to Zookeeper is waiting for someone to contribute it. If you want to look into it, these are the PRs which added the Kafka and Kafka Connect JMX access:
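For the Prometheus route mentioned above: newer Strimzi versions can expose Zookeeper metrics through the JMX Prometheus Exporter by setting metricsConfig on the Zookeeper spec, which Datadog's OpenMetrics check can then scrape. A minimal sketch, assuming a ConfigMap named kafka-metrics with a zookeeper-metrics-config.yml key holding the exporter rules (both names are assumptions):

spec:
  zookeeper:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          # ConfigMap with the JMX Prometheus Exporter rules; name and key are assumptions
          name: kafka-metrics
          key: zookeeper-metrics-config.yml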
-
Update: Datadog supports Prometheus & OpenMetrics metrics collection - https://docs.datadoghq.com/agent/kubernetes/prometheus/

I added the following annotations to configure it:

...
kafkaExporter:
  # Annotations for Datadog OpenMetrics auto-discovery
  template:
    pod:
      metadata:
        annotations:
          ad.datadoghq.com/my-cluster-kafka-exporter.check_names: '["openmetrics"]'
          ad.datadoghq.com/my-cluster-kafka-exporter.init_configs: '[{}]'
          ad.datadoghq.com/my-cluster-kafka-exporter.instances: |
            [
              {
                "prometheus_url": "http://%%host%%:%%port%%/metrics",
                "namespace": "my_kafka",
                "metrics": ["go_gc_duration_seconds","go_goroutines","go_memstats_alloc_bytes","go_memstats_alloc_bytes_total","go_memstats_buck_hash_sys_bytes","go_memstats_frees_total","go_memstats_gc_sys_bytes","go_memstats_heap_alloc_bytes","go_memstats_heap_idle_bytes","go_memstats_heap_inuse_bytes","go_memstats_heap_objects","go_memstats_heap_released_bytes_total","go_memstats_heap_sys_bytes","go_memstats_last_gc_time_seconds","go_memstats_lookups_total","go_memstats_mallocs_total","go_memstats_mcache_inuse_bytes","go_memstats_mcache_sys_bytes","go_memstats_mspan_inuse_bytes","go_memstats_mspan_sys_bytes","go_memstats_next_gc_bytes","go_memstats_other_sys_bytes","go_memstats_stack_inuse_bytes","go_memstats_stack_sys_bytes","go_memstats_sys_bytes","kafka_brokers","kafka_consumergroup_current_offset","kafka_consumergroup_lag","kafka_exporter_build_info","kafka_topic_partition_current_offset","kafka_topic_partition_in_sync_replica","kafka_topic_partition_leader","kafka_topic_partition_leader_is_preferred","kafka_topic_partition_oldest_offset","kafka_topic_partition_replicas","kafka_topic_partition_under_replicated_partition","kafka_topic_partitions","process_cpu_seconds_total","process_max_fds","process_open_fds","process_resident_memory_bytes","process_start_time_seconds","process_virtual_memory_bytes"]
              }
            ]

Soon, the metrics appeared in Datadog 🎉
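To tie this back to the original Zookeeper question: once Zookeeper exposes Prometheus metrics (see the metricsConfig sketch earlier in the thread), the same OpenMetrics annotations should work on the Zookeeper pods. A rough sketch, assuming Strimzi's usual metrics port 9404 and the zookeeper container name; the check namespace and the catch-all metrics list are assumptions:

zookeeper:
  template:
    pod:
      metadata:
        annotations:
          # The container in Strimzi Zookeeper pods is named "zookeeper"
          ad.datadoghq.com/zookeeper.check_names: '["openmetrics"]'
          ad.datadoghq.com/zookeeper.init_configs: '[{}]'
          # "*" collects everything the exporter emits; narrow the list in practice
          ad.datadoghq.com/zookeeper.instances: |
            [
              {
                "prometheus_url": "http://%%host%%:9404/metrics",
                "namespace": "my_zookeeper",
                "metrics": ["*"]
              }
            ]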
-
This is how we did it as well, following the Datadog docs about OpenMetrics, with a snippet for the kafka-exporter along the lines of the one in the previous reply. We also allow our Datadog agent to collect these metrics from the strimzi namespace; a sketch of that follows.
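One common way to allow collection from a specific namespace is the agent's container-include filter. A rough sketch only; the namespace name and the decision to use DD_CONTAINER_INCLUDE (rather than another filtering mechanism) are assumptions about this setup:

# Environment on the Datadog agent DaemonSet (or the equivalent Helm values); illustrative only
env:
  - name: DD_CONTAINER_INCLUDE
    value: "kube_namespace:strimzi"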