@@ -23,11 +23,21 @@ within Kubelet.
 
 If you are using Kubernetes, consider the Kubelet stats receiver
 because many Kubernetes nodes do not expose cAdvisor on a network port,
-even though they are running it within Kubelet. Because the Splunk Distribution
-of OpenTelemetry Collector has limitations in managed environments such as Amazon EKS,
-you can use a :ref:`Prometheus receiver <prometheus_receiver>` to collect specific
-cgroup metrics exposed by cAdvisor, such as ``container_cpu_cfs_*``.
-The kubeletstats receiver also does not expose these metrics by default.
+even though they are running it within Kubelet.
+
+
+In some managed Kubernetes environments, such as Amazon EKS, you cannot
+access cAdvisor metrics directly because of design choices the cluster
+provider makes to enhance security and control. Instead, these metrics
+are exposed through the Kubernetes API server proxy, and a specific
+scrape configuration is required to collect them. For example, in
+Amazon EKS the kubeletstats receiver cannot collect cAdvisor metrics directly.
+
+To address this limitation, use the
+:ref:`Prometheus receiver <prometheus_receiver>` to scrape cAdvisor
+metrics through the API server proxy. This constraint applies to managed
+environments and is not a restriction of the Splunk Distribution of
+OpenTelemetry Collector.
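+
+The fragment below is a condensed sketch of that scrape configuration,
+reduced to the receiver definition only and shown as plain Collector
+configuration: node-role service discovery plus relabeling rules that
+send each request to the API server and rewrite the metrics path to the
+node's cAdvisor proxy endpoint. The complete example later in this
+topic shows the same settings in context, including the pipeline wiring
+in the ``agent`` section.
+
+.. code:: yaml
+
+   receivers:
+     prometheus/cadvisor:
+       config:
+         scrape_configs:
+           - job_name: cadvisor
+             scheme: https
+             tls_config:
+               # Trust the cluster CA that is mounted into the pod.
+               ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+             authorization:
+               # Authenticate with the pod's service account token.
+               credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+             kubernetes_sd_configs:
+               # Discover one scrape target per Kubernetes node.
+               - role: node
+             relabel_configs:
+               # Send every request to the Kubernetes API server.
+               - replacement: 'kubernetes.default.svc.cluster.local:443'
+                 target_label: __address__
+               # Rewrite the path to the node's cAdvisor proxy endpoint.
+               - regex: (.+)
+                 replacement: '/api/v1/nodes/$${1}/proxy/metrics/cadvisor'
+                 source_labels:
+                   - __meta_kubernetes_node_name
+                 target_label: __metrics_path__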
 
 cAdvisor with Docker
 ---------------------
@@ -113,47 +123,52 @@ section of your configuration file:
 Prometheus receiver
 ###################
 
+The following example shows how to configure a Prometheus receiver to
+securely scrape cAdvisor metrics from Kubernetes nodes through the API
+server proxy, using TLS and service account authorization credentials.
+The receiver and its pipeline are defined in the ``agent`` section of
+the Collector for Kubernetes configuration.
+
 .. code:: yaml
 
-   receivers:
-     prometheus/cadvisor:
-       config:
-         scrape_configs:
-           - job_name: 'kubernetes-nodes-cadvisor'
-             scheme: https
-             bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
-             kubernetes_sd_configs:
-               - role: node
-             relabel_configs:
-               - target_label: __address__
-                 replacement: kubernetes.default.svc:443
-               - source_labels: [__meta_kubernetes_node_name]
-                 regex: (.+)
-                 target_label: __metrics_path__
-                 replacement: /api/v1/nodes/$$1/proxy/metrics/cadvisor
-             metric_relabel_configs:
-               - source_labels: [__name__]
-                 regex: 'container_cpu_cfs.*'
-                 action: keep
-               - source_labels: [pod]
-                 target_label: k8s.pod.name
-               - source_labels: [namespace]
-                 target_label: k8s.namespace.name
-               - source_labels: [container]
-                 target_label: k8s.container.name
-               - action: labeldrop
-                 regex: 'pod|namespace|name|id|container'
-   service:
-     pipelines:
-       metrics/scrapers:
-         exporters:
-           - signalfx
-         processors:
-           - memory_limiter
-           - batch
-           - resource/add_environment
-         receivers:
-           - prometheus/cadvisor
+   agent:
+     config:
+       receivers:
+         prometheus/cadvisor:
+           config:
+             scrape_configs:
+               - job_name: cadvisor
+                 tls_config:
+                   ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+                 authorization:
+                   credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+                 scheme: https
+                 kubernetes_sd_configs:
+                   - role: node
+                 relabel_configs:
+                   - replacement: 'kubernetes.default.svc.cluster.local:443'
+                     target_label: __address__
+                   - regex: (.+)
+                     replacement: '/api/v1/nodes/$${1}/proxy/metrics/cadvisor'
+                     source_labels:
+                       - __meta_kubernetes_node_name
+                     target_label: __metrics_path__
+       service:
+         pipelines:
+           metrics:
+             exporters:
+               - signalfx
+             processors:
+               - memory_limiter
+               - batch
+               - resourcedetection
+               - resource
+             receivers:
+               - hostmetrics
+               - kubeletstats
+               - otlp
+               - prometheus/cadvisor
+               - receiver_creator
+               - signalfx
+
 
 Configuration settings
 ^^^^^^^^^^^^^^^^^^^^^^