Is there anything in the spec that describes what to do for these scenarios? If your workload runs directly in a VM, then adding a custom resource detector that looks up the host name could work. I say this because, depending on your runtime (e.g. containers in k8s), you will likely not be able to easily resolve the host name of the node you are running on. It will require special privileges, or a means of overriding the host name from inside the container (e.g. using the downward API to inject the node name as an environment variable). The collector also has a k8sattributes processor that it can use to enrich metrics with pod details looked up via the kube API, matching on the remote IP (pod IP): https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/k8sattributesprocessor/README.md
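For illustration, a minimal sketch of the downward API approach described above (`K8S_NODE_NAME` is an illustrative variable name, not a standardized one):

```yaml
# Pod/deployment spec fragment: expose the node's name to the
# container via the downward API, since the container cannot
# resolve it on its own without extra privileges.
env:
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```

And on the collector side, a minimal k8sattributes configuration might look like the following sketch (the receiver and exporter choices here are placeholders; see the linked README for the full option set):

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  k8sattributes:
    # Match incoming telemetry to a pod by the connection's source IP,
    # then attach pod and node metadata as resource attributes.
    pod_association:
      - sources:
          - from: connection
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name
        - k8s.node.name

exporters:
  debug:

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [debug]
```

Note that the collector needs RBAC permissions to query the kube API for this enrichment to work; the README covers the required role.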
I’ve started using OpenTelemetry Metrics with the OTel Collector and VictoriaMetrics.
I noticed that the collected data includes three metrics from three pods,
which appear as a “stair-step” pattern on the graph
(what you see is three data points repeating continuously at a one-minute interval, because the three pods report different values).
By default, the resource attributes don’t include information that distinguishes which pod each metric originates from.
I’m planning to add either the hostname or the Kubernetes pod name as a resource attribute via an environment variable (see the sketch below).
Would it make sense to expose such identifiers by default, so users can view metrics broken down by hostname?
This approach works well for container-based setups, but it may not help in scenarios where multiple processes run on the same node.
In those cases, the existing process.pid attribute might help in combination with host.name.
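For reference, a sketch of that plan using the standard OTEL_RESOURCE_ATTRIBUTES environment variable, which OpenTelemetry SDKs read at startup; `POD_NAME` is an illustrative name, and the pod name is pulled in via the downward API:

```yaml
# Container spec fragment: expose the pod name, then pass it to the
# SDK through OTEL_RESOURCE_ATTRIBUTES. $(POD_NAME) uses Kubernetes'
# dependent environment variable expansion.
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "k8s.pod.name=$(POD_NAME)"
```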