*[auto-instrumentation](https://opentelemetry.io/docs/concepts/instrumentation/automatic/) of the workloads using OpenTelemetry instrumentation libraries
## Documentation
The `config` node holds the `YAML` that should be passed down as-is to the underlying OpenTelemetry Collector instances. Refer to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) documentation for a reference of the possible entries.
> 🚨 **NOTE:** At this point, the Operator does *not* validate the contents of the configuration file: if the configuration is invalid, the instance will still be created but the underlying OpenTelemetry Collector might crash.

The Operator does examine the configuration file to discover configured receivers and their ports. If it finds receivers with ports, it creates a pair of Kubernetes services, one of them headless, exposing those ports within the cluster. The headless service contains a `service.beta.openshift.io/serving-cert-secret-name` annotation that will cause OpenShift to create a secret containing a certificate and key. This secret can be mounted as a volume and the certificate and key used in those receivers' TLS configurations.
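For illustration, the generated secret could be mounted and wired into a receiver's TLS settings roughly as follows. This is a sketch, not verbatim Operator output: the secret, volume, and resource names are hypothetical, and the annotation value determines the actual secret name.

```yaml
# Hypothetical sketch: secret, volume, and resource names are illustrative
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: my-collector
spec:
  volumes:
  - name: serving-cert
    secret:
      secretName: my-collector-tls   # value of the serving-cert-secret-name annotation
  volumeMounts:
  - name: serving-cert
    mountPath: /certs
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            tls:
              cert_file: /certs/tls.crt
              key_file: /certs/tls.key
```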
### Deployment modes
The `CustomResource` for the `OpenTelemetryCollector` exposes a property named `.Spec.Mode`, which can be used to specify whether the Collector should run as a [`DaemonSet`](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), [`Sidecar`](https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods), [`StatefulSet`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) or [`Deployment`](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) (default).
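For example, a minimal CR running the Collector as a `DaemonSet` might look like this (the resource name is illustrative):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: my-node-collector
spec:
  mode: daemonset
```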
In the above case, the `myapp` and `myapp2` containers will be instrumented, while `myapp3` will not be.

> 🚨 **NOTE**: Go auto-instrumentation **does not** support multicontainer pods. When injecting Go auto-instrumentation, the first container should be the only container you want instrumented.
#### Use customized or vendor instrumentation
By default, the operator uses upstream auto-instrumentation libraries. Custom auto-instrumentation can be configured by
overriding the `image` fields in a CR.
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  java:
    image: your-customized-auto-instrumentation-image:java
  nodejs:
    image: your-customized-auto-instrumentation-image:nodejs
  python:
    image: your-customized-auto-instrumentation-image:python
```
#### Inject OpenTelemetry SDK environment variables only
You can configure the OpenTelemetry SDK for applications which can't currently be autoinstrumented by using `inject-sdk` in place of `inject-python` or `inject-java`, for example. This will inject environment variables like `OTEL_RESOURCE_ATTRIBUTES`, `OTEL_TRACES_SAMPLER`, and `OTEL_EXPORTER_OTLP_ENDPOINT`, that you can configure in the `Instrumentation`, but will not actually provide the SDK.
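For example, the SDK environment variables can be injected by annotating a workload's pod template (the pod and container names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    instrumentation.opentelemetry.io/inject-sdk: "true"
spec:
  containers:
  - name: myapp
    image: myapp:latest
```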
### Target Allocator
The OpenTelemetry Operator comes with an optional component, the [Target Allocator](/cmd/otel-allocator/README.md) (TA). When creating an OpenTelemetryCollector Custom Resource (CR) and setting the TA as enabled, the Operator will create a new deployment and service to serve specific `http_sd_config` directives for each Collector pod as part of that CR. It will also change the Prometheus receiver configuration in the CR, so that it uses the [http_sd_config](https://prometheus.io/docs/prometheus/latest/http_sd/) from the TA. The following example shows how to get started with the Target Allocator:
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta
spec:
  mode: statefulset
  targetAllocator:
    enabled: true
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs:
          - job_name: 'otel-collector'
            scrape_interval: 10s
            static_configs:
            - targets: [ '0.0.0.0:8888' ]

    exporters:
      logging:

    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [logging]
```
Note how the Operator removes any existing service discovery configurations (e.g., `static_configs`, `file_sd_configs`, etc.) from the `scrape_configs` section and adds an `http_sd_configs` configuration pointing to a Target Allocator instance it provisioned.
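As an illustrative sketch (the service name and URL shape are assumptions, not verbatim Operator output), the converted scrape configuration is of roughly this form:

```yaml
# Illustrative sketch of a converted scrape config
scrape_configs:
- job_name: otel-collector
  http_sd_configs:
  - url: http://collector-with-ta-targetallocator:80/jobs/otel-collector/targets?collector_id=$POD_NAME
```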
The OpenTelemetry Operator will also convert the Target Allocator's Prometheus configuration after the reconciliation into the following:
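The converted output itself is not included in this excerpt. Conceptually, it pairs a collector selector with the promoted scrape configuration, along these lines (an illustrative sketch only; field values are assumptions, and the Operator generates the real contents):

```yaml
# Illustrative sketch only; the Operator generates the actual contents
allocation_strategy: consistent-hashing
collector_selector:
  matchlabels:
    app.kubernetes.io/instance: default.collector-with-ta
config:
  scrape_configs:
  - job_name: otel-collector
    static_configs:
    - targets: ['0.0.0.0:8888']
```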
# Target Allocator
Target Allocator is an optional component of the OpenTelemetry Collector [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CR). The release version matches the
operator's most recent release as well.

In a nutshell, the TA is a mechanism for decoupling the service discovery and metric collection functions of Prometheus such that they can be scaled independently. The Collector manages Prometheus metrics without needing to install Prometheus. The TA manages the configuration of the Collector's [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md).

The TA serves two functions:
* Even distribution of Prometheus targets among a pool of Collectors
* Discovery of Prometheus Custom Resources
## Even Distribution of Prometheus Targets
The Target Allocator's first job is to discover targets to scrape and collectors to allocate targets to. Then it can distribute the targets it discovers among the collectors. This means that the OTel Collectors collect the metrics instead of a Prometheus [scraper](https://uzxmx.github.io/prometheus-scrape-internals.html). Metrics are ingested by the OTel Collectors by way of the [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md).
## Discovery of Prometheus Custom Resources
The Target Allocator also provides for the discovery of [Prometheus Operator CRs](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md), namely the [ServiceMonitor and PodMonitor](https://github.com/open-telemetry/opentelemetry-operator/tree/main/cmd/otel-allocator#target-allocator). The ServiceMonitor and the PodMonitor don’t do any scraping themselves; their purpose is to inform the Target Allocator (or Prometheus) to add a new job to their scrape configuration. These metrics are then ingested by way of the Prometheus Receiver on the OpenTelemetry Collector.
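For reference, a minimal ServiceMonitor is sketched below (the name, labels, and port are hypothetical); the Target Allocator turns such a resource into a scrape job:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-monitor
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: metrics
    interval: 30s
```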
Even though Prometheus is not required to be installed in your Kubernetes cluster to use the Target Allocator for Prometheus CR discovery, the TA does require that the ServiceMonitor and PodMonitor be installed. These CRs are bundled with Prometheus Operator; however, they can be installed standalone as well.

The easiest way to do this is by going to the [Prometheus Operator’s Releases page](https://github.com/prometheus-operator/prometheus-operator/releases), grabbing a copy of the latest `bundle.yaml` file (for example, [this one](https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.66.0/bundle.yaml)), and stripping out all of the YAML except the ServiceMonitor and PodMonitor YAML definitions.
# Usage
The `spec.targetAllocator:` controls the TargetAllocator general properties. Full API spec can be found here: [api.md#opentelemetrycollectorspectargetallocator](../../docs/api.md#opentelemetrycollectorspectargetallocator)
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta
spec:
  mode: statefulset
  targetAllocator:
    enabled: true
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs:
          - job_name: 'otel-collector'
            scrape_interval: 10s
            static_configs:
            - targets: [ '0.0.0.0:8888' ]

    exporters:
      logging:

    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [logging]
```
In essence, Prometheus Receiver configs are overridden with an `http_sd_config` directive that points to the
Allocator; these are then load-balanced/sharded to the Collectors. The [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md) configs that are overridden
are what will be distributed with the same name.
## PrometheusCR specifics
TargetAllocator discovery of PrometheusCRs can be turned on by setting
`.spec.targetAllocator.prometheusCR.enabled` to `true`. The discovered resources are presented as scrape configs
and jobs on the `/scrape_configs` and `/jobs` endpoints, respectively.
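For example (only the relevant fields are shown; the resource name is illustrative):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta
spec:
  mode: statefulset
  targetAllocator:
    enabled: true
    prometheusCR:
      enabled: true
```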
The CRs can be filtered by labels as documented here: [api.md#opentelemetrycollectorspectargetallocatorprometheuscr](../../docs/api.md#opentelemetrycollectorspectargetallocatorprometheuscr)
The Prometheus Receiver in the deployed Collector also has to know where the Allocator service exists. This is done by an
OpenTelemetry Collector Operator-specific config.
```yaml
config: |
  receivers:
    prometheus:
      config:
        scrape_configs:
        - job_name: 'otel-collector'
      target_allocator:
        endpoint: http://my-targetallocator-service
        interval: 30s
        collector_id: "${POD_NAME}"
```