Commit 55d4ed9

Update OTel Operator readme and Target Allocator readme (#1951)

1 parent cdaa2c4

File tree: 3 files changed (+60, -22 lines)

.chloggen/readme-updates.yaml (16 additions, 0 deletions)

````diff
@@ -0,0 +1,16 @@
+# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
+change_type: enhancement
+
+# The name of the component, or a single word describing the area of concern, (e.g. operator, target allocator, github action)
+component: Documentation
+
+# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
+note: Update OTel Operator and Target Allocator readmes.
+
+# One or more tracking issues related to the change
+issues: [1952]
+
+# (Optional) One or more lines of additional information to render under the primary note.
+# These lines will be padded with 2 spaces and then inserted directly into the document.
+# Use pipe (|) for multiline entries.
+subtext:
````

README.md (14 additions, 8 deletions)
````diff
@@ -6,7 +6,7 @@ The OpenTelemetry Operator is an implementation of a [Kubernetes Operator](https
 
 The operator manages:
 * [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector)
-* auto-instrumentation of the workloads using OpenTelemetry instrumentation libraries
+* [auto-instrumentation](https://opentelemetry.io/docs/concepts/instrumentation/automatic/) of the workloads using OpenTelemetry instrumentation libraries
 
 ## Documentation
````

````diff
@@ -66,7 +66,7 @@ This will create an OpenTelemetry Collector instance named `simplest`, exposing
 
 The `config` node holds the `YAML` that should be passed down as-is to the underlying OpenTelemetry Collector instances. Refer to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) documentation for a reference of the possible entries.
 
-At this point, the Operator does *not* validate the contents of the configuration file: if the configuration is invalid, the instance will still be created but the underlying OpenTelemetry Collector might crash.
+> 🚨 **NOTE:** At this point, the Operator does *not* validate the contents of the configuration file: if the configuration is invalid, the instance will still be created but the underlying OpenTelemetry Collector might crash.
 
 The Operator does examine the configuration file to discover configured receivers and their ports. If it finds receivers with ports, it creates a pair of kubernetes services, one headless, exposing those ports within the cluster. The headless service contains a `service.beta.openshift.io/serving-cert-secret-name` annotation that will cause OpenShift to create a secret containing a certificate and key. This secret can be mounted as a volume and the certificate and key used in those receivers' TLS configurations.
````

````diff
@@ -83,7 +83,13 @@ The default and only other acceptable value for `.Spec.UpgradeStrategy` is `auto
 
 ### Deployment modes
 
-The `CustomResource` for the `OpenTelemetryCollector` exposes a property named `.Spec.Mode`, which can be used to specify whether the collector should run as a `DaemonSet`, `Sidecar`, `StatefulSet` or `Deployment` (default). Look at [this sample](https://github.com/open-telemetry/opentelemetry-operator/blob/main/tests/e2e/daemonset-features/01-install.yaml) for a reference of `DaemonSet`.
+The `CustomResource` for the `OpenTelemetryCollector` exposes a property named `.Spec.Mode`, which can be used to specify whether the Collector should run as a [`DaemonSet`](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), [`Sidecar`](https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods), [`StatefulSet`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) or [`Deployment`](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) (default).
+
+See below for examples of each deployment mode:
+- [`Deployment`](https://github.com/open-telemetry/opentelemetry-operator/blob/main/tests/e2e/ingress/00-install.yaml)
+- [`DaemonSet`](https://github.com/open-telemetry/opentelemetry-operator/blob/main/tests/e2e/daemonset-features/01-install.yaml)
+- [`StatefulSet`](https://github.com/open-telemetry/opentelemetry-operator/blob/main/tests/e2e/smoke-statefulset/00-install.yaml)
+- [`Sidecar`](https://github.com/open-telemetry/opentelemetry-operator/blob/main/tests/e2e/instrumentation-python/00-install-collector.yaml)
 
 #### Sidecar injection
````
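To make the deployment modes concrete, a minimal `OpenTelemetryCollector` CR selecting a mode might look like the following sketch. This is not part of the commit: the CR name and the config body are illustrative assumptions, and `spec.mode` takes the lowercase form of the chosen workload kind.

```yaml
# Hedged sketch, not from this commit: run the Collector as a DaemonSet.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: example-daemonset   # hypothetical name
spec:
  mode: daemonset           # one of: deployment (default), daemonset, statefulset, sidecar
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
```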
````diff
@@ -329,12 +335,12 @@ spec:
 
 In the above case, `myapp` and `myapp2` containers will be instrumented, `myapp3` will not.
 
-**NOTE**: Go auto-instrumentation **does not** support multicontainer pods. When injecting Go auto-instrumentation the first pod should be the only pod you want instrumented.
+> 🚨 **NOTE**: Go auto-instrumentation **does not** support multicontainer pods. When injecting Go auto-instrumentation the first pod should be the only pod you want instrumented.
 
 #### Use customized or vendor instrumentation
 
 By default, the operator uses upstream auto-instrumentation libraries. Custom auto-instrumentation can be configured by
-overriding the image fields in a CR.
+overriding the `image` fields in a CR.
 
 ```yaml
 apiVersion: opentelemetry.io/v1alpha1
````
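The CR that the hunk above truncates is an `Instrumentation` resource; as a hedged sketch of what overriding the `image` fields can look like (the metadata name and image references are hypothetical, not taken from this commit):

```yaml
# Hedged sketch: Instrumentation CR with custom auto-instrumentation images.
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation            # hypothetical name
spec:
  java:
    image: your-registry/custom-javaagent:latest   # hypothetical image
  nodejs:
    image: your-registry/custom-nodejs:latest      # hypothetical image
```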
````diff
@@ -381,7 +387,7 @@ List of all available attributes can be found at [otel-webserver-module](https:/
 
 #### Inject OpenTelemetry SDK environment variables only
 
-You can configure the OpenTelemetry SDK for applications which can't currently be autoinstrumented by using `inject-sdk` in place of (e.g.) `inject-python` or `inject-java`. This will inject environment variables like `OTEL_RESOURCE_ATTRIBUTES`, `OTEL_TRACES_SAMPLER`, and `OTEL_EXPORTER_OTLP_ENDPOINT`, that you can configure in the `Instrumentation`, but will not actually provide the SDK.
+You can configure the OpenTelemetry SDK for applications which can't currently be autoinstrumented by using `inject-sdk` in place of `inject-python` or `inject-java`, for example. This will inject environment variables like `OTEL_RESOURCE_ATTRIBUTES`, `OTEL_TRACES_SAMPLER`, and `OTEL_EXPORTER_OTLP_ENDPOINT`, that you can configure in the `Instrumentation`, but will not actually provide the SDK.
 
 ```bash
 instrumentation.opentelemetry.io/inject-sdk: "true"
````
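The annotation is applied to a workload's pod template; a hedged sketch (the Deployment name and container image are illustrative, only the annotation key comes from the commit):

```yaml
# Hedged sketch: SDK env-var injection via the inject-sdk annotation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                 # hypothetical name
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        instrumentation.opentelemetry.io/inject-sdk: "true"
    spec:
      containers:
        - name: myapp
          image: myapp:latest   # hypothetical image
```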
````diff
@@ -409,7 +415,7 @@ Language not specified in the table are always supported and cannot be disabled.
 
 ### Target Allocator
 
-The OpenTelemetry Operator comes with an optional component, the Target Allocator (TA). When creating an OpenTelemetryCollector Custom Resource (CR) and setting the TA as enabled, the Operator will create a new deployment and service to serve specific `http_sd_config` directives for each Collector pod as part of that CR. It will also change the Prometheus receiver configuration in the CR, so that it uses the [http_sd_config](https://prometheus.io/docs/prometheus/latest/http_sd/) from the TA. The following example shows how to get started with the Target Allocator:
+The OpenTelemetry Operator comes with an optional component, the [Target Allocator](/cmd/otel-allocator/README.md) (TA). When creating an OpenTelemetryCollector Custom Resource (CR) and setting the TA as enabled, the Operator will create a new deployment and service to serve specific `http_sd_config` directives for each Collector pod as part of that CR. It will also change the Prometheus receiver configuration in the CR, so that it uses the [http_sd_config](https://prometheus.io/docs/prometheus/latest/http_sd/) from the TA. The following example shows how to get started with the Target Allocator:
 
 ```yaml
 apiVersion: opentelemetry.io/v1alpha1
````
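For orientation, a minimal sketch of a Collector CR with the TA enabled (the CR name and scrape config here are illustrative assumptions, not the commit's actual example, which the hunk truncates):

```yaml
# Hedged sketch: enabling the Target Allocator on a Collector CR.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta      # hypothetical name
spec:
  mode: statefulset
  targetAllocator:
    enabled: true
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'otel-collector'   # illustrative job
              scrape_interval: 10s
              static_configs:
                - targets: ['0.0.0.0:8888']
    exporters:
      logging:
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [logging]
```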
````diff
@@ -482,7 +488,7 @@ Behind the scenes, the OpenTelemetry Operator will convert the Collector’s con
 
 Note how the Operator removes any existing service discovery configurations (e.g., `static_configs`, `file_sd_configs`, etc.) from the `scrape_configs` section and adds an `http_sd_configs` configuration pointing to a Target Allocator instance it provisioned.
 
-The OpenTelemetry Operator will also convert the Target Allocator's promethueus configuration after the reconciliation into the following:
+The OpenTelemetry Operator will also convert the Target Allocator's Prometheus configuration after the reconciliation into the following:
 
 ```yaml
 config:
````

cmd/otel-allocator/README.md (30 additions, 14 deletions)
````diff
@@ -1,14 +1,25 @@
 # Target Allocator
 
-The TargetAllocator is an optional separately deployed component of an OpenTelemetry Collector setup, which is used to
-distribute targets of the PrometheusReceiver on all deployed Collector instances. The release version matches the
+Target Allocator is an optional component of the OpenTelemetry Collector [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CR). The release version matches the
 operator's most recent release as well.
 
-In essence, Prometheus Receiver configs are overridden with a http_sd_configs directive that points to the
-Allocator, these are then loadbalanced/sharded to the collectors. The Prometheus Receiver configs that are overridden
-are what will be distributed with the same name. In addition to picking up receiver configs, the TargetAllocator
-can discover targets via Prometheus CRs (currently ServiceMonitor, PodMonitor) which it presents as scrape configs
-and jobs on the `/scrape_configs` and `/jobs` endpoints respectively.
+In a nutshell, the TA is a mechanism for decoupling the service discovery and metric collection functions of Prometheus such that they can be scaled independently. The Collector manages Prometheus metrics without needing to install Prometheus. The TA manages the configuration of the Collector's [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md).
+
+The TA serves two functions:
+* Even distribution of Prometheus targets among a pool of Collectors
+* Discovery of Prometheus Custom Resources
+
+## Even Distribution of Prometheus Targets
+
+The Target Allocator's first job is to discover targets to scrape and collectors to allocate targets to. Then it can distribute the targets it discovers among the collectors. This means that the OTel Collectors collect the metrics instead of a Prometheus [scraper](https://uzxmx.github.io/prometheus-scrape-internals.html). Metrics are ingested by the OTel Collectors by way of the [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md).
+
+## Discovery of Prometheus Custom Resources
+
+The Target Allocator also provides for the discovery of [Prometheus Operator CRs](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md), namely the [ServiceMonitor and PodMonitor](https://github.com/open-telemetry/opentelemetry-operator/tree/main/cmd/otel-allocator#target-allocator). The ServiceMonitor and the PodMonitor don’t do any scraping themselves; their purpose is to inform the Target Allocator (or Prometheus) to add a new job to their scrape configuration. These metrics are then ingested by way of the Prometheus Receiver on the OpenTelemetry Collector.
+
+Even though Prometheus is not required to be installed in your Kubernetes cluster to use the Target Allocator for Prometheus CR discovery, the TA does require that the ServiceMonitor and PodMonitor be installed. These CRs are bundled with Prometheus Operator; however, they can be installed standalone as well.
+
+The easiest way to do this is by going to the [Prometheus Operator’s Releases page](https://github.com/prometheus-operator/prometheus-operator/releases), grabbing a copy of the latest `bundle.yaml` file (for example, [this one](https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.66.0/bundle.yaml)), and stripping out all of the YAML except the ServiceMonitor and PodMonitor YAML definitions.
 
 # Usage
 The `spec.targetAllocator:` controls the TargetAllocator general properties. Full API spec can be found here: [api.md#opentelemetrycollectorspectargetallocator](../../docs/api.md#opentelemetrycollectorspectargetallocator)
````
````diff
@@ -44,14 +55,21 @@ spec:
           exporters: [logging]
 ```
 
+In essence, Prometheus Receiver configs are overridden with a `http_sd_config` directive that points to the
+Allocator, these are then loadbalanced/sharded to the Collectors. The [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md) configs that are overridden
+are what will be distributed with the same name.
+
 ## PrometheusCR specifics
+
 TargetAllocator discovery of PrometheusCRs can be turned on by setting
-`.spec.targetAllocator.prometheusCR.enabled` to `true`
+`.spec.targetAllocator.prometheusCR.enabled` to `true`, which it presents as scrape configs
+and jobs on the `/scrape_configs` and `/jobs` endpoints respectively.
 
 The CRs can be filtered by labels as documented here: [api.md#opentelemetrycollectorspectargetallocatorprometheuscr](../../docs/api.md#opentelemetrycollectorspectargetallocatorprometheuscr)
 
-The prometheus receiver in the deployed collector also has to know where the Allocator service exists. This is done by a
-OpenTelemetry Collector operator specific config.
+The Prometheus Receiver in the deployed Collector also has to know where the Allocator service exists. This is done by a
+OpenTelemetry Collector Operator-specific config.
+
 ```yaml
 config: |
   receivers:
````
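To make the `http_sd_config` override concrete, the following is a hedged sketch of what a rewritten scrape config might look like once the Operator points it at the Allocator. The service name and URL shape are assumptions for illustration, not taken from this commit:

```yaml
# Hedged sketch: a Prometheus Receiver scrape config after the Operator replaces
# static service discovery with an http_sd_configs entry for the TA service.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector          # the original job name is kept
          http_sd_configs:
            - url: http://<collector-name>-targetallocator:80/jobs/otel-collector/targets?collector_id=$POD_NAME   # assumed TA service URL
```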
````diff
@@ -64,15 +82,13 @@ OpenTelemetry Collector operator specific config.
         interval: 30s
         collector_id: "${POD_NAME}"
 ```
-Upstream documentation here: [Prometheusreceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver#opentelemetry-operator)
+
+Upstream documentation here: [PrometheusReceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver#opentelemetry-operator)
 
 The TargetAllocator service is named based on the OpenTelemetryCollector CR name. `collector_id` should be unique per
 collector instance, such as the pod name. The `POD_NAME` environment variable is convenient since this is supplied
 to collector instance pods by default.
 
-The Prometheus CRDs also have to exist for the Allocator to pick them up. The best place to get them is from
-prometheus-operator: [Releases](https://github.com/prometheus-operator/prometheus-operator/releases). Only the CRDs for
-CRs that the Allocator watches for need to be deployed. They can be picked out from the bundle.yaml file.
 
 ### RBAC
 The ServiceAccount that the TargetAllocator runs as, has to have access to the CRs. A role like this will provide that
````
