diff --git a/README.md b/README.md index 7c50aa4..0735182 100644 --- a/README.md +++ b/README.md @@ -1,169 +1,1447 @@ -## `OTel <--> KEDA` add-on -[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/otel-add-on)](https://artifacthub.io/packages/search?repo=otel-add-on) -### Description +# otel-add-on -This is an external scaler for KEDA that intergrates with OpenTelemetry (OTel) collector. The helm chart deploys also OTel -collector (using the [upstream helm chart](https://github.com/open-telemetry/opentelemetry-helm-charts)) where one can set up -filtering so that scaler receives only those metrics that are needed for scaling decisions -([example](https://github.com/kedify/otel-add-on/blob/v0.0.0-1/helmchart/otel-add-on/values.yaml#L133-L147)). +![Version: v0.1.2](https://img.shields.io/badge/Version-v0.1.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.1.2](https://img.shields.io/badge/AppVersion-v0.1.2-informational?style=flat-square) -The application consist of three parts: -- OTLP receiver -- simple metric storage -- external scaler for KEDA +[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/otel-add-on)](https://artifacthub.io/packages/search?repo=otel-add-on) -#### Receiver +A Helm chart for KEDA otel-add-on -This component is implementation of OTLP Receiver [spec](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/opencensusreceiver), -so that it spawns a GRPC server ([by default](https://github.com/kedify/otel-add-on/blob/v0.0.0-1/helmchart/otel-add-on/values.yaml#L60) on port `4317`) -and stores all incoming metrics in the short term storage - simple metric storage. +``` +:::^. .::::^: ::::::::::::::: .:::::::::. .^. +7???~ .^7????~. 7??????????????. :?????????77!^. .7?7. +7???~ ^7???7~. ~!!!!!!!!!!!!!!. :????!!!!7????7~. .7???7. 
+7???~^7????~. :????: :~7???7. :7?????7. +7???7????!. ::::::::::::. :????: .7???! :7??77???7. +7????????7: 7???????????~ :????: :????: :???7?5????7. +7????!~????^ !77777777777^ :????: :????: ^???7?#P7????7. +7???~ ^????~ :????: :7???! ^???7J#@J7?????7. +7???~ :7???!. :????: .:~7???!. ~???7Y&@#7777????7. +7???~ .7???7: !!!!!!!!!!!!!!! :????7!!77????7^ ~??775@@@GJJYJ?????7. +7???~ .!????^ 7?????????????7. :?????????7!~: !????G@@@@@@@@5??????7: +::::. ::::: ::::::::::::::: .::::::::.. .::::JGGGB@@@&7::::::::: + _ _ _ _ ?@@#~ + ___ | |_ ___| | __ _ __| | __| | ___ _ __ P@B^ + / _ \| __/ _ \ | / _` |/ _` |/ _` |___ / _ \| '_ \ :&G: + | (_) | || __/ | | (_| | (_| | (_| |___| (_) | | | | !5. + \___/ \__\___|_| \__,_|\__,_|\__,_| \___/|_| |_| , + . +``` -#### Simple Metric Storage +**Homepage:** -Very simple metric storage designed to remember last couple of measurements (~ 10-100) for each metric vector. It can be -[configured](https://github.com/kedify/otel-add-on/blob/v0.0.0-1/helmchart/otel-add-on/values.yaml#L11-L12) -with number of seconds to remember. Then during the write operation, it removes the stale measurements, so it effectively works as a -cyclic buffer. Metrics are stored together with labels (key-value pairs) for later querying. +## Usage -#### External Scaler +Check available version in OCI repo: +```bash +crane ls ghcr.io/kedify/charts/otel-add-on | grep -E '^v?[0-9]' +``` -This component also spawns GRPC server ([by default](https://github.com/kedify/otel-add-on/blob/v0.0.0-1/helmchart/otel-add-on/values.yaml#L61) on port `4318`) -and can talk to KEDA operator by implementing the External Scaler [contract](https://keda.sh/docs/2.15/concepts/external-scalers/). +Install specific version: +```bash +helm upgrade -i oci://ghcr.io/kedify/charts/otel-add-on --version=v0.1.1 +``` -It queries the internal in-memory metric storage for metric value and sends it to KEDA operator. 
The metric query is specified as a metadata on KEDA's -ScaledObject CR, and it provides a limited subset of features as PromQL. +Advanced stuff: +```bash +# check /examples dir in project root +find ./../../examples -name '*-values.yaml' +``` -### Architecture -![diagram](./diagram.png "Diagram") -- ([diagram link](https://excalidraw.com/#json=P5ptHj7eQHF3qpCyDDehT,gVJvYLtm0qVR2sStjUlapA)) -- [1] [OTLP format](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc) -- [2] [OTLP metric receiver](https://github.com/open-telemetry/opentelemetry-collector/blob/d17559b6e89a6f97b6800a6538bbf82430d94678/receiver/otlpreceiver/otlp.go#L101) -- [3] [processors](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor) -- [4] https://opencensus.io - obsolete, will be replaced by OTel -- [5] [OpenCensus receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/opencensusreceiver) -- [6] [Prometheus receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver) -- [7] [OTLP exporter](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlpexporter/README.md) +## Source Code -### Example use-cases +* +* -#### 1. convert and react on metrics from OpenCensus +## Requirements -By specifying an [opencensus receiver](https://github.com/kedify/otel-add-on/blob/v0.0.0-1/helmchart/otel-add-on/values.yaml#L112) in the helm chart values for OTel collector, -we will get the ability to get those metrics into our scaler. +Kubernetes: `>= 1.19.0-0` -#### 2. convert and react on metrics from any other upstream receiver -OTel collector contains [numerous](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver) integrations on the receiver part. -All of these various receivers open new ways of how to turn metric from OTel receiver into KEDA scaler. 
For instance by using -[sqlqueryreceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/sqlqueryreceiver), one can achieve similar goals as with -[MySQL](https://keda.sh/docs/2.15/scalers/mysql/) or [PostgreSQL](https://keda.sh/docs/2.15/scalers/postgresql/) scalers. -By using [githubreceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/githubreceiver), one can hook to -metrics from GitBub, etc. +| Repository | Name | Version | +|------------|------|---------| +| https://open-telemetry.github.io/opentelemetry-helm-charts | otelCollector(opentelemetry-collector) | 0.131.0 | +| https://open-telemetry.github.io/opentelemetry-helm-charts | otelOperator(opentelemetry-operator) | 0.93.0 | -#### 3. process the metrics before reaching the scaler -OTel collector provides [various processors](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor) -that are being applied on all incoming metrics/spans/traces and one achieve for instance metric [filtering](https://github.com/kedify/otel-add-on/blob/v0.0.0-1/helmchart/otel-add-on/values.yaml#L138-L143) -this way. So that not all the metric data are passed to scaler's short term memory. This way we can keep the OTel scaler pretty lightweight. +## OTel Collector Sub-Chart -OTTL lang: -- [spec](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/LANGUAGE.md) -- [functions](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/ottl/ottlfuncs) +This helm chart, when enabled by `--set otelCollector.enabled=true`, installs the OTel collector using +its upstream [helm chart](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector). 
-If the simple metric query is not enough and one requires to combine multiple metric vectors into one or perform simple
-arithmetic operations on the metrics, there is the [Metrics Generation Processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/metricsgenerationprocessor)
-available as an option
+To check all the possible values for this dependent helm chart, consult [values.yaml](https://github.com/open-telemetry/opentelemetry-helm-charts/blob/main/charts/opentelemetry-collector/values.yaml)
+or [docs](https://github.com/open-telemetry/opentelemetry-helm-charts/blob/main/charts/opentelemetry-collector/README.md).
-#### 4. OTel patterns (metric pipelines)
-Basically any scenario described in [OTel patterns](https://github.com/jpkrohling/opentelemetry-collector-deployment-patterns) or [architecture](https://opentelemetry.io/docs/collector/architecture/)
-should be supported. So no matter how the OTel collectors are deployed, whether it's a fleet of sidecar containers deployed alongside each workload or
-some complex pipeline that spans multiple Kubernetes clusters, you will be covered.
+All these values go under the `otelCollector` section.
-## Installation
+Example:
-### First install KEDA
-```bash
-helm repo add kedacore https://kedacore.github.io/charts
-helm repo update
-helm upgrade -i keda kedacore/keda --namespace keda --create-namespace
+```yaml
+settings:
+  metricStore:
+    retentionSeconds: 60
+otelCollector:
+  enabled: true
+  #
+  alternateConfig:
+    receivers:
+      ...
```
-### Then install this add-on
+## OTel Operator Sub-Chart
+Using the `otelCollector` sub-chart (described in the previous section), we can install one instance of the OTel collector. However, all
+the helm chart values need to be passed in advance, including the OTel collector configuration section. One limitation of Helm is the absence
+of templating for the chart values themselves. This would be very useful, because some things in the OTel configuration are dynamic (addresses, etc.).
+
+We can achieve that by using the upstream OTel [Operator](https://opentelemetry.io/docs/platforms/kubernetes/operator/) and rendering[^fn1] its CRs using this helm chart.
+
+Configuration of the `OpenTelemetryCollector` CR is driven by:
+- `.otelOperatorCrDefaultTemplate` (defaults)
+- `.otelOperatorCrs` (overrides)
+
+> [!TIP]
+> If there is a default set at the `.otelOperatorCrDefaultTemplate` level, say:
+> ```yaml
+> otelOperatorCrDefaultTemplate:
+>   alternateExporters:
+>     otlp/central:
+>       protocols:
+>         grpc:
+>           endpoint: external-backup:4317
+> ```
+> and we want to make the field `alternateExporters` empty, we can do that by:
+> ```yaml
+> otelOperatorCrDefaultTemplate:
+>   alternateExporters:
+>     otlp/central:
+>       protocols:
+>         grpc:
+>           endpoint: external-backup:4317
+> otelOperatorCrs:
+>   - enabled: true
+>     name: "nonDefault"
+>     alternateExporters: null
+> ```
+> Otherwise, the behavior of the config merge is as expected ([code](https://github.com/kedify/otel-add-on/blob/v0.0.13/helmchart/otel-add-on/templates/_helpers.tpl#L64-L79)).
+> Also, if the `alternateExporters` field in the merged config is empty, we will create an implicit exporter that will feed the metrics into the KEDA OTel scaler with a preconfigured service name.
+> If for any reason you would like to disable all the exporters for the OTel collector, add only a dummy `debug` exporter:
+> ```bash
+> noglob helm template oci://ghcr.io/kedify/charts/otel-add-on --version=v0.1.1 \
+>   --set otelOperatorCrs[0].alternateExporters.debug.verbosity=basic \
+>   --set otelOperatorCrs[0].enabled=true
+> ```
+
+So one can deploy a whole metric pipeline, including multiple OTel collectors with different settings, as one helm release using this chart.
+You can check the description of `otelOperatorCrDefaultTemplate` in the Values [section](#values) for such an example.
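+
+Once the collectors feed metrics into the scaler, KEDA consumes them through the external scaler contract. A minimal, hedged `ScaledObject` sketch (the trigger metadata keys follow `examples/so.yaml`; the workload name, query and target value are illustrative):
+
+```yaml
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: my-app
+spec:
+  scaleTargetRef:
+    name: my-app
+  triggers:
+    - type: external
+      metadata:
+        # host and port must match .service.name and .service.kedaExternalScalerPort
+        scalerAddress: keda-otel-scaler.keda.svc:4318
+        metricQuery: sum(http_requests_total{namespace=default})
+        operationOverTime: rate
+        targetValue: "10"
+```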
+ +For: + - receivers + - exporters + - processors + - extensions + - pipelines +You can use the `alternate{Receivers,Exporters,Processors,Extensions,Pipelines}` config options on both CR level and default-template level to +tweak the OTel collector config. This has the benefit that it will also enable it under the `.service.pipelines` option so there is no need to +repeat yourself. However, if you want to provide the full OTel collector configuration, you can do that by putting it under `alternateOtelConfig` (again CR level or default template). +When `alternateOtelConfig` is set, all the `alternate{Receivers,Exporters,Processors,Extensions,Pipelines}` are ignored. + +## Values + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
KeyDescriptionDefault
+image.repository
+ +(Type: string)
+ Image to use for the Deployment + +
+
+"ghcr.io/kedify/otel-add-on"
+
+
+
+image.pullPolicy
+
+(Type: image-pull-policy)
+ image pull policy for KEDA OTel Scaler pod + +
+
+"Always"
+
+
+
+image.tag
+ +(Type: string)
+ Image version to use for the Deployment; if not specified, it defaults to .Chart.AppVersion
+
+
+
+""
+
+
+
+settings
.metricStore
.retentionSeconds

+ +(Type: int)
+ how long the metrics should be kept in the short term (in memory) storage + +
+
+120
+
+
+
+settings
.metricStore
.lazySeries

+ +(Type: bool)
+ if enabled, no metrics will be stored until there is a request for such a metric from the KEDA operator.
+
+
+
+false
+
+
+
+settings
.metricStore
.lazyAggregates

+ +(Type: bool)
+ if enabled, the only aggregate that will be calculated on the fly is the one referenced in the metric query (by default, we calculate and store all of them - sum, rate, min, max, etc.) + +
+
+false
+
+
+
+settings
.metricStore
.errIfNotFound

+ +(Type: bool)
+ when enabled, the scaler will return an error to KEDA's GetMetrics() call
+
+
+
+false
+
+
+
+settings
.metricStore
.valueIfNotFound

+ +(Type: float)
+ default value that is reported in case of an error or if the value is not in the mem store
+
+
+
+0
+
+
+
+settings
.isActivePollingIntervalMilliseconds

+ +(Type: int)
+ how often (in milliseconds) the IsActive method should be tried
+
+
+
+500
+
+
+
+settings.internalMetricsPort
+ +(Type: int)
+ internal (mostly golang) metrics will be exposed on :8080/metrics + +
+
+8080
+
+
+
+settings.restApiPort
+ +(Type: int)
+ port where rest api should be listening + +
+
+9090
+
+
+
+settings.logs.logLvl
+ +(Type: string)
+ Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity + +
+
+"info"
+
+
+
+settings.logs.stackTracesLvl
+ +(Type: string)
+ one of: info, error, panic + +
+
+"error"
+
+
+
+settings.logs.noColor
+ +(Type: bool)
+ if anything other than 'false', the log will not contain colors
+
+
+
+false
+
+
+
+settings.logs.noBanner
+ +(Type: bool)
+ if anything other than 'false', the log will not print the ascii logo
+
+
+
+false
+
+
+
+settings.tls.caFile
+ +(Type: optional)
+ path to CA certificate. When provided, the client certificate will be verified using this CA where "client" ~ another OTLP exporter. + +
+
+""
+
+
+
+settings.tls.certFile
+ +(Type: optional)
+ path to TLS certificate that will be used for OTLP receiver + +
+
+""
+
+
+
+settings.tls.keyFile
+ +(Type: optional)
+ path to TLS key that will be used for OTLP receiver + +
+
+""
+
+
+
+settings.tls.reloadInterval
+ +(Type: optional)
+ specifies the duration after which the certificates will be reloaded. This is useful when using the CertManager for rotating the certs mounted as Secrets. + +
+
+"5m"
+
+
+
+settings.tls.keda.certFile
+ +(Type: optional)
+ path to TLS certificate that will be used for KEDA gRPC server. If empty, defaults to settings.tls.certFile + +
+
+""
+
+
+
+settings.tls.keda.keyFile
+ +(Type: optional)
+ path to TLS key that will be used for KEDA gRPC server. If empty, defaults to settings.tls.keyFile + +
+
+""
+
+
+
+settings.tls.secrets
+ +(Type: optional)
+ list of secrets that will be mounted to the deployment's pod. One entry in this list will create one volume and one volumeMount for the pod. This is a convenient way of mounting the certs for TLS, but using .volumes & .volumeMounts for anything advanced will also work.
+
+
+
+[]
+
+
+
+deploymentStrategy
+
+(Type: strategy)
+ one of: RollingUpdate, Recreate + +
+
+"RollingUpdate"
+
+
+
+deployScaler
+ +(Type: bool)
+ when disabled, the deployment with KEDA Scaler will not be rendered + +
+
+true
+
+
+
+validatingAdmissionPolicy
.enabled

+ +(Type: bool)
+ whether the ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding resources should also be rendered
+
+
+
+false
+
+
+
+asciiArt
+ +(Type: bool)
+ should the ascii logo be printed when this helm chart is installed + +
+
+true
+
+
+
+imagePullSecrets
+
+(Type: specifying-imagepullsecrets-on-a-pod)
+ imagePullSecrets for KEDA OTel Scaler pod + +
+
+[]
+
+
+
+serviceAccount.create
+ +(Type: bool)
+ should the service account also be created and linked in the deployment
+
+
+
+true
+
+
+
+serviceAccount.annotations
+ +(Type: object)
+ further custom annotation that will be added on the service account + +
+
+{}
+
+
+
+serviceAccount.name
+ +(Type: string)
+ name of the service account, defaults to otel-add-on.fullname ~ release name if not overridden
+
+
+
+""
+
+
+
+podAnnotations
+ +(Type: object)
+ additional custom pod annotations that will be used for pod + +
+
+{}
+
+
+
+podLabels
+ +(Type: object)
+ additional custom pod labels that will be used for pod + +
+
+{}
+
+
+
+podSecurityContext
+
+(Type: pod-security-standards)
+ securityContext for KEDA OTel Scaler pod + +
+
+{}
+
+
+
+securityContext
+
+(Type: pod-security-standards)
+ securityContext for KEDA OTel Scaler container + +
+
+capabilities:
+    drop:
+        - ALL
+readOnlyRootFilesystem: true
+runAsNonRoot: true
+runAsUser: 1000
+
+
+
+
+service.type
+
+(Type: publishing-services-service-types)
+ Under this service, the otel add-on needs to be reachable by the KEDA operator and the OTel collector
+
+
+
+"ClusterIP"
+
+
+
+service.otlpReceiverPort
+ +(Type: int)
+ The OTLP receiver will be opened on this port. The OTLP exporter configured in the OTel collector needs to have this value set.
+
+
+
+4317
+
+
+
+service.kedaExternalScalerPort
+ +(Type: int)
+ KEDA external scaler will be opened on this port. ScaledObject's .spec.triggers[].metadata.scalerAddress needs to be set to this svc and this port. + +
+
+4318
+
+
+
+service.name
+ +(Type: string)
+ name under which the scaler should be exposed. If left empty, it will try .Values.fullnameOverride and, if this is also empty, the name of the release is used. This handles the case when one needs to install multiple instances of this chart into one cluster and at the same time provide a way to specify a stable address
+
+
+
+""
+
+
+
+resources
+
+(Type: manage-resources-containers)
+ resources for the OTel Scaler pod + +
+
+limits:
+    cpu: 500m
+    memory: 256Mi
+requests:
+    cpu: 500m
+    memory: 128Mi
+
+
+
+
+nodeSelector
+
+(Type: nodeselector)
+ node selector for KEDA OTel Scaler pod + +
+
+{}
+
+
+
+tolerations
+
+(Type: taint-and-toleration)
+ tolerations for KEDA OTel Scaler pod + +
+
+[]
+
+
+
+affinity
+
+(Type: affinity-and-anti-affinity)
+ affinity for KEDA OTel Scaler pod + +
+
+{}
+
+
+
+kubectlImage
+ +(Type: yaml)
+ helper container image that creates the OpenTelemetryCollector CR as a post-install helm hook
+
+
+
+tag: "v1.33.1"
+repository: ghcr.io/kedify/kubectl
+pullPolicy: Always
+pullSecrets: []
+
+
+
+
+otelOperatorCrDefaultTemplate
+ +(Type: raw)
+
+**This field defines the default template for `OpenTelemetryCollector` CR**
+
+The vast majority of the fields have their counterparts described in the OpenTelemetryCollector CRD.
+In order to check their descriptions, install the CRD and run:
```bash
-helm upgrade -i keda-otel-scaler -nkeda oci://ghcr.io/kedify/charts/otel-add-on --version=v0.1.1
+ kubectl explain otelcol.spec
```
+These defaults are then used as a base layer of configuration for all the items in the `.otelOperatorCrs` list.
+So given these values:
-### Create an example scaled object
-```bash
-k apply -f examples/so.yaml
+```yaml
+otelOperatorCrDefaultTemplate:
+  mode: deployment
+otelOperatorCrs:
+  - enabled: true
+    name: "foo"
+    mode: "daemonset"
+  - enabled: true
+    name: "bar"
```
+It will render[^fn1] two OpenTelemetryCollector CRs called `foo` and `bar`, where `foo` will have `.spec.mode` set to
+`daemonset` and `bar` will inherit the default mode from `.otelOperatorCrDefaultTemplate.mode` => `deployment`.
+[^fn1]: Well, in fact it doesn't render the OpenTelemetryCollector CRs directly, but nests them as part of a ConfigMap. Then this
+CM is read during the post-install hook and the CR is created. This is because we can't render a CRD and its instances in one helm command.
-### Advanced setups
+ > [!NOTE]
+ > When specifying custom receivers, processors, exporters or extensions, use `alternate{Receivers,Processors,Exporters,Extensions}`.
+ > There is no need to enable these under the pipeline section; this is done automagically [here](https://github.com/kedify/otel-add-on/blob/main/helmchart/otel-add-on/templates/install-otc/otc-configmap.yaml).
-Check some prepared examples in the [`./examples`](./examples) directory and also check the `dev.Makefile` if you want to
-set up mTLS between a collector and this scaler.
+ > [!TIP]
+ > For overriding the whole OTel config, use the `.alternateOtelConfig` field.
-```bash -λ make -f dev.Makefile -Usage: - make -Demos - demo-podinfo setup ./examples/metric-pull - demo-podinfo-tls setup ./examples/metric-pull with TLS - demo-otel-upstream setup ./examples/metric-push - demo-operator setup ./examples/otel-operator - demo-operator-tls setup ./examples/otel-operator with TLS - -λ make -f dev.Makefile demo-podinfo-tls -... +Advanced example: +
+Expand + +`values.yaml:` +```yaml +otelOperator: + enabled: true +otelOperatorCrDefaultTemplate: + mode: deployment + alternateReceivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + alternateExporters: + otlp: + endpoint: col-fanout-collector:4317 + tls: + insecure: true +otelOperatorCrs: + - enabled: true + name: "col-in-1" + - enabled: true + name: "col-in-2" + - enabled: true + name: "col-fanout" + alternateExporters: + otlp/external: + endpoint: external-collector:4317 + otlp/keda: + endpoint: keda-otel-scaler.keda.svc:4317 + tls: + insecure: true + alternateConnectors: + routing: + default_pipelines: [metrics/all] + table: + - context: metric + condition: metric.name == "http_requests_total" + pipelines: [metrics/keda] + alternatePipelines: + metrics/in: + receivers: [otlp] + exporters: [routing] + metrics/keda: + receivers: [routing] + exporters: [otlp/keda] + metrics/all: + receivers: [routing] + exporters: [otlp/external] +``` + resulting architecture: +```mermaid +graph TD; + A[col-in-1] -- metrics --> C{col-fanout} + B[col-in-2] -- metrics, traces --> C{col-fanout} + C -- one metric --> D(KEDA Scaler) + C -- all --> E((external-col)) ``` -## Troubleshooting +
+
+
+
 
-To figure out the actual value of a metric query, there is a simple REST api that can be used:
+
+
+
+otelOperatorCrDefaultTemplate
.debug

-```bash -(kubectl port-forward svc/keda-otel-scaler 9090&) -# swagger doc: http://localhost:9090/swagger/index.html -# test the existing metric query -curl -X 'POST' \ - 'http://localhost:9090/memstore/query' \ - -H 'accept: application/json' \ - -H 'Content-Type: application/json' \ - -d '{ - "operationOverTime": "last_one", - "query": "kube_deployment_status_replicas_available{deployment=foo,namespace=observability}" -}' -{ - "ok": true, - "operation": "query", - "error": "", - "value": 1 -} +(Type: bool)
+ container image for the post-install helm hook that helps with OpenTelemetryCollector CR installation
+
+
+
+false
+
+
+
+otelOperatorCrDefaultTemplate
.mode

+ +(Type: string)
+ how the otel collector should be deployed: sidecar, statefulset, deployment, daemonset. Note: make sure the CertManager is installed and admission webhooks are enabled for the OTel operator when using mode=sidecar
+
+
+
+"deployment"
+
+
+
+otelOperatorCrDefaultTemplate
.targetAllocatorEnabled

+ +(Type: bool)
+ whether the TargetAllocator feature (Prometheus Custom Resources for service discovery) should be enabled (details). Make sure the mode is not set to sidecar when this is enabled
+
+
+
+false
+
+
+
+otelOperatorCrDefaultTemplate
.targetAllocatorClusterRoles

-# check the data if the labels are stored there correctly: -curl -X 'GET' \ - 'http://localhost:9090/memstore/data' \ - -H 'accept: application/json' | jq '.kube_deployment_status_replicas_available[].labels' +(Type: list)
+ list of existing cluster roles that will be bound to the service account (in order to be able to work with {Pod,Service}Monitor CRD) + +
+
+[
+  "kube-prometheus-stack-operator",
+  "kube-prometheus-stack-prometheus"
+]
+
+
+
+otelOperatorCrDefaultTemplate
.targetAllocator
.prometheusCR
.serviceMonitorSelector

+ +(Type: object)
+ further narrow the ServiceMonitor CRs (labels) + +
+
+{}
+
+
+
+otelOperatorCrDefaultTemplate
.targetAllocator
.prometheusCR
.podMonitorSelector

+ +(Type: object)
+ further narrow the PodMonitor CRs + +
+
+{}
+
+
+
+otelOperatorCrDefaultTemplate
.tls

+ +(Type: object)
+ TLS settings for the OTel collector's exporter that feeds the metrics to the KEDA OTel scaler. It is not in scope of this helm chart to create the secrets with the certificate; however, this is a convenient way of configuring volumes and volumeMounts for each secret with a cert. It has the same structure as the tls settings for the scaler (check .Values.tls). One significant difference is that here we specify a client cert for the OTLP exporter, while .Values.tls specifies the server cert for the OTLP receiver
+
+
+
+{}
+
+
+
+otelOperatorCrDefaultTemplate
.resources

+
+(Type: manage-resources-containers)
+ resources for the OTel collector container + +
+
+limits:
+    cpu: 400m
+    memory: 128Mi
+requests:
+    cpu: 200m
+    memory: 64Mi
+
+
+
+
+otelOperatorCrDefaultTemplate
.alternateOtelConfig

+ +(Type: object)
+ free-form OTel configuration that will be used for the OpenTelemetryCollector CR (no checks are performed). This is mutually exclusive with all the following options
+
+
+
+{}
+
+
+
+otelOperatorCrDefaultTemplate
.prometheusScrapeConfigs

+ +(Type: list)
+ static targets for the prometheus receiver; this needs to take into account the deployment mode of the collector (127.0.0.1 in sidecar mode will mean something different than in statefulset mode)
+
+
+
+[
+  {
+    "job_name": "otel-collector",
+    "scrape_interval": "3s",
+    "static_configs": [
+      {
+        "targets": [
+          "0.0.0.0:8080"
+        ]
+      }
+    ]
+  }
+]
+
+
+
+otelOperatorCrDefaultTemplate
.alternateReceivers

+ +(Type: object)
+ mutually exclusive with prometheusScrapeConfigs option + +
+
+{}
+
+
+
+otelOperatorCrDefaultTemplate
.includeMetrics

+ +(Type: list)
+ if not empty, only the following metrics will be sent. This translates to the filter/metrics processor. An empty array means include all.
+
+
+
+[]
+
+
+
+otelOperatorCrs
+ +(Type: yaml)
+ also create OpenTelemetryCollector CRs that will be reconciled by the OTel Operator. Each entry takes all the default settings defined in otelOperatorCrDefaultTemplate and allows overriding them here
+
+
+
+# -- if enabled, the OpenTelemetryCollector CR will be created using a post-install hook
+- enabled: false
+  # -- name of the OpenTelemetryCollector CR. If left empty, the release name will be used.
+  name: ""
+  # -- in what k8s namespace the OpenTelemetryCollector CR should be created. If left empty, the release namespace will be used.
+  namespace: ""
+- name: target-allocator
+  enabled: false
+  targetAllocatorEnabled: true
+  mode: deployment
+
+
+
+
+otelOperatorCrs[0]
+ +(Type: object)
+ if enabled, the OpenTelemetryCollector CR will be created using a post-install hook
+
+
+
 {
-  "deployment": "keda-operator",
-  "namespace": "keda"
+  "enabled": false,
+  "name": "",
+  "namespace": ""
 }
-```
+
+
+
+otelOperatorCrs[0].name
-### Configuration for OTel collector +(Type: string)
+ name of the OpenTelemetryCollector CR. If left empty, the release name will be used. + +
+
+""
+
+
+
+otelOperatorCrs[0].namespace
-This repo has the OTel collector helm chart as a dependency and some issues in the configuration are guarded -by their upstream JSON Schema, but some are not, and it's a good idea to run the validator (especially if it's part -of a CI/CD pipeline): +(Type: string)
+ in what k8s namespace the OpenTelemetryCollector CR should be created. If left empty, the release namespace will be used. + +
+
+""
+
+
+
+otelOperator
-``` -# 1) download the binary -VERSION=0.117.0 -curl -sL https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v${VERSION}/otelcol-contrib_${VERSION}_$(go env GOOS)_$(go env GOARCH).tar.gz | tar xvz - -C . +(Type: yaml)
+ values for the OTel operator helm chart; these values override the defaults defined here. By default, the operator is disabled
+
+
+
+enabled: false
+fullnameOverride: otel-operator
+manager:
+    collectorImage:
+        #      repository: otel/opentelemetry-collector-k8s
+        repository: otel/opentelemetry-collector-contrib
+    env:
+        ENABLE_WEBHOOKS: "false"
+    serviceAccount:
+        name: otel-operator
+admissionWebhooks:
+    create: false
 
-# 2) run the validator against your helm chart values
-./otel-contrib validate --config=<(cat values.yaml | yq '.opentelemetry-collector.alternateConfig')
-```
+
+
+
+otelCollector
+ +(Type: yaml)
+ values for the OTel collector helm chart; these values override the defaults defined here. By default, the collector is disabled
+
+
+
+# -- If enabled, the OTel collector sub-chart will be rendered
+enabled: false
+# -- Valid values are "daemonset", "deployment", "sidecar" and "statefulset"
+mode: deployment
+image:
+    #    repository: otel/opentelemetry-collector-k8s
+    repository: otel/opentelemetry-collector-contrib
+    # -- Container image - OTel collector distribution
+fullnameOverride: otelcol
+#  ports:
+#    opencensus:
+#      enabled: true
+#      containerPort: 55678
+#      servicePort: 55678
+#      hostPort: 55678
+#      protocol: TCP
+# -- Configuration for OTel collector that will be installed
+# @notationType -- yaml
+alternateConfig:
+    receivers: {}
+    processors:
+        resourcedetection/env:
+            detectors: [env]
+            timeout: 2s
+            override: false
+        transform:
+            metric_statements:
+                - context: datapoint
+                  statements:
+                    - set(attributes["namespace"], resource.attributes["k8s.namespace.name"])
+                    - set(attributes["pod"], resource.attributes["k8s.pod.name"])
+                    - set(attributes["deployment"], resource.attributes["k8s.deployment.name"])
+    exporters:
+        otlp:
+            # make sure this is the same hostname and port as .service (when using different namespace)
+            endpoint: keda-otel-scaler.keda.svc:4317
+            compression: "none"
+            tls:
+                insecure: true
+        debug:
+            verbosity: detailed
+    service:
+        extensions:
+            - health_check
+        pipelines:
+            metrics:
+                receivers: []
+                processors: [resourcedetection/env, transform]
+                exporters: [debug, otlp]
+    extensions:
+        health_check:
+            endpoint: ${env:MY_POD_IP}:13133
+
+
+
+
+otelCollector.enabled
+ +(Type: bool)
+ If enabled, the OTel collector sub-chart will be rendered + +
+
+false
+
+
+
+otelCollector.mode
+ +(Type: string)
+ Valid values are "daemonset", "deployment", "sidecar" and "statefulset" + +
+
+"deployment"
+
+
+
+otelCollector.alternateConfig
+ +(Type: yaml)
+ Configuration for OTel collector that will be installed + +
+
+receivers: {}
+processors:
+    resourcedetection/env:
+        detectors: [env]
+        timeout: 2s
+        override: false
+    transform:
+        metric_statements:
+            - context: datapoint
+              statements:
+                - set(attributes["namespace"], resource.attributes["k8s.namespace.name"])
+                - set(attributes["pod"], resource.attributes["k8s.pod.name"])
+                - set(attributes["deployment"], resource.attributes["k8s.deployment.name"])
+exporters:
+    otlp:
+        # make sure this is the same hostname and port as .service (when using different namespace)
+        endpoint: keda-otel-scaler.keda.svc:4317
+        compression: "none"
+        tls:
+            insecure: true
+    debug:
+        verbosity: detailed
+service:
+    extensions:
+        - health_check
+    pipelines:
+        metrics:
+            receivers: []
+            processors: [resourcedetection/env, transform]
+            exporters: [debug, otlp]
+extensions:
+    health_check:
+        endpoint: ${env:MY_POD_IP}:13133
+
+
+
+
+ + -Alternatively, you may want to use online tools such as [otelbin.io](https://www.otelbin.io/). +

+ +

diff --git a/artifacthub/otel-add-on-scaler/0.1.2/README.md b/artifacthub/otel-add-on-scaler/0.1.2/README.md new file mode 120000 index 0000000..8a33348 --- /dev/null +++ b/artifacthub/otel-add-on-scaler/0.1.2/README.md @@ -0,0 +1 @@ +../../../README.md \ No newline at end of file diff --git a/artifacthub/otel-add-on-scaler/0.1.2/artifacthub-pkg.yml b/artifacthub/otel-add-on-scaler/0.1.2/artifacthub-pkg.yml new file mode 100644 index 0000000..31401ff --- /dev/null +++ b/artifacthub/otel-add-on-scaler/0.1.2/artifacthub-pkg.yml @@ -0,0 +1,15 @@ +# full spec: https://github.com/artifacthub/hub/blob/master/docs/metadata/artifacthub-pkg.yml +version: 0.1.2 +name: otel-add-on-scaler +displayName: KEDA OTel Addon Scaler +createdAt: 2025-10-07T18:33:14Z +description: KEDA External Scaler that can obtain metrics from OTel collector and use them for autoscaling. +homeURL: https://github.com/kedify/otel-add-on +logoURL: https://raw.githubusercontent.com/kedacore/keda/main/images/keda-logo-500x500-white.png +links: + - name: GitHub repo + url: https://github.com/kedify/otel-add-on +keywords: + - keda + - otel + - scaler