[Exporterhelper Exporter] Enable metadata population in exporterhelper when sending_queue is enabled #14139
Conversation
Signed-off-by: Andreas Gkizas <[email protected]>
Codecov Report: ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@ (main vs. #14139)

| | main | #14139 | +/- |
|---|---|---|---|
| Coverage | 92.17% | 92.17% | -0.01% |
| Files | 668 | 669 | +1 |
| Lines | 41463 | 41515 | +52 |
| Hits | 38219 | 38265 | +46 |
| Misses | 2212 | 2216 | +4 |
| Partials | 1032 | 1034 | +2 |
@bogdandrutu (sorry for the direct ping), did you by any chance have time to review the above PR? Should I add someone else specific as a reviewer? Any hint would be really appreciated.
CodSpeed Performance Report: merging #14139 will improve performance by 25.7%.

| Benchmark | BASE | HEAD | Change |
|---|---|---|---|
| ⚡ BenchmarkBatchMetricProcessor2k | 2.1 µs | 1.7 µs | +25.7% |
```go
// WithQueueBatch enables queueing and batching for an exporter.
// This option should be used with the new exporter helpers New[Traces|Metrics|Logs]RequestExporter.
// If cfg.MetadataKeys is set, it will automatically configure the partitioner and merge function
```
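The comment above pairs a partitioner with a merge function. For intuition only, here is a dependency-free sketch of what a metadata-aware merge step could do: keep only the configured keys, and only where both requests being batched agree on the value. All names are hypothetical; the real implementation lives in exporterhelper/queuebatch and operates on request context metadata.

```go
package main

import "fmt"

// mergeMetadata is a hypothetical illustration of a metadata merge
// function: given the metadata of two requests, retain only the
// configured keys whose values match on both sides.
func mergeMetadata(keys []string, a, b map[string]string) map[string]string {
	out := make(map[string]string)
	for _, k := range keys {
		va, okA := a[k]
		vb, okB := b[k]
		if okA && okB && va == vb {
			out[k] = va
		}
	}
	return out
}

func main() {
	merged := mergeMetadata(
		[]string{"tenant"},
		map[string]string{"tenant": "acme", "trace": "1"},
		map[string]string{"tenant": "acme", "trace": "2"},
	)
	fmt.Println(merged["tenant"]) // acme
}
```

Note that with a metadata-keys partitioner in place, requests with differing values for the configured keys would never share a batch, so in practice the merge step mostly carries the common metadata through.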
On purpose I kept the comment for parity with the base_exporter.go function.
axw left a comment:
Thanks for the updates, just a couple of minor things remaining from me.
```go
// Automatically configure partitioner if MetadataKeys is set
if len(cfg.MetadataKeys) > 0 {
	if set.Partitioner != nil {
		return errors.New("cannot use metadata_keys when a custom partitioner is already configured")
	}
	if set.MergeCtx != nil {
		return errors.New("cannot use metadata_keys when a custom merge function is already configured")
	}
	set.Partitioner = queuebatch.NewMetadataKeysPartitioner(cfg.MetadataKeys)
	set.MergeCtx = queuebatch.NewMetadataKeysMergeCtx(cfg.MetadataKeys)
}
```
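For readers unfamiliar with the partitioner concept: a metadata-keys partitioner essentially derives a stable batch key from the selected request-metadata values, so requests with matching values land in the same batch. A dependency-free sketch under assumed semantics (the real code operates on the collector's request context metadata, where values are `[]string`; names here are illustrative, not the queuebatch implementation):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// partitionKey sketches how a metadata-keys partitioner could derive a
// batch key. Unset keys are kept distinct from keys with an empty value,
// matching the documented config semantics. Hypothetical illustration.
func partitionKey(keys []string, metadata map[string][]string) string {
	sorted := append([]string(nil), keys...)
	sort.Strings(sorted) // stable ordering regardless of config order
	parts := make([]string, 0, len(sorted))
	for _, k := range sorted {
		vals, ok := metadata[strings.ToLower(k)] // entries are case-insensitive
		if !ok {
			parts = append(parts, k+"=<unset>")
			continue
		}
		parts = append(parts, k+"="+strings.Join(vals, ";"))
	}
	return strings.Join(parts, "|")
}

func main() {
	md := map[string][]string{"projectid": {"local"}}
	fmt.Println(partitionKey([]string{"ProjectID"}, md)) // ProjectID=local
}
```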
FWIW, I think we should probably compose them. But we can always do that in a followup - it would be an enhancement over what you have implemented.
Yes, it was quicker just to error for now.
Do you want me to create a follow-up once this is merged?
Some details to consider:
- Which logic should merge first? Probably the custom logic?
- How should metadata within each custom partition be checked against the overall metadata_keys, if those exist?
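The composition idea discussed above can be sketched generically: partition first by metadata, then let a custom partitioner subdivide each metadata partition. A hypothetical, dependency-free sketch of the proposed follow-up (the real `Partitioner` operates on `request.Request`; the `any` request type here is a stand-in):

```go
package main

import "fmt"

// Partitioner maps a request (opaque here) to a partition key.
type Partitioner func(req any) string

// compose chains two partitioners: the combined key is the pair of
// keys, so batches are split first by the outer partitioner and then
// subdivided by the inner one. Hypothetical sketch, not collector code.
func compose(outer, inner Partitioner) Partitioner {
	return func(req any) string {
		return outer(req) + "/" + inner(req)
	}
}

func main() {
	byTenant := func(req any) string { return "tenant-a" }
	byScope := func(req any) string { return "scope-1" }
	p := compose(byTenant, byScope)
	fmt.Println(p(nil)) // tenant-a/scope-1
}
```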
```go
// Empty value and unset metadata are treated as distinct cases.
//
// Entries are case-insensitive. Duplicated entries will trigger a validation error.
MetadataKeys []string `mapstructure:"metadata_keys"`
```
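The documented constraints (case-insensitive entries, duplicates rejected) imply a validation step roughly like the following. This is a hypothetical sketch of what the config's validation might check, not the actual method in the PR:

```go
package main

import (
	"fmt"
	"strings"
)

// validateMetadataKeys enforces the documented rules: entries are
// compared case-insensitively, and duplicated entries are a validation
// error. Hypothetical sketch only.
func validateMetadataKeys(keys []string) error {
	seen := make(map[string]struct{}, len(keys))
	for _, k := range keys {
		lower := strings.ToLower(k)
		if _, dup := seen[lower]; dup {
			return fmt.Errorf("duplicate metadata key: %q", k)
		}
		seen[lower] = struct{}{}
	}
	return nil
}

func main() {
	// "X-Tenant" and "x-tenant" collide under case-insensitive comparison.
	fmt.Println(validateMetadataKeys([]string{"X-Tenant", "x-tenant"}))
}
```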
We need to provide an option to split batches using keys from other sources, not only request-context metadata, but also resource attributes, instrumentation name etc. #12795 has more details. This interface would conflict with that functionality. I’m not sure it’s worth having two configuration options that overlap or conflict with each other. I’d prefer to make progress toward #12795 instead. @axw WDYT?
Thanks for the reminder @dmitryax, good points. I'd also love to make some progress on that; I'll try to play with some ideas today.
> This interface would conflict with that functionality.
Why would it conflict? Could we not have both? i.e. config for partitioning by metadata, as well as say support for partitioner extensions? It should also be possible to compose them, as I suggested in another comment thread, e.g. partition first by metadata, and then partition those batches via an extension.
(Playing devil's advocate a little bit, not necessarily saying that's what we should do.)
I spent a while looking into this today, and I'm struggling to see a good path for adding resource/scope/record-level partitioning to exporterhelper.
I previously had a bit of a warped understanding of the Partitioner interface. I had in mind that it was operating on plog.Logs and friends, but it's actually operating on request.Request. That interface totally hides the data type, which makes splitting/merging batches complicated except for in generic ways, i.e. sizer-based splitting/merging.
I started looking into adding partitioning before we even get to batching, but stopped pretty quickly because that's effectively equivalent to a processor. So... I'm back to thinking we should:
- Introduce a dedicated processor for splitting batches apart: New component: OTTL-based request partitioning processor opentelemetry-collector-contrib#39199
- Add support for partitioning generic requests by metadata keys (i.e. this PR)
@dmitryax if you have ideas about how we can achieve it all in exporterhelper I'd be keen to hear them.
Co-authored-by: Andrew Wilkins <[email protected]>
Description

Feature: enabling sending_queue in the otlphttp exporter configuration does not preserve the metadata of the initial context. This PR introduces a new configuration option, metadata_keys, to be used in the exporter config.

Testing

- Added partitioner_test
- Built my local otlp exporter

Configuration of otlphttp:
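The original configuration snippet was not captured on this page. As a rough, hypothetical sketch of what an otlphttp config using the new option might look like (the endpoint and key are taken from the log line below; exact field placement is an assumption, not from the PR):

```yaml
exporters:
  otlphttp:
    endpoint: http://local.es.svc.cluster.local:9200/_otlp
    sending_queue:
      enabled: true
      metadata_keys:
        - ProjectID
```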
See "ProjectID":"local" populated in my logs below:

```json
{"log.level":"info","@timestamp":"2025-11-06T15:51:49.955Z","message":"ecpRoutingRoundTripper: after URL modification","resource":{"cloud.availability_zone":"local","k8s.namespace.name":"motel-index-collector","k8s.node.name":"hotel-worker2","k8s.pod.name":"motel-index-collector-local-58484b67c8-l4vrt","k8s.pod.uid":"1a8f6fad-91b6-413a-a929-6d004ca489b3","orchestrator.cluster.name":"default","orchestrator.deploymentslice":"","orchestrator.environment":"default","service.instance.id":"ecf43bce-5aac-444c-b244-8871bbf2d69a","service.name":"motel-index-collector","service.version":"git"},"otelcol.component.id":"ecproutingmiddleware","otelcol.component.kind":"extension","final_url_host":"local.es.svc.cluster.local:9200","final_url_scheme":"http","final_url":"http://local.es.svc.cluster.local:9200/_otlp/v1/metrics","final_host":"","ProjectID":"local","header_count":6,"ecs.version":"1.6.0"}
```

This PR is a feature for exporterhelper, so it is not just otlphttp-specific.