chore: update all opentelemetry collector packages#3152
Status: Open

renovate[bot] wants to merge 1 commit into `main`.
ℹ️ Artifact update notice — file: `go.mod`. Renovate ran additional commands to perform the update(s) described in the table above.
This PR contains the following updates:
v0.147.0 → v0.149.0 (three packages), v1.53.0 → v1.55.0 (two packages)

Release Notes
open-telemetry/opentelemetry-collector-contrib (github.com/open-telemetry/opentelemetry-collector-contrib/pkg/ottl)
v0.149.0
🛑 Breaking changes 🛑
- exporter/elasticsearch: Remove `host.os.type` encoding in ECS mode (#46900). Use processor/elasticapmprocessor v0.36.2 or later for `host.os.type` enrichment.
- receiver/prometheus: Remove the deprecated `report_extra_scrape_metrics` receiver configuration option and obsolete extra scrape metric feature gates. (#44181) `report_extra_scrape_metrics` is no longer accepted in `prometheus` receiver configuration. Control extra scrape metrics through the `PromConfig.ScrapeConfigs.ExtraScrapeMetrics` setting instead.
🚩 Deprecations 🚩
- receiver/awsfirehose: Deprecate built-in unmarshalers (`cwlogs`, `cwmetrics`, `otlp_v1`) in favor of encoding extensions. (#45830) Use the `aws_logs_encoding` extension (format: `cloudwatch`) instead of `cwlogs`, and the `awscloudwatchmetricstreams_encoding` extension instead of `cwmetrics` (format: `json`) or `otlp_v1` (format: `opentelemetry1.0`).
- receiver/file_log: Rename `filelog` receiver to `file_log` with deprecated alias `filelog` (#45339)
- receiver/kafka: Deprecate the built-in `azure_resource_logs` encoding in favour of the `azure` encoding extension. (#46267) The built-in `azure_resource_logs` encoding does not support all timestamp formats emitted by Azure services (e.g. US-format timestamps from Azure Functions). Users should migrate to the `azure` encoding extension, which provides full control over time formats and is actively maintained.
💡 Enhancements 💡
- cmd/opampsupervisor: Add configuration validation before applying remote config to prevent collector downtime (#41068) Validates collector configurations before applying them, preventing downtime from invalid remote configs. Disabled by default. Enable via `agent.validate_config: true`. May produce false positives when resources like ports are temporarily unavailable during validation.
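A minimal sketch of enabling this validation in the supervisor configuration, based on the `agent.validate_config` key named above (the rest of the supervisor file is omitted):

```yaml
agent:
  validate_config: true
```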
- connector/datadog: Document that the datadog connector is not supported in AIX environments (#47010) Explicitly opt out of host metadata computation in datadog components to support the AIX compilation target.
- connector/signal_to_metrics: Add `keys_expression` support in `include_resource_attributes` and `attributes` for dynamic attribute key resolution at runtime (#46884) The `keys_expression` field allows specifying an OTTL value expression that resolves to a list of attribute keys at runtime. This enables dynamic resource attribute filtering based on runtime data such as client metadata. Exactly one of `key` or `keys_expression` must be set per entry.
- connector/signal_to_metrics: Reduce per-signal allocations in the hot path by replacing attribute map allocation with a pooled hash-based ID check, and caching filtered resource attributes per metric definition within each resource batch. (#47197)
- connector/signal_to_metrics: Pre-compute the prefixed collector key to avoid a string allocation on every signal processed. (#47183)
- exporter/datadog: Document that the datadog exporter is not supported in AIX environments (#47010) Explicitly opt out of host metadata computation in datadog components to support the AIX compilation target.
- exporter/elasticsearch: Add `histogram:raw` mapping hint to bypass midpoint approximation for histogram metrics (#47150)
- exporter/elasticsearch: Cache the metric attribute set per bulk session instead of recomputing it for every document (#47170) `syncBulkIndexerSession.Add()` was calling `getAttributesFromMetadataKeys` + `attribute.NewSet` + `metric.WithAttributeSet` on every document in the hot path. The attribute set is derived from the request context metadata, which is constant for the lifetime of a session, so it is now computed once in `StartSession` and reused across all `Add()` calls in that session.
- exporter/elasticsearch: Populate the `_doc_count` field in ECS mapping mode (#46936) `_doc_count` is a special metadata field in Elasticsearch used when a document represents pre-aggregated data (like histograms or aggregate metrics). Previously, the elasticsearch exporter only populated this field for the OTel mapping mode (native OTel field structure). This change adds support for ECS mapping mode (native ECS field structure) so that both mapping modes behave consistently.
- exporter/elasticsearch: Encode `require_data_stream` in Elasticsearch bulk action metadata instead of the bulk request query string. (#46970) This preserves existing endpoint query parameters while moving `require_data_stream` to the per-document action line expected by newer bulk workflows. Benchmarks show a stable ~27 bytes/item NDJSON payload overhead before compression.
- exporter/elasticsearch: Improve performance of Elasticsearch exporter document serialisation (#47171)
- exporter/elasticsearch: Add a metric for docs retried because of request errors (#46215)
- exporter/kafka: Cache OTel metric attribute sets in the OnBrokerE2E hook to reduce per-export allocations (#47186) `OnBrokerE2E` previously rebuilt `attribute.NewSet` + `metric.WithAttributeSet` on every call. The set of distinct (nodeID, host, outcome) combinations is bounded by 2 × number-of-brokers, so the computed `MeasurementOption` is now cached per key.
- exporter/pulsar: This component does not support aix/ppc64. (#47010) Make the exporter explicitly panic if used in aix/ppc64 environments.
- extension/datadog: Document that the datadog extension is not supported in AIX environments (#47010) Explicitly opt out of host metadata computation in datadog components to support the AIX compilation target.
- extension/db_storage: Make dbstorage work in AIX environments (#47010) sqlite support is offered via modernc, which doesn't support the AIX ppc64 compilation target. Support for sqlite is carved out in AIX environments so contrib can compile for this target.
- extension/health_check: Add component event attributes to serialized output. (#43606) When `http.status.include_attributes` is enabled in the healthcheckv2 extension (with `use_v2: true`), users will see additional attributes in the status output. These attributes provide more context about component states, including details like error messages and affected components. For example:

  ```json
  { "healthy": false, "status": "error", "attributes": { "error_msg": "not enough permissions to read cpu data", "scrapers": ["cpu", "memory", "network"] } }
  ```

- extension/healthcheckv2: Add component event attributes to serialized output. (#43606) Same change and example output as extension/health_check above.
- extension/sigv4auth: Add support for External IDs when assuming roles in cross-account authentication scenarios (#44930) Added an `external_id` field to the AssumeRole configuration, allowing users to specify an External ID when assuming roles for enhanced cross-account security.
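A hedged sketch of how the new field could appear in a sigv4auth configuration; the key layout around `external_id` follows the extension's existing AssumeRole settings, and the ARN and ID values are purely illustrative:

```yaml
extensions:
  sigv4auth:
    region: us-east-1
    assume_role:
      arn: arn:aws:iam::123456789012:role/example-role
      external_id: my-external-id
```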
- internal/datadog: Do not compute host metadata in AIX environments (#47010) Explicitly opt out of host metadata computation in datadog components to support the AIX compilation target.
- pkg/stanza: Ensure the router operator does not split batches of entries (#42393)
- pkg/stanza: Parse all Windows Event XML fields into the log body, including RenderingInfo (with Culture, Channel, Provider, Task, Opcode, Keywords, Message), UserData, ProcessingErrorData, DebugData, and BinaryEventData. (#46943) Previously, RenderingInfo was only used to derive the top-level level/task/opcode/keywords/message fields. It is now also emitted as a top-level `rendering_info` key containing all fields including `culture`, `channel`, and `provider`. UserData (an alternative to EventData used by some providers) is now parsed into a `user_data` key. Rare schema elements ProcessingErrorData, DebugData, and BinaryEventData are also captured when present.
- processor/resourcedetection: Added IBM Cloud VPC resource detector to the Resource Detection Processor (#46874)
- processor/resourcedetection: Added IBM Cloud Classic resource detector to the Resource Detection Processor (#46874)
- processor/tail_sampling: Add `sampling_strategy` config with `trace-complete` and `span-ingest` modes for tail sampling decision timing and evaluation behavior. (#46600)
- receiver/awslambda: Enrich context with AWS Lambda receiver metadata for S3 logs (#47046)
- receiver/azure_event_hub: Add support for Azure Event Hubs distributed processing. This allows the receiver to automatically coordinate partition ownership and checkpointing across multiple collector instances via Azure Blob Storage. (#46595)
- receiver/docker_stats: Add TLS configuration support for connecting to the Docker daemon over HTTPS with client and server certificates. (#33557) A new optional `tls` configuration block is available in the `docker_stats` receiver config (and the shared `internal/docker` package). When omitted, the connection remains insecure (plain HTTP or Unix socket), preserving existing behavior. When provided, it supports the standard `configtls.ClientConfig` fields: `ca_file`, `cert_file`, `key_file`, `insecure_skip_verify`, `min_version`, and `max_version`. A warning is now emitted when a plain `tcp://` or `http://` endpoint is used without TLS, reflecting Docker's deprecation of unauthenticated TCP connections since Docker v26.0 (see https://docs.docker.com/engine/deprecated/#unauthenticated-tcp-connections).
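Using the `configtls.ClientConfig` fields listed above, a `docker_stats` TLS block might look like the following sketch (the endpoint and certificate paths are illustrative):

```yaml
receivers:
  docker_stats:
    endpoint: https://docker-host:2376
    tls:
      ca_file: /etc/docker/ca.pem
      cert_file: /etc/docker/client-cert.pem
      key_file: /etc/docker/client-key.pem
```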
- receiver/docker_stats: Add a `stream_stats` config option to maintain a persistent Docker stats stream per container instead of opening a new connection on every scrape cycle. (#46493) When `stream_stats: true` is set, each container maintains a persistent open Docker stats stream instead of opening and closing a new connection on every scrape cycle. The scraper reads from the cached latest value, which reduces connection overhead.
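Enabling the streaming mode described above is a one-line addition (sketch only):

```yaml
receivers:
  docker_stats:
    stream_stats: true
```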
- receiver/expvar: Enable the re-aggregation feature for the expvar receiver (#45396)
- receiver/file_log: Add `max_log_size_behavior` config option to control oversized log entry behavior (#44371) The new `max_log_size_behavior` setting controls what happens when a log entry exceeds `max_log_size`. `split` (default): splits oversized log entries into multiple log entries; this is the existing behavior. `truncate`: truncates oversized log entries and drops the remainder, emitting only a single truncated log entry.
- receiver/hostmetrics: Enable re-aggregation for the system scraper (#46624) Enabled the reaggregation feature gate for the system scraper.
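A sketch of the new option alongside `max_log_size` in a filelog receiver configuration (the file path and size limit are illustrative):

```yaml
receivers:
  filelog:
    include: [/var/log/app.log]
    max_log_size: 1MiB
    max_log_size_behavior: truncate
```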
- receiver/hostmetrics: Enable re-aggregation for the process scraper (#46623) Enabled the reaggregation feature gate for the process scraper and set all metric attributes (context_switch_type, direction, paging_fault_type, state) with requirement_level recommended.
- receiver/mongodb: Enable the re-aggregation feature for mongodb receiver metrics (#46366)
- receiver/mongodb: Add a `scheme` configuration option to support `mongodb+srv` connections (#36011) The new `scheme` field allows connecting to MongoDB clusters using SRV DNS records (the `mongodb+srv` protocol). Defaults to `"mongodb"` for backward compatibility.
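A sketch of connecting via SRV records with the new field; the `hosts` layout follows the receiver's existing config shape, and the hostname is illustrative:

```yaml
receivers:
  mongodb:
    scheme: mongodb+srv
    hosts:
      - endpoint: cluster0.example.mongodb.net
```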
- receiver/mysql: Add `mysql.query_plan.hash` attribute to top query log records, enabling users to correlate top queries with their corresponding execution plans. (#46626)
- receiver/mysql: Added `mysql.session.status` and `mysql.session.id` attributes to query samples. `mysql.session.status` indicates the session status (`waiting`, `running`, or `other`) at the time of the sample. `mysql.session.id` provides the unique session identifier. Both attributes provide additional context for understanding query performance and behavior. (#135350)
- receiver/mysql: Add and tune obfuscation of sensitive properties in both V1 and V2 JSON query plans. (#46629, #46587) Configure and test obfuscation for V1 and V2 plans, including tests of queries retrieved from the performance schema that are truncated and cannot be obfuscated. The importance of obfuscation can be very context dependent; sensitive PII, banking, and authorization data may reside in the same database as less sensitive data, and it can be vital to ensure that what is expected to be obfuscated is always obfuscated. Significant additional testing has been added around query plan obfuscation to enforce this and to give users a clear reference for what specifically is and is not obfuscated.
- receiver/mysql: Propagate W3C TraceContext from MySQL session variables to query sample log records. When a MySQL session sets `@traceparent`, the receiver extracts the TraceID and SpanID and stamps them onto the corresponding `db.server.query_sample` log record, enabling correlation between application traces and query samples. (#46631) Only samples from sessions where `@traceparent` is set will have non-zero `traceId` and `spanId` fields on the log record.
- receiver/prometheus: Add support for reading instrumentation scope attributes from `otel_scope_<attribute-name>` labels while feature-gating deprecation of `otel_scope_info`. (#41502) Scope attributes are always extracted from `otel_scope_<attribute-name>` labels on metrics. The `receiver.prometheusreceiver.IgnoreScopeInfoMetric` feature gate (alpha, disabled by default) controls only whether the legacy `otel_scope_info` metric is ignored for scope attribute extraction. When the gate is disabled, both mechanisms coexist to support migration. See the specification change for motivation: open-telemetry/opentelemetry-specification#4505
- receiver/pulsar: This component does not support aix/ppc64. (#47010) Make the receiver explicitly panic if used in aix/ppc64 environments.
- receiver/skywalking: Add feature gate `translator.skywalking.useStableSemconv` to update semantic conventions from v1.18.0 to v1.38.0 (#44796) A feature gate `translator.skywalking.useStableSemconv` has been added to control the migration. The gate is disabled by default (Alpha stage), so existing behavior is preserved.
- receiver/sqlquery: Add clickhouse support to sqlquery (#47116)
- receiver/sqlquery: Add `row_condition` to metric configuration for filtering result rows by column value (#45862) Enables extracting individual metrics from pivot-style result sets where each row represents a different metric (e.g. pgbouncer's `SHOW LISTS` command). When `row_condition` is configured on a metric, only rows where the specified column equals the specified value are used; all other rows are silently skipped.
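A rough sketch of how such a per-metric row filter could be expressed in a sqlquery receiver config. The `row_condition` sub-keys (`column`, `value`) and the pgbouncer column names are assumptions for illustration; consult the receiver README for the actual schema:

```yaml
receivers:
  sqlquery:
    driver: postgres
    datasource: "host=localhost port=6432 user=stats dbname=pgbouncer"
    queries:
      - sql: "SHOW LISTS"
        metrics:
          - metric_name: pgbouncer.pools
            value_column: items
            row_condition:
              column: list
              value: pools
```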
- receiver/sqlserver: Enable dynamic metric reaggregation in the SQL Server receiver. (#46379)
- receiver/yang_grpc: Support collecting any metric by browsing the whole metrics tree (#47054)

🧰 Bug fixes 🧰
- exporter/kafka: Fix the validation for `topic_from_metadata_key` to use partition keys. (#46994)
- exporter/kafka: Fix topic routing for multi-resource batches when `topic_from_attribute` is set without resource-level partitioning (#46872) Previously, when a batch contained multiple resources with different topic attribute values, all data was silently sent to the topic of the first resource. Each resource is now correctly routed to its own topic.
- exporter/splunk_hec: Fix timestamp precision in the Splunk HEC exporter to preserve microseconds instead of truncating to milliseconds. (#47175) Timestamps were rounded to milliseconds before sending to Splunk HEC. The rounding has been removed, giving microsecond precision in the HEC `time` field.
- extension/bearertokenauth: Redact the bearer token from authentication error messages to prevent credential exposure in logs. (#46200) Previously, when a client presented an invalid bearer token, the full token value was included in the error message returned by the Authenticate method. This error could be propagated to log output, exposing sensitive credentials. The error message now omits the token value entirely.
- internal/aws: Respect NO_PROXY/no_proxy environment variables when using env-based proxy configuration in awsutil (#46892) When no explicit proxy_address was configured, the HTTP client manually read HTTPS_PROXY and used http.ProxyURL, which ignores NO_PROXY. It now delegates to http.ProxyFromEnvironment, which correctly handles all proxy environment variables.
- processor/deltatorate: Append "/s" to the unit of output datapoints to reflect the per-second rate. (#46841)
- processor/filter: Fix validation of include and exclude severity configurations so they run independently of LogConditions. (#46883)
- receiver/datadog: Propagate Datadog trace sampling priority to all spans translated from a trace chunk. (#45402)
- receiver/file_log: Fix data corruption after file compression (#46105) After a log file is compressed (e.g. test.log → test.log.gz), the receiver configured with `compression: auto` will now correctly decompress the content and continue reading from where the plaintext file left off.
- receiver/file_log: Fix a bug where the File Log receiver did not read the last line of gzip-compressed files. (#45572)
- receiver/hostmetrics: Align HugePages metric instrument types with the semantic conventions by emitting page_size, reserved, and surplus as non-monotonic sums instead of gauges. (#42650)
- receiver/hostmetrics: Handle nil PageFaultsStat in the process scraper to prevent a panic on zombie processes. (#47095)
- receiver/journald: Fix emitting of historical entries on startup (#46556) When start_at is "end" (the default), pass --lines=0 to journalctl to suppress the 10 historical entries it emits by default in follow mode.
- receiver/k8s_events: Exclude DELETED watch events to prevent duplicate event ingestion. (#47035)
- receiver/mysql: Remove the deprecated `information_schema.processlist` JOIN from the query samples template; use `thread.processlist_host` instead. (#47041)
- receiver/oracledb: Fix the oracledb receiver aborting an entire scrape when a SQL query text fails to obfuscate (e.g. due to Oracle truncating a CLOB mid-string-literal). The affected entry is now skipped with a warning log and the rest of the scrape continues normally. (#47151)
- receiver/otelarrow: Remove assumed positions of OTel Arrow root payload types (#46878)
- receiver/otelarrow: Fix OTLP fallback handlers returning codes.Unknown instead of codes.Unavailable for pipeline errors, which caused upstream exporters to permanently drop data instead of retrying. (#46182)
- receiver/pprof: Fix the pprof receiver file_scraper appending resource profiles instead of merging them. (#46991)
- receiver/prometheus_remote_write: Count target_info samples in PRW response stats (#47108)

v0.148.0
🛑 Breaking changes 🛑
- all: Remove the k8slog receiver after being unmaintained for 3 months (#46544)
- all: Remove the deprecated SAPM exporter (#46555)
- all: Remove the datadogsemantics processor. (#46893) If you need help, please contact Datadog support: https://www.datadoghq.com/support.
- exporter/google_cloud_storage: `reuse_if_exists` behavior changed: it now checks bucket existence instead of attempting creation (#45971) Previously, `reuse_if_exists=true` would attempt bucket creation and fall back to reusing on conflict. Now, `reuse_if_exists=true` checks whether the bucket exists (via storage.buckets.get) and uses it, failing if it doesn't exist. Set it to true when the service account lacks project-level bucket creation permissions but has bucket-level permissions. `reuse_if_exists=false` still attempts to create the bucket and fails if it already exists.
- exporter/kafka: Remove deprecated top-level `topic` and `encoding` configuration fields (#46916) The top-level `topic` and `encoding` fields were deprecated in v0.124.0. Use the per-signal fields instead: `logs::topic`, `metrics::topic`, `traces::topic`, `profiles::topic`, and the corresponding `encoding` fields under each signal section.
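A sketch of the per-signal form that replaces the removed top-level fields (topic names are illustrative):

```yaml
exporters:
  kafka:
    logs:
      topic: otlp-logs
    metrics:
      topic: otlp-metrics
    traces:
      topic: otlp-traces
```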
- exporter/kafka: Remove kafka-local batching partitioner wiring and require explicit `sending_queue::batch::partition::metadata_keys` configuration as a superset of `include_metadata_keys` when batching is enabled. (#46757)
- pkg/ottl: The `truncate_all` function now supports UTF-8 safe truncation (#36713) The default `truncate_all` behavior has changed. Truncation now respects UTF-8 character boundaries by default (new optional parameter `utf8_safe`, default: `true`), so results stay valid UTF-8 and may be slightly shorter than the limit. To keep the previous byte-level truncation behavior (e.g. for non-UTF-8 data or to avoid any behavior change), set `utf8_safe` to `false` in all `truncate_all` usages.
- receiver/awsecscontainermetrics: Add ephemeral storage metrics and fix unit strings from Megabytes to MiB (#46414) Adds two new task-level gauge metrics: `ecs.task.ephemeral_storage.utilized` and `ecs.task.ephemeral_storage.reserved` (in MiB). These metrics are available on AWS Fargate Linux platform version 1.4.0+ and represent the shared ephemeral storage for the entire task. Breaking change: the unit string for `ecs.task.memory.utilized`, `ecs.task.memory.reserved`, `container.memory.utilized`, and `container.memory.reserved` has been corrected from `"Megabytes"` to `"MiB"`. The underlying values were already in MiB (computed via division by 1024*1024), but the unit label was incorrect. Users relying on the exact unit string (e.g. in metric filters or dashboards) will need to update accordingly.
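To keep the old byte-level behavior of `truncate_all` described above, the new `utf8_safe` parameter can be set to `false` in a transform processor statement. A sketch only: the limit value is illustrative, and passing `utf8_safe` as a positional third argument is an assumption about how the optional OTTL parameter is supplied:

```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          - truncate_all(attributes, 4096, false)
```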
- receiver/mysql: Set the default collection of query_sample to false (#46902)
- receiver/postgresql: Disable default collection of top_query and query_sample events. (#46843) This change is breaking because it disables the default collection of top_query and query_sample events. These events will need to be enabled manually if desired.
- receiver/redfish: The `system.host_name` and `base_url` resource attributes have been changed to `host.name` and `url.full` respectively. (#46236)
- receiver/windowseventlog: Change event_data from an array of single-key maps to a flat map by default, making fields directly accessible via OTTL. The previous format is available by setting `event_data_format: array`. (#42565, #32952) Named elements become direct keys (e.g., `body["event_data"]["ProcessId"]`). Anonymous elements use numbered keys: param1, param2, etc. To preserve the previous array format, set `event_data_format: array` in the receiver configuration.
🚩 Deprecations 🚩
- exporter/azure_blob: Introduce new snake-case-compliant name `azure_blob` (#46722)
- exporter/google_cloud_storage: Introduce new snake-case-compliant name `google_cloud_storage` (#46733)
- extension/aws_logs_encoding: Introduce new snake-case-compliant name `aws_logs_encoding` (#46776)
- extension/azure_auth: Introduce new snake-case-compliant name `azure_auth` (#46775)
- extension/cgroup_runtime: Introduce new snake-case-compliant name `cgroup_runtime` (#46773)
- extension/google_cloud_logentry_encoding: Introduce new snake-case-compliant name `google_cloud_logentry_encoding` (#46778)
- processor/metric_start_time: Introduce new snake-case-compliant name `metric_start_time` (#46777)
- receiver/azure_blob: Introduce new snake-case-compliant name `azure_blob` (#46721)
- receiver/azure_monitor: Introduce new snake-case-compliant name `azure_monitor` (#46730)
- receiver/cisco_os: Introduce new snake-case-compliant name `cisco_os` (#46948)
- receiver/macos_unified_logging: Introduce new snake-case-compliant name `macos_unified_logging` (#46729)
- receiver/prometheus_remote_write: Introduce new snake-case-compliant name `prometheus_remote_write` (#46726)
- receiver/yang_grpc: Introduce new snake-case-compliant name `yang_grpc` (#46723)

🚀 New components 🚀
- receiver/azure_functions: Introduce a new component to receive logs from Azure Functions (#43507) This change includes only the overall structure, readme, and configuration for the new component.
💡 Enhancements 💡
- cmd/opampsupervisor: Add a configurable instance ID to the Supervisor (#45596)
- connector/signal_to_metrics: Add `sum.monotonic` property for improved counter handling (#45865)
- connector/spanmetrics: Add support for W3C tracestate-based adjusted count in span metrics with stochastic rounding (#45539) The span metrics connector now supports extracting sampling information from the W3C tracestate to generate extrapolated span metrics with adjusted counts. This enables accurate metric aggregation for sampled traces by computing stochastic-rounded adjusted counts based on the sampling threshold (the `ot.th` field) in the tracestate.
- exporter/bmchelix: Enrich metric names with datapoint attributes for unique identification in BMC Helix Operations Management (#46558) This feature is controlled by the `enrich_metric_with_attributes` configuration option (default: `true`). Set it to `false` to disable enrichment and reduce metric cardinality. Normalization is applied to ensure BHOM compatibility: `entityTypeId` and `entityName` have invalid characters replaced with underscores (colons are not allowed as they are used as separators in entityId); `metricName` is normalized to match the pattern `[a-zA-Z_:.][a-zA-Z0-9_:.]*`.
- exporter/clickhouse: Add per-pipeline JSON support for the ClickHouse exporter, deprecate the JSON feature gate (#46553) Previously, the `clickhouse.json` feature gate was used to enable JSON for all ClickHouse exporter instances. This feature gate is now deprecated. Use the `json` config option instead, which allows per-pipeline control.
- exporter/elasticsearch: Add per-document `dynamic_templates` for metrics in ECS mapping mode (#46499) Each bulk index action for ECS metrics now includes dynamic_templates so Elasticsearch can apply the correct mapping (e.g. histogram_metrics, summary_metrics, double_metrics) for the ECS mapping mode. The OTel mapping mode already sent dynamic_templates.
- exporter/elasticsearch: Add `http.response.status_code` to failed document logs to allow for better filtering and error analysis. (#45829)
- exporter/elasticsearch: Update the ECS mode encoder to add conversions for `telemetry.sdk.language` and `telemetry.sdk.version` (#46690) Conversions map the semconv attributes `telemetry.sdk.language`/`telemetry.sdk.version` to `service.language.name`/`service.language.version`.
- extension/aws_logs_encoding: Adopt streaming for Network Firewall logs (#46214)
- extension/aws_logs_encoding: Adopt streaming for the CloudTrail signal (#46214)
- extension/aws_logs_encoding: Adopt the encoding extension streaming contract for WAF logs (#46214)
- extension/aws_logs_encoding: Adopt streaming for S3 access logs (#46214)
- extension/aws_logs_encoding: Adopt the encoding extension streaming contract for VPC flow logs (#46214)
- extension/aws_logs_encoding: Adopt the encoding extension streaming contract for CloudWatch Logs subscriptions (#46214)
- extension/aws_logs_encoding: Adopt streaming for the ELB signal (#46214)
- extension/awscloudwatchmetricstreams_encoding: Adopt the encoding extension streaming contract for OpenTelemetry v1 formatted metrics (#46214)
- extension/azure_encoding: Add an encoding.format attribute to Azure logs to identify the log type (#44278)
- extension/azure_encoding: Promote the Azure Encoding extension to Alpha stability. (#46886)
- extension/azure_encoding: Add processing for Azure Metrics (#41725)
- extension/datadog: Set the `os.type` resource attribute if not already present for Fleet Automation metadata. (#46896)
- extension/headers_setter: Add support for file-based credentials via a `value_file` configuration option. Files are watched for changes and header values are automatically updated. (#46473) This is useful for credentials that are rotated, such as Kubernetes secrets. Example configuration:

  ```yaml
  headers_setter:
    headers:
      - key: X-API-Key
        value_file: /var/secrets/api-key
  ```
- extension/oidc: Add logging for failed authentication attempts with client IP and username. (#46482)
- internal/kafka: Add support for authentication via OIDC to the Kafka client. (#41872) It provides an implementation of SASL/OAUTHBEARER for Kafka components by integrating with auth extensions that provide OAuth2 tokens, such as oauth2clientauth. Token acquisition/refresh/exchange is controlled by auth extensions. To use this, your configuration would look something like:

  ```yaml
  extensions:
    oauth2client:
      client_id_file: /path/to/client_id_file
      client_secret: /path/to/client_secret_file

  exporters:
    kafka:
      auth:
        sasl:
          mechanism: OAUTHBEARER
          oauthbearer_token_source: oauth2client
  ```
- pkg/azurelogs: Remove semconv v1.28.0 and v1.34.0 dependencies, migrating to v1.38.0 via paired feature gates (#45033, #45034) Two new alpha feature gates control the migration: `pkg.translator.azurelogs.EmitV1LogConventions` emits stable attribute names (`code.function.name`, `code.file.path`, `eventName` per log record); `pkg.translator.azurelogs.DontEmitV0LogConventions` suppresses the old names (`code.function`, `code.filepath`, `event.name` on resource). Both gates default to off; enable `EmitV1LogConventions` first for a dual-emit migration window.
- pkg/coreinternal: Add feature gates to migrate semconv v1.12.0 attributes to v1.38.0 equivalents in goldendataset (#45076) The following attribute keys from `go.opentelemetry.io/otel/semconv/v1.12.0` can now be migrated to their `v1.38.0` equivalents using feature gates (both default to disabled, preserving the old behavior): `net.host.ip` -> `network.local.address`, `net.peer.ip` -> `network.peer.address`, `http.host` -> `server.address`, and `http.server_name` -> `server.address` (enable `internal.coreinternal.goldendataset.EmitV1NetworkConventions`). To stop emitting the deprecated v1.12.0 attributes, also enable `internal.coreinternal.goldendataset.DontEmitV0NetworkConventions` (requires `internal.coreinternal.goldendataset.EmitV1NetworkConventions` to also be enabled).
- pkg/fileconsumer: filelog receiver checkpoint storage now supports protobuf encoding behind a feature gate for improved performance and reduced storage usage (#43266) Added optional protobuf encoding for filelog checkpoint storage, providing ~7x faster decoding and 31% storage savings. Enable with the feature gate `--feature-gates=filelog.protobufCheckpointEncoding`. The feature is in stage Alpha (disabled by default) and includes full backward compatibility with JSON checkpoints.
- `pkg/ottl`: Improve unsupported-type error diagnostics in the `Len()` OTTL function by including the runtime type in error messages. (#46476)
- `pkg/stanza`: Implement `if` field support for the recombine operator so entries not matching the condition pass through unrecombined. (#46048)
- `pkg/zipkin`: Add feature gates to migrate semconv v1.12.0 attributes to v1.38.0 equivalents (#45076)
  The following attribute keys from `go.opentelemetry.io/otel/semconv/v1.12.0` can now be migrated to their `v1.38.0` equivalents using feature gates (both default to disabled, preserving the old behavior):
  - `net.host.ip` -> `network.local.address` (enable `pkg.translator.zipkin.EmitV1NetworkConventions`)
  - `net.peer.ip` -> `network.peer.address` (enable `pkg.translator.zipkin.EmitV1NetworkConventions`)
  To stop emitting the deprecated v1.12.0 attributes, also enable `pkg.translator.zipkin.DontEmitV0NetworkConventions` (requires `pkg.translator.zipkin.EmitV1NetworkConventions` to also be enabled).
- `processor/k8s_attributes`: Log a warning in case deprecated attributes are enabled (#46932)
- `processor/k8s_attributes`: Bump version of semconv to 1.40 (#46644)
- `processor/redaction`: Document audit trail attributes emitted when `summary` is set to `debug` or `info` (#46648)
  Adds an Audit Trail section to the README describing the diagnostic attributes
the processor appends to spans, log records, and metric datapoints, including
a worked example. Also fixes the example output to omit zero-count attributes
that are never emitted, and restores URL Sanitization and Span Name Sanitization
as top-level README sections.
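As a sketch of the `summary` setting that the new Audit Trail section documents (the allow-list keys below are illustrative, not from this PR):

```yaml
processors:
  redaction:
    allow_all_keys: false
    allowed_keys: [http.method, http.status_code]  # illustrative allow-list
    summary: debug  # or "info"; emits the audit-trail attributes
```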
- `receiver/aerospike`: Enable the re-aggregation feature for the aerospike receiver (#46347)
- `receiver/awslambda`: Adopt encoding extension streaming for AWS Lambda receiver (#46608)
- `receiver/awslambda`: Promote AWS Lambda receiver to Alpha stability. (#46888)
- `receiver/cisco_os`: Add cisco_os receiver to the contrib distribution (#46948)
- `receiver/cloudflare`: Add `max_request_body_size` config option. (#46630)
- `receiver/docker_stats`: Enables dynamic metric reaggregation in the Docker Stats receiver. This does not break existing configuration files. (#45396)
- `receiver/filelog`: Add `include_file_permissions` option (#46504)
- `receiver/flinkmetrics`: Enable re-aggregation feature by classifying attributes with `requirement_level` and setting `reaggregation_enabled` to true (#46356)
  Attributes are classified as required when aggregating across them produces meaningless results (`checkpoint`, `garbage_collector_name`, `record`), and recommended when totals remain operationally meaningful (`operator_name`).
- `receiver/github`: Enables dynamic metric reaggregation in the GitHub receiver. This does not break existing configuration files. (#46385)
- `receiver/haproxy`: Add `haproxy.server.state` resource attribute to expose server status (UP, DOWN, MAINT, etc.) (#46799)
  The new resource attribute is disabled by default and can be enabled via configuration.
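Resource attributes in mdatagen-based receivers are typically toggled under a `resource_attributes` block in the receiver config; a hedged sketch for the new haproxy attribute (the stats endpoint is illustrative, not from this PR):

```yaml
receivers:
  haproxy:
    endpoint: http://127.0.0.1:8404/stats  # illustrative stats endpoint
    resource_attributes:
      haproxy.server.state:
        enabled: true  # disabled by default per this release note
```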
- `receiver/hostmetrics`: Enable dynamic metric reaggregation for the cpu scraper in the hostmetrics receiver. (#46386)
- `receiver/hostmetrics`: Enable re-aggregation feature for the memory scraper to support dynamic metric attribute configuration at runtime. (#46618)
- `receiver/hostmetrics`: Enable re-aggregation feature for the load scraper by setting `reaggregation_enabled`. (#46617)
- `receiver/hostmetrics`: Enable metric re-aggregation for paging scrapers. (#46386, #46621)
- `receiver/hostmetrics`: Enables re-aggregation for nfs scraper (#46386, #46620)
- `receiver/hostmetrics`: Enable re-aggregation feature for the filesystem scraper by setting `reaggregation_enabled` and adding `requirement_level` to attributes. (#46616)
- `receiver/hostmetrics`: Enable re-aggregation for processes scraper (#46622)
  Enabled the reaggregation feature gate for the processes scraper and set the status attribute requirement level to recommended.
- `receiver/hostmetrics`: Enable re-aggregation feature for the disk scraper by setting `reaggregation_enabled` and adding `requirement_level` to attributes. (#46615)
- `receiver/hostmetrics`: Enable re-aggregation feature for the network scraper by setting `reaggregation_enabled` and adding `requirement_level` to attributes. (#46619)
- `receiver/iis`: Enable re-aggregation and set requirement levels for attributes. (#46360)
- `receiver/kafka`: Add `kafka.topic`, `kafka.partition`, `kafka.offset` to client metadata (#45931)
- `receiver/kafkametrics`: Enable re-aggregation feature for kafkametrics receiver to support dynamic metric attribute configuration at runtime. (#46362)
- `receiver/mysql`: Enables dynamic metric reaggregation in the MySQL receiver. This does not break existing configuration files. (#45396)
- `receiver/oracledb`: Add `oracledb.procedure_execution_count` attribute to top query events for stored procedure execution tracking (#46487)
  This value is derived from MAX(EXECUTIONS) across all SQL statements sharing the same PROGRAM_ID in V$SQL, providing an accurate procedure-level execution count even for multi-statement stored procedures.
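Top-query collection in the database receivers is typically toggled through the generated `events` section of the receiver config. A hedged sketch, assuming the oracledb receiver follows the mdatagen events pattern used by sibling database receivers (the event name, DSN, and values below are illustrative assumptions, not taken from this PR):

```yaml
receivers:
  oracledb:
    datasource: oracle://otel:password@localhost:1521/ORCLPDB1  # illustrative DSN
    events:
      db.server.top_query:
        enabled: true  # assumed event name following the mdatagen events pattern
```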
- `receiver/oracledb`: Add `oracledb.command_type` attribute to the Top-Query collection. (#46838)
- `receiver/podman_stats`: Enable dynamic metric reaggregation in the Podman receiver. (#46372)
- `receiver/postgresql`: Enables dynamic metric reaggregation in the PostgreSQL receiver. This does not break existing configuration files. (#45396)

Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR was generated by Mend Renovate. View the repository job log.