deps: update opentelemetry-cpp to 1.25.0 #422
base: node-v24.x-nsolid-v6.x
Conversation
Make sure we take `service.version` into account once the fix for `appVersion` was introduced. Make sure we use the common `checkResource()` function in the tracing test too.
PR-URL: #180
Reviewed-By: Rafael Gonzaga <[email protected]>
Reviewed-by: Trevor Norris <[email protected]>
PR-URL: #371
Reviewed-By: Juan José Arboleda <[email protected]>
Walkthrough

Bazel/CMake config updates, version bumps, and C++17 adoption. Extensive API/SDK changes: new metrics multi-observer callback API, TLS-configurable OTLP builders, Prometheus/ostream/OTLP builder libraries, a configuration subsystem (YAML parser, types, env-driven options), ETW exporter tracing/timestamp tweaks, numerous semconv additions/updates, Zipkin builder removal, and tests updated accordingly.
Sequence Diagram(s)

sequenceDiagram
autonumber
actor App as Application
participant Meter as Meter (SDK)
participant Reg as ObservableRegistry
participant MR as MultiObserverResult
participant Inst as ObservableInstrument(s)
participant Reader as MetricReader
rect rgba(200, 230, 255, 0.5)
App->>Meter: RegisterCallback(cb, state, [Inst...])
Meter->>Reg: AddCallback(cb, state, [Inst...])
Reg->>MR: Register instruments
Reg-->>App: callback_id
end
rect rgba(200, 255, 200, 0.5)
Reader->>Meter: Collect()
Meter->>Reg: Observe()
Reg->>cb: Invoke with MR
cb->>MR: ForInstrument<T>().Observe(...)
Reg->>MR: StoreResults(collection_ts)
MR->>Inst: Write to storage
Reg-->>Meter: Done
Meter-->>Reader: Export metrics
end
rect rgba(255, 220, 200, 0.5)
App->>Meter: DeregisterCallback(callback_id)
Meter->>Reg: RemoveCallback(callback_id)
Reg-->>App: Ack
end
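The flow above reads roughly as follows in C++. This is an illustrative sketch only: the MultiObserverResult/ForInstrument names come from the diagram and the RegisterCallback shape from the multi_observer_test.cc snippet further down in this review; the exact signatures in the new opentelemetry-cpp API may differ, and the meter, the two instrument pointers, and the stats object are assumed to have been created earlier.

// Assumed user-side state; not part of the opentelemetry-cpp API.
struct MyStats
{
  int64_t queue_depth;
  double cpu_seconds;
};

// One callback reports to several registered instruments in a single pass.
void MultiObserverCallback(metrics::MultiObserverResult &result, void *state)
{
  auto *stats = static_cast<MyStats *>(state);
  result.ForInstrument<int64_t>(queue_depth_instrument).Observe(stats->queue_depth);
  result.ForInstrument<double>(cpu_time_instrument).Observe(stats->cpu_seconds);
}

// Registration / collection / deregistration, mirroring the diagram:
std::vector<metrics::ObservableInstrument *> instruments{queue_depth_instrument,
                                                         cpu_time_instrument};
auto callback_id = meter->RegisterCallback(
    MultiObserverCallback, &stats,
    nostd::span<metrics::ObservableInstrument *>{instruments.data(), instruments.size()});
// MetricReader::Collect() later calls ObservableRegistry::Observe(), which invokes the
// callback and stores all results with one collection timestamp.
meter->DeregisterCallback(callback_id);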
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~150 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
deps/opentelemetry-cpp/exporters/otlp/test/otlp_http_exporter_test.cc (1)
173-227: ⚠️ Potential issue | 🟠 Major

Finish the HTTP callback on guard exits.

Lines 180–199 return early without finishing the callback, which can leave the request hanging and make ForceFlush() flaky or block retries. Ensure the callback is always completed, even for guard failures.

💡 Suggested fix (apply the same pattern to similar lambdas)

-  if (check_json["resourceSpans"].size() == 0)
-  {
-    return;
-  }
+  auto finish = [&]() {
+    http_client::nosend::Response response;
+    response.Finish(*callback.get());
+  };
+  if (check_json["resourceSpans"].size() == 0)
+  {
+    finish();
+    return;
+  }
   ...
-  if (received_trace_id != report_trace_id)
-  {
-    return;
-  }
+  if (received_trace_id != report_trace_id)
+  {
+    finish();
+    return;
+  }
   ...
-  http_client::nosend::Response response;
-  response.Finish(*callback.get());
+  finish();

deps/opentelemetry-cpp/exporters/otlp/test/otlp_http_log_record_exporter_test.cc (1)
161-244: ⚠️ Potential issue | 🟠 Major

Finish the HTTP callback on guard exits.

Lines 168–181 return early without finishing the callback, which can leave the request hanging and make ForceFlush() flaky or block retries. Ensure the callback is always completed, even for guard failures.

💡 Suggested fix (apply the same pattern to similar lambdas)

-  if (check_json["resourceLogs"].size() == 0)
-  {
-    return;
-  }
+  auto finish = [&]() {
+    http_client::nosend::Response response;
+    response.Finish(*callback.get());
+  };
+  if (check_json["resourceLogs"].size() == 0)
+  {
+    finish();
+    return;
+  }
   ...
-  if (received_trace_id == report_trace_id && received_span_id == report_span_id)
-  {
-    ++received_record_counter;
-  }
+  if (received_trace_id == report_trace_id && received_span_id == report_span_id)
+  {
+    ++received_record_counter;
+  }
   ...
-  http_client::nosend::Response response;
-  response.Finish(*callback.get());
+  finish();

deps/opentelemetry-cpp/exporters/etw/include/opentelemetry/exporters/etw/etw_tracer.h (1)
165-422: ⚠️ Potential issue | 🟠 Major

Validate provider handle before marking tracer as open.

The constructor unconditionally sets isClosed_ to false after etwProvider().open(provId, encoding) without checking whether the handle is valid. Since open() can fail silently and return a Handle with providerHandle == INVALID_HANDLE (see etw_provider.h lines 188–190 and 154, 172), IsClosed() will misreport the tracer's state. The destructor will subsequently attempt to close an invalid handle. Check provHandle.providerHandle != INVALID_HANDLE before setting isClosed_ = false, or handle registration failure appropriately in the constructor.
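A self-contained sketch of the guarded pattern this comment asks for; Handle, INVALID_HANDLE, and the member names here are simplified stand-ins for the real types in etw_provider.h / etw_tracer.h, not the actual exporter code:

#include <cstdint>

// Stand-ins for the ETW provider types referenced above; names are assumptions.
static constexpr uint64_t INVALID_HANDLE = 0;

struct Handle
{
  uint64_t providerHandle = INVALID_HANDLE;
};

class TracerSketch
{
public:
  explicit TracerSketch(Handle opened) : provHandle(opened)
  {
    // Only mark the tracer as open when registration produced a valid handle;
    // otherwise IsClosed() keeps reporting true and the destructor will not
    // try to close an invalid handle.
    if (provHandle.providerHandle != INVALID_HANDLE)
    {
      isClosed_ = false;
    }
  }

  bool IsClosed() const noexcept { return isClosed_; }

private:
  Handle provHandle;
  bool isClosed_ = true;
};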
deps/opentelemetry-cpp/api/include/opentelemetry/context/runtime_context.h (1)

244-343: ⚠️ Potential issue | 🟠 Major

Fix out-of-bounds read in Contains() when size_ exceeds max_capacity_.

When size_ grows beyond capacity_ (which happens when size_ > max_capacity_), the Contains() method iterates from size_ down to 1 and accesses base_[pos - 1], reading past the allocated buffer. This occurs in Detach() when it calls Contains() in deep-nesting scenarios. Clamp the search range to the actual allocated capacity:

Proposed fix

  bool Contains(const Token &token) const noexcept
  {
-   for (size_t pos = size_; pos > 0; --pos)
+   const size_t search_size = (std::min)(size_, capacity_);
+   for (size_t pos = search_size; pos > 0; --pos)
    {
      if (token == base_[pos - 1])
      {
        return true;
      }
    }
    return false;
  }
🤖 Fix all issues with AI agents
In `@deps/opentelemetry-cpp/.bcr/presubmit.yml`:
- Around line 10-14: The CI presubmit uses a single verify_targets task that
sets build_flags['--cxxopt=-std=c++17'] for all platforms which breaks Windows
MSVC; split into two platform-specific verify tasks (e.g., verify_targets_unix
and verify_targets_windows) so the Unix-like matrix entries (debian10, macos,
ubuntu2004) keep build_flags with --cxxopt=-std=c++17 while the Windows entry
uses --cxxopt=/std:c++17; ensure each task still includes the same build_targets
(e.g., '@opentelemetry-cpp//api') and the matrix/when conditions select the
correct platforms.
In `@deps/opentelemetry-cpp/.devcontainer/Dockerfile.dev`:
- Around line 25-28: Replace the interactive update-alternatives calls with
non-interactive --set invocations: instead of running "update-alternatives
--config clang-tidy" and "update-alternatives --config llvm-config", use
"update-alternatives --set clang-tidy /usr/bin/clang-tidy-20" and
"update-alternatives --set llvm-config /usr/bin/llvm-config-20" so the Docker
build does not require a TTY; update the RUN line that currently calls
update-alternatives for clang-tidy and llvm-config accordingly.
In
`@deps/opentelemetry-cpp/api/include/opentelemetry/semconv/incubating/gcp_attributes.h`:
- Around line 101-107: The documentation comment above the constant
kGcpApphubDestinationServiceCriticalityType uses “destination workload” but the
constant is for destination service criticality; update the semconv
source/template to replace “destination workload” with “destination service” in
the docstring for kGcpApphubDestinationServiceCriticalityType (and any matching
incubating template entries), then regenerate the generated header so the
updated comment is emitted alongside that constant.
In
`@deps/opentelemetry-cpp/api/include/opentelemetry/semconv/incubating/system_metrics.h`:
- Around line 605-695: The deprecated metrics for
kMetricSystemLinuxMemoryAvailable and kMetricSystemLinuxMemorySlabUsage use
packet semantics and Counter types incorrectly; update
descrMetricSystemLinuxMemoryAvailable and descrMetricSystemLinuxMemorySlabUsage
to memory-appropriate text (e.g., "An estimate of how much memory is
available..." and "Reports the memory used by the slab allocator..."), change
unitMetricSystemLinuxMemoryAvailable and unitMetricSystemLinuxMemorySlabUsage to
"By", replace synchronous Counter types and factory calls in
CreateSyncInt64MetricSystemLinuxMemoryAvailable/CreateSyncDoubleMetricSystemLinuxMemoryAvailable
and
CreateSyncInt64MetricSystemLinuxMemorySlabUsage/CreateSyncDoubleMetricSystemLinuxMemorySlabUsage
to UpDownCounter versions (CreateInt64UpDownCounter/CreateDoubleUpDownCounter
and corresponding unique_ptr<metrics::UpDownCounter<...>> types), and replace
asynchronous ObservableCounter usage in
CreateAsyncInt64MetricSystemLinuxMemoryAvailable/CreateAsyncDoubleMetricSystemLinuxMemoryAvailable
and
CreateAsyncInt64MetricSystemLinuxMemorySlabUsage/CreateAsyncDoubleMetricSystemLinuxMemorySlabUsage
with ObservableUpDownCounter and
CreateInt64ObservableUpDownCounter/CreateDoubleObservableUpDownCounter to match
the non-deprecated memory semantics.
In
`@deps/opentelemetry-cpp/exporters/otlp/test/otlp_http_metric_exporter_test.cc`:
- Around line 287-293: The early-return branch that checks
request_body.resource_metrics_size()/scope_metrics_size()/metrics_size() exits
without completing the RPC; modify that branch to call response.Finish(...)
before returning (e.g., response.Finish(grpc::Status::OK) or the appropriate
Finish signature used elsewhere in this test) so the call's callback is always
invoked; update the block containing request_body and received_record_counter to
call response.Finish(...) then return.
- Around line 186-200: The early-return branches in the test (after checking
check_json["resourceMetrics"], resource_metrics["scopeMetrics"], and
scope_metrics["metrics"]) skip calling response.Finish(*callback.get()), which
can leave the HTTP client callback unresolved; update each early-return path in
otlp_http_metric_exporter_test.cc to call response.Finish(*callback.get())
before returning (or otherwise ensure the callback is completed), referencing
the existing response and callback objects used in the test so the HTTP client
is always signaled even when the JSON guards cause an early exit.
In
`@deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/configuration/configuration_parser.h`:
- Around line 374-377: Remove the unused member variable version_ from the
ConfigurationParser class declaration in configuration_parser.h: the file
currently declares version_, version_major_, and version_minor_, but only
version_major_ and version_minor_ are used/initialized; delete the unused
std::string version_ member to avoid dead code and keep the class consistent
with its implementation.
In
`@deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/configuration/configuration.h`:
- Line 70: Remove the erroneous use of the C-style `enum` keyword when declaring
the scoped enum variable: update the declaration of `log_level` to use the
scoped enum type `SeverityNumber` directly (replace `enum SeverityNumber
log_level = SeverityNumber::info;` with a declaration that omits `enum`) so the
variable `log_level` is correctly typed as `SeverityNumber` and initialized to
`SeverityNumber::info`.
In
`@deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/configuration/configured_sdk.h`:
- Around line 51-52: The member log_level in class ConfiguredSdk is left
uninitialized causing undefined behavior when Install() calls SetLogLevel();
modify the ConfiguredSdk() constructor to value-initialize log_level to
opentelemetry::sdk::common::internal_log::LogLevel::None so log_level has a
well-defined default before Install() reads it (leave resource initialization
as-is).
In `@deps/opentelemetry-cpp/sdk/src/CMakeLists.txt`:
- Around line 11-13: The CMake conditional USING the WITH_CONFIGURATION symbol
is never defined so the configuration subdirectory never builds; add an option
definition for WITH_CONFIGURATION in the parent CMakeLists so the flag can be
toggled (e.g., define a CMake option named WITH_CONFIGURATION with a sensible
default such as ON) and ensure that the three places that check
IF(WITH_CONFIGURATION) now respect that option; update the top-level or sdk
CMakeLists to declare the option before those IF(...) checks so
add_subdirectory(configuration) can be enabled when desired.
In `@deps/opentelemetry-cpp/sdk/src/configuration/CMakeLists.txt`:
- Around line 30-34: The pkg-config entry for the "configuration" component uses
the wrong description string; update the opentelemetry_add_pkgconfig call for
the component named "configuration" so the second description argument refers to
configuration (e.g., "Configuration components for the OpenTelemetry SDK" or
similar) rather than "Components for exporting traces in the OpenTelemetry SDK."
In `@deps/opentelemetry-cpp/sdk/src/trace/batch_span_processor_options.cc`:
- Around line 30-38: Both GetMaxQueueSizeFromEnv() and
GetMaxExportBatchSizeFromEnv() must guard against environment-parsed zero
values: after calling
opentelemetry::sdk::common::GetUintEnvironmentVariable(...) and before returning
the parsed value, check if the parsed uint value is zero and if so return
kDefaultMaxQueueSize (for GetMaxQueueSizeFromEnv) or kDefaultMaxExportBatchSize
(for GetMaxExportBatchSizeFromEnv); keep the existing fallback when
GetUintEnvironmentVariable fails, and otherwise return
static_cast<size_t>(value) only when value != 0.
🧹 Nitpick comments (11)
deps/opentelemetry-cpp/exporters/ostream/src/console_push_metric_builder.cc (1)
54-54: Nitpick: Trailing semicolon after switch block.

Line 54 has a semicolon after the closing brace of the switch statement. While this is syntactically valid (an empty statement), it's typically unnecessary. Since this is vendored upstream code, this is a very minor observation.
deps/opentelemetry-cpp/ext/test/http/curl_http_test.cc (1)
501-537: Make the TODO traceable to avoid it becoming permanent.

Consider linking the TODO to the same issue referenced above so it's easy to track re-enablement.
🔧 Proposed tweak
- // TODO: Spurious test failures here.
+ // TODO(`#3535`): Spurious test failures here.

deps/opentelemetry-cpp/sdk/test/metrics/view_registry_test.cc (1)
96-138: Strengthen null-parameter tests to assert the view wasn't registered.

Right now these tests only check "no crash"; consider also asserting FindViews returns the default view (or that the "test_view" name is never observed) to prove the call was ignored.
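A rough fragment of what that extra assertion could look like, assuming the FindViews(descriptor, scope, callback) shape already used by the surrounding tests and reusing the registry/descriptor/scope objects those tests construct:

// Hypothetical addition to the null-parameter tests: prove that "test_view"
// was never registered by scanning the views reported for a matching instrument.
bool saw_test_view = false;
registry.FindViews(descriptor, scope, [&saw_test_view](const View &view) {
  if (view.GetName() == "test_view")
  {
    saw_test_view = true;
  }
  return true;  // keep iterating over all matching views
});
ASSERT_FALSE(saw_test_view);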
deps/opentelemetry-cpp/.bcr/README.md (1)

14-16: Minor inconsistency in Bazel version notation.

The presubmit.yml file uses 9.x but this README says 9.*. Consider aligning the notation for consistency.

📝 Suggested fix

 Defines the BCR presubmit tests that run when a new version is published.
 Currently configured to test on:
 - Platforms: debian10, macos, ubuntu2004, windows
-- Bazel versions: 7.x, 8.x, 9.*
+- Bazel versions: 7.x, 8.x, 9.x

deps/opentelemetry-cpp/sdk/src/metrics/state/observable_registry.cc (1)
64-132: Consider monotonic IDs to avoid stale-handle collisions.
Using the record pointer as the token can introduce ABA risk if callbacks are auto-removed and the allocator reuses the address; a monotonic counter avoids accidental removal of a new callback with a stale handle.
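A minimal sketch of the monotonic-token idea, independent of the real ObservableRegistry types; the record payload here is just an opaque shared_ptr stand-in:

#include <cstdint>
#include <memory>
#include <mutex>
#include <unordered_map>

// Illustrative registry only; the real ObservableRegistry stores
// ObservableCallbackRecord entries and is not shown here.
class CallbackRegistrySketch
{
public:
  // Issue tokens from a monotonically increasing counter instead of reusing
  // the record's address, so a stale token can never match a newer record.
  uint64_t Add(std::shared_ptr<void> record)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    uint64_t id = next_id_++;
    records_[id] = std::move(record);
    return id;
  }

  bool Remove(uint64_t id)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    return records_.erase(id) != 0;  // stale ids simply miss
  }

private:
  std::mutex mutex_;
  uint64_t next_id_ = 1;
  std::unordered_map<uint64_t, std::shared_ptr<void>> records_;
};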
deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/configuration/distribution_configuration.h (1)

7-7: Unused include.

The <string> header is included but not used in this file. The class only uses std::vector and std::unique_ptr.

Proposed fix

 #include <memory>
-#include <string>
 #include <vector>

deps/opentelemetry-cpp/sdk/test/configuration/CMakeLists.txt (1)
16-19: Consider whether the trace. prefix is appropriate for all test types.

The TEST_PREFIX trace. is applied uniformly to all tests, including yaml_logs_test, yaml_metrics_test, and yaml_distribution_test. This may cause confusion in test output where non-trace tests appear with a trace. prefix. If this is intentional for organizational reasons within the project, feel free to disregard.

deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/configuration/document_node.h (1)
34-34: Default-initialize DocumentNodeLocation fields for consistency.

The current implementation in RymlDocument::Location() properly initializes all struct members before return. Adding default initialization with {} is optional defensive programming to ensure deterministic values across all potential implementations.

Suggested improvement

 class DocumentNodeLocation
 {
 public:
-  size_t offset;
-  size_t line;
-  size_t col;
-  std::string filename;
+  size_t offset{};
+  size_t line{};
+  size_t col{};
+  std::string filename{};

   std::string ToString() const;
 };

deps/opentelemetry-cpp/sdk/test/configuration/yaml_distribution_test.cc (1)
65-103: Refactor test to use order-independent lookups instead of relying on iteration order.

While RapidYAML preserves insertion order, YAML mappings are unordered by spec. Tests should not depend on iteration order, to ensure compatibility if the parser changes and to follow best practices. Use GetRequiredChildNode() to retrieve entries and properties by name:

Order-independent approach
+ auto find_entry = [&](const char *name) { + for (auto &e : distribution->entries) + { + if (e && e->name == name) + return e.get(); + } + return static_cast<opentelemetry::sdk::configuration::DistributionEntryConfiguration *>(nullptr); + }; + - auto *entry_1 = distribution->entries[0].get(); + auto *entry_1 = find_entry("acme_vendor"); ASSERT_NE(entry_1, nullptr); ASSERT_EQ(entry_1->name, "acme_vendor"); node = entry_1->node.get(); ASSERT_NE(node, nullptr); - auto it = node->begin(); - property = (*it); - ASSERT_NE(property, nullptr); - name = property->Key(); - ASSERT_EQ(name, "a"); - ASSERT_EQ(property->AsInteger(), 12); - ++it; - property = (*it); - ASSERT_NE(property, nullptr); - name = property->Key(); - ASSERT_EQ(name, "b"); - ASSERT_EQ(property->AsInteger(), 34); - ++it; - property = (*it); - ASSERT_NE(property, nullptr); - name = property->Key(); - ASSERT_EQ(name, "c"); - ASSERT_EQ(property->AsInteger(), 56); - ++it; - ASSERT_EQ(node->end(), it); + auto a = node->GetRequiredChildNode("a"); + auto b = node->GetRequiredChildNode("b"); + auto c = node->GetRequiredChildNode("c"); + ASSERT_NE(a, nullptr); + ASSERT_NE(b, nullptr); + ASSERT_NE(c, nullptr); + ASSERT_EQ(a->AsInteger(), 12); + ASSERT_EQ(b->AsInteger(), 34); + ASSERT_EQ(c->AsInteger(), 56); - auto *entry_2 = distribution->entries[1].get(); + auto *entry_2 = find_entry("other_vendor"); ASSERT_NE(entry_2, nullptr); ASSERT_EQ(entry_2->name, "other_vendor");deps/opentelemetry-cpp/sdk/test/metrics/multi_observer_test.cc (1)
124-129: Use explicit span data/size to avoid constructor ambiguity.
nostd::span may not accept std::vector directly across all configurations; using data()/size() keeps it consistent with earlier usage.

♻️ Suggested tweak
- auto callback_id = - meter->RegisterCallback(empty_multi_observable_callback, &was_called, - nostd::span<metrics::ObservableInstrument *>{empty_instruments}); + auto callback_id = + meter->RegisterCallback( + empty_multi_observable_callback, &was_called, + nostd::span<metrics::ObservableInstrument *>{empty_instruments.data(), + empty_instruments.size()});deps/opentelemetry-cpp/sdk/src/configuration/configuration_parser.cc (1)
135-261: Large enum-to-string mapping could benefit from a lookup table.

The ParseSeverityNumber function uses a long chain of if-else comparisons for 24 severity levels. While functionally correct, this pattern is verbose. However, since this is vendored upstream code from opentelemetry-cpp, modifying it would create maintenance burden during future updates.
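For illustration only, a table-driven shape the mapping could take if it were ever refactored; the enum here is abbreviated and the real function and enum live in configuration_parser.cc and configuration.h:

#include <map>
#include <string>

// Abbreviated stand-in for the configuration SeverityNumber enum.
enum class SeverityNumber { trace, debug, info, warn, error, fatal };

static SeverityNumber ParseSeverityNumberSketch(const std::string &name,
                                                SeverityNumber fallback)
{
  // One static table replaces the 24-branch if/else chain; adding a level is a
  // single entry and lookups stay O(log n).
  static const std::map<std::string, SeverityNumber> kTable = {
      {"trace", SeverityNumber::trace}, {"debug", SeverityNumber::debug},
      {"info", SeverityNumber::info},   {"warn", SeverityNumber::warn},
      {"error", SeverityNumber::error}, {"fatal", SeverityNumber::fatal},
      // ... remaining trace2..fatal4 entries elided ...
  };
  auto it = kTable.find(name);
  return it != kTable.end() ? it->second : fallback;
}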
   build_flags:
   - '--cxxopt=-std=c++17'
   - '--host_cxxopt=-std=c++17'
   build_targets:
   - '@opentelemetry-cpp//api'
🧩 Analysis chain
🏁 Script executed:
# First, locate and read the full presubmit.yml file
find . -name "presubmit.yml" -type f 2>/dev/null | head -20

Repository: nodesource/nsolid
Length of output: 244
🏁 Script executed:
# Also locate the .bazelrc file mentioned in the review
find . -name ".bazelrc" -type f 2>/dev/null | head -20

Repository: nodesource/nsolid
Length of output: 159
🏁 Script executed:
# Read the full presubmit.yml file
cat -n ./deps/opentelemetry-cpp/.bcr/presubmit.yml

Repository: nodesource/nsolid
Length of output: 540
🏁 Script executed:
# Read the .bazelrc file
cat -n ./deps/opentelemetry-cpp/.bazelrc

Repository: nodesource/nsolid
Length of output: 2186
🌐 Web query:
BCR presubmit.yml bazelrc platform windows cxxopt flag
💡 Result:
In Bazel Central Registry (BCR), you don’t put these flags in a bazelrc; you put them in the module version’s presubmit.yml under a task’s build_flags / test_flags (and each task must declare a bazel: version). [1]
For C++ standard flags, BCR presubmits typically use:
- Unix-like toolchains: --cxxopt=-std=c++17 (or similar)
- Windows (MSVC): --cxxopt=/std:c++17 (note the /std: form)
Example pattern from an actual BCR presubmit.yml that splits Unix vs Windows and uses the correct Windows cxxopt form: [2]
If you’re doing this in your own repo via .bazelrc, enable platform-specific configs and then add per-platform sections, e.g. build:windows ... (this is how Bazel’s own .bazelrc and large projects structure platform-specific options): [3] [4]
🌐 Web query:
Bazel presubmit windows platform specific config cxxopt
💡 Result:
To make Windows-only C++ compiler options apply in a presubmit (CI) without breaking Linux/macOS, do it via named .bazelrc configs, not select() inside --cxxopt (that doesn’t work because select() is a BUILD attribute mechanism, not a flag-language feature). (bazel.build)
1) Put platform-specific cxxopt in .bazelrc
# .bazelrc
# Default for GCC/Clang on Linux/macOS
build:posix --cxxopt=-std=c++17
# MSVC on Windows
build:windows --cxxopt=/std:c++17
(Windows uses /std:c++17 vs GCC/Clang -std=c++17.) (stackoverflow.com)
2) In presubmit, select the right config per job
- Windows CI job:
bazel test //... --config=windows - Linux/macOS CI job:
bazel test //... --config=posix
3) If you need platform-specific options inside BUILD rules, use select() on platform constraints
Prefer @platforms//os:windows over @bazel_tools//src/conditions:windows. (reviews.llvm.org)
cc_library(
name = "mylib",
srcs = ["mylib.cc"],
copts = select({
"@platforms//os:windows": ["/DWIN32_LEAN_AND_MEAN"],
"//conditions:default": [],
}),
)This is the intended use of configurable attributes (select()) in BUILD files. (bazel.build)
Citations:
- 1: https://bazel.build/docs/configurable-attributes?utm_source=openai
- 2: https://stackoverflow.com/questions/71067900/how-to-set-c-version-with-bazel-in-a-cross-platform-way?utm_source=openai
- 3: https://reviews.llvm.org/D147948?utm_source=openai
- 4: https://bazel.build/docs/configurable-attributes?utm_source=openai
🌐 Web query:
BCR presubmit.yml conditional build_flags per platform matrix
💡 Result:
In BCR presubmit.yml, build_flags (and test_flags) are per task, not per matrix entry—so you can’t directly say “only add this flag when platform == windows” inside a single task. The common pattern is to split into multiple tasks with different platform: selectors (optionally using YAML anchors to avoid duplication). (android.googlesource.com)
Example (platform-conditional flags via separate tasks):
matrix: &matrix
non_macos_platform:
- debian11
- ubuntu2404
- windows
bazel: [7.x]
tasks:
verify_targets:
name: Verify build targets
platform: ${{ non_macos_platform }}
bazel: ${{ bazel }}
build_targets: &targets
- '@mymodule//:all'
verify_targets_on_macos:
name: Verify build targets on macOS
platform: macos
bazel: ${{ bazel }}
build_flags: &macos_flags
- '--cxxopt=-std=c++14'
    build_targets: *targets

This is exactly how existing BCR-related presubmits handle "flags only on macOS (or only on X)": duplicate the task with a narrower platform and attach the platform-specific build_flags. (boringssl.googlesource.com)
Citations:
- 1: https://android.googlesource.com/platform/external/bazelbuild-bazel-central-registry/%2B/HEAD/docs/README.md
- 2: https://boringssl.googlesource.com/boringssl.git/%2B/ed22bbb42a36ba6878a2e68b34d43036c221e763/.bcr/presubmit.yml
Split verify_targets task to use platform-specific C++ flags.
The build_flags apply --cxxopt=-std=c++17 universally, but Windows MSVC requires /std:c++17 (forward slash). The matrix includes a Windows platform, so this will fail on Windows CI.
Create separate tasks: one for Unix-like platforms (debian10, macos, ubuntu2004) with --cxxopt=-std=c++17, and one for Windows with --cxxopt=/std:c++17. This follows the standard BCR pattern for platform-conditional flags.
🤖 Prompt for AI Agents
In `@deps/opentelemetry-cpp/.bcr/presubmit.yml` around lines 10 - 14, The CI
presubmit uses a single verify_targets task that sets
build_flags['--cxxopt=-std=c++17'] for all platforms which breaks Windows MSVC;
split into two platform-specific verify tasks (e.g., verify_targets_unix and
verify_targets_windows) so the Unix-like matrix entries (debian10, macos,
ubuntu2004) keep build_flags with --cxxopt=-std=c++17 while the Windows entry
uses --cxxopt=/std:c++17; ensure each task still includes the same build_targets
(e.g., '@opentelemetry-cpp//api') and the matrix/when conditions select the
correct platforms.
 RUN update-alternatives --install /usr/bin/clang-tidy clang-tidy /usr/bin/clang-tidy-20 200 && \
     update-alternatives --install /usr/bin/llvm-config llvm-config /usr/bin/llvm-config-20 200 && \
     update-alternatives --config clang-tidy && \
     update-alternatives --config llvm-config
🧩 Analysis chain
🏁 Script executed:
# First, let's confirm the file exists and examine the context around lines 25-28
find . -name "Dockerfile.dev" -type f | head -20

Repository: nodesource/nsolid
Length of output: 114
🌐 Web query:
update-alternatives --config vs --set interactive non-interactive Docker
💡 Result:
- update-alternatives --config <name> is interactive: it prints the available choices for the link group and then prompts you to pick one (may switch the group to manual mode depending on your selection). This is meant for humans at a terminal. [1]
- update-alternatives --set <name> <path> is non-interactive: it directly sets <path> as the selected alternative for <name> and is scriptable (CI/Docker); the man page explicitly says it's the non-interactive equivalent of --config. [1]
Docker/CI guidance: prefer --set (or --auto if you want the highest-priority option) because Docker builds typically have no TTY for --config prompts. [1]
🏁 Script executed:
# Read the Dockerfile to verify the current code and exact context
cat -n ./deps/opentelemetry-cpp/.devcontainer/Dockerfile.dev | sed -n '20,35p'

Repository: nodesource/nsolid
Length of output: 763
Replace interactive update-alternatives --config with --set for non-interactive Docker builds.
The --config option requires user input and will fail or hang in Docker builds which lack a TTY. Use --set instead to directly select the desired alternative.
Suggested fix
RUN update-alternatives --install /usr/bin/clang-tidy clang-tidy /usr/bin/clang-tidy-20 200 && \
update-alternatives --install /usr/bin/llvm-config llvm-config /usr/bin/llvm-config-20 200 && \
- update-alternatives --config clang-tidy && \
- update-alternatives --config llvm-config
+ update-alternatives --set clang-tidy /usr/bin/clang-tidy-20 && \
+ update-alternatives --set llvm-config /usr/bin/llvm-config-20📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 RUN update-alternatives --install /usr/bin/clang-tidy clang-tidy /usr/bin/clang-tidy-20 200 && \
     update-alternatives --install /usr/bin/llvm-config llvm-config /usr/bin/llvm-config-20 200 && \
     update-alternatives --set clang-tidy /usr/bin/clang-tidy-20 && \
     update-alternatives --set llvm-config /usr/bin/llvm-config-20
🤖 Prompt for AI Agents
In `@deps/opentelemetry-cpp/.devcontainer/Dockerfile.dev` around lines 25 - 28,
Replace the interactive update-alternatives calls with non-interactive --set
invocations: instead of running "update-alternatives --config clang-tidy" and
"update-alternatives --config llvm-config", use "update-alternatives --set
clang-tidy /usr/bin/clang-tidy-20" and "update-alternatives --set llvm-config
/usr/bin/llvm-config-20" so the Docker build does not require a TTY; update the
RUN line that currently calls update-alternatives for clang-tidy and llvm-config
accordingly.
 /**
  Criticality of a destination workload indicates its importance to the business as specified in <a
  href="https://cloud.google.com/app-hub/docs/reference/rest/v1/Attributes#type">AppHub type
  enum</a>
  */
 static constexpr const char *kGcpApphubDestinationServiceCriticalityType =
     "gcp.apphub_destination.service.criticality_type";
Fix service/workload wording in destination criticality doc.
The description refers to a “destination workload,” but the constant is for destination service criticality. Since this file is auto-generated, update the semconv source/template and regenerate.
Suggested doc fix
- Criticality of a destination workload indicates its importance to the business as specified in <a
+ Criticality of a destination service indicates its importance to the business as specified in <a📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 /**
  Criticality of a destination service indicates its importance to the business as specified in <a
  href="https://cloud.google.com/app-hub/docs/reference/rest/v1/Attributes#type">AppHub type
  enum</a>
  */
 static constexpr const char *kGcpApphubDestinationServiceCriticalityType =
     "gcp.apphub_destination.service.criticality_type";
🤖 Prompt for AI Agents
In
`@deps/opentelemetry-cpp/api/include/opentelemetry/semconv/incubating/gcp_attributes.h`
around lines 101 - 107, The documentation comment above the constant
kGcpApphubDestinationServiceCriticalityType uses “destination workload” but the
constant is for destination service criticality; update the semconv
source/template to replace “destination workload” with “destination service” in
the docstring for kGcpApphubDestinationServiceCriticalityType (and any matching
incubating template entries), then regenerate the generated header so the
updated comment is emitted alongside that constant.
| /** | ||
| An estimate of how much memory is available for starting new applications, without causing | ||
| swapping. <p> This is an alternative to @code system.memory.usage @endcode metric with @code | ||
| state=free @endcode. Linux starting from 3.14 exports "available" memory. It takes "free" memory | ||
| as a baseline, and then factors in kernel-specific values. This is supposed to be more accurate | ||
| than just "free" memory. For reference, see the calculations <a | ||
| href="https://superuser.com/a/980821">here</a>. See also @code MemAvailable @endcode in <a | ||
| href="https://man7.org/linux/man-pages/man5/proc.5.html">/proc/meminfo</a>. <p> updowncounter | ||
| */ | ||
| static constexpr const char *kMetricSystemLinuxMemoryAvailable = "system.linux.memory.available"; | ||
| static constexpr const char *descrMetricSystemLinuxMemoryAvailable = | ||
| "An estimate of how much memory is available for starting new applications, without causing " | ||
| "swapping."; | ||
| static constexpr const char *unitMetricSystemLinuxMemoryAvailable = "By"; | ||
| The number of packets transferred. | ||
|
|
||
| static inline nostd::unique_ptr<metrics::UpDownCounter<int64_t>> | ||
| @deprecated | ||
| {"note": "Replaced by @code system.memory.linux.available @endcode.", "reason": "renamed", | ||
| "renamed_to": "system.memory.linux.available"} <p> counter | ||
| */ | ||
| OPENTELEMETRY_DEPRECATED static constexpr const char *kMetricSystemLinuxMemoryAvailable = | ||
| "system.linux.memory.available"; | ||
| OPENTELEMETRY_DEPRECATED static constexpr const char *descrMetricSystemLinuxMemoryAvailable = | ||
| "The number of packets transferred."; | ||
| OPENTELEMETRY_DEPRECATED static constexpr const char *unitMetricSystemLinuxMemoryAvailable = | ||
| "{packet}"; | ||
|
|
||
| OPENTELEMETRY_DEPRECATED static inline nostd::unique_ptr<metrics::Counter<uint64_t>> | ||
| CreateSyncInt64MetricSystemLinuxMemoryAvailable(metrics::Meter *meter) | ||
| { | ||
| return meter->CreateInt64UpDownCounter(kMetricSystemLinuxMemoryAvailable, | ||
| descrMetricSystemLinuxMemoryAvailable, | ||
| unitMetricSystemLinuxMemoryAvailable); | ||
| return meter->CreateUInt64Counter(kMetricSystemLinuxMemoryAvailable, | ||
| descrMetricSystemLinuxMemoryAvailable, | ||
| unitMetricSystemLinuxMemoryAvailable); | ||
| } | ||
|
|
||
| static inline nostd::unique_ptr<metrics::UpDownCounter<double>> | ||
| OPENTELEMETRY_DEPRECATED static inline nostd::unique_ptr<metrics::Counter<double>> | ||
| CreateSyncDoubleMetricSystemLinuxMemoryAvailable(metrics::Meter *meter) | ||
| { | ||
| return meter->CreateDoubleUpDownCounter(kMetricSystemLinuxMemoryAvailable, | ||
| descrMetricSystemLinuxMemoryAvailable, | ||
| unitMetricSystemLinuxMemoryAvailable); | ||
| return meter->CreateDoubleCounter(kMetricSystemLinuxMemoryAvailable, | ||
| descrMetricSystemLinuxMemoryAvailable, | ||
| unitMetricSystemLinuxMemoryAvailable); | ||
| } | ||
|
|
||
| static inline nostd::shared_ptr<metrics::ObservableInstrument> | ||
| OPENTELEMETRY_DEPRECATED static inline nostd::shared_ptr<metrics::ObservableInstrument> | ||
| CreateAsyncInt64MetricSystemLinuxMemoryAvailable(metrics::Meter *meter) | ||
| { | ||
| return meter->CreateInt64ObservableUpDownCounter(kMetricSystemLinuxMemoryAvailable, | ||
| descrMetricSystemLinuxMemoryAvailable, | ||
| unitMetricSystemLinuxMemoryAvailable); | ||
| return meter->CreateInt64ObservableCounter(kMetricSystemLinuxMemoryAvailable, | ||
| descrMetricSystemLinuxMemoryAvailable, | ||
| unitMetricSystemLinuxMemoryAvailable); | ||
| } | ||
|
|
||
| static inline nostd::shared_ptr<metrics::ObservableInstrument> | ||
| OPENTELEMETRY_DEPRECATED static inline nostd::shared_ptr<metrics::ObservableInstrument> | ||
| CreateAsyncDoubleMetricSystemLinuxMemoryAvailable(metrics::Meter *meter) | ||
| { | ||
| return meter->CreateDoubleObservableUpDownCounter(kMetricSystemLinuxMemoryAvailable, | ||
| descrMetricSystemLinuxMemoryAvailable, | ||
| unitMetricSystemLinuxMemoryAvailable); | ||
| return meter->CreateDoubleObservableCounter(kMetricSystemLinuxMemoryAvailable, | ||
| descrMetricSystemLinuxMemoryAvailable, | ||
| unitMetricSystemLinuxMemoryAvailable); | ||
| } | ||
|
|
||
| /** | ||
| Reports the memory used by the Linux kernel for managing caches of frequently used objects. | ||
| <p> | ||
| The sum over the @code reclaimable @endcode and @code unreclaimable @endcode state values in @code | ||
| linux.memory.slab.usage @endcode SHOULD be equal to the total slab memory available on the system. | ||
| Note that the total slab memory is not constant and may vary over time. | ||
| See also the <a | ||
| href="https://blogs.oracle.com/linux/post/understanding-linux-kernel-memory-statistics">Slab | ||
| allocator</a> and @code Slab @endcode in <a | ||
| href="https://man7.org/linux/man-pages/man5/proc.5.html">/proc/meminfo</a>. <p> updowncounter | ||
| */ | ||
| static constexpr const char *kMetricSystemLinuxMemorySlabUsage = "system.linux.memory.slab.usage"; | ||
| static constexpr const char *descrMetricSystemLinuxMemorySlabUsage = | ||
| "Reports the memory used by the Linux kernel for managing caches of frequently used objects."; | ||
| static constexpr const char *unitMetricSystemLinuxMemorySlabUsage = "By"; | ||
| The number of packets transferred. | ||
|
|
||
| static inline nostd::unique_ptr<metrics::UpDownCounter<int64_t>> | ||
| @deprecated | ||
| {"note": "Replaced by @code system.memory.linux.slab.usage @endcode.", "reason": "renamed", | ||
| "renamed_to": "system.memory.linux.slab.usage"} <p> counter | ||
| */ | ||
| OPENTELEMETRY_DEPRECATED static constexpr const char *kMetricSystemLinuxMemorySlabUsage = | ||
| "system.linux.memory.slab.usage"; | ||
| OPENTELEMETRY_DEPRECATED static constexpr const char *descrMetricSystemLinuxMemorySlabUsage = | ||
| "The number of packets transferred."; | ||
| OPENTELEMETRY_DEPRECATED static constexpr const char *unitMetricSystemLinuxMemorySlabUsage = | ||
| "{packet}"; | ||
|
|
||
| OPENTELEMETRY_DEPRECATED static inline nostd::unique_ptr<metrics::Counter<uint64_t>> | ||
| CreateSyncInt64MetricSystemLinuxMemorySlabUsage(metrics::Meter *meter) | ||
| { | ||
| return meter->CreateInt64UpDownCounter(kMetricSystemLinuxMemorySlabUsage, | ||
| descrMetricSystemLinuxMemorySlabUsage, | ||
| unitMetricSystemLinuxMemorySlabUsage); | ||
| return meter->CreateUInt64Counter(kMetricSystemLinuxMemorySlabUsage, | ||
| descrMetricSystemLinuxMemorySlabUsage, | ||
| unitMetricSystemLinuxMemorySlabUsage); | ||
| } | ||
|
|
||
| static inline nostd::unique_ptr<metrics::UpDownCounter<double>> | ||
| OPENTELEMETRY_DEPRECATED static inline nostd::unique_ptr<metrics::Counter<double>> | ||
| CreateSyncDoubleMetricSystemLinuxMemorySlabUsage(metrics::Meter *meter) | ||
| { | ||
| return meter->CreateDoubleUpDownCounter(kMetricSystemLinuxMemorySlabUsage, | ||
| descrMetricSystemLinuxMemorySlabUsage, | ||
| unitMetricSystemLinuxMemorySlabUsage); | ||
| return meter->CreateDoubleCounter(kMetricSystemLinuxMemorySlabUsage, | ||
| descrMetricSystemLinuxMemorySlabUsage, | ||
| unitMetricSystemLinuxMemorySlabUsage); | ||
| } | ||
|
|
||
| static inline nostd::shared_ptr<metrics::ObservableInstrument> | ||
| OPENTELEMETRY_DEPRECATED static inline nostd::shared_ptr<metrics::ObservableInstrument> | ||
| CreateAsyncInt64MetricSystemLinuxMemorySlabUsage(metrics::Meter *meter) | ||
| { | ||
| return meter->CreateInt64ObservableUpDownCounter(kMetricSystemLinuxMemorySlabUsage, | ||
| descrMetricSystemLinuxMemorySlabUsage, | ||
| unitMetricSystemLinuxMemorySlabUsage); | ||
| return meter->CreateInt64ObservableCounter(kMetricSystemLinuxMemorySlabUsage, | ||
| descrMetricSystemLinuxMemorySlabUsage, | ||
| unitMetricSystemLinuxMemorySlabUsage); | ||
| } | ||
|
|
||
| static inline nostd::shared_ptr<metrics::ObservableInstrument> | ||
| OPENTELEMETRY_DEPRECATED static inline nostd::shared_ptr<metrics::ObservableInstrument> | ||
| CreateAsyncDoubleMetricSystemLinuxMemorySlabUsage(metrics::Meter *meter) | ||
| { | ||
| return meter->CreateDoubleObservableUpDownCounter(kMetricSystemLinuxMemorySlabUsage, | ||
| descrMetricSystemLinuxMemorySlabUsage, | ||
| unitMetricSystemLinuxMemorySlabUsage); | ||
| return meter->CreateDoubleObservableCounter(kMetricSystemLinuxMemorySlabUsage, | ||
| descrMetricSystemLinuxMemorySlabUsage, | ||
| unitMetricSystemLinuxMemorySlabUsage); | ||
| } |
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the exact file and lines in question
file_path="deps/opentelemetry-cpp/api/include/opentelemetry/semconv/incubating/system_metrics.h"
echo "=== Checking file exists and getting context ==="
if [ -f "$file_path" ]; then
echo "File found. Checking lines around 605-695..."
sed -n '600,700p' "$file_path" | head -100
else
echo "File not found at specified path. Searching..."
find . -name "system_metrics.h" -type f 2>/dev/null
fi

Repository: nodesource/nsolid
Length of output: 4826
🏁 Script executed:
# Search for non-deprecated versions of these metrics to understand the correct semantics
rg "system\.linux\.memory\.(available|slab)" --type cpp -A 3 -B 3Repository: nodesource/nsolid
Length of output: 4104
🌐 Web query:
OpenTelemetry semantic conventions system.linux.memory.available unit type counter updowncounter
💡 Result:
For the OpenTelemetry system semantic conventions metric:
- Current canonical name: system.memory.linux.available
- Instrument type: UpDownCounter
- Unit (UCUM): By (bytes) [1]
If you’re using the older name system.linux.memory.available, it referred to the same thing and also used UpDownCounter with unit By [2].
Sources
- OpenTelemetry semantic conventions – System metrics (system.memory.linux.available) [1]
- Generated semantic-conventions constant docs for SYSTEM_LINUX_MEMORY_AVAILABLE (instrument/unit metadata) [2]
Fix incorrect semantics in deprecated system.linux.memory.* metrics.

The deprecated aliases for system.linux.memory.available and system.linux.memory.slab.usage have packet-based descriptions/units and Counter types, which are inconsistent with the current non-deprecated versions. The non-deprecated entries in system_metrics.h correctly use memory semantics (bytes via the By unit) and UpDownCounter types. The deprecated entries must be aligned to match:

- Descriptions should describe memory (e.g., "An estimate of how much memory is available..." for available; "Reports the memory used by..." for slab)
- Unit must be "By" (bytes), not "{packet}"
- Sync methods: change Counter to UpDownCounter; change CreateUInt64Counter/CreateDoubleCounter calls to CreateInt64UpDownCounter/CreateDoubleUpDownCounter
- Async methods: change ObservableCounter to ObservableUpDownCounter; change CreateInt64ObservableCounter/CreateDoubleObservableCounter to CreateInt64ObservableUpDownCounter/CreateDoubleObservableUpDownCounter
This ensures legacy callers receive telemetry with correct units and types.
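For illustration, this is roughly what one of the realigned deprecated helpers would look like; the identifiers are the ones visible in the diff above, and the description/unit strings mirror the non-deprecated entry, so treat this as a sketch rather than the generated code:

// Sketch of a corrected deprecated helper: bytes unit and an UpDownCounter,
// matching the non-deprecated system.memory.linux.available definition.
OPENTELEMETRY_DEPRECATED static inline nostd::unique_ptr<metrics::UpDownCounter<int64_t>>
CreateSyncInt64MetricSystemLinuxMemoryAvailable(metrics::Meter *meter)
{
  return meter->CreateInt64UpDownCounter(
      kMetricSystemLinuxMemoryAvailable,
      "An estimate of how much memory is available for starting new applications, "
      "without causing swapping.",
      "By");
}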
🤖 Prompt for AI Agents
In
`@deps/opentelemetry-cpp/api/include/opentelemetry/semconv/incubating/system_metrics.h`
around lines 605 - 695, The deprecated metrics for
kMetricSystemLinuxMemoryAvailable and kMetricSystemLinuxMemorySlabUsage use
packet semantics and Counter types incorrectly; update
descrMetricSystemLinuxMemoryAvailable and descrMetricSystemLinuxMemorySlabUsage
to memory-appropriate text (e.g., "An estimate of how much memory is
available..." and "Reports the memory used by the slab allocator..."), change
unitMetricSystemLinuxMemoryAvailable and unitMetricSystemLinuxMemorySlabUsage to
"By", replace synchronous Counter types and factory calls in
CreateSyncInt64MetricSystemLinuxMemoryAvailable/CreateSyncDoubleMetricSystemLinuxMemoryAvailable
and
CreateSyncInt64MetricSystemLinuxMemorySlabUsage/CreateSyncDoubleMetricSystemLinuxMemorySlabUsage
to UpDownCounter versions (CreateInt64UpDownCounter/CreateDoubleUpDownCounter
and corresponding unique_ptr<metrics::UpDownCounter<...>> types), and replace
asynchronous ObservableCounter usage in
CreateAsyncInt64MetricSystemLinuxMemoryAvailable/CreateAsyncDoubleMetricSystemLinuxMemoryAvailable
and
CreateAsyncInt64MetricSystemLinuxMemorySlabUsage/CreateAsyncDoubleMetricSystemLinuxMemorySlabUsage
with ObservableUpDownCounter and
CreateInt64ObservableUpDownCounter/CreateDoubleObservableUpDownCounter to match
the non-deprecated memory semantics.
   if (check_json["resourceMetrics"].size() == 0)
   {
     return;
   }
   auto resource_metrics = *check_json["resourceMetrics"].begin();
   auto scope_metrics = *resource_metrics["scopeMetrics"].begin();
   auto scope = scope_metrics["scope"];
   if (resource_metrics["scopeMetrics"].size() == 0)
   {
     return;
   }
   auto scope_metrics = *resource_metrics["scopeMetrics"].begin();
   if (scope_metrics["metrics"].size() == 0)
   {
     return;
   }
   ++received_record_counter;
Missing response.Finish() call on early returns may cause test issues.
The early-return guards skip calling response.Finish(*callback.get()), which may leave the HTTP client callback in an incomplete state. If the exporter expects a response signal, this could cause hangs or unexpected behavior in certain test scenarios.
Consider calling response.Finish() before returning, or document why skipping it is safe:
🔧 Suggested fix
if (check_json["resourceMetrics"].size() == 0)
{
+ http_client::nosend::Response response;
+ response.Finish(*callback.get());
return;
}
auto resource_metrics = *check_json["resourceMetrics"].begin();
if (resource_metrics["scopeMetrics"].size() == 0)
{
+ http_client::nosend::Response response;
+ response.Finish(*callback.get());
return;
}
auto scope_metrics = *resource_metrics["scopeMetrics"].begin();
if (scope_metrics["metrics"].size() == 0)
{
+ http_client::nosend::Response response;
+ response.Finish(*callback.get());
return;
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if (check_json["resourceMetrics"].size() == 0) | |
| { | |
| return; | |
| } | |
| auto resource_metrics = *check_json["resourceMetrics"].begin(); | |
| auto scope_metrics = *resource_metrics["scopeMetrics"].begin(); | |
| auto scope = scope_metrics["scope"]; | |
| if (resource_metrics["scopeMetrics"].size() == 0) | |
| { | |
| return; | |
| } | |
| auto scope_metrics = *resource_metrics["scopeMetrics"].begin(); | |
| if (scope_metrics["metrics"].size() == 0) | |
| { | |
| return; | |
| } | |
| ++received_record_counter; | |
| if (check_json["resourceMetrics"].size() == 0) | |
| { | |
| http_client::nosend::Response response; | |
| response.Finish(*callback.get()); | |
| return; | |
| } | |
| auto resource_metrics = *check_json["resourceMetrics"].begin(); | |
| if (resource_metrics["scopeMetrics"].size() == 0) | |
| { | |
| http_client::nosend::Response response; | |
| response.Finish(*callback.get()); | |
| return; | |
| } | |
| auto scope_metrics = *resource_metrics["scopeMetrics"].begin(); | |
| if (scope_metrics["metrics"].size() == 0) | |
| { | |
| http_client::nosend::Response response; | |
| response.Finish(*callback.get()); | |
| return; | |
| } | |
| +received_record_counter; |
🤖 Prompt for AI Agents
In `@deps/opentelemetry-cpp/exporters/otlp/test/otlp_http_metric_exporter_test.cc`
around lines 186 - 200, The early-return branches in the test (after checking
check_json["resourceMetrics"], resource_metrics["scopeMetrics"], and
scope_metrics["metrics"]) skip calling response.Finish(*callback.get()), which
can leave the HTTP client callback unresolved; update each early-return path in
otlp_http_metric_exporter_test.cc to call response.Finish(*callback.get())
before returning (or otherwise ensure the callback is completed), referencing
the existing response and callback objects used in the test so the HTTP client
is always signaled even when the JSON guards cause an early exit.
   opentelemetry::sdk::common::internal_log::LogLevel log_level;
   opentelemetry::sdk::resource::Resource resource;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find LogLevel definition and its location
rg -n "enum class LogLevel|enum LogLevel" deps/opentelemetry-cpp --max-count=20Repository: nodesource/nsolid
Length of output: 164
🏁 Script executed:
#!/bin/bash
# Check the actual configured_sdk.h file
cat -n deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/configuration/configured_sdk.h | head -100Repository: nodesource/nsolid
Length of output: 2483
🏁 Script executed:
#!/bin/bash
# Find the constructor implementation for ConfiguredSdk
rg -n "ConfiguredSdk::" deps/opentelemetry-cpp -A 10 -B 2Repository: nodesource/nsolid
Length of output: 3545
🏁 Script executed:
#!/bin/bash
# Get the LogLevel enum definition
cat -n deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/common/global_log_handler.h | head -50Repository: nodesource/nsolid
Length of output: 1625
🏁 Script executed:
#!/bin/bash
# Check how log_level is set in the Create method / SdkBuilder
rg -n "log_level\s*=" deps/opentelemetry-cpp/sdk/src/configuration -A 2 -B 2Repository: nodesource/nsolid
Length of output: 3300
Initialize log_level to prevent undefined behavior.
The ConfiguredSdk() constructor only initializes resource but leaves log_level uninitialized. When Install() is called, it reads the uninitialized log_level and passes it to SetLogLevel(), resulting in undefined behavior. Value-initialize log_level to LogLevel::None (the default).
Suggested fix
- opentelemetry::sdk::common::internal_log::LogLevel log_level;
+ opentelemetry::sdk::common::internal_log::LogLevel log_level{};📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
   opentelemetry::sdk::common::internal_log::LogLevel log_level{};
   opentelemetry::sdk::resource::Resource resource;
🤖 Prompt for AI Agents
In
`@deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/configuration/configured_sdk.h`
around lines 51 - 52, The member log_level in class ConfiguredSdk is left
uninitialized causing undefined behavior when Install() calls SetLogLevel();
modify the ConfiguredSdk() constructor to value-initialize log_level to
opentelemetry::sdk::common::internal_log::LogLevel::None so log_level has a
well-defined default before Install() reads it (leave resource initialization
as-is).
 if(WITH_CONFIGURATION)
   add_subdirectory(configuration)
 endif()
🧩 Analysis chain
🏁 Script executed:
# Search for WITH_CONFIGURATION definitions and usage across the entire codebase
rg -n "WITH_CONFIGURATION" --type cmakeRepository: nodesource/nsolid
Length of output: 273
🏁 Script executed:
# Also search for option definitions that might set this flag
rg -n "option.*WITH_CONFIGURATION" -A 2Repository: nodesource/nsolid
Length of output: 43
🏁 Script executed:
# Look for add_subdirectory(configuration) to find all related usages
rg -n "add_subdirectory.*configuration" --type cmakeRepository: nodesource/nsolid
Length of output: 227
🏁 Script executed:
# Check the main CMakeLists.txt files that might define this option
find . -name "CMakeLists.txt" -path "*/deps/opentelemetry-cpp/*" | head -20Repository: nodesource/nsolid
Length of output: 1217
🏁 Script executed:
# Check the parent CMakeLists.txt at deps/opentelemetry-cpp/sdk/CMakeLists.txt around line 39
sed -n '35,45p' deps/opentelemetry-cpp/sdk/CMakeLists.txtRepository: nodesource/nsolid
Length of output: 249
🏁 Script executed:
# Search for option definitions across the entire repository
rg -n "^option\(" -t cmakeRepository: nodesource/nsolid
Length of output: 4155
🏁 Script executed:
# Also check if it's set elsewhere
rg -n "set.*WITH_CONFIGURATION" -t cmakeRepository: nodesource/nsolid
Length of output: 43
🏁 Script executed:
# Check for any CMakeLists.txt in the root or parent directories
find . -maxdepth 3 -name "CMakeLists.txt" | grep -v deps | head -10Repository: nodesource/nsolid
Length of output: 43
WITH_CONFIGURATION option is undefined and must be added to CMake configuration. The flag is used in three places to conditionally build the configuration subdirectory, but it has no option() definition anywhere in the codebase. This causes the condition to always evaluate to false, preventing the configuration target from being built. Add option(WITH_CONFIGURATION "Build configuration component" ON) to the appropriate parent CMakeLists.txt (likely deps/opentelemetry-cpp/sdk/CMakeLists.txt or the root CMakeLists.txt).
🤖 Prompt for AI Agents
In `@deps/opentelemetry-cpp/sdk/src/CMakeLists.txt` around lines 11 - 13, The
CMake conditional USING the WITH_CONFIGURATION symbol is never defined so the
configuration subdirectory never builds; add an option definition for
WITH_CONFIGURATION in the parent CMakeLists so the flag can be toggled (e.g.,
define a CMake option named WITH_CONFIGURATION with a sensible default such as
ON) and ensure that the three places that check IF(WITH_CONFIGURATION) now
respect that option; update the top-level or sdk CMakeLists to declare the
option before those IF(...) checks so add_subdirectory(configuration) can be
enabled when desired.
 if(OPENTELEMETRY_INSTALL)
   opentelemetry_add_pkgconfig(
     configuration "OpenTelemetry SDK - Configuration"
     "Components for exporting traces in the OpenTelemetry SDK."
     "opentelemetry_configuration")
Incorrect pkg-config description.
The description on line 33 says "Components for exporting traces in the OpenTelemetry SDK" but this is the configuration component, not the trace exporter.
📝 Proposed fix
if(OPENTELEMETRY_INSTALL)
opentelemetry_add_pkgconfig(
configuration "OpenTelemetry SDK - Configuration"
- "Components for exporting traces in the OpenTelemetry SDK."
+ "Components for SDK configuration in OpenTelemetry."
"opentelemetry_configuration")
endif()📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 if(OPENTELEMETRY_INSTALL)
   opentelemetry_add_pkgconfig(
     configuration "OpenTelemetry SDK - Configuration"
     "Components for SDK configuration in OpenTelemetry."
     "opentelemetry_configuration")
 endif()
🤖 Prompt for AI Agents
In `@deps/opentelemetry-cpp/sdk/src/configuration/CMakeLists.txt` around lines 30
- 34, The pkg-config entry for the "configuration" component uses the wrong
description string; update the opentelemetry_add_pkgconfig call for the
component named "configuration" so the second description argument refers to
configuration (e.g., "Configuration components for the OpenTelemetry SDK" or
similar) rather than "Components for exporting traces in the OpenTelemetry SDK."
 size_t GetMaxQueueSizeFromEnv()
 {
   std::uint32_t value{};
   if (!opentelemetry::sdk::common::GetUintEnvironmentVariable(kMaxQueueSizeEnv, value))
   {
     return kDefaultMaxQueueSize;
   }
   return static_cast<size_t>(value);
 }
🧩 Analysis chain
🏁 Script executed:
# Locate and examine GetUintEnvironmentVariable implementation
rg -n "GetUintEnvironmentVariable" deps/opentelemetry-cpp/sdk -A 10 -B 2

🏁 Script executed:
# Read the full batch_span_processor_options.cc file to see context and any existing guards
cat -n deps/opentelemetry-cpp/sdk/src/trace/batch_span_processor_options.cc

🏁 Script executed:
# Check for any validation on max_queue_size and max_export_batch_size in the BSP implementation
rg -n "max_queue_size|max_export_batch_size" deps/opentelemetry-cpp/sdk/src/trace -C 3

🏁 Script executed:
# Look for any validation in the BatchSpanProcessorOptions class definition/initialization
find deps/opentelemetry-cpp/sdk -name "*.h" -o -name "*.cc" | xargs rg -l "BatchSpanProcessorOptions" | head -5

🏁 Script executed:
# Get the complete GetUintEnvironmentVariable implementation
sed -n '206,230p' deps/opentelemetry-cpp/sdk/src/common/env_variables.cc

🏁 Script executed:
# Check the BatchSpanProcessorOptions header for any validation
cat deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/trace/batch_span_processor_options.h

🏁 Script executed:
# Check if there's any validation when max_queue_size or max_export_batch_size is set
rg -n "max_queue_size_|max_export_batch_size_" deps/opentelemetry-cpp/sdk/include/opentelemetry/sdk/trace/ -A 2 -B 2

🏁 Script executed:
# Check the complete strtoull section of GetUintEnvironmentVariable to see all the rejection cases
sed -n '206,240p' deps/opentelemetry-cpp/sdk/src/common/env_variables.cc

🏁 Script executed:
# Check if there's a test case specifically for zero values
grep -n "SetEnv.*UINT.*0\|UINT_ENV.*0" deps/opentelemetry-cpp/sdk/test/common/env_var_test.cc

🏁 Script executed:
# Look for any validation in the BSP constructor or factory
grep -n "max_queue_size\|max_export_batch_size" deps/opentelemetry-cpp/sdk/src/trace/batch_span_processor_factory.cc -B 2 -A 2
Add guards to reject zero-sized queue and batch values from environment variables.
If GetUintEnvironmentVariable parses "0" successfully (which it does), the GetMaxQueueSizeFromEnv() and GetMaxExportBatchSizeFromEnv() functions will propagate zero values. This breaks downstream logic:
- A zero max_queue_size creates a circular buffer of size 0 (undefined behavior)
- The line 101 check (buffer_size >= max_queue_size_ / 2) always triggers when the size is 0
- A zero max_export_batch_size prevents any spans from being exported (the line 258 condition is never true)
Add zero checks to both functions to fall back to defaults:
Suggested fix
size_t GetMaxQueueSizeFromEnv()
{
  std::uint32_t value{};
- if (!opentelemetry::sdk::common::GetUintEnvironmentVariable(kMaxQueueSizeEnv, value))
+ if (!opentelemetry::sdk::common::GetUintEnvironmentVariable(kMaxQueueSizeEnv, value) ||
+     value == 0)
  {
    return kDefaultMaxQueueSize;
  }
  return static_cast<size_t>(value);
}

size_t GetMaxExportBatchSizeFromEnv()
{
  std::uint32_t value{};
- if (!opentelemetry::sdk::common::GetUintEnvironmentVariable(kMaxExportBatchSizeEnv, value))
+ if (!opentelemetry::sdk::common::GetUintEnvironmentVariable(kMaxExportBatchSizeEnv, value) ||
+     value == 0)
  {
    return kDefaultMaxExportBatchSize;
  }
  return static_cast<size_t>(value);
}

🤖 Prompt for AI Agents
In `@deps/opentelemetry-cpp/sdk/src/trace/batch_span_processor_options.cc` around
lines 30 - 38, both GetMaxQueueSizeFromEnv() and GetMaxExportBatchSizeFromEnv()
must guard against environment-parsed zero values: after calling
opentelemetry::sdk::common::GetUintEnvironmentVariable(...) and before returning
the parsed value, check whether the parsed uint value is zero and, if so, return
kDefaultMaxQueueSize (for GetMaxQueueSizeFromEnv) or kDefaultMaxExportBatchSize
(for GetMaxExportBatchSizeFromEnv); keep the existing fallback when
GetUintEnvironmentVariable fails, and otherwise return
static_cast<size_t>(value) only when value != 0.
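If the guard lands, a companion regression test could pin the behavior. The following is a minimal sketch rather than code from this PR: it assumes the zero checks above are applied, that kMaxQueueSizeEnv and kMaxExportBatchSizeEnv resolve to the spec-defined OTEL_BSP_MAX_QUEUE_SIZE and OTEL_BSP_MAX_EXPORT_BATCH_SIZE variables, and that the span-processor defaults are the usual 2048 and 512.

#include <cstdlib>

#include <gtest/gtest.h>

#include "opentelemetry/sdk/trace/batch_span_processor_options.h"

// Hypothetical regression test: with the zero guard in place, a "0" value in the
// environment is rejected and the compiled-in defaults are kept.
TEST(BatchSpanProcessorOptionsTest, ZeroValuesFromEnvFallBackToDefaults)
{
  setenv("OTEL_BSP_MAX_QUEUE_SIZE", "0", 1);
  setenv("OTEL_BSP_MAX_EXPORT_BATCH_SIZE", "0", 1);

  opentelemetry::sdk::trace::BatchSpanProcessorOptions options;

  EXPECT_EQ(options.max_queue_size, static_cast<size_t>(2048));
  EXPECT_EQ(options.max_export_batch_size, static_cast<size_t>(512));

  unsetenv("OTEL_BSP_MAX_QUEUE_SIZE");
  unsetenv("OTEL_BSP_MAX_EXPORT_BATCH_SIZE");
}

Like the existing env-var tests below, this sketch mutates process-global state with setenv/unsetenv, so the cross-talk caveat raised later in this review applies to it as well.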
TEST_F(BatchLogRecordProcessorTest, TestDefaultValues)
{
  BatchLogRecordProcessorOptions options;

  EXPECT_EQ(options.max_queue_size, static_cast<size_t>(2048));
  EXPECT_EQ(options.schedule_delay_millis, std::chrono::milliseconds(1000));
  EXPECT_EQ(options.export_timeout_millis, std::chrono::milliseconds(30000));
  EXPECT_EQ(options.max_export_batch_size, static_cast<size_t>(512));
}

TEST_F(BatchLogRecordProcessorTest, TestMaxQueueSizeFromEnv)
{
  setenv("OTEL_BLRP_MAX_QUEUE_SIZE", "1234", 1);

  BatchLogRecordProcessorOptions options;

  EXPECT_EQ(options.max_queue_size, static_cast<size_t>(1234));

  unsetenv("OTEL_BLRP_MAX_QUEUE_SIZE");
}

TEST_F(BatchLogRecordProcessorTest, TestScheduleDelayFromEnv)
{
  setenv("OTEL_BLRP_SCHEDULE_DELAY", "7s", 1);

  BatchLogRecordProcessorOptions options;

  EXPECT_EQ(options.schedule_delay_millis, std::chrono::milliseconds(7000));

  unsetenv("OTEL_BLRP_SCHEDULE_DELAY");
}

TEST_F(BatchLogRecordProcessorTest, TestScheduleDelayInvalidValueFromEnv)
{
  setenv("OTEL_BLRP_SCHEDULE_DELAY", "invalid_value", 1);

  BatchLogRecordProcessorOptions options;

  EXPECT_EQ(options.schedule_delay_millis, std::chrono::milliseconds(1000));

  unsetenv("OTEL_BLRP_SCHEDULE_DELAY");
}

TEST_F(BatchLogRecordProcessorTest, TestScheduleDelayInvalidEdgeCasesFromEnv)
{
  struct TestCase
  {
    const char *value;
    const char *description;
  };

  const std::vector<TestCase> cases = {
      {"-5ms", "negative duration"},
      {"0ms", "zero duration"},
      {"999999999999999999999s", "overflow"},
      {"", "empty string"},
  };

  for (const auto &tc : cases)
  {
    setenv("OTEL_BLRP_SCHEDULE_DELAY", tc.value, 1);

    BatchLogRecordProcessorOptions options;

    EXPECT_EQ(options.schedule_delay_millis, std::chrono::milliseconds(1000))
        << "Failed for case: " << tc.description << " (value='" << tc.value << "')";

    unsetenv("OTEL_BLRP_SCHEDULE_DELAY");
  }
}

TEST_F(BatchLogRecordProcessorTest, TestExportTimeoutFromEnv)
{
  setenv("OTEL_BLRP_EXPORT_TIMEOUT", "250ms", 1);

  BatchLogRecordProcessorOptions options;

  EXPECT_EQ(options.export_timeout_millis, std::chrono::milliseconds(250));

  unsetenv("OTEL_BLRP_EXPORT_TIMEOUT");
}

TEST_F(BatchLogRecordProcessorTest, TestMaxExportBatchSizeFromEnv)
{
  setenv("OTEL_BLRP_MAX_EXPORT_BATCH_SIZE", "42", 1);

  BatchLogRecordProcessorOptions options;

  EXPECT_EQ(options.max_export_batch_size, static_cast<size_t>(42));

  unsetenv("OTEL_BLRP_MAX_EXPORT_BATCH_SIZE");
}

TEST_F(BatchLogRecordProcessorTest, TestOptionsReadFromMultipleEnvVars)
{
  setenv("OTEL_BLRP_MAX_QUEUE_SIZE", "3000", 1);
  setenv("OTEL_BLRP_SCHEDULE_DELAY", "2s", 1);
  setenv("OTEL_BLRP_EXPORT_TIMEOUT", "1s", 1);
  setenv("OTEL_BLRP_MAX_EXPORT_BATCH_SIZE", "256", 1);

  BatchLogRecordProcessorOptions options;

  EXPECT_EQ(options.max_queue_size, static_cast<size_t>(3000));
  EXPECT_EQ(options.schedule_delay_millis, std::chrono::milliseconds(2000));
  EXPECT_EQ(options.export_timeout_millis, std::chrono::milliseconds(1000));
  EXPECT_EQ(options.max_export_batch_size, static_cast<size_t>(256));

  unsetenv("OTEL_BLRP_MAX_QUEUE_SIZE");
  unsetenv("OTEL_BLRP_SCHEDULE_DELAY");
  unsetenv("OTEL_BLRP_EXPORT_TIMEOUT");
  unsetenv("OTEL_BLRP_MAX_EXPORT_BATCH_SIZE");
}
Restore preexisting OTEL_BLRP_* env vars to avoid test cross-talk.
These tests unconditionally unset variables; if the host environment had them set, later tests/processes lose the original values. A scoped guard to capture/restore avoids global state leaks.
💡 Proposed scoped env-var guard (apply across the env-var tests)
@@
 using namespace opentelemetry::sdk::common;
 namespace nostd = opentelemetry::nostd;
+class ScopedEnvVar
+{
+public:
+  ScopedEnvVar(const char *key, const char *value) : key_(key)
+  {
+    const char *prev = getenv(key);
+    if (prev != nullptr)
+    {
+      had_prev_ = true;
+      prev_ = prev;
+    }
+    setenv(key, value, 1);
+  }
+  ~ScopedEnvVar()
+  {
+    if (had_prev_)
+    {
+      setenv(key_, prev_.c_str(), 1);
+    }
+    else
+    {
+      unsetenv(key_);
+    }
+  }
+private:
+  const char *key_;
+  bool had_prev_{false};
+  std::string prev_;
+};
@@
 TEST_F(BatchLogRecordProcessorTest, TestMaxQueueSizeFromEnv)
 {
-  setenv("OTEL_BLRP_MAX_QUEUE_SIZE", "1234", 1);
+  ScopedEnvVar max_queue_size_env("OTEL_BLRP_MAX_QUEUE_SIZE", "1234");
   BatchLogRecordProcessorOptions options;
   EXPECT_EQ(options.max_queue_size, static_cast<size_t>(1234));
-  unsetenv("OTEL_BLRP_MAX_QUEUE_SIZE");
 }
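Applied to the multi-variable test, the same RAII pattern stacks one guard per variable and drops the manual cleanup entirely. A sketch under the same assumption that ScopedEnvVar is defined as proposed above (the guard variable names are illustrative):

TEST_F(BatchLogRecordProcessorTest, TestOptionsReadFromMultipleEnvVars)
{
  // Each guard saves any preexisting value and restores it on scope exit.
  ScopedEnvVar max_queue_size_env("OTEL_BLRP_MAX_QUEUE_SIZE", "3000");
  ScopedEnvVar schedule_delay_env("OTEL_BLRP_SCHEDULE_DELAY", "2s");
  ScopedEnvVar export_timeout_env("OTEL_BLRP_EXPORT_TIMEOUT", "1s");
  ScopedEnvVar max_export_batch_size_env("OTEL_BLRP_MAX_EXPORT_BATCH_SIZE", "256");

  BatchLogRecordProcessorOptions options;

  EXPECT_EQ(options.max_queue_size, static_cast<size_t>(3000));
  EXPECT_EQ(options.schedule_delay_millis, std::chrono::milliseconds(2000));
  EXPECT_EQ(options.export_timeout_millis, std::chrono::milliseconds(1000));
  EXPECT_EQ(options.max_export_batch_size, static_cast<size_t>(256));

  // No unsetenv() calls needed: the guards' destructors restore the environment.
}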