memory leak from otlp http exporter #3642

@farzinlize

Description

Describe your environment
I'm using opentelemetry-cpp, OPENTELEMETRY_VERSION "1.22.0".

Steps to reproduce
The problem is that when the application finishes, valgrind reports many memory leaks related to the OTLP HTTP exporter (examples below), and calling Shutdown on the TracerProvider does not change that (roughly how I call it is sketched after the function summaries below). The main function is just a unit test that calls the init and terminate telemetry functions I wrote, like this:

initial_telemetry_ut();
// do some work
terminate_telemetry_ut(); // for cleanup, but it fails to release all allocated memory
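
The "do some work" part just exercises the tracer; purely for illustration (not my exact test code, and the tracer/span names here are made up), it is something along these lines:

auto current_provider = opentelemetry::trace::Provider::GetTracerProvider();
auto tracer = current_provider->GetTracer("unit-test"); // hypothetical instrumentation name
auto span = tracer->StartSpan("some-work");             // hypothetical span name
// ... actual test work happens here ...
span->End();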

Here is a summary of those two functions:

  • initial_telemetry_ut summary:
otlp_export::OtlpHttpExporterOptions options;
options.url = "http://localhost:4318/v1/traces";
auto exporter = opentelemetry::exporter::otlp::OtlpHttpExporterFactory::Create(options);

auto processor = opentelemetry::sdk::trace::BatchSpanProcessorFactory::Create(std::move(exporter), {});

auto provider = opentelemetry::sdk::trace::TracerProviderFactory::Create(
            std::move(processor), resources, std::move(the_sampler)
        );

opentelemetry::nostd::shared_ptr<opentelemetry::trace::TracerProvider> api_provider =
    std::shared_ptr<opentelemetry::trace::TracerProvider>(provider.release());

g_provider = api_provider; // g_provider is a global variable of type opentelemetry::nostd::shared_ptr<opentelemetry::trace::TracerProvider>

opentelemetry::trace::Provider::SetTracerProvider(api_provider);
  • terminate_telemetry_ut summary:
std::shared_ptr<otl_trace::TracerProvider> none;
trace_api::Provider::SetTracerProvider(none);

g_provider = nullptr; // the same global variable from above
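
For reference, this is roughly how I call Shutdown on the provider when I try it (sketch only; the downcast to the SDK type is just to show how Shutdown is reached, since the trace API interface does not expose it):

// sketch: reach the SDK provider behind the stored API pointer to call Shutdown()
auto *sdk_provider =
    static_cast<opentelemetry::sdk::trace::TracerProvider *>(g_provider.get());
if (sdk_provider != nullptr)
{
  sdk_provider->ForceFlush(); // flush any spans still queued in the batch processor
  sdk_provider->Shutdown();   // shut down the processor and the OTLP HTTP exporter
}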

Finally, here are some examples from the valgrind report:

...
==22678== 648 bytes in 9 blocks are still reachable in loss record 124 of 130
==22678==    at 0x4846FA3: operator new(unsigned long) (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==22678==    by 0x4BC2DD2: google::protobuf::EncodedDescriptorDatabase::DescriptorIndex::AddSymbol(google::protobuf::stringpiece_internal::StringPiece) (in /usr/lib/x86_64-linux-gnu/libprotobuf.so.32.0.12)
==22678==    by 0x4BC43B1: google::protobuf::EncodedDescriptorDatabase::Add(void const*, int) (in /usr/lib/x86_64-linux-gnu/libprotobuf.so.32.0.12)
==22678==    by 0x4B66C76: google::protobuf::DescriptorPool::InternalAddGeneratedFile(void const*, int) (in /usr/lib/x86_64-linux-gnu/libprotobuf.so.32.0.12)
==22678==    by 0x4BDBB77: ??? (in /usr/lib/x86_64-linux-gnu/libprotobuf.so.32.0.12)
==22678==    by 0x4AE02AA: ??? (in /usr/lib/x86_64-linux-gnu/libprotobuf.so.32.0.12)
==22678==    by 0x400571E: call_init.part.0 (dl-init.c:74)
==22678==    by 0x4005823: call_init (dl-init.c:120)
==22678==    by 0x4005823: _dl_init (dl-init.c:121)
==22678==    by 0x401F59F: ??? (in /usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
...
==22678== 22,000 bytes in 125 blocks are still reachable in loss record 129 of 130
==22678==    at 0x484D953: calloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==22678==    by 0x60E2823: asn1_array2tree (in /usr/lib/x86_64-linux-gnu/libtasn1.so.6.6.3)
==22678==    by 0x5472434: ??? (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.37.1)
==22678==    by 0x5438EDF: ??? (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.37.1)
==22678==    by 0x400571E: call_init.part.0 (dl-init.c:74)
==22678==    by 0x4005823: call_init (dl-init.c:120)
==22678==    by 0x4005823: _dl_init (dl-init.c:121)
==22678==    by 0x401F59F: ??? (in /usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
==22678== 
==22678== 83,072 bytes in 472 blocks are still reachable in loss record 130 of 130
==22678==    at 0x484D953: calloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==22678==    by 0x60E2823: asn1_array2tree (in /usr/lib/x86_64-linux-gnu/libtasn1.so.6.6.3)
==22678==    by 0x5472339: ??? (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.37.1)
==22678==    by 0x5438EDF: ??? (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.37.1)
==22678==    by 0x400571E: call_init.part.0 (dl-init.c:74)
==22678==    by 0x4005823: call_init (dl-init.c:120)
==22678==    by 0x4005823: _dl_init (dl-init.c:121)
==22678==    by 0x401F59F: ??? (in /usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
==22678== 
==22678== LEAK SUMMARY:
==22678==    definitely lost: 0 bytes in 0 blocks
==22678==    indirectly lost: 0 bytes in 0 blocks
==22678==      possibly lost: 0 bytes in 0 blocks
==22678==    still reachable: 120,567 bytes in 890 blocks
==22678==         suppressed: 0 bytes in 0 blocks
==22678== 
==22678== For lists of detected and suppressed errors, rerun with: -s
==22678== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

What is the expected behavior?
I expect to see no leaks in the valgrind results.

Labels

bug: Something isn't working
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.