Memory leak with python sentry-sdk 3.0.0a6 #4841

@mukesh-hira

Description

How do you use Sentry?

Sentry Saas (sentry.io)

Version

3.0.0a6

Steps to Reproduce

Enable sentry-sdk with default parameters in our Python FastAPI server with a sampling rate of 0.1. Run a k6 load test that emulates 1000 virtual users with spikes of search API calls. Python RSS and pod memory usage grow to 3 GB and beyond for as long as the load test runs. When the load test ends, RSS plateaus but does not come back down, even hours after the test stops. We see high memory utilization in our prod environment too, but the isolated load test gives a clearer picture: there, activity returns to zero, so memory utilization should clearly come back down.
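For reference, the setup looks roughly like the sketch below. The DSN and the endpoint are placeholders, not our real config; the only non-default setting is the 0.1 sample rate mentioned above.

```python
# Sketch of the setup under test (DSN and endpoint are placeholders).
import sentry_sdk
from fastapi import FastAPI

sentry_sdk.init(
    dsn="https://<key>@<org>.ingest.sentry.io/<project>",  # placeholder DSN
    traces_sample_rate=0.1,  # sample 10% of transactions, as in the load test
)

app = FastAPI()

@app.get("/search")
async def search(q: str):
    # Stand-in for the search endpoint that k6 spikes against.
    return {"query": q, "results": []}
```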

We narrowed down the memory leak by running tracemalloc periodically; it showed OpenTelemetry responsible for the bulk of the allocations. Setting the sentry-sdk sampling rate to 0 makes the system behave very well with very low memory utilization, which rules out memory leaks in other parts of our code.
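The periodic tracemalloc sampling looked roughly like the following; the frame depth, grouping key, and interval are illustrative rather than the exact harness we ran.

```python
# Minimal sketch of periodic tracemalloc sampling to attribute allocations.
import tracemalloc

tracemalloc.start(25)  # keep up to 25 frames per allocation traceback

def report_top_allocations(limit=5):
    """Print the call sites currently holding the most memory."""
    snapshot = tracemalloc.take_snapshot()
    stats = snapshot.statistics("lineno")  # group allocations by file:line
    for stat in stats[:limit]:
        print(stat)
    return stats

# In the real server this runs on a timer (e.g. every 60 s);
# here we just allocate something and take one snapshot.
retained = [bytes(1024) for _ in range(1000)]
stats = report_top_allocations()
```

Under load, the top entries pointed into OpenTelemetry code paths rather than our application modules.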

The use of OpenTelemetry in sentry-sdk is new in 3.0; the current stable 2.86 release does not use OpenTelemetry under the hood. So the memory leak we are experiencing may be specific to this release of sentry-sdk with the new OpenTelemetry integration.

Expected Result

Memory utilization should come back down to a low level after the load test stops.

Actual Result

Memory utilization stays at the level it reached when the load test stopped.
