Summary
deepeval v3.7.7 introduces critical security and privacy risks for any application that uses OpenTelemetry. At module import time (not at evaluation time), `deepeval/telemetry.py` silently:
- Hijacks the global OpenTelemetry `TracerProvider` — causing ALL application spans (not just deepeval's) to be routed to deepeval's own New Relic account
- Initializes Sentry with 100% CPU profiling (`profiles_sample_rate=1.0`, `traces_sample_rate=1.0`)
- Overrides the host application's `sys.excepthook` — sending uncaught exceptions to deepeval's Sentry DSN
- Makes a blocking HTTP call to https://api.ipify.org to collect the server's public IP
- Initializes PostHog analytics that phones home with usage data
Why this is a security risk
The most severe issue is #1: by calling `trace.set_tracer_provider(TracerProvider())` at import time, deepeval registers itself as the global tracer provider. Any application that subsequently calls `trace.set_tracer_provider()` gets the warning:

> "Overriding of current TracerProvider is not allowed"

This means all application tracing data — every span created via `trace.get_tracer()` anywhere in the host application — flows through deepeval's `BatchSpanProcessor` and is exported to:
- Endpoint: `https://otlp.nr-data.net:4317`
- API key: `1711c684db8a30361a7edb0d0398772cFFFFNRAL`
This is not limited to deepeval's own telemetry. It captures business logic spans, request traces, database query traces, and any other OpenTelemetry instrumentation in the host application. This is data exfiltration of application telemetry to a third party without user consent or disclosure.
Reproduction
Any Python application using OpenTelemetry + deepeval:
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

# Import anything from deepeval — triggers telemetry.py at module load
from deepeval.metrics import GEval

# This now FAILS silently — deepeval already owns the global provider
my_provider = TracerProvider()
trace.set_tracer_provider(my_provider)
# WARNING: "Overriding of current TracerProvider is not allowed"

# All spans now go to deepeval's New Relic, not your own backend
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("my-business-logic"):
    pass  # ← this span is exported to otlp.nr-data.net
```
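Until this is fixed upstream, one possible mitigation is to opt out of deepeval's telemetry before it is imported. The environment variable name below is an assumption inferred from the `telemetry_opt_out()` guard in the affected code, not confirmed by this report — verify it against deepeval's documentation:

```python
import os

# Hypothetical opt-out (variable name is an assumption). It must be set
# BEFORE deepeval is imported, because the telemetry setup runs at
# module import time, not at evaluation time.
os.environ["DEEPEVAL_TELEMETRY_OPT_OUT"] = "YES"

# from deepeval.metrics import GEval  # safe only after the opt-out is set
```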
Affected code
`deepeval/telemetry.py` lines 87–130 (v3.7.7):

```python
if not telemetry_opt_out():
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    anonymous_public_ip = get_anonymous_public_ip()  # blocking HTTP call

    sentry_sdk.init(
        dsn="https://5ef587d58109ee45d6544f3657efdd1f@o4506098477236224.ingest.sentry.io/4506098479136768",
        profiles_sample_rate=1.0,  # 100% CPU profiling on host app
        traces_sample_rate=1.0,    # 100% trace sampling on host app
        send_default_pii=False,
        attach_stacktrace=False,
        default_integrations=False,
    )

    trace.set_tracer_provider(TracerProvider())  # HIJACKS GLOBAL PROVIDER
    tracer_provider = trace.get_tracer_provider()

    NEW_RELIC_LICENSE_KEY = "1711c684db8a30361a7edb0d0398772cFFFFNRAL"
    NEW_RELIC_OTLP_ENDPOINT = "https://otlp.nr-data.net:4317"
    otlp_exporter = OTLPSpanExporter(
        endpoint=NEW_RELIC_OTLP_ENDPOINT,
        headers={"api-key": NEW_RELIC_LICENSE_KEY},
    )
    span_processor = BatchSpanProcessor(otlp_exporter)
    tracer_provider.add_span_processor(span_processor)
```
Impact
- Data exfiltration: All OpenTelemetry spans from the host application are sent to deepeval's New Relic account, potentially including sensitive business data, user identifiers, query parameters, and internal service details
- Performance degradation: Sentry CPU profiling at a 100% sample rate causes measurable CPU and memory overhead in production
- Memory leaks: the `BatchSpanProcessor` buffering spans for export to New Relic, plus Sentry profiling buffers, causes memory growth under load
- Silent failure: The host application's own `TracerProvider` setup fails silently with a warning, giving the false impression that tracing is configured correctly while no data reaches the intended backend
- Exception handler hijack: `sys.excepthook` is overridden, so uncaught exceptions go to deepeval's Sentry instead of the application's own error handling
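For the excepthook issue specifically, an application can defensively snapshot its handler before the import and restore it afterwards. A minimal stdlib-only sketch:

```python
import sys

# Snapshot the interpreter's excepthook before importing any third-party code.
original_excepthook = sys.excepthook

# ... "import deepeval" would go here; it may replace sys.excepthook ...

# Defensive restore: undo any hook a library installed behind your back.
if sys.excepthook is not original_excepthook:
    sys.excepthook = original_excepthook
```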
Suggested fix
- Never call `trace.set_tracer_provider()` globally. Use a local/private `TracerProvider` instance for deepeval's own telemetry instead of hijacking the global one
- Never call `sentry_sdk.init()` globally. It affects the entire host application, not just deepeval
- Never override `sys.excepthook` in a library
- Make telemetry opt-in, not opt-out, especially for behaviors that affect the host application's global state
- Document clearly what data is collected and where it is sent
Related issues
Environment
- deepeval version: 3.7.7
- opentelemetry-sdk: 1.30.0
- Python: 3.13.2