Releases: DataDog/dd-trace-py
3.12.0rc3
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Prelude
AI Guard is an upcoming Datadog security product currently under active design and development. Note: The Python SDK is released as a technical preview. Functionality and APIs are subject to change, and backward compatibility is not guaranteed at this stage.
Upgrade Notes
- profiling: The timeline view is now enabled by default (`DD_PROFILING_TIMELINE_ENABLED=True`). This provides visualization of profiling data with timing information.
Deprecation Notes
- tracing
  - `ddtrace.settings.__init__` imports are deprecated and will be removed in version 4.0.0.
  - Deprecates the `non_active_span` parameter in the `HTTPPropagator.inject` method. `HTTPPropagator.inject(context=...)` should be used to inject headers instead.
- profiling: Windows support is removed. Please file an issue if you want this reverted.
New Features
- App and API Protection (AAP): Introduce a public Python SDK that provides programmatic access to AI Guard’s public endpoint.
- asgi: Adds tracing on websocket spans with `DD_TRACE_WEBSOCKET_MESSAGES_ENABLED`, which replaces `DD_TRACE_WEBSOCKET_MESSAGES`.
- CI Visibility: Introduces an alternative method for collecting and sending test spans. In this mode, the `CIVisibility` tracer is kept separate from the global `ddtrace` tracer, which helps avoid interference between test and non-test tracer configurations. This mode is currently experimental and can be enabled by setting the environment variable `DD_CIVISIBILITY_USE_BETA_WRITER` to `true`.
- crewai: Introduces APM and LLM Observability tracing support for CrewAI Flow `kickoff`/`kickoff_async` calls, including tracing internal flow method execution.
- LLM Observability
  - Adds support for collecting tool definitions, tool calls, and tool results in the Anthropic integration.
  - Increases the span event size limit from 1MB to 5MB.
  - Records agent manifest information for LangGraph compiled graphs.
  - Adds the ability to drop spans by having a `SpanProcessor` return `None`.
- mcp: Adds distributed tracing support for MCP tool calls across client-server boundaries by default. To disable distributed tracing for MCP, set `DD_MCP_DISTRIBUTED_TRACING=False` for both the client and server.
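The drop-by-returning-`None` contract can be sketched with a toy processing pipeline. The `Span` class, `DropHealthChecks` processor, and `run_pipeline` helper below are illustrative stand-ins, not the ddtrace API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Span:
    name: str
    resource: str

class DropHealthChecks:
    """Hypothetical processor: returning None tells the pipeline to drop the span."""
    def process(self, span: Span) -> Optional[Span]:
        if span.resource == "GET /healthz":
            return None  # drop noisy health-check spans
        return span

def run_pipeline(spans: List[Span], processors) -> List[Span]:
    kept = []
    for span in spans:
        for processor in processors:
            span = processor.process(span)
            if span is None:  # a processor dropped the span
                break
        if span is not None:
            kept.append(span)
    return kept

spans = [Span("flask.request", "GET /healthz"), Span("flask.request", "GET /api/users")]
kept = run_pipeline(spans, [DropHealthChecks()])
```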
Bug Fixes
- AAP
  - Resolves a bug where ASGI middleware would not catch the `BlockingException` raised by AAP because it was aggregated in an `ExceptionGroup`.
  - Resolves an issue where a malformed package would prevent reporting of other correctly formed packages to Software Composition Analysis.
  - Resolves an issue where the `route` parameter was not being correctly handled in the Django `path` function.
- CI Visibility: Resolves an issue where using the pytest `skipif` marker with the condition passed as a keyword argument (or not provided at all) would cause the test to be reported as failed, in particular when `flaky` or `pytest-rerunfailures` were also used.
- ddtrace_api: Fixes a bug in the ddtrace_api integration in which `patch()` with no arguments, and thus `patch_all()`, breaks the integration.
- django: Fixes an incorrect component tag being set for Django ORM spans.
- dynamic instrumentation: Extends captured value redaction in mappings with keys of type `bytes`.
- openai: Resolves an issue where an uninitialized `OpenAI`/`AsyncOpenAI` client would result in an `AttributeError`.
- pydantic_ai: Resolves an issue where enabling the Pydantic AI integration for `pydantic-ai-slim >= 0.4.4` would fail. See [this issue](https://github.com/DataDog/dd-trace-py/issues/14161) for more details.
- tracing
  - Resolves an issue where sampling rules with null values for service, resource, or name would not match any spans, since these fields are always exported as strings and never null. Now, null and unset fields are treated the same. For example, `DD_TRACE_SAMPLING_RULES='[{"resource": null, "sample_rate": 1}]'` is equivalent to `DD_TRACE_SAMPLING_RULES='[{"sample_rate": 1}]'`.
  - Fixes inconsistent trace sampling during partial flush (traces with more than 300 spans). Sampling rules are now correctly applied to the root span instead of a random payload span. Since traces are sampled only once, using the wrong span could bypass sampling rules and cause the agent to apply default rate limits. Fixes a regression introduced in v2.8.0.
- kafka: Resolves an issue where the `list_topics` call in the Kafka integration could hang indefinitely. The integration now sets a 1-second timeout on `list_topics` calls and caches both successful cluster ID results and failures (with a 5-minute retry interval) to prevent repeated slow metadata queries.
- Code Security (IAST): Fixes Gevent worker timeouts by preloading IAST early and refactoring taint sink initialization to remove legacy import-based triggers, ensuring reliable instrumentation.
- LLM Observability
  - Fixes a bug where code execution outputs done through `google-genai` would result in no output messages on the LLM Observability `llm` span.
  - langgraph: Resolves `ModuleNotFoundError` errors when patching `langgraph>=0.6.0`.
  - openai: Fixes an issue where using the OpenAI Responses API with `openai>=1.66.0,<1.66.2` would result in an `AttributeError`.
- Remote Config: Eagerly query Remote Config upon process startup to ensure timely configuration updates.
- Flares: Fixes to make the tracer flares match the spec.
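The null-versus-unset equivalence for sampling rules can be illustrated with a minimal matcher (an illustrative sketch, not ddtrace's actual rule engine):

```python
import json

def rule_matches(rule: dict, span: dict) -> bool:
    """Match a sampling rule against a span, treating null and unset fields the same."""
    for field in ("service", "resource", "name"):
        expected = rule.get(field)
        if expected is None:  # unset or explicit JSON null: matches anything
            continue
        if span.get(field) != expected:
            return False
    return True

span = {"service": "web", "resource": "GET /", "name": "flask.request"}
rule_null = json.loads('{"resource": null, "sample_rate": 1}')  # explicit null
rule_unset = json.loads('{"sample_rate": 1}')                   # field omitted

same = rule_matches(rule_null, span) == rule_matches(rule_unset, span)
```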
Other Changes
- tracing: Improves debug logging in the `HTTPPropagator.inject` method to help diagnose issues with sampling decisions.
- profiling: Removes redundant sampling code from the memory profiler, improving overhead and accuracy. Sizes and counts of objects allocated since the last profile are now reported more accurately. The `DD_PROFILING_MAX_EVENTS` environment variable is deprecated and does nothing; use `DD_PROFILING_HEAP_SAMPLE_SIZE` to control the sampling frequency of the memory profiler.
3.12.0rc2
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Prelude
AI Guard is an upcoming Datadog security product currently under active design and development. Note: The Python SDK is released as a technical preview. Functionality and APIs are subject to change, and backward compatibility is not guaranteed at this stage.
Upgrade Notes
- profiling: The timeline view is now enabled by default (`DD_PROFILING_TIMELINE_ENABLED=True`). This provides visualization of profiling data with timing information.
Deprecation Notes
- tracing
  - `ddtrace.settings.__init__` imports are deprecated and will be removed in version 4.0.0.
  - Deprecates the `non_active_span` parameter in the `HTTPPropagator.inject` method. `HTTPPropagator.inject(context=...)` should be used to inject headers instead.
- profiling: Windows support is removed. Please file an issue if you want this reverted.
New Features
- App and API Protection (AAP): Introduce a public Python SDK that provides programmatic access to AI Guard’s public endpoint.
- asgi: Adds tracing on websocket spans with `DD_TRACE_WEBSOCKET_MESSAGES_ENABLED`, which replaces `DD_TRACE_WEBSOCKET_MESSAGES`.
- CI Visibility: Introduces an alternative method for collecting and sending test spans. In this mode, the `CIVisibility` tracer is kept separate from the global `ddtrace` tracer, which helps avoid interference between test and non-test tracer configurations. This mode is currently experimental and can be enabled by setting the environment variable `DD_CIVISIBILITY_USE_BETA_WRITER` to `true`.
- LLM Observability
  - Adds support for collecting tool definitions, tool calls, and tool results in the Anthropic integration.
  - Increases the span event size limit from 1MB to 5MB.
  - Records agent manifest information for LangGraph compiled graphs.
  - Adds the ability to drop spans by having a `SpanProcessor` return `None`.
- mcp: Adds distributed tracing support for MCP tool calls across client-server boundaries by default. To disable distributed tracing for MCP, set `DD_MCP_DISTRIBUTED_TRACING=False` for both the client and server.
Bug Fixes
- AAP
  - Resolves a bug where ASGI middleware would not catch the `BlockingException` raised by AAP because it was aggregated in an `ExceptionGroup`.
  - Resolves an issue where a malformed package would prevent reporting of other correctly formed packages to Software Composition Analysis.
  - Resolves an issue where the `route` parameter was not being correctly handled in the Django `path` function.
- CI Visibility: Resolves an issue where using the pytest `skipif` marker with the condition passed as a keyword argument (or not provided at all) would cause the test to be reported as failed, in particular when `flaky` or `pytest-rerunfailures` were also used.
- ddtrace_api: Fixes a bug in the ddtrace_api integration in which `patch()` with no arguments, and thus `patch_all()`, breaks the integration.
- django: Fixes an incorrect component tag being set for Django ORM spans.
- dynamic instrumentation: Extends captured value redaction in mappings with keys of type `bytes`.
- openai: Resolves an issue where an uninitialized `OpenAI`/`AsyncOpenAI` client would result in an `AttributeError`.
- pydantic_ai: Resolves an issue where enabling the Pydantic AI integration for `pydantic-ai-slim >= 0.4.4` would fail. See [this issue](https://github.com/DataDog/dd-trace-py/issues/14161) for more details.
- tracing
  - Resolves an issue where sampling rules with null values for service, resource, or name would not match any spans, since these fields are always exported as strings and never null. Now, null and unset fields are treated the same. For example, `DD_TRACE_SAMPLING_RULES='[{"resource": null, "sample_rate": 1}]'` is equivalent to `DD_TRACE_SAMPLING_RULES='[{"sample_rate": 1}]'`.
  - Fixes inconsistent trace sampling during partial flush (traces with more than 300 spans). Sampling rules are now correctly applied to the root span instead of a random payload span. Since traces are sampled only once, using the wrong span could bypass sampling rules and cause the agent to apply default rate limits. Fixes a regression introduced in v2.8.0.
- kafka: Resolves an issue where the `list_topics` call in the Kafka integration could hang indefinitely. The integration now sets a 1-second timeout on `list_topics` calls and caches both successful cluster ID results and failures (with a 5-minute retry interval) to prevent repeated slow metadata queries.
- Code Security (IAST): Fixes Gevent worker timeouts by preloading IAST early and refactoring taint sink initialization to remove legacy import-based triggers, ensuring reliable instrumentation.
- LLM Observability
  - Fixes a bug where code execution outputs done through `google-genai` would result in no output messages on the LLM Observability `llm` span.
  - langgraph: Resolves `ModuleNotFoundError` errors when patching `langgraph>=0.6.0`.
  - openai: Fixes an issue where using the OpenAI Responses API with `openai>=1.66.0,<1.66.2` would result in an `AttributeError`.
- Remote Config: Eagerly query Remote Config upon process startup to ensure timely configuration updates.
- Flares: Fixes to make the tracer flares match the spec.
Other Changes
- tracing: Improves debug logging in the `HTTPPropagator.inject` method to help diagnose issues with sampling decisions.
- profiling: Removes redundant sampling code from the memory profiler, improving overhead and accuracy. Sizes and counts of objects allocated since the last profile are now reported more accurately. The `DD_PROFILING_MAX_EVENTS` environment variable is deprecated and does nothing; use `DD_PROFILING_HEAP_SAMPLE_SIZE` to control the sampling frequency of the memory profiler.
3.11.3
Bug Fixes
- AAP: Resolves a bug where ASGI middleware would not catch the `BlockingException` raised by AAP because it was aggregated in an `ExceptionGroup`.
- AAP: Resolves an issue where the `route` parameter was not being correctly handled in the Django `path` function.
- dynamic instrumentation: Extends captured value redaction in mappings with keys of type `bytes`.
- kafka: Resolves an issue where the `list_topics` call in the Kafka integration could hang indefinitely. The integration now sets a 1-second timeout on `list_topics` calls and caches both successful cluster ID results and failures (with a 5-minute retry interval) to prevent repeated slow metadata queries.
- openai: Resolves an issue where an uninitialized `OpenAI`/`AsyncOpenAI` client would result in an `AttributeError`.
- pydantic_ai: Resolves an issue where enabling the Pydantic AI integration for `pydantic-ai-slim >= 0.4.4` would fail. See [this issue](https://github.com/DataDog/dd-trace-py/issues/14161) for more details.
- Remote Config: Eagerly query Remote Config upon process startup to ensure timely configuration updates.
- tracing: Fixes inconsistent trace sampling during partial flush (traces with more than 300 spans). Sampling rules are now correctly applied to the root span instead of a random payload span. Since traces are sampled only once, using the wrong span could bypass sampling rules and cause the agent to apply default rate limits. Fixes a regression introduced in v2.8.0.
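The timeout-plus-caching strategy described for `list_topics` follows a common pattern: cache a successful result indefinitely and a failure for a retry interval. The sketch below uses made-up names and is not the integration's actual code:

```python
import time

class ClusterMetadataCache:
    """Cache a fetched value; on failure, skip retries for `failure_ttl` seconds."""
    def __init__(self, failure_ttl: float = 300.0):  # 5-minute retry interval
        self.failure_ttl = failure_ttl
        self._value = None
        self._failed_at = None

    def get(self, fetch):
        if self._value is not None:
            return self._value  # cached success
        now = time.monotonic()
        if self._failed_at is not None and now - self._failed_at < self.failure_ttl:
            return None  # recent failure cached: avoid another slow call
        try:
            self._value = fetch()  # e.g. a list_topics call with a 1-second timeout
            return self._value
        except Exception:
            self._failed_at = now
            return None

calls = []
def timing_out_fetch():
    calls.append(1)
    raise TimeoutError("metadata request timed out")

cache = ClusterMetadataCache()
first = cache.get(timing_out_fetch)   # performs the slow call once, caches the failure
second = cache.get(timing_out_fetch)  # within the retry interval: no new call made
```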
3.12.0rc1
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Prelude
AI Guard is an upcoming Datadog security product currently under active design and development. Note: The Python SDK is released as a technical preview. Functionality and APIs are subject to change, and backward compatibility is not guaranteed at this stage.
Upgrade Notes
- profiling: The timeline view is now enabled by default (`DD_PROFILING_TIMELINE_ENABLED=True`). This provides visualization of profiling data with timing information.
Deprecation Notes
- tracing
  - `ddtrace.settings.__init__` imports are deprecated and will be removed in version 4.0.0.
  - Deprecates the `non_active_span` parameter in the `HTTPPropagator.inject` method. `HTTPPropagator.inject(context=...)` should be used to inject headers instead.
- profiling: Windows support is removed. Please file an issue if you want this reverted.
New Features
- App and API Protection (AAP): Introduce a public Python SDK that provides programmatic access to AI Guard’s public endpoint.
- asgi: Adds tracing on websocket spans with `DD_TRACE_WEBSOCKET_MESSAGES_ENABLED`, which replaces `DD_TRACE_WEBSOCKET_MESSAGES`.
- CI Visibility: Introduces an alternative method for collecting and sending test spans. In this mode, the `CIVisibility` tracer is kept separate from the global `ddtrace` tracer, which helps avoid interference between test and non-test tracer configurations. This mode is currently experimental and can be enabled by setting the environment variable `DD_CIVISIBILITY_USE_BETA_WRITER` to `true`.
- LLM Observability
  - Increases the span event size limit from 1MB to 5MB.
  - Records agent manifest information for LangGraph compiled graphs.
  - Adds the ability to drop spans by having a `SpanProcessor` return `None`.
- mcp: Adds distributed tracing support for MCP tool calls across client-server boundaries by default. To disable distributed tracing for MCP, set `DD_MCP_DISTRIBUTED_TRACING=False` for both the client and server.
Bug Fixes
- AAP
  - Resolves a bug where ASGI middleware would not catch the `BlockingException` raised by AAP because it was aggregated in an `ExceptionGroup`.
  - Resolves an issue where the `route` parameter was not being correctly handled in the Django `path` function.
- CI Visibility: Resolves an issue where using the pytest `skipif` marker with the condition passed as a keyword argument (or not provided at all) would cause the test to be reported as failed, in particular when `flaky` or `pytest-rerunfailures` were also used.
- ddtrace_api: Fixes a bug in the ddtrace_api integration in which `patch()` with no arguments, and thus `patch_all()`, breaks the integration.
- django: Fixes an incorrect component tag being set for Django ORM spans.
- dynamic instrumentation: Extends captured value redaction in mappings with keys of type `bytes`.
- ASM: Resolves an issue where a malformed package would prevent reporting of other correctly formed packages to Software Composition Analysis.
- openai: Resolves an issue where an uninitialized `OpenAI`/`AsyncOpenAI` client would result in an `AttributeError`.
- pydantic_ai: Resolves an issue where enabling the Pydantic AI integration for `pydantic-ai-slim >= 0.4.4` would fail. See [this issue](https://github.com/DataDog/dd-trace-py/issues/14161) for more details.
- tracing
  - Resolves an issue where sampling rules with null values for service, resource, or name would not match any spans, since these fields are always exported as strings and never null. Now, null and unset fields are treated the same. For example, `DD_TRACE_SAMPLING_RULES='[{"resource": null, "sample_rate": 1}]'` is equivalent to `DD_TRACE_SAMPLING_RULES='[{"sample_rate": 1}]'`.
  - Fixes inconsistent trace sampling during partial flush (traces with more than 300 spans). Sampling rules are now correctly applied to the root span instead of a random payload span. Since traces are sampled only once, using the wrong span could bypass sampling rules and cause the agent to apply default rate limits. Fixes a regression introduced in v2.8.0.
- kafka: Resolves an issue where the `list_topics` call in the Kafka integration could hang indefinitely. The integration now sets a 1-second timeout on `list_topics` calls and caches both successful cluster ID results and failures (with a 5-minute retry interval) to prevent repeated slow metadata queries.
- Code Security (IAST): Fixes Gevent worker timeouts by preloading IAST early and refactoring taint sink initialization to remove legacy import-based triggers, ensuring reliable instrumentation.
- LLM Observability
  - Fixes a bug where code execution outputs done through `google-genai` would result in no output messages on the LLM Observability `llm` span.
  - langgraph: Resolves `ModuleNotFoundError` errors when patching `langgraph>=0.6.0`.
  - openai: Fixes an issue where using the OpenAI Responses API with `openai>=1.66.0,<1.66.2` would result in an `AttributeError`.
- Remote Config: Eagerly query Remote Config upon process startup to ensure timely configuration updates.
- Flares: Fixes to make the tracer flares match the spec.
Other Changes
- tracing: Improves debug logging in the `HTTPPropagator.inject` method to help diagnose issues with sampling decisions.
- profiling: Removes redundant sampling code from the memory profiler, improving overhead and accuracy. Sizes and counts of objects allocated since the last profile are now reported more accurately. The `DD_PROFILING_MAX_EVENTS` environment variable is deprecated and does nothing; use `DD_PROFILING_HEAP_SAMPLE_SIZE` to control the sampling frequency of the memory profiler.
3.11.2
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Bug Fixes
- CI Visibility: Resolves an issue where using the pytest `skipif` marker with the condition passed as a keyword argument (or not provided at all) would cause the test to be reported as failed, in particular when `flaky` or `pytest-rerunfailures` were also used.
3.11.1
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Bug Fixes
- ddtrace_api: Fixes a bug in the ddtrace_api integration in which `patch()` with no arguments, and thus `patch_all()`, breaks the integration.
3.11.0
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Upgrade Notes
- CI Visibility: Code coverage collection for Test Impact Analysis with `pytest` no longer requires `coverage.py` as a dependency.
Deprecation Notes
- CI Visibility: The `freezegun` integration is deprecated and will be removed in 4.0.0. The `freezegun` integration is no longer necessary for the correct reporting of test durations and timestamps.
New Features
- AAP: This introduces endpoint discovery for Django applications. It allows the collection of API endpoints of a Django application at startup.
- aws: Sets `peer.service` explicitly to improve the accuracy of serverless service representation. `base_service` defaults to the unhelpful value "runtime" in serverless spans; `base_service` has been removed to prevent unwanted service overrides in Lambda spans.
- LLM Observability
  - Adds support to `submit_evaluation_for()` for submitting boolean metrics in LLMObs evaluation metrics, using `metric_type="boolean"`. This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow.
  - Introduces tagging agent-specific metadata on agent spans when using CrewAI, OpenAI Agents, or PydanticAI.
  - Bedrock Converse `toolResult` content blocks are formatted as tool messages on LLM Observability `llm` spans' inputs.
  - Introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
  - Introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
  - Adds support to automatically submit Google GenAI calls to LLM Observability.
  - Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
  - Adds support to automatically submit PydanticAI request spans to LLM Observability.
- mcp: Adds tracing support for the `mcp.client.session.ClientSession.call_tool` and `mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool` methods in the MCP SDK.
- otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable `DD_METRICS_OTEL_ENABLED` must be set to `true` and the application must include its own OTLP metrics exporter.
- asgi: Obfuscates resource names on 404 spans when `DD_ASGI_OBFUSCATE_404_RESOURCE` is enabled (disabled by default).
- code origin: Adds support for in-product enablement.
- logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either `ddtrace-run` or `import ddtrace.auto`. To disable this feature, set the environment variable `DD_LOGS_INJECTION` to `False`.
- google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's `embed_content` methods.
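Assuming a hypothetical `app.py` entry point, opting out of the now-default log injection looks like:

```shell
# Log injection is on by default when running under ddtrace-run (or `import ddtrace.auto`)
ddtrace-run python app.py

# Opt out explicitly
DD_LOGS_INJECTION=false ddtrace-run python app.py
```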
Bug Fixes
- CI Visibility
  - Resolves an issue where `freezegun` would not work with tests defined in `unittest` classes.
  - Resolves an issue where using Test Optimization together with external retry plugins such as `flaky` or `pytest-rerunfailures` would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
  - Resolves an issue where setting custom loggers during a test session could cause the tracer to emit logs to a logging stream handler that was already closed by `pytest`, leading to `I/O operation on closed file` errors at the end of the test session.
  - Resolves an issue where test retry numbers were not reported correctly when tests were run with `pytest-xdist`.
- AAP: Resolves an issue where FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the `DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS` environment variable.
- litellm: Resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
- lib-injection: Fixes a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.
- LLM Observability
  - Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - openai
    - Resolves an issue where openai tracing caused an `AttributeError` while parsing `NoneType` streamed chunk deltas.
    - Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an `AttributeError`.
  - Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - Fixes an issue where input messages for tool messages were not being captured properly.
  - Resolves an issue where incomplete streamed responses returned from the OpenAI Responses API caused an index error with LLM Observability tracing.
  - Fixes an issue where LangGraph span links for execution flows were broken for `langgraph>=0.3.22`.
  - Resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
  - Fixes an issue where grabbing token values for some providers through `langchain` libraries raised a `ValueError`.
  - Resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
- dynamic instrumentation: Improves support for function probes with frameworks and applications that interact with the Python garbage collector (e.g., synapse).
- logging: Fixes an issue where `dd.*` properties were not injected onto logging records unless the `DD_LOGS_ENABLED=true` environment variable was set (the default value is `structured`). This issue caused problems for non-structured loggers that set their own format string instead of having ddtrace set the logging format string for you.
- azure_functions: Resolves an issue where a function that consumes a list of service bus messages throws an exception when instrumented.
- profiling
  - Fixes an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
  - Fixes a bug where profile frames from the package specified by `DD_MAIN_PACKAGE` were marked as "library" code in the profiler UI.
- tracing
  - Resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
  - Resolves an issue where the `@tracer.wrap()` decorator failed to preserve the decorated function's return type, returning `Any` instead of the original return type.
  - Resolves an issue where spans would have incorrect timestamps and durations when `freezegun` was in use. With this change, the `freezegun` integration is no longer necessary.
  - Fixes an issue in which span durations or start timestamps exceeding the platform's `LONG_MAX` caused traces to fail to send.
  - sampling: Trace sampling rules now require all specified tags to be present for a match, instead of ignoring missing tags. Additionally, all glob patterns that do not contain digits (e.g., `*`, `?`, `[ ]`) now work with numeric tags, including decimals.
- Code Security (IAST)
  - Improves compatibility with `eval()` when used with custom `globals` and `locals`. When instrumenting `eval()`, Python behaves differently depending on whether `locals` is passed. If both `globals` and `locals` are provided, new functions are stored in the `locals` dictionary. This fix ensures any dynamically defined functions (e.g., via `eval(code, globals, locals)`) are accessible by copying them from `locals` to `globals` when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior.
  - AST analysis may fail or behave unexpectedly in cases where code overrides Python built-ins or globals at runtime; e.g., `mysqlsh` (MySQL Shell) reassigns `globals` with a custom object. This can interfere with analysis or instrumentation logic.
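The tag-matching semantics can be approximated with `fnmatch`-style globs. The helper below is an illustrative sketch of the "all specified tags must be present" rule, not the tracer's actual matcher:

```python
from fnmatch import fnmatchcase

def tags_match(rule_tags: dict, span_tags: dict) -> bool:
    for key, pattern in rule_tags.items():
        if key not in span_tags:
            return False  # a missing tag no longer counts as a match
        # Numeric tag values are matched as strings, so digit-free globs apply.
        if not fnmatchcase(str(span_tags[key]), pattern):
            return False
    return True

span_tags = {"http.status_code": 200, "env": "prod"}
matched = tags_match({"http.status_code": "*", "env": "prod"}, span_tags)  # glob on a numeric tag
missing = tags_match({"region": "*"}, span_tags)                           # rule tag absent on span
```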
Other Changes
- openai: Removes I/O and request/response attribute tags from the APM spans for OpenAI LLM traced completion/chat/response requests and responses, which are duplicated in LLM Observability. `openai.request.client` has been retained and renamed to `openai.request.provider`.
- anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LL...
3.11.0rc3
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Upgrade Notes
- CI Visibility: Code coverage collection for Test Impact Analysis with `pytest` no longer requires `coverage.py` as a dependency.
Deprecation Notes
- CI Visibility: The `freezegun` integration is deprecated and will be removed in 4.0.0. The `freezegun` integration is no longer necessary for the correct reporting of test durations and timestamps.
New Features
- AAP: This introduces endpoint discovery for Django applications. It allows the collection of API endpoints of a Django application at startup.
- aws: Sets `peer.service` explicitly to improve the accuracy of serverless service representation. `base_service` defaults to the unhelpful value "runtime" in serverless spans; `base_service` has been removed to prevent unwanted service overrides in Lambda spans.
- LLM Observability
  - Adds support to `submit_evaluation_for()` for submitting boolean metrics in LLMObs evaluation metrics, using `metric_type="boolean"`. This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow.
  - Introduces tagging agent-specific metadata on agent spans when using CrewAI, OpenAI Agents, or PydanticAI.
  - Bedrock Converse `toolResult` content blocks are formatted as tool messages on LLM Observability `llm` spans' inputs.
  - Introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
  - Introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
  - Adds support to automatically submit Google GenAI calls to LLM Observability.
  - Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
  - Adds support to automatically submit PydanticAI request spans to LLM Observability.
- mcp: Adds tracing support for the `mcp.client.session.ClientSession.call_tool` and `mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool` methods in the MCP SDK.
- otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable `DD_METRICS_OTEL_ENABLED` must be set to `true` and the application must include its own OTLP metrics exporter.
- asgi: Obfuscate resource names on 404 spans when `DD_ASGI_OBFUSCATE_404_RESOURCE` is enabled (disabled by default).
- code origin: Added support for in-product enablement.
- logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either `ddtrace-run` or `import ddtrace.auto`. To disable this feature, set the environment variable `DD_LOGS_INJECTION` to `False`.
- google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's `embed_content` methods.
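With automatic log injection enabled, a Datadog-style format string can reference the injected `dd.*` attributes. The sketch below simulates the injection on a plain `LogRecord` so it runs without ddtrace installed; the attribute names match what ddtrace injects, but the values here are fake.

```python
import logging

# Format string referencing the dd.* attributes that ddtrace's log
# injection adds to each LogRecord (values below are simulated).
FORMAT = ("%(levelname)s [dd.service=%(dd.service)s "
          "dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] "
          "%(message)s")

record = logging.LogRecord("app", logging.INFO, "app.py", 1, "hello", None, None)
# Simulate the injection that ddtrace would perform automatically:
record.__dict__.update({
    "dd.service": "my-service",
    "dd.trace_id": "1234567890",
    "dd.span_id": "987654321",
})
print(logging.Formatter(FORMAT).format(record))
# INFO [dd.service=my-service dd.trace_id=1234567890 dd.span_id=987654321] hello
```

Percent-style formatting looks keys up in `record.__dict__`, so dotted names like `dd.trace_id` work even though they are not valid attribute identifiers.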
Bug Fixes
- CI Visibility
  - This fix resolves an issue where `freezegun` would not work with tests defined in `unittest` classes.
  - This fix resolves an issue where using Test Optimization together with external retry plugins such as `flaky` or `pytest-rerunfailures` would cause test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
  - This fix resolves an issue where setting custom loggers during a test session could cause the tracer to emit logs to a logging stream handler that was already closed by `pytest`, leading to `I/O operation on closed file` errors at the end of the test session.
  - This fix resolves an issue where test retry numbers were not reported correctly when tests were run with `pytest-xdist`.
- AAP: This fix resolves an issue where FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the `DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS` environment variable.
- litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
- lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.
- LLM Observability
  - Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - openai
    - This fix resolves an issue where openai tracing caused an `AttributeError` while parsing `NoneType` streamed chunk deltas.
    - Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an `AttributeError`.
  - Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - Fixes an issue where input messages for tool messages were not being captured properly.
  - This fix resolves an issue where incomplete streamed responses returned from the OpenAI Responses API caused an index error with LLM Observability tracing.
  - Fixes an issue where LangGraph span links for execution flows were broken for `langgraph>=0.3.22`.
  - This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
  - Fixed an issue where grabbing token values for some providers through `langchain` libraries raised a `ValueError`.
  - This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
- dynamic instrumentation: Improve support for function probes with frameworks and applications that interact with the Python garbage collector (e.g. synapse).
- logging: Fix an issue where `dd.*` properties were not injected onto logging records unless the `DD_LOGS_ENABLED=true` env var was set (the default value is `structured`). This caused problems for non-structured loggers that set their own format string instead of having ddtrace set the logging format string.
- azure_functions: This fix resolves an issue where a function that consumes a list of service bus messages throws an exception when instrumented.
- profiling
  - Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
  - Fix a bug where profile frames from the package specified by `DD_MAIN_PACKAGE` were marked as "library" code in the profiler UI.
- tracing
  - This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
  - This fix resolves an issue where the `@tracer.wrap()` decorator failed to preserve the decorated function's return type, returning `Any` instead of the original return type.
  - This fix resolves an issue where spans would have incorrect timestamps and durations when `freezegun` was in use. With this change, the `freezegun` integration is no longer necessary.
  - Fixes an issue in which span durations or start timestamps exceeding the platform's `LONG_MAX` caused traces to fail to send.
  - sampling: Trace sampling rules now require all specified tags to be present for a match, instead of ignoring missing tags. Additionally, glob patterns that do not contain digits (e.g., `*`, `?`, `[ ]`) now work with numeric tags, including decimals.
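The new tag-matching semantics can be sketched in a few lines. This is illustrative pseudocode of the behavior described above, not ddtrace source: a rule matches only when every tag it names is present on the span and its value matches the rule's glob pattern (numeric tag values are compared as strings).

```python
from fnmatch import fnmatch

def rule_matches(rule_tags: dict, span_tags: dict) -> bool:
    """Sketch of the new semantics: all rule tags must be present and match."""
    for name, pattern in rule_tags.items():
        if name not in span_tags:
            # New behavior: a missing tag means no match (previously ignored).
            return False
        # Digit-free globs like "2*" now also match numeric tag values.
        if not fnmatch(str(span_tags[name]), pattern):
            return False
    return True

rule = {"http.status_code": "2*", "env": "prod"}
print(rule_matches(rule, {"http.status_code": 200, "env": "prod"}))  # True
print(rule_matches(rule, {"http.status_code": 200}))                 # False: 'env' missing
```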
- Code Security (IAST)
  - Improved compatibility with `eval()` when used with custom globals and locals. When instrumenting `eval()`, Python behaves differently depending on whether locals is passed: if both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via `eval(code, globals, locals)`) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior.
  - AST analysis may fail or behave unexpectedly in cases where code overrides Python built-ins or globals at runtime, e.g., `mysqlsh` (MySQL Shell) reassigns globals with a custom object. This can interfere with analysis or instrumentation logic.
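The `eval()` fix above works around a standard CPython behavior, demonstrated below with plain Python (no ddtrace involved): when both a globals and a locals mapping are passed, names defined by the evaluated code land in locals, not globals.

```python
# eval() can execute an 'exec'-mode code object, which is how code containing
# statements such as function definitions ends up going through eval().
src = "def greet():\n    return 'hi'"
code = compile(src, "<dynamic>", "exec")

g, l = {}, {}
eval(code, g, l)

print("greet" in g)  # False: the function was bound in the locals mapping
print("greet" in l)  # True

# The fix effectively copies such definitions back into globals so that
# later lookups (e.g. by libraries like babel) can find them:
g.update(l)
print(g["greet"]())  # hi
```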
Other Changes
- openai: Removes I/O and request/response attribute tags from the APM spans for OpenAI LLM traced completion/chat/response requests and responses, which are duplicated in LLM Observability. `openai.request.client` has been retained and renamed to `openai.request.provider`.
- anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LLM Observability span.
3.10.3
Bug Fixes
- dynamic instrumentation: Improve support for function probes with frameworks and applications that interact with the Python garbage collector (e.g. synapse).
- tracing
  - This fix resolves an issue where the `@tracer.wrap()` decorator failed to preserve the decorated function's return type, returning `Any` instead of the original return type.
  - This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
- Code Security: AST analysis may fail or behave unexpectedly in cases where code overrides Python built-ins or globals at runtime, e.g., `mysqlsh` (MySQL Shell) reassigns globals with a custom object. This can interfere with analysis or instrumentation logic.
- litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
- django: Fix an incorrect component tag being set for Django ORM spans.
3.11.0rc2
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Upgrade Notes
- CI Visibility: Code coverage collection for Test Impact Analysis with `pytest` no longer requires `coverage.py` as a dependency.
Deprecation Notes
- CI Visibility: The `freezegun` integration is deprecated and will be removed in 4.0.0. The `freezegun` integration is no longer necessary for the correct reporting of test durations and timestamps.
New Features
- LLM Observability
  - Added support to `submit_evaluation_for()` for submitting boolean metrics in LLMObs evaluation metrics, using `metric_type="boolean"`. This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow.
  - Bedrock Converse `toolResult` content blocks are formatted as tool messages on LLM Observability `llm` spans' inputs.
  - This introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
  - This introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
  - Adds support to automatically submit Google GenAI calls to LLM Observability.
  - Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
  - Adds support to automatically submit PydanticAI request spans to LLM Observability.
- mcp: Adds tracing support for `mcp.client.session.ClientSession.call_tool` and `mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool` methods in the MCP SDK.
- otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable `DD_METRICS_OTEL_ENABLED` must be set to `true` and the application must include its own OTLP metrics exporter.
- asgi: Obfuscate resource names on 404 spans when `DD_ASGI_OBFUSCATE_404_RESOURCE` is enabled (disabled by default).
- code origin: Added support for in-product enablement.
- logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either `ddtrace-run` or `import ddtrace.auto`. To disable this feature, set the environment variable `DD_LOGS_INJECTION` to `False`.
- google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's `embed_content` methods.
Bug Fixes
- CI Visibility
  - This fix resolves an issue where `freezegun` would not work with tests defined in `unittest` classes.
  - This fix resolves an issue where using Test Optimization together with external retry plugins such as `flaky` or `pytest-rerunfailures` would cause test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
  - This fix resolves an issue where setting custom loggers during a test session could cause the tracer to emit logs to a logging stream handler that was already closed by `pytest`, leading to `I/O operation on closed file` errors at the end of the test session.
  - This fix resolves an issue where test retry numbers were not reported correctly when tests were run with `pytest-xdist`.
- AAP: This fix resolves an issue where FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the `DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS` environment variable.
- litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
- lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.
- LLM Observability
  - Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - openai
    - This fix resolves an issue where openai tracing caused an `AttributeError` while parsing `NoneType` streamed chunk deltas.
    - Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an `AttributeError`.
  - Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - Fixes an issue where input messages for tool messages were not being captured properly.
  - This fix resolves an issue where incomplete streamed responses returned from the OpenAI Responses API caused an index error with LLM Observability tracing.
  - Fixes an issue where LangGraph span links for execution flows were broken for `langgraph>=0.3.22`.
  - This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
  - Fixed an issue where grabbing token values for some providers through `langchain` libraries raised a `ValueError`.
  - This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
- dynamic instrumentation: Improve support for function probes with frameworks and applications that interact with the Python garbage collector (e.g. synapse).
- logging: Fix an issue where `dd.*` properties were not injected onto logging records unless the `DD_LOGS_ENABLED=true` env var was set (the default value is `structured`). This caused problems for non-structured loggers that set their own format string instead of having ddtrace set the logging format string.
- azure_functions: This fix resolves an issue where a function that consumes a list of service bus messages throws an exception when instrumented.
- profiling
  - Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
  - Fix a bug where profile frames from the package specified by `DD_MAIN_PACKAGE` were marked as "library" code in the profiler UI.
- tracing
  - This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
  - This fix resolves an issue where the `@tracer.wrap()` decorator failed to preserve the decorated function's return type, returning `Any` instead of the original return type.
  - This fix resolves an issue where spans would have incorrect timestamps and durations when `freezegun` was in use. With this change, the `freezegun` integration is no longer necessary.
  - Fixes an issue in which span durations or start timestamps exceeding the platform's `LONG_MAX` caused traces to fail to send.
- Code Security (IAST)
  - Improved compatibility with `eval()` when used with custom globals and locals. When instrumenting `eval()`, Python behaves differently depending on whether locals is passed: if both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via `eval(code, globals, locals)`) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior.
  - AST analysis may fail or behave unexpectedly in cases where code overrides Python built-ins or globals at runtime, e.g., `mysqlsh` (MySQL Shell) reassigns globals with a custom object. This can interfere with analysis or instrumentation logic.
Other Changes
- openai: Removes I/O and request/response attribute tags from the APM spans for OpenAI LLM traced completion/chat/response requests and responses, which are duplicated in LLM Observability. `openai.request.client` has been retained and renamed to `openai.request.provider`.
- anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LLM Observability span.
- gemini: Removes the IO data from the APM spans for Gemini LLM requests and responses, which is duplicated in the LLM Observability span.
- vertexai: Removes the IO data from the APM spans for VertexAI LLM requests and responses, which is duplicated in the LLM Observability span.
- langchain: Removes I/O tags from APM spans for LangChain LLM requests and responses, which are duplicated in LLM Observability.
- Sampling rules now only support glob matchers; regex and callable matchers are no longer supported. This simplifies the code and removes functionality that was removed from the public API in ddtrace v3.0.0.
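Since glob patterns are now the only supported matcher type, sampling rules are expressed as globs in the `DD_TRACE_SAMPLING_RULES` JSON. The fragment below is an illustrative configuration sketch; the service and resource names are hypothetical.

```shell
# Illustrative only: glob matchers (now the sole matcher type) in trace
# sampling rules. Rule fields and names here are example values.
export DD_TRACE_SAMPLING_RULES='[
  {"service": "web-*", "name": "flask.request", "sample_rate": 0.5},
  {"resource": "GET /health*", "sample_rate": 0.0}
]'
```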