Releases: DataDog/dd-trace-py
3.11.0
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Upgrade Notes
- CI Visibility: Code coverage collection for Test Impact Analysis with `pytest` does not require `coverage.py` as a dependency anymore.
Deprecation Notes
- CI Visibility: The `freezegun` integration is deprecated and will be removed in 4.0.0. The `freezegun` integration is not necessary anymore for the correct reporting of test durations and timestamps.
New Features
- AAP: This introduces endpoint discovery for Django applications. It allows the collection of API endpoints of a Django application at startup.
- aws: Set `peer.service` explicitly to improve the accuracy of serverless service representation. `base_service` defaults to the unhelpful value "runtime" in serverless spans, so `base_service` is removed to prevent unwanted service overrides in Lambda spans.
- LLM Observability
  - Added support to `submit_evaluation_for()` for submitting boolean metrics in LLMObs evaluation metrics, using `metric_type="boolean"`. This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow (a minimal sketch follows this list).
  - This introduces tagging agent-specific metadata on agent spans when using CrewAI, OpenAI Agents, or PydanticAI.
  - Bedrock Converse `toolResult` content blocks are formatted as tool messages on LLM Observability `llm` spans' inputs.
  - This introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
  - This introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
  - Adds support to automatically submit Google GenAI calls to LLM Observability.
  - Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
  - Adds support to automatically submit PydanticAI request spans to LLM Observability.
- mcp: Adds tracing support for `mcp.client.session.ClientSession.call_tool` and `mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool` methods in the MCP SDK.
- otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable `DD_METRICS_OTEL_ENABLED` must be set to `true` and the application must include its own OTLP metrics exporter.
- asgi: Obfuscate resource names on 404 spans when `DD_ASGI_OBFUSCATE_404_RESOURCE` is enabled (disabled by default).
- code origin: Added support for in-product enablement.
- logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either `ddtrace-run` or `import ddtrace.auto`. To disable this feature, set the environment variable `DD_LOGS_INJECTION` to `False`.
- google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's `embed_content` methods.
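Below is a minimal sketch of the new boolean metric type on `submit_evaluation_for()`. The `ml_app` name and the `message_id` join tag are illustrative placeholders, and the keyword layout follows the documented LLMObs API rather than anything prescribed by this release.

```python
from ddtrace.llmobs import LLMObs

# Placeholder ML app name; LLM Observability must be enabled first.
LLMObs.enable(ml_app="example-app")

# Submit a binary evaluation result (e.g., toxicity detection) against a
# previously traced LLM span, joined on a hypothetical "message_id" tag.
LLMObs.submit_evaluation_for(
    span_with_tag_value={"tag_key": "message_id", "tag_value": "msg-123"},
    label="is_toxic",
    metric_type="boolean",  # new in 3.11.0
    value=False,
)
```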
Bug Fixes
- CI Visibility
  - This fix resolves an issue where `freezegun` would not work with tests defined in `unittest` classes.
  - This fix resolves an issue where using Test Optimization together with external retry plugins such as `flaky` or `pytest-rerunfailures` would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
  - This fix resolves an issue where setting custom loggers during a test session could cause the tracer to emit logs to a logging stream handler that was already closed by `pytest`, leading to `I/O operation on closed file` errors at the end of the test session.
  - This fix resolves an issue where test retry numbers were not reported correctly when tests were run with `pytest-xdist`.
- AAP: This fix resolves an issue where the FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the `DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS` environment variable.
- litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
- lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.
- LLM Observability
  - Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - openai
    - This fix resolves an issue where openai tracing caused an `AttributeError` while parsing `NoneType` streamed chunk deltas.
    - Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an `AttributeError`.
  - Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - Fixes an issue where input messages for tool messages were not being captured properly.
  - This fix resolves an issue where incomplete streamed responses returned from the OpenAI Responses API caused an index error with LLM Observability tracing.
  - Fixes an issue where LangGraph span links for execution flows were broken for `langgraph>=0.3.22`.
  - This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
  - Fixed an issue where grabbing token values for some providers through `langchain` libraries raised a `ValueError`.
  - This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
- dynamic instrumentation: improve support for function probes with frameworks and applications that interact with the Python garbage collector (e.g. synapse).
- logging: Fix issue where `dd.*` properties were not injected onto logging records unless the `DD_LOGS_INJECTION=true` env var was set (the default value is `structured`). This caused problems for non-structured loggers that set their own format string instead of having ddtrace set the logging format string for you.
- azure_functions: This fix resolves an issue where a function that consumes a list of service bus messages throws an exception when instrumented.
- profiling
  - Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
  - Fix a bug where profile frames from the package specified by `DD_MAIN_PACKAGE` were marked as "library" code in the profiler UI.
- tracing
  - This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
  - This fix resolves an issue where the `@tracer.wrap()` decorator failed to preserve the decorated function's return type, returning `Any` instead of the original return type.
  - This fix resolves an issue where spans would have incorrect timestamps and durations when `freezegun` was in use. With this change, the `freezegun` integration is not necessary anymore.
  - Fixes an issue in which span durations or start timestamps exceeding the platform's `LONG_MAX` caused traces to fail to send.
  - sampling: Trace sampling rules now require all specified tags to be present for a match, instead of ignoring missing tags. Additionally, glob patterns that do not contain digits (e.g., `*`, `?`, `[ ]`) now work with numeric tags, including decimals.
- Code Security (IAST)
  - Improved compatibility with `eval()` when used with custom globals and locals. When instrumenting `eval()`, Python behaves differently depending on whether locals is passed: if both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via `eval(code, globals, locals)`) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior. A short demonstration of this scoping behavior follows this list.
  - AST analysis may fail or behave unexpectedly in cases where code overrides Python built-ins or globals at runtime, e.g., `mysqlsh` (MySQL Shell) reassigns globals with a custom object. This can interfere with analysis or instrumentation logic.
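The standalone snippet below demonstrates the CPython scoping behavior the `eval()` fix above works around. It uses `exec()` because a `def` statement cannot be evaluated as an expression, but the globals/locals semantics are the same.

```python
# With distinct globals and locals mappings, a function defined by the
# executed code lands in locals, not globals.
glb: dict = {}
loc: dict = {}
exec("def greet():\n    return 'hi'", glb, loc)
assert "greet" in loc and "greet" not in glb

# Copying callables from locals into globals (as the fix does) keeps them
# reachable from code that only receives the globals mapping.
glb.update({name: obj for name, obj in loc.items() if callable(obj)})
assert glb["greet"]() == "hi"
```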
Other Changes
- openai: Removes I/O and request/response attribute tags from the APM spans for traced OpenAI LLM completion/chat/response requests and responses, which are duplicated in LLM Observability. `openai.request.client` has been retained and renamed to `openai.request.provider`.
- anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LLM Observability span.
3.11.0rc3
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Upgrade Notes
- CI Visibility: Code coverage collection for Test Impact Analysis with `pytest` does not require `coverage.py` as a dependency anymore.
Deprecation Notes
- CI Visibility: The `freezegun` integration is deprecated and will be removed in 4.0.0. The `freezegun` integration is not necessary anymore for the correct reporting of test durations and timestamps.
New Features
- AAP: This introduces endpoint discovery for Django applications. It allows the collection of API endpoints of a Django application at startup.
- aws: Set `peer.service` explicitly to improve the accuracy of serverless service representation. `base_service` defaults to the unhelpful value "runtime" in serverless spans, so `base_service` is removed to prevent unwanted service overrides in Lambda spans.
- LLM Observability
  - Added support to `submit_evaluation_for()` for submitting boolean metrics in LLMObs evaluation metrics, using `metric_type="boolean"`. This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow.
  - This introduces tagging agent-specific metadata on agent spans when using CrewAI, OpenAI Agents, or PydanticAI.
  - Bedrock Converse `toolResult` content blocks are formatted as tool messages on LLM Observability `llm` spans' inputs.
  - This introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
  - This introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
  - Adds support to automatically submit Google GenAI calls to LLM Observability.
  - Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
  - Adds support to automatically submit PydanticAI request spans to LLM Observability.
- mcp: Adds tracing support for `mcp.client.session.ClientSession.call_tool` and `mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool` methods in the MCP SDK.
- otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable `DD_METRICS_OTEL_ENABLED` must be set to `true` and the application must include its own OTLP metrics exporter.
- asgi: Obfuscate resource names on 404 spans when `DD_ASGI_OBFUSCATE_404_RESOURCE` is enabled (disabled by default).
- code origin: Added support for in-product enablement.
- logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either `ddtrace-run` or `import ddtrace.auto`. To disable this feature, set the environment variable `DD_LOGS_INJECTION` to `False`.
- google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's `embed_content` methods.
Bug Fixes
- CI Visibility
  - This fix resolves an issue where `freezegun` would not work with tests defined in `unittest` classes.
  - This fix resolves an issue where using Test Optimization together with external retry plugins such as `flaky` or `pytest-rerunfailures` would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
  - This fix resolves an issue where setting custom loggers during a test session could cause the tracer to emit logs to a logging stream handler that was already closed by `pytest`, leading to `I/O operation on closed file` errors at the end of the test session.
  - This fix resolves an issue where test retry numbers were not reported correctly when tests were run with `pytest-xdist`.
- AAP: This fix resolves an issue where the FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the `DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS` environment variable.
- litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
- lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.
- LLM Observability
  - Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - openai
    - This fix resolves an issue where openai tracing caused an `AttributeError` while parsing `NoneType` streamed chunk deltas.
    - Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an `AttributeError`.
  - Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - Fixes an issue where input messages for tool messages were not being captured properly.
  - This fix resolves an issue where incomplete streamed responses returned from the OpenAI Responses API caused an index error with LLM Observability tracing.
  - Fixes an issue where LangGraph span links for execution flows were broken for `langgraph>=0.3.22`.
  - This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
  - Fixed an issue where grabbing token values for some providers through `langchain` libraries raised a `ValueError`.
  - This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
- dynamic instrumentation: improve support for function probes with frameworks and applications that interact with the Python garbage collector (e.g. synapse).
- logging: Fix issue where `dd.*` properties were not injected onto logging records unless the `DD_LOGS_INJECTION=true` env var was set (the default value is `structured`). This caused problems for non-structured loggers that set their own format string instead of having ddtrace set the logging format string for you.
- azure_functions: This fix resolves an issue where a function that consumes a list of service bus messages throws an exception when instrumented.
- profiling
  - Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
  - Fix a bug where profile frames from the package specified by `DD_MAIN_PACKAGE` were marked as "library" code in the profiler UI.
- tracing
  - This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
  - This fix resolves an issue where the `@tracer.wrap()` decorator failed to preserve the decorated function's return type, returning `Any` instead of the original return type.
  - This fix resolves an issue where spans would have incorrect timestamps and durations when `freezegun` was in use. With this change, the `freezegun` integration is not necessary anymore.
  - Fixes an issue in which span durations or start timestamps exceeding the platform's `LONG_MAX` caused traces to fail to send.
  - sampling: Trace sampling rules now require all specified tags to be present for a match, instead of ignoring missing tags. Additionally, glob patterns that do not contain digits (e.g., `*`, `?`, `[ ]`) now work with numeric tags, including decimals.
- Code Security (IAST)
  - Improved compatibility with `eval()` when used with custom globals and locals. When instrumenting `eval()`, Python behaves differently depending on whether locals is passed: if both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via `eval(code, globals, locals)`) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior.
  - AST analysis may fail or behave unexpectedly in cases where code overrides Python built-ins or globals at runtime, e.g., `mysqlsh` (MySQL Shell) reassigns globals with a custom object. This can interfere with analysis or instrumentation logic.
Other Changes
- openai: Removes I/O and request/response attribute tags from the APM spans for traced OpenAI LLM completion/chat/response requests and responses, which are duplicated in LLM Observability. `openai.request.client` has been retained and renamed to `openai.request.provider`.
- anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LLM Observability span.
3.10.3
Bug Fixes
- dynamic instrumentation: improve support for function probes with frameworks and applications that interact with the Python garbage collector (e.g. synapse).
- tracing
  - This fix resolves an issue where the `@tracer.wrap()` decorator failed to preserve the decorated function's return type, returning `Any` instead of the original return type.
  - This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
- Code Security: AST analysis may fail or behave unexpectedly in cases where code overrides Python built-ins or globals at runtime, e.g., `mysqlsh` (MySQL Shell) reassigns globals with a custom object. This can interfere with analysis or instrumentation logic.
- litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
- django: Fix incorrect component tag being set for Django ORM spans.
3.11.0rc2
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Upgrade Notes
- CI Visibility: Code coverage collection for Test Impact Analysis with `pytest` does not require `coverage.py` as a dependency anymore.
Deprecation Notes
- CI Visibility: The `freezegun` integration is deprecated and will be removed in 4.0.0. The `freezegun` integration is not necessary anymore for the correct reporting of test durations and timestamps.
New Features
- LLM Observability
  - Added support to `submit_evaluation_for()` for submitting boolean metrics in LLMObs evaluation metrics, using `metric_type="boolean"`. This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow.
  - Bedrock Converse `toolResult` content blocks are formatted as tool messages on LLM Observability `llm` spans' inputs.
  - This introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
  - This introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
  - Adds support to automatically submit Google GenAI calls to LLM Observability.
  - Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
  - Adds support to automatically submit PydanticAI request spans to LLM Observability.
- mcp: Adds tracing support for `mcp.client.session.ClientSession.call_tool` and `mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool` methods in the MCP SDK.
- otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable `DD_METRICS_OTEL_ENABLED` must be set to `true` and the application must include its own OTLP metrics exporter.
- asgi: Obfuscate resource names on 404 spans when `DD_ASGI_OBFUSCATE_404_RESOURCE` is enabled (disabled by default).
- code origin: Added support for in-product enablement.
- logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either `ddtrace-run` or `import ddtrace.auto`. To disable this feature, set the environment variable `DD_LOGS_INJECTION` to `False`.
- google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's `embed_content` methods.
Bug Fixes
- CI Visibility
  - This fix resolves an issue where `freezegun` would not work with tests defined in `unittest` classes.
  - This fix resolves an issue where using Test Optimization together with external retry plugins such as `flaky` or `pytest-rerunfailures` would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
  - This fix resolves an issue where setting custom loggers during a test session could cause the tracer to emit logs to a logging stream handler that was already closed by `pytest`, leading to `I/O operation on closed file` errors at the end of the test session.
  - This fix resolves an issue where test retry numbers were not reported correctly when tests were run with `pytest-xdist`.
- AAP: This fix resolves an issue where the FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the `DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS` environment variable.
- litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
- lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.
- LLM Observability
  - Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - openai
    - This fix resolves an issue where openai tracing caused an `AttributeError` while parsing `NoneType` streamed chunk deltas.
    - Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an `AttributeError`.
  - Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - Fixes an issue where input messages for tool messages were not being captured properly.
  - This fix resolves an issue where incomplete streamed responses returned from the OpenAI Responses API caused an index error with LLM Observability tracing.
  - Fixes an issue where LangGraph span links for execution flows were broken for `langgraph>=0.3.22`.
  - This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
  - Fixed an issue where grabbing token values for some providers through `langchain` libraries raised a `ValueError`.
  - This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
- dynamic instrumentation: improve support for function probes with frameworks and applications that interact with the Python garbage collector (e.g. synapse).
- logging: Fix issue where `dd.*` properties were not injected onto logging records unless the `DD_LOGS_INJECTION=true` env var was set (the default value is `structured`). This caused problems for non-structured loggers that set their own format string instead of having ddtrace set the logging format string for you.
- azure_functions: This fix resolves an issue where a function that consumes a list of service bus messages throws an exception when instrumented.
- profiling
- Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
- Fix a bug where profile frames from the package specified by `DD_MAIN_PACKAGE` were marked as "library" code in the profiler UI.
- tracing
  - This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
  - This fix resolves an issue where the `@tracer.wrap()` decorator failed to preserve the decorated function's return type, returning `Any` instead of the original return type.
  - This fix resolves an issue where spans would have incorrect timestamps and durations when `freezegun` was in use. With this change, the `freezegun` integration is not necessary anymore.
  - Fixes an issue in which span durations or start timestamps exceeding the platform's `LONG_MAX` caused traces to fail to send.
- Code Security (IAST)
  - Improved compatibility with `eval()` when used with custom globals and locals. When instrumenting `eval()`, Python behaves differently depending on whether locals is passed: if both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via `eval(code, globals, locals)`) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior.
  - AST analysis may fail or behave unexpectedly in cases where code overrides Python built-ins or globals at runtime, e.g., `mysqlsh` (MySQL Shell) reassigns globals with a custom object. This can interfere with analysis or instrumentation logic.
Other Changes
- openai: Removes I/O and request/response attribute tags from the APM spans for traced OpenAI LLM completion/chat/response requests and responses, which are duplicated in LLM Observability. `openai.request.client` has been retained and renamed to `openai.request.provider`.
- anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LLM Observability span.
- gemini: Removes the IO data from the APM spans for Gemini LLM requests and responses, which is duplicated in the LLM Observability span.
- vertexai: Removes the IO data from the APM spans for VertexAI LLM requests and responses, which is duplicated in the LLM Observability span.
- langchain: Removes I/O tags from APM spans for LangChain LLM requests and responses, which is duplicated in LLM Observability.
- Sampling rules now only support glob matchers; regex and callable matchers are no longer supported. This simplifies the code and removes functionality that was removed from the public API in ddtrace v3.0.0.
2.21.11
Estimated end-of-life date: 10-2025
See the support level definitions for more information.
Bug Fixes
- dynamic instrumentation
- fixed an issue with the instrumentation of generators with Python 3.10.
- fixed an issue with the instrumentation of the first line of an iteration block (e.g. for loops) that could have caused undefined behavior.
- prevent an exception when trying to remove a probe that did not resolve to a valid source code location.
3.11.0rc1
Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.
Upgrade Notes
- CI Visibility: Code coverage collection for Test Impact Analysis with `pytest` does not require `coverage.py` as a dependency anymore.
Deprecation Notes
- CI Visibility: The `freezegun` integration is deprecated and will be removed in 4.0.0. The `freezegun` integration is not necessary anymore for the correct reporting of test durations and timestamps.
New Features
- LLM Observability
  - Added support to `submit_evaluation_for()` for submitting boolean metrics in LLMObs evaluation metrics, using `metric_type="boolean"`. This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow.
  - Bedrock Converse `toolResult` content blocks are formatted as tool messages on LLM Observability `llm` spans' inputs.
  - This introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
  - This introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
  - Adds support to automatically submit Google GenAI calls to LLM Observability.
  - Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
  - Adds support to automatically submit PydanticAI request spans to LLM Observability.
- mcp: Adds tracing support for `mcp.client.session.ClientSession.call_tool` and `mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool` methods in the MCP SDK.
- otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable `DD_METRICS_OTEL_ENABLED` must be set to `true` and the application must include its own OTLP metrics exporter.
- asgi: Obfuscate resource names on 404 spans when `DD_ASGI_OBFUSCATE_404_RESOURCE` is enabled (disabled by default).
- code origin: Added support for in-product enablement.
- logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either `ddtrace-run` or `import ddtrace.auto`. To disable this feature, set the environment variable `DD_LOGS_INJECTION` to `False`.
- google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's `embed_content` methods.
Bug Fixes
- CI Visibility
  - This fix resolves an issue where `freezegun` would not work with tests defined in `unittest` classes.
  - This fix resolves an issue where using Test Optimization together with external retry plugins such as `flaky` or `pytest-rerunfailures` would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
  - This fix resolves an issue where test retry numbers were not reported correctly when tests were run with `pytest-xdist`.
- AAP: This fix resolves an issue where the FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the `DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS` environment variable.
- lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.
- litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
- LLM Observability
  - Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - openai
    - This fix resolves an issue where openai tracing caused an `AttributeError` while parsing `NoneType` streamed chunk deltas.
    - Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an `AttributeError`.
  - Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
  - Fixes an issue where input messages for tool messages were not being captured properly.
  - This fix resolves an issue where incomplete streamed responses returned from the OpenAI Responses API caused an index error with LLM Observability tracing.
  - Fixes an issue where LangGraph span links for execution flows were broken for `langgraph>=0.3.22`.
  - This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
  - Fixed an issue where grabbing token values for some providers through `langchain` libraries raised a `ValueError`.
  - This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
- dynamic instrumentation: improve support for function probes with frameworks and applications that interact with the Python garbage collector (e.g. synapse).
- logging: Fix issue where `dd.*` properties were not injected onto logging records unless the `DD_LOGS_INJECTION=true` env var was set (the default value is `structured`). This caused problems for non-structured loggers that set their own format string instead of having ddtrace set the logging format string for you.
- profiling
- Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
- Fix a bug where profile frames from the package specified by `DD_MAIN_PACKAGE` were marked as "library" code in the profiler UI.
- tracing
  - This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
  - This fix resolves an issue where the `@tracer.wrap()` decorator failed to preserve the decorated function's return type, returning `Any` instead of the original return type.
  - This fix resolves an issue where spans would have incorrect timestamps and durations when `freezegun` was in use. With this change, the `freezegun` integration is not necessary anymore.
  - Fixes an issue in which span durations or start timestamps exceeding the platform's `LONG_MAX` caused traces to fail to send.
- Code Security (IAST)
  - Improved compatibility with `eval()` when used with custom globals and locals. When instrumenting `eval()`, Python behaves differently depending on whether locals is passed: if both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via `eval(code, globals, locals)`) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior.
  - AST analysis may fail or behave unexpectedly in cases where code overrides Python built-ins or globals at runtime, e.g., `mysqlsh` (MySQL Shell) reassigns globals with a custom object. This can interfere with analysis or instrumentation logic.
Other Changes
- openai: Removes I/O and request/response attribute tags from the APM spans for traced OpenAI LLM completion/chat/response requests and responses, which are duplicated in LLM Observability. `openai.request.client` has been retained and renamed to `openai.request.provider`.
- anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LLM Observability span.
- gemini: Removes the IO data from the APM spans for Gemini LLM requests and responses, which is duplicated in the LLM Observability span.
- vertexai: Removes the IO data from the APM spans for VertexAI LLM requests and responses, which is duplicated in the LLM Observability span.
2.21.10
Estimated end-of-life date: 10-2025
See the support level definitions for more information.
Bug Fixes
- tracing
  - This resolves a `TypeError` in encoding when truncating a large bytes object.
  - This fix resolves an issue where the library fails to decode a supported sampling mechanism, resulting in the log line: "failed to decode _dd.p.dm: ..."
  - Fixes an issue where span attributes were not truncated before encoding, leading to a runtime error and causing spans to be dropped. Spans with a resource name, tag key, or value larger than 25000 characters will be truncated to 2500 characters.
  - Fixes an issue where truncation of span attributes longer than 25000 characters would not consistently count the size of UTF-8 multibyte characters, leading to a `unicode string is too large` error.
- dynamic instrumentation: fixed an incompatibility issue with code origin that caused line probes on the entry point functions to fail to instrument.
3.10.2
Bug Fixes
- logging: Fix issue where `dd.*` properties were not injected onto logging records unless the `DD_LOGS_INJECTION=true` env var was set (the default value is `structured`). This caused problems for non-structured loggers that set their own format string instead of having ddtrace set the logging format string for you. A format-string sketch follows.
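For the non-structured loggers affected by this fix, a format string that renders the injected `dd.*` attributes looks roughly like the sketch below. The handler setup is illustrative; the attribute names follow the ddtrace log-correlation docs.

```python
import ddtrace.auto  # noqa: F401  # enables tracing and trace-log correlation
import logging

# A custom (non-structured) format string that renders the injected fields.
FORMAT = (
    "%(asctime)s %(levelname)s [%(name)s] "
    "[dd.service=%(dd.service)s dd.trace_id=%(dd.trace_id)s "
    "dd.span_id=%(dd.span_id)s] %(message)s"
)
logging.basicConfig(format=FORMAT, level=logging.INFO)

# With this fix, the dd.* record attributes are present even when
# DD_LOGS_INJECTION is left at its default ("structured") value.
logging.getLogger(__name__).info("hello from a correlated log line")
```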
3.10.1
Bug Fixes
- CI Visibility:
  - This fix resolves an issue where using Test Optimization together with external retry plugins such as `flaky` or `pytest-rerunfailures` would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
  - This fix resolves an issue where test retry numbers were not reported correctly when tests were run with `pytest-xdist`.
- LLM Observability:
  - Fixes an issue where LangGraph span links for execution flows were broken for `langgraph>=0.3.22`.
3.10.0
Deprecation Notes
- Dynamic Instrumentation:
  - The `DD_DYNAMIC_INSTRUMENTATION_UPLOAD_FLUSH_INTERVAL` environment variable has been deprecated in favor of `DD_DYNAMIC_INSTRUMENTATION_UPLOAD_INTERVAL_SECONDS`. The old environment variable will be removed in a future major release.
- Tracing
  - The `escaped` and `timestamp` arguments in the `record_exception` method are deprecated and will be removed in version 4.0.0. A minimal usage sketch follows these notes.
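A minimal sketch of recording an exception without the deprecated arguments; the operation name is a placeholder, and the `ddtrace.trace` import path assumes the ddtrace 3.x layout.

```python
from ddtrace.trace import tracer  # ddtrace 3.x import path

with tracer.trace("process.order") as span:  # placeholder operation name
    try:
        raise ValueError("boom")
    except ValueError as exc:
        # Pass only the exception; the `escaped` and `timestamp` keyword
        # arguments are deprecated and will be removed in 4.0.0.
        span.record_exception(exc)
```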
New Features
- DSM:
  - Add flag in `set_consume_checkpoint()` to indicate if the DSM checkpoint was manually set.
- Tracing:
  - Adds the environment variable `DD_RUNTIME_METRICS_RUNTIME_ID_ENABLED` to enable tagging runtime metrics with the current runtime ID. This is useful for tracking runtime metrics across multiple processes. Previously, this was `DD_TRACE_EXPERIMENTAL_RUNTIME_ID_ENABLED`.
  - azure_servicebus: Add support for Azure Service Bus producers.
  - azure_functions: Adds messaging span attributes for service bus triggers.
  - azure_functions: Add distributed tracing support for Service Bus triggers.
  - ddtrace-api: Adds patching of `ddtrace_api.tracer.set_tags` to the `ddtrace_api` integration.
  - loguru, structlog, logbook:
    - Enable trace-log correlation for structured loggers by default.
    - Adds support for trace-log correlation via remote configuration. Previously, this functionality was only available for Python's built-in logging library.
- Dynamic Instrumentation:
- Code Origins for Spans is now automatically enabled when Dynamic Instrumentation is turned on.
- LLM Observability:
  - Introduces tracing support for bedrock-agent-runtime `invoke_agent` calls. If bedrock agents tracing is enabled, the internal bedrock traces will be converted and submitted as LLM Observability spans.
  - Adds support for configuring proxy URLs for LLM Observability using the `DD_LLMOBS_INSTRUMENTED_PROXY_URLS` environment variable or by enabling LLM Observability with the `instrumented_proxy_urls` argument. Spans sent to a proxy URL will now show up as workflow spans instead of LLM spans (see the sketch after this list).
  - Adds LLM Observability tracing support for the OpenAI Responses endpoint.
  - google_genai: Introduces tracing support for Google's Generative AI SDK for Python's `generate_content` and `generate_content_stream` methods. See the docs for more information.
  - pydantic_ai: Introduces tracing support for PydanticAI's `Agent.iter` and `Tool.run` methods. See the docs for more information.
- CI Visibility:
  - This introduces preliminary support to link child pytest-xdist tests (and test suites and test modules) to their parent sessions, instead of being sent as independent sessions.
- Exception Replay:
- Added in-product enablement support.
- Code Security (IAST):
- Handle IAST security controls custom validation and sanitization methods. See the Security Controls documentation for more information about this feature.
- Profiling:
- Add gevent support to the new stack sampling mechanism (stack v2).
- AAP:
- This introduces the WAF trace tagging feature. This feature enables Datadog’s security research team to efficiently gather API security findings without generating appsec events, which bypass tracer sampling mechanisms. As an example, trace-tagging rules can be used to add attributes to traces with details about the signing algorithm and expiration of a JWT token with the goal of providing authentication-related findings.
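A hedged sketch of the proxy-URL configuration mentioned in the LLM Observability item above; the app name and proxy address are placeholders, not values prescribed by this release.

```python
from ddtrace.llmobs import LLMObs

# Calls routed through the listed proxy are represented as workflow spans
# rather than llm spans. Both values below are illustrative.
LLMObs.enable(
    ml_app="example-app",
    instrumented_proxy_urls=["http://localhost:4000"],
)
```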
Bug Fixes
- Tracing:
- algoliasearch: Fix for algoliasearch dangling reference.
  - This resolves a `TypeError` in encoding when truncating a large bytes object.
  - Resolves a sampling issue where agent-based sampling rates were not correctly applied after a process forked or the tracer was reconfigured.
  - Resolves a bug where `os.system` or `subprocess.Popen` could return the wrong exception type.
  - This fix resolves an issue in which traced nested generator functions had their execution order subtly changed in a way that affected the stack unwinding sequence during exception handling. The issue was caused by the tracer's use of simple iteration via `for v in g: yield v` during the wrapping of generator functions where full bidirectional communication with the sub-generator via `yield from g` was appropriate. See PEP 380 for an explanation of how these two generator uses differ.
  - This fix resolves an issue where the `@tracer.wrap()` decorator failed to preserve return values from generator functions, causing `StopIteration.value` to be `None` instead of the actual returned value.
  - rq: Enable parsing distributed tracing metadata in perform job.
- AAP:
  - This fix resolves an issue where `track_user` was generating additional unexpected security activity for customers.
  - This fix resolves an issue where the new ATO SDK `track_user` was reporting the email, name, scope, and role of the tracked user differently.
- CI Visibility:
  - This fix resolves an issue where test spans would be left unfinished if the `pytest_runtest_protocol` hook was overridden in `conftest.py`, causing the corresponding module and suite to be unfinished as well.
  - This fix resolves an issue where code coverage would not be enabled if ddtrace was enabled via the `PYTEST_ADDOPTS` environment variable.
- azure_functions:
  - This fix resolves an issue where functions throw an error on loading when the `function_name` decorator follows another decorator.
- LLM Observability:
- This fix resolves an issue where modifying bedrock converse streamed chunks caused traced spans to show modified content.
  - Resolved an issue where manual instrumentation would raise `DD_LLMOBS_ML_APP` missing errors when LLM Observability was disabled.
  - litellm: This fix resolves an out-of-bounds error when handling streamed responses. This error occurred when the number of choices in a streamed response was not set as a keyword argument.
  - Fixes an issue where the trace ID exported from `export_span` was incorrect.
  - langchain: Resolved an `AttributeError` that could occur when async tasks are cancelled during `agenerate` calls.
- Dynamic Instrumentation:
- Fixed an incompatibility issue with code origin that caused line probes on the entry point functions to fail to instrument.
- Fixed an issue with the instrumentation of the first line of an iteration block (e.g. for loops) that could have caused undefined behavior.
  - Fixed an issue that prevented line probes from being instrumented on a line containing just the code `try:` for CPython 3.11 and later.
  - Fixes an issue with the instrumentation of generators with Python 3.10.
- Code Origin:
- Fixed a potential memory leak when collecting entry span location information.
- Logging:
  - Ensured that `ddtrace.tracer.get_log_correlation_context()` returns the expected log correlation attributes (e.g., `dd.trace_id`, `dd.span_id`, `dd.service` instead of `trace_id`, `span_id`, `service`). This change aligns the method's output with the attributes in the ddtrace log-correlation docs (a usage sketch follows these notes).
  - Fixed an issue where `ddtrace.tracer.get_log_correlation_context()` would return the service name of the current span instead of the global service name.
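A small usage sketch of the corrected method; the operation name is a placeholder, and the import goes through `ddtrace.trace` per the 3.x layout even though the note refers to the method as `ddtrace.tracer.get_log_correlation_context()`.

```python
from ddtrace.trace import tracer

with tracer.trace("request.handler"):  # placeholder operation name
    ctx = tracer.get_log_correlation_context()
    # Post-fix, the keys carry the documented "dd." prefix.
    print(ctx["dd.trace_id"], ctx["dd.span_id"], ctx["dd.service"])
```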
Other Changes
- Tracing:
  - Adds explicit support ranges for all integrations. These support ranges can be used in conjunction with `DD_TRACE_SAFE_INSTRUMENTATION_ENABLED=true` to enable safer patching of integrations, by ensuring that only compatible versions of an integration are patched.
- Profiling:
  - The native profile exporter is now the default profile exporter, and the legacy Python exporter is removed. The `DD_PROFILING_EXPORT_LIBDD_ENABLED` configuration variable is removed. As a result of this change, profiling for 32-bit Linux is not supported. Please file an issue or open a support ticket if you need profiling for 32-bit Linux.
- Single Step Instrumentation:
  - Updates library injection guardrails to use the safe instrumentation patching feature `DD_TRACE_SAFE_INSTRUMENTATION_ENABLED`. This change ensures that instrumentation patching is only applied to supported versions of packages, leaving unsupported package versions unpatched.
  - Additional fields have been added to the telemetry forwarder used during Single Step to surface troubleshooting insights in the Datadog UI.