@@ -27,7 +27,12 @@ While LLM Observability provides a few out-of-the-box evaluations for your trace
* Unicode characters are not supported.
* Evaluation labels must not exceed 200 characters. Fewer than 100 characters is preferred so labels display well in the UI.

<div class="alert alert-info">Evaluation labels must be unique for a given LLM application (<code>ml_app</code>) and organization.</div>
<div class="alert alert-info">

- Evaluation labels must be unique for a given LLM application (<code>ml_app</code>) and organization.
- External evaluations are not currently supported for [OpenTelemetry spans][5].

</div>

## Submitting external evaluations with the SDK
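
As a minimal sketch, assuming the ddtrace Python SDK's `LLMObs.export_span` and `LLMObs.submit_evaluation` APIs (the label, metric type, and value below are illustrative), a submission can look like this:

```python
# Minimal sketch: submit an external evaluation for an exported span.
# Assumes the ddtrace Python SDK; label and value are illustrative.
from ddtrace.llmobs import LLMObs

LLMObs.enable(ml_app="my-llm-app")

with LLMObs.workflow(name="process-request") as span:
    # ... run the LLM call or workflow being evaluated ...
    span_context = LLMObs.export_span(span=span)

LLMObs.submit_evaluation(
    span_context=span_context,
    label="factuality",      # must be unique per ml_app and organization
    metric_type="score",     # "score" or "categorical"
    value=0.9,
)
```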

@@ -104,3 +109,4 @@ You can use the evaluations API provided by LLM Observability to send evaluation
[2]: /llm_observability/setup/api/?tab=model#evaluations-api
[3]: /llm_observability/setup/sdk/python/#evaluations
[4]: /llm_observability/setup/sdk/nodejs/#evaluations
[5]: /llm_observability/instrumentation/otel_instrumentation
@@ -1,23 +1,18 @@
---
title: OpenTelemetry Instrumentation
is_beta: true
private: true
---

{{< callout url="#" btn_hidden="true" >}}
OpenTelemetry instrumentation for LLM Observability is in Preview. For access, <a href="/help">contact Datadog Support</a>.
{{< /callout >}}

## Overview
By using OpenTelemetry's standardized semantic conventions for generative AI operations, you can instrument your LLM applications with any OpenTelemetry-compatible library or framework and visualize the traces in LLM Observability.

LLM Observability supports ingesting OpenTelemetry traces that follow the [OpenTelemetry 1.37 semantic conventions for generative AI][1]. This allows you to send LLM traces directly from OpenTelemetry-instrumented applications to Datadog without requiring the Datadog LLM Observability SDK or a Datadog Agent.
LLM Observability supports ingesting OpenTelemetry traces that follow the [OpenTelemetry 1.37+ semantic conventions for generative AI][1]. This allows you to send LLM traces directly from OpenTelemetry-instrumented applications to Datadog without requiring the Datadog LLM Observability SDK or a Datadog Agent.

## Prerequisites

- A [Datadog API key][2]
- An application instrumented with OpenTelemetry that emits traces following the [OpenTelemetry 1.37 semantic conventions for generative AI][1]
- Access to the OpenTelemetry instrumentation Preview feature ([contact support][4] to request access)
- An application instrumented with OpenTelemetry that emits traces following the [OpenTelemetry 1.37+ semantic conventions for generative AI][1]

<div class="alert alert-info">[External evaluations][6] in LLM Observability are not currently applied to OpenTelemetry spans. Evaluations are only available for spans generated with the Datadog LLM Observability SDK or submitted directly to the HTTP API intake.</div>

## Setup

@@ -41,38 +36,41 @@ If your framework previously supported a pre-1.37 OpenTelemetry specification ve
OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental
```

This environment variable enables version 1.37-compliant OpenTelemetry traces for frameworks that now support the version 1.37 semantic conventions, but previously supported older versions (such as [strands-agents][5]).
This environment variable enables OpenTelemetry traces compliant with the 1.37+ semantic conventions for frameworks that now support those conventions but previously supported older versions (such as [strands-agents][5]).

**Note**: If you are using an OpenTelemetry library other than the default OpenTelemetry SDK, you may need to configure the endpoint, protocol, and headers differently depending on the library's API. Refer to your library's documentation for the appropriate configuration method.
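
For example, with the default OpenTelemetry Python SDK, the same configuration can be applied programmatically. The following is a minimal sketch, not Datadog-provided code: the endpoint placeholder and the `dd-api-key` header name are assumptions to be replaced with the values from the Setup section.

```python
# Minimal sketch: programmatic OTLP exporter setup with the default
# OpenTelemetry Python SDK. The endpoint and API-key header are placeholders;
# use the values from the Setup section above.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="<YOUR_OTLP_TRACES_ENDPOINT>",            # placeholder
    headers={"dd-api-key": "<YOUR_DATADOG_API_KEY>"},  # assumed header name
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```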

#### Using strands-agents

If you are using the [`strands-agents` library][5], you need to set an additional environment variable to enable traces that are compliant with OpenTelemetry v1.37:
If you are using the [`strands-agents` library][5], you need to set an additional environment variable to enable traces that are compliant with OpenTelemetry v1.37+:

```
OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental
```

This environment variable ensures that `strands-agents` emits traces following the OpenTelemetry v1.37 semantic conventions for generative AI, which are required by LLM Observability.
This environment variable ensures that `strands-agents` emits traces following the OpenTelemetry v1.37+ semantic conventions for generative AI, which are required by LLM Observability.

### Instrumentation

To generate traces compatible with LLM Observability, do one of the following:

- Use an OpenTelemetry library or instrumentation package that emits spans following the [OpenTelemetry 1.37 semantic conventions for generative AI][1].
- Use an OpenTelemetry library or instrumentation package that emits spans following the [OpenTelemetry 1.37+ semantic conventions for generative AI][1].
- Create custom OpenTelemetry instrumentation that produces spans with the required `gen_ai.*` attributes, as defined in the semantic conventions. A sketch of this approach appears after this list.
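
As a rough illustration of the custom approach, the following minimal sketch hand-builds a span with the core `gen_ai.*` attributes from the 1.37+ conventions. The tracer name, model name, and token counts are illustrative, and a configured tracer provider (see Setup) is assumed.

```python
# Minimal sketch of custom instrumentation emitting gen_ai.* attributes.
# Assumes a tracer provider is already configured (see Setup above).
from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")  # illustrative tracer name

# Span name convention from the spec: "{operation} {model}"
with tracer.start_as_current_span("chat claude-3-5-sonnet") as span:
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.provider.name", "anthropic")
    span.set_attribute("gen_ai.request.model", "claude-3-5-sonnet")
    # ... call the model here ...
    span.set_attribute("gen_ai.usage.input_tokens", 125)   # illustrative
    span.set_attribute("gen_ai.usage.output_tokens", 280)  # illustrative
```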

After your application starts sending data, the traces automatically appear in the [**LLM Observability Traces** page][3]. To search for your traces in the UI, use the `ml_app` attribute, which is automatically set to the value of your OpenTelemetry root span's `service` attribute.
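
With the default OpenTelemetry SDK, that `service` value comes from the `service.name` resource attribute, so the name you set there is the `ml_app` you search for. A minimal sketch:

```python
# Minimal sketch: service.name on the OTel resource becomes the service,
# which LLM Observability maps to ml_app (per the paragraph above).
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

provider = TracerProvider(
    resource=Resource.create({"service.name": "simple-llm-example"})
)
```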

<div class="alert alert-danger">OpenInference and OpenLLMetry are not supported, as they have not been updated to support OpenTelemetry 1.37 semantic conventions for generative AI.</a></div>
<div class="alert alert-danger">

**Note**: There may be a 3-5 minute delay between sending traces and seeing them appear on the LLM Observability Traces page. If you have APM enabled, traces appear immediately in the APM Traces page.
- OpenInference and OpenLLMetry are not supported, as they have not been updated to support OpenTelemetry 1.37+ semantic conventions for generative AI.
- There may be a 3-5 minute delay between sending traces and seeing them appear on the LLM Observability Traces page. If you have APM enabled, traces appear immediately on the APM Traces page.

</div>

### Examples

#### Using strands-agents
#### Using Strands Agents

The following example demonstrates a complete application using strands-agents with the OpenTelemetry integration. This same approach works with any framework that supports OpenTelemetry version 1.37 semantic conventions for generative AI.
The following example demonstrates a complete application using [Strands Agents][7] with the OpenTelemetry integration. The same approach works with any framework that supports the OpenTelemetry 1.37+ semantic conventions for generative AI.

```python
from strands import Agent
@@ -203,7 +201,7 @@ After running this example, search for `ml_app:simple-llm-example` in the LLM Ob

## Supported semantic conventions

LLM Observability supports spans that follow the OpenTelemetry 1.37 semantic conventions for generative AI, including:
LLM Observability supports spans that follow the OpenTelemetry 1.37+ semantic conventions for generative AI, including:

- LLM operations with `gen_ai.provider.name`, `gen_ai.operation.name`, `gen_ai.request.model`, and other `gen_ai.*` attributes
- Operation inputs and outputs, either as direct span attributes or via span events
@@ -217,4 +215,6 @@ For the complete list of supported attributes and their specifications, see the
[3]: https://app.datadoghq.com/llm/traces
[4]: /help/
[5]: https://pypi.org/project/strands-agents/
[6]: /llm_observability/evaluations/external_evaluations
[7]: https://strandsagents.com/latest/
