
Commit b1f0478

OpenAI Agents Framework instrumentation (#917)
Co-authored-by: Alex Hall <[email protected]>
1 parent 94caac9 commit b1f0478

File tree: 22 files changed, +4539 -105 lines

Two binary screenshots added under docs/images (116 KB and 86.4 KB), referenced from the docs changes below.

docs/integrations/llms/openai.md

Lines changed: 87 additions & 3 deletions
@@ -4,7 +4,12 @@ integration: logfire

 ## Introduction

-Logfire supports instrumenting calls to OpenAI with one extra line of code.
+We support instrumenting both the [standard OpenAI SDK](https://github.com/openai/openai-python) package and [OpenAI "agents"](https://github.com/openai/openai-agents-python) framework.
+
+### OpenAI SDK
+
+Logfire supports instrumenting calls to OpenAI with one extra line of code, here's an example of instrumenting
+the OpenAI SDK:

 ```python hl_lines="7"
 import openai
@@ -45,14 +50,15 @@ With that you get:
 <figcaption>Span arguments including response details</figcaption>
 </figure>

-## Methods covered
+### Methods covered

 The following OpenAI methods are covered:

 - [`client.chat.completions.create`](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) — with and without `stream=True`
 - [`client.completions.create`](https://platform.openai.com/docs/guides/text-generation/completions-api) — with and without `stream=True`
 - [`client.embeddings.create`](https://platform.openai.com/docs/guides/embeddings/how-to-get-embeddings)
 - [`client.images.generate`](https://platform.openai.com/docs/guides/images/generations)
+- [`client.responses.create`](https://platform.openai.com/docs/api-reference/responses)

 All methods are covered with both `openai.Client` and `openai.AsyncClient`.

@@ -87,7 +93,7 @@ Gives:
 <figcaption>OpenAI image generation span</figcaption>
 </figure>

-## Streaming Responses
+### Streaming Responses

 When instrumenting streaming responses, Logfire creates two spans — one around the initial request and one
 around the streamed response.
@@ -134,3 +140,81 @@ Shows up like this in Logfire:
 ![Logfire OpenAI Streaming](../../images/logfire-screenshot-openai-stream.png){ width="500" }
 <figcaption>OpenAI streaming response</figcaption>
 </figure>
+
+## OpenAI Agents
+
+We also support instrumenting the [OpenAI "agents"](https://github.com/openai/openai-agents-python) framework.
+
+```python hl_lines="5"
+import logfire
+from agents import Agent, Runner
+
+logfire.configure()
+logfire.instrument_openai_agents()
+
+agent = Agent(name="Assistant", instructions="You are a helpful assistant")
+
+result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
+print(result.final_output)
+```
+
+_For more information, see the [`instrument_openai_agents()` API reference][logfire.Logfire.instrument_openai_agents]._
+
+Which shows up like this in Logfire:
+
+<figure markdown="span">
+![Logfire OpenAI Agents](../../images/logfire-screenshot-openai-agents.png){ width="500" }
+<figcaption>OpenAI Agents</figcaption>
+</figure>
+
+In this example we add a function tool to the agents:
+
+```python
+from typing_extensions import TypedDict
+
+import logfire
+from httpx import AsyncClient
+from agents import RunContextWrapper, Agent, function_tool, Runner
+
+logfire.configure()
+logfire.instrument_openai_agents()
+
+
+class Location(TypedDict):
+    lat: float
+    long: float
+
+
+@function_tool
+async def fetch_weather(ctx: RunContextWrapper[AsyncClient], location: Location) -> str:
+    """Fetch the weather for a given location.
+
+    Args:
+        ctx: Run context object.
+        location: The location to fetch the weather for.
+    """
+    r = await ctx.context.get('https://httpbin.org/get', params=location)
+    return 'sunny' if r.status_code == 200 else 'rainy'
+
+
+agent = Agent(name='weather agent', tools=[fetch_weather])
+
+
+async def main():
+    async with AsyncClient() as client:
+        logfire.instrument_httpx(client)
+        result = await Runner.run(agent, 'Get the weather at lat=51 lng=0.2', context=client)
+        print(result.final_output)
+
+
+if __name__ == '__main__':
+    import asyncio
+    asyncio.run(main())
+```
+
+We see spans from within the function call nested within the agent spans:
+
+<figure markdown="span">
+![Logfire OpenAI Agents](../../images/logfire-screenshot-openai-agents-tools.png){ width="500" }
+<figcaption>OpenAI Agents</figcaption>
+</figure>
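
The two instrumentations are designed to work together: the `openai.py` changes further down suppress the SDK-level span when an agent span from the `logfire.openai_agents` scope is already active, so enabling both does not double-report. Below is a minimal sketch, not part of this commit, of enabling both in one application; the model name, prompts, and API-key setup are placeholders.

```python
# Sketch only: combines the two instrumentation calls documented above.
# Assumes OPENAI_API_KEY is set; model name and prompts are placeholders.
import openai
import logfire
from agents import Agent, Runner

logfire.configure()
logfire.instrument_openai()         # spans for direct OpenAI SDK calls
logfire.instrument_openai_agents()  # spans for agent runs

# A plain SDK call, traced by instrument_openai():
client = openai.Client()
response = client.chat.completions.create(
    model='gpt-4o',
    messages=[{'role': 'user', 'content': 'Say hello'}],
)
print(response.choices[0].message.content)

# An agent run, traced by instrument_openai_agents(); the model call it makes
# under the hood is not reported twice, thanks to the suppression logic below.
agent = Agent(name='Assistant', instructions='You are a helpful assistant')
result = Runner.run_sync(agent, 'Write a haiku about recursion in programming.')
print(result.final_output)
```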

logfire-api/logfire_api/__init__.py

Lines changed: 3 additions & 0 deletions
@@ -139,6 +139,8 @@ def instrument_anthropic(self, *args, **kwargs) -> ContextManager[None]:
     def instrument_openai(self, *args, **kwargs) -> ContextManager[None]:
         return nullcontext()

+    def instrument_openai_agents(self, *args, **kwargs) -> None: ...
+
     def instrument_aiohttp_client(self, *args, **kwargs) -> None: ...

     def instrument_system_metrics(self, *args, **kwargs) -> None: ...
@@ -168,6 +170,7 @@ def shutdown(self, *args, **kwargs) -> None: ...
 instrument_pydantic = DEFAULT_LOGFIRE_INSTANCE.instrument_pydantic
 instrument_fastapi = DEFAULT_LOGFIRE_INSTANCE.instrument_fastapi
 instrument_openai = DEFAULT_LOGFIRE_INSTANCE.instrument_openai
+instrument_openai_agents = DEFAULT_LOGFIRE_INSTANCE.instrument_openai_agents
 instrument_anthropic = DEFAULT_LOGFIRE_INSTANCE.instrument_anthropic
 instrument_asyncpg = DEFAULT_LOGFIRE_INSTANCE.instrument_asyncpg
 instrument_celery = DEFAULT_LOGFIRE_INSTANCE.instrument_celery
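
`logfire-api` is the no-op shim of the `logfire` API, so each new public method also needs a stub here; the `instrument_openai_agents` stub above keeps the two surfaces in sync. A minimal sketch (not from this commit) of the pattern the shim supports, using the usual try/except import fallback:

```python
# Sketch only: library code can call the logfire API unconditionally.
# If the real SDK is installed the calls do real work; if only the
# logfire-api shim is installed, every call (including the new
# instrument_openai_agents) is a harmless no-op.
try:
    import logfire  # the real SDK, if the application installed it
except ImportError:
    import logfire_api as logfire  # the no-op stubs shown in this diff

logfire.configure()
logfire.instrument_openai_agents()

with logfire.span('library doing work'):
    pass
```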

logfire/__init__.py

Lines changed: 2 additions & 0 deletions
@@ -30,6 +30,7 @@
 instrument_wsgi = DEFAULT_LOGFIRE_INSTANCE.instrument_wsgi
 instrument_fastapi = DEFAULT_LOGFIRE_INSTANCE.instrument_fastapi
 instrument_openai = DEFAULT_LOGFIRE_INSTANCE.instrument_openai
+instrument_openai_agents = DEFAULT_LOGFIRE_INSTANCE.instrument_openai_agents
 instrument_anthropic = DEFAULT_LOGFIRE_INSTANCE.instrument_anthropic
 instrument_asyncpg = DEFAULT_LOGFIRE_INSTANCE.instrument_asyncpg
 instrument_httpx = DEFAULT_LOGFIRE_INSTANCE.instrument_httpx
@@ -117,6 +118,7 @@ def loguru_handler() -> Any:
     'instrument_pydantic',
     'instrument_fastapi',
     'instrument_openai',
+    'instrument_openai_agents',
     'instrument_anthropic',
     'instrument_asyncpg',
     'instrument_httpx',

logfire/_internal/exporters/test.py

Lines changed: 15 additions & 1 deletion
@@ -1,11 +1,13 @@
 from __future__ import annotations

+import json
 import os
 import re
 import sys
 import typing
 from collections.abc import Sequence
 from functools import partial
+from json import JSONDecodeError
 from pathlib import Path
 from typing import Any, Mapping, cast

@@ -47,6 +49,7 @@ def exported_spans_as_dict(
         include_instrumentation_scope: bool = False,
         _include_pending_spans: bool = False,
         _strip_function_qualname: bool = True,
+        parse_json_attributes: bool = False,
     ) -> list[dict[str, Any]]:
         """The exported spans as a list of dicts.

@@ -55,6 +58,7 @@
             strip_filepaths: Whether to strip the filepaths from the exported spans.
             include_resources: Whether to include the resource attributes in the exported spans.
             include_instrumentation_scope: Whether to include the instrumentation scope in the exported spans.
+            parse_json_attributes: Whether to parse strings containing JSON arrays/objects.

         Returns:
             A list of dicts representing the exported spans.
@@ -64,6 +68,7 @@
             fixed_line_number=fixed_line_number,
             strip_filepaths=strip_filepaths,
             strip_function_qualname=_strip_function_qualname,
+            parse_json_attributes=parse_json_attributes,
         )

         def build_context(context: trace.SpanContext) -> dict[str, Any]:
@@ -131,6 +136,7 @@ def process_attribute(
     strip_filepaths: bool,
     fixed_line_number: int | None,
     strip_function_qualname: bool,
+    parse_json_attributes: bool = False,
 ) -> Any:
     if name == 'code.filepath' and strip_filepaths:
         try:
@@ -148,6 +154,11 @@
     if name == ResourceAttributes.SERVICE_INSTANCE_ID:
         if re.match(r'^[0-9a-f]{32}$', value):
             return '0' * 32
+    if parse_json_attributes and isinstance(value, str) and (value.startswith('{') or value.startswith('[')):
+        try:
+            return json.loads(value)
+        except JSONDecodeError:  # pragma: no cover
+            pass
     return value


@@ -156,11 +167,12 @@ def build_attributes(
     strip_filepaths: bool,
     fixed_line_number: int | None,
     strip_function_qualname: bool,
+    parse_json_attributes: bool,
 ) -> dict[str, Any] | None:
     if attributes is None:  # pragma: no cover
         return None
     attributes = {
-        k: process_attribute(k, v, strip_filepaths, fixed_line_number, strip_function_qualname)
+        k: process_attribute(k, v, strip_filepaths, fixed_line_number, strip_function_qualname, parse_json_attributes)
         for k, v in attributes.items()
     }
     if 'telemetry.sdk.version' in attributes:
@@ -191,12 +203,14 @@ def exported_logs_as_dicts(
         include_resources: bool = False,
         include_instrumentation_scope: bool = False,
         _strip_function_qualname: bool = True,
+        parse_json_attributes: bool = False,
     ) -> list[dict[str, Any]]:
         _build_attributes = partial(
             build_attributes,
             fixed_line_number=fixed_line_number,
             strip_filepaths=strip_filepaths,
             strip_function_qualname=_strip_function_qualname,
+            parse_json_attributes=parse_json_attributes,
         )

         def build_log(log_data: LogData) -> dict[str, Any]:
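
The new `parse_json_attributes` flag exists because Logfire serializes non-scalar span attributes (such as the request/response payloads recorded by the agents instrumentation) to JSON strings, which makes snapshot assertions on them hard to read. A hypothetical pytest-style sketch, not from this commit, assuming the documented `capfire` fixture whose `.exporter` is a `TestExporter`:

```python
# Sketch only: shows what parse_json_attributes=True changes in test assertions.
import json

import logfire


def test_parse_json_attributes(capfire) -> None:
    # The dict attribute is stored on the span as a JSON string by logfire.
    with logfire.span('agent run', request_data={'model': 'gpt-4o'}):
        pass

    # Without the flag, non-scalar attributes come back as JSON strings:
    spans = capfire.exporter.exported_spans_as_dict()
    [span] = [s for s in spans if s['name'] == 'agent run']
    assert isinstance(span['attributes']['request_data'], str)
    assert json.loads(span['attributes']['request_data']) == {'model': 'gpt-4o'}

    # With the new flag, the same attribute is parsed back into a dict,
    # which keeps assertions over nested payloads readable:
    spans = capfire.exporter.exported_spans_as_dict(parse_json_attributes=True)
    [span] = [s for s in spans if s['name'] == 'agent run']
    assert span['attributes']['request_data'] == {'model': 'gpt-4o'}
```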

logfire/_internal/integrations/llm_providers/llm_provider.py

Lines changed: 2 additions & 0 deletions
@@ -77,6 +77,8 @@ def _instrumentation_setup(**kwargs: Any) -> Any:
         return None, None, kwargs

     message_template, span_data, stream_state_cls = get_endpoint_config_fn(kwargs['options'])
+    if not message_template:
+        return None, None, kwargs

     span_data['async'] = is_async

logfire/_internal/integrations/llm_providers/openai.py

Lines changed: 23 additions & 0 deletions
@@ -9,6 +9,8 @@
 from openai.types.completion import Completion
 from openai.types.create_embedding_response import CreateEmbeddingResponse
 from openai.types.images_response import ImagesResponse
+from opentelemetry.sdk.trace import ReadableSpan
+from opentelemetry.trace import get_current_span

 from ...utils import handle_internal_errors
 from .types import EndpointConfig, StreamState
@@ -36,11 +38,22 @@ def get_endpoint_config(options: FinalRequestOptions) -> EndpointConfig:
         json_data = {}

     if url == '/chat/completions':
+        if is_current_agent_span('Chat completion with {gen_ai.request.model!r}'):
+            return EndpointConfig(message_template='', span_data={})
+
         return EndpointConfig(
             message_template='Chat Completion with {request_data[model]!r}',
             span_data={'request_data': json_data},
             stream_state_cls=OpenaiChatCompletionStreamState,
         )
+    elif url == '/responses':
+        if is_current_agent_span('Responses API'):
+            return EndpointConfig(message_template='', span_data={})
+
+        return EndpointConfig(  # pragma: no cover
+            message_template='Responses API with {request_data[model]!r}',
+            span_data={'request_data': json_data},
+        )
     elif url == '/completions':
         return EndpointConfig(
             message_template='Completion with {request_data[model]!r}',
@@ -64,6 +77,16 @@ def get_endpoint_config(options: FinalRequestOptions) -> EndpointConfig:
     )


+def is_current_agent_span(span_name: str):
+    current_span = get_current_span()
+    return (
+        isinstance(current_span, ReadableSpan)
+        and current_span.instrumentation_scope
+        and current_span.instrumentation_scope.name == 'logfire.openai_agents'
+        and current_span.name == span_name
+    )
+
+
 def content_from_completions(chunk: Completion | None) -> str | None:
     if chunk and chunk.choices:
         return chunk.choices[0].text
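
`is_current_agent_span` is what prevents double-reporting when both instrumentations are active: if the span currently in context was opened by the `logfire.openai_agents` instrumentation scope with the matching name, `get_endpoint_config` returns an empty `message_template` and the SDK-level hook skips creating its own span (see the `llm_provider.py` change above). A standalone sketch, not from this commit, of the same detection mechanism using plain OpenTelemetry:

```python
# Sketch only: detect whether the current span was created by a given
# instrumentation scope, the mechanism used by is_current_agent_span above.
from opentelemetry.sdk.trace import ReadableSpan, TracerProvider
from opentelemetry.trace import get_current_span

# The scope name below is the one the check in this diff looks for.
agents_tracer = TracerProvider().get_tracer('logfire.openai_agents')


def parent_is_agent_span(span_name: str) -> bool:
    current = get_current_span()
    return (
        isinstance(current, ReadableSpan)
        and current.instrumentation_scope is not None
        and current.instrumentation_scope.name == 'logfire.openai_agents'
        and current.name == span_name
    )


with agents_tracer.start_as_current_span('Responses API'):
    # Here the SDK-level hook would return an empty EndpointConfig and skip its span.
    assert parent_is_agent_span('Responses API')

# Outside an agent span the check fails, so the SDK call is instrumented normally.
assert not parent_is_agent_span('Responses API')
```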
