Closed
Labels: bug (Something isn't working)
Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
Run the MCP sampling example from the Pydantic AI docs (https://ai.pydantic.dev/mcp/client/#mcp-sampling) with OpenTelemetry instrumentation.
I would expect the sampling LLM call to appear in the trace of my agent run, ideally nested under the tool call that made the sampling request.
Instead, the sampling LLM call appears as its own separate trace, with no way to link it to the tool call or to the trace it was part of. The screenshot below shows the result in Langfuse, where "agent run" is the main trace and "chat acd-openai..." is the detached sampling call:
Example Code
Simply instrument the example in the docs with OpenTelemetry.
Python, Pydantic AI & LLM client version
Python 3.11
Pydantic AI 1.1.0
Azure OpenAI (latest)
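For reference, the instrumented reproduction looks roughly like this. This is a minimal sketch, not the exact code I ran: the generate_svg.py server script and the prompt come from the linked docs page, Logfire is used here only as a convenient OTel exporter (any OTLP-compatible backend such as Langfuse behaves the same way), and the model name is illustrative since I am actually using Azure OpenAI:

```python
import asyncio

import logfire  # assumption: any OTLP-compatible OTel setup reproduces this

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

logfire.configure()       # export spans via OTLP (Langfuse in my case)
Agent.instrument_all()    # emit OpenTelemetry spans for all agent runs

# MCP server from the docs sampling example; generate_svg.py calls back
# into the client via MCP sampling to have an LLM produce the SVG.
server = MCPServerStdio(command='python', args=['generate_svg.py'])
agent = Agent('openai:gpt-4o', toolsets=[server])


async def main():
    async with agent:
        # route the server's sampling requests through this agent's model
        agent.set_mcp_sampling_model()
        result = await agent.run('Create an image of a robot in a punk style.')
    print(result.output)


if __name__ == '__main__':
    asyncio.run(main())
```

With this setup, the agent run and the tool call are traced together, but the LLM call triggered by the server's sampling request starts a fresh trace with no parent span.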