
Commit 57221d1

nacx authored and nutanix-Hrushikesh committed

docs: add python agent tracing example (envoyproxy#1295)

**Description**

Adds an example Python agent that is fully instrumented to easily showcase how tracing works. This PR borrows the agent code from elastic/observability-examples#90.

**Related Issues/PRs (if applicable)**

N/A

**Special notes for reviewers (if applicable)**

N/A

---------

Signed-off-by: Ignasi Barrera <[email protected]>
Signed-off-by: Hrushikesh Patil <[email protected]>

1 parent f14724b commit 57221d1

File tree

2 files changed

+108
-0
lines changed


examples/mcp/README.md

Lines changed: 23 additions & 0 deletions

@@ -221,3 +221,26 @@ From here, you can start using the tools inside Claude.
| LAX → SFO | 13/09 16:41 → 18:13 (1h 32m) | Economy | $129 | https://on.kiwi.com/F8u8Sc |
| LAX → SFO | 13/09 16:41 → 18:13 (1h 32m) | Economy | $134 | https://on.kiwi.com/NBMEYO |
```

## Tracing example with a Python agent

The [agent.py](agent.py) file contains a basic Python agent example that evaluates the given prompt on the configured OpenAI-compatible provider.

Refer to the [cmd/aigw](../../cmd/aigw) directory README and Docker files for details and examples on how to start the different example OpenTelemetry-compatible services.

Once you have everything running, you can start the agent by passing a prompt file or typing the prompt directly into the terminal:

```shell
$ uv run --exact -q --env-file <environment file> agent.py /path/to/prompt.txt
```

or

```shell
$ uv run --exact -q --env-file <environment file> agent.py <<EOF
> your prompt here
EOF
```
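
As a side note, the stdin fallback used by [agent.py](agent.py) comes from an `argparse` pattern worth calling out: an optional positional argument of type `argparse.FileType` that defaults to `sys.stdin`. A stdlib-only sketch of just that pattern (the `StringIO` substitution below merely simulates piped input and is not part of the agent):

```python
import argparse
import io
import sys

# Simulate piping a prompt on stdin (illustration only).
sys.stdin = io.StringIO("your prompt here\n")

parser = argparse.ArgumentParser("Prompt-argument sketch")
# With nargs="?" the positional may be omitted; FileType opens the path
# when one is given, and the default (stdin) is used otherwise.
parser.add_argument(
    "prompt",
    help="Prompt to be evaluated.",
    default=sys.stdin,
    type=argparse.FileType("r"),
    nargs="?",
)

args = parser.parse_args([])  # no path given: falls back to stdin
prompt_text = args.prompt.read().strip()
print(prompt_text)  # your prompt here
```

Passing a real path instead (e.g. `parse_args(["/path/to/prompt.txt"])`) would open the file, which is why both invocation styles in the README work without any branching in the script.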

examples/mcp/agent.py

Lines changed: 85 additions & 0 deletions

@@ -0,0 +1,85 @@
```python
# Copyright Envoy AI Gateway Authors
# SPDX-License-Identifier: Apache-2.0
# The full text of the Apache license is available in the LICENSE file at
# the root of the repo.

# Run like this: uv run --exact -q --env-file .env agent.py
#
# Customizing the ".env" like:
#
# OPENAI_BASE_URL=http://localhost:1975/v1
# OPENAI_API_KEY=unused
# CHAT_MODEL=qwen3:4b
#
# MCP_URL=http://localhost:1975/mcp
#
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
#
# /// script
# dependencies = [
#     "openai-agents",
#     "httpx",
#     "mcp",
#     "openinference-instrumentation-openai-agents",
#     "opentelemetry-instrumentation-httpx",
#     "openinference-instrumentation-mcp",
# ]
# ///

from opentelemetry.instrumentation import auto_instrumentation

# This must precede any other imports you want to instrument!
auto_instrumentation.initialize()

import argparse
import asyncio
import os
import sys

from agents import (
    Agent,
    OpenAIProvider,
    RunConfig,
    Runner,
    Tool,
)
from agents.mcp import MCPServerStreamableHttp, MCPUtil


async def run_agent(prompt: str, model_name: str, tools: list[Tool]):
    model = OpenAIProvider(use_responses=False).get_model(model_name)
    agent = Agent(name="Assistant", model=model, tools=tools)
    result = await Runner.run(
        starting_agent=agent,
        input=prompt,
        run_config=RunConfig(workflow_name="envoy-ai-gateway"),
    )
    print(result.final_output)


async def main(prompt: str, model_name: str, mcp_url: str):
    if not mcp_url:
        await run_agent(prompt, model_name, [])
        return

    async with MCPServerStreamableHttp(
        {"url": mcp_url, "timeout": 300.0}, cache_tools_list=True
    ) as server:
        tools = await server.list_tools()
        util = MCPUtil()
        tools = [util.to_function_tool(tool, server, False) for tool in tools]
        await run_agent(prompt, model_name, tools)


if __name__ == "__main__":
    parser = argparse.ArgumentParser("Example Agent with Tools")
    parser.add_argument(
        "prompt",
        help="Prompt to be evaluated.",
        default=sys.stdin,
        type=argparse.FileType("r"),
        nargs="?",
    )
    parser.add_argument("--model", help="Model to use.", default=os.getenv("CHAT_MODEL"), type=str)
    parser.add_argument("--mcp-url", help="MCP Server to connect to.", default=os.getenv("MCP_URL"), type=str)
    args = parser.parse_args()
    prompt = args.prompt.read()

    print(f"Prompt: {prompt}")
    print(f"Using model: {args.model}")
    print(f"Using MCP URL: {args.mcp_url}")

    asyncio.run(main(prompt, args.model, args.mcp_url))
```
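
The `auto_instrumentation.initialize()` call must run before the libraries it traces are imported because instrumentation patches module attributes at initialization time; a name bound earlier keeps pointing at the unwrapped function. A stdlib-only sketch of that pitfall (the `fake_initialize` wrapper below stands in for tracing and is not an OpenTelemetry API):

```python
import json

calls = []  # stand-in for recorded spans


def fake_initialize():
    # Patch json.dumps the way auto-instrumentation wraps client libraries.
    original = json.dumps

    def traced_dumps(*args, **kwargs):
        calls.append("json.dumps")  # stand-in for emitting a span
        return original(*args, **kwargs)

    json.dumps = traced_dumps


early_dumps = json.dumps  # bound BEFORE patching: stays unwrapped
fake_initialize()

early_dumps({"a": 1})  # not recorded: pre-patch binding
json.dumps({"a": 1})   # recorded: looked up after patching
print(calls)  # ['json.dumps']
```

This is why the script's `from agents import ...` lines sit below `auto_instrumentation.initialize()` rather than at the top of the file.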
