Commit a433482

feat: Google ADK, LangGraph, OpenAI Agents, PydanticAI examples

File tree

12 files changed: +2469 −0 lines changed

.env.example

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
GEMINI_API_KEY=
OPENAI_API_KEY=
LOGFIRE_TOKEN=

.gitignore

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv

# Environment variables
.env

.python-version

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
3.13

README.md

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@
# Pydantic AI Evaluations with MCP

This repository demonstrates the use of a simple Model Context Protocol (MCP) server with several frameworks:

- Google Agent Development Kit (ADK)
- LangGraph Agents
- OpenAI Agents
- Pydantic-AI Agents

Tracing is done through Pydantic Logfire.

## Quickstart

`cp .env.example .env`

- Add `GEMINI_API_KEY` and/or `OPENAI_API_KEY`
  - Individual scripts can be adjusted to use models from any provider supported by the specific framework
  - By default, only [basic_mcp_use/oai-agent_mcp.py](basic_mcp_use/oai-agent_mcp.py) requires `OPENAI_API_KEY`
  - All other scripts require `GEMINI_API_KEY`
- Add `LOGFIRE_TOKEN` to visualise evaluations in the Logfire web UI
19+
20+
## Project Overview
21+
22+
This project aims to teach:
23+
1. How to use MCP with multiple LLM Agent frameworks
24+
- Example MCP tools for adding numbers, getting current time
25+
2. How to see traces LLM Agents with Logfire
26+
27+
![Logfire UI](docs/images/logfire_ui.png)
28+
29+
30+
## Repository Structure
31+
32+
- **basic_mcp_use/** - Contains basic examples of MCP usage:
33+
- `adk_mcp.py` - Example of using MCP with Google's Agent Development Kit (ADK)
34+
- `langgraph_mcp.py` - Example of using MCP with LangGraph
35+
- `oai-agent_mcp.py` - Examoke of using MCP with OpenAI Agents
36+
- `pydantic_mcp.py` - Example of using MCP with Pydantic-AI
37+
38+
39+
## What is MCP?
40+
41+
The Model Context Protocol allows applications to provide context for LLMs in a standardised way, separating the concerns of providing context from the actual LLM interaction.
42+
43+
Learn more: https://modelcontextprotocol.io/introduction
## Setup Instructions

1. Clone this repository
2. Install required packages:
   ```bash
   uv sync
   ```
3. Set up your environment variables in a `.env` file:
   ```
   LOGFIRE_TOKEN=your_logfire_token
   GEMINI_API_KEY=your_gemini_api_key
   OPENAI_API_KEY=your_openai_api_key
   ```
4. Run any of the sample scripts to see a simple MCP server being used via an Agent framework:
   - Google Agent Development Kit (ADK): [basic_mcp_use/adk_mcp.py](basic_mcp_use/adk_mcp.py)
   - LangGraph Agents: [basic_mcp_use/langgraph_mcp.py](basic_mcp_use/langgraph_mcp.py)
   - OpenAI Agents: [basic_mcp_use/oai-agent_mcp.py](basic_mcp_use/oai-agent_mcp.py)
   - Pydantic-AI Agents: [basic_mcp_use/pydantic_mcp.py](basic_mcp_use/pydantic_mcp.py)
## About Logfire
69+
70+
[Logfire](https://github.com/pydantic/logfire) is an observability platform from the team behind Pydantic that makes monitoring AI applications straightforward. Features include:
71+
72+
- Simple yet powerful dashboard
73+
- Python-centric insights, including rich display of Python objects
74+
- SQL-based querying of your application data
75+
- OpenTelemetry support for leveraging existing tooling
76+
- Pydantic integration for analytics on validations
77+
78+
Logfire gives you visibility into how your code is running, which is especially valuable for LLM applications where understanding model behavior is critical.

basic_mcp_use/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
# Basic MCP usage examples package

basic_mcp_use/adk_mcp.py

Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
import asyncio
import os
import logfire

from dotenv import load_dotenv
from google.adk.agents.llm_agent import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

load_dotenv()

# Set API key for the Google AI API from environment variable
os.environ["GOOGLE_API_KEY"] = os.getenv("GEMINI_API_KEY")

# Configure logging if LOGFIRE_TOKEN is set
logfire.configure(send_to_logfire="if-token-present")
logfire.instrument_mcp()


async def main(query: str = "Greet Andrew and give him the current time") -> None:
    """
    Main function to run the agent

    Args:
        query (str): The query to run the agent with
    """
    # Set up MCP server connection
    server_params = StdioServerParameters(
        command="uv",
        args=["run", "run_server.py", "stdio"],
    )

    tools, exit_stack = await MCPToolset.from_server(connection_params=server_params)
    print(f"Connected to MCP server. Found {len(tools)} tools.")

    # Create the agent
    root_agent = LlmAgent(
        model="gemini-2.5-pro-preview-03-25",
        name="mcp_pydantic_assistant",
        tools=tools,
    )

    # Set up session
    session_service = InMemorySessionService()
    session = session_service.create_session(
        app_name="mcp_pydantic_app",
        user_id="aginns",
    )

    # Create the runner
    runner = Runner(
        app_name="mcp_pydantic_app",
        agent=root_agent,
        session_service=session_service,
    )

    # Run the agent with a query
    content = types.Content(role="user", parts=[types.Part(text=query)])

    print("Running agent...")
    try:
        events_async = runner.run_async(
            session_id=session.id, user_id=session.user_id, new_message=content
        )

        async for event in events_async:
            print(f"Event received: {event}")
    finally:
        print("Closing MCP server connection...")
        await exit_stack.aclose()
        print("Cleanup complete.")


if __name__ == "__main__":
    asyncio.run(main())

basic_mcp_use/langgraph_mcp.py

Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
import asyncio
import os
import logfire

from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

load_dotenv()

# Configure logging if LOGFIRE_TOKEN is set
logfire.configure(send_to_logfire="if-token-present")
logfire.instrument_mcp()


# Create server parameters for stdio connection
server = StdioServerParameters(
    command="uv",
    args=["run", "run_server.py", "stdio"],
)

model = ChatGoogleGenerativeAI(
    model="gemini-2.5-pro-preview-03-25", google_api_key=os.getenv("GEMINI_API_KEY")
)


async def main(query: str = "Greet Andrew and give him the current time") -> None:
    """
    Main function to run the agent

    Args:
        query (str): The query to run the agent with
    """
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # Get tools
            tools = await load_mcp_tools(session)

            # Create agent
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke(
                {
                    "messages": query,
                }
            )
            print(agent_response["messages"][-1].content)


if __name__ == "__main__":
    asyncio.run(main())

basic_mcp_use/oai-agent_mcp.py

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
import asyncio

import logfire
from dotenv import load_dotenv
from agents import Agent, Runner
from agents.mcp import MCPServerStdio

load_dotenv()

# Configure Logfire
logfire.configure(send_to_logfire="if-token-present")
logfire.instrument_mcp()
logfire.instrument_openai_agents()


async def main(query: str = "Greet Andrew and give him the current time") -> None:
    """
    Main function to run the agent

    Args:
        query (str): The query to run the agent with
    """
    # Create and use the MCP server in an async context
    async with MCPServerStdio(
        params={
            "command": "uv",
            "args": ["run", "run_server.py", "stdio"],
        }
    ) as server:
        # Initialize the agent with the server
        agent = Agent(
            name="MCP agent",
            model="o4-mini",
            mcp_servers=[server],
        )

        result = await Runner.run(
            starting_agent=agent,
            input=query,
        )

        print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())

basic_mcp_use/pydantic_mcp.py

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
import asyncio

import logfire
from dotenv import load_dotenv
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

load_dotenv()

# Configure logging to logfire if LOGFIRE_TOKEN is set in environment
logfire.configure(send_to_logfire="if-token-present")
logfire.instrument_mcp()
logfire.instrument_pydantic_ai()

server = MCPServerStdio(
    command="uv",
    args=[
        "run",
        "run_server.py",
        "stdio",
    ],
)
agent = Agent("gemini-2.5-pro-preview-03-25", mcp_servers=[server])
Agent.instrument_all()


async def main(query: str = "Greet Andrew and give him the current time") -> None:
    """
    Main function to run the agent

    Args:
        query (str): The query to run the agent with
    """
    async with agent.run_mcp_servers():
        result = await agent.run(query)
        print(result.output)


if __name__ == "__main__":
    asyncio.run(main())

docs/images/logfire_ui.png

441 KB
