Commit c294f0b

docs: Detail specific to each level of the repo
1 parent 16d0baa commit c294f0b

File tree

3 files changed: +281 −69 lines


README.md

Lines changed: 18 additions & 69 deletions
````diff
@@ -1,6 +1,6 @@
 # Model Context Protocol (MCP) Agent Frameworks Demo
 
-This repository demonstrates the usage of a simple Model Context Protocol (MCP) server with several frameworks:
+This repository demonstrates the usage of Model Context Protocol (MCP) servers with several frameworks:
 - Google Agent Development Toolkit (ADK)
 - LangGraph Agents
 - OpenAI Agents
@@ -17,7 +17,7 @@ Tracing is done through Pydantic Logfire.
 
 `cp .env.example .env`
 - Add `GEMINI_API_KEY` and/or `OPENAI_API_KEY`
-- Individual scripts can be adjusted to use models from any provider supported by the specifi framework
+- Individual scripts can be adjusted to use models from any provider supported by the specific framework
 - By default only [basic_mcp_use/oai-agent_mcp.py](basic_mcp_use/oai-agent_mcp.py) requires `OPENAI_API_KEY`
 - All other scripts require `GEMINI_API_KEY` (Free tier key can be created at https://aistudio.google.com/apikey)
 - [Optional] Add `LOGFIRE_TOKEN` to visualise evaluations in Logfire web ui
@@ -33,83 +33,32 @@ Check console or Logfire for output
 
 This project aims to teach:
 1. How to use MCP with multiple LLM Agent frameworks
-- Example MCP tools for adding numbers, getting current time
+- Single MCP server usage and Multi-MCP server usage
 2. How to see traces LLM Agents with Logfire
+3. How to evaluate LLMs with PydanticAI evals
 
 ![Logfire UI](docs/images/logfire_ui.png)
 
-## MCP Architecture
-
-```mermaid
-graph LR
-    User((User)) --> |"Run script<br>(e.g., pydantic_mcp.py)"| Agent
-
-    subgraph "Agent Frameworks"
-        Agent[Agent]
-        ADK["Google ADK<br>(adk_mcp.py)"]
-        LG["LangGraph<br>(langgraph_mcp.py)"]
-        OAI["OpenAI Agents<br>(oai-agent_mcp.py)"]
-        PYD["Pydantic-AI<br>(pydantic_mcp.py)"]
-
-        Agent --> ADK
-        Agent --> LG
-        Agent --> OAI
-        Agent --> PYD
-    end
-
-    subgraph "MCP Server"
-        MCP["Model Context Protocol Server<br>(run_server.py)"]
-        Tools["Tools<br>- add(a, b)<br>- get_current_time()"]
-        Resources["Resources<br>- greeting://{name}"]
-        MCP --- Tools
-        MCP --- Resources
-    end
-
-    subgraph "LLM Providers"
-        OAI_LLM["OpenAI Models"]
-        GEM["Google Gemini Models"]
-        OTHER["Other LLM Providers..."]
-    end
-
-    Logfire[("Logfire<br>Tracing")]
-
-    ADK --> MCP
-    LG --> MCP
-    OAI --> MCP
-    PYD --> MCP
-
-    MCP --> OAI_LLM
-    MCP --> GEM
-    MCP --> OTHER
-
-    ADK --> Logfire
-    LG --> Logfire
-    OAI --> Logfire
-    PYD --> Logfire
-
-    LLM_Response[("Response")] --> User
-    OAI_LLM --> LLM_Response
-    GEM --> LLM_Response
-    OTHER --> LLM_Response
-
-    style MCP fill:#f9f,stroke:#333,stroke-width:2px
-    style User fill:#bbf,stroke:#338,stroke-width:2px
-    style Logfire fill:#bfb,stroke:#383,stroke-width:2px
-    style LLM_Response fill:#fbb,stroke:#833,stroke-width:2px
-```
-
-The diagram illustrates how MCP serves as a standardised interface between different agent frameworks and LLM providers.The flow shows how users interact with the system by running a specific agent script, which then leverages MCP to communicate with LLM providers, while Logfire provides tracing and observability.
-
 ## Repository Structure
 
-- **basic_mcp_use/** - Contains basic examples of MCP usage:
+- **agents_mcp_usage/basic_mcp/basic_mcp_use/** - Contains basic examples of single MCP usage:
   - `adk_mcp.py` - Example of using MCP with Google's Agent Development Kit (ADK)
   - `langgraph_mcp.py` - Example of using MCP with LangGraph
-  - `oai-agent_mcp.py` - Examoke of using MCP with OpenAI Agents
+  - `oai-agent_mcp.py` - Example of using MCP with OpenAI Agents
   - `pydantic_mcp.py` - Example of using MCP with Pydantic-AI
 
-- `run_server.py` - Simple MCP server that runs locally implemented in Python
+- **agents_mcp_usage/basic_mcp/eval_basic_mcp_use/** - Contains evaluation examples for single MCP usage:
+  - `evals_adk_mcp.py` - Evaluation of MCP with Google's ADK
+  - `evals_langchain_mcp.py` - Evaluation of MCP with LangGraph
+  - `evals_pydantic_mcp.py` - Evaluation of MCP with Pydantic-AI
+
+- **agents_mcp_usage/multi_mcp/** - Contains advanced examples of multi-MCP usage:
+  - `multi_mcp_use/pydantic_mcp.py` - Example of using multiple MCP servers with Pydantic-AI
+  - `eval_multi_mcp/evals_pydantic_mcp.py` - Example of evaluating the use of multiple MCP servers with Pydantic-AI
+  - `mermaid_diagrams.py` - Generates Mermaid diagrams for visualizing MCP architecture
 
+- **Demo Python MCP Server**
+  - `run_server.py` - Simple MCP server that runs locally, implemented in Python
 
 ## What is MCP?
 
@@ -128,7 +77,7 @@ A key advantage highlighted is flexibility; MCP allows developers to more easily
 1. Clone this repository
 2. Install required packages:
 ```bash
-uv sync
+make install
 ```
 3. Set up your environment variables in a `.env` file:
 ```
````
Lines changed: 153 additions & 0 deletions
# Basic MCP Usage Examples

This directory contains examples of integrating Model Context Protocol (MCP) with various LLM agent frameworks.

Each script demonstrates how to connect to a single local MCP server and use it with a different agent framework.
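Under the hood, every such connection speaks JSON-RPC 2.0 over the server process's stdin/stdout. A minimal sketch of the shape of the first message an MCP client sends (field values here are illustrative, not taken from these scripts):

```python
import json

# MCP clients spawn the server as a subprocess and exchange JSON-RPC 2.0
# messages over its stdin/stdout. The first request is always "initialize".
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative spec revision
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1.0"},
    },
}

# In the stdio transport, each message is one JSON object per line.
wire_message = json.dumps(initialize_request) + "\n"
print(wire_message.strip())
```

The agent frameworks below hide this handshake behind their own client classes.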
### Basic MCP Architecture

```mermaid
graph LR
    User((User)) --> |"Run script<br>(e.g., pydantic_mcp.py)"| Agent

    subgraph "Agent Frameworks"
        Agent[Agent]
        ADK["Google ADK<br>(adk_mcp.py)"]
        LG["LangGraph<br>(langgraph_mcp.py)"]
        OAI["OpenAI Agents<br>(oai-agent_mcp.py)"]
        PYD["Pydantic-AI<br>(pydantic_mcp.py)"]

        Agent --> ADK
        Agent --> LG
        Agent --> OAI
        Agent --> PYD
    end

    subgraph "Python MCP Server"
        MCP["Model Context Protocol Server<br>(run_server.py)"]
        Tools["Tools<br>- add(a, b)<br>- get_current_time()"]
        Resources["Resources<br>- greeting://{name}"]
        MCP --- Tools
        MCP --- Resources
    end

    subgraph "LLM Providers"
        OAI_LLM["OpenAI Models"]
        GEM["Google Gemini Models"]
        OTHER["Other LLM Providers..."]
    end

    Logfire[("Logfire<br>Tracing")]

    ADK --> MCP
    LG --> MCP
    OAI --> MCP
    PYD --> MCP

    MCP --> OAI_LLM
    MCP --> GEM
    MCP --> OTHER

    ADK --> Logfire
    LG --> Logfire
    OAI --> Logfire
    PYD --> Logfire

    LLM_Response[("Response")] --> User
    OAI_LLM --> LLM_Response
    GEM --> LLM_Response
    OTHER --> LLM_Response

    style MCP fill:#f9f,stroke:#333,stroke-width:2px
    style User fill:#bbf,stroke:#338,stroke-width:2px
    style Logfire fill:#bfb,stroke:#383,stroke-width:2px
    style LLM_Response fill:#fbb,stroke:#833,stroke-width:2px
```

The diagram illustrates how MCP serves as a standardised interface between different agent frameworks and LLM providers. The flow shows how users interact with the system by running a specific agent script, which then leverages MCP to communicate with LLM providers, while Logfire provides tracing and observability.

### Google Agent Development Kit (ADK)

**File:** `adk_mcp.py`

This example demonstrates how to use MCP with Google's Agent Development Kit (ADK).

```bash
uv run agents_mcp_usage/basic_mcp/basic_mcp_use/adk_mcp.py
```

Key features:
- Uses `MCPToolset` for connecting to the MCP server
- Configures a Gemini model using ADK's `LlmAgent`
- Sets up session handling and runner for agent execution
- Includes Logfire instrumentation for tracing

### LangGraph

**File:** `langgraph_mcp.py`

This example demonstrates how to use MCP with LangGraph agents.

```bash
uv run agents_mcp_usage/basic_mcp/basic_mcp_use/langgraph_mcp.py
```

Key features:
- Uses LangChain MCP adapters to load tools
- Creates a ReAct agent with LangGraph
- Demonstrates stdio-based client connection to MCP server
- Uses Gemini model for agent reasoning
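A condensed sketch of how these pieces typically fit together, assuming the `mcp`, `langchain-mcp-adapters`, and `langgraph` packages; the model id and query are illustrative, not the script's exact values:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent

# Spawn the local MCP server as a subprocess speaking stdio
server_params = StdioServerParameters(command="python", args=["run_server.py"])


async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Convert the server's MCP tools into LangChain-compatible tools
            tools = await load_mcp_tools(session)
            agent = create_react_agent("google_genai:gemini-2.5-flash", tools)
            result = await agent.ainvoke(
                {"messages": [{"role": "user", "content": "What is 2 + 3?"}]}
            )
            print(result["messages"][-1].content)


if __name__ == "__main__":
    asyncio.run(main())
```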

### OpenAI Agents

**File:** `oai-agent_mcp.py`

This example demonstrates how to use MCP with OpenAI's Agents package.

```bash
uv run agents_mcp_usage/basic_mcp/basic_mcp_use/oai-agent_mcp.py
```

Key features:
- Uses OpenAI's Agent and Runner classes
- Connects to MCP server through MCPServerStdio
- Uses OpenAI's o4-mini model
- Includes Logfire instrumentation for both MCP and OpenAI Agents
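The core wiring looks roughly like this, assuming the `openai-agents` package; the instructions and query are illustrative:

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main() -> None:
    # Launch the local MCP server over stdio and attach it to the agent
    async with MCPServerStdio(
        params={"command": "python", "args": ["run_server.py"]}
    ) as server:
        agent = Agent(
            name="Assistant",
            instructions="Use the MCP tools to answer questions.",
            mcp_servers=[server],
            model="o4-mini",
        )
        result = await Runner.run(agent, "What is 2 + 3?")
        print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```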

### Pydantic-AI

**File:** `pydantic_mcp.py`

This example demonstrates how to use MCP with the Pydantic-AI agent framework.

```bash
uv run agents_mcp_usage/basic_mcp/basic_mcp_use/pydantic_mcp.py
```

Key features:
- Uses the simplified Pydantic-AI Agent interface
- Configures MCPServerStdio for MCP communication
- Employs context manager for server lifecycle management
- Includes comprehensive instrumentation for both MCP and Pydantic-AI
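A minimal sketch of this pattern, assuming a recent `pydantic-ai` release (the model id and query are illustrative):

```python
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

# The MCP server is started as a subprocess speaking stdio
server = MCPServerStdio("python", args=["run_server.py"])
agent = Agent("google-gla:gemini-2.5-flash", mcp_servers=[server])


async def main() -> None:
    # The context manager starts the server and shuts it down cleanly
    async with agent.run_mcp_servers():
        result = await agent.run("What is 2 + 3?")
        print(result.output)


if __name__ == "__main__":
    asyncio.run(main())
```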

## Understanding the Examples

Each example follows a similar pattern:

1. **Environment Setup**: Loading environment variables and configuring logging
2. **Server Connection**: Establishing a connection to the local MCP server
3. **Agent Configuration**: Setting up an agent with the appropriate model
4. **Execution**: Running the agent with a query and handling the response

The examples are designed to be as similar as possible, allowing you to compare how different frameworks approach MCP integration.

## MCP Server

All examples connect to the same MCP server defined in `run_server.py` at the project root. This server provides:

- An addition tool (`add(a, b)`)
- A time tool (`get_current_time()`)
- A dynamic greeting resource (`greeting://{name}`)

You can modify the MCP server to add your own tools and resources for experimentation.
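A server with this shape takes only a few lines using the official Python SDK's `FastMCP` helper. This is a sketch of the pattern, not the repository's exact `run_server.py`:

```python
from datetime import datetime

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Demo Server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


@mcp.tool()
def get_current_time() -> str:
    """Return the current time as an ISO-8601 string."""
    return datetime.now().isoformat()


@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """Return a personalised greeting."""
    return f"Hello, {name}!"


if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```

Adding a tool is just another decorated function; the SDK derives the tool schema from the type hints and docstring.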
Lines changed: 110 additions & 0 deletions
# Multi-MCP Usage Examples

This directory contains advanced examples demonstrating the integration of multiple Model Context Protocol (MCP) servers with agent frameworks.

Unlike the basic examples that use a single MCP server, these examples show how to connect to and coordinate between multiple specialized MCP servers simultaneously.

### Multi-MCP Architecture

```mermaid
graph LR
    User((User)) --> |"Run script<br>(e.g., pydantic_mcp.py)"| Agent

    subgraph "Agent Framework"
        Agent["Pydantic-AI Agent<br>(pydantic_mcp.py)"]
    end

    subgraph "MCP Servers"
        PythonMCP["Python MCP Server<br>(run_server.py)"]
        NodeMCP["Node.js MCP Server<br>(mermaid-validator)"]

        Tools["Tools<br>- add(a, b)<br>- get_current_time()"]
        Resources["Resources<br>- greeting://{name}"]
        MermaidValidator["Mermaid Diagram<br>Validation Tools"]

        PythonMCP --- Tools
        PythonMCP --- Resources
        NodeMCP --- MermaidValidator
    end

    subgraph "LLM Providers"
        LLMs["PydanticAI LLM call"]
    end

    Logfire[("Logfire<br>Tracing")]

    Agent --> PythonMCP
    Agent --> NodeMCP

    PythonMCP --> LLMs
    NodeMCP --> LLMs

    Agent --> Logfire

    LLM_Response[("Response")] --> User
    LLMs --> LLM_Response
```

This diagram illustrates how an agent can leverage multiple specialized MCP servers simultaneously, each providing distinct tools and resources.

## Example Files

### Pydantic-AI Multi-MCP

**File:** `multi_mcp_use/pydantic_mcp.py`

This example demonstrates how to use multiple MCP servers with Pydantic-AI agents.

```bash
uv run agents_mcp_usage/multi_mcp/multi_mcp_use/pydantic_mcp.py
```

Key features:
- Connects to multiple specialized MCP servers simultaneously
- Organizes tools and resources by domain
- Shows how to coordinate between different MCP servers
- Includes Logfire instrumentation for comprehensive tracing
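The essential difference from the single-server case is simply passing several server instances to the agent, which then sees the union of their tools. A sketch assuming a recent `pydantic-ai` release; the Node server command and model id are illustrative placeholders, not the repository's exact configuration:

```python
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

# Python MCP server: add / get_current_time tools, greeting resource
python_server = MCPServerStdio("python", args=["run_server.py"])
# Node.js MCP server for mermaid validation (command is hypothetical --
# substitute whatever package or script the repository actually uses)
node_server = MCPServerStdio("npx", args=["-y", "mermaid-validator-mcp"])

# Passing several servers gives the agent the union of their tools
agent = Agent(
    "google-gla:gemini-2.5-flash",
    mcp_servers=[python_server, node_server],
)


async def main() -> None:
    async with agent.run_mcp_servers():
        result = await agent.run(
            "Draw a mermaid diagram of a two-step pipeline and validate it."
        )
        print(result.output)


if __name__ == "__main__":
    asyncio.run(main())
```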

### Multi-MCP Evaluation

**File:** `eval_multi_mcp/evals_pydantic_mcp.py`

This example demonstrates how to evaluate the effectiveness of using multiple MCP servers.

```bash
uv run agents_mcp_usage/multi_mcp/eval_multi_mcp/evals_pydantic_mcp.py
```

Key features:
- Evaluates agent performance when using multiple specialized MCP servers
- Uses PydanticAI evaluation tools to measure outcomes
- Compares results with single-MCP approaches
- Generates performance metrics viewable in Logfire
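The evaluation pattern can be sketched with the `pydantic-evals` package: define cases with expected outputs, then run a task function over the dataset. The task below is a stand-in; in the repository it would call the multi-MCP agent:

```python
from pydantic_evals import Case, Dataset


# Illustrative task function -- the real script would invoke the agent
# and return its answer for the given question.
async def answer(question: str) -> str:
    return "5" if question == "What is 2 + 3?" else "unknown"


dataset = Dataset(
    cases=[
        Case(name="addition", inputs="What is 2 + 3?", expected_output="5"),
    ]
)

# Runs every case against the task and collects a report
report = dataset.evaluate_sync(answer)
report.print()
```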

### Mermaid Diagrams Generator

**File:** `mermaid_diagrams.py`

A utility for generating Mermaid diagrams to visualize MCP architecture.

```bash
uv run agents_mcp_usage/multi_mcp/mermaid_diagrams.py
```

Key features:
- Creates visualizations of MCP architectures
- Helps understand the flow between agents, MCP servers, and LLMs
- Customizable to represent different configurations

## Benefits of Multi-MCP Architecture

Using multiple specialized MCP servers offers several advantages:

1. **Domain Separation**: Each MCP server can focus on a specific domain or set of capabilities.
2. **Modularity**: Add, remove, or update capabilities without disrupting the entire system.
3. **Scalability**: Distribute load across multiple servers for better performance.
4. **Specialization**: Optimize each MCP server for its specific use case.
5. **Security**: Control access to sensitive tools or data through separate servers.

This approach provides a more flexible and maintainable architecture for complex agent systems.
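The first two benefits can be made concrete with a framework-agnostic sketch: each server contributes its own tool table, and the client merges them while flagging name collisions. The tool names below are illustrative:

```python
from typing import Callable, Dict


def merge_toolsets(*toolsets: Dict[str, Callable]) -> Dict[str, Callable]:
    """Merge per-server tool tables, rejecting duplicate tool names."""
    merged: Dict[str, Callable] = {}
    for tools in toolsets:
        for name, fn in tools.items():
            if name in merged:
                raise ValueError(f"tool name collision: {name}")
            merged[name] = fn
    return merged


# Each MCP server contributes a focused, independently replaceable toolset
math_server = {"add": lambda a, b: a + b}
diagram_server = {"validate_mermaid": lambda src: src.startswith("graph")}

tools = merge_toolsets(math_server, diagram_server)
print(tools["add"](2, 3))                     # 5
print(tools["validate_mermaid"]("graph LR"))  # True
```

Swapping out `diagram_server` for a different implementation leaves the rest of the system untouched, which is the modularity argument in miniature.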
