
Commit ac94198

feat: add LangChain & Agent Framework MCP examples, update prompts, deps, and docs

1 parent: 8967db7

File tree: 11 files changed (+2159, −54 lines)

.env-sample

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
+# API Host Selection
+# Options: github, azure, ollama, openai
+API_HOST=github
+
+# GitHub Models Configuration
+GITHUB_TOKEN=your_github_token_here
+GITHUB_MODEL=gpt-4o
+
+# Azure OpenAI Configuration
+AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
+AZURE_OPENAI_CHAT_DEPLOYMENT=your-deployment-name
+AZURE_OPENAI_VERSION=2024-02-15-preview
+
+# Ollama Configuration
+OLLAMA_MODEL=llama3.1
+OLLAMA_ENDPOINT=http://localhost:11434/v1
+OLLAMA_API_KEY=ollama
+
+# OpenAI Configuration (default if API_HOST not set)
+OPENAI_MODEL=gpt-4o-mini
+OPENAI_API_KEY=your_openai_api_key_here
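These variables are consumed at startup by the scripts added in this commit; a minimal sketch of the selection logic (mirroring what `agentframework_mcp_learn.py` below does) looks like this:

```python
import os

from dotenv import load_dotenv  # python-dotenv, which the new scripts import

# Values from .env override anything already set in the environment
load_dotenv(override=True)

# API_HOST selects the provider; "github" is the sample default
API_HOST = os.getenv("API_HOST", "github")  # github | azure | ollama | openai
```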

.vscode/launch.json

Lines changed: 38 additions & 1 deletion
@@ -2,7 +2,29 @@
   "version": "0.2.0",
   "configurations": [
     {
-      "name": "Attach to MCP Server",
+      "name": "Launch MCP HTTP Server (Debug)",
+      "type": "debugpy",
+      "request": "launch",
+      "program": "${workspaceFolder}/basic_mcp_http.py",
+      "console": "integratedTerminal",
+      "cwd": "${workspaceFolder}",
+      "env": {
+        "PYTHONUNBUFFERED": "1"
+      }
+    },
+    {
+      "name": "Launch MCP stdio Server (Debug)",
+      "type": "debugpy",
+      "request": "launch",
+      "program": "${workspaceFolder}/basic_mcp_stdio.py",
+      "console": "integratedTerminal",
+      "cwd": "${workspaceFolder}",
+      "env": {
+        "PYTHONUNBUFFERED": "1"
+      }
+    },
+    {
+      "name": "Attach to MCP Server (stdio)",
       "type": "debugpy",
       "request": "attach",
       "connect": {
@@ -15,6 +37,21 @@
           "remoteRoot": "${workspaceFolder}"
         }
       ]
+    },
+    {
+      "name": "Attach to MCP Server (HTTP)",
+      "type": "debugpy",
+      "request": "attach",
+      "connect": {
+        "host": "localhost",
+        "port": 5679
+      },
+      "pathMappings": [
+        {
+          "localRoot": "${workspaceFolder}",
+          "remoteRoot": "${workspaceFolder}"
+        }
+      ]
     }
   ]
 }

.vscode/mcp.json

Lines changed: 19 additions & 0 deletions
@@ -33,6 +33,25 @@
     "expenses-mcp-http": {
       "type": "http",
       "url": "http://localhost:8000/mcp"
+    },
+    "expenses-mcp-http-debug": {
+      "type": "stdio",
+      "command": "uv",
+      "cwd": "${workspaceFolder}",
+      "args": [
+        "run",
+        "--",
+        "python",
+        "-m",
+        "debugpy",
+        "--listen",
+        "0.0.0.0:5679",
+        "--wait-for-client",
+        "basic_mcp_http.py"
+      ],
+      "env": {
+        "PYTHONUNBUFFERED": "1"
+      }
     }
   },
   "inputs": []

README.md

Lines changed: 154 additions & 0 deletions
@@ -0,0 +1,154 @@
+# Python MCP Demo
+
+A demonstration project showcasing Model Context Protocol (MCP) implementations using FastMCP, with examples of the stdio and HTTP transports plus integrations with LangChain and the Microsoft Agent Framework.
+
+## Table of Contents
+
+- [Prerequisites](#prerequisites)
+- [Setup](#setup)
+- [Python Scripts](#python-scripts)
+- [MCP Server Configuration](#mcp-server-configuration)
+- [Debugging](#debugging)
+- [License](#license)
+
+## Prerequisites
+
+- Python 3.13 or higher
+- [uv](https://docs.astral.sh/uv/)
+- API access to one of the following:
+  - GitHub Models (GitHub token)
+  - Azure OpenAI (Azure credentials)
+  - Ollama (local installation)
+  - OpenAI API (API key)
+
+## Setup
+
+1. Install dependencies using `uv`:
+
+   ```bash
+   uv sync
+   ```
+
+2. Copy `.env-sample` to `.env`:
+
+   ```bash
+   cp .env-sample .env
+   ```
+
+3. Edit `.env` with your API credentials. Choose a provider by setting `API_HOST`:
+   - `github` - GitHub Models (requires `GITHUB_TOKEN`)
+   - `azure` - Azure OpenAI (requires Azure credentials)
+   - `ollama` - Local Ollama instance
+   - `openai` - OpenAI API (requires `OPENAI_API_KEY`)
+
+## Python Scripts
+
+Run any script with `uv run <script_name>`:
+
+- **basic_mcp_http.py** - MCP server with HTTP transport on port 8000
+- **basic_mcp_stdio.py** - MCP server with stdio transport for VS Code integration
+- **langchainv1_mcp_http.py** - LangChain agent with MCP tool integration and temporal context handling
+- **agentframework_mcp_learn.py** - Microsoft Agent Framework integration with MCP
+
+## MCP Server Configuration
+
+### Using with MCP Inspector
+
+The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is a developer tool for testing and debugging MCP servers.
+
+> **Note:** While HTTP servers can technically work with port forwarding in Codespaces/Dev Containers, setting up MCP Inspector and debugger attachment there is not straightforward. For the best development experience with full debugging capabilities, we recommend running this project locally.
+
+**For stdio servers:**
+
+```bash
+npx @modelcontextprotocol/inspector uv run basic_mcp_stdio.py
+```
+
+**For HTTP servers:**
+
+1. Start the HTTP server:
+
+   ```bash
+   uv run basic_mcp_http.py
+   ```
+
+2. In another terminal, run the inspector:
+
+   ```bash
+   npx @modelcontextprotocol/inspector http://localhost:8000/mcp
+   ```
+
+The inspector provides a web interface to:
+
+- View available tools, resources, and prompts
+- Test tool invocations with custom parameters
+- Inspect server responses and errors
+- Debug server communication
+
+### Using with GitHub Copilot
+
+The `.vscode/mcp.json` file configures MCP servers for GitHub Copilot integration.
+
+**Available servers:**
+
+- **expenses-mcp**: stdio transport server for production use
+- **expenses-mcp-debug**: stdio server with debugpy listening on port 5678
+- **expenses-mcp-http**: HTTP transport server at `http://localhost:8000/mcp`
+- **expenses-mcp-http-debug**: launches `basic_mcp_http.py` under debugpy, listening on port 5679
+
+**Switching servers:**
+
+Choose which server GitHub Copilot uses by clicking the tools icon in the Chat panel and selecting it there.
+
+## Debugging
+
+### Debug Configurations
+
+The `.vscode/launch.json` provides four debug configurations:
+
+#### Launch Configurations (start a server with debugging)
+
+1. **Launch MCP HTTP Server (Debug)**
+   - Starts `basic_mcp_http.py` directly with the debugger attached
+   - Best for: standalone testing and LangChain script debugging
+
+2. **Launch MCP stdio Server (Debug)**
+   - Starts `basic_mcp_stdio.py` directly with the debugger attached
+   - Best for: testing stdio communication
+
+#### Attach Configurations (attach to a running server)
+
+3. **Attach to MCP Server (stdio)** - port 5678
+   - Attaches to a server started via `expenses-mcp-debug` in `mcp.json`
+   - Best for: debugging during GitHub Copilot Chat usage
+
+4. **Attach to MCP Server (HTTP)** - port 5679
+   - Attaches to a server started via `expenses-mcp-http-debug` in `mcp.json`
+   - Best for: debugging the HTTP server during Copilot usage
+
+### Debugging Workflow
+
+#### Option 1: Launch and debug (standalone)
+
+Use this approach for debugging with MCP Inspector or LangChain scripts:
+
+1. Set breakpoints in `basic_mcp_http.py` or `basic_mcp_stdio.py`
+2. Press `Cmd+Shift+D` to open Run and Debug
+3. Select "Launch MCP HTTP Server (Debug)" or "Launch MCP stdio Server (Debug)"
+4. Press `F5` or click the green play button
+5. Connect MCP Inspector or run your LangChain script to trigger breakpoints
+   - For HTTP: `npx @modelcontextprotocol/inspector http://localhost:8000/mcp`
+   - For stdio: `npx @modelcontextprotocol/inspector uv run basic_mcp_stdio.py` (start without the debugger first)
+
+#### Option 2: Attach to a running server (Copilot integration)
+
+1. Set breakpoints in your MCP server file
+2. Start the debug server via its `mcp.json` entry: select `expenses-mcp-debug` or `expenses-mcp-http-debug`
+3. Press `Cmd+Shift+D` to open Run and Debug
+4. Select the matching "Attach to MCP Server" configuration
+5. Press `F5` to attach
+6. Select the corresponding expenses MCP server in the GitHub Copilot Chat tools
+7. Use GitHub Copilot Chat to trigger the MCP tools
+8. The debugger pauses at your breakpoints
+
+## License
+
+MIT
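As a supplement to the README's Inspector instructions, a quick programmatic smoke test is also possible. The sketch below uses the official `mcp` Python SDK (an assumption; it is not shown in this diff) against the HTTP server from `basic_mcp_http.py`:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    # Assumes the server is already running: `uv run basic_mcp_http.py`
    async with streamablehttp_client("http://localhost:8000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```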

agentframework_mcp_learn.py

Lines changed: 67 additions & 0 deletions
@@ -0,0 +1,67 @@
+from __future__ import annotations
+
+import asyncio
+import logging
+import os
+
+
+from dotenv import load_dotenv
+from rich import print
+from rich.logging import RichHandler
+from agent_framework import ChatAgent, MCPStreamableHTTPTool
+from agent_framework.azure import AzureOpenAIChatClient
+from agent_framework.openai import OpenAIChatClient
+from azure.identity import DefaultAzureCredential
+
+logging.basicConfig(level=logging.WARNING, format="%(message)s", datefmt="[%X]", handlers=[RichHandler()])
+logger = logging.getLogger("learn_mcp_lang")
+
+load_dotenv(override=True)
+API_HOST = os.getenv("API_HOST", "github")
+
+if API_HOST == "azure":
+    client = AzureOpenAIChatClient(
+        credential=DefaultAzureCredential(),
+        deployment_name=os.environ.get("AZURE_OPENAI_CHAT_DEPLOYMENT"),
+        endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
+        api_version=os.environ.get("AZURE_OPENAI_VERSION"),
+    )
+elif API_HOST == "github":
+    client = OpenAIChatClient(
+        base_url="https://models.github.ai/inference",
+        api_key=os.environ["GITHUB_TOKEN"],
+        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
+    )
+elif API_HOST == "ollama":
+    client = OpenAIChatClient(
+        base_url=os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
+        api_key="none",
+        model_id=os.environ.get("OLLAMA_MODEL", "llama3.1:latest"),
+    )
+else:
+    client = OpenAIChatClient(
+        api_key=os.environ.get("OPENAI_API_KEY"), model_id=os.environ.get("OPENAI_MODEL", "gpt-4o")
+    )
+
+
+async def http_mcp_example():
+    async with (
+        MCPStreamableHTTPTool(
+            name="Microsoft Learn MCP",
+            url="https://learn.microsoft.com/api/mcp",
+            headers={"Authorization": "Bearer your-token"},
+        ) as mcp_server,
+        ChatAgent(
+            chat_client=client,
+            name="DocsAgent",
+            instructions="You help with Microsoft documentation questions.",
+        ) as agent,
+    ):
+        result = await agent.run(
+            "How to create an Azure storage account using az cli?",
+            tools=mcp_server
+        )
+        print(result)
+
+if __name__ == "__main__":
+    asyncio.run(http_mcp_example())
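The same pattern could point at this repo's own expenses server. A hypothetical variant (not part of this commit; it reuses the `client` and imports from the script above, and the function, agent name, and question are illustrative):

```python
# Hypothetical variant of http_mcp_example() targeting the local expenses
# server; start it first with `uv run basic_mcp_http.py`.
async def local_mcp_example():
    async with (
        MCPStreamableHTTPTool(
            name="Expenses MCP",
            url="http://localhost:8000/mcp",
        ) as mcp_server,
        ChatAgent(
            chat_client=client,
            name="ExpensesAgent",
            instructions="You help analyze expense data.",
        ) as agent,
    ):
        result = await agent.run("Which category do I spend the most on?", tools=mcp_server)
        print(result)
```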

basic_mcp_http.py

Lines changed: 27 additions & 20 deletions
@@ -101,30 +101,37 @@ async def get_expenses_data():
 
 
 @mcp.prompt
-def create_expense_prompt(
-    date: str,
-    amount: float,
-    category: str,
-    description: str,
-    payment_method: str
+def analyze_spending_prompt(
+    category: str | None = None,
+    start_date: str | None = None,
+    end_date: str | None = None
 ) -> str:
+    """Generate a prompt to analyze spending patterns with optional filters."""
 
-    """Generate a prompt to add a new expense using the add_expense tool."""
-
-    logger.info(f"Expense prompt created for: {description}")
-
-    return f"""
-Please add the following expense:
-- Date: {date}
-- Amount: ${amount}
-- Category: {category}
-- Description: {description}
-- Payment Method: {payment_method}
-Use the `add_expense` tool to record this transaction.
-"""
+    filters = []
+    if category:
+        filters.append(f"Category: {category}")
+    if start_date:
+        filters.append(f"From: {start_date}")
+    if end_date:
+        filters.append(f"To: {end_date}")
+
+    filter_text = f" ({', '.join(filters)})" if filters else ""
+
+    return f"""
+Please analyze my spending patterns{filter_text} and provide:
+
+1. Total spending breakdown by category
+2. Average daily/weekly spending
+3. Most expensive single transaction
+4. Payment method distribution
+5. Spending trends or unusual patterns
+6. Recommendations for budget optimization
+
+Use the expense data to generate actionable insights.
+"""
 
 
 if __name__ == "__main__":
     logger.info("MCP Expenses server starting (HTTP mode on port 8000)")
-    # Run in HTTP mode on port 8000
     mcp.run(transport="http", host="0.0.0.0", port=8000)