Commit ea7fb18 (2 parents: 365e26c + ad8b409)

Merge pull request Azure-Samples#8 from pamelafox/reorg

Re-organize into "agents" and "servers" folder, update READMEs to match current repo state

File tree: 13 files changed (+1114, -1064 lines)


.vscode/mcp.json (2 additions, 2 deletions)

@@ -6,7 +6,7 @@
       "cwd": "${workspaceFolder}",
       "args": [
         "run",
-        "basic_mcp_stdio.py"
+        "servers/basic_mcp_stdio.py"
       ]
     },
     "expenses-mcp-http": {
@@ -25,7 +25,7 @@
         "debugpy",
         "--listen",
         "0.0.0.0:5678",
-        "basic_mcp_stdio.py"
+        "servers/basic_mcp_stdio.py"
       ]
     }
   },
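
For context, the two changed `args` entries sit inside server definitions in `.vscode/mcp.json`. A plausible post-change shape for the stdio entry is sketched below; only `cwd`, the `args` lines, and the `expenses-mcp-http` key are visible in this diff, so the `command` value and the surrounding keys are assumptions based on VS Code's MCP configuration format (and the README's `uv run` convention), not the repo's actual file:

```json
{
  "servers": {
    "expenses-mcp": {
      "type": "stdio",
      "command": "uv",
      "cwd": "${workspaceFolder}",
      "args": [
        "run",
        "servers/basic_mcp_stdio.py"
      ]
    }
  }
}
```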

README.md (42 additions, 68 deletions)

@@ -9,7 +9,6 @@ A demonstration project showcasing Model Context Protocol (MCP) implementations
 - [Python Scripts](#python-scripts)
 - [MCP Server Configuration](#mcp-server-configuration)
 - [Debugging](#debugging)
-- [License](#license)
 
 ## Prerequisites
 
@@ -25,15 +24,15 @@ A demonstration project showcasing Model Context Protocol (MCP) implementations
 
 1. Install dependencies using `uv`:
 
-```bash
-uv sync
-```
+    ```bash
+    uv sync
+    ```
 
 2. Copy `.env-sample` to `.env` and configure your environment variables:
 
-```bash
-cp .env-sample .env
-```
+    ```bash
+    cp .env-sample .env
+    ```
 
 3. Edit `.env` with your API credentials. Choose one of the following providers by setting `API_HOST`:
    - `github` - GitHub Models (requires `GITHUB_TOKEN`)
@@ -43,14 +42,14 @@ cp .env-sample .env
 
 ## Python Scripts
 
-Run any script with: `uv run <script_name>`
+Run any script with: `uv run <script_path>`
 
-- **basic_mcp_http.py** - MCP server with HTTP transport on port 8000
-- **basic_mcp_stdio.py** - MCP server with stdio transport for VS Code integration
-- **langchainv1_mcp_http.py** - LangChain agent with MCP integration
-- **langchainv1_mcp_github.py** - LangChain tool filtering demo with GitHub MCP (requires `GITHUB_TOKEN`)
-- **openai_agents_tool_filtering.py** - OpenAI Agents SDK tool filtering demo with Microsoft Learn MCP
-- **agentframework_mcp_learn.py** - Microsoft Agent Framework integration with MCP
+- **servers/basic_mcp_http.py** - MCP server with HTTP transport on port 8000
+- **servers/basic_mcp_stdio.py** - MCP server with stdio transport for VS Code integration
+- **agents/langchainv1_http.py** - LangChain agent with MCP integration
+- **agents/langchainv1_github.py** - LangChain tool filtering demo with GitHub MCP (requires `GITHUB_TOKEN`)
+- **agents/agentframework_learn.py** - Microsoft Agent Framework integration with MCP
+- **agents/agentframework_http.py** - Microsoft Agent Framework integration with local Expenses MCP server
 
 ## MCP Server Configuration
 
@@ -63,22 +62,25 @@ The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is a deve
 **For stdio servers:**
 
 ```bash
-npx @modelcontextprotocol/inspector uv run basic_mcp_stdio.py
+npx @modelcontextprotocol/inspector uv run servers/basic_mcp_stdio.py
 ```
 
 **For HTTP servers:**
 
 1. Start the HTTP server:
-```bash
-uv run basic_mcp_http.py
-```
+
+    ```bash
+    uv run servers/basic_mcp_http.py
+    ```
 
 2. In another terminal, run the inspector:
-```bash
-npx @modelcontextprotocol/inspector http://localhost:8000/mcp
-```
+
+    ```bash
+    npx @modelcontextprotocol/inspector http://localhost:8000/mcp
+    ```
 
 The inspector provides a web interface to:
+
 - View available tools, resources, and prompts
 - Test tool invocations with custom parameters
 - Inspect server responses and errors
@@ -92,65 +94,37 @@ The `.vscode/mcp.json` file configures MCP servers for GitHub Copilot integratio
 
 - **expenses-mcp**: stdio transport server for production use
 - **expenses-mcp-debug**: stdio server with debugpy on port 5678
-- **expenses-mcp-http**: HTTP transport server at `http://localhost:8000/mcp`
-- **expenses-mcp-http-debug**: stdio server with debugpy on port 5679
+- **expenses-mcp-http**: HTTP transport server at `http://localhost:8000/mcp`. You must start this server manually with `uv run servers/basic_mcp_http.py` before using it.
 
 **Switching Servers:**
 
-Configure which server GitHub Copilot uses by selecting it in the Chat panel selecting the tools icon.
+Configure which server GitHub Copilot uses by opening the Chat panel, selecting the tools icon, and choosing the desired MCP server from the list.
 
-## Debugging
-
-### Debug Configurations
-
-The `.vscode/launch.json` provides four debug configurations:
-
-#### Launch Configurations (Start server with debugging)
-
-1. **Launch MCP HTTP Server (Debug)**
-   - Directly starts `basic_mcp_http.py` with debugger attached
-   - Best for: Standalone testing and LangChain script debugging
-
-2. **Launch MCP stdio Server (Debug)**
-   - Directly starts `basic_mcp_stdio.py` with debugger attached
-   - Best for: Testing stdio communication
+![Servers selection dialog](readme_serverselect.png)
 
-#### Attach Configurations (Attach to running server)
+**Example input**
 
-3. **Attach to MCP Server (stdio)** - Port 5678
-   - Attaches to server started via `expenses-mcp-debug` in `mcp.json`
-   - Best for: Debugging during GitHub Copilot Chat usage
+Use a query like this to test the expenses MCP server:
 
-4. **Attach to MCP Server (HTTP)** - Port 5679
-   - Attaches to server started via `expenses-mcp-http-debug` in `mcp.json`
-   - Best for: Debugging HTTP server during Copilot usage
+```
+Log expense for 50 bucks of pizza on my amex today
+```
 
-### Debugging Workflow
+![Example GitHub Copilot Chat Input](readme_samplequery.png)
 
-#### Option 1: Launch and Debug (Standalone)
+## Debugging
 
-Use this approach for debugging with MCP Inspector or LangChain scripts:
+The `.vscode/launch.json` provides one debug configuration:
 
-1. Set breakpoints in `basic_mcp_http.py` or `basic_mcp_stdio.py`
-2. Press `Cmd+Shift+D` to open Run and Debug
-3. Select "Launch MCP HTTP Server (Debug)" or "Launch MCP stdio Server (Debug)"
-4. Press `F5` or click the green play button
-5. Connect MCP Inspector or run your LangChain script to trigger breakpoints
-   - For HTTP: `npx @modelcontextprotocol/inspector http://localhost:8000/mcp`
-   - For stdio: `npx @modelcontextprotocol/inspector uv run basic_mcp_stdio.py` (start without debugger first)
+**Attach to MCP Server (stdio)**: Attaches to server started via `expenses-mcp-debug` in `mcp.json`
 
-#### Option 2: Attach to Running Server (Copilot Integration)
+To debug an MCP server with GitHub Copilot Chat:
 
-1. Set breakpoints in your MCP server file
-1. Start the debug server via `mcp.json` configuration:
-   - Select `expenses-mcp-debug` or `expenses-mcp-http-debug`
+1. Set breakpoints in the MCP server code in `servers/basic_mcp_stdio.py`
+1. Start the debug server via `mcp.json` configuration by selecting `expenses-mcp-debug`
 1. Press `Cmd+Shift+D` to open Run and Debug
-1. Select appropriate "Attach to MCP Server" configuration
-1. Press `F5` to attach
-1. Select correct expense mcp server in GitHub Copilot Chat tools
+1. Select "Attach to MCP Server (stdio)" configuration
+1. Press `F5` or the play button to start the debugger
+1. Select the expenses-mcp-debug server in GitHub Copilot Chat tools
 1. Use GitHub Copilot Chat to trigger the MCP tools
-1. Debugger pauses at breakpoints
-
-## License
-
-MIT
+1. Debugger pauses at breakpoints
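
The sample query above ("Log expense for 50 bucks of pizza on my amex today") implies an expense-logging tool on the server side. This diff does not include the contents of `servers/basic_mcp_stdio.py`, so the sketch below is hypothetical: the `log_expense` function, its parameters, and the `Expense` dataclass are illustrative guesses at the kind of tool an expenses MCP server would expose, not the repo's actual code.

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import date


@dataclass
class Expense:
    """One logged expense record (hypothetical shape, not from the repo)."""
    description: str
    amount: float
    payment_method: str
    day: date


def log_expense(
    description: str,
    amount: float,
    payment_method: str,
    day: str | None = None,
) -> Expense:
    """Record an expense; an MCP server would register a function like this
    as a tool, letting the agent map natural language onto the parameters."""
    when = date.fromisoformat(day) if day else date.today()
    return Expense(description, amount, payment_method, when)


# The sample query would be translated by the agent into a call such as:
expense = log_expense("pizza", 50.0, "amex")
```

An agent connected to the server sees the tool's name, docstring, and parameter schema, which is how "50 bucks of pizza on my amex" becomes `amount=50.0, payment_method="amex"`.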

agents/agentframework_http.py (new file, 83 additions, 0 deletions)

from __future__ import annotations

import asyncio
import logging
import os

from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from rich import print
from rich.logging import RichHandler

from agent_framework import ChatAgent, MCPStreamableHTTPTool
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.openai import OpenAIChatClient

# Configure logging
logging.basicConfig(
    level=logging.WARNING,
    format="%(message)s",
    datefmt="[%X]",
    handlers=[RichHandler()]
)
logger = logging.getLogger("agentframework_mcp_http")

# Load environment variables
load_dotenv(override=True)

# Constants
MCP_SERVER_URL = "http://localhost:8000/mcp/"

# Configure chat client based on API_HOST
API_HOST = os.getenv("API_HOST", "github")

if API_HOST == "azure":
    client = AzureOpenAIChatClient(
        credential=DefaultAzureCredential(),
        deployment_name=os.environ.get("AZURE_OPENAI_CHAT_DEPLOYMENT"),
        endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
        api_version=os.environ.get("AZURE_OPENAI_VERSION"),
    )
elif API_HOST == "github":
    client = OpenAIChatClient(
        base_url="https://models.github.ai/inference",
        api_key=os.environ["GITHUB_TOKEN"],
        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
    )
elif API_HOST == "ollama":
    client = OpenAIChatClient(
        base_url=os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
        api_key="none",
        model_id=os.environ.get("OLLAMA_MODEL", "llama3.1:latest"),
    )
else:
    client = OpenAIChatClient(
        api_key=os.environ.get("OPENAI_API_KEY"), model_id=os.environ.get("OPENAI_MODEL", "gpt-4o")
    )


async def http_mcp_example() -> None:
    """
    Demonstrate MCP integration with the local Expenses MCP server.

    Creates an agent that can help users log expenses
    using the Expenses MCP server at http://localhost:8000/mcp/.
    """
    async with (
        MCPStreamableHTTPTool(
            name="Expenses MCP Server",
            url=MCP_SERVER_URL
        ) as mcp_server,
        ChatAgent(
            chat_client=client,
            name="Expenses Agent",
            instructions="You help users to log expenses.",
        ) as agent,
    ):
        user_query = "yesterday I bought a laptop for $1200 using my visa."
        result = await agent.run(user_query, tools=mcp_server)
        print(result)


if __name__ == "__main__":
    asyncio.run(http_mcp_example())

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.

readme_samplequery.png (new image, 32.2 KB)

readme_serverselect.png (new image, 48.2 KB)
File renamed without changes.
File renamed without changes.
