A demonstration project showcasing Model Context Protocol (MCP) implementations using FastMCP, with examples of stdio and HTTP transports, integration with LangChain and Agent Framework, and deployment to Azure Container Apps.
- Getting started
- Run local MCP servers
- Run local Agents <-> MCP
- Deploy to Azure
- Deploy to Azure with private networking
- Deploy to Azure with Keycloak authentication
You have a few options for setting up this project. The quickest way to get started is GitHub Codespaces, since it will set up all the tools for you, but you can also set it up locally.
You can run this project virtually by using GitHub Codespaces. Click the button to open a web-based VS Code instance in your browser:
Once the Codespace is open, open a terminal window and continue with the deployment steps.
A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:
- Start Docker Desktop (install it if not already installed)
- Open the project:
- In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.
- Continue with the deployment steps.
If you're not using one of the above options, then you'll need to:
- Make sure the following tools are installed:
- Clone the repository and open the project folder.
- Create a Python virtual environment and activate it.
- Install the dependencies:

  ```sh
  uv sync
  ```

- Copy `.env-sample` to `.env` and configure your environment variables:

  ```sh
  cp .env-sample .env
  ```

- Edit `.env` with your API credentials. Choose one of the following providers by setting `API_HOST`:
  - `github` - GitHub Models (requires `GITHUB_TOKEN`)
  - `azure` - Azure OpenAI (requires Azure credentials)
  - `ollama` - Local Ollama instance
  - `openai` - OpenAI API (requires `OPENAI_API_KEY`)
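The `API_HOST` switch can be pictured as a small dispatch on environment variables. This is an illustrative sketch, not code from the repo: the `resolve_provider` helper is hypothetical, and `AZURE_OPENAI_ENDPOINT` is an assumed variable name, while the other variable names come from the list above.

```python
import os

def resolve_provider() -> dict:
    """Pick a provider based on API_HOST and check its required
    environment variables are set (illustrative sketch only)."""
    required = {
        "github": ["GITHUB_TOKEN"],
        "azure": ["AZURE_OPENAI_ENDPOINT"],  # assumed variable name
        "ollama": [],                        # local instance, no key needed
        "openai": ["OPENAI_API_KEY"],
    }
    host = os.getenv("API_HOST", "github")
    if host not in required:
        raise ValueError(f"Unknown API_HOST: {host}")
    missing = [var for var in required[host] if not os.getenv(var)]
    if missing:
        raise ValueError(f"API_HOST={host} needs: {', '.join(missing)}")
    return {"host": host}

os.environ["API_HOST"] = "openai"
os.environ["OPENAI_API_KEY"] = "sk-placeholder"
print(resolve_provider())  # {'host': 'openai'}
```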
This project includes MCP servers in the servers/ directory:
| File | Description |
|---|---|
| servers/basic_mcp_stdio.py | MCP server with stdio transport for VS Code integration |
| servers/basic_mcp_http.py | MCP server with HTTP transport on port 8000 |
| servers/deployed_mcp.py | MCP server for Azure deployment with Cosmos DB and optional Keycloak auth |
The local servers (basic_mcp_stdio.py and basic_mcp_http.py) implement an "Expenses Tracker" with a tool to add expenses to a CSV file.
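The shape of such a tool can be sketched with the standard library alone. This is not the repo's actual implementation — the file path and field names are assumptions — but in the real servers a function like this would be registered as an MCP tool (e.g. via FastMCP's `@mcp.tool` decorator):

```python
import csv
from datetime import date
from pathlib import Path

EXPENSES_FILE = Path("expenses.csv")  # hypothetical path

def add_expense(amount: float, description: str, payment_method: str) -> str:
    """Append one expense row to a CSV file, writing a header row first
    if the file does not exist yet (illustrative sketch)."""
    is_new = not EXPENSES_FILE.exists()
    with EXPENSES_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "amount", "description", "payment_method"])
        writer.writerow([date.today().isoformat(), amount, description, payment_method])
    return f"Logged ${amount:.2f} for {description}"

print(add_expense(50.0, "pizza", "amex"))  # Logged $50.00 for pizza
```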
The .vscode/mcp.json file configures MCP servers for GitHub Copilot integration:
Available Servers:
- `expenses-mcp`: stdio transport server for production use
- `expenses-mcp-debug`: stdio server with debugpy on port 5678
- `expenses-mcp-http`: HTTP transport server at `http://localhost:8000/mcp`. You must start this server manually with `uv run servers/basic_mcp_http.py` before using it.
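The entries in `.vscode/mcp.json` follow VS Code's MCP configuration schema. A sketch of what the stdio and HTTP entries might look like — the exact commands and arguments in the repo's file may differ:

```json
{
  "servers": {
    "expenses-mcp": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "servers/basic_mcp_stdio.py"]
    },
    "expenses-mcp-http": {
      "type": "http",
      "url": "http://localhost:8000/mcp"
    }
  }
}
```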
Switching Servers:
Configure which server GitHub Copilot uses by opening the Chat panel, selecting the tools icon, and choosing the desired MCP server from the list.
Use a query like this to test the expenses MCP server:

```
Log expense for 50 bucks of pizza on my amex today
```
The .vscode/launch.json provides a debug configuration to attach to an MCP server.
To debug an MCP server with GitHub Copilot Chat:
- Set breakpoints in the MCP server code in `servers/basic_mcp_stdio.py`
- Start the debug server via the `mcp.json` configuration by selecting `expenses-mcp-debug`
- Press `Cmd+Shift+D` to open Run and Debug
- Select the "Attach to MCP Server (stdio)" configuration
- Press `F5` or the play button to start the debugger
- Select the expenses-mcp-debug server in GitHub Copilot Chat tools
- Use GitHub Copilot Chat to trigger the MCP tools
- The debugger will pause at your breakpoints
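The attach configuration in `.vscode/launch.json` connects the debugger to the debugpy port opened by `expenses-mcp-debug`. A sketch of what such a configuration looks like (port 5678 matches the debug server above; the repo's actual file may differ slightly):

```json
{
  "name": "Attach to MCP Server (stdio)",
  "type": "debugpy",
  "request": "attach",
  "connect": { "host": "localhost", "port": 5678 }
}
```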
The MCP Inspector is a developer tool for testing and debugging MCP servers.
Note: While HTTP servers can technically work with port forwarding in Codespaces/Dev Containers, the setup for MCP Inspector and debugger attachment is not straightforward. For the best development experience with full debugging capabilities, we recommend running this project locally.
For stdio servers:

```sh
npx @modelcontextprotocol/inspector uv run servers/basic_mcp_stdio.py
```

For HTTP servers:

- Start the HTTP server:

  ```sh
  uv run servers/basic_mcp_http.py
  ```

- In another terminal, run the inspector:

  ```sh
  npx @modelcontextprotocol/inspector http://localhost:8000/mcp
  ```
The inspector provides a web interface to:
- View available tools, resources, and prompts
- Test tool invocations with custom parameters
- Inspect server responses and errors
- Debug server communication
This project includes example agents in the agents/ directory that demonstrate how to connect AI agents to MCP servers:
| File | Description |
|---|---|
| agents/agentframework_learn.py | Microsoft Agent Framework integration with MCP |
| agents/agentframework_http.py | Microsoft Agent Framework integration with local Expenses MCP server |
| agents/langchainv1_http.py | LangChain agent with MCP integration |
| agents/langchainv1_github.py | LangChain tool filtering demo with GitHub MCP (requires GITHUB_TOKEN) |
To run an agent:
- First start the HTTP MCP server:

  ```sh
  uv run servers/basic_mcp_http.py
  ```

- In another terminal, run an agent:

  ```sh
  uv run agents/agentframework_http.py
  ```
The agents will connect to the MCP server and allow you to interact with the expense tracking tools through a chat interface.
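Under the hood, both frameworks speak MCP's JSON-RPC 2.0 protocol to the server. The sketch below builds the `tools/call` request an agent sends when it invokes a tool; the tool name and arguments are illustrative, and the client libraries handle this serialization for you:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request (JSON-RPC 2.0), as MCP client
    libraries do internally before sending it to the server endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

payload = make_tool_call(1, "add_expense", {"amount": 50, "description": "pizza"})
print(payload)
```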
This project can be deployed to Azure Container Apps using the Azure Developer CLI (azd). The deployment provisions:
- Azure Container Apps - Hosts both the MCP server and agent
- Azure OpenAI - Provides the LLM for the agent
- Azure Cosmos DB - Stores expenses data
- Azure Container Registry - Stores container images
- Log Analytics - Monitoring and diagnostics
- Sign up for a free Azure account and create an Azure Subscription.
- Check that you have the necessary permissions:
  - Your Azure account must have `Microsoft.Authorization/roleAssignments/write` permissions, such as Role Based Access Control Administrator, User Access Administrator, or Owner.
  - Your Azure account also needs `Microsoft.Resources/deployments/write` permissions on the subscription level.
- Login to Azure:

  ```sh
  azd auth login
  ```

  For GitHub Codespaces users, if the previous command fails, try:

  ```sh
  azd auth login --use-device-code
  ```

- Create a new azd environment:

  ```sh
  azd env new
  ```

  This will create a folder inside `.azure` with the name of your environment.

- Provision and deploy the resources:

  ```sh
  azd up
  ```

  It will prompt you to select a subscription and location. This will take several minutes to complete.

- Once deployment is complete, a `.env` file will be created with the necessary environment variables to run the agents locally against the deployed resources.
Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage.
You can try the Azure pricing calculator for the resources:
- Azure OpenAI Service: S0 tier, GPT-4o-mini model. Pricing is based on token count.
- Azure Container Apps: Consumption tier.
- Azure Container Registry: Standard tier.
- Azure Cosmos DB: Serverless tier.
- Log Analytics (Optional): Pay-as-you-go tier. Costs based on data ingested.
To avoid continued charges, delete the resources once you're done by running `azd down`.
To demonstrate enhanced security for production deployments, this project supports deploying with a virtual network (VNet) configuration that restricts public access to Azure resources.
- Set these azd environment variables to set up a virtual network and private endpoints for the Container App, Cosmos DB, and OpenAI resources:

  ```sh
  azd env set USE_VNET true
  azd env set USE_PRIVATE_INGRESS true
  ```

  The Log Analytics and ACR resources will still have public access enabled, so that you can deploy and monitor the app without needing a VPN. In production, you would typically restrict these as well.

- Provision and deploy:

  ```sh
  azd up
  ```
When using VNet configuration, additional Azure resources are provisioned:
- Virtual Network: Pay-as-you-go tier. Costs based on data processed.
- Azure Private DNS Resolver: Priced per month, per endpoint, and per zone.
- Azure Private Endpoints: Priced per hour per endpoint.
This project supports deploying with OAuth 2.0 authentication using Keycloak as the identity provider, implementing the MCP OAuth specification with Dynamic Client Registration (DCR).
| Component | Description |
|---|---|
| Keycloak Container App | Keycloak 26.0 with pre-configured realm |
| HTTP Route Configuration | Rule-based routing: /auth/* → Keycloak, /* → MCP Server |
| OAuth-protected MCP Server | FastMCP with JWT validation against Keycloak's JWKS endpoint |
- Set the Keycloak admin password (required):

  ```sh
  azd env set KEYCLOAK_ADMIN_PASSWORD "YourSecurePassword123!"
  ```

- Optionally customize the realm name (default: `mcp`):

  ```sh
  azd env set KEYCLOAK_REALM_NAME "mcp"
  ```

- Deploy to Azure:

  ```sh
  azd up
  ```

  This will create the Azure Container Apps environment, deploy Keycloak with the pre-configured realm, deploy the MCP server with OAuth validation, and configure HTTP route-based routing.

- Verify deployment by checking the outputs:

  ```sh
  azd env get-value MCP_SERVER_URL
  azd env get-value KEYCLOAK_DIRECT_URL
  azd env get-value KEYCLOAK_ADMIN_CONSOLE
  ```

- Visit the Keycloak admin console to verify the realm is configured:

  ```
  https://<your-mcproutes-url>/auth/admin
  ```

  Login with `admin` and your configured password.

- Generate the local environment file (automatically created after `azd up`):

  ```sh
  ./infra/write_env.sh
  ```

  This creates `.env` with `KEYCLOAK_REALM_URL`, `MCP_SERVER_URL`, and Azure OpenAI settings.

- Run the agent:

  ```sh
  uv run agents/agentframework_http.py
  ```

  The agent automatically detects `KEYCLOAK_REALM_URL` in the environment and authenticates via DCR + client credentials. On success, it will add an expense and print the result.
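The DCR + client-credentials handshake boils down to two POSTs against the realm. The sketch below builds the request bodies following RFC 7591 and the standard OAuth 2.0 client credentials grant; the client name, realm URL, and placeholder values are illustrative, not taken from the repo:

```python
import json
from urllib.parse import urlencode

REALM_URL = "https://example.com/auth/realms/mcp"  # placeholder realm URL

# Step 1: Dynamic Client Registration (RFC 7591) - a JSON body POSTed to
# the realm's registration endpoint; the response contains the new
# client_id and client_secret.
registration_request = {
    "client_name": "expenses-agent",
    "grant_types": ["client_credentials"],
    "token_endpoint_auth_method": "client_secret_basic",
}

# Step 2: client credentials grant - a form-encoded body POSTed to the
# realm's token endpoint using the credentials returned by step 1.
token_request = urlencode({
    "grant_type": "client_credentials",
    "client_id": "<from-registration-response>",
    "client_secret": "<from-registration-response>",
})

print(json.dumps(registration_request))
# Keycloak realms expose their token endpoint under this path:
print(f"{REALM_URL}/protocol/openid-connect/token")
```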
| Item | Current | Production Recommendation | Why |
|---|---|---|---|
| Keycloak mode | `start-dev` | `start` with proper config | Dev mode has relaxed security defaults |
| Database | H2 in-memory | PostgreSQL | H2 doesn't persist data across restarts |
| Replicas | 1 (due to H2) | Multiple with shared DB | H2 is in-memory, can't share state |
| Keycloak access | Public (direct URL) | Internal only via routes | Route URL isn't known until after deployment |
| DCR | Open (anonymous) | Require initial access token | Any client can register without auth |
Note: Keycloak must be publicly accessible because its URL is dynamically generated by Azure. Token issuer validation requires a known URL, but the mcproutes URL isn't available until after deployment. Using a custom domain would fix this.

