Commit 6376c17: Initial commit

1 parent b8d6883


51 files changed: +3827 additions, −9 deletions

.gitignore

Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@

```
# CDK asset staging directory
.cdk.staging
cdk.out
cdk.context.json
*.swp
package-lock.json

# Node
*.js
!jest.config.js
*.d.ts
node_modules

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
#lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
.coverage
htmlcov

# Virtual Environment
venv/
env/
ENV/
.env
.venv

# IDE
.idea/
.vscode/
*.swp
*.swo
.DS_Store

# Project specific
*.db
*.sqlite3
*.log
```

README.md

Lines changed: 219 additions & 9 deletions
@@ -1,17 +1,227 @@
The placeholder README content was removed:

```diff
-## My Project

-TODO: Fill this README out!

-Be sure to:

-* Change the title in this README
-* Edit your repository description on GitHub

-## Security

-See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.

-## License

-This project is licensed under the Apache-2.0 License.
```

and replaced with the new project README:

# Model Context Protocol with AWS Lambda

This project enables you to run [Model Context Protocol](https://modelcontextprotocol.io) servers in AWS Lambda functions.

Currently, most implementations of MCP servers and clients are entirely local on a single machine.
A desktop application such as an IDE or Claude Desktop initiates MCP servers locally as child processes
and communicates with each of those servers over a long-running stdio stream.
```mermaid
flowchart LR
    subgraph "Your Laptop"
        Host["Desktop Application<br>with MCP Clients"]
        S1["MCP Server A<br>(child process)"]
        S2["MCP Server B<br>(child process)"]
        Host <-->|"MCP Protocol<br>(over stdio stream)"| S1
        Host <-->|"MCP Protocol<br>(over stdio stream)"| S2
    end
```
This MCP server adapter for AWS Lambda helps you wrap existing stdio MCP servers into Lambda functions.
You can invoke these function-based MCP servers from your application using the MCP protocol
over short-lived connections.
Your application can be a desktop app, a distributed system running in the cloud,
or any other architecture.
The only requirement is that your application has access to invoke your Lambda functions.
```mermaid
flowchart LR
    subgraph "Distributed System"
        App["Your Application<br>with MCP Clients"]
        S3["MCP Server A<br>(Lambda function)"]
        S4["MCP Server B<br>(Lambda function)"]
        App <-->|"MCP Protocol<br>(invoke function)"| S3
        App <-->|"MCP Protocol<br>(invoke function)"| S4
    end
```
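To make the "invoke function" arrow concrete, here is a minimal sketch of calling a function-based MCP server directly with boto3, assuming the Lambda event payload is a single MCP JSON-RPC message (the client adapter shown later wraps this plumbing for you):

```python
import json

import boto3

# Sketch only: the mcp-lambda client adapter normally handles this exchange.
lambda_client = boto3.client("lambda", region_name="us-east-2")

# Assumption: the server adapter treats the event as one MCP JSON-RPC message.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}

response = lambda_client.invoke(
    FunctionName="mcp-server-time",  # the example 'time' server deployed below
    Payload=json.dumps(request),
)
print(json.loads(response["Payload"].read()))
```

Each invocation is a short-lived connection: the function starts the wrapped server, handles this one message, and shuts the server down.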
## Considerations
* This package currently supports MCP servers and clients written in Python and TypeScript.
  Other languages such as Kotlin are not supported.
* The server adapters only adapt stdio MCP servers, not servers written for other protocols such as SSE.
* The server adapters do not maintain any MCP server state across Lambda function invocations,
  so only stateless MCP servers are a good fit for this adapter: for example, servers that
  invoke stateless tools like the [time MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/time)
  or make stateless web requests like the [fetch MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch).
  Stateful MCP servers are not a good fit, because they will lose their state on every request.
  Examples include MCP servers that manage data on disk or in memory, such as
  the [sqlite MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite),
  the [filesystem MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem),
  and the [git MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/git).
* The server adapters ignore any MCP protocol notifications from the client to the server.
* The server adapters do not provide mechanisms for managing any secrets needed by the wrapped
  MCP server. For example, the [GitHub MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/github)
  and the [Brave search MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/brave-search)
  require API keys to make requests to third-party APIs.
  You can configure these API keys as
  [encrypted environment variables](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars-encryption.html)
  in the Lambda function's configuration. However, note that anyone with access to invoke the Lambda function
  can then use your API key to call the third-party APIs through the function.
  We recommend limiting access to the Lambda function using
  [least-privilege IAM policies](https://docs.aws.amazon.com/lambda/latest/dg/security-iam.html),
  as in the sketch after this list.
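As referenced in the last bullet above, a minimal sketch of a least-privilege setup in CDK might look like the following, assuming an existing IAM role for your application (the role and construct names here are hypothetical):

```python
from aws_cdk import App, Stack
from aws_cdk import aws_iam as iam
from aws_cdk import aws_lambda as lambda_


class LeastPrivilegeStack(Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Hypothetical: reference your application's existing role and the
        # deployed MCP server function by name.
        app_role = iam.Role.from_role_name(self, "AppRole", "my-mcp-client-role")
        server_fn = lambda_.Function.from_function_name(
            self, "TimeServer", "mcp-server-time"
        )

        # Grant only this role permission to invoke the MCP server function.
        server_fn.grant_invoke(app_role)


app = App()
LeastPrivilegeStack(app, "LeastPrivilegeStack")
app.synth()
```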
## Examples

### Python server example

This project includes an
[example Python Lambda function](examples/servers/time/function/index.py)
that runs the simple
[MCP 'time' reference server](https://github.com/modelcontextprotocol/servers/tree/main/src/time).
The Lambda function bundles the [mcp-server-time package](https://pypi.org/project/mcp-server-time/).
On each function invocation, the Lambda function will manage the lifecycle of the bundled MCP server.
It will:

1. start the 'time' MCP server as a child process
1. initialize the MCP server
1. forward the incoming request to the local server
1. return the server's response to the function caller
1. shut down the MCP server child process
```python
import sys
from mcp.client.stdio import StdioServerParameters
from mcp_lambda import stdio_server_adapter

server_params = StdioServerParameters(
    command=sys.executable,
    args=[
        "-m",
        "mcp_server_time",
        "--local-timezone",
        "America/New_York",
    ],
)


def handler(event, context):
    return stdio_server_adapter(server_params, event, context)
```
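For a quick local check, you could append a hypothetical smoke test to the module above; this assumes the adapter accepts one MCP JSON-RPC message per invocation event, which is how the client example below constructs its payloads:

```python
# Hypothetical smoke test appended to the handler module above.
if __name__ == "__main__":
    sample_event = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
    print(handler(sample_event, None))  # context is unused in this local sketch
```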
### TypeScript server example

This project includes an
[example Node.js Lambda function](examples/servers/weather-alerts/lib/weather-alerts-mcp-server.function.ts)
that runs an [OpenAPI MCP server](https://github.com/snaggle-ai/openapi-mcp-server/)
to provide a single API from [weather.gov](https://www.weather.gov/documentation/services-web-api) as a tool.
The Lambda function bundles the [openapi-mcp-server package](https://www.npmjs.com/package/openapi-mcp-server).
On each function invocation, the Lambda function will manage the lifecycle of the bundled MCP server.
It will:

1. start the 'openapi-mcp-server' MCP server as a child process
1. initialize the MCP server
1. forward the incoming request to the local server
1. return the server's response to the function caller
1. shut down the MCP server child process
```typescript
import { Handler, Context } from 'aws-lambda';
import { stdioServerAdapter } from 'mcp-lambda';

const serverParams = {
  command: 'npx',
  args: ['--offline', 'openapi-mcp-server', './weather-alerts-openapi.json'],
};

export const handler: Handler = async (event, context: Context) => {
  return await stdioServerAdapter(serverParams, event, context);
};
```
### Python client example

This project includes an
[example Python MCP client](examples/chatbot/server_clients/lambda_function.py)
that invokes the 'time' MCP server function from above.
The client invokes a Lambda function named "mcp-server-time" with a payload that is compliant
with the MCP protocol and returns the function's response to the caller.

```python
from mcp import ClientSession
from mcp_lambda import LambdaFunctionParameters, lambda_function_client

server_params = LambdaFunctionParameters(
    function_name="mcp-server-time",
    region_name="us-east-2",
)

read, write = await lambda_function_client(server_params)
session = ClientSession(read, write)
await session.initialize()
```
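From this point the session behaves like any other MCP client session. As a hedged continuation of the snippet above, you might list the server's tools and call one (the tool name and argument below are assumptions based on the upstream 'time' server's documentation):

```python
# Continuing the snippet above, inside the same async context.
tools = await session.list_tools()
print([tool.name for tool in tools.tools])

# 'get_current_time' and 'timezone' are assumed from the time server's docs.
result = await session.call_tool(
    "get_current_time", {"timezone": "America/New_York"}
)
print(result.content)
```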
### Deploy and run the examples

First, install the [AWS CDK CLI](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install).

Install the mcp-lambda Python module from source:

```bash
cd src/python/

uv venv
source .venv/bin/activate

uv sync --all-extras --dev

# For development
uv run ruff check .
uv run pyright
uv run pytest
```
Deploy the Lambda 'time' function. The deployed function will be named "mcp-server-time".

```bash
cd examples/servers/time/

uv pip install -r requirements.txt

cdk deploy --app 'python3 cdk_stack.py'
```
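The example's `cdk_stack.py` lives in the repo; as a rough sketch of its shape (the runtime, handler path, and asset directory below are assumptions, not the example's actual code), a minimal stack could look like:

```python
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_lambda as lambda_


class McpServerTimeStack(Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        lambda_.Function(
            self,
            "McpServerTime",
            function_name="mcp-server-time",  # the name the client example expects
            runtime=lambda_.Runtime.PYTHON_3_12,
            handler="index.handler",
            # Assumption: 'function/' bundles index.py and its dependencies.
            code=lambda_.Code.from_asset("function"),
            timeout=Duration.seconds(30),
        )


app = App()
McpServerTimeStack(app, "McpServerTimeStack")
app.synth()
```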
Build the mcp-lambda TypeScript module:

```bash
cd src/typescript/

npm install

npm run build

npm link
```
Deploy the Lambda 'weather-alerts' function. The deployed function will be named "mcp-server-weather-alerts".

```bash
cd examples/servers/weather-alerts/

npm install

npm link mcp-lambda

npm run build

cdk deploy --app 'node lib/weather-alerts-mcp-server.js'
```
Run the chatbot client:

```bash
cd examples/chatbot/

uv pip install -r requirements.txt

python main.py
```

The chatbot client will communicate with three servers:

1. the Lambda function-based 'time' MCP server
2. the Lambda function-based 'weather-alerts' MCP server
3. a [local 'fetch' MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)

To use the remote 'time' server, you can ask the chatbot questions like "What is the current time?".

To use the remote 'weather-alerts' server, you can ask questions like "Are there any weather alerts right now?".

To use the local 'fetch' server, you can ask questions like "Who is Tom Cruise?".

e2e_tests/python/chat_session.py

Lines changed: 106 additions & 0 deletions
@@ -0,0 +1,106 @@

```python
import asyncio
import json
import logging
from typing import Optional

from server_clients.server import Server
from server_clients.servers import Servers
from llm_client import LLMClient


class ChatSession:
    """Orchestrates the interaction between user, LLM, and tools."""

    def __init__(
        self, servers: list[Server], llm_client: LLMClient, user_utterances: list[str]
    ) -> None:
        self.servers: list[Server] = servers
        self.llm_client: LLMClient = llm_client
        self.user_utterances: list[str] = user_utterances

    async def execute_requested_tools(
        self, servers_manager: Servers, llm_response
    ) -> Optional[dict]:
        """Process the LLM response and execute tools if needed.

        Args:
            servers_manager: The manager for the available MCP servers.
            llm_response: The response from the Bedrock Converse API.

        Returns:
            The result of tool execution, if any.
        """
        stop_reason = llm_response["stopReason"]

        if stop_reason == "tool_use":
            try:
                tool_responses = []
                for content_item in llm_response["output"]["message"]["content"]:
                    if "toolUse" in content_item:
                        logging.info(
                            f'Executing tool: {content_item["toolUse"]["name"]}'
                        )
                        logging.info(
                            f'With arguments: {content_item["toolUse"]["input"]}'
                        )
                        response = await servers_manager.execute_tool(
                            content_item["toolUse"]["name"],
                            content_item["toolUse"]["toolUseId"],
                            content_item["toolUse"]["input"],
                        )
                        tool_responses.append(response)
                return {"role": "user", "content": tool_responses}
            except KeyError as e:
                raise ValueError(f"Missing required tool use field: {e}") from e
            except Exception as e:
                raise ValueError(f"Failed to execute tool: {e}") from e
        else:
            # Assume this catches stop reasons "end_turn", "stop_sequence", and "max_tokens"
            return None

    async def start(self) -> None:
        """Main chat session handler."""
        async with Servers(self.servers) as server_manager:
            all_tools = await server_manager.list_tools()
            tools_description = [tool.format_for_llm() for tool in all_tools]

            system_prompt = "You are a helpful assistant."

            messages = []

            for i, user_input in enumerate(self.user_utterances):
                if i != 0:
                    print("\n**Pausing 5 seconds to avoid Bedrock throttling**")
                    await asyncio.sleep(5)

                print(f"\nYou: {user_input}")

                messages.append({"role": "user", "content": [{"text": user_input}]})

                llm_response = self.llm_client.get_response(
                    messages, system_prompt, tools_description
                )
                logging.debug("\nAssistant: %s", json.dumps(llm_response, indent=2))
                print(
                    f'\nAssistant: {llm_response["output"]["message"]["content"][0]["text"]}'
                )
                messages.append(llm_response["output"]["message"])

                tool_results = await self.execute_requested_tools(
                    server_manager, llm_response
                )

                if tool_results:
                    logging.debug(
                        "\nTool Results: %s", json.dumps(tool_results, indent=2)
                    )
                    messages.append(tool_results)
                    final_response = self.llm_client.get_response(
                        messages, system_prompt, tools_description
                    )
                    logging.debug(
                        "\nFinal response: %s", json.dumps(final_response, indent=2)
                    )
                    print(
                        f'\nAssistant: {final_response["output"]["message"]["content"][0]["text"]}'
                    )
                    messages.append(final_response["output"]["message"])
```
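A hedged sketch of how this class might be driven (the `Server` and `LLMClient` constructor arguments below are placeholders, not the repo's actual wiring):

```python
import asyncio

# Hypothetical driver for ChatSession; the real e2e test wires these up
# from its own configuration.


async def main() -> None:
    servers = [Server(...)]  # e.g. the Lambda-backed 'time' MCP server
    llm_client = LLMClient(...)  # e.g. a Bedrock Converse API wrapper
    session = ChatSession(servers, llm_client, ["What is the current time?"])
    await session.start()


if __name__ == "__main__":
    asyncio.run(main())
```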
