Commit 9fb3daf

Make mcp_basic_agent cloud ready (#450)
* Make mcp_basic_agent cloud ready
* Update mcp-basic-agent README for mcp-agent-cloud
* Re-add agent to token summary and add docstring
* Minor style fixes to the README
1 parent fd760f4 commit 9fb3daf

File tree

2 files changed: +83 −8 lines


examples/basic/mcp_basic_agent/README.md

Lines changed: 64 additions & 0 deletions
@@ -87,3 +87,67 @@ Run your MCP Agent app:

```bash
uv run main.py
```

## `4` [Beta] Deploy to the cloud

### `a.` Log in to [MCP Agent Cloud](https://docs.mcp-agent.com/cloud/overview)

```bash
uv run mcp-agent login
```
### `b.` Update your `mcp_agent.secrets.yaml` to mark your developer secrets (keys)

```yaml
openai:
  api_key: !developer_secret
anthropic:
  api_key: !developer_secret
# Other secrets as needed
```
### `c.` Deploy your agent with a single command

```bash
uv run mcp-agent deploy my-first-agent
```
### `d.` Connect to your deployed agent as an MCP server through any MCP client

#### Claude Desktop Integration

Configure Claude Desktop to access your agent servers by updating your `~/.claude-desktop/config.json`:

```json
"my-agent-server": {
  "command": "/path/to/npx",
  "args": [
    "mcp-remote",
    "https://[your-agent-server-id].deployments.mcp-agent-cloud.lastmileai.dev/sse",
    "--header",
    "Authorization: Bearer ${BEARER_TOKEN}"
  ],
  "env": {
    "BEARER_TOKEN": "your-mcp-agent-cloud-api-token"
  }
}
```
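The JSON above is a single server entry; in a full Claude Desktop config it sits under the top-level `mcpServers` object. A minimal sketch of merging the entry into an existing config file programmatically (the config path and token value here are placeholders, not part of this repo):

```python
import json
from pathlib import Path

# Placeholder path -- the real Claude Desktop config location depends on your OS.
config_path = Path("claude-desktop-config.json")

entry = {
    "command": "/path/to/npx",
    "args": [
        "mcp-remote",
        "https://[your-agent-server-id].deployments.mcp-agent-cloud.lastmileai.dev/sse",
        "--header",
        "Authorization: Bearer ${BEARER_TOKEN}",
    ],
    "env": {"BEARER_TOKEN": "your-mcp-agent-cloud-api-token"},
}

# Load the existing config (or start fresh), add the server entry, write it back.
config = json.loads(config_path.read_text()) if config_path.exists() else {}
config.setdefault("mcpServers", {})["my-agent-server"] = entry
config_path.write_text(json.dumps(config, indent=2))
```

Merging with `setdefault` preserves any other servers already configured in the file.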
#### MCP Inspector

Use MCP Inspector to explore and test your agent servers:

```bash
npx @modelcontextprotocol/inspector
```

Make sure to fill out the following settings:
| Setting | Value |
|---|---|
| *Transport Type* | *SSE* |
| *SSE* | *https://[your-agent-server-id].deployments.mcp-agent-cloud.lastmileai.dev/sse* |
| *Header Name* | *Authorization* |
| *Bearer Token* | *your-mcp-agent-cloud-api-token* |

> [!TIP]
> In the Configuration, increase the request timeout. Since your agents make LLM calls, requests are expected to take longer than simple API calls.

examples/basic/mcp_basic_agent/main.py

Lines changed: 19 additions & 8 deletions
```diff
@@ -46,11 +46,18 @@
 # or loaded from mcp_agent.config.yaml/mcp_agent.secrets.yaml
 app = MCPApp(name="mcp_basic_agent")  # settings=settings)
 
-
-async def example_usage():
+@app.tool()
+async def example_usage() -> str:
+    """
+    An example function/tool that uses an agent with access to the fetch and filesystem
+    MCP servers. The agent will read the contents of mcp_agent.config.yaml, print the
+    first 2 paragraphs of the MCP homepage, and summarize the paragraphs into a tweet.
+    The example uses both OpenAI and Anthropic, and simulates a multi-turn conversation.
+    """
     async with app.run() as agent_app:
         logger = agent_app.logger
         context = agent_app.context
+        result = ""
 
         logger.info("Current config:", data=context.config.model_dump())
```
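The `@app.tool()` decorator in this hunk registers `example_usage` as a callable MCP tool. The general decorator-registration pattern it relies on can be sketched in plain Python (an illustration of the pattern only, not mcp-agent's actual implementation):

```python
import asyncio
from typing import Awaitable, Callable

class ToyApp:
    """Minimal stand-in for an app that exposes decorated async functions as tools."""

    def __init__(self, name: str):
        self.name = name
        self.tools: dict[str, Callable[..., Awaitable[str]]] = {}

    def tool(self):
        # Returning a decorator lets callers write @app.tool() with parentheses.
        def decorator(fn: Callable[..., Awaitable[str]]):
            self.tools[fn.__name__] = fn  # register under the function's name
            return fn  # the function itself is left unchanged
        return decorator

app = ToyApp(name="mcp_basic_agent")

@app.tool()
async def example_usage() -> str:
    return "hello from the tool"

# A server would look tools up by name and await them on each request.
result = asyncio.run(app.tools["example_usage"]())
```

Because the decorator returns the original function, `example_usage` also remains directly callable, which is why the `-> str` return annotation and docstring matter: they describe the tool to clients.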

```diff
@@ -68,25 +75,26 @@ async def example_usage():
 
         async with finder_agent:
             logger.info("finder: Connected to server, calling list_tools...")
-            result = await finder_agent.list_tools()
-            logger.info("Tools available:", data=result.model_dump())
+            tools_list = await finder_agent.list_tools()
+            logger.info("Tools available:", data=tools_list.model_dump())
 
             llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)
-            result = await llm.generate_str(
+            result += await llm.generate_str(
                 message="Print the contents of mcp_agent.config.yaml verbatim",
             )
             logger.info(f"mcp_agent.config.yaml contents: {result}")
 
             # Let's switch the same agent to a different LLM
             llm = await finder_agent.attach_llm(AnthropicAugmentedLLM)
 
-            result = await llm.generate_str(
+            result += await llm.generate_str(
                 message="Print the first 2 paragraphs of https://modelcontextprotocol.io/introduction",
             )
             logger.info(f"First 2 paragraphs of Model Context Protocol docs: {result}")
+            result += "\n\n"
 
             # Multi-turn conversations
-            result = await llm.generate_str(
+            result += await llm.generate_str(
                 message="Summarize those paragraphs in a 128 character tweet",
                 # You can configure advanced options by setting the request_params object
                 request_params=RequestParams(
```
```diff
@@ -101,8 +109,9 @@ async def example_usage():
             logger.info(f"Paragraph as a tweet: {result}")
 
         # Display final comprehensive token usage summary (use app convenience)
-        await display_token_summary(agent_app, finder_agent)
+        await display_token_summary(agent_app)
 
+        return result
 
 async def display_token_summary(app_ctx: MCPApp, agent: Agent | None = None):
     """Display comprehensive token usage summary using app/agent convenience APIs."""
```
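The hunks above change `result` from being overwritten on each turn to being accumulated with `+=` and a blank-line separator, so the tool can return one combined string. The pattern in isolation, with a stub standing in for the LLM's `generate_str`:

```python
import asyncio

async def generate_str(message: str) -> str:
    # Stub standing in for llm.generate_str(); just echoes the prompt.
    return f"<answer to: {message}>"

async def example_usage() -> str:
    result = ""
    result += await generate_str("Print the contents of mcp_agent.config.yaml verbatim")
    result += await generate_str("Print the first 2 paragraphs of the MCP introduction")
    result += "\n\n"  # separate the fetched text from the summary turn
    result += await generate_str("Summarize those paragraphs in a 128 character tweet")
    return result

combined = asyncio.run(example_usage())
```

Note the intermediate `list_tools()` call in the diff is assigned to a separate `tools_list` variable precisely so it does not clobber the accumulated `result`.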
```diff
@@ -129,6 +138,8 @@ async def display_token_summary(app_ctx: MCPApp, agent: Agent | None = None):
             )
             print(f"  Cost: ${data.cost:.4f}")
 
+    print("\n" + "=" * 50)
+
     # Optional: show a specific agent's aggregated usage
     if agent is not None:
         agent_usage = await agent.get_token_usage()
```
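The added `print("\n" + "=" * 50)` closes the per-model breakdown with a separator before the optional per-agent section. A minimal sketch of such a summary printer (the class and field names here are illustrative, not mcp-agent's API):

```python
from dataclasses import dataclass

@dataclass
class ModelUsage:
    """Illustrative stand-in for per-model token accounting."""
    input_tokens: int
    output_tokens: int
    cost: float

def display_token_summary(usage: dict[str, ModelUsage]) -> str:
    lines = [" TOKEN USAGE SUMMARY ".center(50, "=")]
    for model, data in usage.items():
        lines.append(f"{model}: in={data.input_tokens} out={data.output_tokens}")
        lines.append(f"  Cost: ${data.cost:.4f}")
    lines.append("\n" + "=" * 50)  # closing separator, as in the diff
    report = "\n".join(lines)
    print(report)
    return report

report = display_token_summary({
    "model-a": ModelUsage(1200, 300, 0.0021),
    "model-b": ModelUsage(900, 250, 0.0064),
})
```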
